ABSTRACT
The growing problem of unsolicited text messages (smishing) and data irregularities necessitates stronger spam detection solutions. This paper explores the development of a sophisticated model designed to identify smishing messages by understanding the complex relationships among words, images, and context-specific factors, areas that remain underexplored in existing research. To address this, we merge a UCI spam dataset of regular text messages with real-world spam data, leveraging OCR technology for comprehensive analysis. The study employs a combination of traditional machine learning models, including K-means, Non-Negative Matrix Factorization, and Gaussian Mixture Models, along with feature extraction techniques such as TF-IDF and PCA. Additionally, deep learning models such as RNN-Flatten, LSTM, and Bi-LSTM are utilized. These models were selected for their complementary strengths in capturing both the linear and non-linear relationships inherent in smishing messages: machine learning models for their efficiency in handling structured text data, and deep learning models for their superior ability to capture sequential dependencies and contextual nuances. The performance of these models is rigorously evaluated using metrics such as accuracy, precision, recall, and F1 score, enabling a comparative analysis between the machine learning and deep learning approaches. Notably, K-means feature extraction with a vectorizer achieved 91.01% accuracy, and the RNN-Flatten model reached 94.13% accuracy, emerging as the top performer. These models are highlighted for their potential to significantly improve smishing detection rates. For instance, the high accuracy of the RNN-Flatten model suggests its applicability in real-time spam detection systems, although its computational complexity might limit scalability in large-scale deployments. Similarly, while K-means with a vectorizer excels in accuracy, it may struggle with the dynamic and evolving nature of smishing attacks, necessitating continual retraining.
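A minimal sketch of the TF-IDF, PCA, and K-means stages named above, using scikit-learn; the toy messages, cluster count, and component count are illustrative, not the paper's configuration.

```python
# Sketch of a TF-IDF -> PCA -> K-means pipeline for SMS clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

messages = [
    "Your parcel is held, pay the fee at http://short.link",  # smishing-like
    "Are we still meeting for lunch tomorrow?",               # ham-like
    "You won a prize! Reply with your bank details",          # smishing-like
    "Don't forget the team call at 3pm",                      # ham-like
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(messages)

# Reduce the sparse TF-IDF matrix before clustering, mirroring the PCA step.
X_reduced = PCA(n_components=2).fit_transform(X.toarray())

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_reduced)
print(kmeans.labels_)  # cluster assignments; one cluster should group the smishing-like texts
```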
ABSTRACT
Metastatic breast cancer (MBC) continues to be a leading cause of cancer-related deaths among women. This work introduces an innovative non-invasive breast cancer classification model designed to improve the identification of cancer metastases; while this study marks an initial exploration into predicting MBC, additional investigation is essential to validate the occurrence of MBC. Our approach combines the strengths of large language models (LLMs), specifically the bidirectional encoder representations from transformers (BERT) model, with the powerful capabilities of graph neural networks (GNNs) to predict MBC patients from their histopathology reports. This paper introduces a BERT-GNN approach for metastatic breast cancer prediction (BG-MBC) that integrates graph information derived from the BERT model. In this model, nodes are constructed from patient medical records, while BERT embeddings are employed to vectorise the words in histopathology reports, capturing semantic information crucial for classification. Three distinct feature-selection approaches (univariate selection, an extra-trees classifier for feature importance, and Shapley values) identify the features with the most significant impact; by retaining the 30 most important of the 676 features generated as embeddings during model training, the model further enhances its predictive capability. The BG-MBC model achieves outstanding accuracy, with a detection rate of 0.98 and an area under the curve (AUC) of 0.98, in identifying MBC patients. This remarkable performance is credited to the model's utilisation of attention scores generated by the LLM from histopathology reports, effectively capturing pertinent features for classification.
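A hedged sketch of the embedding and feature-selection step described above: BERT [CLS] embeddings are ranked by an extra-trees classifier and the top 30 dimensions kept. The report texts and labels are toy placeholders, bert-base-uncased (768-dimensional) stands in for the paper's 676-feature embedding, and the GNN stage is out of scope here.

```python
# Extract BERT embeddings for pathology text, then rank dimensions by importance.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.ensemble import ExtraTreesClassifier

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

reports = ["tumor cells present in axillary lymph node", "benign fibroadenoma, no atypia"]
labels = np.array([1, 0])  # 1 = metastatic, 0 = non-metastatic (toy labels)

with torch.no_grad():
    enc = tokenizer(reports, padding=True, truncation=True, return_tensors="pt")
    emb = bert(**enc).last_hidden_state[:, 0, :].numpy()  # [CLS] embedding per report

forest = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(emb, labels)
top30 = np.argsort(forest.feature_importances_)[-30:]  # 30 most important dimensions
X_selected = emb[:, top30]                             # reduced features for the classifier
```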
ABSTRACT
The sneaker industry continues to expand at a fast rate and will be worth over USD 120 billion in the next few years. This is, in part, due to social media and online retailers building hype around releases of limited-edition sneakers, which are usually collaborations between well-known global icons and footwear companies. These limited-edition sneakers are typically released in low quantities through an online raffle system, meaning only a few people can get their hands on them. As expected, this causes their value to skyrocket and has created an extremely lucrative resale market for sneakers. It has also given rise to numerous counterfeit sneakers flooding the resale market, forcing online platforms to hand-verify each sneaker's authenticity, an important but time-consuming procedure that slows the selling and buying process. To speed up authentication, Support Vector Machines (SVMs) and convolutional neural networks (CNNs) were used to classify images of fake and real sneakers, and their accuracies were compared to see which performed better. The results showed that the CNNs performed much better at this task than the SVMs, with some models achieving accuracies above 95%. Therefore, a CNN is well equipped to act as a sneaker authenticator and will be of great benefit to the reselling industry.
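An illustrative version of the comparison above: a small CNN (Keras) versus an SVM trained on flattened pixels. The image size, architecture, and random data are assumptions standing in for real sneaker photos.

```python
# Compare an SVM on flattened pixels with a small CNN on toy image data.
import numpy as np
from sklearn.svm import SVC
from tensorflow import keras

X = np.random.rand(32, 64, 64, 3)        # placeholder sneaker images
y = np.random.randint(0, 2, 32)          # 1 = authentic, 0 = counterfeit (toy labels)

svm = SVC(kernel="rbf").fit(X.reshape(32, -1), y)  # SVM sees each image as a flat vector

cnn = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(X, y, epochs=2, verbose=0)       # the CNN learns spatial features the SVM cannot
```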
ABSTRACT
With current and predicted economic pressures within English Children's Services in the UK, there is a growing discourse around developing methods of analysis that use existing data to make more effective interventions and policy decisions. Agent-Based Models (ABMs) show promise in aiding this, but with limitations that require novel methods to overcome, including challenges in managing model complexity, transparency, and validation, which may deter analysts from implementing such simulations. Children's Services specifically stand to gain from an expansion of the modelling techniques available to them. Sensitivity analysis is a common step in model analysis, yet existing methods have limitations when applied to ABMs. This paper outlines an improved method of conducting sensitivity analysis to enable better utilisation of ABMs within Children's Services: by using machine-learning-based regression in conjunction with the Nomadic Peoples Optimiser (NPO), a method of sensitivity analysis tailored to ABMs is achieved. The paper demonstrates the effectiveness of the approach by drawing comparisons with common existing methods of sensitivity analysis, followed by a demonstration of an improved ABM design in the target use case.
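A sketch of surrogate-based sensitivity analysis in the spirit described above: a machine-learning regressor is fitted to (parameter, output) samples from a simulation, and its feature importances act as sensitivity scores. The ABM itself and the NPO search step are out of scope here; a toy function stands in for a full simulation run.

```python
# Fit an ML surrogate to simulation samples and read off parameter sensitivities.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
params = rng.uniform(0, 1, size=(500, 3))   # 3 hypothetical ABM parameters

def abm_output(p):                          # stand-in for running the full ABM
    return 2.0 * p[0] + 0.1 * p[1] + rng.normal(0, 0.05)

y = np.array([abm_output(p) for p in params])
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(params, y)
print(surrogate.feature_importances_)       # parameter 0 should dominate the ranking
```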
ABSTRACT
This article introduces a prototype laser communication system integrated with uncrewed aerial vehicles (UAVs), aimed at enhancing data connectivity in remote healthcare applications. Traditional radio frequency systems are limited by their range and reliability, particularly in challenging environments. By leveraging UAVs as relay points, the proposed system seeks to address these limitations, offering a novel solution for real-time, high-speed data transmission. The system has been empirically tested, showcasing its ability to maintain data transmission integrity under various conditions. Results indicate a substantial improvement in connectivity, with high data transmission success rate (DTSR) scores, even amidst environmental disturbances. This study underscores the system's potential for critical applications such as emergency response, public health monitoring, and extending services to remote or underserved areas.
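A hedged sketch of the DTSR metric referenced above, assuming it is computed as the fraction of transmissions received intact; the metric definition and log format here are assumptions, not the paper's specification.

```python
# Toy computation of a data transmission success rate (DTSR) from a link log.
def dtsr(transmissions):
    """transmissions: list of booleans, True if a packet arrived intact."""
    return sum(transmissions) / len(transmissions)

link_log = [True, True, False, True, True, True, True, False, True, True]
print(f"DTSR = {dtsr(link_log):.0%}")  # 80% for this toy log
```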
ABSTRACT
Cardiovascular diseases present a significant global health challenge that emphasizes the critical need for more accurate and effective detection methods. Several studies have contributed valuable insights to this field, but it is still necessary to advance predictive models and address the gaps in existing detection approaches. For instance, some previous studies have not considered the challenge of imbalanced datasets, which can lead to biased predictions, especially when the datasets include minority classes. This study's primary focus is the early detection of heart diseases, particularly myocardial infarction, using machine learning techniques. It tackles the challenge of imbalanced datasets by conducting a comprehensive literature review to identify effective strategies. Seven machine learning and deep learning classifiers, including K-Nearest Neighbors, Support Vector Machine, Logistic Regression, Convolutional Neural Network, Gradient Boost, XGBoost, and Random Forest, were deployed to enhance the accuracy of heart disease predictions. The research explores these classifiers and their performance, providing valuable insights for developing robust prediction models for myocardial infarction. The study's outcomes emphasize the effectiveness of meticulously fine-tuning an XGBoost model for cardiovascular disease, an optimization that yields remarkable results: 98.50% accuracy, 99.14% precision, 98.29% recall, and a 98.71% F1 score, significantly enhancing the model's diagnostic accuracy for heart disease.
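An illustrative sketch of one common imbalance remedy paired with a tuned XGBoost classifier, in the spirit of the study above; the data, hyperparameters, and the choice of scale_pos_weight (rather than whatever strategy the study used) are assumptions.

```python
# Handle class imbalance via scale_pos_weight in XGBoost on toy tabular data.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 13))                  # 13 features, as in common heart datasets
y = (rng.uniform(size=1000) < 0.15).astype(int)  # minority positive class (~15%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = XGBClassifier(
    n_estimators=300, max_depth=4, learning_rate=0.05,
    # Reweight the minority class by the negative/positive ratio.
    scale_pos_weight=(y_tr == 0).sum() / max((y_tr == 1).sum(), 1),
)
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.3f}")
```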
ABSTRACT
Floorplan energy assessments present a highly efficient method for evaluating the energy efficiency of residential properties without requiring physical presence. By employing computer modelling, an accurate determination of a building's heat loss or gain can be achieved, enabling planners and homeowners to devise energy-efficient renovation or redevelopment plans. However, creating an AI model for floorplan element detection necessitates the manual annotation of a substantial collection of floorplans, which poses a daunting task. This paper introduces a novel active learning model designed to detect and annotate the primary elements within floorplan images, aiming to assist energy assessors in automating the analysis of such images, an inherently challenging problem due to the time-intensive nature of the annotation process. Our active learning approach was initially trained on a set of 500 annotated images and progressively learned from a larger dataset comprising 4500 unlabelled images. This iterative process yielded a mean average precision of 0.833, a precision of 0.972, and a recall of 0.950. We make our dataset publicly available under a Creative Commons license.
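A minimal sketch of the uncertainty-sampling loop behind an active-learning setup like the one above; a generic classifier stands in for the floorplan detector, pool sizes echo the 500/4500 split, and the simulated labels replace the human annotator.

```python
# Least-confident active learning: repeatedly label the pool items the model
# is least sure about, then retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labelled = rng.normal(size=(500, 20))
y_labelled = rng.integers(0, 2, 500)
X_pool = rng.normal(size=(4500, 20))            # unlabelled pool

for _ in range(3):                              # a few annotation rounds
    model = LogisticRegression(max_iter=1000).fit(X_labelled, y_labelled)
    proba = model.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)       # least-confident sampling score
    ask = np.argsort(uncertainty)[-100:]        # send 100 items to the annotator
    # In practice these items are human-annotated; labels are simulated here.
    X_labelled = np.vstack([X_labelled, X_pool[ask]])
    y_labelled = np.concatenate([y_labelled, rng.integers(0, 2, 100)])
    X_pool = np.delete(X_pool, ask, axis=0)
```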
ABSTRACT
The retrieval of important information from a dataset requires applying a special data mining technique known as data clustering (DC). DC classifies similar objects into groups with similar characteristics, grouping the data around k cluster centres that are typically selected at random. The issues behind DC have recently prompted a search for alternative solutions. One such alternative is the Black Hole Algorithm (BHA), a nature-inspired optimization algorithm developed to address several well-known optimization problems. The BHA is a population-based metaheuristic that mimics the natural phenomenon of black holes, with individual stars representing candidate solutions revolving through the solution space. The original BHA showed better performance than other algorithms when applied to a benchmark dataset, despite its poor exploration capability. Hence, this paper presents a multi-population generalization of the BHA, called MBHA, in which the performance of the algorithm depends not on a single best-found solution but on a set of generated best solutions. The formulated method was tested on a set of nine widespread and popular benchmark test functions. The experimental outcomes indicated highly precise results compared to the BHA and comparable algorithms in the study, as well as excellent robustness. Furthermore, the proposed MBHA achieved a high rate of convergence on six real datasets (collected from the UCI Machine Learning Repository), making it suitable for DC problems. Lastly, the evaluations conclusively indicated the appropriateness of the proposed algorithm for resolving DC issues.
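A compact, hedged sketch of a multi-population black-hole search on one benchmark function (the sphere); the move and event-horizon rules follow the standard BHA description, while the sub-population sizes and iteration budget are illustrative, not MBHA's published settings.

```python
# Multi-population black hole search: each sub-population keeps its own black
# hole (best star); stars drift toward it and respawn if they cross the horizon.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n_pops, pop_size, iters = 5, 3, 20, 200
pops = [rng.uniform(-5, 5, size=(pop_size, dim)) for _ in range(n_pops)]

for _ in range(iters):
    for pop in pops:
        fitness = np.array([sphere(s) for s in pop])
        bh = pop[fitness.argmin()].copy()                       # black hole = best star
        pop += rng.uniform(0, 1, size=pop.shape) * (bh - pop)   # stars drift toward it
        radius = sphere(bh) / (fitness.sum() + 1e-12)           # event horizon radius
        for i, s in enumerate(pop):
            if np.linalg.norm(s - bh) < radius:                 # swallowed: respawn star
                pop[i] = rng.uniform(-5, 5, dim)

best = min(min(sphere(s) for s in pop) for pop in pops)
print(f"best sphere value across sub-populations: {best:.2e}")
```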
Subject(s)
Algorithms, Machine Learning, Cluster Analysis, Data Mining/methods, Benchmarking
ABSTRACT
Cancer has received extensive recognition for its high mortality rate, with metastatic cancer being the top cause of cancer-related deaths. Metastatic cancer involves the spread of the primary tumor to other body organs. While early detection of cancer is essential, timely detection of metastasis, identification of biomarkers, and choice of treatment are equally valuable for improving the quality of life of metastatic cancer patients. This study reviews existing work on classical machine learning (ML) and deep learning (DL) in metastatic cancer research. Since the majority of metastatic cancer research data are collected as PET/CT and MRI images, deep learning techniques are heavily involved; however, their black-box nature and expensive computational cost are notable concerns. Furthermore, existing models may overestimate their generality owing to the non-diverse populations in clinical trial datasets. The review therefore itemizes the research gaps; follow-up studies should address metastatic cancer using machine learning and deep learning tools with data handled in a symmetric manner.
ABSTRACT
Diagnosing diabetes early is critical, as it helps patients live with the disease in a healthy way: through healthy eating, taking appropriate medical doses, and being more vigilant in their movements and activities to avoid wounds, which are difficult to heal for diabetic patients. Data mining techniques are typically used to detect diabetes with high confidence and to avoid misdiagnosis as other chronic diseases whose symptoms are similar to diabetes. Hidden Naïve Bayes (HNB) is a classification algorithm that works under a data-mining model relaxing the conditional-independence assumption of the traditional Naïve Bayes. The results from this study, conducted on the Pima Indian Diabetes (PID) dataset, show that the HNB classifier achieved a prediction accuracy of 82%; they also show that the discretization method increases the performance and accuracy of the HNB classifier.
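A sketch of the discretization step discussed above. Hidden Naïve Bayes is not available in scikit-learn, so a standard categorical Naïve Bayes stands in, and random data stands in for the PID records; only the preprocessing pattern is the point here.

```python
# Discretize continuous attributes into bins, then fit a Naive Bayes classifier.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(768, 8))   # PID has 768 records and 8 attributes
y = rng.integers(0, 2, 768)     # toy outcome labels

# Bin each continuous attribute, as in the study's discretization preprocessing.
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform")
X_disc = disc.fit_transform(X).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X_disc, y, random_state=0)
nb = CategoricalNB().fit(X_tr, y_tr)
print(f"accuracy: {nb.score(X_te, y_te):.2f}")
```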
Subject(s)
Algorithms, Diabetes Mellitus, Humans, Bayes Theorem, Diabetes Mellitus/diagnosis, Data Mining, Pima People
ABSTRACT
The concept of molecular similarity has been commonly used in rational drug design, where structurally similar molecules are examined in molecular databases to retrieve functionally similar molecules. The most widely used conventional similarity methods rely on two-dimensional (2D) fingerprints to evaluate the similarity of molecules to a target query. However, these descriptors include redundant and irrelevant features that can degrade the performance of similarity searching methods. This study therefore proposes a new approach for identifying the important features of molecules in chemical datasets based on representing the molecular features with an Autoencoder (AE), with the aim of removing irrelevant and redundant features. The proposed approach was evaluated on the MDL Drug Data Report (MDDR) standard dataset. Based on the experimental findings, the proposed approach performed better than several existing benchmark similarity methods, such as the Tanimoto Similarity Method (TAN), Adapted Similarity Measure of Text Processing (ASMTP), and Quantum-Based Similarity Method (SQB). The results demonstrate that the proposed approach is superior, particularly on structurally heterogeneous datasets, where it yielded improved results compared with previous methods that share the goal of improving molecular similarity searching.
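A hedged sketch of the idea above: an autoencoder compresses 2D fingerprints into a compact latent representation, and similarity search then runs in that space. The random bit vectors stand in for real molecular fingerprints, the architecture is illustrative, and cosine similarity is used here in place of the paper's similarity measures.

```python
# Compress fingerprints with an autoencoder, then rank molecules by latent similarity.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
fps = rng.integers(0, 2, size=(1000, 1024)).astype("float32")  # toy 1024-bit fingerprints

latent = 64
inp = keras.Input(shape=(1024,))
code = keras.layers.Dense(latent, activation="relu")(inp)      # encoder
out = keras.layers.Dense(1024, activation="sigmoid")(code)     # decoder
ae = keras.Model(inp, out)
ae.compile(optimizer="adam", loss="binary_crossentropy")
ae.fit(fps, fps, epochs=3, batch_size=64, verbose=0)           # reconstruct the inputs

encoder = keras.Model(inp, code)
z = encoder.predict(fps, verbose=0)                            # compact, less-redundant features
query, targets = z[0], z[1:]
scores = targets @ query / (np.linalg.norm(targets, axis=1) * np.linalg.norm(query) + 1e-12)
print(scores.argsort()[-5:])                                   # five most similar molecules
```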