Results 1 - 8 of 8
1.
BMC Med Imaging ; 24(1): 248, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39289621

ABSTRACT

Breast cancer prediction and diagnosis are critical for timely and effective treatment, significantly impacting patient outcomes. Machine learning algorithms have become powerful tools for improving the prediction and diagnosis of breast cancer. This paper presents the Breast Cancer Prediction and Diagnosis Model (BCPM), which utilizes machine learning techniques to improve the precision and efficiency of breast cancer diagnosis and prediction. BCPM collects comprehensive, high-quality data from diverse sources, including electronic medical records, clinical trials, and public datasets. Through rigorous pre-processing, the data are cleaned, inconsistencies are addressed, and missing values are handled. Feature scaling techniques are applied to normalize the data, ensuring fair comparison and equal importance among different features. Feature-selection algorithms are then used to identify the features most relevant to breast cancer prediction and diagnosis, improving the model's efficiency. The BCPM employs several machine learning methods, including logistic regression, random forests, decision trees, support vector machines, and neural networks, to generate accurate models. Once trained on a subset of the data, each model is assessed with metrics such as area under the curve (AUC), sensitivity, specificity, and accuracy. The BCPM holds promise for improving breast cancer prediction and diagnosis, aiding personalized treatment planning, and ultimately improving patient outcomes. By leveraging machine learning algorithms, the BCPM contributes to ongoing efforts to combat breast cancer and save lives.
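
A minimal sketch of this kind of pipeline, using scikit-learn with the public Wisconsin breast cancer dataset as a stand-in for the unspecified clinical data sources; the scaler, SelectKBest feature selector, and classifier settings below are illustrative choices, not the exact BCPM configuration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # public dataset stand-in

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "svm": SVC(probability=True, random_state=0),
    "neural_network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
}

for name, clf in candidates.items():
    pipe = Pipeline([
        ("scale", StandardScaler()),               # feature scaling / normalization
        ("select", SelectKBest(f_classif, k=15)),  # keep the most relevant features
        ("model", clf),
    ])
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```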


Subjects
Breast Neoplasms, Machine Learning, Humans, Breast Neoplasms/diagnostic imaging, Female, Algorithms, Sensitivity and Specificity, Diagnosis, Computer-Assisted/methods, Neural Networks, Computer
2.
Sensors (Basel) ; 24(15)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39123837

ABSTRACT

Node localization is critical for accessing diverse nodes that provide services in remote places. Single-anchor localization techniques suffer from co-linearity and perform poorly. Reliable multiple-anchor-node selection, on the other hand, is computationally intensive and requires considerable processing power and time to identify suitable anchor nodes. Node localization in wireless sensor networks (WSNs) is challenging due to the number and placement of anchors as well as their communication capabilities, and the sensor nodes' limited energy resources are a major concern in localization. In addition to conventional optimization in WSNs, researchers have employed nature-inspired algorithms to localize unknown nodes. However, these methods take longer, demand substantial processing power, and yield higher localization error, with the number of beacon nodes and sensitivity to parameter selection further affecting localization. This research employed a nature-inspired crow search algorithm (an improvement over other nature-inspired algorithms) to select a suitable set of anchor nodes from the population, reducing errors in localizing unknown nodes. Additionally, a weighted centroid method was proposed for estimating the exact location of an unknown node. Together these make the crow search weighted centroid localization (CS-WCL) algorithm a more trustworthy and efficient method for node localization in WSNs, with reduced average localization error (ALE) and energy consumption. CS-WCL outperformed WCL and distance vector (DV)-Hop, with ALE reduced to 15% (from 32%) for communication radii varying from 20 m to 45 m. The ALE against scalability was also validated for CS-WCL against WCL and DV-Hop for a varying number of beacon nodes (from 3 to 2), reducing ALE to 2.59% (from 28.75%). Lastly, CS-WCL reduced energy consumption (from 120 mJ to 45 mJ) for networks varying from 30 to 300 nodes against WCL and DV-Hop. Thus, CS-WCL outperformed other nature-inspired algorithms in node localization. All results were validated using MATLAB 2022b.
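
A minimal sketch of the weighted-centroid step alone (the crow-search anchor selection is omitted): the unknown node's position is estimated as a distance-weighted average of the selected anchors' coordinates. The inverse-distance weighting and the example figures below are illustrative assumptions, not the exact CS-WCL weighting.

```python
import numpy as np

def weighted_centroid(anchors, distances, power=1.0):
    """anchors: (n, 2) anchor coordinates; distances: (n,) estimated ranges."""
    anchors = np.asarray(anchors, dtype=float)
    distances = np.asarray(distances, dtype=float)
    weights = 1.0 / np.power(distances, power)        # closer anchors weigh more
    return (weights[:, None] * anchors).sum(axis=0) / weights.sum()

# Example: three anchors (e.g., selected by crow search) and their estimated ranges.
anchors = [(0.0, 0.0), (40.0, 0.0), (0.0, 40.0)]
distances = [14.1, 28.3, 28.3]
print(weighted_centroid(anchors, distances))  # estimated (x, y) of the unknown node
```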

4.
Front Med (Lausanne) ; 11: 1412592, 2024.
Article in English | MEDLINE | ID: mdl-39099597

ABSTRACT

Alzheimer's disease (AD) is a devastating brain disorder that worsens steadily over time, marked by a relentless decline in memory and cognitive abilities and, as it progresses, a significant loss of mental function. Early detection of AD is essential for starting treatments that can mitigate the progression of the disease and enhance patients' quality of life. This study examines the brain functional connectivity pattern of AD, using multivariate pattern analysis (MVPA) to extract essential patterns and to analyze activity across multiple brain voxels. Optimized feature extraction techniques are used to obtain the most informative features, which are then used to train several hybrid machine learning classifiers for both binary and multi-class classification. The proposed hybrid machine learning classification approach has been applied to two public datasets, the Open Access Series of Imaging Studies (OASIS) and the AD Neuroimaging Initiative (ADNI). The results are evaluated using performance metrics, and comparisons are made to differentiate between different stages of AD using visualization tools.
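
An illustrative sketch of a "hybrid" classifier in the sense of combining base learners on pre-extracted features; the synthetic three-class feature matrix below merely stands in for MVPA features derived from OASIS/ADNI, and the particular stacking combination is an assumption rather than the study's exact model:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in for extracted connectivity features: 3 classes (e.g., control / MCI / AD).
X, y = make_classification(n_samples=300, n_features=50, n_informative=20,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

hybrid = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
print("multi-class accuracy:", cross_val_score(hybrid, X, y, cv=5).mean())
```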

5.
Data Brief ; 50: 109491, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37636132

ABSTRACT

The term quality of life (QoL) refers to a wide range of multifaceted concepts that often involve subjective assessments of both positive and negative aspects of life. QoL is difficult to quantify because the term has varied meanings in different academic areas and may carry different connotations in different circumstances. The sectors most commonly associated with QoL, however, are Health, Education, Environmental Quality, Personal Security, Civic Engagement, and Work-Life Balance. An emerging issue that falls under environmental quality is visual pollution (VP), which, as detailed in this study, refers to disruptive presences that limit visual ability on public roads, with an emphasis on excavation barriers, potholes, and dilapidated sidewalks. Quantifying VP has always been difficult due to its subjective nature and the lack of a consistent set of rules for its systematic assessment. This emphasizes the need for research and module development that will allow government agencies to automatically predict and detect VP. Our dataset was collected from different regions of the Kingdom of Saudi Arabia (KSA) via the Ministry of Municipal and Rural Affairs and Housing (MOMRAH) as part of a VP campaign to improve Saudi Arabia's urban landscape. It consists of 34,460 RGB images separated into three distinct classes: excavation barriers, potholes, and dilapidated sidewalks. To annotate all images for the detection (i.e., bounding box) and classification (i.e., classification label) tasks, a deep active learning (DAL) strategy is used, in which an initial 1,200 VP images (i.e., 400 images per class) are manually annotated by four experts. Images containing more than one object increase the number of training object ROIs, which total 8,417 for excavation barriers, 25,975 for potholes, and 7,412 for dilapidated sidewalks. The MOMRAH dataset is publicly released to enrich the research domain with a new VP image dataset.
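
A rough sketch of one round of the generic deep-active-learning idea referred to above: a model trained on the manually annotated seed set scores the unlabeled pool, and the least-confident images are routed to the experts for annotation next. The least-confidence criterion, pool size, and annotation budget below are illustrative assumptions, not MOMRAH's documented procedure.

```python
import numpy as np

def select_for_annotation(probs, budget=200):
    """probs: (n_images, n_classes) predicted class probabilities for the unlabeled pool."""
    confidence = probs.max(axis=1)             # probability of the most likely class
    uncertainty = 1.0 - confidence             # least-confidence criterion
    return np.argsort(-uncertainty)[:budget]   # indices of images to annotate next

# Example with random stand-in predictions for a pool of 1,000 images, 3 classes.
rng = np.random.default_rng(0)
pool_probs = rng.dirichlet(np.ones(3), size=1000)
print(select_for_annotation(pool_probs, budget=5))
```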

6.
Front Med (Lausanne) ; 10: 1349336, 2023.
Article in English | MEDLINE | ID: mdl-38348235

ABSTRACT

Introduction: Oral Squamous Cell Carcinoma (OSCC) poses a significant challenge in oncology due to the absence of precise diagnostic tools, leading to delays in identifying the condition. Current diagnostic methods for OSCC have limitations in accuracy and efficiency, highlighting the need for more reliable approaches. This study aims to explore the discriminative potential of histopathological images of oral epithelium and OSCC. Using a publicly available database containing 1224 images from 230 patients, captured at varying magnifications, a customized deep learning model based on EfficientNetB3 was developed. The model's objective was to differentiate between normal epithelium and OSCC tissues by employing advanced techniques such as data augmentation, regularization, and optimization. Methods: The research utilized a histopathological imaging database for oral cancer analysis, incorporating 1224 images from 230 patients. These images, taken at various magnifications, formed the basis for training a specialized deep learning model built upon the EfficientNetB3 architecture. The model was trained to distinguish between normal epithelium and OSCC tissues, employing methodologies including data augmentation, regularization techniques, and optimization strategies. Results: The customized deep learning model achieved 99% accuracy when tested on the dataset. This high accuracy underscores the model's efficacy in discerning between normal epithelium and OSCC tissues. Furthermore, the model exhibited strong precision, recall, and F1-score metrics, reinforcing its potential as a robust diagnostic tool for OSCC. Discussion: This research demonstrates the promising potential of deep learning models to address the diagnostic challenges associated with OSCC. The model's 99% accuracy on the test dataset represents a considerable step toward earlier and more accurate detection of OSCC. Leveraging advanced machine learning techniques, such as data augmentation and optimization, has shown promising results for improving patient outcomes through timely and precise identification of OSCC.
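
A minimal sketch of an EfficientNetB3-based binary classifier of the kind described (normal epithelium vs. OSCC), written with Keras; the input size, augmentation layers, dropout rate, and optimizer settings are illustrative assumptions rather than the authors' exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB3

base = EfficientNetB3(include_top=False, weights="imagenet",
                      input_shape=(300, 300, 3), pooling="avg")
base.trainable = False  # start from frozen ImageNet features; fine-tune later if desired

model = models.Sequential([
    layers.Input(shape=(300, 300, 3)),
    layers.RandomFlip("horizontal"),        # simple data augmentation
    layers.RandomRotation(0.1),
    base,
    layers.Dropout(0.3),                    # regularization
    layers.Dense(1, activation="sigmoid"),  # normal epithelium vs. OSCC
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
```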

7.
PLoS One ; 16(8): e0253383, 2021.
Article in English | MEDLINE | ID: mdl-34437542

ABSTRACT

The dimensionality of the spatially distributed channels and the temporal resolution of electroencephalogram (EEG)-based brain-computer interfaces (BCI) undermine emotion recognition models. Thus, before such data are modeled in the final stage of the learning pipeline, adequate preprocessing, transformation, and extraction of temporal (i.e., time-series signal) and spatial (i.e., electrode channel) features are essential for recognizing underlying human emotions. Conventionally, inter-subject variation is dealt with by avoiding the sources of variation (e.g., outliers) or by turning the problem into a subject-dependent one. We address this issue by preserving and learning from individual particularities in response to affective stimuli. This paper investigates and proposes a subject-independent emotion recognition framework that mitigates subject-to-subject variability in such systems. Using an unsupervised feature selection algorithm, we reduce the feature space extracted from the time-series signals. For the spatial features, we propose a subject-specific unsupervised learning algorithm that learns from inter-channel co-activation online. We tested this framework on real EEG benchmarks, namely DEAP, MAHNOB-HCI, and DREAMER. We trained and tested the selection outcomes using nested cross-validation and a support vector machine (SVM), and compared our results with state-of-the-art subject-independent algorithms. Our results show enhanced performance, classifying human affect (i.e., based on valence and arousal) 16%-27% more accurately than other studies. This work not only outperforms other subject-independent studies reported in the literature but also offers an online analysis solution for affect recognition.
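
A rough sketch of the evaluation protocol described above: feature selection inside a pipeline, an inner grid search to tune the SVM, and an outer cross-validation loop (nested CV). The variance-based selector merely stands in for the paper's unsupervised feature-selection algorithm, and the features are synthetic rather than real DEAP/MAHNOB-HCI/DREAMER EEG features.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for temporal/spatial features extracted from EEG trials (binary valence labels).
X, y = make_classification(n_samples=400, n_features=160, n_informative=30, random_state=0)

pipe = Pipeline([
    ("select", VarianceThreshold(threshold=0.0)),  # unsupervised selection stand-in
    ("scale", StandardScaler()),
    ("svm", SVC()),
])
inner = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.01]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=5)   # outer loop of the nested CV
print("nested-CV accuracy:", outer_scores.mean())
```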


Subjects
Brain-Computer Interfaces, Electroencephalography/methods, Emotions, Algorithms, Electrodes, Humans, Support Vector Machine
8.
J Comput Biol ; 24(4): 280-288, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27960065

ABSTRACT

Due to the significant amount of DNA data being generated by next-generation sequencing machines for genomes ranging in length from megabases to gigabases, there is an increasing need to compress such data for reduced storage and faster transmission. Different implementations of Huffman encoding that incorporate the characteristics of DNA sequences prove to compress DNA data better. These implementations center on selecting frequent repeats so as to force a skewed Huffman tree, as well as on constructing multiple Huffman trees when encoding. The implementations demonstrate improvements in the compression ratios for five genomes with lengths ranging from 5 to 50 Mbp, compared with the standard Huffman tree algorithm. The research hence suggests an improvement on all DNA sequence compression algorithms that use conventional Huffman encoding. Accompanying software is publicly available (AL-Okaily, 2016).
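
For reference, a baseline sketch of the conventional Huffman coding that these repeat-aware, skewed-tree variants improve on; the coder below operates on single nucleotide symbols and does not reproduce the skewed-tree or multiple-tree constructions described above.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code(sequence):
    """Return a symbol -> bitstring code table built from symbol frequencies."""
    tiebreak = count()  # keeps heap comparisons well-defined for equal frequencies
    heap = [(freq, next(tiebreak), {sym: ""}) for sym, freq in Counter(sequence).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)            # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

seq = "ACGTACGTAAAACCCGGT"
table = huffman_code(seq)
encoded = "".join(table[s] for s in seq)
print(table, len(encoded), "bits vs", 8 * len(seq), "bits uncompressed")
```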


Subjects
Algorithms, Data Compression/methods, Sequence Analysis, DNA/methods, Genomics, High-Throughput Nucleotide Sequencing/methods, Software