ABSTRACT
The current approach to diagnosing and classifying brain tumors relies on the histological evaluation of biopsy samples, which is invasive, time-consuming, and susceptible to manual errors. These limitations underscore the pressing need for a fully automated, deep-learning-based multi-classification system for brain malignancies. This article aims to leverage a deep convolutional neural network (CNN) to enhance early detection and presents three distinct CNN models designed for different types of classification tasks. The first CNN model achieves an impressive detection accuracy of 99.53% for brain tumors. The second CNN model, with an accuracy of 93.81%, proficiently categorizes brain tumors into five distinct types: normal, glioma, meningioma, pituitary, and metastatic. Furthermore, the third CNN model demonstrates an accuracy of 98.56% in accurately classifying brain tumors into their different grades. To ensure optimal performance, a grid search optimization approach is employed to automatically fine-tune all the relevant hyperparameters of the CNN models. The utilization of large, publicly accessible clinical datasets results in robust and reliable classification outcomes. This article conducts a comprehensive comparison of the proposed models against classical models, such as AlexNet, DenseNet121, ResNet-101, VGG-19, and GoogleNet, reaffirming the superiority of the deep CNN-based approach in advancing the field of brain tumor classification and early detection.
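To make the grid-search step concrete, here is a minimal Python sketch of tuning a small CNN over a hyperparameter grid and keeping the best validation accuracy; the architecture, grid values, and placeholder data are illustrative assumptions, not the models or settings used in the article.

```python
# Minimal sketch of grid-search hyperparameter tuning for a small CNN classifier,
# assuming preprocessed MRI slices in X (N, 128, 128, 1) and integer labels in y.
import itertools
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

def build_cnn(num_classes, filters, kernel_size, dropout, learning_rate):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(128, 128, 1)),
        tf.keras.layers.Conv2D(filters, kernel_size, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(filters * 2, kernel_size, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical placeholder data; replace with the actual MRI dataset.
X = np.random.rand(200, 128, 128, 1).astype("float32")
y = np.random.randint(0, 5, size=200)   # e.g. five tumor categories
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

grid = {"filters": [16, 32], "kernel_size": [3, 5],
        "dropout": [0.25, 0.5], "learning_rate": [1e-3, 1e-4]}
best_acc, best_cfg = 0.0, None
for combo in itertools.product(*grid.values()):
    cfg = dict(zip(grid.keys(), combo))
    model = build_cnn(num_classes=5, **cfg)
    model.fit(X_tr, y_tr, epochs=3, batch_size=32, verbose=0)
    _, acc = model.evaluate(X_val, y_val, verbose=0)
    if acc > best_acc:
        best_acc, best_cfg = acc, cfg
print("best config:", best_cfg, "val accuracy:", round(best_acc, 4))
```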
Subjects
Brain Neoplasms; Glioma; Meningeal Neoplasms; Humans; Brain; Brain Neoplasms/diagnostic imaging; Neural Networks, Computer
ABSTRACT
The goal of this research is to create an ensemble deep learning model for Internet of Things (IoT) applications that specifically target remote patient monitoring (RPM) by integrating long short-term memory (LSTM) networks and convolutional neural networks (CNN). The work tackles important RPM concerns such as early diagnosis of health issues and accurate real-time collection and analysis of physiological data from wearable IoT devices. By assessing key health factors such as heart rate, blood pressure, pulse, temperature, activity level, weight management, respiration rate, medication adherence, sleep patterns, and oxygen levels, the proposed Remote Patient Monitor Model (RPMM) attains a noteworthy accuracy of 97.23%. The model's capacity to identify spatial and temporal relationships in health data is improved by combining CNN layers for spatial analysis and feature extraction with LSTM layers for temporal sequence modeling. This synergistic approach enhances trend identification and anomaly detection in vital signs, making early intervention easier. A variety of datasets are used to validate the model's robustness, highlighting its efficacy in remote patient care. The study shows how leveraging the strengths of ensemble models can improve the precision and timeliness of health monitoring, ultimately benefiting patients and easing the burden on healthcare systems.
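As a hedged illustration of the CNN-LSTM combination described above, the following Keras sketch stacks Conv1D layers (local, cross-channel feature extraction) in front of LSTM layers (temporal sequence modeling) for windows of vital-sign data; the window length, channel count, layer sizes, and two-class output are assumptions, not the RPMM configuration.

```python
# Minimal sketch of a CNN-LSTM network for windows of wearable vital-sign data,
# assuming input windows of shape (timesteps=60, channels=10) covering ten signals
# (heart rate, blood pressure, SpO2, etc.). All sizes are illustrative only.
import tensorflow as tf

def build_rpm_model(timesteps=60, channels=10, num_classes=2):
    # Conv1D layers extract local patterns within each window; the LSTM layers
    # then model how those patterns evolve over time.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(timesteps, channels)),
        tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv1D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(num_classes, activation="softmax"),  # e.g. normal vs. anomalous
    ])

model = build_rpm_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```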
Subjects
Deep Learning; Internet of Things; Humans; Monitoring, Physiologic/methods; Wearable Electronic Devices; Neural Networks, Computer; Heart Rate; Telemedicine; Remote Sensing Technology/methods
ABSTRACT
Brain tumors, characterized by the uncontrolled growth of abnormal cells, pose a significant threat to human health. Early detection is crucial for successful treatment and improved patient outcomes. Magnetic Resonance Imaging (MRI) is the primary diagnostic tool for brain tumors, providing detailed visualizations of the brain's intricate structures. However, the complexity and variability of tumor shapes and locations often challenge physicians in achieving accurate tumor segmentation on MRI images. Precise tumor segmentation is essential for effective treatment planning and prognosis. To address this challenge, we propose a novel hybrid deep learning technique, Convolutional Neural Network and ResNeXt101 (ConvNet-ResNeXt101), for automated tumor segmentation and classification. Our approach commences with data acquisition from the BRATS 2020 dataset, a benchmark collection of MRI images with corresponding tumor segmentations. Next, we employ batch normalization to smooth and enhance the collected data, followed by feature extraction using the AlexNet model, which captures features based on tumor shape, position, and surface characteristics. To select the most informative features for effective segmentation, we utilize an advanced meta-heuristic algorithm called Advanced Whale Optimization (AWO), which mimics the hunting behavior of humpback whales to iteratively search for the optimal feature subset. With the selected features, we perform image segmentation using the ConvNet-ResNeXt101 model, a deep learning architecture that combines the strengths of a ConvNet with ResNeXt101, a network built on aggregated residual transformations. Finally, we apply the same ConvNet-ResNeXt101 model for tumor classification, categorizing the segmented tumor into distinct types. Our experiments demonstrate the superior performance of the proposed ConvNet-ResNeXt101 model compared to existing approaches, achieving an accuracy of 99.27% for the tumor core class with a minimum learning elapsed time of 0.53 s.
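The whale-inspired feature selection can be illustrated with a simplified sketch: continuous "whale" positions are thresholded into binary feature masks and scored by a classifier's cross-validated accuracy. The sketch follows the standard whale optimization update rules on a stand-in dataset; it does not reproduce the paper's specific "Advanced" modifications or its AlexNet-derived features.

```python
# Simplified whale-optimization-style binary feature selection (illustrative only).
import numpy as np
from sklearn.datasets import load_breast_cancer          # stand-in feature matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
n_feats, n_whales, n_iter = X.shape[1], 10, 20
rng = np.random.default_rng(0)

def fitness(mask):
    # Score a binary feature mask by cross-validated accuracy of a small classifier.
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pos = rng.random((n_whales, n_feats))                     # continuous positions in [0, 1]
scores = np.array([fitness((p > 0.5).astype(int)) for p in pos])
best, best_score = pos[scores.argmax()].copy(), scores.max()

for t in range(n_iter):
    a = 2 - 2 * t / n_iter                                # "a" decreases linearly from 2 to 0
    for i in range(n_whales):
        r = rng.random(n_feats)
        A, C = 2 * a * r - a, 2 * rng.random(n_feats)
        if rng.random() < 0.5:                            # encircling prey / random search
            leader = best if np.all(np.abs(A) < 1) else pos[rng.integers(n_whales)]
            pos[i] = leader - A * np.abs(C * leader - pos[i])
        else:                                             # spiral bubble-net maneuver
            l = rng.uniform(-1, 1)
            pos[i] = np.abs(best - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
    pos = np.clip(pos, 0, 1)
    scores = np.array([fitness((p > 0.5).astype(int)) for p in pos])
    if scores.max() > best_score:
        best, best_score = pos[scores.argmax()].copy(), scores.max()

print("selected feature indices:", np.flatnonzero(best > 0.5))
print("cross-validated accuracy:", round(best_score, 4))
```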
Subjects
Brain Neoplasms; Deep Learning; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Brain Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Algorithms
ABSTRACT
Parkinson's Disease (PD) is a prevalent neurological condition characterized by motor and cognitive impairments, typically manifesting around the age of 50 and presenting symptoms such as gait difficulties and speech impairments. Although a cure remains elusive, symptom management through medication is possible. Timely detection is pivotal for effective disease management. In this study, we leverage Machine Learning (ML) and Deep Learning (DL) techniques, specifically K-Nearest Neighbor (KNN) and Feed-forward Neural Network (FNN) models, to differentiate between individuals with PD and healthy individuals based on voice signal characteristics. Our dataset, sourced from the University of California at Irvine (UCI), comprises 195 voice recordings collected from 31 patients. To optimize model performance, we employ several strategies, including the Synthetic Minority Over-sampling Technique (SMOTE) to address class imbalance, feature selection to identify the most relevant features, and hyperparameter tuning using RandomizedSearchCV. Our experimentation reveals that the FNN and KSVM models, each trained and tested on an 80-20 split of the dataset, yield the most promising results. The FNN model achieves an impressive overall accuracy of 99.11%, with 98.78% recall, 99.96% precision, and a 99.23% F1-score. Similarly, the KSVM model demonstrates strong performance, with an overall accuracy of 95.89%, recall of 96.88%, precision of 98.71%, and an F1-score of 97.62%. Overall, our study showcases the efficacy of ML and DL techniques in accurately identifying PD from voice signals, underscoring the potential for these approaches to contribute significantly to early diagnosis and intervention strategies for Parkinson's Disease.
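A minimal sketch of the class-balancing and hyperparameter-search steps named above, assuming the UCI voice features are already parsed into a numeric matrix X and labels y (0 = healthy, 1 = PD); the pipeline pairs SMOTE with RandomizedSearchCV around a KNN classifier, and the placeholder data and parameter grid are illustrative rather than the study's exact setup.

```python
# SMOTE + RandomizedSearchCV sketch for imbalanced Parkinson's voice features.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical placeholder data; replace with the parsed UCI recordings.
X = np.random.rand(195, 22)
y = np.random.binomial(1, 0.75, size=195)        # imbalanced: mostly PD cases

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=42)),           # oversample the minority class (train folds only)
    ("knn", KNeighborsClassifier()),
])
param_dist = {"knn__n_neighbors": range(1, 15),
              "knn__weights": ["uniform", "distance"]}
search = RandomizedSearchCV(pipe, param_dist, n_iter=10, cv=5,
                            scoring="f1", random_state=42)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_)
print("held-out F1:", round(search.score(X_te, y_te), 4))
```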
Subjects
Machine Learning; Parkinson Disease; Parkinson Disease/diagnosis; Humans; Male; Female; Middle Aged; Aged; Neural Networks, Computer; Voice; Deep Learning
ABSTRACT
Artificial intelligence-powered deep learning methods are being used to diagnose brain tumors with high accuracy, owing to their ability to process large amounts of data. Magnetic resonance imaging stands as the gold standard for brain tumor diagnosis using machine vision, surpassing computed tomography, ultrasound, and X-ray imaging in its effectiveness. Despite this, brain tumor diagnosis remains a challenging endeavour due to the intricate structure of the brain. This study delves into the potential of deep transfer learning architectures to elevate the accuracy of brain tumor diagnosis. Transfer learning is a machine learning technique that allows pre-trained models to be repurposed for new tasks, which is particularly useful for medical imaging, where labelled data is often scarce. Four distinct transfer learning architectures were assessed in this study: ResNet152, VGG19, DenseNet169, and MobileNetv3. The models were trained and validated on a benchmark dataset from Kaggle, with five-fold cross-validation adopted for training and testing. To improve the balance of the dataset and the performance of the models, image enhancement techniques were applied to the data for the four categories: pituitary, normal, meningioma, and glioma. MobileNetv3 achieved the highest accuracy of 99.75%, significantly outperforming other existing methods. This demonstrates the potential of deep transfer learning architectures to revolutionize the field of brain tumor diagnosis.
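As a sketch of the transfer-learning setup, the snippet below builds a classifier from a pretrained MobileNetV3 backbone with frozen ImageNet features and a small four-class head; the input size, head design, and training schedule are assumptions, not the configuration used in the study.

```python
# Transfer-learning sketch: frozen MobileNetV3 backbone + new classification head
# for four brain MRI classes (glioma, meningioma, pituitary, normal).
import tensorflow as tf

base = tf.keras.applications.MobileNetV3Large(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                      # freeze the ImageNet feature extractor

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.mobilenet_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(4, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # one fold of the 5-fold CV
```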
Subjects
Brain Neoplasms; Deep Learning; Meningeal Neoplasms; Humans; Artificial Intelligence; Brain Neoplasms/diagnostic imaging; Brain/diagnostic imaging; Machine Learning
ABSTRACT
Federated learning is currently one of the most prominent choices for decision-making on distributed data. A significant benefit of federated learning is that, unlike centralized deep learning, data samples do not need to be shared with the model owner. In traditional federated learning, the weights of the global model are created by averaging the weights of all clients or sites. The proposed work presents a novel, genetic-algorithm-based method for generating an optimized base model without hampering its performance. All the intermediate operations of the genetic algorithm, namely chromosome representation, crossover, and mutation, are illustrated with useful examples. After applying the genetic algorithm, there is a significant improvement in inference time and a substantial reduction in storage space, so the model can be easily deployed on resource-constrained devices. For the experimental work, sports data has been used in balanced and unbalanced scenarios with various numbers of clients in a federated learning environment. In addition, four well-known deep learning architectures, namely AlexNet, VGG19, ResNet50, and EfficientNetB3, have been used as the base model. Using the GA-based approach with EfficientNetB3 as the base model, we achieve 92.34% accuracy with 9 clients on the balanced dataset. Moreover, after applying the genetic algorithm to optimize EfficientNetB3, inference time and storage space improve by 20% and 2.35%, respectively.
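For context, the "traditional" aggregation the abstract refers to is federated averaging of client weights; the framework-agnostic NumPy sketch below shows that step, optionally weighted by client sample counts. The GA-based compression of the base model is not reproduced here, and the toy weights are placeholders.

```python
# Federated averaging (FedAvg-style) of per-client layer weights.
import numpy as np

def federated_average(client_weights, client_sizes=None):
    """client_weights: list of per-client weight lists (one ndarray per layer)."""
    n_clients = len(client_weights)
    if client_sizes is None:
        coeffs = np.full(n_clients, 1.0 / n_clients)       # plain average
    else:
        coeffs = np.asarray(client_sizes, dtype=float) / float(sum(client_sizes))
    n_layers = len(client_weights[0])
    # Weighted sum of each layer's weights across clients.
    return [sum(coeffs[c] * client_weights[c][layer] for c in range(n_clients))
            for layer in range(n_layers)]

# Toy example: three clients, two "layers" each.
clients = [[np.ones((2, 2)) * k, np.ones(3) * k] for k in (1.0, 2.0, 3.0)]
global_weights = federated_average(clients, client_sizes=[100, 200, 300])
print(global_weights[0])   # weighted mean of the first layer across clients
```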
Subjects
Algorithms; Sports; Humans; Deep Learning; Neural Networks, Computer
ABSTRACT
This article addresses automated segmentation and classification of COVID-19 and normal chest CT scan images. Segmentation is the preprocessing step for classification, and 12 DWT-PCA-based texture features extracted from the segmented image are used as input to a random forest machine-learning algorithm to classify COVID-19/non-COVID-19 disease. Diagnosing COVID-19 through an RT-PCR test is a time-consuming process, and the result is sometimes inaccurate: a false negative can threaten the patient's life by delaying the start of appropriate treatment. There is therefore an urgent need for a reliable automatic COVID-19 detection tool that can detect the disease from chest CT scan images within a shorter time and help doctors start treatment as early as possible. In this article, a variant of the whale optimization algorithm, named the improved whale optimization algorithm (IWOA), is introduced. The efficiency of the IWOA is tested on unimodal (F1-F7), multimodal (F8-F13), and fixed-dimension multimodal (F14-F23) benchmark functions and compared with the whale optimization algorithm (WOA), salp swarm algorithm (SSA), and sine cosine algorithm (SCA). The experiment is carried out over 30 trials, with the population size and number of iterations set to 30 and 100, respectively, in each trial. IWOA achieves faster convergence than WOA, SSA, and SCA and enhances the exploitation and exploration phases of WOA, avoiding local entrapment. IWOA, WOA, SSA, and SCA use Otsu's maximum between-class variance criterion as the fitness function to compute optimal threshold values for multilevel medical CT scan image segmentation. Evaluation measures such as accuracy, specificity, precision, recall, G-mean, F-measure, SSIM, and 12 DWT-PCA-based texture features are computed. The experiments show that IWOA is efficient and achieves better segmentation evaluation measures and better segmentation masks than the other methods. DWT-PCA-based texture features extracted from each of the 160 IWOA-, WOA-, SSA-, and SCA-based segmented training images are fed into the random forest for training, and the classifier is tested with DWT-PCA-based texture features extracted from each of the 40 corresponding segmented test images. The random forest reports a promising classification accuracy of 97.49% on the DWT-PCA-based texture features extracted from the IWOA-based segmented images.
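To clarify the fitness function shared by IWOA, WOA, SSA, and SCA, the sketch below computes Otsu's between-class variance for a candidate set of thresholds over an 8-bit image histogram; a metaheuristic would maximize this score over threshold vectors. The optimizers themselves are not reproduced, and the image and thresholds are toy placeholders.

```python
# Otsu's between-class variance as a multilevel-thresholding fitness function.
import numpy as np

def otsu_between_class_variance(image, thresholds):
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()                       # normalized gray-level histogram
    levels = np.arange(256)
    mu_total = (levels * p).sum()               # global mean gray level
    edges = [0] + sorted(int(t) for t in thresholds) + [256]
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):   # one term per thresholded class
        w = p[lo:hi].sum()
        if w > 0:
            mu = (levels[lo:hi] * p[lo:hi]).sum() / w
            variance += w * (mu - mu_total) ** 2
    return variance

# Toy usage on a synthetic 8-bit "scan"; a real pipeline would load the CT slice.
img = np.random.randint(0, 256, size=(128, 128))
print(otsu_between_class_variance(img, thresholds=[85, 170]))
```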