Results 1 - 20 of 29
2.
PLoS One ; 19(7): e0307187, 2024.
Article in English | MEDLINE | ID: mdl-39024353

ABSTRACT

In urban scene segmentation, the "image-to-image translation issue" refers to the fundamental task of transforming input images into meaningful segmentation maps, which essentially involves translating the visual information in the input image into semantic labels for different classes. When this translation is inaccurate or incomplete, segmentation fails: the model struggles to correctly classify pixels into the appropriate semantic categories. This study proposes a conditional Generative Adversarial Network (cGAN) for creating high-resolution urban maps from satellite images. The method combines semantic and spatial data within the cGAN framework to produce realistic urban scenes while maintaining crucial details. To assess the performance of the proposed method, extensive experiments are performed on the benchmark ISPRS Potsdam and Vaihingen datasets. Two quantitative metrics, Intersection over Union (IoU) and Pixel Accuracy, are used to evaluate the segmentation accuracy of the produced maps. The experimental findings show that the proposed cGAN-based method outperforms traditional techniques, attaining an IoU of 87% and a Pixel Accuracy of 93% and generating urban maps with finely detailed information. The approach provides a framework for resolving image-to-image translation difficulties in urban scene segmentation and demonstrates the potential of cGANs for producing high-quality urban maps from satellite data.
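For reference, the two reported metrics can be computed directly from a predicted label map and its ground truth; the toy 2x2 maps and two-class setup below are illustrative, not from the paper:

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float(np.mean(pred == gt))

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union across classes present in either map."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with classes {0, 1}
gt   = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
print(pixel_accuracy(pred, gt))   # 0.75
print(mean_iou(pred, gt, 2))      # (0.5 + 2/3) / 2 ≈ 0.583
```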


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Satellite Imagery , Satellite Imagery/methods , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Humans , Algorithms
3.
Sci Rep ; 14(1): 15661, 2024 07 08.
Article in English | MEDLINE | ID: mdl-38977848

ABSTRACT

The goal of this research is to create an ensemble deep learning model for Internet of Things (IoT) applications that specifically targets remote patient monitoring (RPM) by integrating long short-term memory (LSTM) networks and convolutional neural networks (CNNs). The work tackles important RPM concerns such as early diagnosis of health issues and accurate real-time collection and analysis of physiological data using wearable IoT devices. By assessing important health factors such as heart rate, blood pressure, pulse, temperature, activity level, weight management, respiration rate, medication adherence, sleep patterns, and oxygen levels, the suggested Remote Patient Monitor Model (RPMM) attains a noteworthy accuracy of 97.23%. The model's capacity to identify spatial and temporal relationships in health data is improved by techniques such as CNN-based spatial analysis and feature extraction and LSTM-based temporal sequence modeling. This synergistic approach enhances trend identification and anomaly detection in vital signs, making early intervention easier. A variety of datasets are used to validate the model's robustness, highlighting its efficacy in remote patient care. This study shows how leveraging the advantages of ensemble models can improve the precision and promptness of health monitoring, ultimately benefiting patients and easing the burden on healthcare systems.
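The anomaly-detection component described here can be illustrated, at a much smaller scale, with a rolling z-score over a vital-sign series; the window size, threshold, and heart-rate values below are illustrative assumptions, not settings from the paper:

```python
import numpy as np

def rolling_zscore_anomalies(series, window=10, threshold=3.0):
    """Flag samples deviating more than `threshold` standard deviations
    from the mean of the preceding `window` samples."""
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flags[i] = True
    return flags

# Steady heart rate around 70 bpm with one abrupt spike at index 11
hr = [70, 71, 69, 70, 72, 70, 69, 71, 70, 70, 71, 130, 70]
print(np.where(rolling_zscore_anomalies(hr))[0])   # [11]
```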


Subject(s)
Deep Learning , Internet of Things , Humans , Monitoring, Physiologic/methods , Wearable Electronic Devices , Neural Networks, Computer , Heart Rate , Telemedicine , Remote Sensing Technology/methods
4.
Sci Rep ; 14(1): 13813, 2024 06 15.
Article in English | MEDLINE | ID: mdl-38877028

ABSTRACT

Parkinson's Disease (PD) is a prevalent neurological condition characterized by motor and cognitive impairments, typically manifesting around the age of 50 and presenting symptoms such as gait difficulties and speech impairments. Although a cure remains elusive, symptom management through medication is possible. Timely detection is pivotal for effective disease management. In this study, we leverage Machine Learning (ML) and Deep Learning (DL) techniques, specifically K-Nearest Neighbor (KNN) and Feed-forward Neural Network (FNN) models, to differentiate between individuals with PD and healthy individuals based on voice signal characteristics. Our dataset, sourced from the University of California at Irvine (UCI), comprises 195 voice recordings collected from 31 patients. To optimize model performance, we employ various strategies including Synthetic Minority Over-sampling Technique (SMOTE) for addressing class imbalance, Feature Selection to identify the most relevant features, and hyperparameter tuning using RandomizedSearchCV. Our experimentation reveals that the FNN and KSVM models, trained on an 80-20 split of the dataset for training and testing respectively, yield the most promising results. The FNN model achieves an impressive overall accuracy of 99.11%, with 98.78% recall, 99.96% precision, and a 99.23% f1-score. Similarly, the KSVM model demonstrates strong performance with an overall accuracy of 95.89%, recall of 96.88%, precision of 98.71%, and an f1-score of 97.62%. Overall, our study showcases the efficacy of ML and DL techniques in accurately identifying PD from voice signals, underscoring the potential for these approaches to contribute significantly to early diagnosis and intervention strategies for Parkinson's Disease.
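The preprocessing pipeline the abstract names, class balancing plus RandomizedSearchCV, can be sketched with scikit-learn; the paper uses SMOTE, for which plain minority oversampling stands in here, and the synthetic dataset merely mimics the 195-recording shape of the UCI voice data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.utils import resample

# Synthetic imbalanced stand-in for the 195-recording UCI voice dataset
X, y = make_classification(n_samples=195, n_features=22,
                           weights=[0.25, 0.75], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Oversample the minority class to balance the training split
# (the paper uses SMOTE; plain resampling is a simpler stand-in)
minority = y_tr == np.bincount(y_tr).argmin()
X_min, y_min = resample(X_tr[minority], y_tr[minority],
                        n_samples=int((~minority).sum()), random_state=42)
X_bal = np.vstack([X_tr[~minority], X_min])
y_bal = np.concatenate([y_tr[~minority], y_min])

# Randomized hyperparameter search, as in the paper (KNN as the estimator)
search = RandomizedSearchCV(
    KNeighborsClassifier(),
    {"n_neighbors": list(range(1, 15)),
     "weights": ["uniform", "distance"]},
    n_iter=10, cv=5, random_state=42)
search.fit(X_bal, y_bal)
print(search.best_params_, round(search.best_score_, 3))
```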


Subject(s)
Machine Learning , Parkinson Disease , Parkinson Disease/diagnosis , Humans , Male , Female , Middle Aged , Aged , Neural Networks, Computer , Voice , Deep Learning
5.
BMC Med Imaging ; 24(1): 147, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38886661

ABSTRACT

Diagnosing brain tumors is a complex and time-consuming process that relies heavily on radiologists' expertise and interpretive skills. However, the advent of deep learning methodologies has revolutionized the field, offering more accurate and efficient assessments. Attention-based models have emerged as promising tools, focusing on salient features within complex medical imaging data. However, the precise impact of different attention mechanisms, such as channel-wise, spatial, or combined attention within the Channel-wise Attention Mode (CWAM), for brain tumor classification remains relatively unexplored. This study aims to address this gap by leveraging the power of ResNet101 coupled with CWAM (ResNet101-CWAM) for brain tumor classification. The results show that ResNet101-CWAM surpassed conventional deep learning classification methods like ConvNet, achieving exceptional performance metrics of 99.83% accuracy, 99.21% recall, 99.01% precision, 99.27% F1-score and 99.16% AUC on the same dataset. This enhanced capability holds significant implications for clinical decision-making, as accurate and efficient brain tumor classification is crucial for guiding treatment strategies and improving patient outcomes. Integrating ResNet101-CWAM into existing brain classification software platforms is a crucial step towards enhancing diagnostic accuracy and streamlining clinical workflows for physicians.
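Channel-wise attention of the kind CWAM applies can be illustrated with a squeeze-and-excitation-style block: pool each channel globally, pass the pooled vector through a small bottleneck, and rescale the feature map by the resulting per-channel weights. The weights below are random, so this sketch shows only the data flow, not the paper's trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """feat: (C, H, W) feature map; w1: (C, C//r); w2: (C//r, C).
    Returns the feature map rescaled by per-channel attention weights."""
    squeeze = feat.mean(axis=(1, 2))                    # (C,) global avg pool
    excite = sigmoid(np.maximum(squeeze @ w1, 0) @ w2)  # bottleneck + sigmoid
    return feat * excite[:, None, None], excite

rng = np.random.default_rng(0)
C, r = 8, 4
feat = rng.standard_normal((C, 16, 16))
out, weights = channel_attention(feat,
                                 rng.standard_normal((C, C // r)),
                                 rng.standard_normal((C // r, C)))
print(out.shape)   # (8, 16, 16): same shape, channels reweighted in [0, 1]
```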


Subject(s)
Brain Neoplasms , Deep Learning , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Brain Neoplasms/pathology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
6.
Sci Rep ; 14(1): 10812, 2024 05 11.
Article in English | MEDLINE | ID: mdl-38734714

ABSTRACT

Cervical cancer, the second most prevalent cancer affecting women, arises from abnormal cell growth in the cervix, a crucial anatomical structure within the uterus. The significance of early detection cannot be overstated, prompting the use of various screening methods such as Pap smears, colposcopy, and Human Papillomavirus (HPV) testing to identify potential risks and initiate timely intervention. These screening procedures encompass visual inspections, Pap smears, colposcopies, biopsies, and HPV-DNA testing, each demanding the specialized knowledge and skills of experienced physicians and pathologists due to the inherently subjective nature of cancer diagnosis. In response to the imperative for efficient and intelligent screening, this article introduces a methodology that leverages pre-trained deep neural network models, including AlexNet, ResNet-101, ResNet-152, and InceptionV3, for feature extraction. The fine-tuning of these models is accompanied by the integration of diverse machine learning algorithms, with ResNet-152 showcasing exceptional performance, achieving an impressive accuracy rate of 98.08%. Notably, the SIPaKMeD dataset, publicly accessible and utilized in this study, contributes to the transparency and reproducibility of the findings. The proposed hybrid methodology combines aspects of DL and ML for cervical cancer classification: intricate and complicated image features are extracted through DL, and various ML algorithms are then applied to the extracted features. This approach not only holds promise for significantly improving cervical cancer detection but also underscores the transformative potential of intelligent automation within medical diagnostics, paving the way for more accurate and timely interventions.
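The hybrid pattern described, a fixed deep feature extractor followed by a classical ML classifier, can be sketched framework-free; here PCA stands in for the frozen CNN layers and scikit-learn's digits dataset stands in for SIPaKMeD cell images, so this shows only the pipeline shape, not the paper's models:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Small image dataset as a stand-in for cervical cell images
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Fixed feature extractor (the paper uses frozen ResNet/Inception layers;
# PCA stands in so the sketch stays framework-free)
extractor = PCA(n_components=32, random_state=42).fit(X_tr)

# Classical ML classifier trained on the extracted features
clf = SVC().fit(extractor.transform(X_tr), y_tr)
print(round(clf.score(extractor.transform(X_te), y_te), 3))
```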


Subject(s)
Deep Learning , Early Detection of Cancer , Uterine Cervical Neoplasms , Humans , Uterine Cervical Neoplasms/diagnosis , Uterine Cervical Neoplasms/pathology , Female , Early Detection of Cancer/methods , Neural Networks, Computer , Algorithms , Papanicolaou Test/methods , Colposcopy/methods
7.
Heliyon ; 10(9): e30241, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38720763

ABSTRACT

Parkinson's disease (PD) is an age-related neurodegenerative disorder characterized by motor deficits, including tremor, rigidity, bradykinesia, and postural instability. According to the World Health Organization, about 1% of the global population has been diagnosed with PD, and this figure is expected to double by 2040. Early and accurate diagnosis of PD is critical to slowing the progression of the disease and reducing long-term disability. Due to the complexity of the disease, it is difficult to diagnose accurately using traditional clinical tests; it has therefore become necessary to develop intelligent diagnostic models that can accurately detect PD. This article introduces a novel hybrid approach for accurate prediction of PD using an adaptive neuro-fuzzy inference system (ANFIS) with two optimizers, Adam and particle swarm optimization (PSO). ANFIS is a type of fuzzy logic system used for nonlinear function approximation and classification; the Adam optimizer adaptively adjusts the learning rate of each individual ANFIS parameter at each training step, which helps the model find a better solution more quickly; and PSO is a metaheuristic approach inspired by the behavior of social animals such as birds. Combining these two methods has the potential to provide improved accuracy and robustness in PD diagnosis compared to existing methods. The proposed method exploits the advantages of both optimization techniques, applying them to the developed ANFIS model to maximize its prediction accuracy. The system was developed using open-access clinical and demographic data. The ANFIS parameters were selected through a comparative experimental analysis considering the number of fuzzy membership functions, the number of ANFIS training epochs, and the number of PSO particles. The performance of the two models, ANFIS (Adam) and ANFIS (PSO), is analyzed in detail with respect to these parameters and various evaluation metrics. The experimental results show that ANFIS (PSO) performs better in terms of loss and precision, whereas ANFIS (Adam) gives better accuracy, F1-score, and recall. This adaptive neuro-fuzzy algorithm thus provides a promising strategy for the diagnosis of PD, and the proposed models show their suitability for many other practical applications.
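PSO itself is straightforward to sketch; the minimal optimizer below minimizes the sphere function, using common default coefficients rather than the paper's ANFIS-tuning settings:

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=100, seed=0):
    """Minimal particle swarm optimizer: velocities are pulled toward each
    particle's personal best and the swarm's global best position."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and attraction coefficients
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

best, val = pso(lambda x: np.sum(x ** 2))   # sphere function, minimum at 0
print(best, val)
```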

8.
Sci Rep ; 14(1): 9388, 2024 04 24.
Article in English | MEDLINE | ID: mdl-38654051

ABSTRACT

Skin cancer is caused by mutational differences in epidermis hormones and patch appearances. Many studies focus on the design and development of effective approaches for the diagnosis and categorization of skin cancer, with decisions made on independent training datasets under limited conditions and scenarios. In this research, Kaggle-based datasets are optimized and categorized into a labeled data array for indexing using federated learning (FL). The technique is built on the grey wolf optimization (GWO) algorithm to ensure that dataset attribute dependencies are extracted and dimensional mapping is performed. The threshold-value validation of the dimensionally mapped datasets is optimized and trained within a neural network framework, further extended via federated learning standards. The technique demonstrated 95.82% accuracy with the GWO technique and 94.9% accuracy with the combination of the Trained Neural Network (TNN) framework and Recessive Learning (RL).
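A minimal grey wolf optimizer can be sketched in a few lines: the pack moves toward the three best wolves (alpha, beta, delta) with an exploration radius that shrinks over iterations. The sphere objective and parameters below are illustrative, not the paper's configuration:

```python
import numpy as np

def gwo(objective, dim=2, n_wolves=20, iters=100, seed=0):
    """Minimal grey wolf optimizer: each wolf averages moves toward the
    three fittest wolves, with exploration coefficient `a` shrinking 2 -> 0."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(-5, 5, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(objective, 1, wolves)
        leaders = wolves[np.argsort(fitness)[:3]]   # alpha, beta, delta
        a = 2 - 2 * t / iters
        new = np.zeros_like(wolves)
        for leader in leaders:
            r1, r2 = rng.random((2, n_wolves, dim))
            A, C = 2 * a * r1 - a, 2 * r2
            new += leader - A * np.abs(C * leader - wolves)
        wolves = new / 3.0
    fitness = np.apply_along_axis(objective, 1, wolves)
    return wolves[fitness.argmin()], float(fitness.min())

best, val = gwo(lambda x: np.sum(x ** 2))   # sphere function, minimum at 0
print(best, val)
```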


Subject(s)
Algorithms , Neural Networks, Computer , Skin Neoplasms , Humans , Skin Neoplasms/diagnosis , Machine Learning
10.
Sci Rep ; 14(1): 8738, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38627421

ABSTRACT

Glioblastoma is a brain tumor that arises when abnormal cells grow in the brain, including in children. It is detected using MRI ("Magnetic Resonance Imaging"), which uses a powerful magnetic field, radio waves, and a computer to produce detailed images of the body's internal structures; MRI is a standard diagnostic tool for a wide range of medical conditions, from detecting brain and spinal cord injuries to identifying tumors and evaluating joint problems. Glioblastoma is treatable, but if it remains untreated the child may die, so timely diagnosis from MRI scans is essential, and neural networks can help resolve such brain-related difficulties. This research deals with the techniques of maximum and minimum rationalizing of images, together with a boosted division time attribute extraction method, for diagnosing glioblastoma. Maximum and minimum rationalization is used to recognize glioblastoma in brain images for treatment efficiency, image segments are created for image recognition, and the boosted division time attribute extraction method is used for image recognition and feature extraction from MRI. The proposed boosted division time attribute extraction method helps to recognize the fetal images and detect glioblastoma with feasible accuracy using image rationalization. In addition, 45% of adults and 40% of children are affected by the tumor, and 5% of cases end in death; to reduce this ratio, this study identifies and segments glioblastoma to recognize the fetal images and support diagnosis. The tumor grades were then analyzed using the efficient MRI-based method, with a diagnosis result of partially high. The proposed TAE-PIS system achieves an accuracy of 98.12% with low response time, higher than comparison methods including the genetic algorithm (GA), convolutional neural network (CNN), fuzzy-based minimum and maximum neural network (fuzzy min-max NN), and kernel-based support vector machine (SVM); specifically, it achieves substantial improvements of 80.82%, 82.13%, 85.61%, and 87.03% over GA, CNN, fuzzy min-max NN, and kernel-based SVM, respectively.


Subject(s)
Brain Neoplasms , Glioblastoma , Adult , Child , Humans , Glioblastoma/diagnostic imaging , Image Processing, Computer-Assisted/methods , Brain Neoplasms/pathology , Brain/diagnostic imaging , Brain/pathology , Algorithms
11.
Sci Rep ; 14(1): 7232, 2024 03 27.
Article in English | MEDLINE | ID: mdl-38538708

ABSTRACT

Artificial intelligence-powered deep learning methods are being used to diagnose brain tumors with high accuracy, owing to their ability to process large amounts of data. Magnetic resonance imaging stands as the gold standard for brain tumor diagnosis using machine vision, surpassing computed tomography, ultrasound, and X-ray imaging in its effectiveness. Despite this, brain tumor diagnosis remains a challenging endeavour due to the intricate structure of the brain. This study delves into the potential of deep transfer learning architectures to elevate the accuracy of brain tumor diagnosis. Transfer learning is a machine learning technique that allows us to repurpose pre-trained models on new tasks. This can be particularly useful for medical imaging tasks, where labelled data is often scarce. Four distinct transfer learning architectures were assessed in this study: ResNet152, VGG19, DenseNet169, and MobileNetv3. The models were trained and validated on a dataset from benchmark database: Kaggle. Five-fold cross validation was adopted for training and testing. To enhance the balance of the dataset and improve the performance of the models, image enhancement techniques were applied to the data for the four categories: pituitary, normal, meningioma, and glioma. MobileNetv3 achieved the highest accuracy of 99.75%, significantly outperforming other existing methods. This demonstrates the potential of deep transfer learning architectures to revolutionize the field of brain tumor diagnosis.
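The five-fold cross-validation protocol used here can be reproduced with scikit-learn; logistic regression and the digits dataset below are stand-ins for the pretrained networks and the Kaggle MRI data, so only the evaluation procedure matches the paper:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = load_digits(return_X_y=True)

# Stratified five-fold CV: each fold preserves the class distribution,
# and every sample is used for testing exactly once
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=2000), X, y, cv=cv)
print([round(s, 3) for s in scores], round(scores.mean(), 3))
```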


Subject(s)
Brain Neoplasms , Deep Learning , Meningeal Neoplasms , Humans , Artificial Intelligence , Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Machine Learning
12.
Sci Rep ; 14(1): 4814, 2024 02 27.
Article in English | MEDLINE | ID: mdl-38413679

ABSTRACT

Our environment has been significantly impacted by climate change. According to previous research, insect catastrophes induced by global climate change have killed many trees, inevitably contributing to forest fires, and the condition of the forest is an essential indicator of forest-fire risk. Analysis of aerial images of a forest can detect dead and living trees at an early stage, and automated forest health diagnostics are crucial for monitoring and preserving forest ecosystem health. This paper presents a novel method for assessing forest health from aerial images that combines Modified Generative Adversarial Networks (MGANs) and YOLOv5 (You Only Look Once version 5), and employs the Tabu Search Algorithm (TSA) to enhance the identification and categorization of unhealthy forest areas. The proposed model provides synthetic data to supplement the limited labeled dataset, addressing the frequent issue of data scarcity in forest health diagnosis; this improves the model's ability to generalize to previously unobserved data, increasing the overall precision and robustness of the evaluation. In addition, YOLOv5 integration enables real-time object identification, allowing the model to recognize and pinpoint numerous tree species and potential health issues with exceptional speed and accuracy, while its efficient architecture permits deployment on resource-limited devices for on-site forest monitoring. The TSA effectively explores the search space, ensuring the model converges to a near-optimal solution, improving disease-detection precision and decreasing false positives. We evaluated our MGAN-YOLOv5 method using a large dataset of aerial images of diverse forest habitats. The experimental results demonstrated impressive performance in automatic forest health diagnosis, achieving a detection precision of 98.66%, recall of 99.99%, F1-score of 97.77%, accuracy of 99.99%, a response time of 3.543 ms, and a computational time of 5.987 ms. Notably, our method outperforms all compared target-detection methods, showing a minimum improvement of 2% in mAP.


Subject(s)
Ecosystem , Forests , Trees , Climate Change , Algorithms
13.
Sci Rep ; 14(1): 3656, 2024 02 13.
Article in English | MEDLINE | ID: mdl-38351141

ABSTRACT

Lung cancer is thought to be a genetic disease with a variety of unknown origins. The GLOBOCAN 2020 report states that 19.3 million new cancer cases were identified in 2020 and that nearly 10.0 million people died of cancer; GLOBOCAN projects that cancer cases will rise to 28.4 million in 2040. This burden exceeds the combined rates of other generally prevalent malignancies, such as breast, colorectal, and prostate cancers. For attribute selection in previous work, the information gain model was applied, and multilayer perceptron, random subspace, and sequential minimal optimization (SMO) were then used for lung cancer prediction. However, the total number of parameters in a multilayer perceptron can become extremely large, which is inefficient because of the duplication in such high dimensions, and SMO can become ineffective due to its calculation method and its reliance on a single threshold value for prediction. To avoid these difficulties, our research presents a novel technique combining Z-score normalization, Levy-flight cuckoo search optimization, and a weighted convolutional neural network for predicting lung cancer. The findings show that the proposed technique is effective in terms of precision, recall, and accuracy on the Kent Ridge Bio-Medical Dataset Repository.
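Z-score normalization, the first step of the proposed pipeline, is simple to sketch; the synthetic feature matrix below is illustrative, not the paper's gene-expression data:

```python
import numpy as np

def zscore(X):
    """Standardize each feature (column) to zero mean and unit variance."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)   # leave constant features as-is
    return (X - mu) / sigma

rng = np.random.default_rng(1)
X = rng.normal(loc=50, scale=12, size=(100, 4))   # synthetic feature matrix
Z = zscore(X)
print(np.allclose(Z.mean(axis=0), 0), np.allclose(Z.std(axis=0), 1))
```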


Subject(s)
Lung Neoplasms , Prostatic Neoplasms , Humans , Male , Lung , Lung Neoplasms/diagnosis , Lung Neoplasms/genetics , Neural Networks, Computer , Thorax , Female
14.
BMC Med Imaging ; 24(1): 38, 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38331800

ABSTRACT

Deep learning recently achieved advancement in the segmentation of medical images. In this regard, U-Net is the most predominant deep neural network, and its architecture is the most prevalent in the medical imaging society. Experiments conducted on difficult datasets directed us to the conclusion that the traditional U-Net framework appears to be deficient in certain respects, despite its overall excellence in segmenting multimodal medical images. Therefore, we propose several modifications to the existing cutting-edge U-Net model. The technical approach involves applying a Multi-Dimensional U-Convolutional Neural Network to achieve accurate segmentation of multimodal biomedical images, enhancing precision and comprehensiveness in identifying and analyzing structures across diverse imaging modalities. As a result of the enhancements, we propose a novel framework called Multi-Dimensional U-Convolutional Neural Network (MDU-CNN) as a potential successor to the U-Net framework. On a large set of multimodal medical images, we compared our proposed framework, MDU-CNN, to the classical U-Net. There have been small changes in the case of perfect images, and a huge improvement is obtained in the case of difficult images. We tested our model on five distinct datasets, each of which presented unique challenges, and found that it has obtained a better performance of 1.32%, 5.19%, 4.50%, 10.23% and 0.87%, respectively.


Subject(s)
Neural Networks, Computer , Societies, Medical , Humans , Image Processing, Computer-Assisted
15.
BMC Med Imaging ; 24(1): 21, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38243215

ABSTRACT

The current approach to diagnosing and classifying brain tumors relies on the histological evaluation of biopsy samples, which is invasive, time-consuming, and susceptible to manual errors. These limitations underscore the pressing need for a fully automated, deep-learning-based multi-classification system for brain malignancies. This article aims to leverage a deep convolutional neural network (CNN) to enhance early detection and presents three distinct CNN models designed for different types of classification tasks. The first CNN model achieves an impressive detection accuracy of 99.53% for brain tumors. The second CNN model, with an accuracy of 93.81%, proficiently categorizes brain tumors into five distinct types: normal, glioma, meningioma, pituitary, and metastatic. Furthermore, the third CNN model demonstrates an accuracy of 98.56% in accurately classifying brain tumors into their different grades. To ensure optimal performance, a grid search optimization approach is employed to automatically fine-tune all the relevant hyperparameters of the CNN models. The utilization of large, publicly accessible clinical datasets results in robust and reliable classification outcomes. This article conducts a comprehensive comparison of the proposed models against classical models, such as AlexNet, DenseNet121, ResNet-101, VGG-19, and GoogleNet, reaffirming the superiority of the deep CNN-based approach in advancing the field of brain tumor classification and early detection.
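Grid search optimization of hyperparameters, as used here, can be sketched with scikit-learn's GridSearchCV; the SVM and small two-parameter grid below stand in for the paper's CNN hyperparameter space:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Exhaustive search over a small grid; the paper tunes CNN hyperparameters
# the same way, just over a larger space
grid = GridSearchCV(
    SVC(),
    {"C": [1, 10], "gamma": ["scale", 0.001]},
    cv=3)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```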


Subject(s)
Brain Neoplasms , Glioma , Meningeal Neoplasms , Humans , Brain , Brain Neoplasms/diagnostic imaging , Neural Networks, Computer
16.
BMC Bioinformatics ; 24(1): 458, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38053030

ABSTRACT

Intense sun exposure is a major risk factor for the development of melanoma, an abnormal proliferation of skin cells. Yet this type of skin cancer can also develop in less-exposed areas, such as those that are shaded. Melanoma is the sixth most common type of skin cancer. In recent years, computer-based methods for imaging and analyzing biological systems have made considerable strides. This work investigates the use of advanced machine learning methods, specifically ensemble models with Auto Correlogram methods, a Binary Pyramid Pattern filter, and a Color Layout filter, to enhance the detection accuracy of melanoma skin cancer. The results suggest that the Color Layout filter model with the Attribute Selection classifier provides the best overall performance: 90.96% accuracy, 0.91 precision, 0.91 recall, 0.95 ROC, 0.87 PRC, 0.87 Kappa, 0.91 F-measure, and 0.82 Matthews correlation coefficient. In addition, its margins of error are the smallest. The research found that the Attribute Selection classifier performed well when used in conjunction with the Color Layout filter to improve image quality.


Subject(s)
Melanoma , Skin Neoplasms , Humans , Algorithms , Skin Neoplasms/diagnostic imaging , Melanoma/diagnostic imaging , Machine Learning , Melanoma, Cutaneous Malignant
17.
Sci Rep ; 13(1): 23029, 2023 12 27.
Article in English | MEDLINE | ID: mdl-38155247

ABSTRACT

Accurately classifying brain tumor types is critical for timely diagnosis and potentially saving lives. Magnetic Resonance Imaging (MRI) is a widely used non-invasive method for obtaining high-contrast grayscale brain images, primarily for tumor diagnosis. The application of Convolutional Neural Networks (CNNs) in deep learning has revolutionized diagnostic systems, leading to significant advancements in medical imaging interpretation. In this study, we employ a transfer learning-based fine-tuning approach using EfficientNets to classify brain tumors into three categories: glioma, meningioma, and pituitary tumors. We utilize the publicly accessible CE-MRI Figshare dataset to fine-tune five pre-trained models from the EfficientNets family, ranging from EfficientNetB0 to EfficientNetB4. Our approach involves a two-step process to refine the pre-trained EfficientNet model. First, we initialize the model with weights from the ImageNet dataset. Then, we add additional layers, including top layers and a fully connected layer, to enable tumor classification. We conduct various tests to assess the robustness of our fine-tuned EfficientNets in comparison to other pre-trained models. Additionally, we analyze the impact of data augmentation on the model's test accuracy. To gain insights into the model's decision-making, we employ Grad-CAM visualization to examine the attention maps generated by the most optimal model, effectively highlighting tumor locations within brain images. Our results reveal that using EfficientNetB2 as the underlying framework yields significant performance improvements. Specifically, the overall test accuracy, precision, recall, and F1-score were found to be 99.06%, 98.73%, 99.13%, and 98.79%, respectively.


Subject(s)
Brain Neoplasms , Deep Learning , Glioma , Meningeal Neoplasms , Humans , Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Glioma/diagnostic imaging
18.
PLoS One ; 18(10): e0291631, 2023.
Article in English | MEDLINE | ID: mdl-37792777

ABSTRACT

Medical data processing and analytics exert significant influence in furnishing dependable decision support for prospective biomedical applications. Given the sensitive nature of medical data, specialized techniques and frameworks tailored for application-centric processing are imperative. This article presents a conceptualization for the analysis and uniformization of datasets through the implementation of Federated Learning (FL). The realm of medical big data stems from diverse origins, necessitating the delineation of data provenance and attribute paradigms to facilitate feature extraction and dependency assessment. The architecture governing the data collection framework is intricately linked to remote data transmission, thereby engendering efficient customization oversight. The operational methodology unfolds across four strata: the data origin layer, data acquisition layer, data classification layer, and data optimization layer. Central to this endeavor are multi-objective optimal datasets (MooM), characterized by attribute-driven feature cartography and cluster categorization through the conduit of federated learning models. The orchestration of feature synchronization and parameter extraction transpires across multiple tiers of neural networking, culminating in the provisioning of a steadfast remedy through dataset standardization and labeling. The empirical findings reflect the efficacy of the proposed technique, boasting an impressive 97.34% accuracy rate in the disentanglement and clustering of telemedicine data, facilitated by the operational servers within the ambit of the federated model.
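The parameter-aggregation step at the heart of federated learning is commonly federated averaging (FedAvg); a minimal sketch with two hypothetical clients follows, where the client sizes and weight arrays are invented for illustration:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate per-client model parameters,
    weighted by the number of local samples each client holds."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two hypothetical clients, each holding one weight matrix and one bias vector
w_a = [np.ones((2, 2)), np.zeros(2)]
w_b = [3 * np.ones((2, 2)), np.ones(2)]
avg = fedavg([w_a, w_b], client_sizes=[100, 300])
print(avg[0])   # 0.25 * 1 + 0.75 * 3 = 2.5 everywhere
```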


Subject(s)
Big Data , Learning , Prospective Studies , Concept Formation , Data Analysis
19.
BMC Med Imaging ; 23(1): 146, 2023 10 02.
Article in English | MEDLINE | ID: mdl-37784025

ABSTRACT

COVID-19, the global pandemic of the twenty-first century, has caused major challenges and setbacks for researchers and medical infrastructure worldwide. COVID-19 affects the patient's respiratory system, causing flooding of the airways in the lungs. Multiple techniques have been proposed since the outbreak, each dependent on features and larger training datasets, and it is a challenging scenario to consolidate larger datasets for accurate and reliable decision support. This research article proposes a chest X-ray image classification approach based on feature thresholding for categorizing COVID-19 samples. The proposed approach uses the threshold value-based Feature Extraction (TVFx) technique and has been validated on a 661-sample COVID-19 X-ray dataset to provide decision support for medical experts. The model has three layers of training datasets to attain a sequential pattern based on various learning features. The aligned feature set of the proposed technique successfully categorized active COVID-19 samples into mild, serious, and extreme categories as per medical standards. The proposed technique achieved an accuracy of 97.42% in categorizing and classifying the given sample sets.
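Threshold value-based feature extraction can be illustrated by summarizing an image as the fraction of pixels falling into intensity bands; the thresholds and random image below are assumptions for illustration, not the paper's TVFx values:

```python
import numpy as np

def threshold_features(img, thresholds=(0.3, 0.6)):
    """Summarize an intensity image by the fraction of pixels in each
    threshold band - a simple stand-in for threshold-based extraction."""
    img = img.astype(float) / img.max()   # normalize intensities to [0, 1]
    lo, hi = thresholds
    return np.array([(img < lo).mean(),
                     ((img >= lo) & (img < hi)).mean(),
                     (img >= hi).mean()])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))      # random stand-in for an X-ray
feats = threshold_features(img)
print(feats, feats.sum())                 # band fractions sum to 1
```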


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , X-Rays , Neural Networks, Computer , Pandemics , Thorax
20.
Sci Rep ; 13(1): 17574, 2023 10 16.
Article in English | MEDLINE | ID: mdl-37845403

ABSTRACT

The electroencephalogram (EEG) has emerged over the past few decades as one of the key tools used by clinicians to detect seizures and other neurological abnormalities of the human brain. The proper diagnosis of epilepsy is crucial due to its distinctive nature and the subsequent negative effects of epileptic seizures on patients. The classification of minimally pre-processed, raw multichannel EEG signal recordings is the foundation of this article's method for identifying seizures in pre-adult patients. The method makes use of the automatic feature learning capabilities of a three-dimensional deep convolution auto-encoder (3D-DCAE) paired with a neural network-based classifier to build an integrated framework that undergoes supervised training to attain the highest level of classification precision between ictal and interictal brain-state signals. A pair of models was created and evaluated to test and assess our method, utilizing three distinct EEG data-segment lengths and a tenfold cross-validation procedure. Based on five evaluation criteria, the labelled hybrid convolutional auto-encoder (LHCAE) model, which utilizes a classifier based on bidirectional long short-term memory (Bi-LSTM) and an EEG segment length of 4 s, had the best efficiency. On the publicly available Children's Hospital Boston (CHB) dataset, this proposed model achieves 99.08 ± 0.54% accuracy, 99.21 ± 0.50% sensitivity, 99.11 ± 0.57% specificity, 99.09 ± 0.55% precision, and an F1-score of 99.16 ± 0.58%. Based on these outcomes, the proposed seizure classification model outperforms other state-of-the-art methods on the same dataset.
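The segmentation of raw multichannel EEG into fixed-length windows (here the 4 s length the abstract reports) can be sketched with a reshape; the channel count and sampling rate below are illustrative assumptions, loosely matching typical CHB recordings:

```python
import numpy as np

def segment_eeg(signal, fs, seconds):
    """Split a (channels, samples) recording into fixed-length windows,
    dropping any trailing partial window."""
    win = int(fs * seconds)
    n_win = signal.shape[1] // win
    windows = signal[:, :n_win * win].reshape(signal.shape[0], n_win, win)
    return windows.transpose(1, 0, 2)   # (windows, channels, samples)

# Hypothetical 23-channel recording at 256 Hz, 10 s long, cut into 4 s windows
rng = np.random.default_rng(0)
eeg = rng.standard_normal((23, 256 * 10))
windows = segment_eeg(eeg, fs=256, seconds=4)
print(windows.shape)   # (2, 23, 1024)
```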


Subject(s)
Deep Learning , Epilepsy , Child , Humans , Epilepsy/diagnosis , Seizures/diagnosis , Neural Networks, Computer , Brain/diagnostic imaging , Electroencephalography/methods , Signal Processing, Computer-Assisted , Algorithms