1.
Sci Rep ; 14(1): 10812, 2024 05 11.
Article in English | MEDLINE | ID: mdl-38734714

ABSTRACT

Cervical cancer, the second most prevalent cancer affecting women, arises from abnormal cell growth in the cervix, the lower part of the uterus. Early detection is critical, prompting the use of screening methods such as Pap smears, colposcopy, and Human Papillomavirus (HPV) testing to identify potential risks and initiate timely intervention. These procedures, including visual inspection, Pap smears, colposcopy, biopsy, and HPV-DNA testing, demand the specialized knowledge and skills of experienced physicians and pathologists because cancer diagnosis is inherently subjective. In response to the need for efficient and intelligent screening, this article introduces a methodology that leverages pre-trained deep neural network models, including AlexNet, ResNet-101, ResNet-152, and InceptionV3, for feature extraction. These models are fine-tuned and combined with diverse machine learning algorithms, with ResNet-152 showing exceptional performance at an accuracy of 98.08%. The SIPaKMeD dataset, publicly accessible and used in this study, supports the transparency and reproducibility of the findings. The proposed hybrid methodology combines aspects of deep learning (DL) and machine learning (ML) for cervical cancer classification: DL extracts intricate and complicated image features, and various ML algorithms are then applied to those extracted features. This approach holds promise for significantly improving cervical cancer detection and underscores the transformative potential of intelligent automation in medical diagnostics, paving the way for more accurate and timely interventions.
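The hybrid DL-plus-ML pipeline described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the random vectors stand in for deep features that would in practice come from a fine-tuned ResNet-152's penultimate layer, and the SVM is one of several classical classifiers that might be attached.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical stand-in for deep features: in the paper these would come from
# a pre-trained CNN's penultimate layer (e.g. 2048-dim for ResNet-152).
rng = np.random.default_rng(0)
n_per_class, n_features = 100, 2048
features = np.vstack([rng.normal(loc=c, size=(n_per_class, n_features))
                      for c in range(5)])          # 5 cell classes, as in SIPaKMeD
labels = np.repeat(np.arange(5), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2,
                                          random_state=0, stratify=labels)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)            # classical ML on deep features
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"hold-out accuracy: {acc:.3f}")
```

The division of labor matches the abstract: the network only extracts features, and the downstream classifier is swappable.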


Subject(s)
Deep Learning , Early Detection of Cancer , Uterine Cervical Neoplasms , Humans , Uterine Cervical Neoplasms/diagnosis , Uterine Cervical Neoplasms/pathology , Female , Early Detection of Cancer/methods , Neural Networks, Computer , Algorithms , Papanicolaou Test/methods , Colposcopy/methods
2.
Heliyon ; 10(9): e30241, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38720763

ABSTRACT

Parkinson's disease (PD) is an age-related neurodegenerative disorder characterized by motor deficits, including tremor, rigidity, bradykinesia, and postural instability. According to the World Health Organization, about 1% of the global population has been diagnosed with PD, and this figure is expected to double by 2040. Early and accurate diagnosis of PD is critical to slowing the progression of the disease and reducing long-term disability. Because of the complexity of the disease, it is difficult to diagnose accurately with traditional clinical tests, so intelligent diagnostic models that can detect PD reliably have become necessary. This article introduces a novel hybrid approach for accurate prediction of PD using an adaptive neuro-fuzzy inference system (ANFIS) with two optimizers, Adam and particle swarm optimization (PSO). ANFIS is a fuzzy logic system used for nonlinear function approximation and classification; the Adam optimizer adaptively adjusts the learning rate of each individual ANFIS parameter at each training step, helping the model find a better solution more quickly; and PSO is a metaheuristic inspired by the behavior of social animals such as birds. Combining these two methods has the potential to improve accuracy and robustness in PD diagnosis compared with existing methods. The proposed method exploits the advantages of both optimization techniques, applying them to the developed ANFIS model to maximize its prediction accuracy. The system was developed using open-access clinical and demographic data. The ANFIS parameters were selected through a comparative experimental analysis considering the number of fuzzy membership functions, the number of ANFIS training epochs, and the number of PSO particles. The performance of the two models, ANFIS (Adam) and ANFIS (PSO), was analyzed in detail across the ANFIS parameters and various evaluation metrics. The experimental results showed that ANFIS (PSO) performs better in terms of loss and precision, whereas ANFIS (Adam) performs better in terms of accuracy, F1-score, and recall. This adaptive neuro-fuzzy algorithm therefore provides a promising strategy for the diagnosis of PD, and the proposed models appear suitable for many other practical applications.
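As a rough illustration of the PSO half of the hybrid, the sketch below implements a minimal global-best particle swarm optimizer in plain NumPy and applies it to a stand-in objective. In the paper the objective would be the ANFIS training loss; the constants here (inertia, acceleration weights, bounds) are conventional illustrative values, not the authors' settings.

```python
import numpy as np

def pso(loss, dim, n_particles=30, iters=100, seed=1):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia and acceleration weights
    x = rng.uniform(-5, 5, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)
    pbest = x.copy()                            # each particle's best-so-far
    pbest_val = np.apply_along_axis(loss, 1, x)
    g = pbest[pbest_val.argmin()].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(loss, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Stand-in objective: in the paper this would be the ANFIS training loss.
best, best_val = pso(lambda p: np.sum(p ** 2), dim=4)
print(best_val)
```

Swapping the lambda for a function that trains an ANFIS on a candidate parameter vector and returns its loss gives the paper's ANFIS (PSO) configuration in outline.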

3.
Sci Rep ; 14(1): 9388, 2024 04 24.
Article in English | MEDLINE | ID: mdl-38654051

ABSTRACT

Skin cancer is caused by mutational changes in the epidermis and presents as abnormal patches on the skin. Many studies focus on the design and development of effective approaches for diagnosing and categorizing skin cancer, with decisions made on independent training datasets under limited conditions and scenarios. In this research, Kaggle-based datasets are optimized and categorized into a labeled data array for indexing using federated learning (FL). The technique builds on the grey wolf optimization (GWO) algorithm to ensure that dataset attribute dependencies are extracted and dimensional mapping is processed. The threshold values of the dimensionally mapped datasets are optimized and trained within a neural network framework, which is further extended via federated learning standards. The technique demonstrated 95.82% accuracy with GWO and 94.9% accuracy with the combination of a trained neural network (TNN) framework and recessive learning (RL).
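The grey wolf optimizer named above can be sketched minimally as follows; the sphere objective is a stand-in for whatever fitness the paper derives from attribute dependencies, and the population size and decay schedule are conventional defaults, not the authors' settings.

```python
import numpy as np

def gwo(fitness, dim, n_wolves=20, iters=100, seed=0):
    """Minimal grey wolf optimizer: the pack encircles the three best
    wolves (alpha, beta, delta) with a decaying exploration factor."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(-10, 10, (n_wolves, dim))
    for t in range(iters):
        scores = np.apply_along_axis(fitness, 1, wolves)
        alpha, beta, delta = wolves[np.argsort(scores)[:3]]
        a = 2 - 2 * t / iters                   # decays linearly from 2 to 0
        new = np.zeros_like(wolves)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random((2, n_wolves, dim))
            A, C = 2 * a * r1 - a, 2 * r2
            new += leader - A * np.abs(C * leader - wolves)
        wolves = new / 3.0                      # average of the three moves
    scores = np.apply_along_axis(fitness, 1, wolves)
    return wolves[scores.argmin()], scores.min()

best, best_val = gwo(lambda x: np.sum(x ** 2), dim=5)
```

In the paper's setting, each wolf would encode a candidate attribute/threshold configuration rather than a point in a toy search space.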


Subject(s)
Algorithms , Neural Networks, Computer , Skin Neoplasms , Humans , Skin Neoplasms/diagnosis , Machine Learning
4.
Sci Rep ; 14(1): 8738, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38627421

ABSTRACT

Glioblastoma is a brain tumor caused by abnormal cell growth in the brain and is detected using magnetic resonance imaging (MRI), which uses a powerful magnetic field, radio waves, and a computer to produce detailed images of the body's internal structures. MRI is a standard diagnostic tool for a wide range of medical conditions, from detecting brain and spinal cord injuries to identifying tumors and evaluating joint problems. Glioblastoma is treatable, but if it is left untreated it can be fatal, so timely diagnosis from MRI scans is essential, and neural networks can help resolve such brain-related diagnostic problems. This research applies the techniques of maximum and minimum rationalization of images together with a boosted division time attribute extraction method to diagnose glioblastoma. Max-min rationalization is used to recognize glioblastoma in brain images for treatment efficiency, and image segmentation is performed to support recognition. The boosted division time attribute extraction method is then used with MRI for image recognition and feature extraction, helping to recognize the images and identify glioblastoma with feasible accuracy through image rationalization. Reportedly, 45% of those affected by the tumor are adults, 40% are children, and 5% of cases end in death; to reduce this ratio, this study identifies and segments glioblastoma in brain images. The tumor grades were then analyzed using the proposed MRI-based method. The accuracy of the proposed TAE-PIS system is 98.12%, which is higher than that of comparison methods including a genetic algorithm (GA), a convolutional neural network (CNN), a fuzzy-based minimum and maximum neural network (fuzzy min-max NN), and a kernel-based support vector machine, and it achieves this with low response time. Specifically, the proposed method achieves improvements of 80.82%, 82.13%, 85.61%, and 87.03% over GA, CNN, fuzzy min-max NN, and the kernel-based support vector machine, respectively.
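The "max and min rationalizing" step is not specified in detail; one plausible reading is intensity min-max normalization of each MRI slice, sketched below. The function name and parameters are illustrative assumptions, not the paper's terminology.

```python
import numpy as np

def minmax_rationalize(img, lo=0.0, hi=1.0):
    """Rescale image intensities to [lo, hi]; one plausible reading of
    the paper's max/min 'rationalization' pre-processing step."""
    mn, mx = img.min(), img.max()
    if mx == mn:                       # constant image: nothing to stretch
        return np.full_like(img, lo, dtype=float)
    return lo + (img.astype(float) - mn) * (hi - lo) / (mx - mn)

# Synthetic 8-bit "MRI slice" standing in for real data.
slice_ = np.random.default_rng(3).integers(20, 230, (64, 64))
norm = minmax_rationalize(slice_)
```

Normalizing intensities this way puts slices from different scanners on a common scale before segmentation and attribute extraction.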


Subject(s)
Brain Neoplasms , Glioblastoma , Adult , Child , Humans , Glioblastoma/diagnostic imaging , Image Processing, Computer-Assisted/methods , Brain Neoplasms/pathology , Brain/diagnostic imaging , Brain/pathology , Algorithms
6.
Sci Rep ; 14(1): 7232, 2024 03 27.
Article in English | MEDLINE | ID: mdl-38538708

ABSTRACT

Artificial intelligence-powered deep learning methods are being used to diagnose brain tumors with high accuracy, owing to their ability to process large amounts of data. Magnetic resonance imaging stands as the gold standard for brain tumor diagnosis using machine vision, surpassing computed tomography, ultrasound, and X-ray imaging in its effectiveness. Despite this, brain tumor diagnosis remains a challenging endeavour due to the intricate structure of the brain. This study delves into the potential of deep transfer learning architectures to elevate the accuracy of brain tumor diagnosis. Transfer learning is a machine learning technique that repurposes pre-trained models for new tasks, which is particularly useful for medical imaging, where labelled data is often scarce. Four distinct transfer learning architectures were assessed in this study: ResNet152, VGG19, DenseNet169, and MobileNetv3. The models were trained and validated on a benchmark dataset from Kaggle, with five-fold cross-validation adopted for training and testing. To improve the balance of the dataset and the performance of the models, image enhancement techniques were applied to the data across the four categories: pituitary, normal, meningioma, and glioma. MobileNetv3 achieved the highest accuracy of 99.75%, significantly outperforming other existing methods. This demonstrates the potential of deep transfer learning architectures to revolutionize the field of brain tumor diagnosis.
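The five-fold cross-validation protocol mentioned above can be sketched as follows. This is a stand-in under stated assumptions: synthetic four-class tabular data and a logistic regression replace the MRI images and the transfer-learning models (ResNet152, VGG19, DenseNet169, MobileNetv3), but the fold logic is the same.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Stand-in data: four classes, mirroring pituitary/normal/meningioma/glioma.
X, y = make_classification(n_samples=400, n_features=32, n_informative=16,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_acc = []
for train_idx, test_idx in skf.split(X, y):
    # Each fold trains a fresh model; with a CNN this would be a full fit.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    fold_acc.append(clf.score(X[test_idx], y[test_idx]))
print(f"5-fold mean accuracy: {np.mean(fold_acc):.3f}")
```

Stratified folds keep the class balance of each split close to the whole dataset's, which matters when tumor categories are unevenly represented.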


Subject(s)
Brain Neoplasms , Deep Learning , Meningeal Neoplasms , Humans , Artificial Intelligence , Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Machine Learning
7.
Sci Rep ; 14(1): 4814, 2024 02 27.
Article in English | MEDLINE | ID: mdl-38413679

ABSTRACT

Our environment has been significantly impacted by climate change. According to previous research, insect catastrophes induced by global climate change killed many trees, inevitably contributing to forest fires. The condition of the forest is an essential indicator of forest fire risk, and analysis of aerial images of a forest can detect dead and living trees at an early stage. Automated forest health diagnostics are crucial for monitoring and preserving forest ecosystem health. This paper presents a novel method for assessing forest health from aerial images that combines Modified Generative Adversarial Networks (MGANs) with YOLOv5 (You Only Look Once, version 5). We also employ the Tabu Search Algorithm (TSA) to enhance the identification and categorization of unhealthy forest areas. The proposed model provides synthetic data to supplement the limited labeled dataset, thereby resolving the frequent issue of data scarcity in forest health diagnosis tasks. This improves the model's ability to generalize to previously unobserved data, increasing the overall precision and robustness of the forest health evaluation. In addition, the YOLOv5 integration enables real-time object identification, allowing the model to recognize and pinpoint numerous tree species and potential health issues with exceptional speed and accuracy. YOLOv5's efficient architecture allows deployment on resource-limited devices, supporting on-site forest-monitoring applications. The TSA effectively explores the search space, ensuring the model converges to a near-optimal solution, improving disease detection precision and decreasing false positives. We evaluated our MGAN-YOLOv5 method using a large dataset of aerial images of diverse forest habitats. The experimental results demonstrated impressive performance in diagnosing forest health automatically, achieving a detection precision of 98.66%, recall of 99.99%, F1 score of 97.77%, accuracy of 99.99%, response time of 3.543 ms, and computational time of 5.987 ms. Significantly, our method outperforms all compared target-detection methods, with a minimum improvement of 2% in mAP.
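The Tabu Search Algorithm used to refine unhealthy-area identification follows the classic pattern sketched below: greedy local moves plus a short-term memory that forbids revisiting recent solutions. The integer objective here is a toy stand-in for the paper's detection-quality score, and all parameters are illustrative.

```python
def tabu_search(score, start, neighbours, n_iters=100, tabu_len=8):
    """Minimal tabu search: pick the best admissible neighbour each step,
    while a short tabu list blocks recently visited solutions."""
    current, best = start, start
    tabu = [start]
    for _ in range(n_iters):
        admissible = [n for n in neighbours(current) if n not in tabu]
        if not admissible:
            break                      # entire neighbourhood is tabu
        current = max(admissible, key=score)
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)                # expire the oldest tabu entry
        if score(current) > score(best):
            best = current
    return best

# Toy objective over integer "threshold" settings, peaking at 42.
best = tabu_search(lambda x: -(x - 42) ** 2,
                   start=0,
                   neighbours=lambda x: [x - 2, x - 1, x + 1, x + 2])
```

The tabu memory is what lets the search climb out of and away from local optima instead of cycling between them.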


Subject(s)
Ecosystem , Forests , Trees , Climate Change , Algorithms
8.
Sci Rep ; 14(1): 3656, 2024 02 13.
Article in English | MEDLINE | ID: mdl-38351141

ABSTRACT

Lung cancer is thought to be a genetic disease with a variety of unknown origins. According to the GLOBOCAN 2020 report, 19.3 million new cancer cases were identified in 2020 and nearly 10.0 million people died of cancer, and GLOBOCAN projects that cancer cases will rise to 28.4 million by 2040. This rate exceeds the combined rates of the other generally prevalent malignancies, such as breast, colorectal, and prostate cancers. In previous work, an information gain model was applied for attribute selection, and multilayer perceptron, random subspace, and sequential minimal optimization (SMO) were then used for lung cancer prediction. However, the total number of parameters in a multilayer perceptron can become extremely large, which is inefficient because of redundancy in such high dimensions, and SMO can become ineffective due to its calculation method and its reliance on a single threshold value for prediction. To avoid these difficulties, our research presents a novel technique comprising Z-score normalization, Lévy-flight cuckoo search optimization, and a weighted convolutional neural network for predicting lung cancer. The findings show that the proposed technique is effective in precision, recall, and accuracy on the Kent Ridge Bio-Medical Dataset Repository.
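Two of the named ingredients, Z-score normalization and the Lévy flight that drives cuckoo search, can be sketched directly. Mantegna's algorithm below is the standard way to draw Lévy-distributed steps; the feature vector and the 0.01 step scale are invented for illustration.

```python
import math
import numpy as np

def levy_step(size, beta=1.5, rng=None):
    """Draw Lévy-distributed step lengths via Mantegna's algorithm,
    as used by Lévy-flight cuckoo search."""
    rng = rng or np.random.default_rng(0)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)     # heavy-tailed numerator
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

# Z-score normalize a (hypothetical) feature vector, then perturb a candidate
# solution with a small Lévy-flight step, as a cuckoo-search move would.
features = np.array([12.0, 15.0, 9.0, 30.0, 24.0])
z = (features - features.mean()) / features.std()
candidate = z + 0.01 * levy_step(z.shape)
```

The occasional very large Lévy step is the point: it gives cuckoo search long exploratory jumps that Gaussian mutation lacks.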


Subject(s)
Lung Neoplasms , Prostatic Neoplasms , Humans , Male , Lung , Lung Neoplasms/diagnosis , Lung Neoplasms/genetics , Neural Networks, Computer , Thorax , Female
9.
BMC Med Imaging ; 24(1): 38, 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38331800

ABSTRACT

Deep learning has recently achieved advances in the segmentation of medical images. In this regard, U-Net is the predominant deep neural network, and its architecture is the most prevalent in the medical imaging community. Experiments conducted on difficult datasets led us to conclude that the traditional U-Net framework is deficient in certain respects, despite its overall excellence in segmenting multimodal medical images. Therefore, we propose several modifications to the existing state-of-the-art U-Net model. The technical approach applies a Multi-Dimensional U-Convolutional Neural Network to achieve accurate segmentation of multimodal biomedical images, enhancing precision and comprehensiveness in identifying and analyzing structures across diverse imaging modalities. As a result of these enhancements, we propose the Multi-Dimensional U-Convolutional Neural Network (MDU-CNN) as a potential successor to the U-Net framework. On a large set of multimodal medical images, we compared the proposed MDU-CNN to the classical U-Net: changes are small for easy images, and a large improvement is obtained on difficult ones. We tested our model on five distinct datasets, each presenting unique challenges, and found that it obtained performance improvements of 1.32%, 5.19%, 4.50%, 10.23%, and 0.87%, respectively.
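Segmentation comparisons of this kind are typically scored with an overlap metric such as the Dice coefficient; the abstract reports relative improvements without restating its metric, so the implementation below is a standard sketch, not necessarily the paper's exact score.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice overlap between two binary masks, the usual segmentation score:
    2|A∩B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Two 16-pixel squares offset by one pixel: 9 pixels overlap.
a = np.zeros((8, 8), dtype=int)
a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int)
b[3:7, 3:7] = 1
print(round(dice(a, b), 4))   # 2*9 / (16+16) = 0.5625
```

Percent differences between two models' Dice scores are one natural way to arrive at per-dataset improvement figures like those quoted above.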


Subject(s)
Neural Networks, Computer , Societies, Medical , Humans , Image Processing, Computer-Assisted
10.
BMC Med Imaging ; 24(1): 21, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38243215

ABSTRACT

The current approach to diagnosing and classifying brain tumors relies on the histological evaluation of biopsy samples, which is invasive, time-consuming, and susceptible to manual errors. These limitations underscore the pressing need for a fully automated, deep-learning-based multi-classification system for brain malignancies. This article aims to leverage a deep convolutional neural network (CNN) to enhance early detection and presents three distinct CNN models designed for different types of classification tasks. The first CNN model achieves an impressive detection accuracy of 99.53% for brain tumors. The second CNN model, with an accuracy of 93.81%, proficiently categorizes brain tumors into five distinct types: normal, glioma, meningioma, pituitary, and metastatic. Furthermore, the third CNN model demonstrates an accuracy of 98.56% in accurately classifying brain tumors into their different grades. To ensure optimal performance, a grid search optimization approach is employed to automatically fine-tune all the relevant hyperparameters of the CNN models. The utilization of large, publicly accessible clinical datasets results in robust and reliable classification outcomes. This article conducts a comprehensive comparison of the proposed models against classical models, such as AlexNet, DenseNet121, ResNet-101, VGG-19, and GoogleNet, reaffirming the superiority of the deep CNN-based approach in advancing the field of brain tumor classification and early detection.
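The grid search optimization used to tune the CNN hyperparameters can be sketched as an exhaustive sweep over a small grid. The grid values and the stand-in scoring function below are assumptions for illustration, not the paper's actual search space.

```python
from itertools import product

# Hypothetical hyperparameter grid of the kind tuned for the CNN models.
grid = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32],
    "dropout": [0.3, 0.5],
}

def validation_accuracy(cfg):
    """Stand-in for training a CNN with cfg and scoring it on a
    validation split; shaped so lr=1e-3, dropout=0.5 is best."""
    return 0.9 - abs(cfg["learning_rate"] - 1e-3) - 0.1 * abs(cfg["dropout"] - 0.5)

best_cfg, best_acc = None, -1.0
for values in product(*grid.values()):          # full Cartesian product
    cfg = dict(zip(grid.keys(), values))
    acc = validation_accuracy(cfg)
    if acc > best_acc:
        best_cfg, best_acc = cfg, acc

print(best_cfg)
```

Replacing `validation_accuracy` with an actual train-and-evaluate call turns this into the automatic fine-tuning loop the abstract describes; the cost is exponential in the number of hyperparameters, which is why the grids stay small.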


Subject(s)
Brain Neoplasms , Glioma , Meningeal Neoplasms , Humans , Brain , Brain Neoplasms/diagnostic imaging , Neural Networks, Computer
11.
BMC Bioinformatics ; 24(1): 458, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38053030

ABSTRACT

Intense sun exposure is a major risk factor for the development of melanoma, an abnormal proliferation of skin cells, yet this type of skin cancer can also develop in less-exposed, shaded areas. Melanoma is the sixth most common type of skin cancer. In recent years, computer-based methods for imaging and analyzing biological systems have made considerable strides. This work investigates the use of advanced machine learning methods, specifically ensemble models with the Auto Correlogram method, Binary Pyramid Pattern Filter, and Color Layout Filter, to enhance the detection accuracy of melanoma skin cancer. The results suggest that the Color Layout Filter model with the attribute selection classifier provides the best overall performance: 90.96% accuracy, 0.91 precision, 0.91 recall, 0.95 ROC, 0.87 PRC, 0.87 Kappa, 0.91 F-measure, and 0.82 Matthews Correlation Coefficient, with the smallest margins of error. The research found that the attribute selection classifier performed well when used in conjunction with the Color Layout Filter to improve image quality.


Subject(s)
Melanoma , Skin Neoplasms , Humans , Algorithms , Skin Neoplasms/diagnostic imaging , Melanoma/diagnostic imaging , Machine Learning , Melanoma, Cutaneous Malignant
12.
Sci Rep ; 13(1): 23029, 2023 12 27.
Article in English | MEDLINE | ID: mdl-38155247

ABSTRACT

Accurately classifying brain tumor types is critical for timely diagnosis and potentially saving lives. Magnetic Resonance Imaging (MRI) is a widely used non-invasive method for obtaining high-contrast grayscale brain images, primarily for tumor diagnosis. The application of Convolutional Neural Networks (CNNs) in deep learning has revolutionized diagnostic systems, leading to significant advancements in medical imaging interpretation. In this study, we employ a transfer learning-based fine-tuning approach using EfficientNets to classify brain tumors into three categories: glioma, meningioma, and pituitary tumors. We utilize the publicly accessible CE-MRI Figshare dataset to fine-tune five pre-trained models from the EfficientNets family, ranging from EfficientNetB0 to EfficientNetB4. Our approach involves a two-step process to refine the pre-trained EfficientNet model. First, we initialize the model with weights from the ImageNet dataset. Then, we add additional layers, including top layers and a fully connected layer, to enable tumor classification. We conduct various tests to assess the robustness of our fine-tuned EfficientNets in comparison to other pre-trained models. Additionally, we analyze the impact of data augmentation on the model's test accuracy. To gain insights into the model's decision-making, we employ Grad-CAM visualization to examine the attention maps generated by the most optimal model, effectively highlighting tumor locations within brain images. Our results reveal that using EfficientNetB2 as the underlying framework yields significant performance improvements. Specifically, the overall test accuracy, precision, recall, and F1-score were found to be 99.06%, 98.73%, 99.13%, and 98.79%, respectively.
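A minimal sketch of the kind of data augmentation whose impact the study analyzes: flips and right-angle rotations applied on the fly. The exact augmentation pool used with the EfficientNets is an assumption here, and the tiny integer image stands in for an MRI slice.

```python
import numpy as np

def augment(img, rng):
    """Apply one randomly chosen label-preserving transform:
    identity, horizontal flip, vertical flip, or a 90-degree rotation."""
    ops = [lambda x: x,
           np.fliplr,
           np.flipud,
           lambda x: np.rot90(x, k=rng.integers(1, 4))]
    return ops[rng.integers(len(ops))](img)

rng = np.random.default_rng(0)
img = np.arange(16).reshape(4, 4)     # stand-in for an MRI slice
batch = [augment(img, rng) for _ in range(8)]
```

Because each transform only rearranges pixels, the class label is unchanged, which is what lets augmentation enlarge the effective training set without new annotations.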


Subject(s)
Brain Neoplasms , Deep Learning , Glioma , Meningeal Neoplasms , Humans , Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Glioma/diagnostic imaging
13.
Sci Rep ; 13(1): 17574, 2023 10 16.
Article in English | MEDLINE | ID: mdl-37845403

ABSTRACT

The electroencephalogram (EEG) has emerged over the past few decades as one of the key tools used by clinicians to detect seizures and other neurological abnormalities of the human brain. Proper diagnosis of epilepsy is crucial due to its distinctive nature and the negative effects of epileptic seizures on patients. The classification of minimally pre-processed, raw multichannel EEG signal recordings is the foundation of this article's method for identifying seizures in pre-adult patients. The method uses the automatic feature learning capabilities of a three-dimensional deep convolution auto-encoder (3D-DCAE) paired with a neural network-based classifier to build an integrated framework that is trained in a supervised manner to attain the highest classification precision between brain-state signals, both ictal and interictal. A pair of models was created and evaluated, using three distinct EEG data segment lengths and a tenfold cross-validation procedure. Based on five evaluation criteria, the labelled hybrid convolutional auto-encoder (LHCAE) model, which uses a classifier based on bidirectional long short-term memory (Bi-LSTM) and an EEG segment length of 4 s, was the most efficient. On the publicly available Children's Hospital Boston (CHB) dataset, this model achieved 99.08 ± 0.54% accuracy, 99.21 ± 0.50% sensitivity, 99.11 ± 0.57% specificity, 99.09 ± 0.55% precision, and an F1-score of 99.16 ± 0.58%. Based on these outcomes, the proposed seizure classification model outperforms the other state-of-the-art methods' performance on the same dataset.
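The 4 s segmentation of raw multichannel EEG can be sketched as a simple reshape. The 256 Hz sampling rate matches the CHB recordings; the 23-channel, 60 s synthetic signal is a stand-in for real data.

```python
import numpy as np

fs = 256                     # CHB EEG sampling rate (Hz)
seg_len = 4 * fs             # 4-second windows, as in the best LHCAE model
recording = np.random.default_rng(1).normal(size=(23, fs * 60))  # 23 ch, 60 s

# Slice the multichannel recording into non-overlapping 4 s segments,
# one (channels x samples) block per training example.
n_seg = recording.shape[1] // seg_len
segments = (recording[:, :n_seg * seg_len]
            .reshape(23, n_seg, seg_len)
            .transpose(1, 0, 2))
print(segments.shape)        # (15, 23, 1024)
```

Each `(23, 1024)` block then feeds the 3D-DCAE, with the segment label (ictal or interictal) taken from the clinical annotations.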


Subject(s)
Deep Learning , Epilepsy , Child , Humans , Epilepsy/diagnosis , Seizures/diagnosis , Neural Networks, Computer , Brain/diagnostic imaging , Electroencephalography/methods , Signal Processing, Computer-Assisted , Algorithms
14.
BMC Med Imaging ; 23(1): 146, 2023 10 02.
Article in English | MEDLINE | ID: mdl-37784025

ABSTRACT

COVID-19, the global pandemic of the twenty-first century, has caused major challenges and setbacks for researchers and medical infrastructure worldwide. COVID-19 affects the patient's respiratory system, causing flooding of the airways in the lungs. Multiple techniques have been proposed since the outbreak, each dependent on particular features and large training datasets, and consolidating large datasets for accurate and reliable decision support is challenging. This research article proposes a chest X-ray image classification approach based on feature thresholding for categorizing COVID-19 samples. The proposed approach uses a threshold value-based feature extraction (TVFx) technique and has been validated on 661 COVID-19 X-ray images, providing decision support for medical experts. The model has three layers of training datasets to attain a sequential pattern based on various learning features. The aligned feature set of the proposed technique successfully categorized active COVID-19 samples into mild, serious, and extreme categories per medical standards, achieving an accuracy of 97.42% in categorizing and classifying the given sample sets.
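The threshold value-based feature extraction (TVFx) step is not fully specified in the abstract; one plausible reading, sketched below with an invented helper, computes per-threshold occupancy fractions over normalized pixel intensities.

```python
import numpy as np

def threshold_features(img, thresholds=(0.25, 0.5, 0.75)):
    """Per-threshold occupancy features: the fraction of pixels whose
    normalized intensity exceeds each threshold. A hypothetical reading
    of TVFx, not the authors' published formulation."""
    x = (img - img.min()) / (img.max() - img.min() + 1e-9)
    return np.array([(x > t).mean() for t in thresholds])

# Synthetic "chest X-ray" standing in for one of the 661 samples.
xray = np.random.default_rng(5).random((128, 128))
feats = threshold_features(xray)
```

Features of this kind are cheap, monotone in the thresholds, and could feed the mild/serious/extreme categorization stage.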


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , X-Rays , Neural Networks, Computer , Pandemics , Thorax
15.
BMC Bioinformatics ; 24(1): 382, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37817066

ABSTRACT

An abnormal growth or fatty mass of cells in the brain is called a tumor. Tumors can be benign (normal) or cancerous, depending on the structure of their cells, and can increase pressure within the cranium, potentially damaging the brain or causing death. Diagnostic procedures such as computed tomography, magnetic resonance imaging, and positron emission tomography, as well as blood and urine tests, are therefore used to identify brain tumors. However, these methods can be labor-intensive and sometimes yield inaccurate results. Deep learning models are employed instead because they are less time-consuming, require less expensive equipment, produce more accurate results, and are easy to set up. In this study, we propose a transfer learning method utilizing the pre-trained VGG-19 model, enhanced with a customized convolutional neural network framework and combined with pre-processing methods including normalization and data augmentation. Our proposed model used 80% of the dataset's images for training and 20% for testing, and achieved an accuracy of 99.43%, a sensitivity of 98.73%, and a specificity of 97.21%. The dataset, sourced from Kaggle for training purposes, consists of 407 images: 257 depicting brain tumors and 150 without tumors. Based on these outcomes, such models could be used to develop clinically useful solutions for identifying brain tumors in CT images.


Subject(s)
Brain Neoplasms , Neural Networks, Computer , Humans , Brain Neoplasms/diagnostic imaging , Tomography, X-Ray Computed , Magnetic Resonance Imaging , Brain
16.
PLoS One ; 18(10): e0291631, 2023.
Article in English | MEDLINE | ID: mdl-37792777

ABSTRACT

Medical data processing and analytics play a significant role in furnishing dependable decision support for prospective biomedical applications. Given the sensitive nature of medical data, specialized techniques and frameworks tailored for application-centric processing are imperative. This article presents an approach to analyzing and standardizing datasets through federated learning (FL). Medical big data stems from diverse origins, so data provenance and attribute paradigms must be delineated to facilitate feature extraction and dependency assessment. The architecture of the data collection framework is closely linked to remote data transmission, enabling efficient customization oversight. The operational methodology unfolds across four strata: the data origin layer, data acquisition layer, data classification layer, and data optimization layer. Central to this work are multi-objective optimal datasets (MooM), characterized by attribute-driven feature mapping and cluster categorization through federated learning models. Feature synchronization and parameter extraction take place across multiple tiers of neural networks, culminating in a robust solution based on dataset standardization and labeling. The empirical findings reflect the efficacy of the proposed technique, with an impressive 97.34% accuracy rate in disentangling and clustering telemedicine data across the operational servers of the federated model.
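Aggregation in a federated model of this kind is usually federated averaging, sketched below; the hospital clients, their weight vectors, and dataset sizes are invented for illustration, and the paper's own aggregation rule may differ.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: the server combines client model weights,
    weighting each update by its local dataset size. Raw patient data
    never leaves the clients; only parameters are transmitted."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical hospital clients with different amounts of local data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 100]
global_w = fed_avg(clients, sizes)
print(global_w)   # [3. 4.]
```

In a full round, the server would broadcast `global_w` back to the clients, each of which trains locally before the next aggregation.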


Subject(s)
Big Data , Learning , Prospective Studies , Concept Formation , Data Analysis
17.
Diagnostics (Basel) ; 13(18)2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37761292

ABSTRACT

Breast cancer is the second leading cause of mortality among women. Early and accurate detection plays a crucial role in lowering its mortality rate. Timely detection and classification of breast cancer enable the most effective treatment. Convolutional neural networks (CNNs) have significantly improved the accuracy of tumor detection and classification in medical imaging compared to traditional methods. This study proposes a comprehensive classification technique for identifying breast cancer, utilizing a synthesized CNN, an enhanced optimization algorithm, and transfer learning. The primary goal is to assist radiologists in rapidly identifying anomalies. To overcome inherent limitations, we modified the Ant Colony Optimization (ACO) technique with opposition-based learning (OBL). The Enhanced Ant Colony Optimization (EACO) methodology was then employed to determine the optimal hyperparameter values for the CNN architecture. Our proposed framework combines the Residual Network-101 (ResNet101) CNN architecture with the EACO algorithm, resulting in a new model dubbed EACO-ResNet101. Experimental analysis was conducted on the MIAS and DDSM (CBIS-DDSM) mammographic datasets. Compared to conventional methods, our proposed model achieved an impressive accuracy of 98.63%, sensitivity of 98.76%, and specificity of 98.89% on the CBIS-DDSM dataset. On the MIAS dataset, the proposed model achieved a classification accuracy of 99.15%, a sensitivity of 97.86%, and a specificity of 98.88%. These results demonstrate the superiority of the proposed EACO-ResNet101 over current methodologies.
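The opposition-based learning (OBL) modification applied to ACO above can be sketched on its own: every candidate is mirrored within the search bounds and only the better half of the doubled population is kept. The toy objective below stands in for the CNN validation score that the EACO actually optimizes.

```python
import numpy as np

def opposed(population, lo, hi):
    """Opposition-based learning: for each candidate x in [lo, hi],
    form its 'opposite' lo + hi - x."""
    return lo + hi - population

rng = np.random.default_rng(4)
lo, hi = 0.0, 1.0
pop = rng.random((6, 3))                      # 6 candidate hyperparameter vectors
both = np.vstack([pop, opposed(pop, lo, hi)])

# Toy objective with its optimum at 0.8; keep the best 6 of the 12 candidates.
fitness = -np.sum((both - 0.8) ** 2, axis=1)
keep = both[np.argsort(fitness)[-6:]]
```

Evaluating each point together with its mirror image costs one extra evaluation per candidate but tends to start the metaheuristic closer to the optimum, which is the rationale for combining OBL with ACO here.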

18.
Sci Rep ; 13(1): 13588, 2023 08 21.
Article in English | MEDLINE | ID: mdl-37604952

ABSTRACT

Heart disease is a significant global cause of mortality, and predicting it through clinical data analysis poses challenges. Machine learning (ML) has emerged as a valuable tool for diagnosing and predicting heart disease by analyzing healthcare data, and previous studies have extensively employed ML techniques in medical research for heart disease prediction. In this study, eight ML classifiers were utilized to identify crucial features that enhance the accuracy of heart disease prediction, and various combinations of features and well-known classification algorithms were employed to develop the prediction model. Naïve Bayes and Radial Basis Function network models were implemented, achieving accuracies of 94.78% and 90.78% respectively in heart disease prediction. Among the state-of-the-art methods for cardiovascular problem prediction, Learning Vector Quantization exhibited the highest accuracy rate of 98.7%. The motivation behind predicting cardiovascular heart disease (CHD) lies in its potential to save lives, improve health outcomes, and allocate healthcare resources efficiently. The key contributions encompass early intervention, personalized medicine, technological advancements, impact on public health, and ongoing research, all of which collectively work toward reducing the burden of CHD on both individual patients and society as a whole.
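The study's classifier-comparison protocol can be sketched with scikit-learn. The synthetic 13-feature data below stands in for the clinical records, and the three classifiers are illustrative: of the models named above, only Naïve Bayes has a direct scikit-learn equivalent (RBF networks and Learning Vector Quantization do not).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in tabular data: 13 features, echoing common heart-disease datasets.
X, y = make_classification(n_samples=500, n_features=13, n_informative=8,
                           random_state=0)

# Score each candidate classifier with the same 5-fold cross-validation.
for name, clf in [("NaiveBayes", GaussianNB()),
                  ("kNN", KNeighborsClassifier()),
                  ("DecisionTree", DecisionTreeClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")
```

Holding the folds and metric fixed across models is what makes the resulting accuracy table a fair comparison, as in the study's eight-classifier evaluation.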


Subject(s)
Cardiovascular Diseases , Cardiovascular System , Heart Diseases , Humans , Bayes Theorem , Heart , Heart Diseases/diagnosis , Cardiovascular Diseases/diagnosis
19.
Diagnostics (Basel) ; 13(8)2023 Apr 10.
Article in English | MEDLINE | ID: mdl-37189485

ABSTRACT

We developed a framework to detect and grade knee rheumatoid arthritis (RA) from digital X-radiation images and used it to demonstrate the ability of deep learning approaches to detect knee RA using a consensus-based decision (CBD) grading system. The study aimed to evaluate how efficiently a deep learning approach based on artificial intelligence (AI) can locate knee RA in digital X-radiation images and determine its severity. The study comprised people over 50 years of age with RA symptoms such as knee joint pain, stiffness, crepitus, and functional impairments. The digitized X-radiation images were obtained from the BioGPS database repository; we used 3172 images of the knee joint from an anterior-posterior perspective. A trained Faster R-CNN architecture was used to identify the knee joint space narrowing (JSN) area in the images and to extract features using ResNet-101 with domain adaptation. In addition, we employed another well-trained model (VGG16 with domain adaptation) for knee RA severity classification. Medical experts graded the knee X-radiation images using a consensus-based decision score. We trained the enhanced region proposal network (ERPN) using the manually extracted knee area as the test dataset. An X-radiation image was fed into the final model, and a consensus decision was used to grade the outcome. The presented model correctly identified the marginal knee JSN region with 98.97% accuracy and achieved a total knee RA severity classification accuracy of 99.10%, with a sensitivity of 97.3%, a specificity of 98.2%, a precision of 98.1%, and a Dice score of 90.1%, outperforming other conventional models.
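The detection stage above localizes the JSN region with a bounding box. A standard way to score such a prediction against an expert annotation is intersection-over-union (IoU); the coordinates and the 0.5 threshold below are illustrative detection-practice defaults, not values from the study:

```python
# Score a predicted joint-space-narrowing (JSN) bounding box against an
# expert-annotated box using intersection-over-union (IoU), the usual
# matching criterion for detectors such as Faster R-CNN.
# Boxes are (x1, y1, x2, y2) in pixel coordinates.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if none)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

pred = (100, 200, 300, 260)   # hypothetical detector output for the JSN region
truth = (110, 205, 310, 265)  # hypothetical expert-annotated ground truth
match = iou(pred, truth) >= 0.5  # common threshold for counting a detection
```

Detection accuracy figures like the 98.97% quoted above are typically the fraction of images whose predicted region passes such an overlap test against the expert annotation.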

20.
Diagnostics (Basel) ; 13(6)2023 Mar 17.
Article in English | MEDLINE | ID: mdl-36980463

ABSTRACT

To improve the accuracy of tumor identification, it is necessary to develop a reliable automated diagnostic method. Researchers have developed a variety of segmentation algorithms to categorize brain tumors precisely, yet segmentation of brain images is generally recognized as one of the most challenging tasks in medical image processing. In this article, a novel automated detection and classification method is proposed. The approach consists of several phases: pre-processing MRI images, segmenting them, extracting features, and classifying the images. During pre-processing, an adaptive filter was utilized to eliminate background noise. The local-binary grey level co-occurrence matrix (LBGLCM) was used for feature extraction, and enhanced fuzzy c-means clustering (EFCMC) was used for image segmentation. After extracting the scan features, we used a convolutional recurrent neural network (CRNN) to classify the MRI images into two groups: glioma and normal. MRI scans from the REMBRANDT dataset, comprising 2480 training and 620 testing images, were used for the research. The proposed CRNN strategy was compared against three of the most prevalent classification approaches currently in use: BP, U-Net, and ResNet. For brain tumor classification, the proposed system achieved 98.17% accuracy, 91.34% specificity, and 98.79% sensitivity, outperforming its predecessors.
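The co-occurrence-matrix half of the LBGLCM feature extractor can be sketched in a few lines. The 4x4 quantized "image", the horizontal offset, and the single contrast feature below are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch of grey-level co-occurrence matrix (GLCM) texture features of the
# kind the abstract's LBGLCM stage extracts from MRI scans.
def glcm(img, dx=1, dy=0, levels=4):
    # Count co-occurrences of grey-level pairs at pixel offset (dx, dy).
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[img[y][x]][img[ny][nx]] += 1
    return m

def contrast(m):
    # Haralick contrast: (i - j)^2 weighted by the normalized pair counts,
    # so uniform regions score 0 and sharp grey-level jumps score high.
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * v / total
               for i, row in enumerate(m) for j, v in enumerate(row))

image = [[0, 0, 1, 1],   # toy image quantized to 4 grey levels
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]
features = {"contrast": contrast(glcm(image))}
```

A full pipeline would compute several such statistics (contrast, energy, homogeneity, correlation) over multiple offsets and feed the resulting feature vector, alongside the segmentation output, to the CRNN classifier.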
