Results 1 - 20 of 85
1.
Front Comput Neurosci ; 18: 1393849, 2024.
Article in English | MEDLINE | ID: mdl-38725868

ABSTRACT

Alzheimer's disease (AD) is a neurodegenerative illness that impairs cognition, function, and behavior by causing irreversible damage to multiple brain areas, including the hippocampus. Early diagnosis of AD can lessen the suffering of patients and their family members. Automatic diagnosis techniques are widely needed because of the shortage of medical experts, and they ease the burden on medical staff. An automatic artificial intelligence (AI)-based computerized method can help experts achieve better diagnostic accuracy and precision. This study proposes a new automated framework for AD stage prediction based on the ResNet-Self architecture and a Fuzzy Entropy-controlled Path-Finding Algorithm (FEcPFA). A data augmentation technique is utilized to resolve the dataset imbalance issue. Next, we propose a new deep learning model based on a self-attention module: a ResNet-50 architecture is modified and connected with a self-attention block to extract important information. The hyperparameters were optimized using Bayesian optimization (BO) and then utilized to train the model, which was subsequently employed for feature extraction. The self-attention features were optimized using the proposed FEcPFA, and the best selected features were passed to machine learning classifiers for the final classification. The experimental process utilized a publicly available MRI dataset and achieved an improved accuracy of 99.9%. The results were compared with state-of-the-art (SOTA) techniques, demonstrating the improvement of the proposed framework in terms of accuracy and time efficiency.
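The abstract does not spell out the ResNet-Self block, but the general mechanism it names, scaled dot-product self-attention over extracted feature vectors, can be sketched roughly as follows. The weight matrices here are random placeholders standing in for learned parameters, not the authors' trained model:

```python
import numpy as np

def self_attention(features, d_k=None):
    """Scaled dot-product self-attention over a set of feature vectors.

    features: (n, d) array of n feature vectors (e.g. flattened CNN
    spatial positions). The projections W_q, W_k, W_v are random
    placeholders; in a real model they are learned with the network.
    """
    n, d = features.shape
    d_k = d_k or d
    rng = np.random.default_rng(0)
    W_q = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_k = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_v = rng.standard_normal((d, d_k)) / np.sqrt(d)

    Q, K, V = features @ W_q, features @ W_k, features @ W_v
    scores = Q @ K.T / np.sqrt(d_k)               # (n, n) pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # row-wise softmax
    return weights @ V                            # attention-weighted features

feats = np.random.default_rng(1).standard_normal((8, 16))
attended = self_attention(feats)
```

Each output row is a mixture of all input vectors, weighted by learned (here, random) similarity, which is how the block lets distant image regions inform each feature.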

2.
BMC Med Inform Decis Mak ; 24(1): 92, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38575951

ABSTRACT

Emerging from the convergence of digital twin technology and the metaverse, metaverse consumer health (MCH) is witnessing a transformative shift. The amalgamation of bioinformatics with healthcare Big Data has ushered in a new era of disease prediction models that harness comprehensive medical data, enabling the anticipation of illnesses even before the onset of symptoms. Among these models, deep neural networks stand out because they improve accuracy remarkably by increasing network depth and updating weights via gradient descent. Nonetheless, traditional methods face their own set of challenges, including gradient instability and slow training. Here, the Broad Learning System (BLS) stands out as a good alternative: it sidesteps the problems of gradient descent and allows a model to be rebuilt quickly through incremental learning. One problem with BLS is that it has trouble extracting complex features from complex medical data, which makes it less useful in a wide range of healthcare situations. In response to these challenges, we introduce DAE-BLS, a novel hybrid model that marries Denoising AutoEncoder (DAE) noise reduction with the efficiency of BLS. This hybrid approach excels in robust feature extraction, particularly within the intricate and multifaceted world of medical data. Validation using diverse datasets yields impressive results, with accuracies reaching as high as 98.50%. DAE-BLS's ability to adapt rapidly through incremental learning holds great promise for accurate and agile disease prediction, especially within the complex and dynamic healthcare scenarios of today.


Subject(s)
Big Data , Technology , Humans , Computational Biology , Health Facilities , Neural Networks, Computer
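As a rough illustration of the DAE half of the hybrid above: a denoising autoencoder corrupts its input and learns to reconstruct the clean signal, so the hidden code becomes a noise-robust feature vector. The sketch below uses untrained random weights purely to show the data flow (shapes and the denoising objective), not the trained DAE-BLS model:

```python
import numpy as np

def dae_forward(x, noise_std=0.3, hidden=8, seed=0):
    """One forward pass of a denoising-autoencoder sketch.

    x: (n, d) data batch. The input is corrupted with Gaussian noise,
    encoded to a smaller hidden layer, and decoded back; training would
    minimise the reconstruction error against the *clean* input.
    Weights are random placeholders, not trained parameters.
    """
    rng = np.random.default_rng(seed)
    n, d = x.shape
    W_enc = rng.standard_normal((d, hidden)) / np.sqrt(d)
    W_dec = rng.standard_normal((hidden, d)) / np.sqrt(hidden)

    corrupted = x + noise_std * rng.standard_normal(x.shape)
    code = np.tanh(corrupted @ W_enc)   # noise-robust low-dim features
    recon = code @ W_dec                # reconstruction of the clean x
    loss = np.mean((recon - x) ** 2)    # denoising objective
    return code, recon, loss

x = np.random.default_rng(1).standard_normal((4, 20))
code, recon, loss = dae_forward(x)
```

In the paper's pipeline, the `code` vectors would then feed the BLS stage instead of the raw medical data.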
3.
Front Oncol ; 14: 1347856, 2024.
Article in English | MEDLINE | ID: mdl-38454931

ABSTRACT

With over 2.1 million new cases diagnosed annually, the incidence and mortality of breast cancer pose a severe global health issue for women. Early identification of the disease is the only practical way to lessen its impact. Numerous research works have developed automated methods using different medical imaging modalities to identify breast cancer (BC). Still, the precision of each strategy differs based on the available resources, the nature of the problem, and the dataset used. We propose a novel deep bottleneck convolutional neural network with a quantum optimization algorithm for breast cancer classification and diagnosis from mammogram images. Two novel deep architectures, a three-residual-block bottleneck and a four-residual-block bottleneck, have been proposed with parallel and single paths. Bayesian Optimization (BO) has been employed to initialize hyperparameter values and train the architectures on the selected dataset. Deep features are extracted from the global average pool layer of both models. After that, a kernel-based canonical correlation analysis and entropy technique is proposed for fusing the extracted deep features. The fused feature set is further refined using an optimization technique named quantum generalized normal distribution optimization. The selected features are finally classified using several neural network classifiers, such as bi-layered and wide neural networks. The experimental process was conducted on a publicly available mammogram imaging dataset named INbreast, and a maximum accuracy of 96.5% was obtained. Moreover, the proposed method achieved a sensitivity of 96.45%, a precision of 96.5%, an F1 score of 96.64%, an MCC of 92.97%, and a Kappa of 92.97%. The proposed architectures are further utilized for the diagnosis of infected regions.
In addition, a detailed comparison has been conducted with a few recent techniques showing the proposed framework's higher accuracy and precision rate.

4.
Sci Rep ; 14(1): 5895, 2024 03 11.
Article in English | MEDLINE | ID: mdl-38467755

ABSTRACT

A significant issue in computer-aided diagnosis (CAD) for medical applications is brain tumor classification. Radiologists could reliably detect tumors using machine learning algorithms without extensive surgery. However, a few important challenges arise, such as (i) the selection of the most important deep learning architecture for classification, and (ii) the need for a domain expert who can assess the output of deep learning models. These difficulties motivate us to propose an efficient and accurate system based on deep learning and evolutionary optimization for the classification of four types of brain modalities (t1 tumor, t1ce tumor, t2 tumor, and flair tumor) on a large-scale MRI database. Thus, a CNN architecture is modified based on domain knowledge and connected with an evolutionary optimization algorithm to select hyperparameters. In parallel, a Stack Encoder-Decoder network is designed with ten convolutional layers. The features of both models are extracted and optimized using an improved version of Grey Wolf optimization with the updated criteria of the Jaya algorithm, which speeds up the learning process and improves accuracy. Finally, the selected features are fused using a novel parallel pooling approach and classified using machine learning and neural networks. Two datasets, BraTS2020 and BraTS2021, were employed for the experimental tasks, obtaining an improved average accuracy of 98% and a maximum single-classifier accuracy of 99%. Comparison was also conducted with several classifiers, techniques, and neural nets; the proposed method achieved improved performance.


Subject(s)
Brain Neoplasms , Deep Learning , Delayed Emergence from Anesthesia , Humans , Neural Networks, Computer , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging
5.
Front Oncol ; 14: 1335740, 2024.
Article in English | MEDLINE | ID: mdl-38390266

ABSTRACT

Brain tumor classification is one of the most difficult tasks for clinical diagnosis and treatment in medical image analysis. Any error that occurs during the brain tumor diagnosis process may result in a shorter human life span. Nevertheless, most currently used techniques ignore certain features that have particular significance and relevance to the classification problem in favor of extracting and choosing deep-significance features. One important area of research is the deep learning-based categorization of brain tumors using brain magnetic resonance imaging (MRI). This paper proposes an automated deep learning model and an optimal information fusion framework for classifying brain tumors from MRI images. The dataset used in this work was imbalanced, a key challenge for training the selected networks: imbalance in the training dataset biases classifier performance in favor of the majority class. We designed a sparse autoencoder network to generate new images that resolve the imbalance problem. After that, two pretrained neural networks were modified and their hyperparameters were initialized using Bayesian optimization, which was later utilized for the training process. Deep features were then extracted from the global average pooling layer. The extracted features contain some irrelevant information; therefore, we proposed an improved Quantum Theory-based Marine Predator Optimization algorithm (QTbMPA). The proposed QTbMPA selects the best features of both networks and finally fuses them using a serial-based approach. The fused feature set is passed to neural network classifiers for the final classification. The proposed framework was tested on an augmented Figshare dataset and obtained an improved accuracy of 99.80%, a sensitivity of 99.83%, a false negative rate of 0.17%, and a precision of 99.83%.
Comparison and ablation study show the improvement in the accuracy of this work.
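The "serial-based approach" named above is, in the usual usage of these papers, simple per-sample concatenation of the two networks' selected feature vectors. A minimal sketch, assuming two feature matrices with matching sample counts:

```python
import numpy as np

def serial_fuse(f1, f2):
    """Serial (concatenation-based) fusion of two deep feature sets.

    f1: (n, d1) and f2: (n, d2) are features from the two networks for
    the same n samples; the fused vector stacks them per sample,
    giving (n, d1 + d2).
    """
    assert f1.shape[0] == f2.shape[0], "sample counts must match"
    return np.concatenate([f1, f2], axis=1)

a = np.ones((5, 100))    # e.g. selected features from network 1
b = np.zeros((5, 60))    # e.g. selected features from network 2
fused = serial_fuse(a, b)
```

The fused matrix would then be handed to the downstream classifiers exactly as the abstract describes.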

6.
Math Biosci Eng ; 20(11): 19454-19467, 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-38052609

ABSTRACT

Cancer occurrence rates are gradually rising in the population, which creates a heavy diagnostic burden globally. The rate of colorectal (bowel) cancer (CC) is gradually rising, and it is currently listed as the third most common cancer globally. Therefore, early screening and treatment under a recommended clinical protocol are necessary to treat cancer. The aim of this paper is to develop a Deep-Learning Framework (DLF) to classify colon histology slides into normal/cancer classes using deep-learning-based features. The stages of the framework are as follows: (i) image collection, resizing, and pre-processing; (ii) Deep-Feature (DF) extraction with a chosen scheme; (iii) binary classification with 5-fold cross-validation; and (iv) verification of the clinical significance. This work classifies the considered image database using (i) individual DF, (ii) fused DF, and (iii) ensemble DF. The achieved results are separately verified using binary classifiers. The proposed work considered 4000 (2000 normal and 2000 cancer) histology slides for the examination. The results of this research confirm that the fused DF helps to achieve a detection accuracy of 99% with the K-Nearest Neighbor (KNN) classifier. In contrast, the individual and ensemble DF provide classification accuracies of 93.25% and 97.25%, respectively.


Subject(s)
Deep Learning , Neoplasms , Humans , Algorithms , Image Processing, Computer-Assisted/methods , Colon , Neoplasms/diagnosis
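Stage (iii) above, binary classification with 5-fold cross-validation and a KNN classifier, can be sketched end to end on synthetic stand-in features (the real pipeline would use the deep features described in the abstract):

```python
import numpy as np

def knn_predict(train_x, train_y, test_x, k=3):
    """k-nearest-neighbour majority vote using Euclidean distance."""
    d = ((test_x[:, None, :] - train_x[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = train_y[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])

def cross_val_accuracy(x, y, folds=5, k=3, seed=0):
    """Plain (unstratified) k-fold cross-validation accuracy."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    scores = []
    for f in range(folds):
        test = idx[f::folds]                 # every folds-th shuffled index
        train = np.setdiff1d(idx, test)
        pred = knn_predict(x[train], y[train], x[test], k)
        scores.append((pred == y[test]).mean())
    return float(np.mean(scores))

# two well-separated synthetic classes stand in for the deep features
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(6, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
acc = cross_val_accuracy(x, y)
```

With such separable toy classes the cross-validated accuracy is near perfect; real histology features are far harder, which is what the fusion step addresses.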
7.
Open Life Sci ; 18(1): 20220764, 2023.
Article in English | MEDLINE | ID: mdl-38027230

ABSTRACT

In the rapidly evolving landscape of agricultural technology, image processing has emerged as a powerful tool for addressing critical agricultural challenges, with a particular focus on the identification and management of crop diseases. This study is motivated by the imperative need to enhance agricultural sustainability and productivity through precise plant health monitoring. Our primary objective is to propose an innovative approach combining support vector machine (SVM) with advanced image processing techniques to achieve precise detection and classification of fig leaf diseases. Our methodology encompasses a step-by-step process, beginning with the acquisition of digital color images of diseased leaves, followed by denoising using the mean function and enhancement through Contrast-limited adaptive histogram equalization. The subsequent stages involve segmentation through the Fuzzy C Means algorithm, feature extraction via Principal Component Analysis, and disease classification, employing Particle Swarm Optimization (PSO) in conjunction with SVM, Backpropagation Neural Network, and Random Forest algorithms. The results of our study showcase the exceptional performance of the PSO SVM algorithm in accurately classifying and detecting fig leaf disease, demonstrating its potential for practical implementation in agriculture. This innovative approach not only underscores the significance of advanced image processing techniques but also highlights their substantial contributions to sustainable agriculture and plant disease mitigation. In conclusion, the integration of image processing and SVM-based classification offers a promising avenue for advancing crop disease management, ultimately bolstering agricultural productivity and global food security.
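The PSO stage above tunes the classifier; the optimizer itself is generic. A minimal particle swarm sketch, minimizing a toy 2-D objective that stands in for a cross-validation error surface (the inertia and acceleration constants are common textbook values, not the paper's settings):

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=80, seed=0):
    """Minimal particle swarm optimiser (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive + social velocity update
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# toy stand-in objective with its minimum at (3, -2)
obj = lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2
best, best_f = pso(obj, (np.array([-10.0, -10.0]), np.array([10.0, 10.0])))
```

In the paper's setting, `objective` would instead evaluate the SVM's validation error for a candidate parameter vector.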

8.
Open Life Sci ; 18(1): 20220746, 2023.
Article in English | MEDLINE | ID: mdl-37954104

ABSTRACT

Lung cancer is a substantial health issue globally and one of the main causes of mortality. Malignant mesothelioma (MM) is a common kind of lung cancer, and the majority of patients with MM have no symptoms. In the diagnosis of any disease, etiology is crucial. MM risk factor detection procedures include positron emission tomography, magnetic resonance imaging, biopsies, X-rays, and blood tests, which are all necessary but costly and intrusive. This study concentrated primarily on the investigation of MM risk variables. Mesothelioma symptoms were detected with the help of data from mesothelioma patients; the dataset, however, included both healthy and mesothelioma patients. Classification algorithms for MM illness diagnosis were implemented using computationally efficient data mining techniques. The support vector machine (SVM) outperformed the multilayer perceptron ensembles (MLPE) neural network (NN) technique, yielding promising findings. With 99.87% classification accuracy achieved using 10-fold cross-validation over 5 runs, SVM is the best classifier when contrasted with the MLPE NN, which achieves 99.56% classification accuracy. In addition, an SPSS analysis was carried out for this study to collect pertinent experimental data.

9.
Diagnostics (Basel) ; 13(19)2023 Sep 26.
Article in English | MEDLINE | ID: mdl-37835807

ABSTRACT

Cancer is one of the leading causes of illness and chronic disease worldwide. Skin cancer, particularly melanoma, is becoming a severe health problem due to its rising prevalence. The considerable death rate linked with melanoma requires early detection to enable immediate and successful treatment. Lesion detection and classification are challenging due to many forms of artifacts, such as hairs, noise, irregularity of lesion shape and color, irrelevant features, and textures. In this work, we proposed a deep-learning architecture for multiclass skin cancer classification and melanoma detection. The proposed architecture consists of four core steps: image preprocessing, feature extraction and fusion, feature selection, and classification. A novel contrast enhancement technique is proposed based on the image luminance information. After that, two pre-trained deep models, DarkNet-53 and DenseNet-201, are modified with a residual block at the end and trained through transfer learning. In the learning process, a genetic algorithm is applied to select hyperparameters. The resultant features are fused using a two-step approach named serial-harmonic mean. This step increases the accuracy of the correct classification, but some irrelevant information is also observed. Therefore, an algorithm called Marine Predators Algorithm (MPA)-controlled Rényi entropy is developed to select the best features. The selected features are finally classified using machine learning classifiers. Two datasets, ISIC2018 and ISIC2019, were selected for the experimental process, on which maximum accuracies of 85.4% and 98.80%, respectively, were obtained. To prove the effectiveness of the proposed methods, a detailed comparison is conducted with several recent techniques and shows that the proposed framework outperforms them.

10.
Diagnostics (Basel) ; 13(17)2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37685369

ABSTRACT

In recent times, DFU (diabetic foot ulcer) has become a universal health problem that severely affects many diabetes patients. DFU requires immediate, proper treatment to avert amputation. Clinical examination of DFU is a tedious and complex process. Concurrently, DL (deep learning) methodologies can show prominent outcomes in the classification of DFU because of their efficient learning capacity. Though traditional systems have tried using DL-based models to procure better performance, there is room for enhancement in accuracy. Therefore, the present study uses the AWSg-CNN (Adaptive Weighted Sub-gradient Convolutional Neural Network) method to classify DFU. A DFUC dataset is considered, and several processes are involved in the present study. Initially, the proposed method starts with pre-processing, excluding inconsistent and missing data, to enhance dataset quality and accuracy. Further, for classification, the proposed method utilizes RIW (random initialization of weights) and log softmax with the ASGO (Adaptive Sub-gradient Optimizer) for effective performance. In this process, RIW efficiently learns the shift of feature space between the convolutional layers. To evade the underflow of gradients, the log softmax function is used. When log softmax with the ASGO is used as the activation function, the gradient steps are controlled. An adaptive modification of the proximal function simplifies the learning rate significantly, and optimal proximal functions are produced. Due to these merits, the proposed method can perform better classification. The predicted results are displayed on a webpage through the HTML, CSS, and Flask frameworks. The effectiveness of the proposed system is evaluated with accuracy, recall, F1-score, and precision to confirm its effectual performance.
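The log-softmax trick mentioned above, used to evade underflow, is usually implemented by subtracting the row maximum before exponentiating, so even extreme logits stay finite:

```python
import numpy as np

def log_softmax(z):
    """Numerically stable log-softmax.

    Shifting by the row maximum leaves the result unchanged
    mathematically but prevents exp() from overflowing or
    underflowing on large-magnitude logits.
    """
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

logits = np.array([[1000.0, 1001.0, 999.0]])  # naive softmax overflows here
ls = log_softmax(logits)
probs = np.exp(ls)                             # recover a valid distribution
```

A naive `exp(1000)` is infinite in float64; the shifted version returns finite log-probabilities whose exponentials still sum to one.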

11.
Diagnostics (Basel) ; 13(18)2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37761236

ABSTRACT

Background: Using artificial intelligence (AI) with the concept of a deep learning-based automated computer-aided diagnosis (CAD) system has shown improved performance for skin lesion classification. Although deep convolutional neural networks (DCNNs) have significantly improved many image classification tasks, it is still difficult to accurately classify skin lesions because of a lack of training data, inter-class similarity, intra-class variation, and the inability to concentrate on semantically significant lesion parts. Innovations: To address these issues, we proposed an automated deep learning and best feature selection framework for multiclass skin lesion classification in dermoscopy images. The proposed framework performs an initial preprocessing step for contrast enhancement using a new technique based on dark channel haze and top-bottom filtering. Three pre-trained deep learning models are then fine-tuned and trained using the transfer learning concept. In the fine-tuning process, we added and removed a few layers to lessen the number of parameters and later selected the hyperparameters using a genetic algorithm (GA) instead of manual assignment, with the purpose of improving the learning performance. After that, the deeper layer is selected for each network and deep features are extracted. The extracted deep features are fused using a novel serial correlation-based approach. This technique reduces the feature vector length compared with a plain serial-based approach, but a little redundant information remains. To address this issue, we proposed an improved antlion optimization algorithm for best feature selection. The selected features are finally classified using machine learning algorithms. Main Results: The experimental process was conducted using two publicly available datasets, ISIC2018 and ISIC2019, on which we obtained accuracies of 96.1% and 99.9%, respectively.
Comparison was also conducted with state-of-the-art techniques and shows that the proposed framework improved accuracy. Conclusions: The proposed framework successfully enhances the contrast of the cancer region. Moreover, the selection of hyperparameters using automated techniques improved the learning process of the proposed framework. The proposed fusion and improved selection process maintain the best accuracy and shorten the computational time.
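The GA-based hyperparameter selection described above can be sketched generically. The fitness below is a toy stand-in for a validation-error surface, and the population size, crossover, and mutation scheme are common defaults rather than the paper's settings:

```python
import random

def genetic_search(fitness, bounds, pop=20, gens=40, seed=0):
    """Toy real-valued genetic algorithm for hyperparameter search.

    fitness: function to MINIMISE (e.g. validation error as a function
    of learning rate and momentum); bounds: list of (lo, hi) per gene.
    Selection is truncation, crossover is uniform, and mutation
    perturbs one gene with small Gaussian noise.
    """
    rnd = random.Random(seed)
    def rand_ind():
        return [rnd.uniform(lo, hi) for lo, hi in bounds]
    popn = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        parents = popn[: pop // 2]              # keep the fitter half
        children = []
        while len(children) < pop - len(parents):
            a, b = rnd.sample(parents, 2)
            child = [x if rnd.random() < 0.5 else y for x, y in zip(a, b)]
            i = rnd.randrange(len(bounds))      # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rnd.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        popn = parents + children               # elitist: parents survive
    return min(popn, key=fitness)

# stand-in "validation error" with its optimum at lr=0.01, momentum=0.9
err = lambda g: (g[0] - 0.01) ** 2 + (g[1] - 0.9) ** 2
best = genetic_search(err, [(0.0001, 1.0), (0.0, 0.99)])
```

In practice, evaluating `fitness` means training or validating the network once per candidate, which is why the abstract stresses that automated selection still pays off over manual assignment.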

12.
Article in English | MEDLINE | ID: mdl-37436864

ABSTRACT

The proposed study is based on a feature and channel selection strategy that uses correlation filters for brain-computer interface (BCI) applications using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) brain imaging modalities. The proposed approach fuses the complementary information of the two modalities to train the classifier. The channels most closely correlated with brain activity are extracted using a correlation-based connectivity matrix for fNIRS and EEG separately. Furthermore, the training vector is formed through the identification and fusion of the statistical features of both modalities (i.e., slope, skewness, maximum, mean, and kurtosis). The constructed fused feature vector is passed through various filters (including ReliefF, minimum redundancy maximum relevance, chi-square test, analysis of variance, and Kruskal-Wallis filters) to remove redundant information before training. Traditional classifiers such as neural networks, support vector machines, linear discriminant analysis, and ensembles were used for training and testing. A publicly available dataset with motor imagery information was used for validation of the proposed approach. Our findings indicate that the proposed correlation-filter-based channel and feature selection framework significantly enhances the classification accuracy of hybrid EEG-fNIRS. The ReliefF-based filter performed best with the ensemble classifier, reaching a high accuracy of 94.77 ± 4.26%. Statistical analysis also validated the significance (p < 0.01) of the results. A comparison of the proposed framework with prior findings was also presented. Our results show that the proposed approach can be used in future EEG-fNIRS-based hybrid BCI applications.
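The statistical features named above are cheap to compute per channel window. A sketch computing slope, skewness, maximum, mean, and kurtosis on a toy window (a simple ramp, so the expected values are easy to check) might look like:

```python
import numpy as np

def channel_features(signal):
    """Statistical features of one EEG/fNIRS channel window.

    Returns [slope, skewness, maximum, mean, kurtosis], the kinds of
    descriptors the study lists; the exact feature set and windowing
    in the paper may differ.
    """
    t = np.arange(len(signal))
    slope = np.polyfit(t, signal, 1)[0]           # linear trend
    mu, sd = signal.mean(), signal.std()
    skew = ((signal - mu) ** 3).mean() / sd ** 3  # asymmetry
    kurt = ((signal - mu) ** 4).mean() / sd ** 4  # tail heaviness
    return np.array([slope, skew, signal.max(), mu, kurt])

sig = np.arange(10.0)        # toy window: a pure ramp
fv = channel_features(sig)   # slope 1.0, skew 0, max 9.0, mean 4.5
```

One such vector per selected channel, concatenated across modalities, would form the fused training vector described in the abstract.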

13.
Diagnostics (Basel) ; 13(11)2023 May 29.
Article in English | MEDLINE | ID: mdl-37296750

ABSTRACT

Mental stress is known as a prime factor in road crashes. The devastation of these crashes often results in damage to humans, vehicles, and infrastructure. Likewise, persistent mental stress could lead to the development of mental, cardiovascular, and abdominal disorders. Preceding research in this domain mostly focuses on feature engineering and conventional machine learning approaches. These approaches recognize different levels of stress based on handcrafted features extracted from various modalities including physiological, physical, and contextual data. Acquiring good-quality features from these modalities using feature engineering is often a difficult job. Recent developments in the form of deep learning (DL) algorithms have relieved feature engineering by automatically extracting and learning resilient features. This paper proposes different CNN- and CNN-LSTM-based fusion models using physiological signals (SRAD dataset) and multimodal data (AffectiveROAD dataset) for two and three levels of driver stress. The fuzzy EDAS (evaluation based on distance from average solution) approach is used to evaluate the performance of the proposed models based on different classification metrics (accuracy, recall, precision, F-score, and specificity). Fuzzy EDAS performance estimation shows that the proposed CNN and hybrid CNN-LSTM models achieved the first ranks based on the fusion of BH, E4-Left (E4-L), and E4-Right (E4-R). Results showed the significance of multimodal data for designing an accurate and trustworthy stress recognition model for real-world driving conditions. The proposed model can also be used for the diagnosis of the stress level of a subject during other daily life activities.

14.
Math Biosci Eng ; 20(6): 10404-10427, 2023 04 06.
Article in English | MEDLINE | ID: mdl-37322939

ABSTRACT

One of the most effective approaches for identifying breast cancer is histology, the meticulous inspection of tissues under a microscope. The kind of cancer cells, and whether they are cancerous (malignant) or non-cancerous (benign), is typically determined from the type of tissue analyzed by the technician. The goal of this study was to automate IDC (invasive ductal carcinoma) classification within breast cancer histology samples using a transfer learning technique. To improve our outcomes, we combined Gradient-weighted Class Activation Mapping (Grad-CAM) and an image coloring mechanism with a discriminative fine-tuning methodology employing a one-cycle strategy using FastAI techniques. There have been many research studies on deep transfer learning that use the same mechanism, but this report uses a transfer learning mechanism based on the lightweight SqueezeNet architecture, a variant of CNN (convolutional neural network). This strategy demonstrates that fine-tuning SqueezeNet makes it possible to achieve satisfactory results when transferring generic features from natural images to medical images.


Subject(s)
Breast Neoplasms , Deep Learning , Humans , Female , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Neural Networks, Computer
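The "one-cycle strategy" mentioned in the abstract ramps the learning rate up and then anneals it down within a single training run. FastAI's exact schedule differs in detail; the sketch below uses a linear warm-up over the first 30% of steps followed by cosine annealing, with illustrative default values:

```python
import math

def one_cycle_lr(step, total_steps, max_lr=0.01, start_div=25, end_div=1e4):
    """One-cycle learning-rate schedule sketch.

    Warms up linearly from max_lr/start_div to max_lr over the first
    30% of steps, then anneals down to max_lr/end_div on a cosine
    curve. The divisors and warm-up fraction are illustrative.
    """
    warm = int(0.3 * total_steps)
    if step < warm:
        frac = step / max(1, warm)
        lo = max_lr / start_div
        return lo + frac * (max_lr - lo)
    frac = (step - warm) / max(1, total_steps - warm)
    lo = max_lr / end_div
    return lo + 0.5 * (max_lr - lo) * (1 + math.cos(math.pi * frac))

lrs = [one_cycle_lr(s, 100) for s in range(100)]
```

The high mid-cycle rate acts as a regularizer while the low final rate lets the fine-tuned SqueezeNet settle, which is the intuition behind discriminative fine-tuning with one cycle.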
15.
Front Oncol ; 13: 1151257, 2023.
Article in English | MEDLINE | ID: mdl-37346069

ABSTRACT

Skin cancer is a serious disease that affects people all over the world. Melanoma is an aggressive form of skin cancer, and early detection can significantly reduce human mortality. In the United States, approximately 97,610 new cases of melanoma will be diagnosed in 2023. However, challenges such as lesion irregularities, low-contrast lesions, intraclass color similarity, redundant features, and imbalanced datasets make improved recognition accuracy using computerized techniques extremely difficult. This work presents a new framework for skin lesion recognition using data augmentation, deep learning, and explainable artificial intelligence. In the proposed framework, data augmentation is performed at the initial step to increase the dataset size, and then two pretrained deep learning models (Xception and ShuffleNet) are employed, fine-tuned, and trained using deep transfer learning. Both models utilize the global average pooling layer for deep feature extraction. Analysis of this step shows that some important information is missing; therefore, we performed feature fusion. Because the fusion process increased the computational time, we developed an improved Butterfly Optimization Algorithm that selects only the best features, which are then classified using machine learning classifiers. In addition, a Grad-CAM-based visualization is performed to analyze the important regions in the image. Two publicly available datasets, ISIC2018 and HAM10000, were utilized, obtaining improved accuracies of 99.3% and 91.5%, respectively. Comparing the proposed framework with state-of-the-art methods reveals improved accuracy and lower computational time.

16.
Diagnostics (Basel) ; 13(8)2023 Apr 10.
Article in English | MEDLINE | ID: mdl-37189485

ABSTRACT

We developed a framework to detect and grade knee RA using digital X-radiation images and used it to demonstrate the ability of deep learning approaches to detect knee RA using a consensus-based decision (CBD) grading system. The study aimed to evaluate the efficiency with which a deep learning approach based on artificial intelligence (AI) can find and determine the severity of knee RA in digital X-radiation images. The study comprised people over 50 years of age with RA symptoms, such as knee joint pain, stiffness, crepitus, and functional impairments. The digitized X-radiation images of the participants were obtained from the BioGPS database repository. We used 3172 digital X-radiation images of the knee joint from an anterior-posterior perspective. The trained Faster R-CNN architecture was used to identify the knee joint space narrowing (JSN) area in digital X-radiation images and extract features using ResNet-101 with domain adaptation. In addition, we employed another well-trained model (VGG16 with domain adaptation) for knee RA severity classification. Medical experts graded the X-radiation images of the knee joint using a consensus-based decision score. We trained the enhanced region proposal network (ERPN) using this manually extracted knee area as the test dataset image. An X-radiation image was fed into the final model, and a consensus decision was used to grade the outcome. The presented model correctly identified the marginal knee JSN region with 98.97% accuracy and classified total knee RA intensity with 99.10% accuracy, with a sensitivity of 97.3%, a specificity of 98.2%, a precision of 98.1%, and a dice score of 90.1%, outperforming other conventional models.

17.
Sensors (Basel) ; 23(8)2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37112323

ABSTRACT

With the most recent developments in wearable technology, the possibility of continually monitoring stress using various physiological factors has attracted much attention. By reducing the detrimental effects of chronic stress, early diagnosis of stress can enhance healthcare. Machine Learning (ML) models are trained for healthcare systems to track health status using adequate user data. However, insufficient data is accessible due to privacy concerns, making it challenging to use Artificial Intelligence (AI) models in the medical industry. This research aims to preserve the privacy of patient data while classifying wearable-based electrodermal activities. We propose a Federated Learning (FL)-based approach using a Deep Neural Network (DNN) model. For experimentation, we use the Wearable Stress and Affect Detection (WESAD) dataset, which includes five data states: transient, baseline, stress, amusement, and meditation. We transform this raw dataset into a suitable form for the proposed methodology using the Synthetic Minority Oversampling Technique (SMOTE) and min-max normalization pre-processing methods. In the FL-based technique, the DNN algorithm is trained on the dataset individually after receiving model updates from two clients. To decrease the over-fitting effect, every client analyzes the results three times. Accuracy, precision, recall, F1-score, and Area Under the Receiver Operating Characteristic Curve (AUROC) values are evaluated for each client. The experimental results show the effectiveness of the federated learning-based technique on a DNN, reaching 86.82% accuracy while also preserving the privacy of the patients' data. Using the FL-based DNN model on the WESAD dataset improves detection accuracy compared to previous studies while also protecting patient data privacy.


Subject(s)
Artificial Intelligence , Wrist , Humans , Galvanic Skin Response , Wrist Joint , Fitness Trackers
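The FL setup above keeps raw data on the clients and shares only model updates; the server-side step is typically federated averaging, a dataset-size-weighted mean of client weights. A minimal sketch with two hypothetical clients (the layer shapes and sizes are illustrative, not the paper's DNN):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging of client model weights.

    client_weights: list of per-client weight lists (one array per
    layer); client_sizes: local dataset sizes. The server combines
    weights as a mean weighted by local data size, so raw (private)
    data never leaves the clients.
    """
    total = sum(client_sizes)
    layers = len(client_weights[0])
    return [
        sum(w[l] * (n / total) for w, n in zip(client_weights, client_sizes))
        for l in range(layers)
    ]

# two hypothetical clients, each with a single 2x2 weight matrix
c1 = [np.full((2, 2), 1.0)]   # client with 100 local samples
c2 = [np.full((2, 2), 3.0)]   # client with 300 local samples
global_w = fed_avg([c1, c2], client_sizes=[100, 300])
```

With sizes 100 and 300, the averaged weight is 0.25 * 1.0 + 0.75 * 3.0 = 2.5; the aggregated model would then be broadcast back to both clients for the next training round.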
18.
Diagnostics (Basel) ; 13(7)2023 Mar 25.
Article in English | MEDLINE | ID: mdl-37046456

ABSTRACT

One of the most frequent cancers in women is breast cancer; in 2022, approximately 287,850 new cases were diagnosed, and 43,250 women died from the disease. An early diagnosis of this cancer can help to reduce the mortality rate. However, manual diagnosis from mammogram images is not an easy process and always requires an expert. Several AI-based techniques have been suggested in the literature; however, they still face several challenges, such as similarities between cancer and non-cancer regions, irrelevant feature extraction, and weak training models. In this work, we propose a new automated computerized framework for breast cancer classification. The proposed framework improves the contrast using a novel enhancement technique called haze-reduced local-global. The enhanced images are then employed for dataset augmentation. This step aims to increase the diversity of the dataset and improve the training capability of the selected deep learning model. After that, a pre-trained model named EfficientNet-b0 is employed and fine-tuned by adding a few new layers. The fine-tuned model is trained separately on the original and enhanced images using deep transfer learning concepts with static hyperparameter initialization. In the next step, deep features are extracted from the average pooling layer and fused using a new serial-based approach. The fused features are then optimized using a feature selection algorithm known as Equilibrium-Jaya controlled Regula Falsi, in which Regula Falsi serves as the termination function. The selected features are finally classified using several machine learning classifiers. The experimental process was conducted on two publicly available datasets, CBIS-DDSM and INbreast, achieving average accuracies of 95.4% and 99.7%, respectively.
A comparison with state-of-the-art (SOTA) techniques shows that the proposed framework improves accuracy. Moreover, a confidence interval-based analysis shows consistent results for the proposed framework.
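The serial-based fusion step described above can be sketched as a row-aligned concatenation of the two feature matrices extracted from the original-image and enhanced-image models. This is a minimal sketch under assumptions: the feature dimensionality (1280, EfficientNet-b0's average-pooling width) and sample count are illustrative, not from the paper.

```python
import numpy as np

def serial_fuse(feat_a, feat_b):
    """Serial (concatenation-based) fusion of two deep feature matrices.

    Rows are samples; columns are features from each trained model.
    Both matrices must describe the same samples in the same order.
    """
    assert feat_a.shape[0] == feat_b.shape[0], "same number of samples required"
    return np.concatenate([feat_a, feat_b], axis=1)

# Hypothetical average-pooling features: 4 samples, 1280-D each
rng = np.random.default_rng(0)
original_feats = rng.random((4, 1280))   # model trained on original images
enhanced_feats = rng.random((4, 1280))   # model trained on enhanced images
fused = serial_fuse(original_feats, enhanced_feats)
```

The fused matrix doubles the feature dimension, which is why a subsequent selection step (Equilibrium-Jaya controlled Regula Falsi in the paper) is used to discard redundant columns before classification.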

19.
Diagnostics (Basel) ; 13(7)2023 Mar 28.
Article in English | MEDLINE | ID: mdl-37046503

ABSTRACT

The demand for the accurate and timely identification of melanoma, a major skin cancer type, is increasing daily. With the advent of modern tools and computer vision techniques, such analysis has become easier to perform. Skin cancer classification and segmentation techniques require clear lesions segregated from the background for efficient results. Many studies address this only partially, leaving ample room for new research in this field. Recently, many algorithms have been presented to preprocess skin lesions, helping segmentation algorithms generate better outcomes. Nature-inspired algorithms and metaheuristics help to estimate the optimal parameter set in the search space. This research article proposes a hybrid metaheuristic preprocessor, BA-ABC, to improve image quality by enhancing contrast while preserving brightness. The statistical transformation function that improves the contrast is based on a parameter set estimated through the proposed hybrid metaheuristic model for every image in the dataset. For experimentation, we utilised three publicly available datasets: ISIC-2016, 2017, and 2018. The efficacy of the presented model is validated through several state-of-the-art segmentation algorithms. The visual outcomes of the boundary estimation algorithms and the performance metrics confirm that the proposed model performs well, improving the Dice coefficient to 94.6%.
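The idea of a parametric contrast transform whose parameters a metaheuristic tunes per image can be sketched as follows. This is not the BA-ABC algorithm itself: the sigmoid-style transfer function, the entropy fitness, and the tiny grid search standing in for the metaheuristic are all illustrative assumptions.

```python
import numpy as np

def contrast_transform(img, gain, cutoff):
    """Sigmoid-style intensity mapping on a grayscale image in [0, 1].

    gain and cutoff are the per-image parameters a metaheuristic
    (BA-ABC in the paper) would search over.
    """
    return 1.0 / (1.0 + np.exp(gain * (cutoff - img)))

def entropy_fitness(img):
    """Shannon entropy of the intensity histogram: a common contrast objective."""
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # toy grayscale image in [0, 1]

# Stand-in for the metaheuristic: pick the (gain, cutoff) pair
# that maximizes entropy of the enhanced image.
candidates = [(g, c) for g in (4.0, 8.0, 12.0) for c in (0.3, 0.5, 0.7)]
best = max(candidates, key=lambda gc: entropy_fitness(contrast_transform(img, *gc)))
enhanced = contrast_transform(img, *best)
```

A real metaheuristic replaces the grid with a guided population search, but the structure is the same: candidate parameters in, fitness of the transformed image out.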

20.
Comput Intell Neurosci ; 2023: 4776770, 2023.
Article in English | MEDLINE | ID: mdl-36864930

ABSTRACT

Malfunctions in the immune system cause multiple sclerosis (MS), which initiates mild to severe nerve damage. MS disturbs the signal communication between the brain and other body parts, and early diagnosis helps reduce its severity. Magnetic resonance imaging (MRI)-supported MS detection is a standard clinical procedure in which the bio-image recorded with a chosen modality is used to assess the severity of the disease. The proposed research aims to implement a convolutional neural network (CNN)-supported scheme to detect MS lesions in the chosen brain MRI slices. The stages of this framework include (i) image collection and resizing, (ii) deep feature mining, (iii) hand-crafted feature mining, (iv) feature optimization with the firefly algorithm, and (v) serial feature integration and classification. In this work, five-fold cross-validation is executed, and the final result is considered for the assessment. The brain MRI slices with and without the skull section are examined separately, and the attained results are presented. The experimental outcome of this study confirms that VGG16 with a random forest (RF) classifier offers a classification accuracy of >98% on MRI slices with the skull, and VGG16 with K-nearest neighbor (KNN) provides an accuracy of >98% without the skull.
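The five-fold cross-validation and serial feature integration stages described above can be sketched as follows. This is a minimal sketch under assumptions: the feature dimensions (512 deep, 64 hand-crafted) and sample count are hypothetical, and the fold logic is a generic implementation rather than the authors' code.

```python
import numpy as np

def five_fold_splits(n_samples, seed=0):
    """Yield (train_idx, test_idx) pairs for five-fold cross-validation."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), 5)
    for k in range(5):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train_idx, test_idx

rng = np.random.default_rng(1)
deep_feats = rng.random((100, 512))        # e.g. VGG16 deep features
handcrafted_feats = rng.random((100, 64))  # e.g. texture/shape descriptors

# Serial feature integration: concatenate the two feature sets per sample
X = np.concatenate([deep_feats, handcrafted_feats], axis=1)

splits = list(five_fold_splits(len(X)))
for train_idx, test_idx in splits:
    # A classifier (RF or KNN in the paper) would be fit on X[train_idx]
    # and evaluated on X[test_idx] here.
    assert len(train_idx) + len(test_idx) == len(X)
```

Each sample appears in exactly one test fold, so the five per-fold scores average into an unbiased estimate of classifier performance.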


Subject(s)
Multiple Sclerosis , Humans , Multiple Sclerosis/diagnostic imaging , Head , Brain/diagnostic imaging , Algorithms , Cluster Analysis