Results 1 - 20 of 90
1.
Comput Biol Med ; 182: 109183, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39357134

ABSTRACT

Explainable artificial intelligence (XAI) aims to provide machine learning (ML) methods that people can comprehend and properly trust, and to support the construction of more explainable models. In medical imaging, XAI has been adopted to interpret deep learning black-box models and demonstrate the trustworthiness of machine decisions and predictions. In this work, we propose a deep learning and explainable AI-based framework for segmenting and classifying brain tumors. The proposed framework consists of two parts. In the first part, an encoder-decoder-based DeepLabv3+ architecture is implemented with Bayesian Optimization (BO)-based hyperparameter initialization. Multi-scale features are extracted through the Atrous Spatial Pyramid Pooling (ASPP) technique and passed to the output layer for tumor segmentation. In the second part, two customized models are proposed, named Inverted Residual Bottleneck 96 layers (IRB-96) and Inverted Residual Bottleneck Self-Attention (IRB-Self). Both models are trained on the selected brain tumor datasets, and features are extracted from the global average pooling and self-attention layers. The features are fused using a serial approach, and classification is performed. BO-based hyperparameter optimization of the neural network classifiers is performed to optimize the classification results. An XAI method named LIME is implemented to check the interpretability of the proposed models. The experimental process was performed on the Figshare dataset, yielding an average segmentation accuracy of 92.68% and a classification accuracy of 95.42%. Compared with state-of-the-art techniques, the proposed framework shows improved accuracy.
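As a concrete illustration of the LIME interpretability step described above, the following minimal Python sketch explains a single prediction of an image classifier. The predict_fn and the random image are stand-ins for the trained IRB models and an MRI slice, not the paper's actual code.

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # Stand-in for a trained classifier: scores each image by mean channel
    # intensity and normalizes to pseudo-probabilities over three classes.
    batch = np.asarray(images, dtype=float)
    scores = batch.mean(axis=(1, 2))              # N x 3 channel means
    return scores / scores.sum(axis=1, keepdims=True)

image = np.random.rand(128, 128, 3)               # stand-in for an MRI slice

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=3, num_samples=1000
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
overlay = mark_boundaries(img, mask)              # superpixels driving the prediction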

2.
Front Plant Sci ; 15: 1469685, 2024.
Article in English | MEDLINE | ID: mdl-39403618

ABSTRACT

Fruits and vegetables are among the most nutrient-dense cash crops worldwide, and diagnosing their diseases is a key challenge in protecting agricultural production. Because diseases are similar in colour, texture, and shape, they are difficult to recognize manually; the process is also time-consuming and requires an expert. To address these challenges, we propose a novel deep learning and optimization framework for apple and cucumber leaf disease classification. In the proposed framework, a hybrid contrast enhancement technique based on Bi-LSTM and haze reduction is proposed to highlight the diseased part of the image. After that, two custom models named Bottleneck Residual with Self-Attention (BRwSA) and Inverted Bottleneck Residual with Self-Attention (IBRwSA) are proposed and trained on the selected datasets. After training, testing images are employed, and deep features are extracted from the self-attention layer. The extracted deep features are fused using a concatenation approach and further optimized using an improved human learning optimization algorithm, which improves the classification accuracy and reduces the testing time. The selected features are finally classified using a shallow wide neural network (SWNN) classifier. In addition, both trained models are interpreted using an explainable AI technique, LIME, which makes it easier to understand the internal workings of both models for apple and cucumber leaf disease classification and identification. A detailed experimental process was conducted on the Apple and Cucumber datasets, on which the proposed framework obtained accuracies of 94.8% and 94.9%, respectively. A comparison with several state-of-the-art techniques showed that the proposed framework achieves improved performance.
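The concatenation-based (serial) fusion of deep features mentioned above can be sketched in a few lines of Python; the random arrays are stand-ins for the self-attention features extracted from BRwSA and IBRwSA.

import numpy as np

feats_a = np.random.rand(200, 512)   # N x d1 features from model A (illustrative)
feats_b = np.random.rand(200, 256)   # N x d2 features from model B (illustrative)

# Serial fusion: stack the feature vectors side by side -> N x (d1 + d2)
fused = np.concatenate([feats_a, feats_b], axis=1)
print(fused.shape)                   # (200, 768)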

3.
J Neurosci Methods ; 410: 110247, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39128599

ABSTRACT

The prevalence of brain tumor disorders is currently a global issue. In general, radiography, which includes a large number of images, is an efficient method for diagnosing these life-threatening disorders. The biggest issue in this area is that examining all the images is time-consuming and physically strenuous for a radiologist. As a result, research into machine learning-based systems to assist radiologists in diagnosis continues to grow daily. Convolutional neural networks (CNNs), one type of deep learning approach, have been pivotal in achieving state-of-the-art results in several medical imaging applications, including the identification of brain tumors. CNN hyperparameters are typically set manually for segmentation and classification, which takes time and increases the chance of using suboptimal hyperparameters for both tasks. Bayesian optimization is a useful method for finding a deep CNN's optimal hyperparameters. However, the CNN can be considered a "black box" model because its complexity makes the information it stores difficult to comprehend. This problem can be addressed with Explainable Artificial Intelligence (XAI) tools, which provide doctors with a realistic explanation of the CNN's assessments. Deployment of deep learning-based systems in real-time diagnosis is still rare; one cause may be that these methods do not quantify the uncertainty in their predictions, which can undermine trust in AI-based diagnosis. To be used in real-time medical diagnosis, CNN-based models must be reliable, and their uncertainty must be evaluated. Therefore, a novel three-phase strategy is proposed for segmenting and classifying brain tumors. First, segmentation is performed with the DeepLabv3+ model, with hyperparameters tuned using Bayesian optimization. For classification, features from the state-of-the-art deep learning models DarkNet-53 and MobileNetV2 are extracted and fed to an SVM, whose hyperparameters are also optimized using a Bayesian approach. The second step is to understand which portions of the images the CNN uses for feature extraction, using XAI algorithms. Finally, the uncertainty of the Bayesian-optimized classifier is quantified using confusion entropy. The experimental findings demonstrate that the proposed Bayesian-optimized deep learning framework outperforms earlier techniques, achieving a 97% classification accuracy and a 0.98 global accuracy.
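As a hedged sketch of the Bayesian hyperparameter tuning of the SVM stage, the following uses scikit-optimize's BayesSearchCV on synthetic data; the search ranges and iteration budget are illustrative assumptions, not the paper's settings.

from skopt import BayesSearchCV
from skopt.space import Real, Categorical
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=50, random_state=0)

opt = BayesSearchCV(
    SVC(),
    {
        "C": Real(1e-3, 1e3, prior="log-uniform"),
        "gamma": Real(1e-4, 1e1, prior="log-uniform"),
        "kernel": Categorical(["rbf", "poly"]),
    },
    n_iter=32,    # number of Bayesian optimization steps
    cv=5,
    random_state=0,
)
opt.fit(X, y)
print(opt.best_params_, opt.best_score_)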


Subject(s)
Bayes Theorem , Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/standards , Neural Networks, Computer , Neuroimaging/methods , Neuroimaging/standards
4.
SLAS Technol ; 29(4): 100147, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38796034

ABSTRACT

The 2019 novel coronavirus (renamed SARS-CoV-2, and generally referred to as the COVID-19 virus) has spread to 184 countries, with over 1.5 million confirmed cases. Such a major viral outbreak demands early elucidation of the taxonomic classification and origin of the virus's genomic sequence for strategic planning, containment, and treatment. Since it was identified in late December 2019 in China, the emerging global infectious disease COVID-19, caused by the novel Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), has presented critical threats to global public health and the economy. The virus has gone through various pathways of evolution, and researchers worldwide are deploying deep learning and machine learning approaches to mitigate and suppress its spread and to better understand it. In the general computational context of biomedical data analysis, DNA sequence classification is a crucial challenge: it is a key research area in bioinformatics, enabling genomic analysis and the detection of possible diseases, and several machine and deep learning techniques have been applied to it in recent years with some success. In this paper, three state-of-the-art deep learning-based models are proposed using two DNA sequence conversion methods, k-mer and one-hot encoding. We also propose a novel multi-transformer deep learning model and a pairwise feature fusion technique for DNA sequence classification. Furthermore, deep features are extracted from the last layer of the multi-transformer and used in machine learning models for DNA sequence classification. The proposed multi-transformer achieved the highest performance in COVID DNA sequence classification. Automatic identification and classification of viruses are essential to avoid an outbreak like COVID-19, and they also help in detecting the effects of viruses and in drug design.
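The two sequence conversion methods named above are standard and easy to sketch; the following minimal Python shows k-mer tokenization and one-hot encoding, with the k value chosen for illustration only.

import numpy as np

def kmers(seq: str, k: int = 3):
    """Split a DNA sequence into overlapping k-mers."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def one_hot(seq: str) -> np.ndarray:
    """Encode A/C/G/T as one-hot rows; unknown bases become all zeros."""
    table = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in table:
            out[i, table[base]] = 1.0
    return out

print(kmers("ATGCGT"))        # ['ATG', 'TGC', 'GCG', 'CGT']
print(one_hot("ATGC").shape)  # (4, 4)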


Subject(s)
COVID-19 , Machine Learning , SARS-CoV-2 , SARS-CoV-2/genetics , SARS-CoV-2/classification , COVID-19/epidemiology , COVID-19/virology , Humans , Genome, Viral , Computational Biology/methods , Deep Learning , Sequence Analysis, DNA , DNA, Viral/genetics
5.
SLAS Technol ; 29(4): 100149, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38796035

ABSTRACT

OBJECTIVE: This study aims to diagnose rotator cuff tears (RCT) and classify their severity in patients with osteoporosis (OP) by analyzing localized proximal humeral bone mineral density (BMD) measurements from anteroposterior (AP) shoulder joint X-rays together with clinical information, using machine learning (ML) models. METHODS: A retrospective cohort of 89 patients was analyzed, including 63 with both OP and RCT (OPRCT) and 26 with OP only, based on shoulder radiographs taken from April 2021 to April 2023. Grayscale values were measured after plotting regions of interest (ROIs) on AP X-rays of the shoulder joint. Five ML models were developed and compared on their performance in predicting the occurrence and severity of RCT from the ROIs' grayscale values and clinical information (age, gender, dominant side, lumbar BMD, and acromion morphology (AM)). Further analysis using SHAP values illustrated the impact of the selected features on model predictions. RESULTS: The grayscale values of ROIs R1-R6 each correlated positively with BMD. Nine variables, including the R1-R6 grayscale values, age, BMD, and AM, were used in the prediction models. The random forest (RF) model was superior in diagnosing RCT in OP patients, with high AUC scores of 0.998, 0.889, and 0.95 in the training, validation, and testing sets, respectively. SHAP values revealed that the most influential factors on the diagnostic outcomes were the grayscale values of the cancellous bone in the ROIs. A nomogram prediction model based on the nine variables was constructed, and decision curve analysis (DCA) indicated that RCT prediction in OP patients was favored by this model. Furthermore, the RF model was also the best at predicting the type of RCT within the OPRCT group, with accuracies of 86.364% and 73.684% in the training and test sets, respectively. SHAP values indicated that the most significant factor affecting the predictive outcomes was the AM, followed by the grayscale values of the greater tubercle, among others. CONCLUSIONS: ML models, particularly the RF algorithm, show significant promise in diagnosing RCT occurrence and severity in OP patients from conventional shoulder X-rays using the nine variables. This method presents a cost-effective, accessible, and non-invasive diagnostic strategy that could substantially enhance early detection and management of RCT in the OP patient population.
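A hedged sketch of the SHAP-based feature attribution for the RF classifier follows; the synthetic arrays stand in for the nine clinical and grayscale variables, and the cohort size mirrors the study's 89 patients for illustration only.

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((89, 9))                  # 89 patients x 9 variables (illustrative)
y = (rng.random(89) > 0.3).astype(int)   # RCT present/absent labels (illustrative)

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
sv = explainer(X)                        # Explanation object; .values holds attributions
print(np.abs(sv.values).mean(axis=0))    # mean |SHAP| per feature across patients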


Subject(s)
Bone Density , Machine Learning , Osteoporosis , Rotator Cuff Injuries , Humans , Osteoporosis/diagnostic imaging , Osteoporosis/diagnosis , Male , Rotator Cuff Injuries/diagnostic imaging , Female , Retrospective Studies , Aged , Middle Aged , Shoulder Joint/diagnostic imaging , Radiography/methods , Aged, 80 and over
6.
Front Comput Neurosci ; 18: 1393849, 2024.
Article in English | MEDLINE | ID: mdl-38725868

ABSTRACT

Alzheimer's disease (AD) is a neurodegenerative illness that impairs cognition, function, and behavior by causing irreversible damage to multiple brain areas, including the hippocampus. An early diagnosis of AD lessens the suffering of patients and their family members. Automatic diagnosis techniques are widely needed due to the shortage of medical experts, and they ease the burden on medical staff: automatic artificial intelligence (AI)-based computerized methods can help experts achieve better diagnostic accuracy and precision. This study proposes a new automated framework for AD stage prediction based on a ResNet-Self architecture and a Fuzzy Entropy-controlled Path-Finding Algorithm (FEcPFA). A data augmentation technique is utilized to resolve the dataset imbalance issue. We then propose a new deep learning model based on a self-attention module: a ResNet-50 architecture is modified and connected with a self-attention block to extract important information. The hyperparameters were optimized using Bayesian optimization (BO) and then utilized to train the model, which was subsequently employed for feature extraction. The self-attention features were optimized using the proposed FEcPFA, and the best selected features were passed to machine learning classifiers for the final classification. The experimental process utilized a publicly available MRI dataset and achieved an improved accuracy of 99.9%. The results were compared with state-of-the-art (SOTA) techniques, demonstrating the improvement of the proposed framework in terms of accuracy and time efficiency.
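A minimal PyTorch sketch of attaching a self-attention block to ResNet-50 features, in the spirit of the ResNet-Self design described above; the head size, number of attention heads, and class count are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class ResNetSelf(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        backbone = resnet50(weights=None)  # random init; pretrained weights assumed in practice
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # N x 2048 x 7 x 7
        self.attn = nn.MultiheadAttention(embed_dim=2048, num_heads=8, batch_first=True)
        self.head = nn.Linear(2048, num_classes)

    def forward(self, x):
        f = self.features(x)                   # N x 2048 x H x W feature map
        n, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # N x (H*W) x 2048 token sequence
        attended, _ = self.attn(tokens, tokens, tokens)
        pooled = attended.mean(dim=1)          # global average over attended tokens
        return self.head(pooled)

logits = ResNetSelf()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])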

7.
BMC Med Inform Decis Mak ; 24(1): 92, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38575951

ABSTRACT

Emerging from the convergence of digital twin technology and the metaverse, metaverse consumer health (MCH) is witnessing a transformative shift. The amalgamation of bioinformatics with healthcare Big Data has ushered in a new era of disease prediction models that harness comprehensive medical data, enabling the anticipation of illnesses even before the onset of symptoms. Deep neural networks stand out in this setting because they improve accuracy remarkably by increasing network depth and updating weights via gradient descent. Nonetheless, such traditional methods face their own challenges, including gradient instability and slow training. Here the Broad Learning System (BLS) is a good alternative: it avoids the problems of gradient descent and allows a model to be rebuilt quickly through incremental learning. One problem with BLS, however, is that it struggles to extract complex features from complex medical data, which limits its usefulness across a wide range of healthcare situations. In response to these challenges, we introduce DAE-BLS, a novel hybrid model that combines Denoising AutoEncoder (DAE) noise reduction with the efficiency of BLS. This hybrid approach excels at robust feature extraction, particularly within the intricate and multifaceted world of medical data. Validation using diverse datasets yields impressive results, with accuracies reaching as high as 98.50%. DAE-BLS's ability to adapt rapidly through incremental learning holds great promise for accurate and agile disease prediction, especially within today's complex and dynamic healthcare scenarios.
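A minimal PyTorch sketch of the DAE stage of DAE-BLS: corrupt the input with Gaussian noise and train the network to reconstruct the clean record. Dimensions, noise level, and training loop are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    def __init__(self, in_dim: int = 64, hidden: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, in_dim)

    def forward(self, x, noise_std: float = 0.1):
        noisy = x + noise_std * torch.randn_like(x)  # corrupt the input
        return self.decoder(self.encoder(noisy))

dae = DenoisingAutoEncoder()
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
x = torch.randn(32, 64)                  # a batch of medical records (illustrative)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(dae(x), x)  # reconstruct the *clean* input
    loss.backward()
    opt.step()
# The encoder output would then feed the BLS feature nodes.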


Subject(s)
Big Data , Technology , Humans , Computational Biology , Health Facilities , Neural Networks, Computer
8.
Front Oncol ; 14: 1347856, 2024.
Article in English | MEDLINE | ID: mdl-38454931

ABSTRACT

With over 2.1 million new cases diagnosed annually, the incidence and mortality of breast cancer pose severe global health issues for women. Early identification of the disease is the only practical way to lessen its impact immediately. Numerous research works have developed automated methods using different medical imaging modalities to identify breast cancer, but the precision of each strategy differs with the available resources, the nature of the problem, and the dataset used. We propose a novel deep bottleneck convolutional neural network with a quantum optimization algorithm for breast cancer classification and diagnosis from mammogram images. Two novel deep architectures, named three-residual-block bottleneck and four-residual-block bottleneck, are proposed with parallel and single paths. Bayesian Optimization (BO) is employed to initialize hyperparameter values and train the architectures on the selected dataset. Deep features are extracted from the global average pooling layer of both models. After that, a kernel-based canonical correlation analysis and entropy technique is proposed for fusing the extracted deep features. The fused feature set is further refined using an optimization technique named quantum generalized normal distribution optimization. The selected features are finally classified using several neural network classifiers, such as bi-layered and wide neural networks. The experimental process was conducted on the publicly available INbreast mammogram imaging dataset, and a maximum accuracy of 96.5% was obtained. Moreover, the proposed method achieved a sensitivity of 96.45%, a precision of 96.5%, an F1 score of 96.64%, an MCC of 92.97%, and a Kappa of 92.97%. The proposed architectures are further utilized for diagnosing infected regions. In addition, a detailed comparison with several recent techniques shows the proposed framework's higher accuracy and precision.
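A hedged sketch of canonical-correlation-based fusion of the two deep feature sets; scikit-learn's linear CCA stands in here for the kernel-based variant described above, and the arrays are stand-ins for global-average-pool features.

import numpy as np
from sklearn.cross_decomposition import CCA

feats_a = np.random.rand(150, 256)  # global-average-pool features, model A (illustrative)
feats_b = np.random.rand(150, 256)  # global-average-pool features, model B (illustrative)

cca = CCA(n_components=64)
za, zb = cca.fit_transform(feats_a, feats_b)

# Fuse the maximally correlated projections; summation and concatenation
# are both common choices, and concatenation is shown here.
fused = np.concatenate([za, zb], axis=1)   # N x 128
print(fused.shape)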

9.
Sci Rep ; 14(1): 5895, 2024 03 11.
Article in English | MEDLINE | ID: mdl-38467755

ABSTRACT

A significant issue in computer-aided diagnosis (CAD) for medical applications is brain tumor classification. With machine learning algorithms, radiologists could reliably detect tumors without invasive procedures. However, a few important challenges arise, such as (i) the selection of the most suitable deep learning architecture for classification and (ii) the availability of an expert in the field who can assess the output of deep learning models. These difficulties motivate us to propose an efficient and accurate system based on deep learning and evolutionary optimization for classifying four brain MRI modalities (T1, T1ce, T2, and FLAIR tumors) on a large-scale MRI database. A CNN architecture is modified based on domain knowledge and connected with an evolutionary optimization algorithm to select hyperparameters. In parallel, a stacked encoder-decoder network is designed with ten convolutional layers. The features of both models are extracted and optimized using an improved version of the Grey Wolf Optimizer with the updated criteria of the Jaya algorithm, which speeds up the learning process and improves the accuracy. Finally, the selected features are fused using a novel parallel pooling approach and classified using machine learning and neural networks. Two datasets, BraTS2020 and BraTS2021, were employed for the experiments, yielding an improved average accuracy of 98% and a maximum single-classifier accuracy of 99%. Comparisons with several classifiers, techniques, and neural nets show that the proposed method achieves improved performance.


Subject(s)
Brain Neoplasms , Deep Learning , Delayed Emergence from Anesthesia , Humans , Neural Networks, Computer , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging
10.
Front Oncol ; 14: 1335740, 2024.
Article in English | MEDLINE | ID: mdl-38390266

ABSTRACT

Brain tumor classification is one of the most difficult tasks for clinical diagnosis and treatment in medical image analysis, and any error throughout the diagnosis process may shorten a patient's life. Nevertheless, most currently used techniques ignore certain features that have particular significance and relevance to the classification problem, in favor of extracting and choosing deep features. One important area of research is the deep learning-based categorization of brain tumors using brain magnetic resonance imaging (MRI). This paper proposes an automated deep learning model and an optimal information fusion framework for classifying brain tumors from MRI images. The dataset used in this work was imbalanced, a key challenge for training the selected networks: imbalance in the training dataset biases classifier performance in favor of the majority class and thus degrades deep learning models. We designed a sparse autoencoder network to generate new images that resolve the imbalance problem. After that, two pretrained neural networks were modified, their hyperparameters were initialized using Bayesian optimization, and the networks were trained. Deep features were then extracted from the global average pooling layer. Because the extracted features contained some irrelevant information, we proposed an improved Quantum Theory-based Marine Predator Optimization algorithm (QTbMPA), which selects the best features of both networks; these are finally fused using a serial-based approach. The fused feature set is passed to neural network classifiers for the final classification. The proposed framework was tested on an augmented Figshare dataset and obtained an improved accuracy of 99.80%, a sensitivity of 99.83%, a false negative rate of 0.17%, and a precision of 99.83%. A comparison and ablation study show the improvement in the accuracy of this work.

11.
Math Biosci Eng ; 20(11): 19454-19467, 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-38052609

ABSTRACT

Cancer occurrence rates are gradually rising in the population, which creates a heavy diagnostic burden globally. The rate of colorectal (bowel) cancer (CC) is rising as well, and CC is currently listed as the third most common cancer globally; early screening and treatment under a recommended clinical protocol are therefore necessary. The aim of the proposed research is to develop a deep-learning framework (DLF) to classify colon histology slides into normal/cancer classes using deep-learning-based features. The stages of the framework include: (i) image collection, resizing, and pre-processing; (ii) deep-feature (DF) extraction with a chosen scheme; (iii) binary classification with 5-fold cross-validation; and (iv) verification of the clinical significance. This work classifies the considered image database using (i) individual DF, (ii) fused DF, and (iii) ensemble DF, and the achieved results are separately verified using binary classifiers. The proposed work considered 4000 histology slides (2000 normal and 2000 cancer) for the examination. The results confirm that the fused DF achieve a detection accuracy of 99% with the K-Nearest Neighbor (KNN) classifier, while the individual and ensemble DF provide classification accuracies of 93.25% and 97.25%, respectively.
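The binary classification stage above reduces to standard tooling; the following minimal sketch runs 5-fold cross-validation of a KNN classifier over deep features, with synthetic arrays standing in for the fused DF vectors and the 2000/2000 class split.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((4000, 512))     # 4000 slides x fused-feature dim (illustrative)
y = np.repeat([0, 1], 2000)     # 2000 normal, 2000 cancer

scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)
print(scores.mean())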


Subject(s)
Deep Learning , Neoplasms , Humans , Algorithms , Image Processing, Computer-Assisted/methods , Colon , Neoplasms/diagnosis
12.
Open Life Sci ; 18(1): 20220764, 2023.
Article in English | MEDLINE | ID: mdl-38027230

ABSTRACT

In the rapidly evolving landscape of agricultural technology, image processing has emerged as a powerful tool for addressing critical agricultural challenges, with a particular focus on the identification and management of crop diseases. This study is motivated by the imperative need to enhance agricultural sustainability and productivity through precise plant health monitoring. Our primary objective is to propose an innovative approach that combines a support vector machine (SVM) with advanced image processing techniques for precise detection and classification of fig leaf diseases. Our methodology is a step-by-step process, beginning with the acquisition of digital color images of diseased leaves, followed by denoising using the mean function and enhancement through contrast-limited adaptive histogram equalization (CLAHE). The subsequent stages involve segmentation through the Fuzzy C-Means algorithm, feature extraction via Principal Component Analysis, and disease classification employing Particle Swarm Optimization (PSO) in conjunction with SVM, a backpropagation neural network, and random forest algorithms. The results showcase the exceptional performance of the PSO-SVM algorithm in accurately classifying and detecting fig leaf disease, demonstrating its potential for practical implementation in agriculture. This approach underscores the significance of advanced image processing techniques and their substantial contributions to sustainable agriculture and plant disease mitigation. In conclusion, the integration of image processing and SVM-based classification offers a promising avenue for advancing crop disease management, ultimately bolstering agricultural productivity and global food security.
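The denoising and enhancement steps above map directly onto OpenCV primitives; a hedged sketch follows, with the file path, kernel size, and CLAHE parameters as illustrative assumptions.

import cv2

img = cv2.imread("fig_leaf.jpg")              # illustrative path
img = cv2.blur(img, (3, 3))                   # mean-filter denoising
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)    # equalize luminance only
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.merge([clahe.apply(l), a, b])
result = cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)
cv2.imwrite("fig_leaf_clahe.jpg", result)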

13.
Open Life Sci ; 18(1): 20220746, 2023.
Article in English | MEDLINE | ID: mdl-37954104

ABSTRACT

Lung cancer is a substantial health issue globally and one of the main causes of mortality. Malignant mesothelioma (MM) is a common kind of lung cancer, and the majority of patients with MM have no symptoms. In the diagnosis of any disease, etiology is crucial. MM risk-factor detection procedures include positron emission tomography, magnetic resonance imaging, biopsies, X-rays, and blood tests, which are all necessary but costly and intrusive. This study primarily concentrated on the investigation of MM risk variables: mesothelioma symptoms were detected with the help of data from mesothelioma patients, using a dataset that included both healthy individuals and mesothelioma patients. Classification algorithms for MM diagnosis were implemented using computationally efficient data mining techniques. The support vector machine (SVM) outperformed the multilayer perceptron ensemble (MLPE) neural network (NN), yielding promising findings: with 10-fold cross-validation over 5 runs, SVM achieved 99.87% classification accuracy, compared with 99.56% for the MLPE NN. In addition, an SPSS analysis was carried out to collect pertinent experimental data.

14.
Diagnostics (Basel) ; 13(19)2023 Sep 26.
Article in English | MEDLINE | ID: mdl-37835807

ABSTRACT

Cancer is one of the leading causes of illness and chronic disease worldwide. Skin cancer, particularly melanoma, is becoming a severe health problem due to its rising prevalence, and the considerable death rate linked to melanoma makes early detection essential for immediate and successful treatment. Lesion detection and classification are challenging due to many forms of artifacts, such as hairs and noise, and due to irregularity of lesion shape and color, irrelevant features, and textures. In this work, we propose a deep learning architecture for multiclass skin cancer classification and melanoma detection. The proposed architecture consists of four core steps: image preprocessing, feature extraction and fusion, feature selection, and classification. A novel contrast enhancement technique based on image luminance information is proposed. After that, two pre-trained deep models, DarkNet-53 and DenseNet-201, are modified with a residual block at the end and trained through transfer learning; during learning, a genetic algorithm is applied to select hyperparameters. The resultant features are fused using a two-step approach named serial-harmonic mean. This step increases the rate of correct classification, but some irrelevant information remains; therefore, an algorithm named marine predator optimization (MPA) controlled Renyi entropy is developed to select the best features. The selected features are finally classified using machine learning classifiers. Two datasets, ISIC2018 and ISIC2019, were selected for the experimental process, on which maximum accuracies of 85.4% and 98.80%, respectively, were obtained. To prove the effectiveness of the proposed methods, a detailed comparison with several recent techniques shows that the proposed framework outperforms them.

15.
Diagnostics (Basel) ; 13(18)2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37761236

ABSTRACT

Background: Using artificial intelligence (AI) in a deep learning-based automated computer-aided diagnosis (CAD) system has shown improved performance for skin lesion classification. Although deep convolutional neural networks (DCNNs) have significantly improved many image classification tasks, it is still difficult to classify skin lesions accurately because of a lack of training data, inter-class similarity, intra-class variation, and the inability to concentrate on semantically significant lesion parts. Innovations: To address these issues, we propose an automated deep learning and best-feature-selection framework for multiclass skin lesion classification in dermoscopy images. The proposed framework begins with a preprocessing step for contrast enhancement using a new technique based on dark-channel haze and top-bottom filtering. Three pre-trained deep learning models are then fine-tuned and trained using the transfer learning concept. In the fine-tuning process, we added and removed a few layers to lessen the number of parameters and selected the hyperparameters using a genetic algorithm (GA) instead of manual assignment, with the aim of improving learning performance. After that, the deeper layer is selected for each network and deep features are extracted. The extracted deep features are fused using a novel serial correlation-based approach, which reduces the feature vector length but retains a little redundant information. To address this, we propose an improved ant lion optimization algorithm for best feature selection. The selected features are finally classified using machine learning algorithms. Main Results: The experimental process was conducted on two publicly available datasets, ISIC2018 and ISIC2019, obtaining accuracies of 96.1% and 99.9%, respectively. A comparison with state-of-the-art techniques shows that the proposed framework improves accuracy. Conclusions: The proposed framework successfully enhances the contrast of the cancer region. Moreover, selecting hyperparameters with automated techniques improves the learning process, and the proposed fusion and improved selection steps maintain the best accuracy while shortening the computational time.
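A deliberately toy sketch of GA-based hyperparameter selection in the spirit of the step above: evolve (learning rate, batch size) pairs against a fitness function; the evaluate function here is a placeholder standing in for training and validating a fine-tuned network.

import random

def evaluate(lr, batch):
    # Placeholder fitness: peaks near lr = 1e-3, batch = 32 (illustrative).
    return -abs(lr - 1e-3) * 100 - abs(batch - 32) / 64

def mutate(ind):
    lr, batch = ind
    return (lr * random.uniform(0.5, 2.0),
            max(8, min(128, batch + random.choice([-8, 8]))))

pop = [(10 ** random.uniform(-5, -1), random.choice([8, 16, 32, 64]))
       for _ in range(10)]
for _ in range(20):                         # generations
    pop.sort(key=lambda ind: evaluate(*ind), reverse=True)
    parents = pop[:5]                       # selection: keep the fittest half
    pop = parents + [mutate(random.choice(parents)) for _ in range(5)]

print(max(pop, key=lambda ind: evaluate(*ind)))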

16.
Diagnostics (Basel) ; 13(17)2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37685369

ABSTRACT

In recent times, diabetic foot ulcer (DFU) has become a universal health problem that severely affects many diabetes patients, and it requires immediate, proper treatment to avert amputation. Clinical examination of DFU is a tedious and complex process. Deep learning (DL) methodologies, with their efficient learning capacity, can show prominent outcomes in DFU classification. Though traditional systems have used DL-based models to procure better performance, there is room for improvement in accuracy. Therefore, the present study uses the AWSg-CNN (Adaptive Weighted Sub-gradient Convolutional Neural Network) method to classify DFU on a DFUC dataset. The proposed method starts with pre-processing, excluding inconsistent and missing data, to enhance dataset quality and accuracy. For classification, it utilizes random initialization of weights (RIW) and log softmax with the Adaptive Sub-gradient Optimizer (ASGO). In this process, RIW efficiently learns the shift of feature space between the convolutional layers, and the log softmax function is used to evade the underflow of gradients. When log softmax with ASGO is used for the activation function, the gradient steps are controlled; an adaptive modification of the proximal function simplifies the learning rate significantly and produces optimal proximal functions. Due to these merits, the proposed method can perform better classification. The predicted results are displayed on a webpage through HTML, CSS, and the Flask framework. The effectiveness of the proposed system is evaluated with accuracy, recall, F1-score, and precision to confirm its effectual performance.
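A minimal PyTorch sketch of the log-softmax output used to avoid gradient underflow, paired with the negative log-likelihood loss; plain Adagrad, which implements the classic adaptive subgradient method, is used here as a stand-in for the paper's ASGO, and the tiny model and shapes are illustrative.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2), nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()                    # expects log-probabilities
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)

x = torch.randn(8, 3, 64, 64)               # a batch of DFU images (illustrative)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)               # log-softmax keeps log-probs numerically stable
loss.backward()
optimizer.step()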

17.
Article in English | MEDLINE | ID: mdl-37436864

ABSTRACT

The proposed study is based on a feature and channel selection strategy that uses correlation filters for brain-computer interface (BCI) applications using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) brain imaging modalities. The proposed approach fuses the complementary information of the two modalities to train the classifier. The channels most closely correlated with brain activity are extracted using a correlation-based connectivity matrix for fNIRS and EEG separately. Furthermore, the training vector is formed through the identification and fusion of the statistical features of both modalities (i.e., slope, skewness, maximum, mean, and kurtosis). The constructed fused feature vector is passed through various filters (including ReliefF, minimum redundancy maximum relevance, chi-square test, analysis of variance, and Kruskal-Wallis filters) to remove redundant information before training. Traditional classifiers such as neural networks, support vector machines, linear discriminant analysis, and ensembles were used for training and testing. A publicly available dataset with motor imagery information was used to validate the proposed approach. Our findings indicate that the proposed correlation-filter-based channel and feature selection framework significantly enhances the classification accuracy of hybrid EEG-fNIRS. The ReliefF-based filter with the ensemble classifier outperformed the other filters, achieving a high accuracy of 94.77 ± 4.26%. Statistical analysis also validated the significance (p < 0.01) of the results, and a comparison of the proposed framework with prior findings was presented. Our results show that the proposed approach can be used in future EEG-fNIRS-based hybrid BCI applications.
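A hedged sketch of correlation-based channel selection: rank channels by the strength of their correlation with a reference signal and keep the top ones. The synthetic recordings, channel count, and reference signal are illustrative stand-ins for the EEG/fNIRS data and connectivity matrix used in the study.

import numpy as np

rng = np.random.default_rng(0)
signals = rng.standard_normal((36, 5000))     # 36 channels x samples (illustrative)
task = rng.standard_normal(5000)              # reference activity signal (illustrative)

# Pearson correlation of each channel with the reference signal
corr = np.array([np.corrcoef(ch, task)[0, 1] for ch in signals])
top_channels = np.argsort(-np.abs(corr))[:8]  # keep the 8 best-correlated channels
print(top_channels)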

18.
Diagnostics (Basel) ; 13(11)2023 May 29.
Article in English | MEDLINE | ID: mdl-37296750

ABSTRACT

Mental stress is known as a prime factor in road crashes, the devastation of which often results in damage to humans, vehicles, and infrastructure. Likewise, persistent mental stress can lead to mental, cardiovascular, and abdominal disorders. Previous research in this domain mostly focuses on feature engineering and conventional machine learning approaches, which recognize different levels of stress based on handcrafted features extracted from various modalities, including physiological, physical, and contextual data. Acquiring good-quality features from these modalities using feature engineering is often difficult. Recent developments in deep learning (DL) algorithms have relieved feature engineering by automatically extracting and learning resilient features. This paper proposes different CNN and CNN-LSTM-based fusion models using physiological signals (the SRAD dataset) and multimodal data (the AffectiveROAD dataset) for two and three levels of driver stress. The fuzzy EDAS (evaluation based on distance from average solution) approach is used to evaluate the performance of the proposed models on different classification metrics (accuracy, recall, precision, F-score, and specificity). Fuzzy EDAS performance estimation shows that the proposed CNN and hybrid CNN-LSTM models achieved the first ranks based on the fusion of BH, E4-Left (E4-L), and E4-Right (E4-R) data. The results show the significance of multimodal data for designing an accurate and trustworthy stress recognition model for real-world driving conditions. The proposed model can also be used to diagnose a subject's stress level during other daily life activities.
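A minimal PyTorch sketch of a hybrid CNN-LSTM for windowed physiological signals, in the spirit of the models above; the channel count, layer sizes, and window length are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, channels: int = 4, classes: int = 3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, classes)

    def forward(self, x):                         # x: N x channels x time
        f = self.cnn(x)                           # N x 32 x time/2
        _, (h, _) = self.lstm(f.transpose(1, 2))  # LSTM over the time axis
        return self.fc(h[-1])                     # classify from the last hidden state

out = CNNLSTM()(torch.randn(8, 4, 256))
print(out.shape)  # torch.Size([8, 3])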

19.
Math Biosci Eng ; 20(6): 10404-10427, 2023 04 06.
Article in English | MEDLINE | ID: mdl-37322939

ABSTRACT

One of the most effective approaches for identifying breast cancer is histology, the meticulous inspection of tissues under a microscope. The kind of cancer cells, and whether they are cancerous (malignant) or non-cancerous (benign), is typically determined from the type of tissue analyzed in the test performed by the technician. The goal of this study was to automate invasive ductal carcinoma (IDC) classification within breast cancer histology samples using a transfer learning technique. To improve our outcomes, we combined Gradient-weighted Class Activation Mapping (Grad-CAM) and an image coloring mechanism with a discriminative fine-tuning methodology employing a one-cycle policy using FastAI techniques. Many research studies on deep transfer learning use the same mechanism, but this report uses a transfer learning mechanism based on the lightweight SqueezeNet architecture, a variant of convolutional neural network (CNN). This strategy demonstrates that fine-tuning SqueezeNet makes it possible to achieve satisfactory results when transferring generic features from natural images to medical images.
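A hedged sketch of fine-tuning a pretrained SqueezeNet for binary IDC classification using torchvision rather than FastAI; the two-class head swap and initial feature freezing follow the usual SqueezeNet transfer-learning recipe, not necessarily the paper's exact setup.

import torch
import torch.nn as nn
from torchvision.models import squeezenet1_1, SqueezeNet1_1_Weights

model = squeezenet1_1(weights=SqueezeNet1_1_Weights.DEFAULT)  # downloads pretrained weights
# SqueezeNet classifies via a 1x1 conv; swap it for a 2-class head
model.classifier[1] = nn.Conv2d(512, 2, kernel_size=1)
model.num_classes = 2

# Freeze the generic feature extractor; train only the new head at first
for p in model.features.parameters():
    p.requires_grad = False

logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])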


Subject(s)
Breast Neoplasms , Deep Learning , Humans , Female , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Neural Networks, Computer
20.
Front Oncol ; 13: 1151257, 2023.
Article in English | MEDLINE | ID: mdl-37346069

ABSTRACT

Skin cancer is a serious disease that affects people all over the world. Melanoma is an aggressive form of skin cancer, and early detection can significantly reduce mortality; in the United States, approximately 97,610 new cases of melanoma will be diagnosed in 2023. However, challenges such as lesion irregularities, low-contrast lesions, intraclass color similarity, redundant features, and imbalanced datasets make it extremely difficult to improve recognition accuracy with computerized techniques. This work presents a new framework for skin lesion recognition using data augmentation, deep learning, and explainable artificial intelligence. In the proposed framework, data augmentation is performed at the initial step to increase the dataset size, and then two pretrained deep learning models, Xception and ShuffleNet, are fine-tuned and trained using deep transfer learning. Both models utilize the global average pooling layer for deep feature extraction. Since analysis of this step showed that some important information was missing, we performed feature fusion. Because the fusion process increased the computational time, we developed an improved Butterfly Optimization Algorithm, which selects only the best features for classification with machine learning classifiers. In addition, a Grad-CAM-based visualization is performed to analyze the important regions of the image. Two publicly available datasets, ISIC2018 and HAM10000, were utilized, obtaining improved accuracies of 99.3% and 91.5%, respectively. Comparing the proposed framework with state-of-the-art methods reveals improved accuracy and less computational time.
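A minimal Grad-CAM sketch using forward and backward hooks in PyTorch, illustrating the visualization step described above; the ResNet-18 backbone and target layer are illustrative choices, not the paper's Xception/ShuffleNet models.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # random init; pretrained weights assumed in practice
acts, grads = {}, {}

layer = model.layer4                    # last convolutional stage
layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)
score = model(x)[0].max()               # score of the top predicted class
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)   # per-channel importances
cam = F.relu((weights * acts["v"]).sum(dim=1))        # weighted activation map
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
print(cam.shape)                        # 1 x 1 x 224 x 224 heatmap over the input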
