Results 1 - 20 of 121
1.
Front Plant Sci ; 15: 1402835, 2024.
Article in English | MEDLINE | ID: mdl-38988642

ABSTRACT

The agricultural sector is pivotal to food security and economic stability worldwide. Corn holds particular significance in the global food industry, especially in developing countries where agriculture is a cornerstone of the economy. However, corn crops are vulnerable to various diseases that can significantly reduce yields. Early detection and precise classification of these diseases are crucial to prevent damage and ensure high crop productivity. This study leverages the VGG16 deep learning (DL) model to classify corn leaves into four categories: healthy, blight, gray spot, and common rust. Despite the efficacy of DL models, they often face challenges related to the explainability of their decision-making processes. To address this, Layer-wise Relevance Propagation (LRP) is employed to enhance the model's transparency by generating intuitive and human-readable heat maps of input images. The proposed VGG16 model, augmented with LRP, outperformed previous state-of-the-art models in classifying corn leaf diseases. Simulation results demonstrated that the model not only achieved high accuracy but also provided interpretable results, highlighting critical regions in the images used for classification. By generating human-readable explanations, this approach ensures greater transparency and reliability in model performance, aiding farmers in improving their crop yields.
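
For illustration, a minimal Keras sketch of the kind of VGG16 transfer-learning classifier this abstract describes; the input size, head layers, and optimizer are illustrative assumptions, not the authors' exact configuration:

```python
# Hedged sketch: frozen VGG16 backbone with a small classification head
# for the four corn-leaf classes named in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 4  # healthy, blight, gray spot, common rust

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained convolutional features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # assumed head width
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```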

2.
BMC Med Imaging ; 24(1): 156, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38910241

ABSTRACT

Parkinson's disease (PD) is challenging for clinicians to diagnose accurately in its early stages. Quantitative measures of brain health can be obtained safely and non-invasively using medical imaging techniques such as magnetic resonance imaging (MRI) and single photon emission computed tomography (SPECT). Accurate diagnosis of PD therefore requires powerful machine learning and deep learning models alongside effective medical imaging tools for assessing neurological health. This study proposes four deep learning models plus a hybrid model for the early detection of PD, evaluated on two standard datasets. To further improve performance, grey wolf optimization (GWO) is used to automatically fine-tune the hyperparameters of the models. The resulting GWO-VGG16, GWO-DenseNet, GWO-DenseNet + LSTM, GWO-InceptionV3, and GWO-VGG16 + InceptionV3 models are applied to the T1/T2-weighted MRI and SPECT DaTscan datasets. All models performed well, reaching accuracies near or above 99%. The hybrid model (GWO-VGG16 + InceptionV3) achieved the highest accuracy of 99.94% with an AUC of 99.99% on the T1/T2-weighted dataset, and 100% accuracy with a 99.92% AUC on the SPECT DaTscan dataset.
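
To make the GWO step concrete, here is a toy grey wolf optimization loop over two hyperparameters. The objective is a smooth stand-in for the train-and-validate score the study would use, and the bounds, pack size, and iteration count are all assumptions:

```python
# Toy grey wolf optimization (GWO): wolves move toward the three best
# solutions (alpha, beta, delta) with a linearly decaying step coefficient.
import numpy as np

def objective(pos):
    lr, dropout = pos
    # Hypothetical surrogate for validation error; replace with a real
    # train-and-evaluate call in practice.
    return (np.log10(lr) + 3) ** 2 + (dropout - 0.4) ** 2

rng = np.random.default_rng(0)
low = np.array([1e-5, 0.1])              # assumed search-space lower bounds
high = np.array([1e-1, 0.7])             # assumed search-space upper bounds
wolves = rng.uniform(low, high, size=(10, 2))
T = 30                                   # iterations

for t in range(T):
    fitness = np.array([objective(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
    a = 2 - 2 * t / T                    # linearly decreasing coefficient
    for i in range(len(wolves)):
        new = np.zeros(2)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(2), rng.random(2)
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - wolves[i])
            new += leader - A * D        # step toward each leader
        wolves[i] = np.clip(new / 3, low, high)

best = wolves[np.argmin([objective(w) for w in wolves])]
print("best (learning rate, dropout):", best)
```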


Subject(s)
Algorithms , Deep Learning , Magnetic Resonance Imaging , Parkinson Disease , Tomography, Emission-Computed, Single-Photon , Humans , Parkinson Disease/diagnostic imaging , Tomography, Emission-Computed, Single-Photon/methods , Magnetic Resonance Imaging/methods , Male , Female
3.
Ultrasound Med Biol ; 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38910034

ABSTRACT

BACKGROUND: Ultrasound image examination has become the preferred choice for diagnosing metabolic dysfunction-associated steatotic liver disease (MASLD) due to its non-invasive nature. Computer-aided diagnosis (CAD) technology can assist doctors in avoiding deviations in the detection and classification of MASLD. METHOD: We propose a hybrid model that integrates the pre-trained VGG16 network with an attention mechanism and a stacking ensemble learning model. It performs multi-scale feature aggregation based on the self-attention mechanism and fuses multiple classifiers (logistic regression, random forest, support vector machine) through stacking ensemble learning. The proposed hybrid method performs four-class classification of normal, mild, moderate, and severe fatty liver from ultrasound images. RESULT AND CONCLUSION: Our proposed hybrid model reaches an accuracy of 91.34% and exhibits superior robustness against interference, outperforming traditional neural network algorithms. Experimental results show that, compared with the pre-trained VGG16 model, adding the self-attention mechanism improves the accuracy by 3.02%. Using the stacking ensemble learning model as the classifier further increases the accuracy to 91.34%, exceeding any single classifier such as LR (89.86%), SVM (90.34%), and RF (90.73%). The proposed hybrid method can effectively improve the efficiency and accuracy of MASLD ultrasound image detection.
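
A minimal scikit-learn sketch of the stacking step described above; the random feature matrix stands in for the VGG16 + attention features, and the base-learner settings are assumptions:

```python
# Hedged sketch: LR, RF, and SVM base learners fused by a logistic
# regression meta-learner, as in stacking ensemble learning.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X = np.random.rand(200, 512)       # placeholder deep-feature matrix
y = np.random.randint(0, 4, 200)   # four severity classes

stack = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("svm", SVC(probability=True)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,   # out-of-fold predictions feed the meta-learner
)
stack.fit(X, y)
```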

4.
Diagnostics (Basel) ; 14(12)2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38928647

ABSTRACT

This study evaluates the efficacy of several Convolutional Neural Network (CNN) models for the classification of hearing loss in patients using preprocessed auditory brainstem response (ABR) image data. Specifically, we employed six CNN architectures (VGG16, VGG19, DenseNet-121, DenseNet-201, AlexNet, and InceptionV3) to differentiate between patients with hearing loss and those with normal hearing. A dataset comprising 7990 preprocessed ABR images was utilized to assess the performance and accuracy of these models. Each model was systematically tested to determine its capability to accurately classify hearing loss. A comparative analysis of the models focused on accuracy and computational efficiency. The results indicated that the AlexNet model exhibited superior performance, achieving an accuracy of 95.93%. These findings suggest that deep learning models, particularly AlexNet in this instance, hold significant potential for automating the diagnosis of hearing loss using ABR graph data. Future work will aim to refine these models to enhance their diagnostic accuracy and efficiency, fostering their practical application in clinical settings.

5.
Sensors (Basel) ; 24(11)2024 May 26.
Article in English | MEDLINE | ID: mdl-38894210

ABSTRACT

In hazardous environments like mining sites, mobile inspection robots play a crucial role in condition monitoring (CM) tasks, particularly by collecting various kinds of data, such as images. However, the sheer volume of collected image samples and existing noise pose challenges in processing and visualizing thermal anomalies. Recognizing these challenges, our study addresses the limitations of industrial big data analytics for mobile robot-generated image data. We present a novel, fully integrated approach involving a dimension reduction procedure. This includes a semantic segmentation technique utilizing the pre-trained VGG16 CNN architecture for feature selection, followed by random forest (RF) and extreme gradient boosting (XGBoost) classifiers for the prediction of the pixel class labels. We also explore unsupervised learning using the PCA-K-means method for dimension reduction and classification of unlabeled thermal defects based on anomaly severity. Our comprehensive methodology aims to efficiently handle image-based CM tasks in hazardous environments. To validate its practicality, we applied our approach in a real-world scenario, and the results confirm its robust performance in processing and visualizing thermal data collected by mobile inspection robots. This affirms the effectiveness of our methodology in enhancing the overall performance of CM processes.
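
The unsupervised branch can be sketched in a few lines; the feature source, component count, and number of severity clusters below are assumptions:

```python
# Hedged sketch of the PCA-K-means step: reduce per-image thermal features,
# then group unlabeled defects into severity clusters.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

features = np.random.rand(500, 4096)          # e.g. flattened CNN features
reduced = PCA(n_components=50).fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(reduced)  # severity groups
```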

6.
Bioengineering (Basel) ; 11(5)2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38790279

ABSTRACT

Brain cancer is a life-threatening disease requiring close attention. Early and accurate diagnosis using non-invasive medical imaging is critical for successful treatment and patient survival. However, manual diagnosis by radiologist experts is time-consuming and has limitations in processing large datasets efficiently. Therefore, efficient systems capable of analyzing vast amounts of medical data for early tumor detection are urgently needed. Deep learning (DL) with deep convolutional neural networks (DCNNs) emerges as a promising tool for understanding diseases like brain cancer through medical imaging modalities, especially MRI, which provides detailed soft tissue contrast for visualizing tumors and organs. DL techniques have become increasingly popular in current research on brain tumor detection. Unlike traditional machine learning methods requiring manual feature extraction, DL models are adept at handling complex data like MRIs and excel in classification tasks, making them well-suited for medical image analysis applications. This study presents a novel Dual DCNN model that can accurately classify cancerous and non-cancerous MRI samples. Our Dual DCNN model uses two high-performing DL models, InceptionV3 and DenseNet121. Features are extracted from these models by appending a global max pooling layer. The extracted features, together with five added fully connected layers, are then used to train the model to classify MRI samples as cancerous or non-cancerous. The fully connected layers are retrained on the extracted features for better accuracy. The technique achieves accuracy, precision, recall, and F1-score of 99%, 99%, 98%, and 99%, respectively. Furthermore, this study compares the Dual DCNN's performance against various well-known DL models, including DenseNet121, InceptionV3, ResNet architectures, EfficientNetB2, SqueezeNet, VGG16, AlexNet, and LeNet-5, with different learning rates. This study indicates that our proposed approach outperforms these established models in terms of performance.
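
A hedged Keras sketch of the dual-backbone fusion idea; the layer widths and input size are illustrative, not the paper's exact configuration:

```python
# Sketch: global-max-pooled features from InceptionV3 and DenseNet121 are
# concatenated and passed through five fully connected layers.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3, DenseNet121

inp = layers.Input(shape=(224, 224, 3))
b1 = InceptionV3(weights="imagenet", include_top=False)(inp)
b2 = DenseNet121(weights="imagenet", include_top=False)(inp)
f = layers.Concatenate()([layers.GlobalMaxPooling2D()(b1),
                          layers.GlobalMaxPooling2D()(b2)])
for units in (1024, 512, 256, 128, 64):       # five fully connected layers
    f = layers.Dense(units, activation="relu")(f)
out = layers.Dense(1, activation="sigmoid")(f)  # cancerous vs. non-cancerous
model = Model(inp, out)
```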

7.
Heliyon ; 10(10): e30957, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38803954

ABSTRACT

A self-driving car is necessary to implement traffic intelligence because it can vastly enhance both driving safety and driver comfort by adapting to the conditions of the road ahead. Road hazards such as potholes pose a major challenge for autonomous vehicles, increasing the risk of crashes and vehicle damage. Real-time identification of road potholes is required to solve this issue. To this end, various approaches have been tried, including notifying the appropriate authorities, utilizing vibration-based sensors, and three-dimensional laser imaging. Unfortunately, these approaches have several drawbacks, such as large initial expenditures and the possibility of being discovered. Transfer learning is considered a potential answer to the pressing need to automate pothole identification. A Convolutional Neural Network (CNN) is constructed to categorize potholes effectively, using the pre-trained VGG-16 model for transfer learning during training. A Super-Resolution Generative Adversarial Network (SRGAN) is suggested to enhance the overall quality of the images. Experiments with the suggested approach to classifying road potholes revealed a high accuracy rate of 97.3%, and its effectiveness was tested using various criteria. The developed transfer learning technique obtained the best accuracy rate compared with many other deep learning algorithms.

8.
BMC Med Imaging ; 24(1): 110, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750436

ABSTRACT

Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification. The major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNN) for automated and accurate brain tumor classification. This innovative approach not only emphasizes the use of a modified VGG16 architecture optimized for brain MRI images but also highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. This model architecture benefits from the transfer learning technique by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors. The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately, thus highlighting the transformative potential of federated learning and transfer learning in enhancing brain tumor classification using MRI images.
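
A minimal FedAvg-style sketch of the federated aggregation described above; the model builder and per-client data are assumed placeholders, and real systems would add secure communication and client weighting:

```python
# Hedged sketch: each client trains locally on its own data, and the server
# averages the resulting weights to produce the next global model.
import numpy as np

def federated_round(global_weights, clients, build_model):
    """clients: iterable of (x, y) arrays; build_model: returns a compiled Keras model."""
    client_weights = []
    for x, y in clients:
        model = build_model()
        model.set_weights(global_weights)
        model.fit(x, y, epochs=1, verbose=0)   # local training, data stays on-site
        client_weights.append(model.get_weights())
    # element-wise average of every layer's weights across clients
    return [np.mean(layer, axis=0) for layer in zip(*client_weights)]
```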


Subject(s)
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Machine Learning , Image Interpretation, Computer-Assisted/methods
9.
J Prosthodont ; 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38566564

ABSTRACT

PURPOSE: The study aimed to compare the performance of four pre-trained convolutional neural networks in recognizing seven distinct prosthodontic scenarios involving the maxilla, as a preliminary step in developing an artificial intelligence (AI)-powered prosthesis design system. MATERIALS AND METHODS: Seven distinct classes, including cleft palate, dentulous maxillectomy, edentulous maxillectomy, reconstructed maxillectomy, completely dentulous, partially edentulous, and completely edentulous, were considered for recognition. Utilizing transfer learning and fine-tuned hyperparameters, four AI models (VGG16, Inception-ResNet-V2, DenseNet-201, and Xception) were employed. The dataset, consisting of 3541 preprocessed intraoral occlusal images, was divided into training, validation, and test sets. Model performance metrics encompassed accuracy, precision, recall, F1 score, area under the receiver operating characteristic curve (AUC), and confusion matrix. RESULTS: VGG16, Inception-ResNet-V2, DenseNet-201, and Xception demonstrated comparable performance, with maximum test accuracies of 0.92, 0.90, 0.94, and 0.95, respectively. Xception and DenseNet-201 slightly outperformed the other models, particularly Inception-ResNet-V2. Precision, recall, and F1 scores exceeded 90% for most classes in Xception and DenseNet-201, and the average AUC values for all models ranged between 0.98 and 1.00. CONCLUSIONS: While DenseNet-201 and Xception demonstrated superior performance, all models consistently achieved diagnostic accuracy exceeding 90%, highlighting their potential in dental image analysis. This AI application could support assigning work based on difficulty level and enable the development of an automated diagnosis system at patient admission. It also facilitates prosthesis design by integrating necessary prosthesis morphology, oral function, and treatment difficulty. Furthermore, it tackles dataset size challenges in model optimization, providing valuable insights for future research.
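
The reported metrics can be computed with scikit-learn as in this sketch; the integer label encoding and the (n, 7) probability matrix are assumptions about the data layout:

```python
# Hedged sketch of the evaluation protocol: accuracy, per-class
# precision/recall/F1, one-vs-rest AUC, and a confusion matrix.
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, roc_auc_score)

def evaluate(y_true, y_prob):
    """y_true: integer labels for the 7 classes; y_prob: (n, 7) softmax outputs."""
    y_pred = y_prob.argmax(axis=1)
    print("accuracy:", accuracy_score(y_true, y_pred))
    print(classification_report(y_true, y_pred))
    print("AUC (one-vs-rest):", roc_auc_score(y_true, y_prob, multi_class="ovr"))
    print(confusion_matrix(y_true, y_pred))
```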

10.
Front Zool ; 21(1): 10, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38561769

ABSTRACT

BACKGROUND: Rapid identification and classification of bats are critical for practical applications. However, species identification of bats is typically a demanding and time-consuming manual task that depends on taxonomists and well-trained experts. Deep Convolutional Neural Networks (DCNNs) provide a practical approach for the extraction of visual features and classification of objects, with potential application to bat classification. RESULTS: In this study, we investigated the capability of deep learning models to classify 7 horseshoe bat taxa (CHIROPTERA: Rhinolophus) from Southern China. We constructed an image dataset of 879 front, oblique, and lateral targeted facial images of live individuals collected during surveys between 2012 and 2021. All images were taken using a standard photograph protocol and setting aimed at enhancing the effectiveness of the DCNN classification. The results demonstrated that our customized VGG16-CBAM model achieved up to 92.15% classification accuracy with better performance than other mainstream models. Furthermore, Grad-CAM visualization reveals that the model pays more attention to the taxonomic key regions in the decision-making process, and these regions are often preferred by bat taxonomists for the classification of horseshoe bats, corroborating the validity of our methods. CONCLUSION: Our findings will inspire further research on image-based automatic classification of chiropteran species for early detection and potential application in taxonomy.
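
A compact Grad-CAM sketch for a Keras CNN, offered as a generic implementation rather than the authors' code; the model object and target convolutional layer name are assumptions:

```python
# Hedged Grad-CAM sketch: gradients of the predicted class score w.r.t. the
# last convolutional feature map weight its channels into a heat map.
import tensorflow as tf

def grad_cam(model, image, conv_layer_name):
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_layer_name).output,
                                 model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, tf.argmax(preds[0])]   # score of the predicted class
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))         # channel importance
    cam = tf.einsum("bijc,bc->bij", conv_out, weights)   # weighted channel sum
    cam = tf.nn.relu(cam)[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalized heat map
```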

11.
Environ Monit Assess ; 196(4): 406, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38561525

ABSTRACT

This work introduces a novel approach to remotely count and monitor potato plants in high-altitude regions of India using an unmanned aerial vehicle (UAV) and an artificial intelligence (AI)-based deep learning (DL) network. The proposed methodology involves the use of a self-created AI model called PlantSegNet, which is based on VGG-16 and U-Net architectures, to analyze aerial RGB images captured by a UAV. To evaluate the proposed approach, a self-created dataset of aerial images from different planting blocks is used to train and test the PlantSegNet model. The experimental results demonstrate the effectiveness and validity of the proposed method in challenging environmental conditions. The proposed approach achieves pixel accuracy of 98.65%, a loss of 0.004, an Intersection over Union (IoU) of 0.95, and an F1-Score of 0.94. Comparing the proposed model with existing models, such as Mask-RCNN and U-Net, demonstrates that PlantSegNet outperforms both models in terms of performance parameters. The proposed methodology provides a reliable solution for remote crop counting in challenging terrain, which can be beneficial for farmers in the Himalayan regions of India. The methods and results presented in this paper offer a promising foundation for the development of advanced decision support systems for planning planting operations.
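
The reported segmentation metrics (pixel accuracy, IoU, F1) can be computed from binary masks as in this sketch; the prediction threshold is an assumption:

```python
# Hedged sketch: standard binary-segmentation metrics from predicted and
# ground-truth masks.
import numpy as np

def segmentation_metrics(pred, truth, thresh=0.5):
    p, t = pred > thresh, truth > 0.5
    tp = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    pixel_acc = (p == t).mean()
    iou = tp / union if union else 1.0
    f1 = 2 * tp / (p.sum() + t.sum()) if (p.sum() + t.sum()) else 1.0
    return pixel_acc, iou, f1
```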


Subject(s)
Artificial Intelligence , Unmanned Aerial Devices , Humans , Environmental Monitoring , Farmers , India
12.
Heliyon ; 10(8): e29375, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38644855

ABSTRACT

In the context of Alzheimer's disease (AD), timely identification is paramount for effective management, acknowledging its chronic and irreversible nature, where medications can only impede its progression. Our study introduces a holistic solution, leveraging the hippocampus and the VGG16 model with transfer learning for early AD detection. The hippocampus, a pivotal early affected region linked to memory, plays a central role in classifying patients into three categories: cognitively normal (CN), representing individuals without cognitive impairment; mild cognitive impairment (MCI), indicative of a subtle decline in cognitive abilities; and AD, denoting Alzheimer's disease. Employing the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, our model undergoes training enriched by advanced image preprocessing techniques, achieving outstanding accuracy (testing 98.17%, validation 97.52%, training 99.62%). The strategic use of transfer learning fortifies our competitive edge, incorporating the hippocampus approach and, notably, a progressive data augmentation technique. This innovative augmentation strategy gradually introduces augmentation factors during training, significantly elevating accuracy and enhancing the model's generalization ability. The study emphasizes practical application with a user-friendly website, empowering radiologists to predict class probabilities, track disease progression, and visualize patient images in both 2D and 3D formats, contributing significantly to the advancement of early AD detection.
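
One plausible way to realize a progressive augmentation schedule in TensorFlow, offered as a sketch under assumed schedule shape and jitter ranges rather than the authors' recipe:

```python
# Hedged sketch: a strength variable ramps from 0 to 1 over training, and
# the augmentation amplitude scales with it.
import tensorflow as tf

strength = tf.Variable(0.0, trainable=False)   # current augmentation factor

class ProgressiveStrength(tf.keras.callbacks.Callback):
    """Linearly increases the augmentation strength at each epoch."""
    def __init__(self, var, total_epochs):
        super().__init__()
        self.var, self.total = var, total_epochs

    def on_epoch_begin(self, epoch, logs=None):
        self.var.assign(min(1.0, epoch / self.total))

def augment(image, label):
    # brightness/contrast jitter whose amplitude grows with `strength`
    delta = tf.random.uniform([], -0.3, 0.3) * strength
    image = tf.image.adjust_brightness(image, delta)
    factor = 1.0 + tf.random.uniform([], -0.3, 0.3) * strength
    image = tf.image.adjust_contrast(image, factor)
    return image, label
```

Here `augment` would be mapped over the training `tf.data.Dataset`, and the callback passed to `model.fit` so the jitter amplitude grows epoch by epoch.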

13.
Heliyon ; 10(5): e26938, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38468922

ABSTRACT

Coronavirus disease 2019 (COVID-19) emerged in Wuhan, China, in 2019 and has spread throughout the world since 2020, affecting millions of people and causing many deaths. To limit the spread of COVID-19, nations have imposed various precautions and restrictions. At the same time, infected persons need to be identified and isolated so that medical treatment can be provided. Because of the limited availability of Reverse Transcription Polymerase Chain Reaction (RT-PCR) tests, chest X-ray imaging has become an effective technique for diagnosing COVID-19. In this work, a hybrid deep learning CNN model is proposed for diagnosing COVID-19 from chest X-rays. The proposed model consists of a heading model and a base model. The base model utilizes two pre-trained deep learning structures, VGG16 and VGG19. The feature dimensions from these pre-trained models are reduced by incorporating different pooling layers, such as max and average pooling. In the heading part, dense layers of size three with different activation functions are added, and a dropout layer is included to avoid overfitting. Experimental analyses compare the proposed hybrid deep learning model with existing transfer learning architectures such as VGG16, VGG19, EfficientNetB0, and ResNet50 on a COVID-19 radiology database. Various classification techniques, such as K-Nearest Neighbor (KNN), Naive Bayes, Random Forest, Support Vector Machine (SVM), and Neural Network, were also used for performance comparison. The hybrid deep learning model with average pooling layers, combined with SVM-linear and neural network classifiers, achieved an accuracy of 92%. These models can be employed to assist radiologists and physicians in reducing misdiagnosis rates and validating positive COVID-19 cases.

14.
Heliyon ; 10(4): e26405, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38434063

ABSTRACT

Alzheimer's disease (AD) poses a significant challenge due to its widespread prevalence and the lack of effective treatments, highlighting the urgent need for early detection. This research introduces an enhanced neural network, named ADNet, which is based on the VGG16 model, to detect Alzheimer's disease using two-dimensional MRI slices. ADNet incorporates several key improvements: it replaces traditional convolution with depthwise separable convolution to reduce model parameters, replaces the ReLU activation function with ELU to address potential issues with exploding gradients, and integrates the SE (Squeeze-and-Excitation) module to enhance feature extraction efficiency. In addition to the primary task of MRI feature extraction, ADNet is simultaneously trained on two auxiliary tasks: clinical dementia score regression and mental state score regression. Experimental results show that, compared to the baseline VGG16, ADNet achieves a 4.18% accuracy improvement for AD vs. CN classification and a 6% improvement for MCI vs. CN classification. These findings highlight the effectiveness of ADNet in classifying Alzheimer's disease, providing crucial support for early diagnosis and intervention by medical professionals. The proposed enhancements represent advancements in neural network architecture and training strategies for improved AD classification.
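
A hedged Keras sketch of the two building blocks named above, a depthwise separable convolution with ELU followed by an SE module; the reduction ratio and filter count are assumptions:

```python
# Hedged sketch: SE recalibration after a depthwise separable conv with ELU,
# the ADNet modifications described in the abstract.
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x, ratio=16):
    c = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)          # squeeze
    s = layers.Dense(c // ratio, activation="relu")(s)
    s = layers.Dense(c, activation="sigmoid")(s)    # excitation
    return layers.Multiply()([x, layers.Reshape((1, 1, c))(s)])

def adnet_block(x, filters):
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="elu")(x)
    return se_block(x)
```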

15.
Acta Cytol ; 68(2): 160-170, 2024.
Article in English | MEDLINE | ID: mdl-38522415

ABSTRACT

INTRODUCTION: The application of artificial intelligence (AI) algorithms in serous fluid cytology is lacking due to the deficiency in standardized publicly available datasets. Here, we develop a novel public serous effusion cytology dataset. Furthermore, we apply AI algorithms to it to test its diagnostic utility and safety in clinical practice. METHODS: The work is divided into three phases. Phase 1 entails building the dataset based on the multitiered evidence-based classification system proposed by the International System (TIS) of serous fluid cytology, along with ground-truth tissue diagnosis for malignancy. To ensure reliable results of future AI research on this dataset, we carefully consider all the steps of preparation and staining from a real-world cytopathology perspective. In phase 2, we pay particular attention to the image acquisition pipeline to ensure image integrity. Then we utilize the power of transfer learning, using the convolutional layers of the VGG16 deep learning model for feature extraction. Finally, in phase 3, we apply the random forest classifier on the constructed dataset. RESULTS: The dataset comprises 3,731 images distributed among the four TIS diagnostic categories. The model achieves 74% accuracy in this multiclass classification problem. Using a one-versus-all classifier, the fallout rate for images that are misclassified as negative for malignancy despite being a higher risk diagnosis is 0.13. Most of these misclassified images (77%) belong to the atypia of undetermined significance category, in concordance with real-life statistics. CONCLUSION: This is the first and largest publicly available serous fluid cytology dataset based on a standardized diagnostic system. It is also the first dataset to include various types of effusions and pericardial fluid specimens. In addition, it is the first dataset to include the diagnostically challenging atypical categories. AI algorithms applied to this novel dataset show reliable results that can be incorporated into actual clinical practice with minimal risk of missing a diagnosis of malignancy. This work provides a foundation for researchers to develop and test further AI algorithms for the diagnosis of serous effusions.


Subject(s)
Cytodiagnosis , Humans , Cytodiagnosis/methods , Reproducibility of Results , Datasets as Topic , Algorithms , Artificial Intelligence , Deep Learning , Image Interpretation, Computer-Assisted/methods , Databases, Factual , Neoplasms/pathology , Neoplasms/diagnosis , Cytology
16.
Proteins ; 2024 Mar 23.
Article in English | MEDLINE | ID: mdl-38520179

ABSTRACT

During the last three decades, antimicrobial peptides (AMPs) have emerged as a promising therapeutic alternative to antibiotics. The approaches for designing AMPs span from experimental trial-and-error methods to synthetic hybrid peptide libraries. To overcome the exceedingly expensive and time-consuming process of designing effective AMPs, many computational and machine-learning tools for AMP prediction have recently been developed. In general, to encode the peptide sequences, featurization relies on approaches based on (a) amino acid (AA) composition, (b) physicochemical properties, (c) sequence similarity, and (d) structural properties. In this work, we present an image-based deep neural network model to predict AMPs, where we use feature encoding based on Drude polarizable force-field atom types, which can capture peptide properties more efficiently than conventional feature vectors. The proposed prediction model identifies short AMPs (≤30 AA) with promising accuracy and efficiency and can be used as a next-generation screening method for predicting new AMPs. The source code is publicly available at the Figshare server sAMP-VGG16.

17.
Math Biosci Eng ; 21(3): 4328-4350, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38549330

ABSTRACT

In the realm of medical imaging, the precise segmentation and classification of gliomas represent fundamental challenges with profound clinical implications. Leveraging the BraTS 2018 dataset as a standard benchmark, this study delves into the potential of advanced deep learning models for addressing these challenges. We propose a novel approach that integrates a customized U-Net for segmentation and VGG-16 for classification. The U-Net, with its tailored encoder-decoder pathways, accurately identifies glioma regions, thus improving tumor localization. The fine-tuned VGG-16, featuring a customized output layer, precisely differentiates between low-grade and high-grade gliomas. To ensure consistency in data pre-processing, a standardized methodology involving gamma correction, data augmentation, and normalization is introduced. This novel integration surpasses existing methods, offering significantly improved glioma diagnosis, validated by high segmentation dice scores (WT: 0.96, TC: 0.92, ET: 0.89), and a remarkable overall classification accuracy of 97.89%. The experimental findings underscore the potential of integrating deep learning-based methodologies for tumor segmentation and classification in enhancing glioma diagnosis and formulating subsequent treatment strategies.
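
The gamma-correction and normalization pre-processing can be sketched as follows; the gamma value and normalization scheme are illustrative assumptions:

```python
# Hedged sketch of the standardized pre-processing: gamma correction
# followed by zero-mean, unit-variance normalization.
import numpy as np

def preprocess(image, gamma=0.8):
    img = image.astype(np.float32) / 255.0
    img = np.power(img, gamma)                      # gamma correction
    return (img - img.mean()) / (img.std() + 1e-8)  # normalization
```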


Subject(s)
Glioma , Magnetic Resonance Imaging , Humans , Glioma/diagnostic imaging , Image Processing, Computer-Assisted
18.
Sci Rep ; 14(1): 6173, 2024 03 14.
Article in English | MEDLINE | ID: mdl-38486010

ABSTRACT

A kidney stone is a solid formation that can lead to kidney failure, severe pain, and reduced quality of life through urinary system blockages. While medical experts can interpret kidney-ureter-bladder (KUB) X-ray images, some images pose challenges for human detection and require significant analysis time. Consequently, developing a detection system becomes crucial for accurately classifying KUB X-ray images. This article applies a transfer learning (TL) model with a pre-trained VGG16, empowered with explainable artificial intelligence (XAI), to establish a system that takes KUB X-ray images and accurately categorizes them as kidney stone or normal cases. The findings demonstrate that the model achieves a testing accuracy of 97.41% in identifying kidney stones or normal KUB X-rays in the dataset used. The VGG16 model delivers highly accurate predictions but lacks fairness and explainability in its decision-making process. To address this concern, this study incorporates Layer-Wise Relevance Propagation (LRP), an XAI technique that enhances the model's transparency and fairness and facilitates human comprehension of its predictions. Consequently, XAI can play an important role in assisting doctors with the accurate identification of kidney stones, thereby facilitating the execution of effective treatment strategies.
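
True LRP propagates relevance backward layer by layer with dedicated rules; as a lightweight stand-in, the sketch below uses gradient-times-input, which yields a comparable input-space heat map for a Keras classifier. This is explicitly not the paper's implementation:

```python
# Hedged sketch: gradient x input as a simple relevance map, a stand-in for
# full layer-wise relevance propagation (LRP).
import tensorflow as tf

def relevance_map(model, image):
    x = tf.convert_to_tensor(image[None, ...])
    with tf.GradientTape() as tape:
        tape.watch(x)
        preds = model(x)
        score = preds[:, tf.argmax(preds[0])]    # predicted class score
    grads = tape.gradient(score, x)
    rel = tf.reduce_sum(grads * x, axis=-1)[0]   # sum over channels
    return (rel / (tf.reduce_max(tf.abs(rel)) + 1e-8)).numpy()
```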


Subject(s)
Artificial Intelligence , Kidney Calculi , Humans , X-Rays , Quality of Life , Kidney Calculi/diagnostic imaging , Fluoroscopy
19.
Math Biosci Eng ; 21(1): 1625-1649, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38303481

ABSTRACT

Fake face identity is a serious, potentially fatal issue that affects every industry, from banking and finance to the military and mission-critical applications. This is where the proposed system offers artificial intelligence (AI)-supported fake face detection. The models were trained on an extensive dataset of real and fake face images, incorporating steps such as sampling, preprocessing, pooling, normalization, vectorization, batch processing, model training, testing, and classification via output activation. The proposed work performs a comparative analysis of three fusion models, which can be integrated with Generative Adversarial Networks (GANs) based on the performance evaluation. Model-3, which combines DenseNet-201 + ResNet-102 + Xception, offers the highest accuracy of 0.9797, and Model-2, which combines DenseNet-201 + ResNet-50 + InceptionV3, offers the lowest loss value of 0.1146; both are suitable for GAN integration. Additionally, Model-1 performs admirably, with an accuracy of 0.9542 and a loss value of 0.1416. A second dataset was also tested, on which the proposed Model-3 provided a maximum accuracy of 86.42% with a minimum loss of 0.4054.


Subject(s)
Artificial Intelligence , Industry
20.
Int J Mol Sci ; 25(3)2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38338828

ABSTRACT

Skin cancer is a severe and potentially lethal disease, and early detection is critical for successful treatment. Traditional procedures for diagnosing skin cancer are expensive, time-intensive, and necessitate the expertise of a medical practitioner. In recent years, many researchers have developed artificial intelligence (AI) tools, including shallow and deep machine learning-based approaches, to diagnose skin cancer. However, AI-based skin cancer diagnosis faces challenges in complexity, low reproducibility, and explainability. To address these problems, we propose a novel Grid-Based Structural and Dimensional Explainable Deep Convolutional Neural Network for accurate and interpretable skin cancer classification. This model employs adaptive thresholding for extracting the region of interest (ROI), using its dynamic capabilities to enhance the accuracy of identifying cancerous regions. The VGG-16 architecture extracts the hierarchical characteristics of skin lesion images, leveraging its recognized capabilities for deep feature extraction. Our proposed model leverages a grid structure to capture spatial relationships within lesions, while the dimensional features extract relevant information from various image channels. An Adaptive Intelligent Coney Optimization (AICO) algorithm is employed for self-feature selected optimization and fine-tuning the hyperparameters, which dynamically adapts the model architecture to optimize feature extraction and classification. The model was trained and tested using the ISIC dataset of 10,015 dermoscopic images and the MNIST dataset of 2357 images of malignant and benign oncological diseases. The experimental results demonstrated that the model achieved accuracy and CSI values of 0.96 and 0.97 for TP 80 using the ISIC dataset, which is 17.70% and 16.49% more than lightweight CNN, 20.83% and 19.59% more than DenseNet, 18.75% and 17.53% more than CNN, 6.25% and 6.18% more than EfficientNet-B0, 5.21% and 5.15% over ECNN, 2.08% and 2.06% over COA-CAN, and 5.21% and 5.15% more than ARO-ECNN. Additionally, the AICO self-feature selected ECNN model exhibited minimal FPR and FNR of 0.03 and 0.02, respectively. The model attained a loss of 0.09 for ISIC and 0.18 for the MNIST dataset, indicating that the model proposed in this research outperforms existing techniques. The proposed model improves accuracy, interpretability, and robustness for skin cancer classification, ultimately aiding clinicians in early diagnosis and treatment.
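
The adaptive-thresholding ROI step can be sketched with OpenCV; the block size, offset, and largest-contour heuristic below are assumptions:

```python
# Hedged sketch: Gaussian adaptive thresholding isolates the lesion, and the
# bounding box of the largest contour is cropped as the ROI.
import cv2

def extract_roi(gray_image):
    """gray_image: uint8 single-channel image."""
    mask = cv2.adaptiveThreshold(gray_image, 255,
                                 cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 51, 5)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return gray_image                     # fall back to the full image
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return gray_image[y:y + h, x:x + w]
```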


Subject(s)
Bass , Facial Neoplasms , Skin Neoplasms , Animals , Artificial Intelligence , Reproducibility of Results , Skin , Neural Networks, Computer , Skin Neoplasms/diagnosis