Results 1 - 20 of 131
1.
J Neurosci Methods ; : 110251, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39151656

ABSTRACT

BACKGROUND: Electroencephalography (EEG) and electrocorticography (ECoG) recordings have been used to decode finger movements by analyzing brain activity. Traditional methods focused on single bandpass power changes for movement decoding, utilizing machine learning models requiring manual feature extraction. NEW METHOD: This study introduces a 3D convolutional neural network (3D-CNN) model to decode finger movements using ECoG data. The model employs adaptive, explainable AI (xAI) techniques to interpret the physiological relevance of brain signals. ECoG signals from epilepsy patients during awake craniotomy were processed to extract power spectral density across multiple frequency bands. These data formed a 3D matrix used to train the 3D-CNN to predict finger trajectories. RESULTS: The 3D-CNN model showed significant accuracy in predicting finger movements, with root-mean-square error (RMSE) values of 0.26-0.38 for single finger movements and 0.20-0.24 for combined movements. Explainable AI techniques, Grad-CAM and SHAP, identified the high gamma (HG) band as crucial for movement prediction, showing specific cortical regions involved in different finger movements. These findings highlighted the physiological significance of the HG band in motor control. COMPARISON WITH EXISTING METHODS: The 3D-CNN model outperformed traditional machine learning approaches by effectively capturing spatial and temporal patterns in ECoG data. The use of xAI techniques provided clearer insights into the model's decision-making process, unlike the "black box" nature of standard deep learning models. CONCLUSIONS: The proposed 3D-CNN model, combined with xAI methods, enhances the decoding accuracy of finger movements from ECoG data. This approach offers a more efficient and interpretable solution for brain-computer interface (BCI) applications, emphasizing the HG band's role in motor control.
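A minimal Keras sketch of the kind of 3D-CNN regressor described in entry 1. The tensor layout (frequency bands over an 8×8 electrode grid), layer sizes, and single-output trajectory target are assumptions for illustration, not the authors' architecture.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Assumed input: one PSD tensor per sample, bands x electrode grid.
n_bands, rows, cols = 6, 8, 8

model = tf.keras.Sequential([
    layers.Input(shape=(n_bands, rows, cols, 1)),
    layers.Conv3D(16, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),   # pool spatially, keep bands
    layers.Conv3D(32, kernel_size=3, padding="same", activation="relu"),
    layers.GlobalAveragePooling3D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                            # predicted trajectory value
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# Synthetic stand-in data, only to show the expected shapes.
X = np.random.rand(32, n_bands, rows, cols, 1).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=8, verbose=0)
```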

2.
Cogn Neurodyn ; 18(4): 1609-1625, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39104684

ABSTRACT

This study examines attention deficit hyperactivity disorder (ADHD), a childhood neurodevelopmental disorder, alongside its comorbidity, conduct disorder (CD), a behavioral disorder. Because ADHD and CD share commonalities, distinguishing them is difficult, which increases the risk of misdiagnosis. It is crucial that these two conditions are not mistakenly identified as the same, because the treatment plan varies depending on whether the patient has CD or ADHD. Hence, this study proposes an electroencephalogram (EEG)-based deep learning system known as ADHD/CD-NET that is capable of objectively distinguishing ADHD, ADHD + CD, and CD. The 12-channel EEG signals were first segmented and converted into channel-wise continuous wavelet transform (CWT) correlation matrices. The resulting matrices were then used to train the convolutional neural network (CNN) model, and the model's performance was evaluated using 10-fold cross-validation. Gradient-weighted class activation mapping (Grad-CAM) was also used to provide explanations for the predictions made by the 'black box' CNN model. An internal private dataset (45 ADHD, 62 ADHD + CD and 16 CD) and an external public dataset (61 ADHD and 60 healthy controls) were used to evaluate ADHD/CD-NET. ADHD/CD-NET achieved classification accuracy, sensitivity, specificity, and precision of 93.70%, 90.83%, 95.35% and 91.85% for the internal evaluation, and 98.19%, 98.36%, 98.03% and 98.06% for the external evaluation. Grad-CAM also identified significant channels that contributed to the diagnosis outcome. Therefore, ADHD/CD-NET can perform temporal localization and choose significant EEG channels for diagnosis, thus providing objective analysis for mental health professionals and clinicians to consider when making a diagnosis. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-023-10028-2.
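A hedged sketch of the channel-wise CWT correlation-matrix step from entry 2: compute a continuous wavelet transform per EEG channel with PyWavelets, then correlate the flattened scalograms across channels. The sampling rate, segment length, scales, and mother wavelet are assumptions.

```python
import numpy as np
import pywt

fs = 128                                  # assumed sampling rate (Hz)
segment = np.random.randn(12, fs * 4)     # stand-in: 12 channels x 4 s of EEG
scales = np.arange(1, 64)

scalograms = []
for channel in segment:
    coeffs, _ = pywt.cwt(channel, scales, "morl", sampling_period=1 / fs)
    scalograms.append(np.abs(coeffs).ravel())

# Pairwise Pearson correlation between channel scalograms -> 12 x 12 matrix,
# which can then be fed to a CNN as an image-like input.
corr_matrix = np.corrcoef(np.vstack(scalograms))
print(corr_matrix.shape)   # (12, 12)
```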

3.
BMC Med Inform Decis Mak ; 24(1): 222, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39112991

ABSTRACT

Lung and colon cancers are leading contributors to cancer-related fatalities globally, distinguished by unique histopathological traits discernible through medical imaging. Effective classification of these cancers is critical for accurate diagnosis and treatment, and this study addresses critical challenges in their diagnostic imaging. Recognizing the limitations of existing diagnostic methods, which often suffer from overfitting and poor generalizability, our research introduces a novel deep learning framework that synergistically combines the Xception and MobileNet architectures. This innovative ensemble model aims to enhance feature extraction, improve model robustness, and reduce overfitting. Our methodology involves training the hybrid model on a comprehensive dataset of histopathological images, followed by validation against a balanced test set. The results demonstrate an impressive classification accuracy of 99.44%, with perfect precision and recall in identifying certain cancerous and non-cancerous tissues, marking a significant improvement over traditional approaches. The practical implications of these findings are profound. By integrating Gradient-weighted Class Activation Mapping (Grad-CAM), the model offers enhanced interpretability, allowing clinicians to visualize the diagnostic reasoning process. This transparency is vital for clinical acceptance and enables more personalized, accurate treatment planning. Our study not only pushes the boundaries of medical imaging technology but also sets the stage for future research aimed at expanding these techniques to other types of cancer diagnostics.


Subjects
Colonic Neoplasms , Deep Learning , Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/classification , Colonic Neoplasms/diagnostic imaging , Colonic Neoplasms/classification , Artificial Intelligence
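One plausible construction of the Xception + MobileNet combination from entry 3, sketched in Keras as feature-level fusion on a shared input; the abstract does not specify the fusion strategy, head, or class count, so those are assumptions (five tissue classes, as in common lung/colon histopathology datasets).

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(224, 224, 3))                   # raw RGB in [0, 255]
scaled = layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # both backbones expect [-1, 1]

xception = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", pooling="avg")
mobilenet = tf.keras.applications.MobileNet(
    include_top=False, weights="imagenet", pooling="avg")

merged = layers.Concatenate()([xception(scaled), mobilenet(scaled)])
merged = layers.Dropout(0.3)(merged)
outputs = layers.Dense(5, activation="softmax")(merged)      # assumed 5 tissue classes

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```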
4.
J Pathol Inform ; 15: 100389, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39161471

ABSTRACT

White blood cells (WBCs) are a vital component of the immune system. The efficient and precise classification of WBCs is crucial for medical professionals to diagnose diseases accurately. This study presents an enhanced convolutional neural network (CNN) for detecting blood cells, aided by various image pre-processing techniques such as padding, thresholding, erosion, dilation, and masking, which minimize noise and improve feature enhancement. Additionally, performance is further enhanced by experimenting with various architectural structures and hyperparameters to optimize the proposed model. A comparative evaluation is conducted to compare the performance of the proposed model with three transfer learning models: Inception V3, MobileNetV2, and DenseNet201. The results indicate that the proposed model outperforms existing models, achieving a testing accuracy of 99.12%, precision of 99%, and F1-score of 99%. In addition, we utilized SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) techniques in our study to improve the interpretability of the proposed model, providing valuable insights into how the model makes decisions. Furthermore, the proposed model has been further explained using the Grad-CAM and Grad-CAM++ techniques, which are class-discriminative localization approaches, to improve trust and transparency. Grad-CAM++ performed slightly better than Grad-CAM in identifying the location of the predicted area. Finally, the most efficient model has been integrated into an end-to-end (E2E) system, accessible through both web and Android platforms, for medical professionals to classify blood cells.
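A sketch of the pre-processing chain named in entry 4 (padding, thresholding, erosion, dilation, masking) using OpenCV; the file name, border size, and kernel are illustrative choices, not the paper's parameters.

```python
import cv2
import numpy as np

img = cv2.imread("wbc_sample.jpg")                  # hypothetical input image
padded = cv2.copyMakeBorder(img, 16, 16, 16, 16,
                            borderType=cv2.BORDER_CONSTANT, value=0)

gray = cv2.cvtColor(padded, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

kernel = np.ones((3, 3), np.uint8)
cleaned = cv2.erode(binary, kernel, iterations=1)    # suppress small noise
cleaned = cv2.dilate(cleaned, kernel, iterations=2)  # restore cell bodies

masked = cv2.bitwise_and(padded, padded, mask=cleaned)  # keep cell regions only
cv2.imwrite("wbc_preprocessed.jpg", masked)
```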

5.
J Imaging ; 10(8)2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39194992

ABSTRACT

OBJECTIVE: In clinical practice, thyroid nodules are typically visually evaluated by expert physicians using 2D ultrasound images. Based on their assessment, a fine needle aspiration (FNA) may be recommended. However, visually classifying thyroid nodules from ultrasound images may lead to unnecessary fine needle aspirations for patients. The aim of this study is to develop an automatic thyroid ultrasound image classification system to prevent unnecessary FNAs. METHODS: An automatic computer-aided artificial intelligence system is proposed for classifying thyroid nodules using a fine-tuned deep learning model based on the DenseNet architecture, which incorporates an attention module. The dataset comprises 591 thyroid nodule images categorized based on the Bethesda score. Thyroid nodules are classified as either requiring FNA or not. The challenges encountered in this task include managing variability in image quality, addressing the presence of artifacts in ultrasound image datasets, tackling class imbalance, and ensuring model interpretability. We employed techniques such as data augmentation, class weighting, and gradient-weighted class activation maps (Grad-CAM) to enhance model performance and provide insights into decision making. RESULTS: Our approach achieved excellent results with an average accuracy of 0.94, F1-score of 0.93, and sensitivity of 0.96. The use of Grad-CAM gives insight into the decision making and reinforces the reliability of the binary classification from the end-user perspective. CONCLUSIONS: We propose a deep learning architecture that effectively classifies thyroid nodules as requiring FNA or not from ultrasound images. Despite challenges related to image variability, class imbalance, and interpretability, our method demonstrated a high classification accuracy with minimal false negatives, showing its potential to reduce unnecessary FNAs in clinical settings.
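A minimal sketch of the class-weighting technique mentioned in entry 5, assuming a binary FNA / no-FNA label; the counts below are placeholders, not the paper's distribution.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y_train = np.array([0] * 400 + [1] * 191)   # hypothetical class imbalance
weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y_train), y=y_train)
class_weight = dict(enumerate(weights))
print(class_weight)   # e.g. {0: 0.74, 1: 1.55}

# Passed to Keras so minority-class errors contribute more to the loss:
# model.fit(X_train, y_train, class_weight=class_weight, ...)
```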

6.
Sci Rep ; 14(1): 18141, 2024 08 05.
Article in English | MEDLINE | ID: mdl-39103562

ABSTRACT

This study aimed to investigate the performance and factors affecting the species classification of a convolutional neural network (CNN) architecture using whole-part and earlywood-part cross-sectional datasets of six Korean Quercus species. The accuracy of species classification for each condition was analyzed using the datasets, data augmentation, and three optimizers (stochastic gradient descent (SGD), adaptive moment estimation (Adam), and root mean square propagation (RMSProp)), based on a CNN architecture with three to four convolutional layers. The model trained with the augmented dataset yielded significantly superior results in terms of classification accuracy compared to the model trained with the non-augmented dataset. The augmented dataset was the only factor affecting classification accuracy in the final five epochs. In contrast, four factors across all epochs, including the Adam and SGD optimizers and the earlywood-part and whole-part datasets, affected species classification. The arrangement of earlywood vessels, broad rays, and axial parenchyma was identified as a major influential factor in the CNN species classification using gradient-weighted class activation mapping (Grad-CAM) analysis. The augmented whole-part dataset with the Adam optimizer achieved the highest classification accuracy of 85.7% during the final five epochs of the test phase.


Subjects
Neural Networks, Computer , Wood , Republic of Korea , Wood/classification , Quercus/classification
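A sketch of the optimizer comparison described in entry 6; the CNN below is a generic three-convolution-layer stand-in with assumed shapes, not the authors' exact network, and the training calls are left as comments.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn(n_classes=6, input_shape=(224, 224, 3)):
    """Generic small CNN with three convolutional layers."""
    return tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(n_classes, activation="softmax"),
    ])

optimizers = {
    "SGD": tf.keras.optimizers.SGD(learning_rate=1e-3),
    "Adam": tf.keras.optimizers.Adam(learning_rate=1e-3),
    "RMSProp": tf.keras.optimizers.RMSprop(learning_rate=1e-3),
}
for name, opt in optimizers.items():
    model = build_cnn()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # history = model.fit(train_ds, validation_data=val_ds, epochs=50)
    # Compare history.history["val_accuracy"] across optimizers and datasets.
```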
7.
Br J Haematol ; 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39024119

ABSTRACT

Palpebral conjunctival hue alteration is used in non-invasive screening for anaemia, although it is only a qualitative measure. This study constructed machine/deep learning models for predicting haemoglobin values using 150 palpebral conjunctival images taken by a smartphone. The median haemoglobin value was 13.1 g/dL, including 10 patients with <11 g/dL. A segmentation model using U-Net was successfully constructed. The segmented images were fed into non-convolutional neural network (CNN)-based and CNN-based regression models for predicting haemoglobin values. The correlation coefficients between the actual and predicted haemoglobin values were 0.38 and 0.44 for the non-CNN-based and CNN-based models, respectively. The sensitivity and specificity for anaemia detection were 13% and 98% for the non-CNN-based model and 20% and 99% for the CNN-based model. The performance of the CNN-based model did not improve with a mask layer guiding the model's attention towards the conjunctival regions; however, it improved slightly with correction by the aspect ratio and exposure time of the input images. The gradient-weighted class activation mapping heatmap indicated that the lower half of the conjunctiva was crucial for haemoglobin value prediction. In conclusion, the CNN-based model performed better than the non-CNN-based model. Prediction accuracy would likely improve with more input data from anaemic patients.
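The evaluation in entry 7 reduces to two standard calculations, sketched below: a Pearson correlation between actual and predicted haemoglobin, and sensitivity/specificity for anaemia at the 11 g/dL cut-off. The numbers are stand-ins.

```python
import numpy as np

def evaluate(hb_true, hb_pred, threshold=11.0):
    r = np.corrcoef(hb_true, hb_pred)[0, 1]       # correlation coefficient
    anaemic = hb_true < threshold
    flagged = hb_pred < threshold
    sensitivity = (anaemic & flagged).sum() / anaemic.sum()
    specificity = (~anaemic & ~flagged).sum() / (~anaemic).sum()
    return r, sensitivity, specificity

# Stand-in values, only to show the calculation.
hb_true = np.array([13.1, 10.2, 14.0, 12.5, 9.8, 13.6])
hb_pred = np.array([12.8, 11.5, 13.2, 12.9, 10.4, 13.0])
print(evaluate(hb_true, hb_pred))
```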

8.
J Neurosci Methods ; 410: 110227, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39038716

ABSTRACT

BACKGROUND: Accurately diagnosing brain tumors from MRI scans is crucial for effective treatment planning. While traditional methods heavily rely on radiologist expertise, the integration of AI, particularly Convolutional Neural Networks (CNNs), has shown promise in improving accuracy. However, the lack of transparency in AI decision-making processes presents a challenge for clinical adoption. METHODS: Recent advancements in deep learning, particularly the utilization of CNNs, have facilitated the development of models for medical image analysis. In this study, we employed the EfficientNetB0 architecture and integrated explainable AI techniques to enhance both accuracy and interpretability. Grad-CAM visualization was utilized to highlight significant areas in MRI scans influencing classification decisions. RESULTS: Our model achieved a classification accuracy of 98.72% across four categories of brain tumors (Glioma, Meningioma, No Tumor, Pituitary), with precision and recall exceeding 97% for all categories. The incorporation of explainable AI techniques was validated through visual inspection of Grad-CAM heatmaps, which aligned well with established diagnostic markers in MRI scans. CONCLUSION: The AI-enhanced EfficientNetB0 framework with explainable AI techniques significantly improves brain tumor classification accuracy to 98.72%, offering clear visual insights into the decision-making process. This method enhances diagnostic reliability and trust, demonstrating substantial potential for clinical adoption in medical diagnostics.


Subjects
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Meningioma/diagnostic imaging , Glioma/diagnostic imaging , Neuroimaging/methods , Neuroimaging/standards , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer
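A standard Grad-CAM sketch for a Keras EfficientNetB0 classifier in the spirit of entry 8; "top_conv" is the name of EfficientNetB0's last convolutional layer in Keras, and the four-class head is assumed from the abstract.

```python
import numpy as np
import tensorflow as tf

base = tf.keras.applications.EfficientNetB0(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(4, activation="softmax")(x)  # Glioma/Meningioma/NoTumor/Pituitary
model = tf.keras.Model(base.input, out)

def grad_cam(model, image, conv_layer_name="top_conv", class_idx=None):
    grad_model = tf.keras.Model(
        model.input, [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image[np.newaxis])
        if class_idx is None:
            class_idx = tf.argmax(preds[0])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_maps)        # d(class score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # global-average-pool the gradients
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_maps, axis=-1)[0]
    cam = tf.nn.relu(cam)
    cam = cam / (tf.reduce_max(cam) + 1e-8)        # normalise to [0, 1]
    return cam.numpy()

heatmap = grad_cam(model, np.random.rand(224, 224, 3).astype("float32"))
print(heatmap.shape)   # (7, 7) for a 224 x 224 input; upsample to overlay on the MRI
```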
9.
Front Physiol ; 15: 1342572, 2024.
Article in English | MEDLINE | ID: mdl-39077759

ABSTRACT

Introduction: Brain tumors are abnormal cell growths in the brain, posing significant treatment challenges. Accurate early detection using non-invasive methods is crucial for effective treatment. This research focuses on improving the early detection of brain tumors in MRI images through advanced deep-learning techniques. The primary goal is to identify the most effective deep-learning model for classifying brain tumors from MRI data, enhancing diagnostic accuracy and reliability. Methods: The proposed method for brain tumor classification integrates segmentation using K-means++, feature extraction from the Spatial Gray Level Dependence Matrix (SGLDM), and classification with ResNet50, along with synthetic data augmentation to enhance model robustness. Segmentation isolates tumor regions, while SGLDM captures critical texture information. The ResNet50 model then classifies the tumors accurately. To further improve the interpretability of the classification results, Grad-CAM is employed, providing visual explanations by highlighting influential regions in the MRI images. Result: In terms of accuracy, sensitivity, and specificity, the evaluation on the Br35H::BrainTumorDetection2020 dataset showed superior performance of the suggested method compared to existing state-of-the-art approaches. This indicates its effectiveness in achieving higher precision in identifying and classifying brain tumors from MRI data, showcasing advancements in diagnostic reliability and efficacy. Discussion: The superior performance of the suggested method indicates its robustness in accurately classifying brain tumors from MRI images, achieving higher accuracy, sensitivity, and specificity compared to existing methods. The method's enhanced sensitivity ensures a greater detection rate of true positive cases, while its improved specificity reduces false positives, thereby optimizing clinical decision-making and patient care in neuro-oncology.
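A sketch of the K-means++ segmentation step from entry 9: cluster pixel intensities and keep the brightest cluster as the candidate tumour region (the cluster-selection rule is an assumption).

```python
import numpy as np
from sklearn.cluster import KMeans

mri = np.random.rand(256, 256)               # stand-in grayscale MRI slice
pixels = mri.reshape(-1, 1)

km = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0)
labels = km.fit_predict(pixels).reshape(mri.shape)

tumour_cluster = int(np.argmax(km.cluster_centers_))  # brightest cluster (assumed)
mask = labels == tumour_cluster
print("candidate tumour pixels:", int(mask.sum()))
```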

10.
Front Plant Sci ; 15: 1412988, 2024.
Article in English | MEDLINE | ID: mdl-39036360

ABSTRACT

Plant diseases significantly impact crop productivity and quality, posing a serious threat to global agriculture. The process of identifying and categorizing these diseases is often time-consuming and prone to errors. This research addresses this issue by employing a convolutional neural network and support vector machine (CNN-SVM) hybrid model to classify diseases in four economically important crops: strawberries, peaches, cherries, and soybeans. The objective is to categorize 10 classes of diseases, with six diseased classes and four healthy classes, for these crops using the deep learning-based CNN-SVM model. Several pre-trained models, including VGG16, VGG19, DenseNet, Inception, MobileNetV2, MobileNet, Xception, and ShuffleNet, were also trained, achieving accuracies ranging from 53.82% to 98.8%. The proposed model, however, achieved an average accuracy of 99.09%. While the proposed model's accuracy is comparable to that of the VGG16 pre-trained model, its significantly lower number of trainable parameters makes it more efficient and distinctive. This research demonstrates the potential of the CNN-SVM model in enhancing the accuracy and efficiency of plant disease classification. The CNN-SVM model was selected over VGG16 and other models due to its superior performance metrics. The proposed model achieved a 99% F1-score, a 99.98% Area Under the Curve (AUC), and a 99% precision value, demonstrating its efficacy. Additionally, class activation maps were generated using the Gradient Weighted Class Activation Mapping (Grad-CAM) technique to provide a visual explanation of the detected diseases. A heatmap was created to highlight the regions driving the classification, further validating the model's accuracy and interpretability.
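A hedged sketch of the CNN-SVM hybrid pattern from entry 10: a frozen CNN backbone produces features and an SVM makes the ten-class decision. The backbone choice, image size, and SVM hyperparameters are illustrative.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

backbone = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))

def extract_features(images):
    """Run the frozen CNN and return one feature vector per image."""
    x = tf.keras.applications.mobilenet_v2.preprocess_input(images)
    return backbone.predict(x, verbose=0)

# Stand-in leaf images and labels for the 10 disease/healthy classes.
X_train = np.random.rand(64, 224, 224, 3).astype("float32") * 255
y_train = np.random.randint(0, 10, size=64)

svm = SVC(kernel="rbf", C=10.0)
svm.fit(extract_features(X_train), y_train)
```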

11.
Sensors (Basel) ; 24(14)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39066028

ABSTRACT

This paper presents a deep learning approach for predicting rail corrugation based on on-board rolling-stock vertical acceleration and forward velocity measurements using One-Dimensional Convolutional Neural Networks (CNN-1D). The model's performance is examined in a 1:10 scale railway system at two different forward velocities. During both the training and test stages, the CNN-1D produced results with mean absolute percentage errors of less than 5% for both forward velocities, confirming its ability to reproduce the corrugation profile based on real-time acceleration and forward velocity measurements. Moreover, by using a Gradient-weighted Class Activation Mapping (Grad-CAM) technique, it is shown that the CNN-1D can distinguish various regions, including the transition from damaged to undamaged regions and one-sided or two-sided corrugated regions, while predicting corrugation. In summary, the results of this study reveal the potential of data-driven techniques such as CNN-1D in predicting rail corrugation using online data from the dynamics of the rolling-stock, which can lead to more reliable and efficient maintenance and repair of railways.
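A minimal Keras sketch of a CNN-1D regressor in the spirit of entry 11, mapping windows of vertical acceleration plus forward velocity to a corrugation estimate; the window length, channel layout, and filter sizes are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

window = 512   # assumed samples per input window
model = tf.keras.Sequential([
    layers.Input(shape=(window, 2)),       # channels: [acceleration, velocity]
    layers.Conv1D(32, 9, activation="relu", padding="same"),
    layers.MaxPooling1D(4),
    layers.Conv1D(64, 9, activation="relu", padding="same"),
    layers.MaxPooling1D(4),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                       # corrugation amplitude estimate
])
model.compile(optimizer="adam", loss="mae",
              metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])
```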

12.
Diagnostics (Basel) ; 14(14)2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39061704

ABSTRACT

Deep learning architectures like ResNet and Inception have produced accurate predictions for classifying benign and malignant tumors in the healthcare domain. This enables healthcare institutions to make data-driven decisions and potentially enables early detection of malignancy by employing computer-vision-based deep learning algorithms. These CNN algorithms, in addition to requiring huge amounts of data, can identify higher- and lower-level features that are significant while classifying tumors into benign or malignant. However, the existing literature is limited in terms of the explainability of the resultant classification, and identifying the exact features that are of importance, which is essential in the decision-making process for healthcare practitioners. Thus, the motivation of this work is to implement a custom classifier on the ovarian tumor dataset, which exhibits high classification performance, and subsequently to interpret the classification results qualitatively, using various Explainable AI methods, to identify which pixels or regions of interest are given the highest importance by the model for classification. The dataset comprises CT scanned images of ovarian tumors acquired in the axial, sagittal and coronal planes. State-of-the-art architectures, including a modified ResNet50 derived from the standard pre-trained ResNet50, are implemented in the paper. When compared to the existing state-of-the-art techniques, the proposed modified ResNet50 exhibited a classification accuracy of 97.5% on the test dataset without increasing the complexity of the architecture. The results were then interpreted using several explainable AI techniques. The results show that the shape and localized nature of the tumors play important roles in qualitatively determining the ability of the tumor to metastasize and thereafter to be classified as benign or malignant.

13.
Br J Hosp Med (Lond) ; 85(7): 1-13, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39078889

ABSTRACT

Aims/Background Cervical cancer continues to be a significant cause of cancer-related deaths among women, especially in low-resource settings where screening and follow-up care are lacking. The transcription factor zinc finger E-box-binding homeobox 2 (ZEB2) has been identified as a potential marker for tumour aggressiveness and cancer progression in cervical cancer tissues. Methods This study presents a hybrid deep learning system developed to classify cervical cancer images based on ZEB2 expression. The system integrates multiple convolutional neural network models (EfficientNet, DenseNet, and InceptionNet) using ensemble voting. We utilised the gradient-weighted class activation mapping (Grad-CAM) visualisation technique to improve the interpretability of the decisions made by the convolutional neural networks. The dataset consisted of 649 annotated images, which were divided into training, validation, and testing sets. Results The hybrid model exhibited a high classification accuracy of 94.4% on the test set. The Grad-CAM visualisations offered insights into the model's decision-making process, emphasising the image regions crucial for classifying ZEB2 expression levels. Conclusion The proposed hybrid deep learning model presents an effective and interpretable method for the classification of cervical cancer based on ZEB2 expression. This approach holds the potential to substantially aid in early diagnosis, thereby potentially enhancing patient outcomes and mitigating healthcare costs. Future endeavours will concentrate on enhancing the model's accuracy and investigating its applicability to other cancer types.


Subjects
Deep Learning , Uterine Cervical Neoplasms , Zinc Finger E-box Binding Homeobox 2 , Humans , Uterine Cervical Neoplasms/diagnosis , Uterine Cervical Neoplasms/pathology , Female , Zinc Finger E-box Binding Homeobox 2/metabolism , Neural Networks, Computer , Biomarkers, Tumor/metabolism
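A short sketch of the ensemble-voting idea from entry 13; soft voting (averaging softmax probabilities across the EfficientNet, DenseNet, and InceptionNet members) is assumed as the combination rule, since the abstract does not specify it.

```python
import numpy as np

def soft_vote(models, images):
    """Average softmax outputs of several Keras models, then take the argmax."""
    probs = [m.predict(images, verbose=0) for m in models]
    return np.mean(probs, axis=0).argmax(axis=1)

# models = [efficientnet_model, densenet_model, inceptionnet_model]
# y_pred = soft_vote(models, test_images)
```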
14.
Sci Rep ; 14(1): 16022, 2024 07 11.
Article in English | MEDLINE | ID: mdl-38992069

ABSTRACT

Crop diseases can significantly affect various aspects of crop cultivation, including crop yield, quality, production costs, and crop loss. The utilization of modern technologies such as image analysis via machine learning techniques enables early and precise detection of crop diseases, hence empowering farmers to effectively manage and avoid the occurrence of crop diseases. The proposed methodology involves the use of a modified MobileNetV3Large model deployed on an edge device for real-time monitoring of grape leaf disease while reducing computational memory demands and ensuring satisfactory classification performance. To enhance the applicability of MobileNetV3Large, custom layers consisting of two dense layers, each followed by a dropout layer, were added; this helped mitigate overfitting and ensured that the model remained efficient. Comparisons with other models showed that the proposed model outperformed them, with average train and test accuracies of 99.66% and 99.42%, and precision, recall, and F1 score of approximately 99.42%. The model was deployed on an edge device (Nvidia Jetson Nano) using a custom developed GUI app and predicted from both saved and real-time data with high confidence values. Grad-CAM visualization was used to identify and represent image areas that affect the convolutional neural network (CNN) classification decision-making process with high accuracy. This research contributes to the development of plant disease classification technologies for edge devices, which have the potential to enhance the ability of autonomous farming for farmers, agronomists, and researchers to monitor and mitigate plant diseases efficiently and effectively, with a positive impact on global food security.


Subjects
Agriculture , Neural Networks, Computer , Plant Diseases , Plant Leaves , Vitis , Agriculture/methods , Crops, Agricultural/growth & development , Image Processing, Computer-Assisted/methods , Machine Learning
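A Keras sketch matching entry 14's described modification: MobileNetV3Large with two added dense layers, each followed by a dropout layer. Unit counts, dropout rates, and the four-class head are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.MobileNetV3Large(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3),
    pooling="avg")
base.trainable = False   # transfer learning: freeze the backbone first

x = layers.Dense(256, activation="relu")(base.output)
x = layers.Dropout(0.3)(x)                 # dropout after the first dense layer
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.3)(x)                 # dropout after the second dense layer
outputs = layers.Dense(4, activation="softmax")(x)  # assumed grape-leaf classes

model = tf.keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```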
15.
Sci Rep ; 14(1): 17615, 2024 07 30.
Article in English | MEDLINE | ID: mdl-39080324

ABSTRACT

The process of brain tumour segmentation entails locating the tumour precisely in images. Magnetic Resonance Imaging (MRI) is typically used by doctors to find any brain tumours or tissue abnormalities. With the use of region-based Convolutional Neural Network (R-CNN) masks, Grad-CAM and transfer learning, this work offers an effective method for the detection of brain tumours. Helping doctors make extremely accurate diagnoses is the goal. A transfer learning-based model has been suggested that offers high sensitivity and accuracy scores for brain tumour detection when segmentation is done using R-CNN masks. To train the model, the Inception V3, VGG-16, and ResNet-50 architectures were utilised. The Brain MRI Images for Brain Tumour Detection dataset was utilised to develop this method. This work's performance is evaluated and reported in terms of recall, specificity, sensitivity, accuracy, precision, and F1 score. A thorough analysis has been done comparing the proposed model operating with three distinct architectures: VGG-16, Inception V3, and ResNet-50. The proposed model, which is based on VGG-16, was also benchmarked against related works. Using this approach, an accuracy and sensitivity of around 99% were obtained, considerably higher than in existing efforts.


Subjects
Brain Neoplasms , Magnetic Resonance Imaging , Neural Networks, Computer , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Algorithms , Sensitivity and Specificity
16.
Acad Radiol ; 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38871552

ABSTRACT

RATIONALE AND OBJECTIVES: To develop a deep learning radiomics graph network (DLRN) that integrates deep learning features extracted from gray scale ultrasonography, radiomics features and clinical features, for distinguishing parotid pleomorphic adenoma (PA) from adenolymphoma (AL). MATERIALS AND METHODS: A total of 287 patients (162 in the training cohort, 70 in the internal validation cohort and 55 in the external validation cohort) from two centers with histologically confirmed PA or AL were enrolled. Deep transfer learning features and radiomics features extracted from gray scale ultrasound images were input to machine learning classifiers including logistic regression (LR), support vector machines (SVM), KNN, RandomForest (RF), ExtraTrees, XGBoost, LightGBM, and MLP to construct deep transfer learning (DTL) models and radiomics (Rad) models, respectively. Deep learning radiomics (DLR) models were constructed by integrating the two feature sets, and DLR signatures were generated. Clinical features were further combined with the signatures to develop a DLRN model. The performance of these models was evaluated using receiver operating characteristic (ROC) curve analysis, calibration, decision curve analysis (DCA), and the Hosmer-Lemeshow test. RESULTS: In the internal and external validation cohorts, compared to the Clinic (AUC=0.767 and 0.777), Rad (AUC=0.841 and 0.748), DTL (AUC=0.740 and 0.825) and DLR (AUC=0.863 and 0.859) models, the DLRN model showed the greatest discriminatory ability (AUC=0.908 and 0.908). CONCLUSION: The DLRN model built on gray scale ultrasonography significantly improved the diagnostic performance for benign salivary gland tumors. It can provide clinicians with a non-invasive and accurate diagnostic approach, which holds important clinical significance and value. Ensembling multiple models helped alleviate overfitting on the small dataset compared with using ResNet50 alone.
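The fusion idea in entry 16 can be sketched with scikit-learn: concatenate deep features and radiomics features, then fit one of the listed classifiers (logistic regression here). Feature dimensions and labels below are stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in feature matrices for the 162 training patients (PA=0, AL=1 assumed).
deep_features = np.random.rand(162, 512)
radiomics_features = np.random.rand(162, 100)
y = np.random.randint(0, 2, size=162)

X = np.hstack([deep_features, radiomics_features])   # DLR-style feature fusion
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X, y)
print("training AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```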

17.
J Biomed Inform ; 156: 104673, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38862083

ABSTRACT

OBJECTIVE: Pneumothorax is an acute thoracic disease caused by abnormal air collection between the lungs and chest wall. Recently, artificial intelligence (AI), especially deep learning (DL), has been increasingly employed for automating the diagnostic process of pneumothorax. To address the opaqueness often associated with DL models, explainable artificial intelligence (XAI) methods have been introduced to outline regions related to pneumothorax. However, these explanations sometimes diverge from actual lesion areas, highlighting the need for further improvement. METHOD: We propose a template-guided approach to incorporate the clinical knowledge of pneumothorax into model explanations generated by XAI methods, thereby enhancing the quality of the explanations. Utilizing one lesion delineation created by radiologists, our approach first generates a template that represents potential areas of pneumothorax occurrence. This template is then superimposed on model explanations to filter out extraneous explanations that fall outside the template's boundaries. To validate its efficacy, we carried out a comparative analysis of three XAI methods (Saliency Map, Grad-CAM, and Integrated Gradients) with and without our template guidance when explaining two DL models (VGG-19 and ResNet-50) in two real-world datasets (SIIM-ACR and ChestX-Det). RESULTS: The proposed approach consistently improved baseline XAI methods across twelve benchmark scenarios built on three XAI methods, two DL models, and two datasets. The average incremental percentages, calculated by the performance improvements over the baseline performance, were 97.8% in Intersection over Union (IoU) and 94.1% in Dice Similarity Coefficient (DSC) when comparing model explanations and ground-truth lesion areas. We further visualized baseline and template-guided model explanations on radiographs to showcase the performance of our approach. CONCLUSIONS: In the context of pneumothorax diagnoses, we proposed a template-guided approach for improving model explanations. Our approach not only aligns model explanations more closely with clinical insights but also exhibits extensibility to other thoracic diseases. We anticipate that our template guidance will forge a novel approach to elucidating AI models by integrating clinical domain expertise.


Subjects
Artificial Intelligence , Deep Learning , Pneumothorax , Humans , Pneumothorax/diagnostic imaging , Algorithms , Tomography, X-Ray Computed/methods , Medical Informatics/methods
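A NumPy sketch of entry 17's template-guided filtering: zero out explanation values outside a binary anatomy template, then score the result against the lesion mask with IoU and Dice. The binarisation rule (keep the top 5% of pixels) is an assumption.

```python
import numpy as np

def template_filter(explanation, template):
    """Keep only explanation values inside the template region."""
    return explanation * template

def iou_and_dice(explanation, lesion_mask, keep_fraction=0.05):
    k = int(keep_fraction * explanation.size)
    thresh = np.partition(explanation.ravel(), -k)[-k]   # top-k cut-off
    pred = explanation >= thresh
    inter = np.logical_and(pred, lesion_mask).sum()
    union = np.logical_or(pred, lesion_mask).sum()
    return inter / union, 2 * inter / (pred.sum() + lesion_mask.sum())

explanation = np.random.rand(224, 224)                          # e.g. a Grad-CAM heatmap
template = np.zeros((224, 224)); template[40:180, 20:200] = 1   # stand-in template
lesion = np.zeros((224, 224), dtype=bool); lesion[60:120, 50:150] = True

filtered = template_filter(explanation, template)
print(iou_and_dice(filtered, lesion))    # (IoU, Dice)
```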
18.
Front Plant Sci ; 15: 1403713, 2024.
Article in English | MEDLINE | ID: mdl-38911981

ABSTRACT

Introduction: Blackheart is one of the most common physiological diseases in potatoes during storage. In the initial stage, black spots only occur in tissues near the potato core and cannot be detected from the outward appearance. If not identified and removed in time, the disease will seriously undermine the quality and sale of the entire batch of potatoes. There is an urgent need to develop a method for early detection of blackheart in potatoes. Methods: This paper used visible-near infrared (Vis/NIR) spectroscopy to conduct online discriminant analysis on potatoes with varying degrees of blackheart and healthy potatoes to achieve real-time detection. An efficient and lightweight detection model was developed for detecting different degrees of blackheart in potatoes by introducing depthwise convolution, pointwise convolution, and efficient channel attention modules into the ResNet model. Two discriminative models, a support vector machine (SVM) and the original ResNet model, were compared with the modified ResNet model. Results and discussion: The prediction accuracy for the blackheart and healthy potato test sets reached 0.971 using the original spectrum combined with the modified ResNet model. Moreover, the modified ResNet model significantly reduced the number of parameters to 1,434,052, achieving a substantial 62.71% reduction in model complexity, while improving accuracy by 4.18%. The Grad-CAM++ visualizations provided a qualitative assessment of the model's focus across different severity grades of the blackheart condition, highlighting the importance of different wavelengths in the analysis. In these visualizations, the most significant features were predominantly found in the 650-750 nm range, with a notable peak near 700 nm. This peak was speculated to be associated with the vibrational activities of the C-H bond, specifically the fourth overtone of the C-H functional group, within the molecular structure of the potato components. This research demonstrated that the modified ResNet model combined with Vis/NIR spectroscopy could assist in the detection of different degrees of blackheart in potatoes.
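A hedged Keras sketch of the building blocks named in entry 18: a depthwise + pointwise convolution pair followed by an efficient channel attention (ECA) module. It is written in 2D form for generality; the paper applies these ideas inside a ResNet on Vis/NIR spectra, and all sizes here are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def eca(x, k=3):
    """Efficient channel attention: a 1D conv over the pooled channel descriptor."""
    c = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)           # (batch, C)
    s = layers.Reshape((c, 1))(s)
    s = layers.Conv1D(1, k, padding="same", activation="sigmoid")(s)
    s = layers.Reshape((1, 1, c))(s)                 # per-channel weights
    return layers.Multiply()([x, s])

def ds_conv_block(x, filters):
    x = layers.DepthwiseConv2D(3, padding="same", activation="relu")(x)  # depthwise
    x = layers.Conv2D(filters, 1, activation="relu")(x)                  # pointwise
    return eca(x)

inputs = layers.Input(shape=(64, 64, 1))
x = ds_conv_block(inputs, 32)
x = layers.MaxPooling2D()(x)
x = ds_conv_block(x, 64)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(3, activation="softmax")(x)   # assumed severity grades
model = tf.keras.Model(inputs, outputs)
```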

19.
IEEE J Transl Eng Health Med ; 12: 457-467, 2024.
Article in English | MEDLINE | ID: mdl-38899144

ABSTRACT

OBJECTIVE: Pulmonary cavity lesion is one of the commonly seen lesions in lung caused by a variety of malignant and non-malignant diseases. Diagnosis of a cavity lesion is commonly based on accurate recognition of the typical morphological characteristics. A deep learning-based model to automatically detect, segment, and quantify the region of cavity lesion on CT scans has potential in clinical diagnosis, monitoring, and treatment efficacy assessment. METHODS: A weakly-supervised deep learning-based method named CSA2-ResNet was proposed to quantitatively characterize cavity lesions in this paper. The lung parenchyma was firstly segmented using a pretrained 2D segmentation model, and then the output with or without cavity lesions was fed into the developed deep neural network containing hybrid attention modules. Next, the visualized lesion was generated from the activation region of the classification network using gradient-weighted class activation mapping, and image processing was applied for post-processing to obtain the expected segmentation results of cavity lesions. Finally, the automatic characteristic measurement of cavity lesions (e.g., area and thickness) was developed and verified. RESULTS: The proposed weakly-supervised segmentation method achieved an accuracy, precision, specificity, recall, and F1-score of 98.48%, 96.80%, 97.20%, 100%, and 98.36%, respectively. This represents a significant improvement (P < 0.05) over other methods. Quantitative characterization of morphology also yielded good results. CONCLUSIONS: The proposed easily-trained and high-performance deep learning model provides a fast and effective way for the diagnosis and dynamic monitoring of pulmonary cavity lesions in clinic. Clinical and Translational Impact Statement: This model used artificial intelligence to achieve the detection and quantitative analysis of pulmonary cavity lesions in CT scans. The morphological features revealed in experiments can be utilized as potential indicators for diagnosis and dynamic monitoring of patients with cavity lesions.


Subjects
Deep Learning , Lung , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Lung/diagnostic imaging , Lung/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Lung Diseases/diagnostic imaging , Lung Diseases/pathology , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Neural Networks, Computer , Supervised Machine Learning , Algorithms
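A sketch of the weakly-supervised segmentation step in entry 19: turn a class-activation heatmap into a lesion mask by thresholding and morphological clean-up, then measure the lesion. The threshold and kernel size are assumptions.

```python
import cv2
import numpy as np

def cam_to_mask(cam, image_shape, threshold=0.4):
    """cam: activation heatmap in [0, 1] at feature-map resolution."""
    cam = cv2.resize(cam, (image_shape[1], image_shape[0]))
    mask = (cam >= threshold).astype(np.uint8)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask

cam = np.random.rand(16, 16).astype(np.float32)   # stand-in activation map
mask = cam_to_mask(cam, (512, 512))
print("lesion area (pixels):", int(mask.sum()))   # e.g. cavity area measurement
```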
20.
Sci Rep ; 14(1): 14276, 2024 06 20.
Article in English | MEDLINE | ID: mdl-38902523

ABSTRACT

Several studies have emphasised how human papillomavirus-positive and -negative (HPV+ and HPV-, respectively) oropharyngeal squamous cell carcinomas (OPSCC) have distinct molecular profiles, tumor characteristics, and disease outcomes. Different radiomics-based prediction models have been proposed, also using innovative techniques such as Convolutional Neural Networks (CNNs). Although some of these models reached encouraging predictive performances, evidence explaining the role of radiomic features in achieving a specific outcome is scarce. In this paper, we propose some preliminary results related to an explainable CNN-based model to predict HPV status in OPSCC patients. We extracted the Gross Tumor Volume (GTV) of pre-treatment CT images related to 499 patients (356 HPV+ and 143 HPV-) included in the OPC-Radiomics public dataset to train an end-to-end Inception-V3 CNN architecture. We also collected a multicentric dataset consisting of 92 patients (43 HPV+, 49 HPV-), which was employed as an independent test set. Finally, we applied the Gradient-weighted Class Activation Mapping (Grad-CAM) technique to highlight the most informative areas with respect to the predicted outcome. The proposed model reached an AUC value of 73.50% on the independent test set. According to the Grad-CAM algorithm, the most informative areas for correctly classified HPV+ patients were located in the intratumoral area, whereas for HPV- patients the most important areas were at the tumor edges. Finally, because the proposed model supplements the classification with a visualization of the areas of greatest predictive interest for each case examined, it could help increase confidence in using computer-based predictive models in actual clinical practice.


Subjects
Neural Networks, Computer , Oropharyngeal Neoplasms , Papillomavirus Infections , Tomography, X-Ray Computed , Humans , Oropharyngeal Neoplasms/virology , Oropharyngeal Neoplasms/diagnostic imaging , Oropharyngeal Neoplasms/pathology , Tomography, X-Ray Computed/methods , Papillomavirus Infections/diagnostic imaging , Papillomavirus Infections/virology , Papillomavirus Infections/pathology , Male , Female , Papillomaviridae , Middle Aged , Aged , Carcinoma, Squamous Cell/diagnostic imaging , Carcinoma, Squamous Cell/virology , Carcinoma, Squamous Cell/pathology , Squamous Cell Carcinoma of Head and Neck/virology , Squamous Cell Carcinoma of Head and Neck/diagnostic imaging , Squamous Cell Carcinoma of Head and Neck/pathology , Tumor Burden , Human Papillomavirus