Results 1 - 20 of 138
1.
Br J Haematol ; 205(4): 1590-1598, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39024119

ABSTRACT

Palpebral conjunctival hue alteration is used in non-invasive screening for anaemia, although it is only a qualitative measure. This study constructed machine/deep learning models for predicting haemoglobin values using 150 palpebral conjunctival images taken with a smartphone. The median haemoglobin value was 13.1 g/dL, and 10 patients had values below 11 g/dL. A segmentation model using U-Net was successfully constructed. The segmented images were fed to non-convolutional-neural-network (CNN)-based and CNN-based regression models for predicting haemoglobin values. The correlation coefficients between actual and predicted haemoglobin values were 0.38 and 0.44 for the non-CNN-based and CNN-based models, respectively. The sensitivity and specificity for anaemia detection were 13% and 98% for the non-CNN-based model and 20% and 99% for the CNN-based model. The performance of the CNN-based model did not improve with a mask layer guiding the model's attention towards the conjunctival regions; however, it improved slightly with correction for the aspect ratio and exposure time of input images. The gradient-weighted class activation mapping heatmap indicated that the lower half of the conjunctiva was crucial for haemoglobin value prediction. In conclusion, the CNN-based model outperformed the non-CNN-based model. Prediction accuracy would likely improve with more input data from anaemic patients.
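A minimal sketch of the CNN-based regression stage described above, assuming PyTorch; the abstract does not name its backbone, so the ResNet-18 and all hyperparameters here are illustrative only:

```python
# Hypothetical CNN regression head for haemoglobin prediction (g/dL) from a
# segmented conjunctival image; backbone choice and learning rate are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class HbRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights="IMAGENET1K_V1")
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)  # one continuous output

    def forward(self, x):                      # x: (B, 3, H, W) segmented image
        return self.backbone(x).squeeze(1)     # predicted haemoglobin in g/dL

model = HbRegressor()
criterion = nn.MSELoss()                       # regress against measured haemoglobin
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```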


Subjects
Anemia, Conjunctiva, Deep Learning, Hemoglobins, Humans, Conjunctiva/pathology, Hemoglobins/analysis, Female, Male, Middle Aged, Anemia/diagnosis, Anemia/blood, Adult, Aged
2.
J Biomed Inform ; 156: 104673, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38862083

ABSTRACT

OBJECTIVE: Pneumothorax is an acute thoracic disease caused by abnormal air collection between the lungs and chest wall. Recently, artificial intelligence (AI), especially deep learning (DL), has been increasingly employed to automate the diagnosis of pneumothorax. To address the opaqueness often associated with DL models, explainable artificial intelligence (XAI) methods have been introduced to outline regions related to pneumothorax. However, these explanations sometimes diverge from actual lesion areas, highlighting the need for further improvement. METHOD: We propose a template-guided approach that incorporates clinical knowledge of pneumothorax into the model explanations generated by XAI methods, thereby enhancing their quality. Utilizing one lesion delineation created by radiologists, our approach first generates a template that represents potential areas of pneumothorax occurrence. This template is then superimposed on model explanations to filter out extraneous explanations that fall outside the template's boundaries. To validate its efficacy, we carried out a comparative analysis of three XAI methods (Saliency Map, Grad-CAM, and Integrated Gradients) with and without our template guidance when explaining two DL models (VGG-19 and ResNet-50) on two real-world datasets (SIIM-ACR and ChestX-Det). RESULTS: The proposed approach consistently improved baseline XAI methods across twelve benchmark scenarios built on three XAI methods, two DL models, and two datasets. The average incremental percentages, calculated as the improvement over baseline performance, were 97.8% in Intersection over Union (IoU) and 94.1% in Dice Similarity Coefficient (DSC) when comparing model explanations with ground-truth lesion areas. We further visualized baseline and template-guided model explanations on radiographs to showcase the performance of our approach. CONCLUSIONS: In the context of pneumothorax diagnosis, we proposed a template-guided approach for improving model explanations. Our approach not only aligns model explanations more closely with clinical insights but also exhibits extensibility to other thoracic diseases. We anticipate that our template guidance will forge a novel approach to elucidating AI models by integrating clinical domain expertise.
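The filtering step lends itself to a short sketch: keep only the saliency that falls inside the template, then score the result against the lesion mask with IoU and DSC. This is a toy reconstruction with stand-in arrays, not the authors' code:

```python
# Template-guided filtering: zero out saliency outside a binary template of
# plausible pneumothorax locations, then score against the ground-truth mask.
import numpy as np

def template_filter(saliency: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Keep only explanations inside the template (both arrays HxW)."""
    return saliency * (template > 0)

def iou(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a > 0, b > 0
    return np.logical_and(a, b).sum() / max(np.logical_or(a, b).sum(), 1)

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a > 0, b > 0
    return 2 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

# Stand-in data for illustration; real inputs come from the XAI method,
# the radiologist-derived template, and the annotated lesion mask.
rng = np.random.default_rng(0)
saliency_map = rng.random((256, 256))
lesion_template = np.zeros((256, 256)); lesion_template[40:120, 30:200] = 1
gt_mask = np.zeros((256, 256)); gt_mask[60:100, 50:150] = 1

filtered = template_filter(saliency_map, lesion_template)
print(iou(filtered > 0.5, gt_mask), dice(filtered > 0.5, gt_mask))
```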


Subjects
Artificial Intelligence, Deep Learning, Pneumothorax, Humans, Pneumothorax/diagnostic imaging, Algorithms, Tomography, X-Ray Computed/methods, Medical Informatics/methods
3.
Cereb Cortex ; 33(6): 2415-2425, 2023 03 10.
Article in English | MEDLINE | ID: mdl-35641181

ABSTRACT

Major depressive disorder (MDD) is the second leading cause of disability worldwide. Currently, structural magnetic resonance imaging-based MDD diagnosis models mainly utilize local grayscale information or morphological characteristics from a single site with small samples. Emerging evidence has demonstrated that different brain structures in different circuits have distinct developmental timing but mature in coordination within the same functional circuit. Thus, establishing an attention-guided unified classification framework with deep learning and individual structural covariance networks in a large multisite dataset could facilitate developing an accurate diagnosis strategy. Our results showed that attention-guided classification improved the classification accuracy from 75.1% to 76.54%. Furthermore, the discriminative features of regional covariance connectivities and local structural characteristics were found to be mainly located in the prefrontal cortex, insula, superior temporal cortex, and cingulate cortex, which have been widely reported to be closely associated with depression. Our study demonstrated that our attention-guided unified deep learning framework may be an effective tool for MDD diagnosis. The identified covariance connectivities and structural features may serve as biomarkers for MDD.


Subjects
Depressive Disorder, Major, Humans, Brain, Magnetic Resonance Imaging, Attention, Neural Networks, Computer
4.
BMC Med Imaging ; 24(1): 230, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223507

ABSTRACT

Breast cancer is a leading cause of mortality among women globally, necessitating precise classification of breast ultrasound images for early diagnosis and treatment. Traditional methods using CNN architectures such as VGG, ResNet, and DenseNet, though somewhat effective, often struggle with class imbalances and subtle texture variations, leading to reduced accuracy for minority classes such as malignant tumors. To address these issues, we propose a methodology that leverages EfficientNet-B7, a scalable CNN architecture, combined with advanced data augmentation techniques to enhance minority class representation and improve model robustness. Our approach involves fine-tuning EfficientNet-B7 on the BUSI dataset, applying RandomHorizontalFlip, RandomRotation, and ColorJitter to balance the dataset. The training process includes early stopping to prevent overfitting and optimize performance metrics. Additionally, we integrate Explainable AI (XAI) techniques, such as Grad-CAM, to enhance the interpretability and transparency of the model's predictions, providing visual and quantitative insights into the features and regions of ultrasound images that influence classification outcomes. Our model achieves a classification accuracy of 99.14%, significantly outperforming existing CNN-based approaches to breast ultrasound image classification. The incorporation of XAI techniques enhances our understanding of the model's decision-making process, thereby increasing its reliability and facilitating clinical adoption. This comprehensive framework offers a robust and interpretable tool for the early detection and diagnosis of breast cancer, advancing the capabilities of automated diagnostic systems and supporting clinical decision-making.
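A minimal sketch of the described fine-tuning setup, assuming PyTorch/torchvision; the augmentation operators follow the abstract, while rotation degrees, jitter strengths, the three-class head (benign/malignant/normal, as in BUSI), and the patience value are assumptions:

```python
# Fine-tuning EfficientNet-B7 with the augmentations named in the abstract.
import torch.nn as nn
from torchvision import models, transforms

train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),                     # degrees are an assumption
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.Resize((600, 600)),                     # EfficientNet-B7 native input size
    transforms.ToTensor(),
])

model = models.efficientnet_b7(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 3)  # benign/malignant/normal

# Early-stopping bookkeeping, sketched as it would sit inside the training loop:
best_val, patience, bad_epochs = float("inf"), 5, 0
#   if val_loss < best_val: best_val, bad_epochs = val_loss, 0
#   else: bad_epochs += 1
#   if bad_epochs >= patience: break    # stop before overfitting sets in
```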


Subjects
Breast Neoplasms, Ultrasonography, Mammary, Humans, Breast Neoplasms/diagnostic imaging, Female, Ultrasonography, Mammary/methods, Image Interpretation, Computer-Assisted/methods, Neural Networks, Computer, Artificial Intelligence
5.
BMC Med Imaging ; 24(1): 107, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38734629

ABSTRACT

This study addresses the critical challenge of detecting brain tumors using MRI images, a pivotal task in medical diagnostics that demands high accuracy and interpretability. While deep learning has shown remarkable success in medical image analysis, there remains a substantial need for models that are not only accurate but also interpretable to healthcare professionals. The existing methodologies, predominantly deep learning-based, often act as black boxes, providing little insight into their decision-making process. This research introduces an integrated approach using ResNet50, a deep learning model, combined with Gradient-weighted Class Activation Mapping (Grad-CAM) to offer a transparent and explainable framework for brain tumor detection. We employed a dataset of MRI images, enhanced through data augmentation, to train and validate our model. The results demonstrate a significant improvement in model performance, with a testing accuracy of 98.52% and precision-recall metrics exceeding 98%, showcasing the model's effectiveness in distinguishing tumor presence. The application of Grad-CAM provides insightful visual explanations, illustrating the model's focus areas in making predictions. This fusion of high accuracy and explainability holds profound implications for medical diagnostics, offering a pathway towards more reliable and interpretable brain tumor detection tools.
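Grad-CAM itself is compact enough to sketch. Assuming PyTorch and a torchvision ResNet50 (stand-ins for the study's trained model), hooks capture the last convolutional block's activations and gradients, which are pooled into channel weights and combined into a heatmap:

```python
# Compact Grad-CAM for a ResNet50 classifier; the random tensor stands in for
# a preprocessed MRI slice and the ImageNet weights for the trained tumor model.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1").eval()
acts, grads = {}, {}

def fwd_hook(_, __, output): acts["v"] = output.detach()
def bwd_hook(_, grad_in, grad_out): grads["v"] = grad_out[0].detach()

layer = model.layer4[-1]                          # last residual block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                   # stand-in input image
score = model(x)[0].max()                         # logit of the predicted class
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
cam = F.relu((weights * acts["v"]).sum(dim=1))        # weighted sum over channels
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```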


Subjects
Brain Neoplasms, Deep Learning, Magnetic Resonance Imaging, Humans, Brain Neoplasms/diagnostic imaging, Magnetic Resonance Imaging/methods, Image Interpretation, Computer-Assisted/methods
6.
BMC Med Imaging ; 24(1): 118, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773391

ABSTRACT

Brain tumor diagnosis using MRI scans poses significant challenges due to the complex nature of tumor appearances and variations. Traditional methods often require extensive manual intervention and are prone to human error, leading to misdiagnosis and delayed treatment. Current approaches primarily include manual examination by radiologists and conventional machine learning techniques. These methods rely heavily on feature extraction and classification algorithms, which may not capture the intricate patterns present in brain MRI images. Conventional techniques often suffer from limited accuracy and generalizability, mainly due to the high variability in tumor appearance and the subjective nature of manual interpretation. Additionally, traditional machine learning models may struggle with the high-dimensional data inherent in MRI images. To address these limitations, our research introduces a deep learning-based model utilizing convolutional neural networks (CNNs). Our model employs a sequential CNN architecture with multiple convolutional, max-pooling, and dropout layers, followed by dense layers for classification. The proposed model demonstrates a significant improvement in diagnostic accuracy, achieving an overall accuracy of 98% on the test dataset. Precision, recall, and F1-scores range from 97% to 98%, and ROC-AUC from 99% to 100%, for each tumor category, further substantiating the model's effectiveness. Additionally, Grad-CAM visualizations provide insights into the model's decision-making process, enhancing interpretability. This research addresses the pressing need for enhanced diagnostic accuracy in identifying brain tumors through MRI imaging, tackling challenges such as variability in tumor appearance and the need for rapid, reliable diagnostic tools.
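A sketch of a sequential CNN of the kind described, assuming PyTorch; channel counts, dropout rates, input size, and the four-class head are illustrative, not the paper's exact configuration:

```python
# Sequential CNN: stacked conv/max-pool blocks, dropout, then dense layers.
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Dropout(0.25),
    nn.Flatten(),
    nn.Linear(128 * 28 * 28, 256), nn.ReLU(),   # assumes 224x224 inputs
    nn.Dropout(0.5),
    nn.Linear(256, 4),                          # e.g. glioma/meningioma/pituitary/no tumour
)
```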


Subjects
Brain Neoplasms, Deep Learning, Magnetic Resonance Imaging, Neural Networks, Computer, Humans, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/classification, Magnetic Resonance Imaging/methods, Algorithms, Image Interpretation, Computer-Assisted/methods, Male, Female
7.
Skin Res Technol ; 30(9): e70040, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39221858

ABSTRACT

BACKGROUND: Skin cancer is one of the most commonly occurring diseases in humans. Early detection and treatment are essential to reduce the malignancy of lesions. Deep learning techniques are supplementary tools that assist clinical experts in detecting and localizing skin lesions. Vision transformer (ViT)-based multiclass image classification provides fairly accurate detection and is gaining popularity owing to its legitimate multiclass prediction capabilities. MATERIALS AND METHODS: In this research, we propose a new ViT and Gradient-weighted Class Activation Mapping (Grad-CAM)-based architecture, named ViT-GradCAM, for detecting and classifying skin lesions by the spreading ratio on the lesion's surface area. The proposed system is trained and validated on the HAM10000 dataset, covering seven types of skin lesion. The database comprises 10,015 dermatoscopic images of varied sizes. Data preprocessing and data augmentation techniques are applied to overcome class imbalance and improve the model's performance. RESULTS: The proposed ViT-based algorithm classifies the dermatoscopic images into seven classes with an accuracy of 97.28%, precision of 98.51%, recall of 95.2%, and F1 score of 94.6%. The proposed ViT-GradCAM obtains better and more accurate detection and classification than other state-of-the-art deep learning-based skin lesion detection models. The architecture of ViT-GradCAM is extensively visualized to highlight the actual pixels in essential regions associated with skin-specific pathologies. CONCLUSION: This research proposes an alternate solution for detecting and classifying skin lesions using ViTs and Grad-CAM, which play a significant role in classifying skin lesions accurately rather than relying solely on opaque deep learning models.


Subjects
Algorithms, Deep Learning, Dermoscopy, Skin Neoplasms, Humans, Dermoscopy/methods, Skin Neoplasms/diagnostic imaging, Skin Neoplasms/classification, Skin Neoplasms/pathology, Image Interpretation, Computer-Assisted/methods, Databases, Factual, Skin/diagnostic imaging, Skin/pathology
8.
BMC Musculoskelet Disord ; 25(1): 250, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38561697

ABSTRACT

BACKGROUND: Ankle fractures are prevalent injuries that necessitate precise diagnostic tools. Traditional diagnostic methods have limitations that can be addressed using machine learning techniques, with the potential to improve accuracy and expedite diagnoses. METHODS: We trained various deep learning architectures, notably the Adapted ResNet50 with SENet capabilities, to identify ankle fractures using a curated dataset of radiographic images. Model performance was evaluated using common metrics like accuracy, precision, and recall. Additionally, Grad-CAM visualizations were employed to interpret model decisions. RESULTS: The Adapted ResNet50 with SENet capabilities consistently outperformed other models, achieving an accuracy of 93%, AUC of 95%, and recall of 92%. Grad-CAM visualizations provided insights into areas of the radiographs that the model deemed significant in its decisions. CONCLUSIONS: The Adapted ResNet50 model enhanced with SENet capabilities demonstrated superior performance in detecting ankle fractures, offering a promising tool to complement traditional diagnostic methods. However, continuous refinement and expert validation are essential to ensure optimal application in clinical settings.
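The SENet adaptation refers to squeeze-and-excitation blocks inserted into the ResNet stages. A minimal SE block, assuming PyTorch (the reduction ratio of 16 follows the original SENet paper, not this study):

```python
# Squeeze-and-excitation block: globally pool per channel, learn channel
# weights through a small bottleneck MLP, and reweight the feature map.
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # excitation: reweight channels
```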


Subjects
Ankle Fractures, Humans, Ankle Fractures/diagnostic imaging, Benchmarking, Machine Learning
9.
BMC Med Inform Decis Mak ; 24(1): 222, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39112991

ABSTRACT

Lung and colon cancers are leading contributors to cancer-related fatalities globally, distinguished by unique histopathological traits discernible through medical imaging. Effective classification of these cancers is critical for accurate diagnosis and treatment. This study addresses critical challenges in the diagnostic imaging of lung and colon cancers, which are among the leading causes of cancer-related deaths worldwide. Recognizing the limitations of existing diagnostic methods, which often suffer from overfitting and poor generalizability, our research introduces a novel deep learning framework that synergistically combines the Xception and MobileNet architectures. This ensemble model aims to enhance feature extraction, improve model robustness, and reduce overfitting. Our methodology involves training the hybrid model on a comprehensive dataset of histopathological images, followed by validation against a balanced test set. The results demonstrate an impressive classification accuracy of 99.44%, with perfect precision and recall in identifying certain cancerous and non-cancerous tissues, marking a significant improvement over traditional approaches. The practical implications of these findings are profound. By integrating Gradient-weighted Class Activation Mapping (Grad-CAM), the model offers enhanced interpretability, allowing clinicians to visualize the diagnostic reasoning process. This transparency is vital for clinical acceptance and enables more personalized, accurate treatment planning. Our study not only pushes the boundaries of medical imaging technology but also sets the stage for future research aimed at extending these techniques to other types of cancer diagnostics.
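A hedged sketch of such an ensemble: features from Xception and MobileNet backbones are concatenated and classified jointly. This assumes the timm library (`legacy_xception` and `mobilenetv2_100` are timm's model identifiers); the concatenation strategy and the five-class head (e.g., the LC25000 tissue classes) are assumptions, not the paper's stated design:

```python
# Two-backbone feature fusion: extract pooled features from each network,
# concatenate, and classify with a shared linear head.
import torch
import torch.nn as nn
import timm

class FusionNet(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.xception = timm.create_model("legacy_xception", pretrained=True, num_classes=0)
        self.mobilenet = timm.create_model("mobilenetv2_100", pretrained=True, num_classes=0)
        dim = self.xception.num_features + self.mobilenet.num_features
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        feats = torch.cat([self.xception(x), self.mobilenet(x)], dim=1)
        return self.head(feats)
```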


Subjects
Colonic Neoplasms, Deep Learning, Lung Neoplasms, Humans, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/classification, Colonic Neoplasms/diagnostic imaging, Colonic Neoplasms/classification, Artificial Intelligence
10.
Sensors (Basel) ; 24(14)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39066028

ABSTRACT

This paper presents a deep learning approach for predicting rail corrugation from on-board rolling-stock vertical acceleration and forward velocity measurements using One-Dimensional Convolutional Neural Networks (CNN-1D). The model's performance is examined in a 1:10 scale railway system at two different forward velocities. During both the training and test stages, the CNN-1D produced results with mean absolute percentage errors of less than 5% for both forward velocities, confirming its ability to reproduce the corrugation profile from real-time acceleration and forward velocity measurements. Moreover, using a Gradient-weighted Class Activation Mapping (Grad-CAM) technique, it is shown that the CNN-1D can distinguish various regions, including the transition from damaged to undamaged regions and one-sided or two-sided corrugated regions, while predicting corrugation. In summary, the results of this study reveal the potential of data-driven techniques such as CNN-1D for predicting rail corrugation from online rolling-stock dynamics data, which can lead to more reliable and efficient maintenance and repair of railways.
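A minimal sketch of a CNN-1D regressor of this kind, assuming PyTorch: a window of vertical-acceleration samples passes through 1D convolutions, the forward velocity is concatenated to the pooled features, and a dense head outputs a corrugation value. All layer sizes and the window length are assumptions:

```python
# 1D-CNN regression from an acceleration window plus a scalar forward velocity.
import torch
import torch.nn as nn

class Corrugation1DCNN(nn.Module):
    def __init__(self, window: int = 512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.fc = nn.Sequential(
            nn.Linear(32 * (window // 4) + 1, 64), nn.ReLU(),  # +1 for velocity
            nn.Linear(64, 1),
        )

    def forward(self, acc, vel):                 # acc: (B, 1, window), vel: (B, 1)
        f = self.conv(acc).flatten(1)            # (B, 32 * window // 4)
        return self.fc(torch.cat([f, vel], dim=1)).squeeze(1)

# Trained with nn.MSELoss() and evaluated with mean absolute percentage error.
```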

11.
Sensors (Basel) ; 24(6)2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38544204

ABSTRACT

The advancement of deep learning in human activity recognition (HAR) using 3D skeleton data is critical for applications in healthcare, security, sports, and human-computer interaction. This paper tackles a well-known gap in the field: the lack of testing of the applicability and reliability of XAI evaluation metrics in the skeleton-based HAR domain. We tested established XAI metrics, namely faithfulness and stability, on Class Activation Mapping (CAM) and Gradient-weighted Class Activation Mapping (Grad-CAM) to address this problem. This study introduces a perturbation method that produces variations within the error tolerance of motion sensor tracking, ensuring the resultant skeletal data points remain within the plausible output range of human movement as captured by the tracking device. We used the NTU RGB+D 60 dataset and the EfficientGCN architecture for HAR model training and testing. The evaluation involved systematically perturbing the 3D skeleton data by applying controlled displacements at different magnitudes to assess the impact on XAI metric performance across multiple action classes. Our findings reveal that faithfulness may not consistently serve as a reliable metric across all classes for the EfficientGCN model, indicating its limited applicability in certain contexts. In contrast, stability proves to be a more robust metric, showing dependability across different perturbation magnitudes. Additionally, CAM and Grad-CAM yielded almost identical explanations, leading to closely similar metric outcomes. This suggests a need to explore additional metrics and apply more diverse XAI methods to broaden the understanding and effectiveness of XAI in skeleton-based HAR.
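The stability test can be sketched directly: perturb the 3D joint coordinates within the sensor's error tolerance and measure how much the explanation drifts. `explain` below stands in for CAM/Grad-CAM applied to the HAR model, and the drift measure is a simplified illustration, not the paper's exact formulation:

```python
# Simplified stability check for a skeleton-based explanation method.
import numpy as np

def stability(skeleton: np.ndarray, explain, eps: float = 0.01, trials: int = 10) -> float:
    """skeleton: (frames, joints, 3). Returns worst-case explanation drift."""
    base = explain(skeleton)
    rng = np.random.default_rng(0)
    drifts = []
    for _ in range(trials):
        noise = rng.uniform(-eps, eps, size=skeleton.shape)   # within tracking tolerance
        drifts.append(np.abs(explain(skeleton + noise) - base).mean())
    return float(max(drifts))    # lower = more stable explanation

# Toy usage with a stand-in explanation function (25 joints, as in NTU RGB+D).
skel = np.zeros((30, 25, 3))
print(stability(skel, explain=lambda s: np.abs(s).sum(axis=2)))
```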


Subjects
Musculoskeletal System, Humans, Reproducibility of Results, Movement, Skeleton, Human Activities
12.
Sensors (Basel) ; 24(11)2024 May 28.
Article in English | MEDLINE | ID: mdl-38894271

ABSTRACT

Given the complex structure of Chinese characters, particularly the connections and intersections between strokes, Chinese character stroke extraction and recognition suffer from low accuracy and unclear segmentation. Building on the YOLOv8n-seg model, this study proposes the YOLOv8n-seg-CAA-BiFPN Chinese character stroke fine-segmentation model. The proposed Coordinate-Aware Attention (CAA) mechanism divides the backbone network's input feature map into four parts, applying different weights for horizontal, vertical, and channel attention to compute and fuse key information, thus capturing the contextual regularity of closely arranged stroke positions. The network's neck integrates an enhanced weighted bi-directional feature pyramid network (BiFPN), enhancing the fusion of features from strokes of various sizes. The Shape-IoU loss function is adopted in place of the traditional CIoU loss function, focusing on the shape and scale of stroke bounding boxes to optimize the bounding-box regression process. Finally, the Grad-CAM++ technique is used to generate heatmaps of segmentation predictions, facilitating the visualization of effective features and a deeper understanding of the model's focus areas. Trained and tested on the public Chinese character stroke datasets CCSE-Kai and CCSE-HW, the model achieves an average accuracy of 84.71%, an average recall rate of 83.65%, and a mean average precision of 80.11%. Compared to the original YOLOv8n-seg and existing mainstream segmentation models such as SegFormer, BiSeNetV2, and Mask R-CNN, the average accuracy improved by 3.50%, 4.35%, 10.56%, and 22.05%, respectively; the average recall rates improved by 4.42%, 9.32%, 15.64%, and 24.92%, respectively; and the mean average precision improved by 3.11%, 4.15%, 8.02%, and 19.33%, respectively. The results demonstrate that the YOLOv8n-seg-CAA-BiFPN network can accurately segment Chinese character strokes.
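For orientation, a baseline YOLOv8n-seg training run with the Ultralytics API looks as follows; the CAA, BiFPN, and Shape-IoU modifications described above require custom model/loss definitions and are not shown, and the dataset YAML name is hypothetical:

```python
# Baseline instance-segmentation training with the Ultralytics API.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")                 # pretrained segmentation weights
model.train(data="ccse_strokes.yaml",          # hypothetical CCSE-Kai/CCSE-HW config
            epochs=100, imgsz=640, batch=16)
metrics = model.val()                          # mAP, precision, recall on the val split
```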

13.
Lab Invest ; 103(6): 100127, 2023 06.
Article in English | MEDLINE | ID: mdl-36889541

ABSTRACT

Neuropathologic assessment during autopsy is the gold standard for diagnosing neurodegenerative disorders. Neurodegenerative conditions, such as Alzheimer disease (AD) neuropathological change, lie on a continuum with normal aging rather than being categorical; therefore, diagnosing neurodegenerative disorders is a complicated task. We aimed to develop a pipeline for diagnosing AD and other tauopathies, including corticobasal degeneration (CBD), globular glial tauopathy, Pick disease, and progressive supranuclear palsy. We used a weakly supervised deep learning-based approach called clustering-constrained-attention multiple-instance learning (CLAM) on whole-slide images (WSIs) of patients with AD (n = 30), CBD (n = 20), globular glial tauopathy (n = 10), Pick disease (n = 20), and progressive supranuclear palsy (n = 20), as well as nontauopathy controls (n = 21). Three sections (A: motor cortex; B: cingulate gyrus and superior frontal gyrus; and C: corpus striatum) that had been immunostained for phosphorylated tau were scanned and converted to WSIs. We evaluated 3 models (classic multiple-instance learning, single-attention-branch CLAM, and multiattention-branch CLAM) using 5-fold cross-validation. Attention-based interpretation analysis was performed to identify the morphologic features contributing to the classification. Within highly attended regions, we also applied gradient-weighted class activation mapping to visualize cellular-level evidence of the model's decisions. The multiattention-branch CLAM model using section B achieved the highest area under the curve (0.970 ± 0.037) and diagnostic accuracy (0.873 ± 0.087). A heatmap showed the highest attention in the gray matter of the superior frontal gyrus in patients with AD and the white matter of the cingulate gyrus in patients with CBD. Gradient-weighted class activation mapping showed the highest attention on the characteristic tau lesions of each disease (eg, numerous tau-positive threads in the white matter inclusions for CBD). Our findings support the feasibility of deep learning-based approaches for the classification of neurodegenerative disorders on WSIs. Further investigation of this method, focusing on clinicopathologic correlations, is warranted.
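At the core of CLAM-style models is attention-based multiple-instance pooling over patch features. A sketch of the gated attention mechanism (Ilse et al., 2018) in PyTorch, with dimensions chosen arbitrarily; patch features would come from a frozen encoder run over WSI tiles:

```python
# Gated attention MIL pooling: score each patch, softmax over the slide,
# and return an attention-weighted slide embedding plus the patch attentions.
import torch
import torch.nn as nn

class GatedAttentionPool(nn.Module):
    def __init__(self, dim: int = 1024, hidden: int = 256):
        super().__init__()
        self.V = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh())
        self.U = nn.Sequential(nn.Linear(dim, hidden), nn.Sigmoid())
        self.w = nn.Linear(hidden, 1)

    def forward(self, h):                      # h: (num_patches, dim)
        a = self.w(self.V(h) * self.U(h))      # (num_patches, 1) attention logits
        a = torch.softmax(a, dim=0)
        return (a * h).sum(dim=0), a           # slide embedding + patch attentions
```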


Subjects
Alzheimer Disease, Deep Learning, Neurodegenerative Diseases, Pick Disease of the Brain, Supranuclear Palsy, Progressive, Tauopathies, Humans, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/pathology, Supranuclear Palsy, Progressive/diagnostic imaging, Supranuclear Palsy, Progressive/pathology, Pick Disease of the Brain/pathology, tau Proteins, Tauopathies/diagnostic imaging, Tauopathies/pathology
14.
BMC Infect Dis ; 23(1): 438, 2023 Jun 27.
Article in English | MEDLINE | ID: mdl-37370031

ABSTRACT

BACKGROUND: In May 2022, the World Health Organization (WHO) European Region announced an atypical Monkeypox epidemic in response to reports of numerous cases in some member countries unrelated to those where the illness is endemic. This issue has raised concerns about the worldwide spread of this disease. The experience with Coronavirus Disease 2019 (COVID-19) has increased awareness of pandemics among researchers and health authorities. METHODS: Deep Neural Networks (DNNs) have shown promising performance in detecting COVID-19 and predicting its outcomes. As a result, researchers have begun applying similar methods to detect Monkeypox disease. In this study, we utilize a dataset comprising skin images of three diseases (Monkeypox, Chickenpox, and Measles) plus normal cases. We develop seven DNN models to identify Monkeypox from these images. Two scenarios, one with two classes and one with four classes, are implemented. RESULTS: The results show that our proposed DenseNet201-based architecture has the best performance, with Accuracy = 97.63%, F1-Score = 90.51%, and Area Under Curve (AUC) = 94.27% in the two-class scenario, and Accuracy = 95.18%, F1-Score = 89.61%, and AUC = 92.06% in the four-class scenario. Comparing our study with previous studies using similar scenarios shows that our proposed model demonstrates superior performance, particularly in terms of the F1-Score metric. For the sake of transparency and explainability, Local Interpretable Model-Agnostic Explanations (LIME) and Gradient-weighted Class Activation Mapping (Grad-CAM) were developed to interpret the results. These techniques aim to provide insight into the decision-making process, thereby increasing the trust of clinicians. CONCLUSION: The DenseNet201 model outperforms the other models in terms of the confusion metrics, regardless of the scenario. One significant accomplishment of this study is the utilization of LIME and Grad-CAM to identify the affected areas and assess their significance in diagnosing diseases from skin images. By incorporating these techniques, we enhance our understanding of the infected regions and their relevance in distinguishing Monkeypox from other similar diseases. Our proposed model can serve as a valuable auxiliary tool for diagnosing Monkeypox and distinguishing it from other related conditions.
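A sketch of the LIME step, assuming the `lime` package; the image and classifier below are stand-ins for a real skin image and the trained DenseNet201's prediction function, so the resulting explanation is only illustrative:

```python
# LIME image explanation: perturb superpixels and fit a local surrogate model.
import numpy as np
from lime import lime_image

image = np.random.rand(224, 224, 3)            # stand-in for a real skin image
def predict_fn(batch):                         # stand-in classifier over 4 classes
    return np.tile([0.7, 0.1, 0.1, 0.1], (len(batch), 1))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,                     # single HxWx3 numpy image
    predict_fn,                # callable: (N, H, W, 3) -> (N, num_classes)
    top_labels=1, num_samples=1000,
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5,
)                              # mask marks superpixels driving the prediction
```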


Subjects
COVID-19, Mpox, Humans, COVID-19/diagnosis, Mpox/diagnosis, Mpox/epidemiology, Neural Networks, Computer, Pandemics
15.
Anal Bioanal Chem ; 415(17): 3449-3462, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37195443

ABSTRACT

Early, rapid, and reliable detection of cancer can provide a favorable prognosis and decrease mortality. Tumor biomarkers have been proven to be closely related to tumor occurrence and development. Conventional tumor biomarker detection based on genomic, proteomic, and metabolomic methods is time- and equipment-consuming and always needs a specific target marker. Surface-enhanced Raman scattering (SERS), a non-invasive, ultrasensitive, and label-free vibrational spectroscopy technique, can detect cancer-related biomedical changes in biofluids. In this paper, 110 serum samples were collected from 30 healthy controls and 80 cancer patients (30 with bladder cancer (BC), 30 with adrenal cancer (AC), and 20 with acute myeloid leukemia (AML)). One microlitre of blood serum was mixed with 1 µL of silver colloid and then air-dried for SERS measurements. After spectral data augmentation, a one-dimensional convolutional neural network (1D-CNN) was proposed for precise and rapid identification of healthy controls and the three different cancers, with a high accuracy of 98.27%. After gradient-weighted class activation mapping (Grad-CAM)-based spectral interpretation, the contributions of SERS peaks corresponding to biochemical substances indicated the most likely biomarkers, i.e., L-tyrosine in bladder cancer; acetoacetate and riboflavin in adrenal cancer; and phospholipids, amide-I, and α-helix in acute myeloid leukemia, which might provide insight into the mechanism of intelligent diagnosis of different cancers based on label-free serum SERS. The integration of label-free SERS and deep learning has great potential for the rapid, reliable, and non-invasive detection of cancers, which may significantly improve precise diagnosis in clinical practice.


Subjects
Adrenal Gland Neoplasms, Deep Learning, Urinary Bladder Neoplasms, Humans, Proteomics, Urinary Bladder Neoplasms/diagnosis, Biomarkers, Tumor, Spectrum Analysis, Raman
16.
Environ Res ; 231(Pt 1): 115996, 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37105290

ABSTRACT

Accurately determining the second-order rate constant of organic compounds (OCs) for reaction with the hydrated electron e_aq^- (denoted k_e_aq^-) is crucial in e_aq^--induced advanced reduction processes (ARPs). In this study, we collected 867 k_e_aq^- values at different pHs from peer-reviewed publications and applied the machine learning (ML) algorithm XGBoost and the deep learning (DL) algorithm convolutional neural network (CNN) to predict k_e_aq^-. Our results demonstrated that the CNN model with transfer learning and data augmentation (CNN-TL&DA) greatly improved the prediction results and overcame over-fitting. Furthermore, we compared the ML/DL modeling methods and found that CNN-TL&DA, which used molecular images (MI), achieved the best overall performance (R²_test = 0.896, RMSE_test = 0.362, MAE_test = 0.261) compared to the XGBoost algorithm combined with Mordred descriptors (MD) (R²_test = 0.692, RMSE_test = 0.622, MAE_test = 0.399) or Morgan fingerprints (MF) (R²_test = 0.512, RMSE_test = 0.783, MAE_test = 0.520). Moreover, interpretation of the MD-XGBoost and MF-XGBoost models using the SHAP method revealed the significance of the MDs (e.g., molecular size, branching, electron distribution, polarizability, and bond types), MFs (e.g., aromatic carbon, carbonyl oxygen, nitrogen, and halogen), and environmental conditions (e.g., pH) that most influence the k_e_aq^- prediction. Interpretation of the 2D molecular-image CNN (MI-CNN) models using the Grad-CAM method showed that they correctly identified key functional groups, such as -CN, -NO2, and -X, that can increase the k_e_aq^- values. Additionally, almost all electron-withdrawing groups and a small fraction of electron-donating groups could be highlighted by the MI-CNN model for estimating k_e_aq^-. Overall, our results suggest that the CNN approach has smaller errors than the ML algorithms, making it a promising candidate for predicting other rate constants.
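The descriptor branch can be sketched with stand-in data: an XGBoost regressor fitted on a descriptor matrix and interpreted with SHAP's TreeExplainer. The feature values below are random placeholders, not Mordred descriptors or measured rate constants, and the hyperparameters are illustrative:

```python
# XGBoost regression of k_e_aq^- on molecular descriptors, interpreted with SHAP.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = rng.random((200, 30))                      # stand-in descriptor matrix (incl. pH)
y = rng.random(200) * 4 + 6                    # stand-in log-k targets

model = xgb.XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X, y)

explainer = shap.TreeExplainer(model)          # fast exact SHAP for tree ensembles
shap_values = explainer.shap_values(X)
print(np.abs(shap_values).mean(axis=0))        # mean |SHAP| = global feature importance
```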


Subjects
Deep Learning, Electrons, Neural Networks, Computer, Machine Learning, Algorithms
17.
BMC Med Inform Decis Mak ; 23(1): 225, 2023 10 18.
Article in English | MEDLINE | ID: mdl-37853371

ABSTRACT

BACKGROUND: Saliency-based algorithms can explain the relationship between input image pixels and deep-learning model predictions. However, it may be difficult to assess the clinical value of the most important image features and the model predictions derived from the raw saliency map. This study proposes to enhance the interpretability of a saliency-based deep learning model for survival classification of patients with gliomas by extracting domain knowledge-based information from the raw saliency maps. MATERIALS AND METHODS: Our study includes presurgical T1-weighted (pre- and post-contrast), T2-weighted and T2-FLAIR MRIs of 147 glioma patients from the BraTS 2020 challenge dataset aligned to the SRI24 anatomical atlas. Each image exam includes a segmentation mask and the overall survival (OS) from time of diagnosis (in days). This dataset was divided into training ([Formula: see text]) and validation ([Formula: see text]) datasets. The extent of surgical resection for all patients was gross total resection. We categorized the data into 42 short (mean [Formula: see text] days), 30 medium ([Formula: see text] days), and 46 long ([Formula: see text] days) survivors. A 3D convolutional neural network (CNN) trained on brain tumour MRI volumes classified all patients by expected prognosis: short-term, medium-term, or long-term survival. We extended the popular 2D Gradient-weighted Class Activation Mapping (Grad-CAM) saliency-map generation to 3D and combined it with the anatomical atlas to extract brain regions, brain volume, and a probability map that reveal domain knowledge-based information. RESULTS: For each OS class, a larger tumor volume was associated with a shorter OS. There were 10, 7 and 27 tumor locations in brain regions uniquely associated with short-term, medium-term, and long-term survival, respectively. Tumors located in the transverse temporal gyrus, fusiform gyrus, and pallidum are associated with short-, medium- and long-term survival, respectively. The visual and textual information displayed during OS prediction highlights tumor location and the contribution of different brain regions to the prediction of OS. This design feature assists the physician in analyzing and understanding the different stages of model prediction. CONCLUSIONS: Domain knowledge-based information extracted from the saliency map can enhance the interpretability of deep learning models. Our findings show that tumors overlapping eloquent brain regions are associated with short patient survival.
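The atlas-overlap step reduces to aggregating the 3D saliency volume over labelled atlas regions. A toy sketch with random stand-ins for the Grad-CAM volume and atlas (the study used the SRI24 labels; the region IDs here are illustrative):

```python
# Aggregate a 3D Grad-CAM volume over atlas regions to rank their contributions.
import numpy as np

def region_contributions(cam3d: np.ndarray, atlas: np.ndarray) -> dict:
    """cam3d and atlas share shape (D, H, W); atlas holds integer region labels."""
    out = {}
    for label in np.unique(atlas):
        if label == 0:                         # 0 = background
            continue
        out[int(label)] = float(cam3d[atlas == label].mean())
    return out

cam3d = np.random.rand(155, 240, 240)          # stand-in normalized 3D saliency
atlas = np.random.randint(0, 5, size=cam3d.shape)
print(sorted(region_contributions(cam3d, atlas).items(), key=lambda kv: -kv[1]))
```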


Subjects
Deep Learning, Glioma, Humans, Glioma/diagnostic imaging, Glioma/pathology, Neural Networks, Computer, Magnetic Resonance Imaging/methods, Neuroimaging
18.
Sensors (Basel) ; 23(16)2023 Aug 21.
Article in English | MEDLINE | ID: mdl-37631825

ABSTRACT

A thyroid nodule, a common abnormal growth within the thyroid gland, is often identified through ultrasound imaging of the neck. These growths may be solid or fluid-filled, and their treatment is influenced by factors such as size and location. The Thyroid Imaging Reporting and Data System (TI-RADS) is a classification method that categorizes thyroid nodules into risk levels based on features such as size, echogenicity, margin, shape, and calcification. It guides clinicians in deciding whether a biopsy or other further evaluation is needed. Machine learning (ML) can complement TI-RADS classification, thereby improving the detection of malignant tumors. When combined with expert rules (TI-RADS) and explanations, ML models may uncover elements that TI-RADS misses, especially when TI-RADS training data are scarce. In this paper, we present an automated system for classifying thyroid nodules according to TI-RADS and assessing malignancy effectively. We use ResNet-101 and DenseNet-201 models to classify thyroid nodules according to TI-RADS and malignancy. By analyzing the models' last layer using the Grad-CAM algorithm, we demonstrate that these models can identify risk areas and detect nodule features relevant to the TI-RADS score. By integrating Grad-CAM results with feature probability calculations, we provide a precise heat map, visualizing specific features within the nodule and potentially assisting doctors in their assessments. Our experiments show that the use of ResNet-101 and DenseNet-201 models, in conjunction with Grad-CAM visualization analysis, improves TI-RADS classification accuracy by up to 10%. This enhancement, achieved through iterative analysis and re-training, underscores the potential of machine learning in advancing thyroid nodule diagnosis, offering a promising direction for further exploration and clinical application.


Subjects
Thyroid Nodule, Humans, Thyroid Nodule/diagnostic imaging, Neck, Research Design, Algorithms
19.
Sensors (Basel) ; 23(4)2023 Feb 08.
Article in English | MEDLINE | ID: mdl-36850487

ABSTRACT

Leaf numbers are vital in estimating crop yield. Traditional manual leaf-counting is tedious, costly, and an enormous job. Recent convolutional neural network-based approaches achieve promising results for rosette plants. However, there is a lack of effective solutions for leaf counting in monocot plants, such as sorghum and maize. Existing approaches often require substantial training datasets and annotations, thus incurring significant labeling overheads. Moreover, these approaches can easily fail when leaf structures are occluded in images. To address these issues, we present a new deep neural network-based method that requires no effort to label leaf structures explicitly and achieves superior performance even with severe leaf occlusions in images. Our method extracts leaf skeletons to gain more topological information and applies augmentation to enhance structural variety in the original images. We then feed the combination of original images, derived skeletons, and augmentations into a regression model, transferred from Inception-ResNet-V2, for leaf counting. We find that leaf tips are important to our regression model through an input-modification method and a Grad-CAM method. The superiority of the proposed method is validated via comparison with existing approaches on a similar dataset. The results show that our method not only improves the accuracy of leaf counting, with overlaps and occlusions, but also lowers the training cost, with fewer annotations than previous state-of-the-art approaches. The robustness of the proposed method against noise is also verified by removing environmental noise during image preprocessing and reducing the effect of noise introduced by skeletonization, with satisfactory outcomes.
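The skeleton-extraction step can be sketched with scikit-image: a binarized plant mask is thinned to a one-pixel-wide skeleton that preserves leaf topology. The mask below is a crude stand-in for a real segmented plant:

```python
# Morphological skeletonization of a binary plant mask.
import numpy as np
from skimage.morphology import skeletonize

mask = np.zeros((128, 128), dtype=bool)        # stand-in segmented plant mask
mask[30:100, 60:64] = True                     # a crude "stem"
mask[40:44, 30:90] = True                      # a crude "leaf"

skeleton = skeletonize(mask)                   # 1-px-wide topological skeleton
print(skeleton.sum(), "skeleton pixels")       # fed, with the RGB image, to the regressor
```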


Subjects
Crops, Agricultural, Edible Grain, Neural Networks, Computer, Plant Leaves, Skeleton
20.
Sensors (Basel) ; 23(9)2023 May 07.
Article in English | MEDLINE | ID: mdl-37177747

ABSTRACT

The paper was devoted to the application of saliency analysis methods in the performance analysis of deep neural networks used for the binary classification of brain tumours. We have presented the basic issues related to deep learning techniques. A significant challenge in using deep learning methods is the ability to explain the decision-making process of the network. To ensure accurate results, the deep network being used must undergo extensive training to produce high-quality predictions. There are various network architectures that differ in their properties and number of parameters. Consequently, an intriguing question is how these different networks arrive at similar or distinct decisions based on the same set of prerequisites. Therefore, three widely used deep convolutional networks have been discussed, such as VGG16, ResNet50 and EfficientNetB7, which were used as backbone models. We have customized the output layer of these pre-trained models with a softmax layer. In addition, an additional network has been described that was used to assess the saliency areas obtained. For each of the above networks, many tests have been performed using key metrics, including statistical evaluation of the impact of class activation mapping (CAM) and gradient-weighted class activation mapping (Grad-CAM) on network performance on a publicly available dataset of brain tumour X-ray images.


Subjects
Brain Neoplasms, Deep Learning, Humans, Neural Networks, Computer, Brain Neoplasms/diagnostic imaging