Results 1 - 3 of 3
1.
PLoS One ; 19(7): e0302413, 2024.
Article in English | MEDLINE | ID: mdl-38976703

ABSTRACT

During the COVID-19 pandemic, pneumonia was the leading cause of respiratory failure and death. In addition to SARS-CoV-2, it can be caused by several other bacterial and viral agents. Even today, variants of SARS-CoV-2 remain endemic and COVID-19 cases are common in many places. The presentation of COVID-19 is highly variable, ranging from asymptomatic infection to severe respiratory failure. Current detection methods for the disease are time-consuming and expensive, with low accuracy and precision. To address this, we designed a framework for COVID-19 and pneumonia detection using multiple deep learning algorithms, accompanied by a deployment scheme. In this study, we applied four prominent deep learning models, VGG-19, ResNet-50, Inception-V3, and Xception, to two separate datasets of CT scan and X-ray images (COVID/Non-COVID) to identify the best models for COVID-19 detection. We achieved accuracies ranging from 86% to 99%, depending on the model and dataset. To further validate our findings, we applied the four models to two supplementary datasets of X-ray images of bacterial pneumonia and viral pneumonia. Additionally, we implemented a Flask app to visualize the outcome of our framework by displaying the identified COVID and Non-COVID images. The findings of this study will help develop an AI-driven automated tool for cost-effective, faster detection and better management of COVID-19 patients.
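
For illustration, the following is a minimal sketch of the kind of transfer-learning pipeline the abstract describes: an ImageNet-pretrained VGG-19 backbone fine-tuned for binary COVID/Non-COVID classification in Keras. The directory layout, image size, and training hyperparameters are assumptions, not the authors' settings.

# Minimal transfer-learning sketch: frozen VGG-19 features plus a small
# classification head for COVID vs. Non-COVID chest images.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

IMG_SIZE = (224, 224)

# Load images from a folder layout such as data/train/{covid,non_covid}/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# ImageNet-pretrained backbone with the classifier head removed.
backbone = VGG19(include_top=False, weights="imagenet",
                 input_shape=IMG_SIZE + (3,))
backbone.trainable = False  # freeze convolutional features for transfer learning

model = models.Sequential([
    layers.Rescaling(1.0 / 255),
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # COVID vs. Non-COVID probability
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

The same backbone can be swapped for ResNet-50, Inception-V3, or Xception by changing the import, which is how such model comparisons are typically run.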


Subject(s)
COVID-19 , Deep Learning , SARS-CoV-2 , Tomography, X-Ray Computed , COVID-19/diagnostic imaging , Humans , Tomography, X-Ray Computed/methods , SARS-CoV-2/isolation & purification , Pneumonia, Viral/diagnostic imaging , Pandemics , Algorithms , Pneumonia/diagnostic imaging , Pneumonia/diagnosis , Coronavirus Infections/diagnostic imaging , Coronavirus Infections/diagnosis , Internet , Betacoronavirus
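
The abstract also mentions a Flask app for visualizing the framework's predictions. A minimal sketch of such an endpoint is shown below, assuming a saved Keras model file and a single /predict route; the file name, route, and preprocessing are illustrative, not the authors' deployment.

# Minimal Flask endpoint that returns a COVID / Non-COVID prediction for an
# uploaded chest X-ray or CT slice. "covid_classifier.h5" is a hypothetical
# saved model produced by a pipeline like the one sketched above.
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("covid_classifier.h5")  # hypothetical path

@app.route("/predict", methods=["POST"])
def predict():
    # Expect the image in the "image" form field of a multipart POST.
    img = Image.open(request.files["image"].stream).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype="float32")[None, ...]  # model rescales internally
    prob = float(model.predict(x)[0][0])
    return jsonify({"label": "COVID" if prob >= 0.5 else "Non-COVID",
                    "probability": prob})

if __name__ == "__main__":
    app.run(debug=True)
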
2.
Med Image Anal ; 95: 103159, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38663318

ABSTRACT

We have developed a United framework that integrates three self-supervised learning (SSL) ingredients (discriminative, restorative, and adversarial learning), enabling collaborative learning among the three ingredients and yielding three transferable components: a discriminative encoder, a restorative decoder, and an adversary encoder. To leverage this collaboration, we redesigned nine prominent self-supervised methods, namely Rotation, Jigsaw, Rubik's Cube, Deep Clustering, TransVW, MoCo, BYOL, PCRL, and Swin UNETR, and augmented each with its missing components in a United framework for 3D medical imaging. However, such a United framework increases model complexity, making 3D pretraining difficult. To overcome this difficulty, we propose stepwise incremental pretraining, a strategy that unifies the pretraining: a discriminative encoder is first trained via discriminative learning; the pretrained discriminative encoder is then attached to a restorative decoder, forming a skip-connected encoder-decoder, for further joint discriminative and restorative learning; finally, the pretrained encoder-decoder is associated with an adversary encoder for full discriminative, restorative, and adversarial learning. Our extensive experiments demonstrate that stepwise incremental pretraining stabilizes the pretraining of United models, resulting in significant performance gains and annotation cost reduction via transfer learning in six target tasks, ranging from classification to segmentation, across diseases, organs, datasets, and modalities. This performance improvement is attributed to the synergy of the three SSL ingredients in our United framework, unleashed through stepwise incremental pretraining. Our code and pretrained models are available at GitHub.com/JLiangLab/StepwisePretraining.
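
For illustration, a schematic PyTorch sketch of the stepwise incremental pretraining schedule described above follows. The loss functions, module interfaces, and update scheme are simplified placeholders, not the authors' United implementation.

# Schematic sketch of stepwise incremental pretraining: (1) train a
# discriminative encoder alone, (2) attach a restorative decoder and train the
# skip-connected encoder-decoder jointly, (3) add an adversary encoder for full
# discriminative, restorative, and adversarial learning.
import torch
import torch.nn.functional as F

def discriminative_loss(logits, target):
    # Pretext-task classification (e.g., rotation prediction, pseudo-labels).
    return F.cross_entropy(logits, target)

def restorative_loss(reconstruction, original):
    # Pixel-wise restoration of the original image.
    return F.mse_loss(reconstruction, original)

def pretrain_stepwise(encoder, head, decoder, adversary, loader, epochs=(10, 10, 10)):
    # Step 1: discriminative learning only.
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()))
    for _ in range(epochs[0]):
        for x, target in loader:
            loss = discriminative_loss(head(encoder(x)), target)
            opt.zero_grad(); loss.backward(); opt.step()

    # Step 2: attach the restorative decoder; joint discriminative + restorative learning.
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters())
                           + list(decoder.parameters()))
    for _ in range(epochs[1]):
        for x, target in loader:
            z = encoder(x)
            loss = discriminative_loss(head(z), target) + restorative_loss(decoder(z), x)
            opt.zero_grad(); loss.backward(); opt.step()

    # Step 3: add the adversary, which learns to tell real images from restorations,
    # while the encoder-decoder is additionally trained to fool it.
    opt_adv = torch.optim.Adam(adversary.parameters())
    for _ in range(epochs[2]):
        for x, target in loader:
            z = encoder(x)
            x_hat = decoder(z)

            # Update the adversary (real -> 1, restored -> 0).
            real, fake = adversary(x), adversary(x_hat.detach())
            adv_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
                        + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
            opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

            # Update encoder/decoder with discriminative + restorative + adversarial terms.
            fake = adversary(x_hat)
            loss = (discriminative_loss(head(z), target)
                    + restorative_loss(x_hat, x)
                    + F.binary_cross_entropy_with_logits(fake, torch.ones_like(fake)))
            opt.zero_grad(); loss.backward(); opt.step()
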


Subject(s)
Imaging, Three-Dimensional , Supervised Machine Learning , Humans , Imaging, Three-Dimensional/methods , Algorithms
3.
Med Image Anal ; 91: 102988, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37924750

ABSTRACT

Pulmonary Embolism (PE) represents a thrombus ("blood clot"), usually originating from a lower-extremity vein, that travels to the blood vessels of the lung, causing vascular obstruction and, in some patients, death. This disorder is commonly diagnosed using Computed Tomography Pulmonary Angiography (CTPA). Deep learning holds great promise for the Computer-aided Diagnosis (CAD) of PE. However, numerous deep learning methods, such as Convolutional Neural Networks (CNNs) and Transformer-based models, exist for a given task, causing great confusion regarding the development of CAD systems for PE. To address this confusion, we present a comprehensive analysis of competing deep learning methods applicable to PE diagnosis based on four datasets. First, we use the RSNA PE dataset, which includes (weak) slice-level and exam-level labels, for PE classification and diagnosis, respectively. At the slice level, we compare CNNs with the Vision Transformer (ViT) and the Swin Transformer. We also investigate the impact of self-supervised versus (fully) supervised ImageNet pre-training, and of transfer learning over training models from scratch. Additionally, at the exam level, we compare sequence model learning with our proposed transformer-based architecture, Embedding-based ViT (E-ViT). For the second and third datasets, we utilize the CAD-PE Challenge Dataset and Ferdowsi University of Mashhad's PE Dataset, where we convert (strong) clot-level masks into slice-level annotations to evaluate the optimal CNN model for slice-level PE classification. Finally, we use our in-house PE-CAD dataset, which contains (strong) clot-level masks. Here, we investigate the impact of our vessel-oriented image representations and self-supervised pre-training on PE false positive reduction at the clot level across image dimensions (2D, 2.5D, and 3D). Our experiments show that (1) transfer learning boosts performance despite differences between photographic images and CTPA scans; (2) self-supervised pre-training can surpass (fully) supervised pre-training; (3) transformer-based models demonstrate comparable performance but slower convergence compared with CNNs for slice-level PE classification; (4) a model trained on the RSNA PE dataset demonstrates promising performance when tested on unseen datasets for slice-level PE classification; (5) our E-ViT framework excels in handling variable numbers of slices and outperforms sequence model learning for exam-level diagnosis; and (6) vessel-oriented image representation and self-supervised pre-training both enhance performance for PE false positive reduction across image dimensions. Our optimal approach surpasses state-of-the-art results on the RSNA PE dataset, enhancing AUC by 0.62% (slice-level) and 2.22% (exam-level). On our in-house PE-CAD dataset, 3D vessel-oriented images improve performance from 80.07% to 91.35%, a remarkable 11% gain. Code is available at GitHub.com/JLiangLab/CAD_PE.
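
For illustration, a PyTorch sketch of exam-level aggregation in the spirit of the embedding-based transformer (E-ViT) mentioned above: per-slice CNN embeddings for a variable number of slices are pooled with a transformer encoder and a learnable [CLS] token. The dimensions, layer counts, and padding scheme are assumptions, not the published architecture.

# Exam-level PE diagnosis from per-slice embeddings with a transformer encoder.
# A padding mask lets exams with different slice counts share one batch.
import torch
import torch.nn as nn

class ExamLevelTransformer(nn.Module):
    def __init__(self, embed_dim=256, num_heads=8, num_layers=4):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(embed_dim, 1)  # exam-level PE logit

    def forward(self, slice_embeddings, padding_mask):
        # slice_embeddings: (batch, num_slices, embed_dim) from a slice-level CNN
        # padding_mask: (batch, num_slices), True where a slice is padding
        b = slice_embeddings.size(0)
        cls = self.cls_token.expand(b, -1, -1)
        x = torch.cat([cls, slice_embeddings], dim=1)
        # Never mask the [CLS] position.
        mask = torch.cat([torch.zeros(b, 1, dtype=torch.bool,
                                      device=padding_mask.device), padding_mask], dim=1)
        x = self.encoder(x, src_key_padding_mask=mask)
        return self.head(x[:, 0])  # classify from the [CLS] token

# Example: two exams padded to 120 slices; the second has only 90 valid slices.
emb = torch.randn(2, 120, 256)
pad = torch.zeros(2, 120, dtype=torch.bool)
pad[1, 90:] = True
logits = ExamLevelTransformer()(emb, pad)
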


Subject(s)
Diagnosis, Computer-Assisted , Pulmonary Embolism , Humans , Diagnosis, Computer-Assisted/methods , Neural Networks, Computer , Imaging, Three-Dimensional , Pulmonary Embolism/diagnostic imaging , Computers