1 - 20 of 89
1.
Radiol Case Rep ; 19(8): 3080-3083, 2024 Aug.
Article En | MEDLINE | ID: mdl-38770385

Anomalous origin of the circumflex artery from the pulmonary artery (ACxAPA) is a rare but clinically significant condition in which the circumflex artery arises from either the main pulmonary artery or one of its main branches. Untreated patients with ACxAPA may develop severe heart failure or sudden cardiac death. Diagnosis is established with either catheter or CT angiography. We present a case of an adult male with no prior known cardiac history who was found to have ACxAPA after presenting to our institution in acute decompensated heart failure.

2.
Article En | MEDLINE | ID: mdl-38752223

Human anatomy is the foundation of medical imaging and has one striking characteristic: its hierarchical nature, which exhibits two intrinsic properties: (1) locality, in that each anatomical structure is morphologically distinct from the others; and (2) compositionality, in that each anatomical structure is an integrated part of a larger whole. We envision a foundation model for medical imaging that is consciously and purposefully developed upon human anatomy to gain the capability of "understanding" it and to possess the fundamental properties of medical imaging. As a first step toward realizing this vision, we devise a novel self-supervised learning (SSL) strategy that exploits the hierarchical nature of human anatomy. Our extensive experiments demonstrate that the model pretrained with our SSL strategy not only outperforms state-of-the-art (SOTA) fully supervised and self-supervised baselines but also enhances annotation efficiency, offering potential few-shot segmentation capability with improvements of 9% to 30% over SSL baselines on segmentation tasks. This performance is attributed to the anatomy comprehension fostered by our learning strategy, which encapsulates the intrinsic attributes of anatomical structures (locality and compositionality) within the embedding space, attributes that are overlooked by existing SSL methods. All code and pretrained models are available at GitHub.com/JLiangLab/Eden.
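
As an illustration of how the two properties named above might be turned into a training signal, the sketch below combines a contrastive term over sub-crops (locality) with a term tying a whole crop's embedding to the aggregate of its parts (compositionality). This is an illustrative reading only; the encoder f, the loss form, and the weighting are assumptions, not the objective released at GitHub.com/JLiangLab/Eden.

```python
# Illustrative sketch only: one way to encode "locality" and
# "compositionality" in an SSL loss. The encoder f, the loss form, and the
# weighting alpha are hypothetical, not the objective used in the paper.
import torch
import torch.nn.functional as F

def hierarchical_ssl_loss(f, whole, parts, temperature=0.1, alpha=0.5):
    """f: image encoder returning (N, D) embeddings;
    whole: (B, C, H, W) crops of a larger anatomical region;
    parts: (B, K, C, h, w) sub-crops of each whole (assumes K >= 2)."""
    B, K = parts.shape[:2]
    z_whole = F.normalize(f(whole), dim=-1)                 # (B, D)
    z_parts = F.normalize(f(parts.flatten(0, 1)), dim=-1)   # (B*K, D)
    z_parts = z_parts.view(B, K, -1)

    # Locality: sub-crops of the same whole should stay distinguishable
    # from sub-crops of other wholes (InfoNCE over part embeddings).
    anchors, positives = z_parts[:, 0], z_parts[:, 1]        # (B, D) each
    logits = anchors @ positives.t() / temperature           # (B, B)
    labels = torch.arange(B, device=logits.device)
    loss_local = F.cross_entropy(logits, labels)

    # Compositionality: the whole's embedding should be predictable from
    # the aggregate of its parts' embeddings.
    loss_comp = F.mse_loss(z_parts.mean(dim=1), z_whole)

    return loss_local + alpha * loss_comp
```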

3.
Med Image Anal ; 95: 103159, 2024 Jul.
Article En | MEDLINE | ID: mdl-38663318

We have developed a United framework that integrates three self-supervised learning (SSL) ingredients (discriminative, restorative, and adversarial learning), enabling collaborative learning among the three learning ingredients and yielding three transferable components: a discriminative encoder, a restorative decoder, and an adversary encoder. To leverage this collaboration, we redesigned nine prominent self-supervised methods (Rotation, Jigsaw, Rubik's Cube, Deep Clustering, TransVW, MoCo, BYOL, PCRL, and Swin UNETR) and augmented each with its missing components in a United framework for 3D medical imaging. However, such a United framework increases model complexity, making 3D pretraining difficult. To overcome this difficulty, we propose stepwise incremental pretraining, a strategy that unifies the pretraining: a discriminative encoder is first trained via discriminative learning; the pretrained discriminative encoder is then attached to a restorative decoder, forming a skip-connected encoder-decoder, for further joint discriminative and restorative learning; and finally, the pretrained encoder-decoder is associated with an adversarial encoder for full discriminative, restorative, and adversarial learning. Our extensive experiments demonstrate that stepwise incremental pretraining stabilizes the pretraining of United models, resulting in significant performance gains and annotation cost reduction via transfer learning in six target tasks, ranging from classification to segmentation, across diseases, organs, datasets, and modalities. This performance improvement is attributed to the synergy of the three SSL ingredients in our United framework, unleashed through stepwise incremental pretraining. Our codes and pretrained models are available at GitHub.com/JLiangLab/StepwisePretraining.
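
The three-stage schedule described in this abstract can be summarized in a short sketch. The module names, loaders, optimizers, and loss callables below are placeholders, and the adversary's own update is omitted; the released code at GitHub.com/JLiangLab/StepwisePretraining is the authoritative implementation.

```python
# Sketch of the three-stage schedule (discriminative -> +restorative ->
# +adversarial). All modules, loaders, and loss callables are placeholders.
import torch

def pretrain_stepwise(encoder, decoder, adversary,
                      disc_loss, rest_loss, adv_loss,
                      loaders, epochs=(100, 100, 100), lr=1e-4):
    # Stage 1: discriminative learning with the encoder alone.
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs[0]):
        for x, target in loaders["discriminative"]:
            opt.zero_grad()
            disc_loss(encoder(x), target).backward()
            opt.step()

    # Stage 2: attach the restorative decoder (skip-connected in the paper)
    # and continue with joint discriminative + restorative learning.
    opt = torch.optim.Adam(list(encoder.parameters()) +
                           list(decoder.parameters()), lr=lr)
    for _ in range(epochs[1]):
        for x_distorted, x_original, target in loaders["restorative"]:
            opt.zero_grad()
            z = encoder(x_distorted)
            (disc_loss(z, target) +
             rest_loss(decoder(z), x_original)).backward()
            opt.step()

    # Stage 3: add the adversary for full discriminative + restorative +
    # adversarial learning (the adversary's own update is not shown).
    opt = torch.optim.Adam(list(encoder.parameters()) +
                           list(decoder.parameters()), lr=lr)
    for _ in range(epochs[2]):
        for x_distorted, x_original, target in loaders["restorative"]:
            opt.zero_grad()
            z = encoder(x_distorted)
            recon = decoder(z)
            (disc_loss(z, target) + rest_loss(recon, x_original) +
             adv_loss(adversary(recon))).backward()
            opt.step()
    return encoder, decoder
```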


Imaging, Three-Dimensional; Supervised Machine Learning; Humans; Imaging, Three-Dimensional/methods; Algorithms
4.
Semin Respir Crit Care Med ; 45(3): 287-304, 2024 Jun.
Article En | MEDLINE | ID: mdl-38631369

Interstitial lung disorders are a group of respiratory diseases characterized by infiltration of the interstitial compartment, with varying degrees of inflammation and fibrosis, with or without small airway involvement. Although some are idiopathic (e.g., idiopathic pulmonary fibrosis, idiopathic interstitial pneumonias, and sarcoidosis), the great majority have an underlying etiology, such as systemic autoimmune rheumatic disease (SARD, also called connective tissue disease or CTD), inhalational exposure to organic matter, medications, and, rarely, genetic disorders. This review focuses on the diagnostic approach to interstitial lung diseases associated with SARDs. To make an accurate diagnosis, a multidisciplinary, personalized approach is required, with input from various specialties, including pulmonology, rheumatology, radiology, and pathology, to reach a consensus. In a minority of patients, a definitive diagnosis cannot be established. Their clinical presentations and prognosis can be variable even within subsets of SARDs.


Connective Tissue Diseases; Lung Diseases, Interstitial; Humans; Lung Diseases, Interstitial/diagnosis; Lung Diseases, Interstitial/etiology; Connective Tissue Diseases/diagnosis; Connective Tissue Diseases/complications; Prognosis; Rheumatic Diseases/diagnosis; Rheumatic Diseases/complications; Autoimmune Diseases/diagnosis; Autoimmune Diseases/complications
5.
Med Image Anal ; 94: 103086, 2024 May.
Article En | MEDLINE | ID: mdl-38537414

Discriminative, restorative, and adversarial learning have proven beneficial for self-supervised learning schemes in computer vision and medical imaging. Existing efforts, however, fail to capitalize on the potentially synergistic effects these methods may offer in a ternary setup, which, we envision, can significantly benefit deep semantic representation learning. Towards this end, we developed DiRA, the first framework that unites discriminative, restorative, and adversarial learning in a unified manner to collaboratively glean complementary visual information from unlabeled medical images for fine-grained semantic representation learning. Our extensive experiments demonstrate that DiRA: (1) encourages collaborative learning among three learning ingredients, resulting in more generalizable representation across organs, diseases, and modalities; (2) outperforms fully supervised ImageNet models and increases robustness in small data regimes, reducing annotation cost across multiple medical imaging applications; (3) learns fine-grained semantic representation, facilitating accurate lesion localization with only image-level annotation; (4) improves reusability of low/mid-level features; and (5) enhances restorative self-supervised approaches, revealing that DiRA is a general framework for united representation learning. Code and pretrained models are available at https://github.com/JLiangLab/DiRA.
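
For readers who want a concrete picture of the ternary setup, the sketch below shows one DiRA-style training step that sums a discriminative (InfoNCE) term, a restorative (reconstruction) term, and an adversarial term. The specific loss choices and lambda weights are assumptions, not the paper's exact design; github.com/JLiangLab/DiRA contains the actual framework.

```python
# One DiRA-style training step summing discriminative, restorative, and
# adversarial terms. Loss choices (InfoNCE, MSE, non-saturating GAN loss)
# and the lambda weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def dira_style_step(encoder, decoder, adversary, opt,
                    view1, view2, lambdas=(1.0, 1.0, 0.1), temperature=0.2):
    opt.zero_grad()
    z1 = F.normalize(encoder(view1), dim=-1)   # (B, D)
    z2 = F.normalize(encoder(view2), dim=-1)

    # Discriminative: two augmented views of the same image should match
    # each other against the rest of the batch.
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    loss_d = F.cross_entropy(logits, labels)

    # Restorative: reconstruct the first view from its embedding.
    recon = decoder(z1)
    loss_r = F.mse_loss(recon, view1)

    # Adversarial: the adversary should score the reconstruction as real
    # (its own update on real vs. reconstructed images is not shown).
    score = adversary(recon)
    loss_a = F.binary_cross_entropy_with_logits(score, torch.ones_like(score))

    loss = lambdas[0] * loss_d + lambdas[1] * loss_r + lambdas[2] * loss_a
    loss.backward()
    opt.step()
    return loss.item()
```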


Hereditary Autoinflammatory Diseases; Humans; Semantics; Supervised Machine Learning; Interleukin 1 Receptor Antagonist Protein
6.
Med Image Anal ; 91: 102988, 2024 Jan.
Article En | MEDLINE | ID: mdl-37924750

Pulmonary Embolism (PE) represents a thrombus ("blood clot"), usually originating from a lower extremity vein, that travels to the blood vessels in the lung, causing vascular obstruction and, in some patients, death. This disorder is commonly diagnosed using Computed Tomography Pulmonary Angiography (CTPA). Deep learning holds great promise for the Computer-aided Diagnosis (CAD) of PE. However, numerous deep learning methods, such as Convolutional Neural Networks (CNN) and Transformer-based models, exist for a given task, causing great confusion regarding the development of CAD systems for PE. To address this confusion, we present a comprehensive analysis of competing deep learning methods applicable to PE diagnosis based on four datasets. First, we use the RSNA PE dataset, which includes (weak) slice-level and exam-level labels, for PE classification and diagnosis, respectively. At the slice level, we compare CNNs with the Vision Transformer (ViT) and the Swin Transformer. We also investigate the impact of self-supervised versus (fully) supervised ImageNet pre-training, and of transfer learning versus training models from scratch. Additionally, at the exam level, we compare sequence model learning with our proposed transformer-based architecture, Embedding-based ViT (E-ViT). For the second and third datasets, we utilize the CAD-PE Challenge Dataset and Ferdowsi University of Mashhad's PE Dataset, where we convert (strong) clot-level masks into slice-level annotations to evaluate the optimal CNN model for slice-level PE classification. Finally, we use our in-house PE-CAD dataset, which contains (strong) clot-level masks. Here, we investigate the impact of our vessel-oriented image representations and self-supervised pre-training on PE false positive reduction at the clot level across image dimensions (2D, 2.5D, and 3D). Our experiments show that (1) transfer learning boosts performance despite differences between photographic images and CTPA scans; (2) self-supervised pre-training can surpass (fully) supervised pre-training; (3) transformer-based models demonstrate comparable performance but slower convergence compared with CNNs for slice-level PE classification; (4) a model trained on the RSNA PE dataset demonstrates promising performance when tested on unseen datasets for slice-level PE classification; (5) our E-ViT framework excels in handling variable numbers of slices and outperforms sequence model learning for exam-level diagnosis; and (6) vessel-oriented image representation and self-supervised pre-training both enhance performance for PE false positive reduction across image dimensions. Our optimal approach surpasses state-of-the-art results on the RSNA PE dataset, enhancing AUC by 0.62% (slice-level) and 2.22% (exam-level). On our in-house PE-CAD dataset, 3D vessel-oriented images improve performance from 80.07% to 91.35%, a gain of more than 11 percentage points. Codes are available at GitHub.com/JLiangLab/CAD_PE.
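
The exam-level E-ViT idea, aggregating a variable number of per-slice embeddings through a transformer with a classification token, can be sketched as below. Layer sizes, the masking scheme, and the pooling choice are assumptions rather than the paper's exact design.

```python
# Exam-level aggregation of per-slice embeddings with a small transformer
# encoder and a classification token. Sizes and masking are assumptions.
import torch
import torch.nn as nn

class ExamLevelAggregator(nn.Module):
    def __init__(self, dim=768, heads=8, layers=4, num_classes=1):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, slice_embeddings, padding_mask=None):
        # slice_embeddings: (B, S, dim) with S slices per exam (padded);
        # padding_mask: (B, S) bool, True where a position is padding.
        B = slice_embeddings.size(0)
        cls = self.cls_token.expand(B, -1, -1)
        x = torch.cat([cls, slice_embeddings], dim=1)
        if padding_mask is not None:
            keep_cls = torch.zeros(B, 1, dtype=torch.bool,
                                   device=padding_mask.device)
            padding_mask = torch.cat([keep_cls, padding_mask], dim=1)
        x = self.transformer(x, src_key_padding_mask=padding_mask)
        return self.head(x[:, 0])   # exam-level logit from the CLS token
```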


Diagnosis, Computer-Assisted; Pulmonary Embolism; Humans; Diagnosis, Computer-Assisted/methods; Neural Networks, Computer; Imaging, Three-Dimensional; Pulmonary Embolism/diagnostic imaging; Computers
7.
Am J Surg Pathol ; 47(3): 281-295, 2023 03 01.
Article En | MEDLINE | ID: mdl-36597787

The use of lymphoid interstitial pneumonia (LIP) as a diagnostic term has changed considerably since its introduction. Utilizing a multi-institutional collection of 201 cases from the last 20 years that demonstrate features associated with the LIP rubric, we compared cases meeting strict histologic criteria for LIP per American Thoracic Society (ATS)/European Respiratory Society (ERS) consensus ("pathologic LIP"; n=62) with cystic cases fulfilling radiologic ATS/ERS criteria ("radiologic LIP"; n=33) and with other diffuse benign lymphoid proliferations. "Pathologic LIP" was associated with immune dysregulation, including autoimmune disorders and immune deficiency, whereas "radiologic LIP" was seen only with autoimmune disorders. No case of idiopathic LIP was found. On histology, "pathologic LIP" represented a subgroup of 70% (62/88) of cases with the distinctive pattern of diffuse expansile lymphoid infiltrates. In contrast, "radiologic LIP" demonstrated a broad spectrum of inflammatory patterns, airway-centered inflammation being most common (52%; 17/33). Only 5 cases with radiologic cysts also met consensus ATS/ERS criteria for "pathologic LIP." Overall, broad overlap was observed with the remaining study cases that failed to meet consensus criteria for "radiologic LIP" and/or "pathologic LIP." These data raise concerns about the practical use of the term LIP as currently defined. What radiologists and pathologists encounter as LIP differs remarkably, but neither "radiologic LIP" nor "pathologic LIP" presents with sufficiently distinct findings to delineate such cases from other patterns of diffuse benign lymphoid proliferations. As a result of this study, we believe LIP should be abandoned as a pathologic and radiologic diagnosis.


Idiopathic Interstitial Pneumonias; Lung Diseases, Interstitial; Humans; Lung Diseases, Interstitial/pathology; Lung/pathology; Idiopathic Interstitial Pneumonias/diagnosis; Idiopathic Interstitial Pneumonias/pathology; Radiography
8.
Med Image Comput Comput Assist Interv ; 14220: 651-662, 2023 Oct.
Article En | MEDLINE | ID: mdl-38751905

Deep learning nowadays offers expert-level and sometimes even super-expert-level performance, but achieving such performance demands massive annotated data for training (e.g., Google's proprietary CXR Foundation Model (CXR-FM) was trained on 821,544 labeled and mostly private chest X-rays (CXRs)). Numerous datasets are publicly available in medical imaging, but each is individually small and heterogeneous in expert labels. We envision a powerful and robust foundation model that can be trained by aggregating numerous small public datasets. To realize this vision, we have developed Ark, a framework that accrues and reuses knowledge from heterogeneous expert annotations in various datasets. As a proof of concept, we have trained two Ark models on 335,484 and 704,363 CXRs, respectively, by merging several datasets, including ChestX-ray14, CheXpert, MIMIC-II, and VinDr-CXR; evaluated them on a wide range of imaging tasks covering both classification and segmentation via fine-tuning, linear probing, and gender-bias analysis; and demonstrated Ark's superior and robust performance over state-of-the-art (SOTA) fully/self-supervised baselines and Google's proprietary CXR-FM. This enhanced performance is attributed to our simple yet powerful observation that aggregating numerous public datasets diversifies patient populations and accrues knowledge from diverse experts, yielding unprecedented performance while saving annotation cost. With all codes and pretrained models released at GitHub.com/JLiangLab/Ark, we hope that Ark exerts an important impact on open science: accruing and reusing knowledge from expert annotations in public datasets can potentially surpass the performance of proprietary models trained on unusually large data, inspiring many more researchers worldwide to share codes and datasets to build open foundation models, accelerate open science, and democratize deep learning for medical imaging.
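
The core idea of accruing knowledge from heterogeneous expert labels can be sketched as a shared backbone with one classification head per public dataset, cycled during training. This is a deliberately simplified sketch; the released Ark code (GitHub.com/JLiangLab/Ark) adds components not shown here, and the dataset names and label counts below are placeholders.

```python
# One shared backbone, one head per public dataset; training cycles
# through the datasets so the backbone accrues knowledge from all of them.
import torch
import torch.nn as nn

class MultiHeadClassifier(nn.Module):
    def __init__(self, backbone, feat_dim, labels_per_dataset):
        super().__init__()
        self.backbone = backbone     # any feature extractor -> (B, feat_dim)
        self.heads = nn.ModuleDict({name: nn.Linear(feat_dim, n)
                                    for name, n in labels_per_dataset.items()})

    def forward(self, x, dataset_name):
        return self.heads[dataset_name](self.backbone(x))

def train_one_epoch(model, loaders, opt):
    criterion = nn.BCEWithLogitsLoss()    # multi-label chest X-ray tasks
    for name, loader in loaders.items():  # e.g. {"chestxray14": ..., "chexpert": ...}
        for images, labels in loader:
            opt.zero_grad()
            criterion(model(images, name), labels.float()).backward()
            opt.step()
```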

9.
Proc Mach Learn Res ; 172: 535-551, 2022 Jul.
Article En | MEDLINE | ID: mdl-36579134

Recently, self-supervised instance discrimination methods have achieved significant success in learning visual representations from unlabeled photographic images. However, given the marked differences between photographic and medical images, the efficacy of instance-based objectives, which focus on learning the most discriminative global features in an image (e.g., the wheels of a bicycle), remains unknown in medical imaging. Our preliminary analysis showed that the high global similarity of medical images in terms of anatomy hampers instance discrimination methods from capturing a set of distinct features, negatively impacting their performance on medical downstream tasks. To alleviate this limitation, we have developed a simple yet effective self-supervised framework, called Context-Aware instance Discrimination (CAiD). CAiD aims to improve instance discrimination learning by providing finer and more discriminative information encoded from a diverse local context of unlabeled medical images. We conduct a systematic analysis to investigate the utility of the learned features from a three-pronged perspective: (i) generalizability and transferability, (ii) separability in the embedding space, and (iii) reusability. Our extensive experiments demonstrate that CAiD (1) enriches representations learned from existing instance discrimination methods; (2) delivers more discriminative features by adequately capturing finer contextual information from individual medical images; and (3) improves reusability of low/mid-level features compared to standard instance discrimination methods. As open science, all codes and pre-trained models are available on our GitHub page: https://github.com/JLiangLab/CAiD.
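
To make the "local context" idea concrete, the sketch below shows only the data side, under assumed crop sizes and augmentations: a random local window is sampled from an unlabeled image, two augmented views of it feed an instance-discrimination objective, and the unperturbed crop serves as a restoration target. The actual CAiD pipeline at github.com/JLiangLab/CAiD may differ.

```python
# Data side only: sample a local context window, make two augmented views
# for instance discrimination, keep the clean crop as a restoration target.
# Assumes the image is larger than crop_size; augmentations are arbitrary.
import random
import torchvision.transforms as T
from torchvision.transforms import functional as TF

augment = T.Compose([
    T.RandomApply([T.ColorJitter(brightness=0.4, contrast=0.4)], p=0.8),
    T.RandomApply([T.GaussianBlur(kernel_size=23)], p=0.5),
])

def sample_local_context(image, crop_size=224):
    """image: a PIL image or (C, H, W) tensor of one unlabeled radiograph."""
    _, h, w = TF.get_dimensions(image)
    top = random.randint(0, h - crop_size)
    left = random.randint(0, w - crop_size)
    crop = TF.crop(image, top, left, crop_size, crop_size)
    view1, view2 = augment(crop), augment(crop)  # discrimination pair
    return view1, view2, crop                    # crop = restoration target
```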

10.
Domain Adapt Represent Transf (2022) ; 13542: 77-87, 2022 Sep.
Article En | MEDLINE | ID: mdl-36507898

Vision transformer-based self-supervised learning (SSL) approaches have recently shown substantial success in learning visual representations from unannotated photographic images. However, their acceptance in medical imaging is still lukewarm, due to the significant discrepancy between medical and photographic images. Consequently, we propose POPAR (patch order prediction and appearance recovery), a novel vision transformer-based self-supervised learning framework for chest X-ray images. POPAR leverages the benefits of vision transformers and the unique properties of medical imaging, aiming to simultaneously learn patch-wise high-level contextual features by correcting shuffled patch orders and fine-grained features by recovering patch appearance. We transfer POPAR pretrained models to diverse downstream tasks. The experimental results suggest that (1) POPAR outperforms state-of-the-art (SoTA) self-supervised models with a vision transformer backbone; (2) POPAR achieves significantly better performance than all three SoTA contrastive learning methods; and (3) POPAR also outperforms fully supervised pretrained models across architectures. In addition, our ablation study suggests that both fine-grained and global contextual features are preferred for achieving better performance on medical imaging tasks. All code and models are available at GitHub.com/JLiangLab/POPAR.
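
The two pretext tasks named in POPAR, patch order prediction and appearance recovery, can be sketched as follows. The patch shuffling, head shapes, and loss weighting below are simplified assumptions, and the ViT backbone that produces the tokens is omitted; see GitHub.com/JLiangLab/POPAR for the real model.

```python
# Shuffle image patches, then (a) predict each patch's original position
# and (b) recover its appearance. The ViT that turns shuffled patches into
# `tokens` is omitted; shapes and loss weighting are simplified.
import torch
import torch.nn as nn
import torch.nn.functional as F

def shuffle_patches(images, patch=32):
    B, C, H, W = images.shape
    n = (H // patch) * (W // patch)
    patches = F.unfold(images, kernel_size=patch, stride=patch)  # (B, C*p*p, n)
    perm = torch.stack([torch.randperm(n) for _ in range(B)]).to(images.device)
    shuffled = torch.gather(
        patches, 2, perm.unsqueeze(1).expand(-1, patches.size(1), -1))
    # (B, n, C*p*p) token sequences plus the ground-truth ordering
    return shuffled.transpose(1, 2), patches.transpose(1, 2), perm

class POPARStyleHeads(nn.Module):
    def __init__(self, dim, num_patches, patch_pixels):
        super().__init__()
        self.order_head = nn.Linear(dim, num_patches)        # original slot
        self.appearance_head = nn.Linear(dim, patch_pixels)  # pixel recovery

    def loss(self, tokens, original_patches, perm):
        # tokens: (B, n, dim) transformer outputs for the shuffled patches
        order_logits = self.order_head(tokens)                # (B, n, n)
        loss_order = F.cross_entropy(order_logits.flatten(0, 1), perm.flatten())
        recon = self.appearance_head(tokens)                  # (B, n, C*p*p)
        target = torch.gather(                                # patch each token holds
            original_patches, 1,
            perm.unsqueeze(-1).expand(-1, -1, original_patches.size(-1)))
        loss_appearance = F.mse_loss(recon, target)
        return loss_order + loss_appearance
```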

11.
Domain Adapt Represent Transf (2022) ; 13542: 66-76, 2022 Sep.
Article En | MEDLINE | ID: mdl-36507899

Uniting three self-supervised learning (SSL) ingredients (discriminative, restorative, and adversarial learning) enables collaborative representation learning and yields three transferable components: a discriminative encoder, a restorative decoder, and an adversary encoder. To leverage this advantage, we have redesigned five prominent SSL methods (Rotation, Jigsaw, Rubik's Cube, Deep Clustering, and TransVW) and formulated each in a United framework for 3D medical imaging. However, such a United framework increases model complexity and pretraining difficulty. To overcome this difficulty, we develop a stepwise incremental pretraining strategy: a discriminative encoder is first trained via discriminative learning; the pretrained discriminative encoder is then attached to a restorative decoder, forming a skip-connected encoder-decoder, for further joint discriminative and restorative learning; and finally, the pretrained encoder-decoder is associated with an adversarial encoder for full discriminative, restorative, and adversarial learning. Our extensive experiments demonstrate that stepwise incremental pretraining stabilizes the training of United models, resulting in significant performance gains and annotation cost reduction via transfer learning for five target tasks, encompassing both classification and segmentation, across diseases, organs, datasets, and modalities. This performance is attributed to the synergy of the three SSL ingredients in our United framework, unleashed via stepwise incremental pretraining. All codes and pretrained models are available at GitHub.com/JLiangLab/StepwisePretraining.

12.
Domain Adapt Represent Transf (2022) ; 13542: 12-22, 2022 Sep.
Article En | MEDLINE | ID: mdl-36383492

Visual transformers have recently gained popularity in the computer vision community as they began to outrank convolutional neural networks (CNNs) in one representative visual benchmark after another. However, the competition between visual transformers and CNNs in medical imaging is rarely studied, leaving many important questions unanswered. As the first step, we benchmark how well existing transformer variants that use various (supervised and self-supervised) pre-training methods perform against CNNs on a variety of medical classification tasks. Furthermore, given the data-hungry nature of transformers and the annotation-deficiency challenge of medical imaging, we present a practical approach for bridging the domain gap between photographic and medical images by utilizing unlabeled large-scale in-domain data. Our extensive empirical evaluations reveal the following insights in medical imaging: (1) good initialization is more crucial for transformer-based models than for CNNs; (2) self-supervised learning based on masked image modeling captures more generalizable representations than supervised models; and (3) assembling a larger-scale domain-specific dataset can better bridge the domain gap between photographic and medical images via self-supervised continual pre-training. We hope this benchmark study can direct future research on applying transformers to medical image analysis. All codes and pre-trained models are available on our GitHub page https://github.com/JLiangLab/BenchmarkTransformers.
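
A minimal sketch of the kind of comparison run in such a benchmark, the same ViT fine-tuned from an ImageNet checkpoint versus trained from random initialization, is shown below using the public timm API; the dataset, label count, and hyperparameters are placeholders.

```python
# Same ViT, two initializations: ImageNet-pretrained vs. from scratch.
# Dataset, label count, and hyperparameters are placeholders.
import timm
import torch

def build(pretrained, num_classes=14):       # e.g. 14 thoracic disease labels
    return timm.create_model("vit_base_patch16_224",
                             pretrained=pretrained,
                             num_classes=num_classes)

model_scratch = build(pretrained=False)
model_transfer = build(pretrained=True)      # "good initialization"

# Both would then be trained identically, e.g.:
optimizer = torch.optim.AdamW(model_transfer.parameters(), lr=1e-4)
criterion = torch.nn.BCEWithLogitsLoss()     # multi-label chest X-ray task
```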

13.
Article En | MEDLINE | ID: mdl-36313959

Discriminative learning, restorative learning, and adversarial learning have proven beneficial for self-supervised learning schemes in computer vision and medical imaging. Existing efforts, however, omit their synergistic effects on each other in a ternary setup, which, we envision, can significantly benefit deep semantic representation learning. To realize this vision, we have developed DiRA, the first framework that unites discriminative, restorative, and adversarial learning in a unified manner to collaboratively glean complementary visual information from unlabeled medical images for fine-grained semantic representation learning. Our extensive experiments demonstrate that DiRA (1) encourages collaborative learning among three learning ingredients, resulting in more generalizable representation across organs, diseases, and modalities; (2) outperforms fully supervised ImageNet models and increases robustness in small data regimes, reducing annotation cost across multiple medical imaging applications; (3) learns fine-grained semantic representation, facilitating accurate lesion localization with only image-level annotation; and (4) enhances state-of-the-art restorative approaches, revealing that DiRA is a general mechanism for united representation learning. All code and pretrained models are available at https://github.com/JLiangLab/DiRA.

14.
JACC Case Rep ; 4(8): 476-480, 2022 Apr 20.
Article En | MEDLINE | ID: mdl-35493796

Although infrequent, damage to cardiovascular structures can occur during or following a minimally invasive repair of pectus excavatum. We present a case of right ventricular outflow tract compression caused by a displaced intrathoracic bar. Removal of the bar resulted in an improvement in symptoms and hemodynamics. (Level of Difficulty: Advanced.)

15.
J Am Heart Assoc ; 11(7): e022149, 2022 04 05.
Article En | MEDLINE | ID: mdl-35377159

Background Pectus excavatum is the most common chest wall deformity. There is still controversy about the cardiopulmonary limitations of this disease and the benefits of surgical repair. This study evaluates the impact of pectus excavatum on the cardiopulmonary function of adult patients before and after a modified minimally invasive repair. Methods and Results In this retrospective cohort study, an electronic database was used to identify consecutive adult (aged ≥18 years) patients who underwent cardiopulmonary exercise testing before and after primary pectus excavatum repair at Mayo Clinic Arizona from 2011 to 2020. In total, 392 patients underwent preoperative cardiopulmonary exercise testing; abnormal oxygen consumption results were present in 68% of patients. Among them, 130 patients (68% men; mean age, 32.4±10.0 years) had post-repair evaluations. Post-repair tests were performed immediately before bar removal, with a mean time between repair and post-repair testing of 3.4±0.7 years (range, 2.5-7.0). A significant improvement in cardiopulmonary outcomes (P<0.001 for all comparisons) was seen in the post-repair evaluations, including increases in maximum and predicted rate of oxygen consumption, oxygen pulse, oxygen consumption at anaerobic threshold, and maximal ventilation. In a subanalysis of 39 patients who also underwent intraoperative transesophageal echocardiography at repair and at bar removal, a significant increase in right ventricular stroke volume was found (P<0.001). Conclusions Consistent improvements in cardiopulmonary function were seen in adult patients with pectus excavatum undergoing surgery. These results strongly support the existence of adverse cardiopulmonary consequences of this disease as well as the benefits of surgical repair.


Funnel Chest; Adolescent; Adult; Female; Funnel Chest/surgery; Humans; Lung; Male; Postoperative Period; Retrospective Studies; Treatment Outcome; Young Adult
16.
Radiographics ; 42(1): 38-55, 2022.
Article En | MEDLINE | ID: mdl-34826256

Medication-induced pulmonary injury (MIPI) is a complex medical condition that has become increasingly common yet remains stubbornly difficult to diagnose. Diagnosis can be aided by combining knowledge of the most common imaging patterns caused by MIPI with awareness of which medications a patient may be exposed to in specific clinical settings. The authors describe six imaging patterns commonly associated with MIPI: sarcoidosis-like, diffuse ground-glass opacities, organizing pneumonia, centrilobular ground-glass nodules, linear-septal, and fibrotic. Subsequently, the occurrence of these patterns is discussed in the context of five different clinical scenarios and the medications and medication classes typically used in those scenarios. These scenarios and medication classes include the rheumatology or gastrointestinal clinic (disease-modifying antirheumatic agents), cardiology clinic (antiarrhythmics), hematology clinic (cytotoxic agents, tyrosine kinase inhibitors, retinoids), oncology clinic (immune modulators, tyrosine kinase inhibitors, monoclonal antibodies), and inpatient service (antibiotics, blood products). Additionally, the article draws comparisons between the appearance of MIPI and the alternative causes of lung disease typically seen in those clinical scenarios (eg, connective tissue disease-related interstitial lung disease in the rheumatology clinic and hydrostatic pulmonary edema in the cardiology clinic). Familiarity with the most common imaging patterns associated with frequently administered medications can help insert MIPI into the differential diagnosis of acquired lung disease in these scenarios. However, confident diagnosis is often thwarted by absence of specific diagnostic tests for MIPI. Instead, a working diagnosis typically relies on multidisciplinary consensus. ©RSNA, 2021.


Connective Tissue Diseases; Lung Diseases, Interstitial; Lung Injury; Humans; Lung; Lung Injury/chemically induced; Lung Injury/diagnostic imaging; Tomography, X-Ray Computed/methods
17.
Med Image Anal ; 71: 101997, 2021 07.
Article En | MEDLINE | ID: mdl-33853034

The splendid success of convolutional neural networks (CNNs) in computer vision is largely attributable to the availability of massive annotated datasets, such as ImageNet and Places. However, in medical imaging, it is challenging to create such large annotated datasets, as annotating medical images is not only tedious, laborious, and time consuming, but it also demands costly, specialty-oriented skills, which are not easily accessible. To dramatically reduce annotation cost, this paper presents a novel method to naturally integrate active learning and transfer learning (fine-tuning) into a single framework, which starts directly with a pre-trained CNN to seek "worthy" samples for annotation and gradually enhances the (fine-tuned) CNN via continual fine-tuning. We have evaluated our method using three distinct medical imaging applications, demonstrating that it can reduce annotation efforts by at least half compared with random selection.
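
The loop described above can be sketched as follows, with entropy-based uncertainty standing in for the paper's sample-selection criteria (which combine prediction consistency over augmented patches); all names, the loader format, and the selection rule are simplifications.

```python
# Entropy-based stand-in for the "worthy sample" selection step; the model
# is then continually fine-tuned on everything labeled so far and the loop
# repeats. Loader format and the selection rule are simplifying assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_for_annotation(model, unlabeled_loader, budget=100):
    model.eval()
    scores, ids = [], []
    for idx, images in unlabeled_loader:      # loader yields (index, image) batches
        probs = F.softmax(model(images), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
        scores.append(entropy)
        ids.append(idx)
    scores, ids = torch.cat(scores), torch.cat(ids)
    top = scores.topk(budget).indices
    return ids[top].tolist()                  # sample ids to send for annotation
```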


Diagnostic Imaging; Neural Networks, Computer; Humans; Longitudinal Studies
18.
Med Mycol ; 59(8): 834-841, 2021 Jul 14.
Article En | MEDLINE | ID: mdl-33724424

Approximately 5 to 15% of patients with pulmonary coccidioidomycosis subsequently develop pulmonary cavities. These cavities may resolve spontaneously over a number of years; however, some cavities never close, and a small proportion causes complications such as hemorrhage, pneumothorax, or empyema. The impact of azole antifungal treatment on coccidioidal cavities has not been studied. Because azoles are a common treatment for symptomatic pulmonary coccidioidomycosis, we aimed to assess the impact of azole therapy on cavity closure. From January 1, 2004, through December 31, 2014, we retrospectively identified 313 patients with cavitary coccidioidomycosis and excluded 42 who had the cavity removed surgically, leaving 271 data sets available for study. Of the 271 patients, 221 (81.5%) received azole therapy during 5-year follow-up; 50 patients did not receive antifungal treatment. Among the 271 patients, cavities closed in 38 (14.0%). Statistical modeling showed that cavities were more likely to close in patients in the treated group than in the nontreated group (hazard ratio, 2.14 [95% CI: 1.45-5.66]). Cavities were less likely to close in active smokers than nonsmokers (11/41 [26.8%] vs 97/182 [53.3%]; P = 0.002) or in persons with than without diabetes (27/74 [36.5%] vs 81/149 [54.4%]; P = 0.01). We did not find an association between cavity size and closure. Our findings provide a rationale for further study of treatment protocols in this subset of patients with coccidioidomycosis. LAY SUMMARY: Coccidioidomycosis, known as valley fever, is a fungal infection that infrequently causes cavities to form in the lungs, which potentially results in long-term lung symptoms. We learned that cavities closed more often in persons who received antifungal drugs, but most cavities never closed completely.
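
Not taken from the study itself: the sketch below shows, with hypothetical data and column names, how a hazard ratio for cavity closure under azole therapy could be estimated with a Cox proportional hazards model using the open-source lifelines package.

```python
# Hypothetical data; not the study's dataset. Illustrates estimating a
# hazard ratio for cavity closure with a Cox proportional hazards model.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_followed": [12, 60, 36, 48, 24, 60, 18, 60, 30, 54],
    "cavity_closed":   [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],   # 0 = censored
    "azole_treated":   [1, 1, 1, 0, 1, 0, 0, 1, 1, 0],
    "active_smoker":   [0, 1, 0, 1, 0, 0, 1, 0, 1, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followed", event_col="cavity_closed")
cph.print_summary()   # hazard ratios (exp(coef)) with 95% confidence intervals
```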


Antifungal Agents/therapeutic use; Azoles/therapeutic use; Coccidioidomycosis/drug therapy; Adolescent; Adult; Aged; Aged, 80 and over; Coccidioidomycosis/complications; Coccidioidomycosis/epidemiology; Comorbidity; Diabetes Complications/drug therapy; Diabetes Complications/epidemiology; Female; Humans; Immunosuppression Therapy; Male; Middle Aged; Neoplasms/complications; Pulmonary Disease, Chronic Obstructive/complications; Pulmonary Disease, Chronic Obstructive/epidemiology; Retrospective Studies; Smokers; Transplant Recipients; Young Adult
19.
IEEE Trans Med Imaging ; 40(10): 2857-2868, 2021 10.
Article En | MEDLINE | ID: mdl-33617450

This paper introduces a new concept called "transferable visual words" (TransVW), aiming to achieve annotation efficiency for deep learning in medical image analysis. Medical imaging, which focuses on particular parts of the body for defined clinical purposes, generates images of great similarity in anatomy across patients and yields sophisticated anatomical patterns across images; these patterns are associated with rich semantics about human anatomy and are natural visual words. We show that these visual words can be automatically harvested according to anatomical consistency via self-discovery, and that the self-discovered visual words can serve as strong yet free supervision signals for deep models to learn semantics-enriched generic image representations via self-supervision (self-classification and self-restoration). Our extensive experiments demonstrate the annotation efficiency of TransVW, offering higher performance and faster convergence with reduced annotation cost in several applications. TransVW has several important advantages: (1) TransVW is a fully autodidactic scheme, which exploits the semantics of visual words for self-supervised learning, requiring no expert annotation; (2) visual word learning is an add-on strategy, which complements existing self-supervised methods, boosting their performance; and (3) the learned image representation is semantics-enriched, which has proven to be more robust and generalizable, saving annotation effort for a variety of applications through transfer learning. Our code, pre-trained models, and curated visual words are available at https://github.com/JLiangLab/TransVW.
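
The two self-supervision signals named above, self-classification and self-restoration, can be sketched as a model with a visual-word classification head and a restoration decoder. How visual words are self-discovered is omitted, and the module names and class count are placeholders; see github.com/JLiangLab/TransVW for the actual implementation.

```python
# Self-classification (which visual word is this?) plus self-restoration
# (recover the original appearance). Encoder/decoder and the number of
# visual words are placeholders; word labels come from the self-discovery
# step, which is not shown here.
import torch.nn as nn

class TransVWStyleModel(nn.Module):
    def __init__(self, encoder, decoder, feat_dim, num_visual_words=200):
        super().__init__()
        self.encoder = encoder          # e.g. a 3D CNN encoder -> (B, feat_dim)
        self.decoder = decoder          # maps features back to an image patch
        self.classifier = nn.Linear(feat_dim, num_visual_words)
        self.ce = nn.CrossEntropyLoss()
        self.l2 = nn.MSELoss()

    def forward(self, perturbed_patch, original_patch, word_label):
        feat = self.encoder(perturbed_patch)
        loss_cls = self.ce(self.classifier(feat), word_label)   # self-classification
        loss_rec = self.l2(self.decoder(feat), original_patch)  # self-restoration
        return loss_cls + loss_rec
```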


Diagnostic Imaging; Semantics; Humans; Radiography; Supervised Machine Learning
20.
Mach Learn Med Imaging ; 12966: 692-702, 2021 Sep.
Article En | MEDLINE | ID: mdl-35695860

Pulmonary embolism (PE) represents a thrombus ("blood clot"), usually originating from a lower extremity vein, that travels to the blood vessels in the lung, causing vascular obstruction and, in some patients, death. This disorder is commonly diagnosed using CT pulmonary angiography (CTPA). Deep learning holds great promise for the computer-aided diagnosis (CAD) of PE using CTPA. However, numerous competing methods exist for a given task in the deep learning literature, causing great confusion regarding the development of a CAD system for PE. To address this confusion, we present a comprehensive analysis of competing deep learning methods applicable to PE diagnosis using CTPA at both the image and exam levels. At the image level, we compare convolutional neural networks (CNNs) with vision transformers, and contrast self-supervised learning (SSL) with supervised learning, followed by an evaluation of transfer learning compared with training from scratch. At the exam level, we focus on comparing conventional classification (CC) with multiple instance learning (MIL). Our extensive experiments consistently show: (1) transfer learning boosts performance despite differences between natural images and CT scans; (2) transfer learning with SSL surpasses its supervised counterparts; (3) CNNs outperform vision transformers, which otherwise show satisfactory performance; and (4) CC is, surprisingly, superior to MIL. Compared with the state of the art, our optimal approach provides an AUC gain of 0.2% and 1.05% at the image and exam levels, respectively.

...