1.
IEEE Trans Med Imaging ; 43(1): 416-426, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37651492

ABSTRACT

Deep learning methods are often hampered by data imbalance and data hunger. In medical imaging, malignant or rare diseases frequently constitute minority classes in a dataset and exhibit diverse distributions. In addition, insufficient labels and unseen cases further complicate training on the minority classes. To address these problems, we propose a novel Hierarchical-instance Contrastive Learning (HCLe) method for minority detection that involves only data from the majority classes during training. To tackle inconsistent intra-class distributions in the majority classes, our method introduces two branches: the first employs an auto-encoder network augmented with three constraint functions to effectively extract image-level features, and the second designs a novel contrastive learning network that accounts for the consistency of features among hierarchical samples from the majority classes. The proposed method is further refined with a diverse mini-batch strategy, enabling the identification of minority classes under multiple conditions. Extensive experiments were conducted to evaluate the proposed method on three datasets covering different diseases and modalities. The results show that the proposed method outperforms state-of-the-art methods.
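The core idea of training only on the majority class and flagging minority samples afterwards can be illustrated with a minimal sketch. This is not the HCLe method itself (which uses a constrained auto-encoder plus a hierarchical contrastive branch); it substitutes the simplest possible reconstruction model, a linear auto-encoder fitted by PCA, and all data here are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: "majority" class clustered near the origin,
# "minority" samples shifted away. Real inputs would be image features.
majority = rng.normal(0.0, 1.0, size=(500, 16))
minority = rng.normal(5.0, 1.0, size=(20, 16))

# Fit a linear auto-encoder on majority data only (PCA gives the optimal
# linear encoder/decoder under squared reconstruction error).
mean = majority.mean(axis=0)
_, _, vt = np.linalg.svd(majority - mean, full_matrices=False)
components = vt[:4]                      # 4-dimensional latent code

def reconstruction_error(x):
    code = (x - mean) @ components.T     # encode
    recon = code @ components + mean     # decode
    return np.linalg.norm(x - recon, axis=1)

# Threshold taken from the majority distribution; samples from an unseen
# minority class reconstruct poorly and exceed it.
threshold = np.quantile(reconstruction_error(majority), 0.99)
flags = reconstruction_error(minority) > threshold
print(flags.mean())                      # fraction of minority samples flagged
```

The same recipe scales up: replace the PCA encoder/decoder with a deep auto-encoder and the threshold with a learned score, which is the territory the paper's two-branch design operates in.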

2.
Eur J Radiol ; 171: 111277, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38160541

ABSTRACT

OBJECTIVES: To explore the possibility of automatic diagnosis of congenital heart disease (CHD) and pulmonary arterial hypertension associated with CHD (PAH-CHD) from chest radiographs using artificial intelligence (AI) technology, and to evaluate whether AI assistance could improve clinical diagnostic accuracy. MATERIALS AND METHODS: A total of 3255 frontal preoperative chest radiographs (1174 CHD of any type and 2081 non-CHD) were retrospectively obtained. In this study, we adopted ResNet18 pretrained on the ImageNet database to establish diagnostic models. Radiologists diagnosed CHD/PAH-CHD from 330/165 chest radiographs twice: the first time, 50% of the images were accompanied by AI-based classification; after a month, the remaining 50% were accompanied by AI-based classification. Diagnostic results were compared between the radiologists and AI models, and between radiologists with and without AI assistance. RESULTS: The AI model achieved an average area under the receiver operating characteristic curve (AUC) of 0.948 (sensitivity: 0.970, specificity: 0.982) for CHD diagnosis and an AUC of 0.778 (sensitivity: 0.632, specificity: 0.925) for identifying PAH-CHD. In the balanced test set of 330 radiographs (165 CHD and 165 non-CHD), AI achieved higher AUCs than all 5 radiologists in the identification of CHD (radiologists: 0.670-0.858) and PAH-CHD (0.610-0.688). With AI assistance, the mean ± standard error AUC of radiologists was significantly improved for CHD (ΔAUC +0.096, 95% CI: 0.001-0.190; P = 0.048) and PAH-CHD (ΔAUC +0.066, 95% CI: 0.010-0.122; P = 0.031) diagnosis. CONCLUSION: Chest radiograph-based AI models can detect CHD and PAH-CHD automatically. AI assistance improved radiologists' diagnostic accuracy, which may facilitate a timely initial diagnosis of CHD and PAH-CHD.


Subject(s)
Heart Defects, Congenital , Hypertension, Pulmonary , Pulmonary Arterial Hypertension , Humans , Pulmonary Arterial Hypertension/complications , Artificial Intelligence , Retrospective Studies , Heart Defects, Congenital/complications , Heart Defects, Congenital/diagnostic imaging
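The study's headline numbers are AUCs comparing the model with radiologists. As a reminder of what that metric measures, here is a minimal sketch of AUC via the rank-sum (Mann-Whitney) identity; labels and scores are toy values, not the study's data.

```python
def auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case,
    counting ties as half a win (the Mann-Whitney identity)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy check: one positive is out-ranked by one negative, so 3 of 4
# positive/negative pairs are ordered correctly.
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

This pairwise formulation is why AUC is insensitive to class imbalance, which matters when a balanced 165/165 test set is carved out of an imbalanced cohort as done here.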
3.
Med Image Anal ; 87: 102805, 2023 07.
Article in English | MEDLINE | ID: mdl-37104995

ABSTRACT

Unsupervised anomaly detection (UAD) detects anomalies by learning the distribution of normal data without labels; it therefore has wide application in medical imaging, where it alleviates the burden of collecting annotated data. Current UAD methods mostly learn the normal data by reconstructing the original input, but often ignore prior information that carries semantic meaning. In this paper, we first propose a universal unsupervised anomaly detection framework, SSL-AnoVAE, which utilizes a self-supervised learning (SSL) module to provide finer-grained semantics tailored to the anomalies to be detected in retinal images. We also explore the relationship between the data transformation adopted in the SSL module and the quality of anomaly detection for retinal images. Moreover, to take full advantage of SSL-AnoVAE and apply it to clinical computer-aided diagnosis of retinal diseases, we further propose to stage and segment the anomalies detected by SSL-AnoVAE in an unsupervised manner. Experimental results demonstrate the effectiveness of the proposed method for unsupervised anomaly detection, staging, and segmentation on both retinal optical coherence tomography images and color fundus photographs.


Subject(s)
Diagnosis, Computer-Assisted , Retinal Diseases , Humans , Fundus Oculi , Retinal Diseases/diagnostic imaging , Semantics , Tomography, Optical Coherence , Image Processing, Computer-Assisted
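The abstract stresses that the choice of data transformation in the SSL module drives anomaly-detection quality. A common transformation-based pretext task, shown below as an illustrative sketch (the paper does not specify this exact task), is rotation prediction: the rotation applied to each image becomes a free label, so the SSL branch learns orientation-sensitive features without annotations. The arrays are synthetic stand-ins for retinal patches.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_rotation_batch(images):
    """Self-supervised pretext task: rotate each image by a random
    multiple of 90 degrees and use the rotation index as the label.
    A classifier trained on (rotated image, index) pairs must learn
    features that capture image orientation and structure."""
    xs, ys = [], []
    for img in images:
        k = int(rng.integers(0, 4))      # 0, 90, 180, or 270 degrees
        xs.append(np.rot90(img, k))
        ys.append(k)
    return np.stack(xs), np.array(ys)

images = rng.random((8, 32, 32))         # stand-in for retinal patches
x, y = make_rotation_batch(images)
print(x.shape, y.shape)                  # (8, 32, 32) (8,)
```

Which transformation works best is exactly the question the paper probes empirically: a pretext task only helps if solving it requires features relevant to the target anomalies.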
4.
Interdiscip Sci ; 15(2): 262-272, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36656448

ABSTRACT

Differentiation of ductal carcinoma in situ (DCIS, a precancerous lesion of the breast) from fibroadenoma (FA) using ultrasonography is significant for the early prevention of malignant breast tumors. Radiomics-based artificial intelligence (AI) can provide additional diagnostic information but usually requires extensive labeling efforts by clinicians with specialized knowledge. This study aims to investigate the feasibility of differentially diagnosing DCIS and FA using ultrasound radiomics-based AI techniques and further explore a novel approach that can reduce labeling effort without sacrificing diagnostic performance. We included 461 DCIS and 651 FA patients, of whom 139 DCIS and 181 FA patients constituted a prospective test cohort. First, various feature engineering-based machine learning (FEML) and deep learning (DL) approaches were developed. Then, we designed a difference-based self-supervised (DSS) learning approach that requires only FA samples for training. The DSS approach consists of three steps: (1) pretraining a Bootstrap Your Own Latent (BYOL) model using FA images, (2) reconstructing images using the encoder and decoder of the pretrained model, and (3) distinguishing DCIS from FA based on the differences between the original and reconstructed images. The experimental results showed that the best of the trained FEML and DL models achieved an AUC of 0.7935 (95% confidence interval, 0.7900-0.7969) on the prospective test cohort, indicating that the developed models are effective for assisting in differentiating DCIS from FA on ultrasound images. Furthermore, the DSS model achieved an AUC of 0.8172 (95% confidence interval, 0.8124-0.8219), indicating that it outperforms the conventional radiomics-based AI models.


Subject(s)
Breast Neoplasms , Carcinoma, Intraductal, Noninfiltrating , Fibroadenoma , Humans , Female , Carcinoma, Intraductal, Noninfiltrating/diagnostic imaging , Carcinoma, Intraductal, Noninfiltrating/pathology , Artificial Intelligence , Diagnosis, Differential , Fibroadenoma/diagnostic imaging , Fibroadenoma/pathology , Prospective Studies , Breast Neoplasms/diagnostic imaging , Ultrasonography
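Step (1) of the DSS pipeline pretrains a BYOL model. The quantity BYOL minimizes can be sketched compactly; this is the standard BYOL regression objective in numpy on toy vectors, not the paper's full training loop, and the variable names are illustrative.

```python
import numpy as np

def byol_loss(online_pred, target_proj):
    """BYOL-style regression loss: mean squared error between the
    L2-normalised online-network prediction and the (stop-gradient)
    target-network projection. For unit vectors this equals
    2 - 2 * cosine_similarity."""
    p = online_pred / np.linalg.norm(online_pred, axis=-1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=-1, keepdims=True)
    return np.mean(np.sum((p - z) ** 2, axis=-1))

# Identical representations give zero loss; orthogonal ones give 2.
a = np.array([[1.0, 0.0]])
print(byol_loss(a, a))                       # 0.0
print(byol_loss(a, np.array([[0.0, 1.0]])))  # 2.0
```

Because this loss only pulls two augmented views of the same image together, it needs no DCIS labels at all, which is what lets the DSS approach train on FA samples alone.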
5.
Quant Imaging Med Surg ; 12(10): 4758-4770, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36185061

ABSTRACT

Background: This study set out to develop a computed tomography (CT)-based wavelet transforming radiomics approach for grading pulmonary lesions caused by COVID-19 and to validate it using real-world data. Methods: This retrospective study analyzed 111 patients with 187 pulmonary lesions from 16 hospitals; all patients had confirmed COVID-19 and underwent non-contrast chest CT. Data were divided into a training cohort (72 patients with 127 lesions from nine hospitals) and an independent test cohort (39 patients with 60 lesions from seven hospitals) according to the hospital in which the CT was performed. In all, 73 texture features were extracted from manually delineated lesion volumes, and 23 three-dimensional (3D) wavelets with eight decomposition modes were implemented to compare and validate the value of wavelet transformation for grade assessment. Finally, the optimal machine learning pipeline, valuable radiomic features, and final radiomic models were determined. The area under the receiver operating characteristic (ROC) curve (AUC), calibration curve, and decision curve were used to determine the diagnostic performance and clinical utility of the models. Results: Of the 187 lesions, 108 (57.75%) were diagnosed as mild and 79 (42.25%) as moderate/severe. All selected radiomic features showed significant correlations with the grade of COVID-19 pulmonary lesions (P<0.05). Biorthogonal 1.1 (bior1.1) LLL was determined to be the optimal wavelet transform mode. The wavelet transforming radiomic model had an AUC of 0.910 in the test cohort, outperforming the original radiomic model (AUC = 0.880; P<0.05). Decision analysis showed the radiomic model could add a net benefit at any given threshold of probability. Conclusions: Wavelet transformation can enhance CT texture features. Wavelet transforming radiomics based on CT images can be used to effectively assess the grade of pulmonary lesions caused by COVID-19, which may facilitate individualized management of patients with this disease.
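The winning decomposition mode was the bior1.1 LLL subband, i.e. the low-pass band along all three axes. The bior1.1 filter coincides with the Haar wavelet, so a one-level LLL can be sketched as paired averaging along each axis. This sketch uses an orthonormal 1/√2 scaling (some libraries scale differently) and a tiny synthetic volume in place of a delineated lesion.

```python
import numpy as np

def haar_lll(volume):
    """One-level 3D low-pass (LLL) subband with the Haar wavelet:
    combine adjacent voxel pairs along each axis in turn, halving
    every dimension. Detail (H) subbands would use pair differences
    instead of sums."""
    v = volume.astype(float)
    for axis in range(3):
        even = np.take(v, range(0, v.shape[axis], 2), axis=axis)
        odd = np.take(v, range(1, v.shape[axis], 2), axis=axis)
        v = (even + odd) / np.sqrt(2)    # Haar low-pass, orthonormal scaling
    return v

vol = np.arange(64, dtype=float).reshape(4, 4, 4)  # stand-in lesion volume
print(haar_lll(vol).shape)  # (2, 2, 2)
```

Texture features computed on this smoothed LLL band (and on the seven detail bands) are what "wavelet transforming radiomics" adds on top of features from the original voxels.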

6.
Med Image Anal ; 79: 102443, 2022 07.
Article in English | MEDLINE | ID: mdl-35537340

ABSTRACT

Thyroid nodule segmentation and classification in ultrasound images are two essential but challenging tasks for computer-aided diagnosis of thyroid nodules. Since these two tasks are inherently related and share common features, solving them jointly with multi-task learning is a promising direction. However, both previous studies and our experimental results confirm the problem of inconsistent predictions among related tasks. In this paper, we summarize two types of task inconsistency according to the relationship among the tasks: intra-task inconsistency between homogeneous tasks (e.g., two pixel-wise segmentation tasks) and inter-task inconsistency between heterogeneous tasks (e.g., a pixel-wise segmentation task and a categorical classification task). To address these problems, we propose intra- and inter-task consistent learning on top of the designed multi-stage, multi-task learning network, enforcing consistent predictions across all tasks during training. Experimental results on a large clinical thyroid ultrasound image dataset indicate that the proposed intra- and inter-task consistent learning effectively eliminates both types of task inconsistency and thus improves performance on all thyroid nodule segmentation and classification tasks.


Subject(s)
Thyroid Nodule , Diagnosis, Computer-Assisted , Humans , Image Processing, Computer-Assisted , Thyroid Nodule/diagnostic imaging , Ultrasonography/methods
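The notion of inter-task inconsistency, where a segmentation head and a classification head disagree about the same image, can be made concrete with a toy penalty. This is an illustrative formulation, not the paper's actual loss: it derives a nodule-presence probability from the predicted mask (its maximum foreground probability) and penalizes squared disagreement with the classification head.

```python
import numpy as np

def inter_task_consistency(seg_prob_map, cls_prob):
    """Illustrative inter-task consistency penalty: the segmentation
    map implies a presence probability (here, its maximum foreground
    probability); penalise squared disagreement between that implied
    probability and the classification head's output."""
    seg_derived = seg_prob_map.max()
    return (seg_derived - cls_prob) ** 2

seg = np.zeros((64, 64))
seg[20:30, 20:30] = 0.9                    # confident foreground blob
print(inter_task_consistency(seg, 0.9))    # 0.0: the two heads agree
print(inter_task_consistency(seg, 0.1))    # large: the two heads disagree
```

Adding such a term to the training objective pushes the shared network toward predictions that tell one coherent story, which is the intuition behind the paper's consistent-learning scheme.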