1.
Med Image Anal ; 94: 103153, 2024 May.
Article in English | MEDLINE | ID: mdl-38569380

ABSTRACT

Monitoring the healing progress of diabetic foot ulcers is a challenging process. Accurate segmentation of foot ulcers can help podiatrists quantitatively measure the size of wound regions and thereby assist prediction of healing status. The main challenge in this field is the scarcity of publicly available manual delineations, which are time-consuming and laborious to produce. Recently, methods based on deep learning have shown excellent results in automatic segmentation of medical images; however, they require large-scale datasets for training, and there is limited consensus on which methods perform best. The 2022 Diabetic Foot Ulcers Segmentation Challenge was held in conjunction with the 2022 International Conference on Medical Image Computing and Computer Assisted Intervention and sought to address these issues and stimulate progress in this research domain. A training set of 2000 images exhibiting diabetic foot ulcers was released with corresponding segmentation ground truth masks. Of the 72 approved requests from 47 countries, 26 teams used this data to develop fully automated systems to predict the true segmentation masks on a test set of 2000 images, whose ground truth segmentation masks were kept private. Predictions from participating teams were scored and ranked according to the average Dice similarity coefficient between the ground truth and prediction masks. The winning team achieved a Dice of 0.7287 for diabetic foot ulcer segmentation. The challenge has now entered a live leaderboard stage, where it serves as a challenging benchmark for diabetic foot ulcer segmentation.
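For reference, the challenge's ranking metric has a simple closed form; below is a minimal sketch of the Dice similarity coefficient on binary masks (function and variable names are illustrative, not taken from the challenge code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Illustrative usage on random masks standing in for ulcer segmentations
rng = np.random.default_rng(0)
pred = rng.integers(0, 2, size=(256, 256))
truth = rng.integers(0, 2, size=(256, 256))
print(f"Dice: {dice_coefficient(pred, truth):.4f}")
```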


Subject(s)
Diabetes Mellitus , Diabetic Foot , Humans , Diabetic Foot/diagnostic imaging , Neural Networks, Computer , Benchmarking , Image Processing, Computer-Assisted/methods
2.
J Biomech ; 166: 112046, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38467079

ABSTRACT

Full-length radiographs contain information from which many anatomical parameters of the pelvis, femur, and tibia may be derived, but only a few anatomical parameters are used for musculoskeletal modeling. This study aimed to develop a fully automatic algorithm that extracts anatomical parameters from a full-length radiograph to generate a musculoskeletal model more accurate than a linearly scaled one. A U-Net convolutional neural network was trained to segment the pelvis, femur, and tibia from the full-length radiograph. Eight anatomical parameters (six for length and width, two for angles) were automatically extracted from the bone segmentation masks and used to generate the musculoskeletal model. The Sørensen-Dice coefficient was used to quantify the consistency of automatic bone segmentation masks with manually segmented labels. Maximum distance error, root mean square (RMS) distance error, and Jaccard index (JI) were used to evaluate the geometric accuracy of the automatically generated pelvis, femur, and tibia models against CT bone models. Mean Sørensen-Dice coefficients for the pelvis, femur, and tibia 2D segmentation masks were 0.9898, 0.9822, and 0.9786, respectively. The algorithm-driven bone models were geometrically closer to the 3D CT bone models than the scaled generic models, with significantly lower maximum distance error (28.3% average decrease from 24.35 mm) and RMS distance error (28.9% average decrease from 9.55 mm) and higher JI (17.2% average increase from 0.46) (P < 0.001). Algorithm-driven musculoskeletal modeling (107.15 ± 10.24 s) was also faster than the manual process (870.07 ± 44.79 s) for the same radiograph. This algorithm provides a fully automatic way to generate a musculoskeletal model from a full-length radiograph, achieving an approximately 30% reduction in distance errors, which could enable personalized musculoskeletal simulation based on full-length radiographs for large-scale osteoarthritis (OA) populations.
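As an illustration of the geometric accuracy metrics, here is a minimal sketch computing maximum and RMS distance errors between two bone surfaces represented as point clouds (the point-cloud representation and all names are assumptions for illustration, not the paper's code):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distance_errors(model_pts: np.ndarray, reference_pts: np.ndarray):
    """Max and RMS of nearest-neighbor distances from a model surface to a reference surface."""
    tree = cKDTree(reference_pts)
    d, _ = tree.query(model_pts)  # distance of each model point to its closest reference point
    return d.max(), np.sqrt(np.mean(d ** 2))

# Illustrative usage with random point clouds standing in for bone surfaces
rng = np.random.default_rng(0)
model = rng.normal(size=(1000, 3))      # algorithm-driven bone surface points
reference = rng.normal(size=(1200, 3))  # CT-derived bone surface points
max_err, rms_err = surface_distance_errors(model, reference)
print(f"max={max_err:.2f} mm, RMS={rms_err:.2f} mm")
```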


Subject(s)
Neural Networks, Computer , Tibia , Radiography , Tibia/diagnostic imaging , Femur/diagnostic imaging , Pelvis , Image Processing, Computer-Assisted
3.
IEEE Trans Med Imaging ; 43(1): 416-426, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37651492

ABSTRACT

Deep learning methods are often hampered by issues such as data imbalance and data hunger. In medical imaging, malignant or rare diseases frequently form minority classes in a dataset and exhibit diverse distributions. In addition, insufficient labels and unseen cases further complicate training on the minority classes. To confront these problems, we propose a novel Hierarchical-instance Contrastive Learning (HCLe) method for minority detection that involves only majority-class data in the training stage. To tackle inconsistent intra-class distributions in the majority classes, our method introduces two branches: the first employs an auto-encoder network augmented with three constraint functions to effectively extract image-level features, and the second applies a novel contrastive learning network that accounts for the consistency of features among hierarchical samples from the majority classes. The proposed method is further refined with a diverse mini-batch strategy, enabling the identification of minority classes under multiple conditions. Extensive experiments were conducted to evaluate the proposed method on three datasets spanning different diseases and modalities. The results show that the proposed method outperforms state-of-the-art methods.
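The core idea of training only on majority-class data can be pictured with a plain reconstruction-based detector; this toy sketch deliberately omits HCLe's hierarchical contrastive branch and constraint functions, and all names are illustrative:

```python
import torch
import torch.nn as nn

# Minimal autoencoder standing in for the image-level feature branch
class AutoEncoder(nn.Module):
    def __init__(self, dim: int = 784, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_on_majority(model, majority_batches, epochs=2, lr=1e-3):
    """Train the autoencoder on majority-class samples only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in majority_batches:
            opt.zero_grad()
            loss = loss_fn(model(x), x)
            loss.backward()
            opt.step()

def minority_score(model, x):
    """Higher reconstruction error suggests a minority-class (e.g., rare disease) sample."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

# Toy usage with random batches standing in for majority-class images
batches = [torch.rand(32, 784) for _ in range(10)]
model = AutoEncoder()
train_on_majority(model, batches)
print(minority_score(model, torch.rand(4, 784)))
```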

4.
Article in English | MEDLINE | ID: mdl-38147422

ABSTRACT

We investigate the explainability of graph neural networks (GNNs) as a step toward elucidating their working mechanisms. While most current methods focus on explaining graph nodes, edges, or features, we argue that message flows, as the inherent functional mechanism of GNNs, are more natural targets for explanation. To this end, we propose a novel method, FlowX, that explains GNNs by identifying important message flows. To quantify the importance of flows, we follow the philosophy of Shapley values from cooperative game theory. To tackle the complexity of computing marginal contributions over all coalitions, we propose a flow sampling scheme that computes Shapley value approximations as initial assessments for further training. We then propose an information-controlled learning algorithm to train flow scores toward diverse explanation targets: necessary or sufficient explanations. Experimental studies on both synthetic and real-world datasets demonstrate that FlowX and its variants improve the explainability of GNNs.
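The sampling idea can be illustrated with a generic Monte Carlo Shapley estimator over random permutations; the players below could index message flows, and the toy value function is purely illustrative (it is not FlowX's scoring function):

```python
import random

def shapley_estimate(players, value_fn, n_samples=200, seed=0):
    """Monte Carlo approximation of Shapley values via random permutations.

    `players` could index message flows; `value_fn(coalition)` would return
    the model score when only that coalition is kept (both are assumptions).
    """
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = players[:]
        rng.shuffle(order)
        coalition = set()
        prev = value_fn(coalition)
        for p in order:
            coalition.add(p)
            cur = value_fn(coalition)
            phi[p] += cur - prev  # marginal contribution of p in this order
            prev = cur
    return {p: v / n_samples for p, v in phi.items()}

# Toy additive game: coalition value is the sum of member weights,
# so the estimates should recover each player's weight.
weights = {0: 1.0, 1: 2.0, 2: 0.5}
print(shapley_estimate(list(weights), lambda c: sum(weights[p] for p in c)))
```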

5.
IEEE J Biomed Health Inform ; 27(10): 4914-4925, 2023 10.
Article in English | MEDLINE | ID: mdl-37486830

ABSTRACT

Ultrasound-based estimation of fetal biometry is extensively used to diagnose prenatal abnormalities and to monitor fetal growth, for which accurate segmentation of the fetal anatomy is a crucial prerequisite. Although deep neural network-based models have achieved encouraging results on this task, inevitable distribution shifts in ultrasound images can still cause severe performance drops in real-world deployment scenarios. In this article, we propose a complete ultrasound fetal examination system that deals with this problem by repairing and screening anatomically implausible results. Our system consists of three main components: a routine segmentation network, a repair network guided by fetal anatomical key points, and a shape-coding-based selective screener. Guided by the anatomical key points, the repair network has stronger cross-domain repair capabilities, which can substantially improve the outputs of the segmentation network. By quantifying the distance between an arbitrary segmentation mask and its corresponding anatomical shape class, the proposed screener can then effectively reject implausible results that cannot be fully repaired. Extensive experiments demonstrate that the proposed framework provides strong anatomical guarantees and outperforms other methods in three different cross-domain scenarios.
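The screening step amounts to a distance test in a shape-code space. The following is a minimal sketch under the assumption that some shape encoder supplies the embeddings; the encoder, centroid, and threshold are all illustrative stand-ins, not the paper's shape-coding scheme:

```python
import numpy as np

def screen_mask(embedding: np.ndarray, class_centroid: np.ndarray, threshold: float):
    """Accept a segmentation if its shape code lies close enough to the
    centroid of its anatomical shape class; otherwise reject it."""
    distance = np.linalg.norm(embedding - class_centroid)
    return distance <= threshold, distance

# Illustrative usage with made-up shape codes
centroid = np.zeros(16)                               # centroid of a shape class
plausible = np.random.default_rng(0).normal(0, 0.1, 16)   # near-centroid code
implausible = np.random.default_rng(1).normal(2.0, 0.1, 16)  # far-from-centroid code
for code in (plausible, implausible):
    accepted, d = screen_mask(code, centroid, threshold=1.0)
    print(f"distance={d:.2f}, accepted={accepted}")
```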


Subject(s)
Fetus , Image Processing, Computer-Assisted , Ultrasonography, Prenatal , Female , Humans , Pregnancy , Biometry , Fetus/diagnostic imaging , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Ultrasonography
6.
Med Image Anal ; 87: 102805, 2023 07.
Article in English | MEDLINE | ID: mdl-37104995

ABSTRACT

Unsupervised anomaly detection (UAD) detects anomalies by learning the distribution of normal data without labels, and therefore has wide application in medical imaging, where it alleviates the burden of collecting annotated data. Current UAD methods mostly learn the normal data by reconstructing the original input, but often neglect prior information that carries semantic meaning. In this paper, we first propose a universal unsupervised anomaly detection framework, SSL-AnoVAE, which utilizes a self-supervised learning (SSL) module to provide finer-grained semantics tailored to the anomalies to be detected in retinal images. We also explore the relationship between the data transformation adopted in the SSL module and the quality of anomaly detection for retinal images. Moreover, to take full advantage of SSL-AnoVAE and apply it toward clinical use in computer-aided diagnosis of retinal diseases, we further propose to stage and segment the anomalies detected by SSL-AnoVAE in an unsupervised manner. Experimental results demonstrate the effectiveness of the proposed method for unsupervised anomaly detection, staging, and segmentation on both retinal optical coherence tomography images and color fundus photographs.
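As background, the VAE backbone of such frameworks typically scores anomalies from reconstruction and latent terms; here is a heavily simplified sketch that omits the SSL module described in the paper, with all sizes and names chosen for illustration:

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Heavily simplified VAE; SSL-AnoVAE adds a self-supervised module not shown here."""
    def __init__(self, dim: int = 784, latent: int = 16):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * latent)   # outputs mean and log-variance
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def anomaly_score(model, x):
    """Reconstruction error plus KL term; higher scores suggest anomalies."""
    with torch.no_grad():
        recon, mu, logvar = model(x)
        rec = ((recon - x) ** 2).mean(dim=-1)
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean(dim=-1)
        return rec + kl

# Toy usage on random inputs standing in for retinal image patches
model = TinyVAE()
print(anomaly_score(model, torch.rand(4, 784)))
```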


Subject(s)
Diagnosis, Computer-Assisted , Retinal Diseases , Humans , Fundus Oculi , Retinal Diseases/diagnostic imaging , Semantics , Tomography, Optical Coherence , Image Processing, Computer-Assisted
7.
Interdiscip Sci ; 15(2): 262-272, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36656448

ABSTRACT

Differentiation of ductal carcinoma in situ (DCIS, a precancerous lesion of the breast) from fibroadenoma (FA) using ultrasonography is significant for the early prevention of malignant breast tumors. Radiomics-based artificial intelligence (AI) can provide additional diagnostic information but usually requires extensive labeling by clinicians with specialized knowledge. This study investigates the feasibility of differentially diagnosing DCIS and FA using ultrasound radiomics-based AI techniques and further explores a novel approach that reduces labeling effort without sacrificing diagnostic performance. We included 461 DCIS and 651 FA patients, of whom 139 DCIS and 181 FA patients constituted a prospective test cohort. First, various feature-engineering-based machine learning (FEML) and deep learning (DL) approaches were developed. Then, we designed a difference-based self-supervised (DSS) learning approach that requires only FA samples for training. The DSS approach consists of three steps: (1) pretraining a Bootstrap Your Own Latent (BYOL) model using FA images, (2) reconstructing images using the encoder and decoder of the pretrained model, and (3) distinguishing DCIS from FA based on the differences between the original and reconstructed images. The trained FEML and DL models achieved a highest AUC of 0.7935 (95% confidence interval, 0.7900-0.7969) on the prospective test cohort, indicating that such models are effective for assisting differentiation of DCIS from FA on ultrasound images. Furthermore, the DSS model achieved an AUC of 0.8172 (95% confidence interval, 0.8124-0.8219), outperforming the conventional radiomics-based AI models.
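Step (3) of the DSS idea can be sketched as scoring each image by its original-versus-reconstruction difference; the tiny convolutions below are stand-ins for the BYOL-derived encoder and decoder, and the whole snippet is illustrative rather than the study's implementation:

```python
import torch
import torch.nn as nn

# Stand-ins for the encoder/decoder of a model pretrained on FA images only
encoder = nn.Conv2d(1, 8, 3, padding=1)
decoder = nn.Conv2d(8, 1, 3, padding=1)

def difference_score(image: torch.Tensor) -> torch.Tensor:
    """Reconstruct through the FA-pretrained model, then score by the
    original-vs-reconstruction difference; larger differences suggest the
    image deviates from the FA distribution (possible DCIS)."""
    with torch.no_grad():
        reconstruction = decoder(encoder(image))
        return (image - reconstruction).abs().mean(dim=(1, 2, 3))

# Toy batch standing in for ultrasound crops; a decision threshold on these
# scores would be chosen on validation data.
print(difference_score(torch.rand(4, 1, 64, 64)))
```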


Subject(s)
Breast Neoplasms , Carcinoma, Intraductal, Noninfiltrating , Fibroadenoma , Humans , Female , Carcinoma, Intraductal, Noninfiltrating/diagnostic imaging , Carcinoma, Intraductal, Noninfiltrating/pathology , Artificial Intelligence , Diagnosis, Differential , Fibroadenoma/diagnostic imaging , Fibroadenoma/pathology , Prospective Studies , Breast Neoplasms/diagnostic imaging , Ultrasonography
8.
Quant Imaging Med Surg ; 12(10): 4758-4770, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36185061

ABSTRACT

Background: This study set out to develop a computed tomography (CT)-based wavelet transforming radiomics approach for grading pulmonary lesions caused by COVID-19 and to validate it using real-world data.

Methods: This retrospective study analyzed 111 patients with 187 pulmonary lesions from 16 hospitals; all patients had confirmed COVID-19 and underwent non-contrast chest CT. Data were divided into a training cohort (72 patients with 127 lesions from nine hospitals) and an independent test cohort (39 patients with 60 lesions from seven hospitals) according to the hospital in which the CT was performed. In all, 73 texture features were extracted from manually delineated lesion volumes, and 23 three-dimensional (3D) wavelets with eight decomposition modes were implemented to compare and validate the value of wavelet transformation for grade assessment. Finally, the optimal machine learning pipeline, valuable radiomic features, and final radiomic models were determined. The area under the receiver operating characteristic (ROC) curve (AUC), calibration curve, and decision curve were used to determine the diagnostic performance and clinical utility of the models.

Results: Of the 187 lesions, 108 (57.75%) were diagnosed as mild lesions and 79 (42.25%) as moderate/severe lesions. All selected radiomic features showed significant correlations with the grade of COVID-19 pulmonary lesions (P<0.05). Biorthogonal 1.1 (bior1.1) LLL was determined as the optimal wavelet transform mode. The wavelet transforming radiomic model had an AUC of 0.910 in the test cohort, outperforming the original radiomic model (AUC =0.880; P<0.05). Decision analysis showed the radiomic model could add a net benefit at any given threshold of probability.

Conclusions: Wavelet transformation can enhance CT texture features. Wavelet transforming radiomics based on CT images can be used to effectively assess the grade of pulmonary lesions caused by COVID-19, which may facilitate individualized management of patients with this disease.
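To make the wavelet step concrete, here is a small sketch using PyWavelets to apply a single-level 3D bior1.1 transform and take the low-pass (LLL) subband; the toy first-order features computed on it are illustrative and not the study's 73-feature set:

```python
import numpy as np
import pywt

# Toy 3D volume standing in for a manually delineated CT lesion
volume = np.random.default_rng(0).normal(size=(32, 32, 32))

# Single-level 3D discrete wavelet transform with the bior1.1 wavelet;
# the 'aaa' key holds the low-pass (LLL) subband selected in the study.
coeffs = pywt.dwtn(volume, "bior1.1")
lll = coeffs["aaa"]

# Simple first-order texture features on the LLL subband (illustrative only)
features = {
    "mean": lll.mean(),
    "variance": lll.var(),
    "energy": np.sum(lll ** 2),
}
print(lll.shape, features)
```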

9.
Med Image Anal ; 79: 102443, 2022 07.
Article in English | MEDLINE | ID: mdl-35537340

ABSTRACT

Thyroid nodule segmentation and classification in ultrasound images are two essential but challenging tasks for computer-aided diagnosis of thyroid nodules. Since these two tasks are inherently related and share common features, solving them jointly with multi-task learning is a promising direction. However, both previous studies and our experimental results confirm the problem of inconsistent predictions among these related tasks. In this paper, we summarize two types of task inconsistency according to the relationship among the tasks: intra-task inconsistency between homogeneous tasks (e.g., two pixel-wise segmentation tasks) and inter-task inconsistency between heterogeneous tasks (e.g., a pixel-wise segmentation task and a categorical classification task). To address these problems, we propose intra- and inter-task consistent learning on top of the designed multi-stage, multi-task learning network, enforcing consistent predictions across all tasks during training. Experimental results on a large clinical thyroid ultrasound image dataset indicate that the proposed intra- and inter-task consistent learning effectively eliminates both types of task inconsistency and thus improves the performance of all tasks for thyroid nodule segmentation and classification.
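One way to picture an inter-task consistency term is to penalize disagreement between the nodule presence implied by the segmentation head and the classifier's output; the loss below is an illustrative stand-in under that assumption, not the paper's formulation:

```python
import torch
import torch.nn.functional as F

def inter_task_consistency(seg_logits: torch.Tensor, cls_logits: torch.Tensor) -> torch.Tensor:
    """Illustrative inter-task consistency: nodule presence implied by the
    segmentation map should agree with the image-level classifier."""
    # Probability that any pixel is nodule, derived from the segmentation head
    seg_presence = torch.sigmoid(seg_logits).amax(dim=(1, 2, 3))
    cls_presence = torch.sigmoid(cls_logits).squeeze(-1)
    return F.mse_loss(seg_presence, cls_presence)

seg_logits = torch.randn(2, 1, 64, 64)   # per-pixel nodule logits
cls_logits = torch.randn(2, 1)           # image-level nodule logits
print(inter_task_consistency(seg_logits, cls_logits))
```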


Subject(s)
Thyroid Nodule , Diagnosis, Computer-Assisted , Humans , Image Processing, Computer-Assisted , Thyroid Nodule/diagnostic imaging , Ultrasonography/methods
10.
IEEE Trans Neural Netw Learn Syst ; 33(9): 4466-4478, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33657001

ABSTRACT

Learning in nonstationary environments is one of the biggest challenges in machine learning. Nonstationarity can be caused either by task drift, i.e., drift in the conditional distribution of labels given the input data, or by domain drift, i.e., drift in the marginal distribution of the input data. This article tackles this challenge with a modularized two-stream continual learning (CL) system, in which the model must learn new tasks from a support stream and adapt to new domains in the query stream while maintaining previously learned knowledge. To deal with drifts both within and across the two streams, we propose a variational domain-agnostic feature replay approach that decouples the system into three modules: an inference module that filters the input data from the two streams into domain-agnostic representations, a generative module that facilitates high-level knowledge transfer, and a solver module that applies the filtered, transferable knowledge to solve the queries. We demonstrate the effectiveness of the proposed approach in addressing both the fundamental and the complex scenarios of two-stream CL.
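The three-module decomposition can be sketched with a generative feature-replay training step, in which replayed features rehearse earlier knowledge while the solver learns from new data; the linear modules and shapes below are toy stand-ins, and the paper's variational machinery is omitted:

```python
import torch
import torch.nn as nn

# Minimal stand-ins for the three modules
inference = nn.Linear(32, 16)   # input -> domain-agnostic features
generator = nn.Linear(8, 16)    # noise -> replayed features of earlier tasks
solver = nn.Linear(16, 4)       # features -> task predictions

opt = torch.optim.Adam(solver.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(new_x, new_y, old_y, replay_size=8):
    """Train the solver on new data plus generator-replayed features, so
    knowledge from earlier tasks/domains is rehearsed rather than forgotten."""
    new_feats = inference(new_x)
    replay_feats = generator(torch.randn(replay_size, 8)).detach()
    feats = torch.cat([new_feats, replay_feats])
    labels = torch.cat([new_y, old_y])
    opt.zero_grad()
    loss = loss_fn(solver(feats), labels)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: 8 new samples plus 8 replayed pseudo-labels for earlier tasks
print(training_step(torch.randn(8, 32), torch.randint(0, 4, (8,)),
                    torch.randint(0, 4, (8,))))
```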

11.
Med Image Anal ; 72: 102106, 2021 08.
Article in English | MEDLINE | ID: mdl-34153625

ABSTRACT

Synthetic medical image generation has huge potential for improving healthcare through many applications, from data augmentation for training machine learning systems to preserving patient privacy. Conditional generative adversarial networks (cGANs) use a conditioning factor to generate images and have shown great success in recent years. Intuitively, the information in an image can be divided into two parts: 1) content, which is conveyed through the conditioning vector, and 2) style, which is the undiscovered information missing from the conditioning vector. Current practices in using cGANs for medical image generation use only a single variable (i.e., content) and therefore provide little flexibility or control over the generated image. In this work we propose DRAI, a dual adversarial inference framework with augmented disentanglement constraints, to learn disentangled representations of style and content from the image itself and to use this information to control the generation process. In this framework, style is learned in a fully unsupervised manner, while content is learned through both supervised learning (using the conditioning vector) and unsupervised learning (via the inference mechanism). We apply two novel regularization steps to ensure content-style disentanglement. First, we minimize the shared information between content and style by introducing a novel application of the gradient reversal layer (GRL); second, we introduce a self-supervised regularization method to further separate the information in the content and style variables. For evaluation, we consider two types of baselines: single latent variable models that infer a single variable, and double latent variable models that infer two variables (style and content). We conduct extensive qualitative and quantitative assessments on two publicly available medical imaging datasets (LIDC and HAM10000), testing conditional image generation, image retrieval, and style-content disentanglement. We show that, in general, two-latent-variable models achieve better performance and give more control over the generated image. We also show that our proposed model (DRAI) achieves the best disentanglement score and the best overall performance.
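The first regularization relies on a gradient reversal layer, which has a standard PyTorch form; the sketch below shows a conventional GRL (how DRAI wires it between the content and style branches is not reproduced here):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer (GRL): identity on the forward pass,
    negated (scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    return GradReverse.apply(x, lambd)

# Minimal check: gradients flowing back through the GRL come out negated
x = torch.ones(3, requires_grad=True)
grad_reverse(x, lambd=1.0).sum().backward()
print(x.grad)  # tensor([-1., -1., -1.])
```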


Subject(s)
Image Processing, Computer-Assisted , Machine Learning , Humans