Results 1 - 5 of 5
1.
Med Image Anal; 92: 103028, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38070453

ABSTRACT

Manual annotation of medical images is highly subjective, leading to inevitable annotation biases. Deep learning models may surpass human performance on a variety of tasks, but they may also mimic or amplify these biases. Although we can employ multiple annotators and fuse their annotations to reduce stochastic errors, we cannot use this strategy to handle the bias caused by annotators' preferences. In this paper, we highlight the issue of annotator-related biases in medical image segmentation tasks and propose a Preference-involved Annotation Distribution Learning (PADL) framework that addresses it by modeling each annotator's preference and stochastic errors, producing not only a meta segmentation but also annotator-specific segmentations. Under this framework, a stochastic error modeling (SEM) module estimates the meta segmentation map and the average stochastic error map, and a series of human preference modeling (HPM) modules estimate each annotator's segmentation and the corresponding stochastic error. We evaluated the PADL framework on two medical image benchmarks with different imaging modalities, each annotated by multiple medical professionals, and achieved promising performance on all five medical image segmentation tasks. Code is available at https://github.com/Merrical/PADL.
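
To make the described structure concrete, the following is a minimal PyTorch sketch of the interface suggested by the abstract: a shared backbone feeds a SEM head that outputs the meta segmentation and an average error map, plus one HPM head per annotator that outputs an annotator-specific segmentation and error. All module internals, shapes, and the additive preference formulation are illustrative assumptions, not the authors' reference implementation (see the linked repository for that).

    # Minimal sketch (PyTorch); SEM/HPM internals are simplified guesses at the described interfaces.
    import torch
    import torch.nn as nn

    class SEM(nn.Module):
        """Stochastic error modeling: meta segmentation + average stochastic error map."""
        def __init__(self, feat_ch, num_classes):
            super().__init__()
            self.mean_head = nn.Conv2d(feat_ch, num_classes, 1)
            self.error_head = nn.Conv2d(feat_ch, num_classes, 1)

        def forward(self, feats):
            meta_seg = self.mean_head(feats)                # meta segmentation logits
            error_map = torch.relu(self.error_head(feats))  # non-negative error estimate
            return meta_seg, error_map

    class HPM(nn.Module):
        """Human preference modeling for one annotator."""
        def __init__(self, feat_ch, num_classes):
            super().__init__()
            self.pref_head = nn.Conv2d(feat_ch, num_classes, 1)
            self.error_head = nn.Conv2d(feat_ch, num_classes, 1)

        def forward(self, feats, meta_seg):
            # Annotator-specific segmentation = meta prediction + learned preference shift.
            annot_seg = meta_seg + self.pref_head(feats)
            annot_err = torch.relu(self.error_head(feats))
            return annot_seg, annot_err

    class PADLSketch(nn.Module):
        def __init__(self, backbone, feat_ch, num_classes, num_annotators):
            super().__init__()
            self.backbone = backbone                        # any encoder-decoder producing feature maps
            self.sem = SEM(feat_ch, num_classes)
            self.hpms = nn.ModuleList(HPM(feat_ch, num_classes) for _ in range(num_annotators))

        def forward(self, x):
            feats = self.backbone(x)                        # (B, feat_ch, H, W)
            meta_seg, meta_err = self.sem(feats)
            per_annotator = [hpm(feats, meta_seg) for hpm in self.hpms]
            return meta_seg, meta_err, per_annotator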


Subject(s)
Benchmarking; Image Processing, Computer-Assisted; Humans
2.
IEEE Trans Med Imaging; 42(1): 233-244, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36155434

ABSTRACT

The domain gap caused mainly by variable medical image quality poses a major obstacle between training a segmentation model in the lab and applying the trained model to unseen clinical data. Domain generalization methods have been proposed to address this issue, but they usually rely on static convolutions and are therefore less flexible. In this paper, we propose a multi-source domain generalization model based on domain and content adaptive convolution (DCAC) for the segmentation of medical images across different modalities. Specifically, we design a domain adaptive convolution (DAC) module and a content adaptive convolution (CAC) module and incorporate both into an encoder-decoder backbone. In the DAC module, a dynamic convolutional head is conditioned on the predicted domain code of the input so that the model can adapt to the unseen target domain. In the CAC module, a dynamic convolutional head is conditioned on global image features so that the model can adapt to each test image. We evaluated the DCAC model against a baseline and four state-of-the-art domain generalization methods on prostate segmentation, COVID-19 lesion segmentation, and optic cup/optic disc segmentation tasks. Our results not only show that the proposed DCAC model outperforms all competing methods on each segmentation task but also demonstrate the effectiveness of the DAC and CAC modules. Code is available at https://git.io/DCAC.
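
As a rough illustration of the dynamic-convolution idea underlying both the DAC and CAC modules, the sketch below generates per-sample convolution kernels from a conditioning code (a predicted domain code for DAC, a global content descriptor for CAC, per the abstract) and applies them with a grouped convolution. The class name, the single-linear-layer controller, and the 1x1 kernel default are assumptions for brevity, not the released DCAC code.

    # Illustrative PyTorch sketch of a per-sample dynamic convolution head.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DynamicConvHead(nn.Module):
        def __init__(self, code_dim, in_ch, out_ch, k=1):
            super().__init__()
            self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
            # Controller maps a conditioning code (domain code for DAC, global
            # content code for CAC) to the weights and bias of a convolution.
            self.controller = nn.Linear(code_dim, out_ch * in_ch * k * k + out_ch)

        def forward(self, feats, code):
            # feats: (B, in_ch, H, W); code: (B, code_dim)
            b = feats.size(0)
            params = self.controller(code)
            w = params[:, : self.out_ch * self.in_ch * self.k ** 2]
            bias = params[:, -self.out_ch:]
            w = w.reshape(b * self.out_ch, self.in_ch, self.k, self.k)
            # Grouped convolution applies each sample's own kernel to its own feature map.
            out = F.conv2d(feats.reshape(1, b * self.in_ch, *feats.shape[2:]),
                           w, bias.reshape(-1), padding=self.k // 2, groups=b)
            return out.reshape(b, self.out_ch, *feats.shape[2:])

In this reading, the DAC head would take a code predicted by a domain classifier over the multiple source domains, while the CAC head would take globally pooled encoder features of the test image; both produce convolution weights specific to that input.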


Subject(s)
COVID-19; Optic Disk; Male; Humans; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Prostate
3.
Med Image Anal; 82: 102605, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36156419

ABSTRACT

Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, supported with public data and state-of-the-art benchmark methods. Board-certified radiologists annotated 295 public images from two sources (A and B) for algorithm training (n=199, source A), validation (n=50, source A), and testing (n=23, source A; n=23, source B). There were 1,096 registered teams, of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models can be rapidly designed by diverse teams, with the potential to measure disease and facilitate timely, patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge - 2020.


Subject(s)
COVID-19; Pandemics; Humans; COVID-19/diagnostic imaging; Artificial Intelligence; Tomography, X-Ray Computed/methods; Lung/diagnostic imaging
4.
IEEE Trans Med Imaging; 41(7): 1874-1884, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35130152

ABSTRACT

Lung nodule malignancy prediction is an essential step in the early diagnosis of lung cancer. Beyond the difficulties commonly discussed, the challenges of this task also stem from the ambiguous labels provided by annotators, since deep learning models have in some cases been found to reproduce or amplify human biases. In this paper, we propose a multi-view 'divide-and-rule' (MV-DAR) model to learn from both reliable and ambiguous annotations for lung nodule malignancy prediction on chest CT scans. According to the consistency and reliability of their annotations, we divide nodules into three sets: a consistent and reliable set (CR-Set), an inconsistent set (IC-Set), and a low-reliability set (LR-Set). Each nodule in IC-Set is annotated inconsistently by multiple radiologists, and each nodule in LR-Set is annotated by only one radiologist. Although ambiguous, inconsistent labels indicate which label(s) are consistently excluded by all annotators, and the unreliable labels of a cohort of nodules are largely correct from a statistical point of view. Hence, both IC-Set and LR-Set can be used to facilitate the training of MV-DAR. Our MV-DAR contains three DAR models that characterize a lung nodule from three orthographic views and is trained following a two-stage procedure. Each DAR consists of three networks with the same architecture: a prediction network (Prd-Net), a counterfactual network (CF-Net), and a low-reliability network (LR-Net), which are trained on CR-Set, IC-Set, and LR-Set, respectively, in the pretraining phase. In the fine-tuning phase, the image representation ability learned by CF-Net and LR-Net is transferred to Prd-Net via a negative-attention module (NA-Module) and a consistent-attention module (CA-Module), aiming to boost the prediction ability of Prd-Net. The MV-DAR model has been evaluated on the LIDC-IDRI and LUNGx datasets. Our results indicate not only the effectiveness of MV-DAR in learning from ambiguous labels but also its superiority over existing noisy-label learning models in lung nodule malignancy prediction.
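
A small sketch of the data partition that drives the two-stage training described above: nodules are grouped by how many radiologists rated them and whether those ratings agree. The exact grouping rule used here (full agreement vs. any disagreement vs. a single rater) is a simplified assumption for illustration; the paper defines CR-Set, IC-Set, and LR-Set more precisely.

    # Illustrative Python sketch: split nodules by annotation consistency/reliability.
    def split_nodules(annotations):
        """annotations: dict mapping nodule_id -> list of malignancy labels,
        one label per radiologist who rated that nodule."""
        cr_set, ic_set, lr_set = [], [], []
        for nodule_id, labels in annotations.items():
            if len(labels) == 1:
                lr_set.append(nodule_id)    # LR-Set: a single (low-reliability) annotation
            elif len(set(labels)) == 1:
                cr_set.append(nodule_id)    # CR-Set: multiple annotators, full agreement
            else:
                ic_set.append(nodule_id)    # IC-Set: multiple annotators, disagreement
        return cr_set, ic_set, lr_set

In the pretraining phase, Prd-Net, CF-Net, and LR-Net would then be trained on cr_set, ic_set, and lr_set, respectively, before the attention-based fine-tuning of Prd-Net.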


Subject(s)
Lung Neoplasms; Solitary Pulmonary Nodule; Cohort Studies; Humans; Lung/pathology; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Reproducibility of Results; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed/methods
5.
Res Sq; 2021 Jun 04.
Article in English | MEDLINE | ID: mdl-34100010

ABSTRACT

Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, supported with public data and state-of-the-art benchmark methods. Board-certified radiologists annotated 295 public images from two sources (A and B) for algorithm training (n=199, source A), validation (n=50, source A), and testing (n=23, source A; n=23, source B). There were 1,096 registered teams, of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models can be rapidly designed by diverse teams, with the potential to measure disease and facilitate timely, patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge - 2020.
