Results 1 - 8 of 8
1.
Med Biol Eng Comput ; 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39031327

ABSTRACT

Data-driven medical image segmentation networks require expert annotations, which are hard to obtain. Non-expert annotations are often used instead, but these can be inaccurate (referred to as "noisy labels"), misleading the network's training and causing a decline in segmentation performance. In this study, we focus on improving the segmentation performance of neural networks when trained with noisy annotations. Specifically, we propose a two-stage framework named "G-T correcting," consisting of a "G" stage for recognizing noisy labels and a "T" stage for correcting them. In the "G" stage, a positive feedback method is proposed to automatically recognize noisy samples, using a Gaussian mixture model to classify clean and noisy labels from the per-sample loss histogram. In the "T" stage, a confident correcting strategy and an early learning strategy are adopted to allow the segmentation network to receive productive guidance from noisy labels. Experiments on simulated and real-world noisy labels show that this method can achieve over 90% accuracy in recognizing noisy labels and improve the network's Dice coefficient to 91%. The results demonstrate that the proposed method can enhance the segmentation performance of the network when trained with noisy labels, indicating good clinical application prospects.
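The clean/noisy split in the "G" stage can be pictured with a minimal sketch (not the authors' code; function names and initialization are illustrative): a two-component 1-D Gaussian mixture fitted to per-sample losses by EM, with the low-mean component read as "clean".

```python
import math

def fit_two_gaussians(losses, iters=50):
    """Fit a 2-component 1-D Gaussian mixture to per-sample losses with EM.
    The low-mean component is taken as 'clean', the high-mean one as 'noisy'."""
    mu = [min(losses), max(losses)]          # spread the initial means apart
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each loss value
        resp = []
        for x in losses:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = (p[0] + p[1]) or 1e-12
            resp.append((p[0] / s, p[1] / s))
        # M-step: update mixture weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp) or 1e-12
            pi[k] = nk / len(losses)
            mu[k] = sum(r[k] * x for r, x in zip(resp, losses)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, losses)) / nk, 1e-6)
    return mu, var, pi

def clean_probability(x, mu, var, pi):
    """Posterior probability that a sample with loss x is in the clean component."""
    c = 0 if mu[0] < mu[1] else 1
    p = [pi[k] / math.sqrt(2 * math.pi * var[k])
         * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
    return p[c] / ((p[0] + p[1]) or 1e-12)
```

Samples whose posterior clean probability falls below a cutoff would then be routed to the correction stage.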

2.
Artif Intell Med ; 152: 102872, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701636

ABSTRACT

Accurately measuring the evolution of Multiple Sclerosis (MS) with magnetic resonance imaging (MRI) critically informs understanding of disease progression and helps to direct therapeutic strategy. Deep learning models have shown promise for automatically segmenting MS lesions, but the scarcity of accurately annotated data hinders progress in this area. Obtaining sufficient data from a single clinical site is challenging and does not address the heterogeneous need for model robustness. Conversely, the collection of data from multiple sites introduces data privacy concerns and potential label noise due to varying annotation standards. To address this dilemma, we explore the use of the federated learning framework while considering label noise. Our approach enables collaboration among multiple clinical sites without compromising data privacy under a federated learning paradigm that incorporates a noise-robust training strategy based on label correction. Specifically, we introduce a Decoupled Hard Label Correction (DHLC) strategy that considers the imbalanced distribution and fuzzy boundaries of MS lesions, enabling the correction of false annotations based on prediction confidence. We also introduce a Centrally Enhanced Label Correction (CELC) strategy, which leverages the aggregated central model as a correction teacher for all sites, enhancing the reliability of the correction process. Extensive experiments conducted on two multi-site datasets demonstrate the effectiveness and robustness of our proposed methods, indicating their potential for clinical applications in multi-site collaborations to train better deep learning models with lower cost in data collection and annotation.


Subject(s)
Deep Learning; Magnetic Resonance Imaging; Multiple Sclerosis; Multiple Sclerosis/diagnostic imaging; Humans; Magnetic Resonance Imaging/methods; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
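The core correction rule behind DHLC can be sketched independently of the federated machinery (a hypothetical stand-in, not the paper's implementation; threshold values are illustrative): a pixel's label is flipped only when the model contradicts it with high confidence, and the foreground and background classes get decoupled thresholds to account for class imbalance.

```python
def hard_label_correction(probs_fg, labels, thresh_fg=0.9, thresh_bg=0.9):
    """Confidence-gated hard label correction for binary segmentation.
    probs_fg: model's foreground probability per pixel; labels: 0/1 annotations.
    Decoupled thresholds let the rare foreground class use a different
    cutoff than the dominant background class."""
    corrected = []
    for p, y in zip(probs_fg, labels):
        if y == 0 and p >= thresh_fg:
            corrected.append(1)       # confident foreground, annotated background
        elif y == 1 and (1 - p) >= thresh_bg:
            corrected.append(0)       # confident background, annotated foreground
        else:
            corrected.append(y)       # keep the annotation
    return corrected
```

In the CELC variant described above, the aggregated central model would supply `probs_fg` as a correction teacher for every site.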
3.
Neural Netw ; 176: 106383, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38781758

ABSTRACT

Label noises, categorized into closed-set noise and open-set noise, are prevalent in real-world scenarios and can seriously hinder the generalization ability of models. Identifying noise is challenging because noisy samples closely resemble true positives. Existing approaches often assume a single noise source, oversimplify closed-set noise, or treat open-set noise as toxic and eliminate it, resulting in limited practical effects. To address these issues, we present a novel approach named uncertainty-guided label correction with wavelet-transformed discriminative representation enhancement (Ultra), designed to mitigate the effects of mixed noise. Specifically, our approach considers a more practical noise setting. To achieve robust mixed-noise identification, we initially look into a learnable wavelet filter for obtaining discriminative features and filtering spurious cues automatically at the representation level. Subsequently, we introduce a two-fold uncertainty estimation to stably locate noise within the corrupted supervised signal level. These insights pave the way for a simple yet potent label correction technique, enabling comprehensive utilization of open-set noise, which can be rendered non-toxic in a specific manner, in contrast to harmful closed-set noise. Experimental validation on datasets with synthetic mixed noise, web noise corruption, and a real-world dataset confirms the effectiveness and generality of Ultra. Furthermore, our approach enhances the application of efficient techniques (e.g., supervised contrastive learning) within label noise scenarios.


Subject(s)
Wavelet Analysis; Uncertainty; Algorithms; Humans
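The wavelet-based filtering idea can be illustrated with a fixed (non-learnable) stand-in: a one-level Haar transform of a feature vector, with small high-frequency coefficients zeroed before reconstruction. This is only a sketch of filtering spurious cues at the representation level; Ultra's filter is learnable and the cutoff here is a made-up parameter.

```python
def haar_step(x):
    """One level of the Haar transform: low-pass averages and high-pass
    differences of adjacent feature pairs (a trailing odd element is dropped)."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return approx, detail

def denoise_features(x, cut=0.0):
    """Zero out high-frequency coefficients below `cut` and invert the
    transform, keeping the smoother low-frequency structure of the features."""
    s = 2 ** -0.5
    approx, detail = haar_step(x)
    detail = [d if abs(d) >= cut else 0.0 for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([s * (a + d), s * (a - d)])   # inverse Haar step
    return out
```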
4.
Neural Netw ; 172: 106137, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38309136

ABSTRACT

Learning with Noisy Labels (LNL) methods, which aim to improve the performance of Deep Neural Networks (DNNs) when the training dataset contains incorrectly annotated labels, have been widely studied in recent years. Popular existing LNL methods rely on semantic features extracted by the DNN to detect and mitigate label noise. However, these extracted features are often spurious and contain unstable correlations with the label across different environments (domains), which can occasionally lead to incorrect predictions and compromise the efficacy of LNL methods. To mitigate this shortcoming, we propose Invariant Feature based Label Correction (IFLC), which reduces spurious features and accurately utilizes the learned invariant features, which contain stable correlations, to correct label noise. To the best of our knowledge, this is the first attempt to mitigate the issue of spurious features for LNL methods. IFLC consists of two critical processes: the Label Disturbing (LD) process and the Representation Decorrelation (RD) process. The LD process encourages the DNN to attain stable performance across different environments, thus reducing the captured spurious features. The RD process strengthens independence between the dimensions of the representation vector, thus enabling accurate utilization of the learned invariant features for label correction. We then apply robust linear regression to the feature representation to conduct label correction. We evaluated the effectiveness of our proposed method and compared it with state-of-the-art (SOTA) LNL methods on four benchmark datasets: CIFAR-10, CIFAR-100, Animal-10N, and Clothing1M. The experimental results show that our proposed method achieved comparable or even better performance than the existing SOTA methods. The source code is available at https://github.com/yangbo1973/IFLC.


Subject(s)
Benchmarking; Learning; Animals; Knowledge; Linear Models
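The robust-regression step can be sketched in one dimension (a toy stand-in, not IFLC itself; the feature here is a scalar and all parameter values are illustrative): a Huber-loss fit via iteratively reweighted least squares downweights mislabeled points, and a label is flipped only when the robust fit confidently disagrees with the annotation.

```python
def huber_irls(xs, ys, delta=1.0, iters=20):
    """Robust 1-D linear regression (Huber loss via iteratively reweighted
    least squares): points with large residuals get weight delta/|r| < 1."""
    w, b = 0.0, 0.0
    for _ in range(iters):
        r = [y - (w * x + b) for x, y in zip(xs, ys)]
        wt = [1.0 if abs(ri) <= delta else delta / abs(ri) for ri in r]
        sw = sum(wt)
        sx = sum(t * x for t, x in zip(wt, xs))
        sy = sum(t * y for t, y in zip(wt, ys))
        sxx = sum(t * x * x for t, x in zip(wt, xs))
        sxy = sum(t * x * y for t, x, y in zip(wt, xs, ys))
        denom = (sw * sxx - sx * sx) or 1e-12
        w = (sw * sxy - sx * sy) / denom       # weighted least-squares slope
        b = (sy - w * sx) / sw                 # weighted least-squares intercept
    return w, b

def correct_labels(xs, ys, margin=0.3, delta=1.0):
    """Relabel a binary sample only when the robust fit confidently disagrees."""
    w, b = huber_irls(xs, ys, delta=delta)
    out = []
    for x, y in zip(xs, ys):
        pred = w * x + b
        if y == 0 and pred > 0.5 + margin:
            out.append(1)
        elif y == 1 and pred < 0.5 - margin:
            out.append(0)
        else:
            out.append(y)
    return out
```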
5.
Int J Comput Assist Radiol Surg ; 18(4): 675-683, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36437387

ABSTRACT

PURPOSE: Deep neural networks (DNNs) have made great achievements in computer-aided diagnostic systems, but their success depends heavily on massive data with high-quality labels. However, in many medical image datasets, a considerable number of noisy labels are introduced by inter- and intra-observer variability, hampering DNNs' performance. To address this problem, a robust noisy label correction method built on the co-teaching learning paradigm is proposed. METHODS: The proposed method reduces the effect of noisy labels by correcting or removing them, and consists of two modules. An adaptive noise rate estimation module calculates the dataset's noise rate, which is helpful for detecting noisy labels but is usually unavailable in clinical applications. A consistency-based noisy label correction module detects noisy labels and corrects them, reducing their disturbance while exploiting the useful information in the data. RESULTS: Experiments are conducted on the public skin lesion datasets ISIC-2017 and ISIC-2019, and on our constructed thyroid ultrasound image dataset. The results demonstrate that the proposed method outperforms other noisy label learning methods on medical image classification tasks. It is also evaluated on the natural image dataset CIFAR-10 to show its generalization. CONCLUSION: This paper proposes a noisy label correction method to handle noisy labels in medical image datasets. Experimental results show that it can self-adapt to different datasets and efficiently correct noisy labels, making it suitable for medical image classification.


Subject(s)
Computer Systems; Learning; Humans
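The two modules can be caricatured in a few lines (a crude stand-in; the paper's adaptive estimator is not specified here, and the two-center split below is only one plausible heuristic): estimate the noise rate from the bimodal loss distribution, then apply the co-teaching-style small-loss criterion to keep the presumed-clean fraction.

```python
def estimate_noise_rate(losses):
    """Crude noise-rate estimate: split losses at the midpoint between two
    k-means-style cluster centers and report the high-loss fraction."""
    mid = (min(losses) + max(losses)) / 2
    for _ in range(10):  # refine the two centers, then the split point
        low = [x for x in losses if x <= mid] or [min(losses)]
        high = [x for x in losses if x > mid] or [max(losses)]
        mid = (sum(low) / len(low) + sum(high) / len(high)) / 2
    return sum(1 for x in losses if x > mid) / len(losses)

def small_loss_selection(losses, noise_rate):
    """Co-teaching-style selection: keep the (1 - noise_rate) fraction of
    samples with the smallest loss as the presumed-clean subset."""
    k = max(1, round((1 - noise_rate) * len(losses)))
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(order[:k])
```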
6.
Comput Biol Med ; 151(Pt B): 106326, 2022 12.
Article in English | MEDLINE | ID: mdl-36442274

ABSTRACT

Accurate segmentation of subcortical structures is an important task in quantitative brain image analysis. Convolutional neural networks (CNNs) have achieved remarkable results in medical image segmentation. However, because high-quality annotations of brain subcortical structures are difficult to acquire, learning segmentation networks from noisy annotations is often unavoidable. A common practice is to select images or pixels with reliable annotations for training, which may not make full use of the information in the training samples and thus limits the performance of the learned segmentation model. To address this problem, we propose a novel robust learning method, uncertainty-reliability awareness learning (URAL), which makes sufficient use of all training pixels. At each training iteration, the proposed method first selects training pixels with reliable annotations from the set of pixels with uncertain network predictions, by utilizing a small clean validation set following a meta-learning paradigm. Meanwhile, we propose the online prototypical soft label correction (PSLC) method to estimate pseudo-labels for label-unreliable pixels. The segmentation loss of label-reliable pixels and the semi-supervised segmentation loss of label-unreliable pixels are then used to calibrate the total segmentation loss. Finally, we propose a category-wise contrastive regularization to learn compact feature representations of all uncertain training pixels. Comprehensive experiments are performed on two publicly available brain MRI datasets. The proposed method achieves the best Dice scores and MHD values on both datasets compared to several recent state-of-the-art methods under all label noise settings. Our code is available at https://github.com/neulxlx/URAL.


Subject(s)
Brain; Learning; Uncertainty; Reproducibility of Results; Brain/diagnostic imaging; Image Processing, Computer-Assisted
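The prototypical soft-label idea can be sketched as follows (a minimal stand-in, not the PSLC implementation; distance metric and temperature are illustrative choices): build each class's prototype as the mean feature of its reliable pixels, then pseudo-label an unreliable pixel with a softmax over negative distances to the prototypes.

```python
import math

def prototypes(features, labels):
    """Class prototypes: per-class mean of the feature vectors whose labels
    are considered reliable."""
    groups = {}
    for f, y in zip(features, labels):
        groups.setdefault(y, []).append(f)
    return {y: [sum(dim) / len(dim) for dim in zip(*fs)]
            for y, fs in groups.items()}

def soft_label(feature, protos, temp=1.0):
    """Soft pseudo-label: softmax over negative Euclidean distances to each
    class prototype (closer prototype -> higher class probability)."""
    keys = sorted(protos)
    dists = [sum((a - b) ** 2 for a, b in zip(feature, protos[k])) ** 0.5
             for k in keys]
    exps = [math.exp(-d / temp) for d in dists]
    total = sum(exps)
    return {k: e / total for k, e in zip(keys, exps)}
```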
7.
Front Neuroinform ; 16: 895290, 2022.
Article in English | MEDLINE | ID: mdl-35645753

ABSTRACT

Accurate labeling is essential for supervised deep learning methods. However, it is almost impossible to accurately and manually annotate thousands of images, so most datasets contain many labeling errors. We propose a local label point correction (LLPC) method to improve annotation quality for edge detection and image segmentation tasks. Our algorithm contains three steps: gradient-guided point correction, point interpolation, and local point smoothing. We correct the labels of object contours by moving the annotated points to pixel gradient peaks. This improves edge localization accuracy, but it can also produce unsmooth contours due to interference from image noise, so we design a point smoothing method based on local linear fitting to smooth the corrected edges. To verify the effectiveness of LLPC, we construct the largest overlapping cervical cell edge detection dataset (CCEDD), with higher-precision labels produced by our correction method. LLPC requires only three parameters to be set, yet yields a 30-40% average precision improvement across multiple networks. The qualitative and quantitative experimental results show that LLPC can improve both the quality of manual labels and the accuracy of overlapping cell edge detection. We hope that our study will give a strong boost to the development of label correction for edge detection and image segmentation. We will release the dataset and code at: https://github.com/nachifur/LLPC.
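The first step, gradient-guided point correction, can be sketched like this (a simplified stand-in for the paper's method; it ignores the interpolation and smoothing steps, and the window radius is an illustrative parameter): snap each annotated point to the pixel with the largest gradient magnitude in a small window around it.

```python
def gradient_magnitude(img, x, y):
    """Central-difference gradient magnitude at pixel (x, y); img is a list
    of rows, indexed img[y][x], with clamped borders."""
    h, w = len(img), len(img[0])
    gx = (img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2
    gy = (img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2
    return (gx * gx + gy * gy) ** 0.5

def correct_point(img, x, y, radius=2):
    """Move an annotated point to the gradient peak inside a
    (2*radius+1)^2 search window (ties keep the first candidate found)."""
    h, w = len(img), len(img[0])
    best = (gradient_magnitude(img, x, y), x, y)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                g = gradient_magnitude(img, nx, ny)
                if g > best[0]:
                    best = (g, nx, ny)
    return best[1], best[2]
```

On an image with a vertical step edge, a point annotated a couple of pixels off the edge snaps horizontally onto it.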

8.
Med Image Anal ; 74: 102214, 2021 12.
Article in English | MEDLINE | ID: mdl-34464837

ABSTRACT

Medical image segmentation has made excellent progress with large-scale datasets, which empower us to train potent deep convolutional neural networks (DCNNs). However, labeling such large-scale datasets is laborious and error-prone, making noisy (or incorrect) labels a ubiquitous problem in real-world scenarios. In addition, data collected from different sites usually exhibit significant distribution shift (or domain shift). As a result, noisy labels and domain shift are two common problems in medical imaging applications, especially in medical image segmentation, and both significantly degrade the performance of deep learning models. In this paper, we identify a novel problem hidden in medical image segmentation, unsupervised domain adaptation on noisy labeled data, and propose a novel algorithm named "Self-Cleansing Unsupervised Domain Adaptation" (S-CUDA) to address it. S-CUDA sets up a realistic scenario that solves both problems simultaneously, in which the training data (i.e., source domain) not only shows domain shift w.r.t. the unsupervised test data (i.e., target domain) but also contains noisy labels. The key idea of S-CUDA is to learn noise-excluding and domain-invariant knowledge from noisy supervised data, which is then applied to the highly corrupted data for label cleansing and data recycling, and to the test data with domain shift for supervised propagation. To this end, we propose a novel framework leveraging noisy-label learning and domain adaptation techniques to cleanse noisy labels and learn from trustable clean samples, thus enabling robust adaptation and prediction on the target domain. Specifically, we train two peer adversarial networks to identify high-confidence clean data and exchange them with each other, eliminating the error-accumulation problem and narrowing the domain gap simultaneously.
In the meantime, high-confidence noisy data are detected and cleansed so that the contaminated training data can be reused. Our proposed method can therefore not only cleanse the noisy labels in the training set but also take full advantage of the existing noisy data when updating the network parameters. For evaluation, we conduct experiments on two popular datasets (REFUGE and Drishti-GS) for optic disc (OD) and optic cup (OC) segmentation, and on a public multi-vendor dataset for spinal cord gray matter (SCGM) segmentation. Experimental results show that our proposed method can cleanse noisy labels efficiently and obtain a model with better generalization performance, outperforming previous state-of-the-art methods by a large margin. Our code can be found at https://github.com/zzdxjtu/S-cuda.


Subject(s)
Image Processing, Computer-Assisted; Optic Disk; Algorithms; Humans
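The peer-exchange step can be sketched in isolation (a co-teaching-style toy, not S-CUDA's adversarial training; the keep ratio is an illustrative parameter): each network selects its small-loss, presumed-clean samples, and the two networks swap selections so that neither trains on its own confident mistakes.

```python
def exchange_small_loss(losses_a, losses_b, keep_ratio=0.5):
    """Peer-network exchange: each network's small-loss sample indices are
    handed to the other network for the parameter update, which breaks the
    self-confirmation of each network's own errors."""
    n = len(losses_a)
    k = max(1, int(keep_ratio * n))

    def small_loss_indices(losses):
        # indices of the k smallest losses, returned in ascending index order
        return sorted(sorted(range(n), key=lambda i: losses[i])[:k])

    batch_for_a = small_loss_indices(losses_b)  # A trains on B's clean picks
    batch_for_b = small_loss_indices(losses_a)  # B trains on A's clean picks
    return batch_for_a, batch_for_b
```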