Nat Commun ; 13(1): 1161, 2022 03 04.
Article in English | MEDLINE | ID: mdl-35246539

ABSTRACT

Imperfections in data annotation, known as label noise, are detrimental to the training of machine learning models and have a confounding effect on the assessment of model performance. Nevertheless, employing experts to remove label noise by fully re-annotating large datasets is infeasible in resource-constrained settings, such as healthcare. This work advocates for a data-driven approach to prioritising samples for re-annotation, which we term "active label cleaning". We propose to rank instances according to the estimated label correctness and labelling difficulty of each sample, and introduce a simulation framework to evaluate relabelling efficacy. Our experiments on natural images and on a specifically devised medical imaging benchmark show that cleaning noisy labels mitigates their negative impact on model training, evaluation, and selection. Crucially, the proposed approach enables correcting labels up to 4× more effectively than typical random selection in realistic conditions, making better use of experts' valuable time for improving dataset quality.
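The ranking idea described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical scoring scheme rather than the paper's exact method: it assumes a classifier trained on the noisy labels supplies class posteriors, scores each sample by how surprising its stored label is under the model, and discounts samples whose predictive entropy suggests they are genuinely ambiguous and therefore costly to relabel. The function name `rank_for_relabelling` and both scoring terms are illustrative assumptions.

```python
import numpy as np

def rank_for_relabelling(probs: np.ndarray, given_labels: np.ndarray) -> np.ndarray:
    """Rank samples for re-annotation, most suspicious first.

    probs:         (n_samples, n_classes) predicted class posteriors from a
                   model trained on the noisy labels.
    given_labels:  (n_samples,) integer labels currently stored in the dataset.
    """
    eps = 1e-12
    # Surprise of the stored label under the model: high when the model
    # assigns low probability to that label (a likely annotation error).
    label_surprise = -np.log(probs[np.arange(len(given_labels)), given_labels] + eps)
    # Predictive entropy as a proxy for labelling difficulty: ambiguous
    # samples are deprioritised so annotator time goes to clear-cut errors.
    difficulty = -np.sum(probs * np.log(probs + eps), axis=1)
    score = label_surprise - difficulty
    return np.argsort(-score)

# Toy usage: three samples, two classes; sample 0's stored label
# disagrees strongly with the model and is ranked first.
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.55, 0.45]])
labels = np.array([1, 1, 0])
print(rank_for_relabelling(probs, labels))  # -> [0 2 1]
```

Putting likely-wrong but unambiguous samples at the top of the queue is what lets a fixed re-annotation budget recover more correct labels than random selection.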


Subjects
Diagnostic Imaging, Machine Learning, Benchmarking, Data Curation, Delivery of Health Care