Human selection bias drives the linear nature of the more ground truth effect in explainable deep learning optical coherence tomography image segmentation.
J Biophotonics; 17(2): e202300274, 2024 Feb.
Article
En | MEDLINE | ID: mdl-37795556
ABSTRACT
Supervised deep learning (DL) algorithms depend heavily on training data annotated by assigned human graders, for example for optical coherence tomography (OCT) image annotation. Despite the tremendous success of DL, these ground-truth labels can be inaccurate and/or ambiguous because they rest on human judgment, introducing a human selection bias. We therefore investigated the impact of ground-truth dataset size and of varying numbers of graders on the predictive performance of the same DL architecture, repeating each experiment three times. The largest training dataset delivered prediction performance close to that of human experts. All DL systems used were highly consistent. Nevertheless, the under-performing DL systems could not improve any further on their own, even after repeated training. Furthermore, we detected a quantifiable linear relationship between ground-truth ambiguity and the beneficial effect of a larger amount of ground-truth data, which we term the more-ground-truth effect.
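The more-ground-truth effect described above amounts to a linear trend: the gain from enlarging the ground-truth set grows with label ambiguity. A minimal sketch of how such a trend could be quantified with an ordinary least-squares fit is shown below; all numbers are invented for illustration and are not the study's data, and the variable names (`ambiguity`, `benefit`) are assumptions of this sketch.

```python
import numpy as np

# Hypothetical illustration of the reported "more-ground-truth effect":
# the benefit of a larger ground-truth dataset is modeled as a linear
# function of label ambiguity. All values below are invented for
# demonstration purposes only.
ambiguity = np.array([0.1, 0.2, 0.3, 0.4, 0.5])      # grader-disagreement score
benefit = np.array([0.01, 0.03, 0.05, 0.06, 0.08])   # performance gain from more data

# Ordinary least-squares fit: benefit ~= slope * ambiguity + intercept
slope, intercept = np.polyfit(ambiguity, benefit, 1)

# Pearson correlation quantifies how linear the relationship is
r = np.corrcoef(ambiguity, benefit)[0, 1]

print(f"slope={slope:.3f}, intercept={intercept:.4f}, r={r:.3f}")
```

A correlation coefficient close to 1 on real grader data would support the linear characterization reported in the abstract; the fitted slope would then quantify how much extra ground truth helps per unit of ambiguity.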
Keywords:
Full text: 1
Collection: 01-international
Database: MEDLINE
Main subject: Deep Learning
Study type: Prognostic_studies
Limits: Humans
Language: En
Journal: J Biophotonics
Journal subject: BIOPHYSICS
Year: 2024
Document type: Article
Affiliation country: Switzerland