Self-Supervised Pretext Tasks in Model Robustness & Generalizability: A Revisit from Medical Imaging Perspective.
Annu Int Conf IEEE Eng Med Biol Soc. 2022: 5074-5079, Jul 2022.
Article in English | MEDLINE | ID: mdl-36086344
Self-supervised pretext tasks have been introduced as an effective strategy for learning target tasks on small annotated data sets. However, while current research focuses on exploring novel pretext tasks for meaningful and reusable representation learning for the target task, their robustness and generalizability have remained relatively under-explored. In medical imaging in particular, it is crucial to proactively investigate performance under different perturbations for reliable deployment of clinical applications. In this work, we revisit medical imaging networks pre-trained with self-supervised learning and systematically evaluate their robustness and generalizability compared to vanilla supervised learning. Our experiments on pneumonia detection in X-rays and multi-organ segmentation in CT yield conclusive results exposing the hidden benefits of self-supervised pre-training for learning robust feature representations.
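To make the two-stage recipe described above concrete, the following is a minimal PyTorch sketch of self-supervised pretraining with a pretext task, followed by fine-tuning on a small annotated target set and a simple perturbation probe. Everything here is an illustrative assumption rather than the paper's actual setup: the rotation-prediction pretext task, the tiny SmallEncoder backbone, the rotate_batch helper, the random placeholder data, and the Gaussian-noise robustness check are all hypothetical stand-ins.

```python
# Sketch: pretext-task pretraining -> target fine-tuning -> robustness probe.
# Architecture, pretext task, data, and hyperparameters are assumptions.
import torch
import torch.nn as nn

class SmallEncoder(nn.Module):
    """Tiny CNN encoder standing in for a medical-imaging backbone (hypothetical)."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def rotate_batch(x: torch.Tensor):
    """Rotate each image by a random multiple of 90 degrees.
    The rotation index serves as the free pretext label."""
    labels = torch.randint(0, 4, (x.size(0),))
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                           for img, k in zip(x, labels)])
    return rotated, labels

encoder = SmallEncoder()
pretext_head = nn.Linear(64, 4)  # predicts one of 4 rotations
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(pretext_head.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Stage 1: self-supervised pretraining on unlabeled images.
unlabeled = torch.randn(256, 1, 64, 64)  # placeholder for unlabeled scans
for step in range(10):
    idx = torch.randint(0, unlabeled.size(0), (32,))
    x, y = rotate_batch(unlabeled[idx])
    loss = ce(pretext_head(encoder(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune the pretrained encoder on a small annotated target task.
target_head = nn.Linear(64, 2)  # e.g., pneumonia vs. normal
ft_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(target_head.parameters()), lr=1e-4)
labeled_x = torch.randn(64, 1, 64, 64)  # placeholder annotated set
labeled_y = torch.randint(0, 2, (64,))
for step in range(10):
    loss = ce(target_head(encoder(labeled_x)), labeled_y)
    ft_opt.zero_grad(); loss.backward(); ft_opt.step()

# Robustness probe: accuracy under a Gaussian-noise perturbation.
noisy = labeled_x + 0.1 * torch.randn_like(labeled_x)
with torch.no_grad():
    acc = (target_head(encoder(noisy)).argmax(1) == labeled_y).float().mean()
print(f"accuracy under noise: {acc.item():.2f}")
```

In a study like the one described, the same perturbation probe would be run on both the self-supervised-pretrained model and a vanilla supervised baseline, and the gap in degraded-input accuracy would quantify the robustness benefit of pretraining.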
Full text: 1 | Collection: 01-international | Database: MEDLINE | Main subject: Diagnostic Imaging | Study type: Diagnostic_studies | Language: En | Journal: Annu Int Conf IEEE Eng Med Biol Soc | Year: 2022 | Document type: Article