Semantically Redundant Training Data Removal and Deep Model Classification Performance: A Study with Chest X-rays.
Rajaraman, Sivaramakrishnan; Zamzmi, Ghada; Yang, Feng; Liang, Zhaohui; Xue, Zhiyun; Antani, Sameer.
Affiliation
  • Rajaraman S; National Library of Medicine, National Institutes of Health, Bethesda, MD, United States.
  • Zamzmi G; National Library of Medicine, National Institutes of Health, Bethesda, MD, United States.
  • Yang F; National Library of Medicine, National Institutes of Health, Bethesda, MD, United States.
  • Liang Z; National Library of Medicine, National Institutes of Health, Bethesda, MD, United States.
  • Xue Z; National Library of Medicine, National Institutes of Health, Bethesda, MD, United States.
  • Antani S; National Library of Medicine, National Institutes of Health, Bethesda, MD, United States.
ArXiv ; 2023 Sep 18.
Article in En | MEDLINE | ID: mdl-37986725
ABSTRACT
Deep learning (DL) has demonstrated its innate capacity to independently learn hierarchical features from complex and multi-dimensional data. A common understanding is that its performance scales up with the amount of training data. Another data attribute is the inherent variety. It follows, therefore, that semantic redundancy, which is the presence of similar or repetitive information, would tend to lower performance and limit generalizability to unseen data. In medical imaging data, semantic redundancy can occur due to the presence of multiple images that have highly similar presentations for the disease of interest. Further, the common use of augmentation methods to generate variety in DL training may be limiting performance when applied to semantically redundant data. We propose an entropy-based sample scoring approach to identify and remove semantically redundant training data. We demonstrate using the publicly available NIH chest X-ray dataset that the model trained on the resulting informative subset of training data significantly outperforms the model trained on the full training set, during both internal (recall 0.7164 vs 0.6597, p<0.05) and external testing (recall 0.3185 vs 0.2589, p<0.05). Our findings emphasize the importance of information-oriented training sample selection as opposed to the conventional practice of using all available training data.
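The abstract does not detail how the entropy-based sample scoring is computed; as a rough illustration only, one common variant scores each training sample by the Shannon entropy of a model's predicted class distribution and keeps the highest-entropy (most informative, least redundant) fraction. The function names and the `keep_fraction` parameter below are assumptions for the sketch, not the authors' method.

```python
import numpy as np

def entropy_scores(probs, eps=1e-12):
    """Shannon entropy of each sample's predicted class distribution.

    probs: (n_samples, n_classes) array of softmax outputs.
    Low entropy suggests a confidently modeled, likely redundant sample.
    """
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=1)

def informative_subset(probs, keep_fraction=0.7):
    """Indices of the highest-entropy fraction of training samples."""
    scores = entropy_scores(probs)
    k = max(1, int(len(scores) * keep_fraction))
    # Sort descending by entropy and keep the top-k most informative samples.
    return np.argsort(scores)[::-1][:k]
```

A model would then be retrained only on the returned subset, mirroring the paper's comparison of an informative subset against the full training set.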
Full text: 1 Database: MEDLINE Language: En Year of publication: 2023 Document type: Article