Inconsistent Performance of Deep Learning Models on Mammogram Classification.
Wang, Xiaoqin; Liang, Gongbo; Zhang, Yu; Blanton, Hunter; Bessinger, Zachary; Jacobs, Nathan.
Affiliation
  • Wang X; Department of Radiology, University of Kentucky, Lexington, Kentucky; Markey Cancer Center, University of Kentucky, Lexington, Kentucky. Electronic address: Xiaoqin.wang@uky.edu.
  • Liang G; Department of Computer Science, University of Kentucky, Lexington, Kentucky.
  • Zhang Y; Department of Computer Science, University of Kentucky, Lexington, Kentucky.
  • Blanton H; Department of Computer Science, University of Kentucky, Lexington, Kentucky.
  • Bessinger Z; Department of Computer Science, University of Kentucky, Lexington, Kentucky.
  • Jacobs N; Department of Computer Science, University of Kentucky, Lexington, Kentucky.
J Am Coll Radiol; 17(6): 796-803, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32068005
ABSTRACT

OBJECTIVES:

Performance of recently developed deep learning models for image classification has been reported to surpass that of radiologists. However, questions remain about the consistency of model performance and its generalization to unseen external data. The purpose of this study is to determine whether the high performance of deep learning on mammograms can be transferred to external data with a different data distribution.

MATERIALS AND METHODS:

Six deep learning models (three published models with high performance and three models designed by us) were evaluated on four mammogram data sets: three public (Digital Database for Screening Mammography [DDSM], INbreast, and Mammographic Image Analysis Society [MIAS]) and one private (UKy). The models were trained and validated either on DDSM alone or on a combined data set that included DDSM, and were then tested on the three external data sets. The area under the receiver operating characteristic curve (auROC) was used to evaluate model performance.
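To make the evaluation protocol concrete, the following is a minimal Python sketch of cross-data-set testing with auROC. The tiny network and synthetic tensors are placeholders standing in for the paper's six models and the DDSM/INbreast/MIAS/UKy images; the paper does not publish this code.

```python
# Minimal sketch: train/validate on one data set, then score the frozen
# model on external sets with auROC. Model and data are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from sklearn.metrics import roc_auc_score

model = nn.Sequential(  # placeholder classifier, not one of the six models
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

def make_loader(n=64):  # stands in for a real mammogram data set
    x = torch.randn(n, 1, 64, 64)                 # fake single-channel images
    y = torch.randint(0, 2, (n,)).float()         # fake benign/malignant labels
    return DataLoader(TensorDataset(x, y), batch_size=16)

@torch.no_grad()
def evaluate_auroc(model, loader):
    """Score every case in one data set and return its auROC."""
    model.eval()
    labels, scores = [], []
    for images, targets in loader:
        scores.extend(torch.sigmoid(model(images)).squeeze(1).tolist())
        labels.extend(targets.tolist())
    return roc_auc_score(labels, scores)

# After training/validating on DDSM, test the frozen model on each
# external set; the paper reports the performance drop at exactly this step.
for name in ("INbreast", "MIAS", "UKy"):
    print(f"{name}: auROC = {evaluate_auroc(model, make_loader()):.2f}")
```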

RESULTS:

The three published models reported auROC scores between 0.88 and 0.95 on the validation data set. Our models achieved auROC scores between 0.71 (95% confidence interval [CI] 0.70-0.72) and 0.79 (95% CI 0.78-0.80) on the same validation data set. However, the performance of all six models dropped significantly on the three external test data sets, with auROC scores of only 0.44 (95% CI 0.43-0.45) to 0.65 (95% CI 0.64-0.66).
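The abstract quotes a 95% CI alongside each auROC but does not state how the intervals were derived; a common choice for auROC is the nonparametric bootstrap over cases, sketched below under that assumption. All inputs here are synthetic.

```python
# Percentile-bootstrap 95% CI for auROC (an assumed method; the paper
# does not specify its CI procedure). Inputs are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Resample cases with replacement and take percentile bounds."""
    rng = np.random.default_rng(seed)
    labels, scores = np.asarray(labels), np.asarray(scores)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(labels), len(labels))
        if labels[idx].min() == labels[idx].max():
            continue  # resample drew one class only; auROC is undefined
        stats.append(roc_auc_score(labels[idx], scores[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(labels, scores), lo, hi

# Synthetic example: weakly informative scores, as on the external sets.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
s = 0.3 * y + rng.normal(size=500)
auc, lo, hi = bootstrap_auroc_ci(y, s)
print(f"auROC {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```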

CONCLUSION:

Our results demonstrate performance inconsistency across data sets and models, indicating that the high performance of a deep learning model on one data set cannot be readily transferred to unseen external data sets; such models need further assessment and validation before they are applied in clinical practice.

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Breast Neoplasms / Deep Learning Study type: Diagnostic_studies / Prognostic_studies / Screening_studies Limits: Female / Humans Language: En Journal: J Am Coll Radiol Journal subject: RADIOLOGY Year: 2020 Document type: Article