Deep Learning Model for Accurate Automatic Determination of Phakic Status in Pediatric and Adult Ultrasound Biomicroscopy Images.
Le, Christopher; Baroni, Mariana; Vinnett, Alfred; Levin, Moran R; Martinez, Camilo; Jaafar, Mohamad; Madigan, William P; Alexander, Janet L.
Affiliation
  • Le C; Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD, USA.
  • Baroni M; Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD, USA.
  • Vinnett A; Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD, USA.
  • Levin MR; Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD, USA.
  • Martinez C; Department of Ophthalmology, Children's National Medical System, Washington, DC, USA.
  • Jaafar M; Department of Ophthalmology, Children's National Medical System, Washington, DC, USA.
  • Madigan WP; Department of Ophthalmology, Children's National Medical System, Washington, DC, USA.
  • Alexander JL; Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD, USA.
Transl Vis Sci Technol; 9(2): 63, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33409005
Purpose: Ultrasound biomicroscopy (UBM) is a noninvasive method for assessing anterior segment anatomy. Previous studies were prone to intergrader variability, lacked assessment of the lens-iris diaphragm, and excluded pediatric subjects. Lens status classification is an objective task applicable in pediatric and adult populations. We developed and validated a neural network to classify lens status from UBM images.

Methods: Two hundred eighty-five UBM images were collected in the Pediatric Anterior Segment Imaging Innovation Study (PASIIS) from 80 eyes of 51 pediatric and adult subjects (median age = 4.6 years, range = 3 weeks to 90 years) with lens status phakic, aphakic, or pseudophakic (n = 33, 7, and 21 subjects, respectively). Using transfer learning, a pretrained DenseNet-121 model was fine-tuned on these images. Metrics were calculated for testing dataset results aggregated from fivefold cross-validation. For each fold, 20% of total subjects were partitioned for testing, and the remaining subjects were used for training and validation (80:20 split).

Results: Our neural network, trained across 60 epochs, achieved recall 96.15%, precision 96.14%, F1-score 96.14%, false positive rate 3.74%, and area under the curve (AUC) 0.992. Feature saliency heatmaps consistently involved the lens. Algorithm performance was compared using two image sets, one from subjects of all ages and the second from only subjects under age 10 years, with similar performance under both circumstances.

Conclusions: A neural network trained on a relatively small UBM image set classified lens status with satisfactory recall and precision. Adult and pediatric image sets offered roughly equivalent performance. Future studies will explore automated UBM image classification for complex anterior segment pathology.

Translational Relevance: Deep learning models can evaluate lens status from UBM images in adult and pediatric subjects using a limited image set.
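A key detail of the study's cross-validation protocol is that folds were partitioned by subject rather than by image, so that all images from one subject fall entirely inside either the testing set or the training/validation set for a given fold. A minimal sketch of subject-level fold assignment is shown below; the function name and subject identifiers are hypothetical and not taken from the study's code.

```python
import random

def subject_level_folds(subject_ids, n_folds=5, seed=0):
    """Partition subjects (not images) into n_folds disjoint groups,
    so images from one subject never straddle the train/test boundary."""
    subjects = sorted(set(subject_ids))
    rng = random.Random(seed)
    rng.shuffle(subjects)
    # Deal shuffled subjects round-robin into folds.
    return [subjects[i::n_folds] for i in range(n_folds)]

# Hypothetical example: 10 subjects, each of whom may contribute
# several UBM images.
subjects = [f"subj{i}" for i in range(10)]
folds = subject_level_folds(subjects, n_folds=5)

for test_subjects in folds:
    train_val = [s for s in subjects if s not in test_subjects]
    # Within train_val, a further 80:20 train/validation split would be
    # drawn, mirroring the protocol described in the abstract.
    assert not set(test_subjects) & set(train_val)  # no subject leakage
```

Splitting at the subject level (rather than randomly over images) avoids inflating test metrics through near-duplicate images of the same eye appearing in both training and testing sets.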

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Deep Learning / Lens, Crystalline Study type: Prognostic studies Limits: Adult / Child / Humans / Newborn Language: En Publication year: 2020 Document type: Article
