A Comparison of Artificial Intelligence and Human Diabetic Retinal Image Interpretation in an Urban Health System.
Mokhashi, Nikita; Grachevskaya, Julia; Cheng, Lorrie; Yu, Daohai; Lu, Xiaoning; Zhang, Yi; Henderer, Jeffrey D.
Affiliation
  • Mokhashi N; Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA.
  • Grachevskaya J; Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA.
  • Cheng L; Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA.
  • Yu D; Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA.
  • Lu X; Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA.
  • Zhang Y; Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA.
  • Henderer JD; Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA.
J Diabetes Sci Technol; 16(4): 1003-1007, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33719599
ABSTRACT

INTRODUCTION:

Artificial intelligence (AI) diabetic retinopathy (DR) software has the potential to decrease time spent by clinicians on image interpretation and expand the scope of DR screening. We performed a retrospective review to compare Eyenuk's EyeArt software (Woodland Hills, CA) to Temple Ophthalmology optometry grading using the International Classification of Diabetic Retinopathy scale.

METHODS:

Two hundred and sixty consecutive diabetic patients from the Temple Faculty Practice Internal Medicine clinic underwent 2-field retinal imaging. Classifications of the images by the software and optometrist were analyzed using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and McNemar's test. Ungradable images were analyzed to identify relationships with HbA1c, age, and ethnicity. Disagreements and a sample of 20% of agreements were adjudicated by a retina specialist.

RESULTS:

On patient-level comparison, sensitivity for the software was 100% and specificity was 77.78%; PPV was 19.15% and NPV was 100%. All 38 disagreements between the software and the optometrist occurred when the optometrist classified a patient's images as non-referable while the software classified them as referable. Of these disagreements, a retina specialist agreed with the optometrist 57.9% of the time (22/38). Of the adjudicated agreements, the retina specialist agreed with both the program and the optometrist 96.7% of the time (28/29). There were significantly more ungradable photos in older patients (≥60) than in younger patients (<60) (p=0.003).
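The screening metrics above follow directly from a 2x2 confusion matrix of software output against the optometrist reference. As a minimal sketch, the counts below (TP=9, FP=38, FN=0, TN=133) are a hypothetical reconstruction chosen only because they are arithmetically consistent with the reported percentages and the 38 one-sided disagreements; they are not taken from the paper:

```python
# Hypothetical 2x2 table (software vs. optometrist reference standard),
# reconstructed to match the reported sensitivity, specificity, PPV, NPV.
tp, fp, fn, tn = 9, 38, 0, 133

sensitivity = tp / (tp + fn)  # 1.0    -> reported 100%
specificity = tn / (tn + fp)  # 0.7778 -> reported 77.78%
ppv = tp / (tp + fp)          # 0.1915 -> reported 19.15%
npv = tn / (tn + fn)          # 1.0    -> reported 100%

# McNemar's test depends only on the discordant pairs:
# b = software referable / optometrist non-referable, c = the reverse.
b, c = 38, 0
mcnemar_chi2 = (b - c) ** 2 / (b + c)  # large chi-square -> paired
                                       # classifications differ

print(round(sensitivity, 4), round(specificity, 4),
      round(ppv, 4), round(npv, 4), mcnemar_chi2)
```

Note that with zero false negatives both sensitivity and NPV are exactly 100%, which is why the abstract can conclude the software is unlikely to miss DR while still over-referring (low PPV).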

CONCLUSIONS:

The AI program showed high sensitivity with acceptable specificity for a screening algorithm. The high NPV indicates that the software is unlikely to miss DR but may refer patients unnecessarily.

Full text: 1 Database: MEDLINE Main subject: Diabetes Mellitus / Diabetic Retinopathy Study type: Diagnostic_studies / Screening_studies Limits: Aged / Humans Language: En Publication year: 2022 Document type: Article