Automated vs. human evaluation of corneal staining.
Kourukmas, R; Roth, M; Geerling, G.
Affiliation
  • Kourukmas R; Department of Ophthalmology, Heinrich-Heine University Düsseldorf, Moorenstr. 5, 40225 Düsseldorf, Germany. rashid.kourukmas@med.uni-duesseldorf.de.
  • Roth M; Department of Ophthalmology, Heinrich-Heine University Düsseldorf, Moorenstr. 5, 40225 Düsseldorf, Germany.
  • Geerling G; Department of Ophthalmology, Heinrich-Heine University Düsseldorf, Moorenstr. 5, 40225 Düsseldorf, Germany.
Graefes Arch Clin Exp Ophthalmol; 260(8): 2605-2612, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35357547
ABSTRACT
BACKGROUND AND PURPOSE:

Corneal fluorescein staining is one of the most important diagnostic tests in dry eye disease (DED). Nevertheless, the result of this examination depends on the grader, and so far no method for automated quantification of corneal staining is commercially available. The aim of this study was to develop a software-assisted grading algorithm and to compare it with a group of human graders with variable clinical experience in patients with DED.

METHODS:

Fifty images of eyes stained with 2 µl of 2% fluorescein, showing different severities of superficial punctate keratopathy in patients with DED, were taken under standardized conditions. An algorithm for detecting and counting punctate staining was developed in ImageJ using a training dataset of 20 randomly picked images. The test dataset of the remaining 30 images was then analyzed (1) by the ImageJ algorithm and (2) by 22 graders, all ophthalmologists with different levels of experience. All graders evaluated the images using the Oxford grading scheme for corneal staining at baseline and again after 6-8 weeks. Intrarater agreement was also assessed by adding a mirrored version of each original image to the image set during the second grading.
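
The abstract does not give implementation details beyond the algorithm being built in ImageJ, so the following is only an illustrative sketch of a comparable threshold-and-count pipeline in Python with scikit-image; the channel choice, threshold method, and minimum dot size are assumptions, not the authors' parameters.

```python
# Illustrative reimplementation of a punctate-staining counter.
# The study's algorithm was built in ImageJ; all parameters here
# (green channel, Otsu threshold, 4-px minimum size) are assumptions.
from skimage import io, measure
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def count_punctate_dots(image_path: str, min_area_px: int = 4) -> int:
    """Count bright fluorescein dots in a corneal photograph."""
    img = io.imread(image_path)
    green = img[..., 1].astype(float)        # fluorescein fluoresces green
    mask = green > threshold_otsu(green)     # global intensity threshold
    mask = remove_small_objects(mask, min_area_px)  # suppress pixel noise
    labels = measure.label(mask, connectivity=2)    # connected components
    return int(labels.max())                        # one label per dot

# Example (hypothetical file name):
# n_dots = count_punctate_dots("cornea_01.png")
```

A higher dot count would then be expected to map to a higher Oxford grade, which is the relationship the correlation in the Results quantifies.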

RESULTS:

The count of particles detected by the algorithm correlated significantly (n = 30; p < 0.01) with the estimated true Oxford grade (Sr = 0.91). Overall, human graders showed only moderate intrarater agreement (K = 0.426), while software-assisted grading was always identical (K = 1.0). Little difference in intrarater agreement was found between specialists (K = 0.436) and non-specialists (K = 0.417). The highest interrater agreement, 75.6%, was seen in the most experienced grader, a cornea specialist with 29 years of experience; the lowest, 25.6%, was seen in a resident with only 2 years of experience.
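
For context, the two statistics reported above are standard measures: a Spearman rank correlation for the count-versus-grade association and Cohen's kappa for test-retest agreement. A minimal sketch with made-up toy values (not the study's data):

```python
# Toy illustration of the reported statistics; all values are invented.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

counts = [3, 18, 55, 7, 120, 40]   # hypothetical algorithm particle counts
grades = [1, 2, 4, 1, 5, 3]        # hypothetical "true" Oxford grades (0-5)
rho, p = spearmanr(counts, grades) # rank correlation, analogous to Sr above

first  = [1, 2, 4, 1, 5, 3]        # hypothetical 1st-session Oxford grades
second = [1, 3, 4, 2, 5, 3]        # hypothetical 2nd-session Oxford grades
kappa = cohen_kappa_score(first, second)  # intrarater agreement (Cohen's K)

print(f"Spearman r = {rho:.2f} (p = {p:.3f}), kappa = {kappa:.2f}")
```

A deterministic algorithm reproduces its own output exactly, which is why its intrarater agreement is K = 1.0 by construction.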

CONCLUSION:

The variance in human grading of corneal staining, albeit small, is likely to have little impact on clinical management and thus seems acceptable. While human graders give results sufficient for clinical application, software-assisted grading of corneal staining ensures higher consistency and is therefore preferable for re-evaluating patients, e.g., in clinical trials.

Full text: 1 Databases: MEDLINE Main subject: Dry Eye Syndromes Study type: Diagnostic_studies Limit: Humans Language: En Journal: Graefes Arch Clin Exp Ophthalmol Year: 2022 Document type: Article Country of affiliation: Germany
