1.
Curr Opin Ophthalmol; 35(2): 104-110, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38018807

ABSTRACT

PURPOSE OF REVIEW: To address the current role of artificial intelligence (AI) in the field of glaucoma. RECENT FINDINGS: Current deep learning (DL) models concerning glaucoma diagnosis have shown consistently improving diagnostic capabilities, primarily based on color fundus photography and optical coherence tomography, but also with multimodal strategies. Recent models have also suggested that AI may be helpful in detecting and estimating visual field progression from different input data. Moreover, with the emergence of newer DL architectures and synthetic data, challenges such as model generalizability and explainability have begun to be tackled. SUMMARY: While some challenges remain before AI is routinely employed in clinical practice, new research has expanded the range in which it can be used in the context of glaucoma management and underlined the relevance of this research avenue.


Subjects
Deep Learning, Glaucoma, Humans, Artificial Intelligence, Glaucoma/diagnosis, Ophthalmological Diagnostic Techniques, Visual Fields
2.
Acta Ophthalmol; 102(2): 216-227, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37753831

ABSTRACT

PURPOSE: As the first step in monitoring and evaluating day-to-day glaucoma care, this study reports all real-world data recorded during the first full year after the implementation of a prototype glaucoma-specific structured electronic health record (EHR). METHODS: In 2019, 4618 patients visited Tays Medical Glaucoma Clinic at Tays Eye Centre, Tampere University Hospital, Finland, which serves a population of 0.53 million. Patient data were entered into a glaucoma-specific EHR by trained nurses and checked by glaucoma specialists. Tays Eye Centre follows the Finnish Current Care Guideline for glaucoma, in which glaucoma is defined using a '2 out of 3' rule, that is, ≥2 findings evaluated as glaucomatous among optic nerve head (ONH), retinal nerve fibre layer (RNFL) and visual field (VF). RESULTS: The clinical evaluations of ONH, RNFL and VF were recorded in 95%-100% of all eyes. The ONH was evaluated as glaucomatous more often (44%) than the RNFL (33%) and VF tests (30%). Progressive changes in any of the three tests were recorded in 35% of the '≥2/3 glaucoma group', compared to 2%-9% in the other groups. The mean IOP at visit was 15 mmHg. The mean target IOP was 17 mmHg, and it was recorded in 94% of eyes. CONCLUSION: The developed structured data presentation enables comparisons between different population-based real-world glaucoma datasets and glaucoma clinics. Compared to a dataset from the UK, the proportion of glaucoma suspicion-related visits was smaller at Tays Eye Centre and test intervals were longer.
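The '2 out of 3' rule described in this abstract is simple enough to state in code; a minimal sketch (the function name and boolean encoding are ours, not the study's):

```python
def meets_two_of_three(onh_glaucomatous: bool,
                       rnfl_glaucomatous: bool,
                       vf_glaucomatous: bool) -> bool:
    """Finnish Current Care Guideline definition as summarized in the
    abstract: glaucoma when >=2 of the ONH, RNFL and VF findings are
    evaluated as glaucomatous."""
    findings = [onh_glaucomatous, rnfl_glaucomatous, vf_glaucomatous]
    return sum(findings) >= 2
```

A glaucomatous ONH alone, for instance, would not satisfy the rule; a glaucomatous ONH plus a glaucomatous VF would.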


Subjects
Glaucoma, Optic Disk, Humans, Electronic Health Records, Glaucoma/diagnosis, Visual Field Tests/methods, Visual Fields, Optical Coherence Tomography/methods, Intraocular Pressure
3.
NPJ Digit Med; 6(1): 112, 2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37311940

ABSTRACT

A plethora of classification models for the detection of glaucoma from fundus images have been proposed in recent years. Often trained with data from a single glaucoma clinic, they report impressive performance on internal test sets, but tend to struggle to generalize to external sets. This performance drop can be attributed to data shifts in glaucoma prevalence, fundus camera, and the definition of glaucoma ground truth. In this study, we confirm that a previously described regression network for glaucoma referral (G-RISK) obtains excellent results in a variety of challenging settings. Thirteen different data sources of labeled fundus images were utilized. The data sources include two large population cohorts (Australian Blue Mountains Eye Study, BMES, and German Gutenberg Health Study, GHS) and 11 publicly available datasets (AIROGS, ORIGA, REFUGE1, LAG, ODIR, REFUGE2, GAMMA, RIM-ONEr3, RIM-ONE DL, ACRIMA, PAPILA). To minimize data shifts in input data, a standardized image processing strategy was developed to obtain 30° disc-centered images from the original data. A total of 149,455 images were included for model testing. Areas under the receiver operating characteristic curve (AUC) for the BMES and GHS population cohorts were 0.976 [95% CI: 0.967-0.986] and 0.984 [95% CI: 0.980-0.991] at the participant level, respectively. At a fixed specificity of 95%, sensitivities were 87.3% and 90.3%, respectively, surpassing the minimum criterion of 85% sensitivity recommended by Prevent Blindness America. AUC values on the 11 publicly available datasets ranged from 0.854 to 0.988. These results confirm the excellent generalizability of a glaucoma risk regression model trained with homogeneous data from a single tertiary referral center. Further validation using prospective cohort studies is warranted.
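Reporting sensitivity at a fixed specificity, as this abstract does, amounts to sweeping the model's score threshold and keeping the best sensitivity among thresholds that still meet the specificity floor. A plain-Python sketch under that reading (function and variable names are ours):

```python
def sensitivity_at_specificity(labels, scores, min_specificity=0.95):
    """Best sensitivity achievable while specificity >= min_specificity.
    labels: 1 = glaucoma, 0 = healthy; decision rule: score >= t -> positive."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    best = 0.0
    for t in sorted(set(scores)):          # every observed score is a candidate threshold
        specificity = sum(s < t for s in neg) / len(neg)
        if specificity >= min_specificity:
            sensitivity = sum(s >= t for s in pos) / len(pos)
            best = max(best, sensitivity)
    return best
```

On real data one would typically use a library ROC routine instead; the loop above just makes the trade-off explicit.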

4.
Transl Vis Sci Technol; 11(8): 22, 2022 Aug 01.
Article in English | MEDLINE | ID: mdl-35998059

ABSTRACT

Purpose: Standard automated perimetry is the gold standard for monitoring visual field (VF) loss in glaucoma management, but it is prone to intrasubject variability. We trained and validated a customized deep learning (DL) regression model with an Xception backbone that estimates pointwise and overall VF sensitivity from unsegmented optical coherence tomography (OCT) scans. Methods: DL regression models were trained with four imaging modalities (circumpapillary OCT at 3.5, 4.1, and 4.7 mm diameter, and scanning laser ophthalmoscopy en face images) to estimate mean deviation (MD) and 52 threshold values. This retrospective study used data from patients who underwent a complete glaucoma examination, including a reliable Humphrey Field Analyzer (HFA) 24-2 SITA Standard (SS) VF exam and a SPECTRALIS OCT. Results: For MD estimation, weighted prediction averaging of all four individual models yielded a mean absolute error (MAE) of 2.89 dB (2.50-3.30) on 186 test images, a 54% reduction relative to baseline (MAEdecr%). For the estimation of the 52 VF threshold values, the weighted ensemble model resulted in an MAE of 4.82 dB (4.45-5.22), an MAEdecr% of 38% from baseline when predicting the pointwise mean value. DL explained 75% and 58% of the variance (R2) in MD and pointwise sensitivity estimation, respectively. Conclusions: Deep learning can estimate global and pointwise VF sensitivities that fall almost entirely within the 90% test-retest confidence intervals of the 24-2 SS test. Translational Relevance: Fast and consistent VF prediction from unsegmented OCT scans could become a solution for visual function estimation in patients unable to perform reliable VF exams.
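The two headline numbers in this abstract are the mean absolute error and its percent reduction from a baseline predictor (MAEdecr%). As we read them, they work out to (helper names are ours):

```python
def mean_absolute_error(predictions, targets):
    """Average absolute difference between predicted and measured
    values (e.g. MD or pointwise sensitivities, in dB)."""
    return sum(abs(p - t) for p, t in zip(predictions, targets)) / len(targets)

def mae_decrease_pct(model_mae, baseline_mae):
    """MAEdecr%: percent reduction of the model's MAE relative to
    the MAE of a baseline predictor."""
    return 100.0 * (baseline_mae - model_mae) / baseline_mae
```

For example, a model MAE of 2.0 dB against a baseline MAE of 4.0 dB gives an MAEdecr% of 50%.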


Subjects
Deep Learning, Glaucoma, Glaucoma/diagnostic imaging, Humans, Retrospective Studies, Optical Coherence Tomography, Vision Disorders/diagnosis, Visual Fields
5.
Sci Rep; 11(1): 20313, 2021 Oct 13.
Article in English | MEDLINE | ID: mdl-34645908

ABSTRACT

Although unprecedented sensitivity and specificity values are reported, recent deep learning models for glaucoma detection lack decision transparency. Here, we propose a methodology that advances explainable deep learning for glaucoma detection and for estimation of the vertical cup-disc ratio (VCDR), an important risk factor. We trained and evaluated deep learning models using fundus images that underwent a defined cropping policy. We defined the crop radius as a percentage of image size, centered on the optic nerve head (ONH), with equidistantly spaced values from 10% to 60% (ONH crop policy). The inverse of the cropping mask was also applied (periphery crop policy). Models trained using the original images resulted in an area under the curve (AUC) of 0.94 [95% CI 0.92-0.96] for glaucoma detection, and a coefficient of determination (R2) of 77% [95% CI 0.77-0.79] for VCDR estimation. Models trained on images with the ONH absent still obtained substantial performance (0.88 [95% CI 0.85-0.90] AUC for glaucoma detection and a 37% [95% CI 0.35-0.40] R2 score for VCDR estimation in the most extreme setup of 60% ONH crop). Our findings provide the first irrefutable evidence that deep learning can detect glaucoma from fundus image regions outside the ONH.
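The cropping policy reads as a circular mask of radius proportional to image size, centred on the ONH, with the periphery policy as its complement. A NumPy sketch under those assumptions (the function name, and the choice of max(h, w) as "image size", are ours, not the paper's):

```python
import numpy as np

def crop_mask(h, w, onh_center, radius_pct, policy="onh"):
    """Boolean keep-mask for an h x w image: True pixels survive the crop.
    'onh'       -> keep a disc of radius radius_pct * image size
                   around the ONH centre (cy, cx);
    'periphery' -> keep the inverse of that disc."""
    cy, cx = onh_center
    radius = radius_pct * max(h, w)
    yy, xx = np.ogrid[:h, :w]                       # broadcastable row/col grids
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return inside if policy == "onh" else ~inside
```

Applying the mask (e.g. `image * crop_mask(...)`) blanks everything outside (or inside) the disc before training.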


Subjects
Deep Learning, Fundus Oculi, Glaucoma/diagnostic imaging, Optic Disk/diagnostic imaging, Optic Nerve Diseases/diagnostic imaging, Aged, Area Under Curve, Computer-Assisted Diagnosis/methods, Female, Humans, Male, Middle Aged, Regression Analysis, Retina/diagnostic imaging, Sensitivity and Specificity
6.
Comput Methods Programs Biomed; 199: 105920, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33412285

ABSTRACT

BACKGROUND AND OBJECTIVES: Pathological myopia (PM) is the seventh leading cause of blindness, with a reported global prevalence of up to 3%. Early and automated PM detection from fundus images could help prevent blindness in a world population with rising myopia prevalence. We aim to assess the use of convolutional neural networks (CNNs) for the detection of PM and semantic segmentation of myopia-induced lesions from fundus images on a recently introduced reference dataset. METHODS: This investigation reports the results of CNNs developed for the recently introduced Pathological Myopia (PALM) dataset, which consists of 1200 images. Our CNN bundles lesion segmentation and PM classification, as the two tasks are heavily intertwined. Domain knowledge is also incorporated through a new Optic Nerve Head (ONH)-based prediction enhancement for the segmentation of atrophy and fovea localization. Finally, we are the first to approach fovea localization using segmentation instead of detection or regression models. Evaluation metrics include area under the receiver operating characteristic curve (AUC) for PM detection, Euclidean distance for fovea localization, and Dice and F1 metrics for the semantic segmentation tasks (optic disc, retinal atrophy and retinal detachment). RESULTS: Models trained with the 400 available training images achieved an AUC of 0.9867 for PM detection, and a Euclidean distance of 58.27 pixels on the fovea localization task, evaluated on a test set of 400 images. Dice and F1 metrics for semantic segmentation of lesions scored 0.9303 and 0.9869 on optic disc, 0.8001 and 0.9135 on retinal atrophy, and 0.8073 and 0.7059 on retinal detachment, respectively. CONCLUSIONS: We report a successful approach for simultaneous classification of pathological myopia and segmentation of associated lesions. Our work was recognized with an award in the "Pathological Myopia detection from retinal images" challenge held during the IEEE International Symposium on Biomedical Imaging (April 2019). Considering that (pathological) myopia cases are often identified as false positives and false negatives by glaucoma deep learning models, we envisage that the current work could aid future research to discriminate between glaucomatous and highly myopic eyes, complemented by the localization and segmentation of landmarks such as the fovea, optic disc and atrophy.
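For reference, the Dice metric used for the segmentation tasks in this record is twice the overlap divided by the total mask sizes. A set-based sketch (representing masks as pixel-coordinate sets is our choice; in practice one would compute this on binary arrays):

```python
def dice_coefficient(pred_pixels, true_pixels):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks given as
    sets of (row, col) pixel coordinates."""
    pred, true = set(pred_pixels), set(true_pixels)
    if not pred and not true:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(pred & true) / (len(pred) + len(true))
```

A score of 1.0 means the predicted lesion mask matches the ground truth exactly; 0.0 means no overlap.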


Subjects
Deep Learning, Glaucoma, Degenerative Myopia, Optic Disk, Fundus Oculi, Humans, Degenerative Myopia/diagnostic imaging
7.
Acta Ophthalmol; 98(1): e94-e100, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31344328

ABSTRACT

PURPOSE: To assess the use of deep learning (DL) for computer-assisted glaucoma identification, and the impact of training using images selected by an active learning strategy, which minimizes labelling cost. Additionally, this study focuses on the explainability of the glaucoma classifier. METHODS: This original investigation pooled 8433 retrospectively collected and anonymized colour optic disc-centred fundus images, in order to develop a deep learning-based classifier for glaucoma diagnosis. The labels of the various deep learning models were compared with the clinical assessment by glaucoma experts. Data were analysed between March and October 2018. Main outcome measures were sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and the amount of data used for discriminating between glaucomatous and non-glaucomatous fundus images, at both image and patient level. RESULTS: Trained using 2072 colour fundus images, representing 42% of the original training data, the DL model achieved an AUC of 0.995, with sensitivity and specificity of, respectively, 98.0% (CI 95.5%-99.4%) and 91% (CI 84.0%-96.0%), for glaucoma versus non-glaucoma patient referral. CONCLUSIONS: These results demonstrate the benefits of deep learning for automated glaucoma detection based on optic disc-centred fundus images. The combined use of transfer and active learning in the medical community can optimize performance of DL models while minimizing the labelling cost for domain experts. Glaucoma experts can use the heat maps generated by the deep learning classifier to assess its decisions, which appear to be related to the inferior and superior neuroretinal rim (within the ONH), and to the RNFL in the superotemporal and inferotemporal zones (outside the ONH).


Subjects
Deep Learning, Computer-Assisted Diagnosis/methods, Glaucoma/diagnosis, Optic Disk/pathology, Follow-Up Studies, Fundus Oculi, Humans, ROC Curve, Retrospective Studies
8.
Comput Med Imaging Graph; 76: 101636, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31288217

ABSTRACT

Epidemiological studies demonstrate that the dimensions of retinal vessels change with ocular diseases, coronary heart disease and stroke. Different metrics have been described to quantify these changes in fundus images, with arteriolar and venular calibers among the most widely used. The analysis often includes a manual procedure during which a trained grader differentiates between arterioles and venules. This step can be time-consuming and can introduce variability, especially when large volumes of images need to be analyzed. In light of the recent successes of fully convolutional networks (FCNs) applied to biomedical image segmentation, we assess their potential in the context of retinal artery-vein (A/V) discrimination. To the best of our knowledge, a deep learning (DL) architecture for simultaneous vessel extraction and A/V discrimination has not been previously employed. With the aim of improving the automation of vessel analysis, we present a novel application of the U-Net semantic segmentation architecture (based on FCNs) to the discrimination of arteries and veins in fundus images. Utilizing DL, we obtain results that exceed accuracies reported in the literature. Our model was trained and tested on the public DRIVE and HRF datasets. For DRIVE, measuring performance on vessels wider than two pixels, the FCN achieved accuracies of 94.42% and 94.11% on arteries and veins, respectively. This represents a 25% decrease in error over the previous state of the art reported by Xu et al. (2017). Additionally, we introduce the HRF A/V ground truth, on which our model achieves 96.98% accuracy on all discovered centerline pixels. The HRF A/V ground truth (validated by an ophthalmologist), predicted A/V annotations, and evaluation code are available at https://github.com/rubenhx/av-segmentation.
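The accuracies quoted here are per-pixel: the fraction of vessel centreline pixels whose predicted artery/vein label matches the ground truth. A minimal sketch of that computation (names are ours):

```python
def centerline_accuracy(pred_labels, true_labels):
    """Fraction of centreline pixels classified correctly. To reproduce
    the per-class accuracies in the abstract, restrict the inputs to
    pixels of one true class ('artery' or 'vein')."""
    if len(pred_labels) != len(true_labels):
        raise ValueError("label sequences must align pixel-for-pixel")
    correct = sum(p == t for p, t in zip(pred_labels, true_labels))
    return correct / len(true_labels)
```

Restricting to true-artery pixels yields the artery accuracy; restricting to true-vein pixels yields the vein accuracy.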


Subjects
Deep Learning, Fundus Oculi, Retinal Vessels/diagnostic imaging, Benchmarking, Datasets as Topic, Humans, Photography, Retinal Artery/diagnostic imaging, Retinal Vein/diagnostic imaging