Results 1 - 2 of 2
1.
J Curr Glaucoma Pract ; 18(1): 4-9, 2024.
Article in English | MEDLINE | ID: mdl-38585168

ABSTRACT

Aim and background: Automated perimetry plays an important role in the diagnosis and monitoring of glaucoma patients. The purpose of this study was to prospectively determine parity between the Humphrey visual field analyzer (HVFA), the current gold standard, and the VisuALL virtual reality perimeter (VRP).

Materials and methods: In this prospective, fully paired diagnostic accuracy study, patients with stable, long-term HVFA visual fields (horizontal dots for ≥4 consecutive visits on progression analysis) and preperimetric, mild, moderate, or severe visual field loss were familiarized with the VRP and then tested using its proprietary software. These results were used for point-by-point comparison with a contemporaneous HVFA test. The study was approved by the Institutional Review Board (IRB) of the University of the Incarnate Word, San Antonio, Texas, United States of America (IRB approval #20-06-002).

Results: The prospective study analyzed 43 eyes of 24 glaucoma patients. Spearman's correlation of mean deviation (MD) between HVFA and VRP was strong, with rs(41) = 0.871, p < 0.001. The overall mean difference in point-by-point sensitivity between the devices was -0.4 ± 1.5 dB but varied across visual field locations and glaucoma severities.

Conclusion: The parity between the VRP and HVFA was remarkably strong for mild and moderate glaucoma. Given its portability, ease of use, space efficiency, and low cost, the VRP presents a viable alternative.

Clinical significance: Automated perimetry, specifically the HVFA, has been the gold standard for visual field assessment since its introduction. The recent COVID-19 pandemic has highlighted the advantages of the VRP, which allows safer visual field assessment for patient and clinician alike. This study aims to establish parity between these systems, allowing for the efficient integration of a novel head-mounted perimetry system that can safely diagnose and monitor glaucomatous progression in clinical practice.

Precis: Investigation of parity between Olleyes VisuALL virtual reality perimetry (VRP) and the existing standard HVFA perimetry is essential to the diagnosis and management of glaucoma. Linear correlations between the two were established from 43 glaucomatous eyes. Parity was strong for mild and moderate glaucoma, presenting VRP as a viable alternative.

How to cite this article: Griffin JM, Slagle GT, Vu TA, et al. Prospective Comparison of VisuALL Virtual Reality Perimetry and Humphrey Automated Perimetry in Glaucoma. J Curr Glaucoma Pract 2024;18(1):4-9.
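The study's headline statistic, Spearman's rank correlation of mean deviation between the two devices, can be sketched in a few lines. The data below are hypothetical toy values, not the study's measurements (the actual analysis covered 43 eyes and yielded rs(41) = 0.871); the `ranks` and `spearman` helpers are a minimal pure-Python implementation of the standard rank-correlation formula with average ranks for ties.

```python
def ranks(xs):
    # Assign average ranks (1-based); tied values share the mean of their ranks.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rs = Pearson correlation of the rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical MD values (dB) for six eyes on each device.
hvfa_md = [-1.2, -3.5, -0.8, -6.1, -10.4, -2.0]
vrp_md = [-1.5, -3.1, -1.0, -5.8, -11.2, -2.4]

rs = spearman(hvfa_md, vrp_md)
print(f"rs = {rs:.3f}")  # rs = 1.000 for this perfectly monotone toy data
```

Because the toy pairs happen to be perfectly monotone, rs comes out as exactly 1; real paired perimetry data would show the kind of strong-but-imperfect agreement the study reports.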

2.
Transl Vis Sci Technol ; 13(2): 16, 2024 02 01.
Article in English | MEDLINE | ID: mdl-38381447

ABSTRACT

Purpose: Retinal images contain rich biomarker information for neurodegenerative disease. Recently, deep learning models have been applied to automated neurodegenerative disease diagnosis and risk prediction from retinal images with good results.

Methods: In this review, we systematically report studies with datasets of retinal images from patients with neurodegenerative diseases, including Alzheimer's disease, Huntington's disease, Parkinson's disease, amyotrophic lateral sclerosis, and others. We also review and characterize the models in the current literature used for classification, regression, or segmentation problems on retinal images of patients with neurodegenerative diseases.

Results: Our review found several existing datasets and models spanning various imaging modalities, primarily in patients with Alzheimer's disease, with most datasets on the order of tens to a few hundred images. We found limited data available for the other neurodegenerative diseases. Although cross-sectional imaging data for Alzheimer's disease are becoming more abundant, datasets with longitudinal imaging of any disease are lacking.

Conclusions: The use of bilateral and multimodal imaging together with metadata appears to improve model performance; thus, multimodal bilateral image datasets with patient metadata are needed. We identified several deep learning tools that have been useful in this context, including feature extraction algorithms specific to retinal images, retinal image preprocessing techniques, transfer learning, feature fusion, and attention mapping. Importantly, we also consider the limitations common to these models in real-world clinical applications.

Translational Relevance: This systematic review evaluates the deep learning models and retinal features relevant to the evaluation of retinal images of patients with neurodegenerative disease.
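The "feature fusion" strategy the review mentions is, at its simplest, late fusion: per-modality feature vectors are concatenated and fed to a single classifier head. The sketch below uses hypothetical names, dimensions, and fixed toy weights (a real model would learn them); it is an illustration of the general pattern, not any cited model's architecture.

```python
def linear_score(features, weights, bias):
    # One linear classifier unit: dot product of features and weights, plus bias.
    return sum(f * w for f, w in zip(features, weights)) + bias

# Hypothetical per-modality embeddings (e.g., from a fundus photo and an OCT scan).
fundus_feats = [0.2, 0.7, 0.1]
oct_feats = [0.5, 0.3]

# Late fusion: simple concatenation of the two modality embeddings.
fused = fundus_feats + oct_feats

# Fixed toy weights for the fused 5-dimensional vector.
weights = [0.1, -0.2, 0.3, 0.4, -0.1]
score = linear_score(fused, weights, bias=0.05)
print(f"fused dim = {len(fused)}, score = {score:.3f}")
```

Bilateral fusion (left and right eye) follows the same pattern with one embedding per eye; the review's finding is that models combining such inputs with patient metadata tend to outperform single-image models.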


Subjects
Alzheimer Disease , Deep Learning , Neurodegenerative Diseases , Retina , Humans , Algorithms , Alzheimer Disease/diagnostic imaging , Machine Learning , Neurodegenerative Diseases/diagnostic imaging , Datasets as Topic , Retina/diagnostic imaging