Results 1 - 5 of 5
1.
Clin Ophthalmol ; 17: 1161-1168, 2023.
Article in English | MEDLINE | ID: mdl-37082300

ABSTRACT

Purpose: To evaluate the visual acuity and quality of vision in patients bilaterally implanted with the ZCBOO/ZCTx monofocal intraocular lens (IOL; Johnson & Johnson Vision) or the DATx15 extended depth of focus (EDOF) IOL (Alcon Vision, LLC). Methods: A single-site, non-interventional study comparing patients bilaterally implanted with the ZCBOO/ZCTx monofocal IOL to patients bilaterally implanted with the toric or non-toric DATx15 IOL. A total of 30 patients (60 eyes) completed the study in the monofocal group and 32 patients (64 eyes) in the EDOF group; all were targeted for emmetropia. Binocular uncorrected distance, intermediate (66 cm), and near (40 cm) visual acuities and distance-corrected distance, intermediate (66 cm), and near (40 cm) visual acuities were assessed. Binocular distance-corrected defocus curve testing was performed from -3.5 D to +3.0 D. Patient-reported visual disturbance (QUVID) and IOL satisfaction (IOLSAT) questionnaires were administered. Results: Mean uncorrected visual acuity in the DATx15 group was 0.15 ± 0.10 logMAR at 66 cm and 0.36 ± 0.14 logMAR at 40 cm, compared with 0.24 ± 0.15 logMAR and 0.59 ± 0.17 logMAR, respectively, in the ZCBOO/ZCTx group. The DATx15 group (23 respondents, 74%) also reported significantly more spectacle independence at near on the IOLSAT (p < 0.01) than the ZCBOO/ZCTx group (13 respondents, 43%). Glare, halos, starbursts, and blur reported on the QUVID questionnaire were similar between the two groups. Conclusion: The DATx15 group had better near and intermediate vision and greater spectacle independence than the ZCBOO/ZCTx group.
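The group differences above are reported on the logMAR scale, where each 0.1 logMAR step corresponds to one line on an ETDRS chart and decimal acuity equals 10^(-logMAR). A minimal illustrative sketch (not the study's analysis) of what those reported means imply:

```python
# Illustrative only: interprets the mean uncorrected acuities reported
# in the abstract above. 0.1 logMAR = one ETDRS chart line;
# decimal acuity = 10 ** (-logMAR).

reported = {
    # distance: (DATx15 mean logMAR, ZCBOO/ZCTx mean logMAR)
    "66 cm (intermediate)": (0.15, 0.24),
    "40 cm (near)": (0.36, 0.59),
}

for distance, (edof, monofocal) in reported.items():
    diff = monofocal - edof          # logMAR difference (lower is better)
    lines = diff / 0.1               # approximate ETDRS chart lines gained
    print(f"{distance}: EDOF advantage = {diff:.2f} logMAR "
          f"(~{lines:.1f} lines; decimal acuity "
          f"{10 ** -edof:.2f} vs {10 ** -monofocal:.2f})")
```

At near, for example, the 0.23 logMAR difference corresponds to roughly two lines of acuity on an ETDRS chart.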

2.
Br J Ophthalmol ; 107(4): 511-517, 2023 04.
Article in English | MEDLINE | ID: mdl-34670749

ABSTRACT

PURPOSE: To assess the generalisability and performance of a deep learning classifier for automated detection of gonioscopic angle closure in anterior segment optical coherence tomography (AS-OCT) images. METHODS: A convolutional neural network (CNN) model developed using data from the Chinese American Eye Study (CHES) was used to detect gonioscopic angle closure in AS-OCT images with reference gonioscopy grades provided by trained ophthalmologists. Independent test data were derived from the population-based CHES, a community-based clinic in Singapore, and a hospital-based clinic at the University of Southern California (USC). Classifier performance was evaluated with receiver operating characteristic curve and area under the receiver operating characteristic curve (AUC) metrics. Interexaminer agreement between the classifier and two human examiners at USC was calculated using Cohen's kappa coefficients. RESULTS: The classifier was tested using 640 images (311 open and 329 closed) from 127 Chinese Americans, 10 165 images (9595 open and 570 closed) from 1318 predominantly Chinese Singaporeans and 300 images (234 open and 66 closed) from 40 multiethnic USC patients. The classifier achieved similar performance in the CHES (AUC=0.917), Singapore (AUC=0.894) and USC (AUC=0.922) cohorts. Standardising the distribution of gonioscopy grades across cohorts produced similar AUC metrics (range 0.890-0.932). The agreement between the CNN classifier and two human examiners (κ=0.700 and 0.704) approximated interexaminer agreement (κ=0.693) in the USC cohort. CONCLUSION: An OCT-based deep learning classifier demonstrated consistent performance detecting gonioscopic angle closure across three independent patient populations. This automated method could aid ophthalmologists in the assessment of angle status in diverse patient populations.


Subject(s)
Deep Learning , Glaucoma, Angle-Closure , Humans , Gonioscopy , Anterior Eye Segment , Tomography, Optical Coherence/methods , Intraocular Pressure , Glaucoma, Angle-Closure/diagnosis , Hospitals
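The AUC metric used to evaluate the classifier in the abstract above has a simple probabilistic reading: it is the probability that a randomly chosen closed-angle image receives a higher classifier score than a randomly chosen open-angle image. A minimal sketch of that Mann-Whitney formulation, with hypothetical scores (not data from the study):

```python
# Minimal sketch of AUC via pairwise comparison (Mann-Whitney U form).
# AUC = P(score of a positive example > score of a negative example),
# with ties counted as 0.5. Exact but O(n*m); fine for illustration.

def auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores (probability of angle closure):
closed = [0.9, 0.8, 0.75, 0.6]   # gonioscopically closed angles
open_ = [0.4, 0.3, 0.6, 0.1]     # gonioscopically open angles
print(auc(closed, open_))        # prints 0.96875
```

An AUC of 1.0 would mean the score distributions are perfectly separated; the cohort AUCs of 0.89-0.92 reported above indicate strong but imperfect separation.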
3.
Am J Ophthalmol ; 226: 100-107, 2021 06.
Article in English | MEDLINE | ID: mdl-33577791

ABSTRACT

PURPOSE: To compare the performance of a novel convolutional neural network (CNN) classifier and human graders in detecting angle closure in EyeCam (Clarity Medical Systems, Pleasanton, California, USA) goniophotographs. DESIGN: Retrospective cross-sectional study. METHODS: Subjects from the Chinese American Eye Study underwent EyeCam goniophotography in 4 angle quadrants. A CNN classifier based on the ResNet-50 architecture was trained to detect angle closure, defined as inability to visualize the pigmented trabecular meshwork, using reference labels by a single experienced glaucoma specialist. The performance of the CNN classifier was assessed using an independent test dataset and reference labels by the single glaucoma specialist or a panel of 3 glaucoma specialists. This performance was compared to that of 9 human graders with a range of clinical experience. Outcome measures included area under the receiver operating characteristic curve (AUC) metrics and Cohen kappa coefficients in the binary classification of open or closed angle. RESULTS: The CNN classifier was developed using 29,706 open and 2,929 closed angle images. The independent test dataset was composed of 600 open and 400 closed angle images. The CNN classifier achieved excellent performance based on single-grader (AUC = 0.969) and consensus (AUC = 0.952) labels. The agreement between the CNN classifier and consensus labels (κ = 0.746) surpassed that of all non-reference human graders (κ = 0.578-0.702). Human grader agreement with consensus labels improved with clinical experience (P = 0.03). CONCLUSION: A CNN classifier can effectively detect angle closure in goniophotographs with performance comparable to that of an experienced glaucoma specialist. This provides an automated method to support remote detection of patients at risk for primary angle closure glaucoma.


Subject(s)
Diagnosis, Computer-Assisted/classification , Glaucoma, Angle-Closure/diagnosis , Image Processing, Computer-Assisted/classification , Neural Networks, Computer , Photography/classification , Aged , Aged, 80 and over , Anterior Eye Segment/pathology , Area Under Curve , Asian People , China/ethnology , Cross-Sectional Studies , Expert Systems , Female , Glaucoma, Angle-Closure/classification , Gonioscopy , Humans , Male , Middle Aged , Ophthalmologists , Reproducibility of Results , Retrospective Studies , Specialization
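The Cohen kappa coefficients reported in the abstract above measure agreement between two graders after correcting for chance: κ = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is the agreement expected if both graders labelled independently at their marginal rates. A minimal sketch with hypothetical gradings (not data from the study):

```python
# Minimal sketch of Cohen's kappa for two raters' categorical labels.
# kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o corrected by
# the chance agreement p_e implied by each rater's label frequencies.

def cohen_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical open (0) / closed (1) gradings for ten goniophotographs:
grader = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1]
cnn    = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]
print(round(cohen_kappa(grader, cnn), 3))   # prints 0.6
```

Here the raters agree on 8 of 10 images (p_o = 0.8) but would agree half the time by chance (p_e = 0.5), giving κ = 0.6; on the common Landis-Koch scale, the study's κ of 0.746 against consensus labels indicates substantial agreement.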
4.
Curr Opin Ophthalmol ; 30(6): 484-490, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31589185

ABSTRACT

PURPOSE OF REVIEW: As humans spend a considerable portion of life in the horizontal position, it is vital to better understand the effect of sleep position on glaucoma. RECENT FINDINGS: In recent literature, the mean positional increase in intraocular pressure from the supine position to the lateral decubitus position (LDP) is less than 2 mmHg for the dependent eye and less than 1 mmHg for the nondependent eye. The right LDP is the most commonly favored sleeping position. Some evidence suggests that the positional increases persist and so could lead to worse glaucomatous progression in the dependent eye; however, multiple studies failed to find a strong association. Ideally, future research will identify risk factors for higher positional increases, so as to identify patients who may benefit from a change in sleep position. To date, medications and argon laser trabeculoplasty have been ineffective in blunting the positional increase, although glaucoma surgery does reduce it. Raising the head of the bed has also been linked with blunting the increase. SUMMARY: Certain sleeping positions appear to be associated with higher intraocular pressure, although the association between sleep position and glaucoma progression is not as clear.


Subject(s)
Glaucoma/diagnosis , Posture/physiology , Sleep/physiology , Disease Progression , Glaucoma/physiopathology , Humans , Intraocular Pressure/physiology , Tonometry, Ocular