Results 1 - 8 of 8
1.
Ophthalmology ; 129(2): 171-180, 2022 02.
Article in English | MEDLINE | ID: mdl-34339778

ABSTRACT

PURPOSE: To develop and validate a multimodal artificial intelligence algorithm, FusionNet, using the pattern deviation probability plots from visual field (VF) reports and circular peripapillary OCT scans to detect glaucomatous optic neuropathy (GON). DESIGN: Cross-sectional study. SUBJECTS: Two thousand four hundred sixty-three pairs of VF and OCT images from 1083 patients. METHODS: FusionNet, based on bimodal input of paired VF and OCT data, was developed to detect GON. Visual field data were collected using the Humphrey Field Analyzer (HFA). OCT images were collected from 3 types of devices (DRI-OCT, Cirrus OCT, and Spectralis). The 2463 pairs of VF and OCT images were divided into 4 datasets: 1567 for training (HFA and DRI-OCT), 441 for primary validation (HFA and DRI-OCT), 255 for the internal test set (HFA and Cirrus OCT), and 200 for the external test set (HFA and Spectralis). GON was defined as retinal nerve fiber layer thinning with corresponding VF defects. MAIN OUTCOME MEASURES: Diagnostic performance of FusionNet compared with that of VFNet (with VF data as input) and OCTNet (with OCT data as input). RESULTS: FusionNet achieved an area under the receiver operating characteristic curve (AUC) of 0.950 (95% confidence interval [CI], 0.931-0.968) and outperformed VFNet (AUC, 0.868 [95% CI, 0.834-0.902]), OCTNet (AUC, 0.809 [95% CI, 0.768-0.850]), and 2 glaucoma specialists (glaucoma specialist 1: AUC, 0.882 [95% CI, 0.847-0.917]; glaucoma specialist 2: AUC, 0.883 [95% CI, 0.849-0.918]) in the primary validation set. In the internal and external test sets, the performance of FusionNet was also superior to that of VFNet and OCTNet (FusionNet vs VFNet vs OCTNet: internal test set, 0.917 vs 0.854 vs 0.811; external test set, 0.873 vs 0.772 vs 0.785).
No significant difference was found between the 2 glaucoma specialists and FusionNet in the internal and external test sets, except for glaucoma specialist 2 (AUC, 0.858 [95% CI, 0.805-0.912]) in the internal test set. CONCLUSIONS: FusionNet, developed using paired VF and OCT data, demonstrated superior performance to both VFNet and OCTNet in detecting GON, suggesting that multimodal machine learning models are valuable in detecting GON.
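The bimodal design described in this abstract can be illustrated as a generic late-fusion classifier: one encoder per modality produces a feature vector, the two vectors are concatenated, and a shared head outputs a GON probability. The sketch below is a minimal numpy illustration under that assumption; the random-projection encoders, dimensions, and weights are hypothetical stand-ins, not FusionNet's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy encoder: linear projection + ReLU (stand-in for a CNN branch)."""
    return np.maximum(x @ w, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical inputs: a flattened VF pattern deviation plot (52 test points)
# and a pre-extracted OCT feature vector, for a batch of 4 eyes.
vf = rng.normal(size=(4, 52))
oct_scan = rng.normal(size=(4, 256))

# Untrained illustrative weights for the two branches and the fusion head.
w_vf = rng.normal(size=(52, 16))
w_oct = rng.normal(size=(256, 16))
w_head = rng.normal(size=(32, 1))

# Late fusion: concatenate per-modality features, then classify jointly.
fused = np.concatenate([encode(vf, w_vf), encode(oct_scan, w_oct)], axis=1)
p_gon = sigmoid(fused @ w_head).ravel()  # one GON probability per sample
```

The single-modality baselines the abstract compares against (VFNet, OCTNet) correspond to using only one branch's features before the head.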


Subject(s)
Glaucoma, Open-Angle/diagnostic imaging; Machine Learning; Optic Nerve Diseases/diagnostic imaging; Tomography, Optical Coherence; Vision Disorders/physiopathology; Visual Fields/physiology; Adult; Aged; Algorithms; Area Under Curve; Cross-Sectional Studies; Female; Glaucoma, Open-Angle/physiopathology; Humans; Intraocular Pressure; Male; Middle Aged; Multimodal Imaging; Nerve Fibers/pathology; Optic Nerve Diseases/physiopathology; ROC Curve; Retinal Ganglion Cells/pathology; Visual Field Tests
3.
BMC Med Imaging ; 18(1): 35, 2018 10 04.
Article in English | MEDLINE | ID: mdl-30286740

ABSTRACT

BACKGROUND: To develop a deep neural network able to differentiate glaucomatous from non-glaucomatous visual fields based on visual field (VF) test results, we collected VF tests from 3 different ophthalmic centers in mainland China. METHODS: Visual fields obtained with both Humphrey 30-2 and 24-2 tests were collected. Reliability criteria were established as fixation losses of less than 2/13 and false-positive and false-negative rates of less than 15%. RESULTS: We split a total of 4012 pattern deviation (PD) images from 1352 patients into two sets: 3712 for training and 300 for validation. There was no significant difference in the left-to-right eye ratio (P = 0.6211), whereas age (P = 0.0022), VFI (P = 0.0001), MD (P = 0.0039), and PSD (P = 0.0001) showed significant differences. On the validation set of 300 VFs, the CNN achieved an accuracy of 0.876, with a specificity of 0.826 and a sensitivity of 0.932. The average accuracies of human graders were 0.607, 0.585, and 0.626 for resident ophthalmologists, attending ophthalmologists, and glaucoma experts, respectively. AGIS and GSS2 achieved accuracies of 0.459 and 0.523, respectively. Three traditional machine learning algorithms, namely support vector machine (SVM), random forest (RF), and k-nearest neighbor (k-NN), were also implemented and evaluated, achieving accuracies of 0.670, 0.644, and 0.591, respectively. CONCLUSIONS: Our CNN-based algorithm achieved higher accuracy than human ophthalmologists and traditional rule-based criteria (AGIS and GSS2) in differentiating glaucomatous from non-glaucomatous VFs.
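The accuracy, sensitivity, and specificity figures reported above are standard confusion-matrix quantities for a binary glaucoma/non-glaucoma decision. A minimal sketch of how such metrics are computed (the labels here are illustrative, not the study's data):

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) for binary labels: 1 = glaucoma, 0 = non-glaucoma."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / y_true.size
    sensitivity = tp / (tp + fn)   # fraction of glaucoma VFs detected
    specificity = tn / (tn + fp)   # fraction of normal VFs correctly cleared
    return accuracy, sensitivity, specificity

# Toy example: 6 VFs, one missed glaucoma case and one false alarm.
acc, sens, spec = diagnostic_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

The same function applies unchanged to any of the classifiers compared in the study (CNN, SVM, RF, k-NN, or human graders) once their predictions are binarized.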


Subject(s)
Glaucoma/diagnosis; Visual Field Tests/methods; Adult; Aged; Female; Humans; Machine Learning; Middle Aged; Reproducibility of Results
4.
Asia Pac J Ophthalmol (Phila) ; 13(4): 100085, 2024.
Article in English | MEDLINE | ID: mdl-39059558

ABSTRACT

Large language models (LLMs), a natural language processing technology based on deep learning, are currently in the spotlight. These models closely mimic natural language comprehension and generation. Their evolution has undergone several waves of innovation, similar to that of convolutional neural networks. The transformer architecture underlying generative artificial intelligence marks a monumental leap beyond early-stage pattern recognition via supervised learning. With the expansion of parameters and training data (terabytes), LLMs display remarkable interactive capabilities, including memory retention and comprehension. These advances make LLMs particularly well suited for roles in healthcare communication between medical practitioners and patients. In this comprehensive review, we discuss the trajectory of LLMs and their potential implications for clinicians and patients. For clinicians, LLMs can be used for automated medical documentation and, given better inputs and extensive validation, may eventually be able to diagnose and treat autonomously. For patient care, LLMs can be used for triage suggestions, summarization of medical documents, explanation of a patient's condition, and customization of patient education materials tailored to the patient's comprehension level. The limitations of LLMs and possible solutions for real-world use are also presented. Given the rapid advancements in this area, this review briefly covers the many roles that LLMs may play in the ophthalmic space, with a focus on improving the quality of healthcare delivery.


Subject(s)
Natural Language Processing; Ophthalmology; Humans; Deep Learning; Artificial Intelligence; Neural Networks, Computer
5.
Comput Biol Med ; 151(Pt B): 106283, 2022 12.
Article in English | MEDLINE | ID: mdl-36442272

ABSTRACT

Glaucoma has become a major cause of vision loss. Early-stage diagnosis of glaucoma is critical for treatment planning to avoid irreversible vision damage. Meanwhile, interpreting the rapidly accumulating medical data from ophthalmic exams is cumbersome and resource-intensive. Therefore, automated methods are highly desirable to assist ophthalmologists in achieving fast and accurate glaucoma diagnosis. Deep learning has achieved great success in diagnosing glaucoma by analyzing data from different kinds of tests, such as peripapillary optical coherence tomography (OCT) and visual field (VF) testing. Nevertheless, applying these models in clinical practice remains challenging because of various limiting factors: OCT-only models perform worse at glaucoma diagnosis than OCT&VF-based models, whereas VF testing is time-consuming and highly variable, which can restrict the wide deployment of OCT&VF models. To this end, we develop a novel deep learning framework that leverages the OCT&VF model to enhance the performance of the OCT model. To transfer the complementary knowledge from the structural and functional assessments to the OCT model, a cross-modal knowledge transfer method is designed by integrating a distillation loss and a proposed asynchronous feature regularization (AFR) module. We demonstrate the effectiveness of the proposed method for glaucoma diagnosis by utilizing a public OCT&VF dataset and evaluating on an external OCT dataset. Our final model, with only OCT inputs, achieves an accuracy of 87.4% (a 3.1% absolute improvement) and an AUC of 92.3%, on par with the OCT&VF joint model. Moreover, results on the external dataset indicate the effectiveness and generalization capability of our model.
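The cross-modal knowledge-transfer idea can be illustrated with a standard knowledge-distillation objective: the student (OCT-only) is trained against both the ground-truth label and the temperature-softened outputs of the teacher (OCT&VF). The sketch below is a generic Hinton-style distillation loss in numpy, not the paper's exact loss or its AFR module; the temperature and weighting values are illustrative.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = np.asarray(z, dtype=float) / t
    z = z - z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, label, t=2.0, alpha=0.5):
    """alpha * cross-entropy(hard label)
       + (1 - alpha) * T^2 * KL(teacher_soft || student_soft)."""
    p_student = softmax(student_logits)
    ce = -np.log(p_student[label])                  # hard-label term
    p_t = softmax(teacher_logits, t)                # softened teacher
    p_s = softmax(student_logits, t)                # softened student
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s))) # distillation term
    return alpha * ce + (1 - alpha) * t * t * kl

# Student slightly disagrees with a confident teacher on a glaucoma case.
loss = kd_loss([2.0, 0.5], [3.0, -1.0], label=0)
```

Driving the distillation term to zero pulls the OCT-only student's output distribution toward the joint OCT&VF teacher's, which is the mechanism by which the bimodal knowledge is transferred.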


Subject(s)
Glaucoma; Tomography, Optical Coherence; Humans; Tomography, Optical Coherence/methods; Visual Fields; Distillation; Glaucoma/diagnostic imaging; Visual Field Tests/methods; Intraocular Pressure
6.
IEEE Trans Med Imaging ; 40(9): 2392-2402, 2021 09.
Article in English | MEDLINE | ID: mdl-33945474

ABSTRACT

Glaucoma is the leading cause of irreversible blindness. Early detection and timely treatment of glaucoma are essential for preventing visual field loss or even blindness. In clinical practice, Optical Coherence Tomography (OCT) and Visual Field (VF) exams are two widely used and complementary techniques for diagnosing glaucoma: OCT provides quantitative measurements of the optic nerve head (ONH) structure, while the VF test is a functional assessment of peripheral vision. In this paper, we propose a Deep Relation Transformer (DRT) to perform glaucoma diagnosis with OCT and VF information combined. A novel deep reasoning mechanism is proposed to explore implicit pairwise relations between OCT and VF information in both global and regional manners. Building on these pairwise relations, a carefully designed deep transformer mechanism enhances the representation of each modality with complementary information. Based on the reasoning and transformer mechanisms, three successive modules are designed to extract and collect information valuable for glaucoma diagnosis: the global relation module, the guided regional relation module, and the interaction transformer module. Moreover, we build a large dataset, the ZOC-OCT&VF dataset, which includes 1395 OCT-VF pairs for developing and evaluating DRT. We conduct extensive experiments to validate the effectiveness of the proposed method. Experimental results show that our method achieves 88.3% accuracy and outperforms existing single-modal approaches by a large margin. The code and dataset will be made publicly available.
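The core operation behind relating two modalities in a transformer is cross-attention: tokens from one modality act as queries over the other modality's tokens, and each query receives a convex combination of the other modality's features. The sketch below is a generic scaled dot-product cross-attention in numpy, not DRT's actual modules; the token counts and dimensions are hypothetical.

```python
import numpy as np

def cross_attention(q_tokens, kv_tokens):
    """Scaled dot-product attention: q_tokens attend over kv_tokens.
    Returns the enriched queries and the attention weight matrix."""
    d = q_tokens.shape[-1]
    scores = q_tokens @ kv_tokens.T / np.sqrt(d)      # (n_q, n_kv) similarities
    scores = scores - scores.max(axis=-1, keepdims=True)  # stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ kv_tokens, weights

rng = np.random.default_rng(1)
oct_tokens = rng.normal(size=(6, 32))  # e.g. 6 regional OCT features (queries)
vf_tokens = rng.normal(size=(4, 32))   # e.g. 4 regional VF features (keys/values)

# Each OCT region is enriched with a weighted mix of VF information.
enriched, attn = cross_attention(oct_tokens, vf_tokens)
```

Running the same operation with the roles swapped (VF tokens as queries) gives the symmetric direction of information flow between the two modalities.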


Subject(s)
Glaucoma; Optic Disk; Glaucoma/diagnostic imaging; Humans; Intraocular Pressure; Optic Disk/diagnostic imaging; Tomography, Optical Coherence; Visual Field Tests; Visual Fields
7.
NPJ Digit Med ; 3: 123, 2020.
Article in English | MEDLINE | ID: mdl-33043147

ABSTRACT

By 2040, ~100 million people will have glaucoma. To date, there is a lack of high-efficiency glaucoma diagnostic tools based on visual fields (VFs). Herein, we develop and evaluate the performance of 'iGlaucoma', a smartphone application-based deep learning system (DLS), in detecting glaucomatous VF changes. A total of 1,614,808 data points from 10,784 VFs (5542 patients) from seven centers in China were included in this study, divided over two phases. In Phase I, 1,581,060 data points from 10,135 VFs of 5105 patients were used to train (8424 VFs), validate (598 VFs), and test (3 independent test sets of 200, 406, and 507 samples) the diagnostic performance of the DLS. In Phase II, using the same DLS, the iGlaucoma cloud-based application was further tested on 33,748 data points from 649 VFs of 437 patients from three glaucoma clinics. With reference to three experienced glaucoma experts, the diagnostic performance (area under the curve [AUC], sensitivity, and specificity) of the DLS and of six ophthalmologists was evaluated in detecting glaucoma. In Phase I, the DLS outperformed all six ophthalmologists in the three test sets (AUC of 0.834-0.877, with a sensitivity of 0.831-0.922 and a specificity of 0.676-0.709). In Phase II, iGlaucoma had 0.99 accuracy in recognizing different patterns in the pattern deviation probability plot region, with a corresponding AUC, sensitivity, and specificity of 0.966 (0.953-0.979), 0.954 (0.930-0.977), and 0.873 (0.838-0.908), respectively. 'iGlaucoma' is a clinically effective diagnostic tool for detecting glaucoma from Humphrey VFs, although the target population will need to be carefully identified with input from glaucoma experts.
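The AUC values that this and the other studies report can be computed directly from continuous classifier scores via the Mann-Whitney U statistic: the AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal numpy sketch with illustrative toy scores:

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive is scored higher,
    with ties counted as half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Toy example: one glaucoma VF scored below one normal VF -> AUC 0.75.
auc = auc_score([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.1])
```

Unlike accuracy, sensitivity, and specificity, this quantity does not depend on a decision threshold, which is why the studies report it alongside the thresholded metrics.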
