Results 1 - 12 of 12
1.
Adv Ophthalmol Pract Res ; 4(3): 164-172, 2024.
Article in English | MEDLINE | ID: mdl-39114269

ABSTRACT

Background: Uncorrected refractive error is a major cause of vision impairment worldwide, and its increasing prevalence necessitates effective screening and management strategies. Meanwhile, deep learning, a subset of artificial intelligence, has significantly advanced ophthalmological diagnostics by automating tasks that previously required extensive clinical expertise. Although recent studies have investigated the use of deep learning models for refractive power detection through various imaging techniques, a comprehensive systematic review on this topic has yet to be done. This review aims to summarise and evaluate the performance of ocular image-based deep learning models in predicting refractive errors. Main text: We searched three databases (PubMed, Scopus, Web of Science) up to June 2023, focusing on deep learning applications in detecting refractive error from ocular images. We included studies that reported refractive error outcomes, regardless of publication year. We systematically extracted and evaluated the continuous outcomes (sphere, SE, cylinder) and categorical outcomes (myopia), ground truth measurements, ocular imaging modalities, deep learning models, and performance metrics, adhering to PRISMA guidelines. Nine studies were identified and categorised into three groups: retinal photo-based (n = 5), OCT-based (n = 1), and external ocular photo-based (n = 3). For high myopia prediction, retinal photo-based models achieved AUCs between 0.91 and 0.98, sensitivities between 85.10% and 97.80%, and specificities between 76.40% and 94.50%. For continuous prediction, retinal photo-based models reported MAEs ranging from 0.31D to 2.19D and R² values between 0.05 and 0.96. The OCT-based model achieved an AUC of 0.79-0.81, sensitivity of 82.30%-87.20%, and specificity of 61.70%-68.90%. For external ocular photo-based models, AUC ranged from 0.91 to 0.99, sensitivity from 81.13% to 84.00%, specificity from 74.00% to 86.42%, MAE from 0.07D to 0.18D, and accuracy from 81.60% to 96.70%. Collectively, the included studies showed promising performance, particularly the retinal photo-based and external eye photo-based DL models. Conclusions: The integration of deep learning models and ocular imaging for refractive error detection appears promising. However, their real-world clinical utility in current screening workflows has yet to be evaluated and will require thoughtful consideration in design and implementation.
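A minimal sketch (invented values, not data from the review) of the two continuous metrics the included studies report for refractive error prediction, mean absolute error in dioptres and R²:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

# Hypothetical spherical-equivalent (SE) values, in dioptres (D).
true_se = np.array([-3.25, -1.50, 0.00, -6.75, -2.00])
pred_se = np.array([-3.00, -1.75, -0.25, -6.00, -2.25])

mae = mean_absolute_error(true_se, pred_se)  # review reports 0.31D-2.19D across studies
r2 = r2_score(true_se, pred_se)              # review reports 0.05-0.96 across studies
print(f"MAE = {mae:.2f} D, R2 = {r2:.2f}")
```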

2.
Ophthalmol Sci ; 4(6): 100552, 2024.
Article in English | MEDLINE | ID: mdl-39165694

ABSTRACT

Objective: Vision transformers (ViTs) have shown promising performance in various classification tasks previously dominated by convolutional neural networks (CNNs). However, the performance of ViTs in referable diabetic retinopathy (DR) detection is relatively underexplored. In this study, using retinal photographs, we evaluated the comparative performances of ViTs and CNNs in detecting referable DR. Design: Retrospective study. Participants: A total of 48 269 retinal images from the open-source Kaggle DR detection dataset, the Messidor-1 dataset, and the Singapore Epidemiology of Eye Diseases (SEED) study were included. Methods: Using 41 614 retinal photographs from the Kaggle dataset, we developed 5 CNN models (Visual Geometry Group 19, ResNet50, InceptionV3, DenseNet201, and EfficientNetV2S) and 4 ViT models (VAN_small, CrossViT_small, ViT_small, and Hierarchical Vision transformer using Shifted Windows [SWIN]_tiny) for the detection of referable DR. We defined the presence of referable DR as eyes with moderate or worse DR. The comparative performance of all 9 models was evaluated in the Kaggle internal test dataset (with 1045 study eyes) and in 2 external test sets, the SEED study (5455 study eyes) and Messidor-1 (1200 study eyes). Main Outcome Measures: Area under the receiver operating characteristic curve (AUC), specificity, and sensitivity. Results: Among all models, the SWIN transformer displayed the highest AUC of 95.7% on the internal test set, significantly outperforming the CNN models (all P < 0.001). The same observation was confirmed in the external test sets, with the SWIN transformer achieving AUCs of 97.3% in SEED and 96.3% in Messidor-1. When the specificity level was fixed at 80% for the internal test, the SWIN transformer achieved the highest sensitivity of 94.4%, significantly better than all the CNN models (sensitivity levels ranging between 76.3% and 83.8%; all P < 0.001). This trend was also consistently observed in both external test sets. Conclusions: Our findings demonstrate that ViTs provide superior performance over CNNs in detecting referable DR from retinal photographs. These results point to the potential of utilizing ViT models to improve and optimize retinal photo-based deep learning for referable DR detection. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
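A minimal sketch of instantiating one ViT (SWIN-tiny) and one CNN (ResNet50) head-to-head for binary referable-DR classification; it assumes the timm library and its stock model names, not the authors' actual training code:

```python
import timm
import torch

# pretrained=False keeps the sketch offline-runnable; the study fine-tuned
# ImageNet-pretrained backbones on 41 614 Kaggle fundus photographs.
swin = timm.create_model("swin_tiny_patch4_window7_224", pretrained=False, num_classes=1)
cnn = timm.create_model("resnet50", pretrained=False, num_classes=1)

x = torch.randn(2, 3, 224, 224)         # a dummy batch standing in for fundus images
prob_swin = torch.sigmoid(swin(x))      # per-image probability of referable DR
prob_cnn = torch.sigmoid(cnn(x))
print(prob_swin.shape, prob_cnn.shape)  # torch.Size([2, 1]) each
```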

3.
Lancet Diabetes Endocrinol ; 12(8): 569-595, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39054035

ABSTRACT

Artificial intelligence (AI) use in diabetes care is increasingly being explored to personalise care for people with diabetes and to adapt treatments for complex presentations. However, the rapid advancement of AI also introduces challenges, such as potential biases, ethical considerations, and difficulties in ensuring equitable deployment. Ensuring inclusive and ethical development of AI technology can empower both health-care providers and people with diabetes in managing the condition. In this Review, we explore and summarise the current and future prospects of AI across the diabetes care continuum, from enhancing screening and diagnosis to optimising treatment and predicting and managing complications.


Subject(s)
Artificial Intelligence, Diabetes Mellitus, Humans, Artificial Intelligence/trends, Diabetes Mellitus/therapy, Diabetes Mellitus/diagnosis
4.
Nat Med ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030266

ABSTRACT

Primary diabetes care and diabetic retinopathy (DR) screening persist as major public health challenges due to a shortage of trained primary care physicians (PCPs), particularly in low-resource settings. Here, to bridge the gaps, we developed an integrated image-language system (DeepDR-LLM), combining a large language model (LLM module) and image-based deep learning (DeepDR-Transformer), to provide individualized diabetes management recommendations to PCPs. In a retrospective evaluation, the LLM module performed comparably to PCPs and endocrinology residents when tested in English, and outperformed PCPs while performing comparably to endocrinology residents when tested in Chinese. For identifying referable DR, the average PCP's accuracy was 81.0% unassisted and 92.3% when assisted by DeepDR-Transformer. Furthermore, we performed a single-center real-world prospective study deploying DeepDR-LLM. We compared diabetes management adherence of patients under the unassisted PCP arm (n = 397) with those under the PCP+DeepDR-LLM arm (n = 372). Patients with newly diagnosed diabetes in the PCP+DeepDR-LLM arm showed better self-management behaviors throughout follow-up (P < 0.05). For patients with referable DR, those in the PCP+DeepDR-LLM arm were more likely to adhere to DR referrals (P < 0.01). Additionally, DeepDR-LLM deployment improved the quality and empathy level of management recommendations. Given its multifaceted performance, DeepDR-LLM holds promise as a digital solution for enhancing primary diabetes care and DR screening.
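A hypothetical sketch of the general pattern the abstract describes, an image model's DR grade folded into an LLM prompt for management advice; DeepDR-LLM's real interfaces and prompts are not given in this abstract, so every name below is invented:

```python
def build_management_prompt(dr_grade: str, clinical_notes: str) -> str:
    """Assemble an LLM prompt from an image-model output plus chart data."""
    return (
        "You are assisting a primary care physician with diabetes management.\n"
        f"Fundus-image model output: {dr_grade} diabetic retinopathy.\n"
        f"Clinical notes: {clinical_notes}\n"
        "Recommend individualized management steps and state whether the "
        "patient needs referral to an ophthalmologist."
    )

# The assembled prompt would then be passed to the system's LLM module.
prompt = build_management_prompt("referable (moderate)", "newly diagnosed; HbA1c 8.4%")
print(prompt)
```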

5.
Commun Med (Lond) ; 3(1): 184, 2023 Dec 16.
Article in English | MEDLINE | ID: mdl-38104223

ABSTRACT

BACKGROUND: Cataract diagnosis typically requires in-person evaluation by an ophthalmologist. However, color fundus photography (CFP) is widely performed outside ophthalmology clinics, which could be exploited to increase the accessibility of cataract screening through automated detection. METHODS: DeepOpacityNet was developed to detect cataracts from CFPs and highlight the most relevant CFP features associated with cataracts. We used 17,514 CFPs from 2573 participants curated from the Age-Related Eye Disease Study 2 (AREDS2) dataset, of which 8681 CFPs were labeled with cataracts. The ground truth labels were transferred from slit-lamp examination for nuclear cataracts and from reading center grading of anterior segment photographs for cortical and posterior subcapsular cataracts. DeepOpacityNet was internally validated on an independent test set (20%), compared to three ophthalmologists on a subset of the test set (100 CFPs), externally validated on three datasets obtained from the Singapore Epidemiology of Eye Diseases (SEED) study, and visualized to highlight important features. RESULTS: Internally, DeepOpacityNet achieved a superior accuracy of 0.66 (95% confidence interval (CI): 0.64-0.68) and an area under the curve (AUC) of 0.72 (95% CI: 0.70-0.74), compared to that of other state-of-the-art methods. DeepOpacityNet achieved an accuracy of 0.75, compared to an accuracy of 0.67 for the ophthalmologist with the highest performance. Externally, DeepOpacityNet achieved AUC scores of 0.86, 0.88, and 0.89 on the SEED datasets, demonstrating the generalizability of our proposed method. Visualizations show that the visibility of blood vessels could be characteristic of cataract absence, while blurred regions could be characteristic of cataract presence. CONCLUSIONS: DeepOpacityNet could detect cataracts from CFPs in AREDS2 with performance superior to that of ophthalmologists and generate interpretable results. The code and models are available at https://github.com/ncbi/DeepOpacityNet ( https://doi.org/10.5281/zenodo.10127002 ).
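A minimal sketch (invented labels and scores; the authors' own code is linked above) of one common way to obtain an interval such as "AUC of 0.72 (95% CI: 0.70-0.74)", via bootstrap resampling:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)                             # hypothetical cataract labels
y_score = np.clip(0.3 * y_true + rng.normal(0.4, 0.25, 500), 0, 1)

aucs = []
for _ in range(1000):                                        # resample with replacement
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:                      # skip single-class resamples
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC = {roc_auc_score(y_true, y_score):.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```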


Cataracts are cloudy areas in the eye that impact sight. Diagnosis typically requires in-person evaluation by an ophthalmologist. In this study, a computer program was developed that can identify cataracts from specialist photographs of the eye. The computer program successfully identified cataracts and was better able to identify these than ophthalmologists. This computer program could be introduced to improve the diagnosis of cataracts in eye clinics.

6.
EBioMedicine ; 95: 104770, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37625267

ABSTRACT

BACKGROUND: Large language models (LLMs) are garnering wide interest due to their human-like and contextually relevant responses. However, LLMs' accuracy across specific medical domains has yet to be thoroughly evaluated. Myopia is a frequent topic on which patients and parents commonly seek information online. Our study evaluated the performance of three LLMs, namely ChatGPT-3.5, ChatGPT-4.0, and Google Bard, in delivering accurate responses to common myopia-related queries. METHODS: We curated thirty-one commonly asked myopia care-related questions, which were categorised into six domains: pathogenesis, risk factors, clinical presentation, diagnosis, treatment and prevention, and prognosis. Each question was posed to the LLMs, and their responses were independently graded by three consultant-level paediatric ophthalmologists on a three-point accuracy scale (poor, borderline, good). A majority consensus approach was used to determine the final rating for each response. 'Good' rated responses were further evaluated for comprehensiveness on a five-point scale. Conversely, 'poor' rated responses were further prompted for self-correction and then re-evaluated for accuracy. FINDINGS: ChatGPT-4.0 demonstrated superior accuracy, with 80.6% of responses rated as 'good', compared to 61.3% for ChatGPT-3.5 and 54.8% for Google Bard (Pearson's chi-squared test, all p ≤ 0.009). All three LLM chatbots showed high mean comprehensiveness scores (Google Bard: 4.35; ChatGPT-4.0: 4.23; ChatGPT-3.5: 4.11, out of a maximum score of 5). All LLM chatbots also demonstrated substantial self-correction capabilities: 66.7% (2 in 3) of ChatGPT-4.0's, 40% (2 in 5) of ChatGPT-3.5's, and 60% (3 in 5) of Google Bard's responses improved after self-correction. The LLM chatbots performed consistently across domains, except for 'treatment and prevention'. However, ChatGPT-4.0 still performed superiorly in this domain, receiving 70% 'good' ratings, compared to 40% for ChatGPT-3.5 and 45% for Google Bard (Pearson's chi-squared test, all p ≤ 0.001). INTERPRETATION: Our findings underscore the potential of LLMs, particularly ChatGPT-4.0, for delivering accurate and comprehensive responses to myopia-related queries. Continuous strategies and evaluations to improve LLMs' accuracy remain crucial. FUNDING: Dr Yih-Chung Tham was supported by the National Medical Research Council of Singapore (NMRC/MOH/HCSAINV21nov-0001).
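A minimal sketch of the majority-consensus step described above: three graders each rate a response "poor", "borderline", or "good", and the modal rating is taken as final (how the study resolved three-way splits, if any occurred, is not stated, so the tie branch below is an assumption):

```python
from collections import Counter

def consensus(ratings: list[str]) -> str:
    """Return the rating given by at least two of the three graders."""
    top, count = Counter(ratings).most_common(1)[0]
    return top if count >= 2 else "adjudicate"  # three-way split: assumed escalation

print(consensus(["good", "good", "borderline"]))  # -> good
print(consensus(["poor", "borderline", "good"]))  # -> adjudicate
```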


Subject(s)
Benchmarking, Myopia, Humans, Child, Search Engine, Consensus, Language, Myopia/diagnosis, Myopia/epidemiology, Myopia/therapy
7.
Ophthalmology ; 129(5): 571-584, 2022 05.
Article in English | MEDLINE | ID: mdl-34990643

ABSTRACT

PURPOSE: To develop deep learning models to perform automated diagnosis and quantitative classification of age-related cataract from anterior segment photographs. DESIGN: DeepLensNet was trained by applying deep learning models to the Age-Related Eye Disease Study (AREDS) dataset. PARTICIPANTS: A total of 18 999 photographs (6333 triplets) from longitudinal follow-up of 1137 eyes (576 AREDS participants). METHODS: Deep learning models were trained to detect and quantify nuclear sclerosis (NS; scale 0.9-7.1) from 45-degree slit-lamp photographs and cortical lens opacity (CLO; scale 0%-100%) and posterior subcapsular cataract (PSC; scale 0%-100%) from retroillumination photographs. DeepLensNet performance was compared with that of 14 ophthalmologists and 24 medical students. MAIN OUTCOME MEASURES: Mean squared error (MSE). RESULTS: On the full test set, mean MSE for DeepLensNet was 0.23 (standard deviation [SD], 0.01) for NS, 13.1 (SD, 1.6) for CLO, and 16.6 (SD, 2.4) for PSC. On a subset of the test set (substantially enriched for positive cases of CLO and PSC), for NS, mean MSE for DeepLensNet was 0.23 (SD, 0.02), compared with 0.98 (SD, 0.24; P = 0.000001) for the ophthalmologists and 1.24 (SD, 0.34; P = 0.000005) for the medical students. For CLO, mean MSE was 53.5 (SD, 14.8), compared with 134.9 (SD, 89.9; P = 0.003) for the ophthalmologists and 433.6 (SD, 962.1; P = 0.0007) for the medical students. For PSC, mean MSE was 171.9 (SD, 38.9), compared with 176.8 (SD, 98.0; P = 0.67) for the ophthalmologists and 398.2 (SD, 645.4; P = 0.18) for the medical students. In external validation on the Singapore Malay Eye Study (sampled to reflect the cataract severity distribution in AREDS), the MSE for DeepLensNet was 1.27 for NS and 25.5 for PSC. CONCLUSIONS: DeepLensNet performed automated and quantitative classification of cataract severity for all 3 types of age-related cataract. For the 2 most common types (NS and CLO), the accuracy was significantly superior to that of ophthalmologists; for the least common type (PSC), it was similar. DeepLensNet may have wide potential applications in both clinical and research domains. In the future, such approaches may increase the accessibility of cataract assessment globally. The code and models are available at https://github.com/ncbi/deeplensnet.
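A minimal sketch (invented grades) of the study's outcome measure: mean squared error between predicted and ground-truth severity grades, computed for a model and for a human grader on the same eyes:

```python
import numpy as np

truth = np.array([2.0, 4.5, 1.3, 6.0])   # e.g., nuclear sclerosis grades on the 0.9-7.1 scale
model = np.array([2.2, 4.1, 1.5, 5.6])   # hypothetical DL predictions
grader = np.array([2.8, 3.5, 2.0, 5.0])  # hypothetical human grades

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a - b) ** 2))

print(f"model MSE = {mse(truth, model):.2f}, grader MSE = {mse(truth, grader):.2f}")
```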


Subject(s)
Cataract Extraction, Cataract, Deep Learning, Cataract/diagnosis, Humans, Photography
8.
Nat Aging ; 2(3): 264-271, 2022 03.
Article in English | MEDLINE | ID: mdl-37118370

ABSTRACT

Age-related cataracts are the leading cause of visual impairment among older adults. Many significant cases remain undiagnosed or neglected in communities, owing to the limited availability or accessibility of cataract screening. In the present study, we report the development and validation of a retinal photograph-based, deep-learning algorithm for automated detection of visually significant cataracts, using more than 25,000 images from population-based studies. In the internal test set, the area under the receiver operating characteristic curve (AUROC) was 96.6%. External testing performed across three studies showed AUROCs of 91.6-96.5%. In a separate test set of 186 eyes, we further compared the algorithm's performance with 4 ophthalmologists' evaluations. The algorithm performed comparably, if not slightly better (sensitivity of 93.3% versus 51.7-96.6% for the ophthalmologists, and specificity of 99.0% versus 90.7-97.9%). Our findings show the potential of a retinal photograph-based screening tool for visually significant cataracts among older adults, enabling more appropriate referrals to tertiary eye centers.
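A minimal sketch (invented data) of the sensitivity/specificity comparison reported above, computed at a fixed probability threshold:

```python
import numpy as np

def sens_spec(y_true: np.ndarray, y_score: np.ndarray, thr: float = 0.5):
    pred = y_score >= thr
    sens = (pred & (y_true == 1)).sum() / (y_true == 1).sum()   # true-positive rate
    spec = (~pred & (y_true == 0)).sum() / (y_true == 0).sum()  # true-negative rate
    return sens, spec

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 186)  # 186 eyes, mirroring the comparison set's size
s = np.clip(0.7 * y + rng.normal(0.15, 0.2, 186), 0, 1)
sens, spec = sens_spec(y, s)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```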


Subject(s)
Cataract, Deep Learning, Humans, Aged, Retina/diagnostic imaging, Cataract/diagnosis, ROC Curve, Algorithms
10.
Br J Ophthalmol ; 106(12): 1642-1647, 2022 12.
Article in English | MEDLINE | ID: mdl-34244208

ABSTRACT

BACKGROUND/AIMS: To evaluate the performance of deep learning (DL) algorithms for detecting the presence and extent of pterygium, based on colour anterior segment photographs (ASPs) taken with slit-lamp and hand-held cameras. METHODS: Referable pterygium was defined as extension towards the cornea from the limbus of >2.50 mm or base width at the limbus of >5.00 mm. 2503 images from the Singapore Epidemiology of Eye Diseases (SEED) study were used as the development set. Algorithms were validated on an internal set from the SEED cohort (629 images; 55.3% pterygium, 8.4% referable pterygium) and tested on two external clinic-based sets: set 1 with 2610 slit-lamp ASPs (2.8% pterygium, 0.7% referable pterygium) and set 2 with 3701 hand-held ASPs (2.5% pterygium, 0.9% referable pterygium). RESULTS: The algorithm's area under the receiver operating characteristic curve (AUROC) for detection of any pterygium was 99.5% (sensitivity=98.6%; specificity=99.0%) in the internal test set, 99.1% (sensitivity=95.9%; specificity=98.5%) in external test set 1, and 99.7% (sensitivity=100.0%; specificity=88.3%) in external test set 2. For referable pterygium, the algorithm's AUROC was 98.5% (sensitivity=94.0%; specificity=95.3%) in the internal test set, 99.7% (sensitivity=87.2%; specificity=99.4%) in external set 1, and 99.0% (sensitivity=94.3%; specificity=98.0%) in external set 2. CONCLUSION: DL algorithms based on ASPs can detect the presence of pterygium and referable-level pterygium with optimal sensitivity and specificity. These algorithms, particularly if used with a hand-held camera, may potentially serve as a simple screening tool for detection of referable pterygium. Further validation in community settings is warranted. SYNOPSIS/PRECIS: DL algorithms based on ASPs detect the presence of pterygium and referable-level pterygium optimally, and may be used as a simple screening tool for the detection of referable pterygium in community screenings.
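A minimal sketch of the referable-pterygium rule stated above; in the study, extracting these measurements from the photograph is the DL model's task, so the function below only encodes the definition:

```python
def is_referable_pterygium(extension_mm: float, base_width_mm: float) -> bool:
    """Referable if corneal extension > 2.50 mm or limbal base width > 5.00 mm."""
    return extension_mm > 2.50 or base_width_mm > 5.00

print(is_referable_pterygium(2.7, 4.0))  # True: extension exceeds 2.50 mm
print(is_referable_pterygium(1.2, 4.8))  # False: neither threshold exceeded
```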


Subject(s)
Deep Learning, Eye Diseases, Pterygium, Humans, Pterygium/diagnosis, Algorithms, Area Under Curve, Eye Diseases/diagnosis
11.
Lancet Digit Health ; 3(1): e29-e40, 2021 01.
Article in English | MEDLINE | ID: mdl-33735066

ABSTRACT

BACKGROUND: In current approaches to vision screening in the community, a simple and efficient process is needed to identify individuals who should be referred to tertiary eye care centres for vision loss related to eye diseases. The emergence of deep learning technology offers new opportunities to revolutionise this clinical referral pathway. We aimed to assess the performance of a newly developed deep learning algorithm for detection of disease-related visual impairment. METHODS: In this proof-of-concept study, using retinal fundus images from 15 175 eyes with complete data related to best-corrected visual acuity or pinhole visual acuity from the Singapore Epidemiology of Eye Diseases Study, we first developed a single-modality deep learning algorithm based on retinal photographs alone for detection of any disease-related visual impairment (defined as eyes from patients with major eye diseases and best-corrected visual acuity of <20/40), and moderate or worse disease-related visual impairment (eyes with disease and best-corrected visual acuity of <20/60). After development of the algorithm, we tested it internally, using a new set of 3803 eyes from the Singapore Epidemiology of Eye Diseases Study. We then tested it externally using three population-based studies (the Beijing Eye study [6239 eyes], Central India Eye and Medical study [6526 eyes], and Blue Mountains Eye Study [2002 eyes]), and two clinical studies (the Chinese University of Hong Kong's Sight Threatening Diabetic Retinopathy study [971 eyes] and the Outram Polyclinic Study [1225 eyes]). The algorithm's performance in each dataset was assessed on the basis of the area under the receiver operating characteristic curve (AUC). FINDINGS: In the internal test dataset, the AUC for detection of any disease-related visual impairment was 94·2% (95% CI 93·0-95·3; sensitivity 90·7% [87·0-93·6]; specificity 86·8% [85·6-87·9]). The AUC for moderate or worse disease-related visual impairment was 93·9% (95% CI 92·2-95·6; sensitivity 94·6% [89·6-97·6]; specificity 81·3% [80·0-82·5]). Across the five external test datasets (16 993 eyes), the algorithm achieved AUCs ranging between 86·6% (83·4-89·7; sensitivity 87·5% [80·7-92·5]; specificity 70·0% [66·7-73·1]) and 93·6% (92·4-94·8; sensitivity 87·8% [84·1-90·9]; specificity 87·1% [86·2-88·0]) for any disease-related visual impairment, and the AUCs for moderate or worse disease-related visual impairment ranged between 85·9% (81·8-90·1; sensitivity 84·7% [73·0-92·8]; specificity 74·4% [71·4-77·2]) and 93·5% (91·7-95·3; sensitivity 90·3% [84·2-94·6]; specificity 84·2% [83·2-85·1]). INTERPRETATION: This proof-of-concept study shows the potential of a single-modality, function-focused tool in identifying visual impairment related to major eye diseases, providing more timely and pinpointed referral of patients with disease-related visual impairment from the community to tertiary eye hospitals. FUNDING: National Medical Research Council, Singapore.
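A minimal sketch of the two label definitions used above (any disease-related visual impairment: major eye disease plus BCVA <20/40; moderate or worse: disease plus BCVA <20/60), with best-corrected visual acuity expressed as a decimal Snellen fraction:

```python
def impairment_label(has_major_disease: bool, bcva: float) -> str:
    """bcva as a decimal fraction: 20/40 -> 0.5, 20/60 -> ~0.33."""
    if has_major_disease and bcva < 20 / 60:
        return "moderate or worse disease-related visual impairment"
    if has_major_disease and bcva < 20 / 40:
        return "any disease-related visual impairment"
    return "no disease-related visual impairment"

print(impairment_label(True, 20 / 50))   # any impairment (0.40 < 0.50)
print(impairment_label(True, 20 / 80))   # moderate or worse (0.25 < 0.33)
print(impairment_label(False, 20 / 80))  # no disease-related impairment
```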


Subject(s)
Algorithms, Deep Learning, Eye Diseases/complications, Vision Disorders/diagnosis, Vision Disorders/etiology, Aged, Area Under Curve, Asian People, Female, Humans, Male, Middle Aged, Photography/methods, Proof of Concept Study, ROC Curve, Sensitivity and Specificity, Singapore/epidemiology
12.
Asia Pac J Ophthalmol (Phila) ; 9(2): 88-95, 2020.
Article in English | MEDLINE | ID: mdl-32349116

ABSTRACT

The rising popularity of artificial intelligence (AI) in ophthalmology is fuelled by the ever-increasing clinical "big data" that can be used for algorithm development. Cataract is one of the leading causes of visual impairment worldwide. However, compared with other major age-related eye diseases, such as diabetic retinopathy, age-related macular degeneration, and glaucoma, AI development in the domain of cataract is still relatively underexplored. In this regard, several previous studies explored algorithms for automated cataract assessment using either slit-lamp or color fundus photographs. Several other study groups proposed or derived new AI-based calculations of intraocular lens power for pre-cataract-surgery planning. Along with advancements in the digitization of clinical data, data curation for future cataract-related AI development is bound to undergo significant improvements in the foreseeable future. Even though most of these previous studies reported promising early performance, limitations such as the lack of robust, high-quality training data and the lack of external validation remain. In the next phase of work, apart from algorithm performance, it will also be pertinent to evaluate the deployment angles, feasibility, efficiency, and cost-effectiveness of these new cataract-related AI systems.


Subject(s)
Artificial Intelligence/trends, Cataract Extraction, Cataract/diagnosis, Diagnostic Techniques, Ophthalmological, Humans, Vision Disorders/diagnosis, Vision Disorders/rehabilitation