Results 1 - 3 of 3
1.
Adv Ophthalmol Pract Res; 4(3): 164-172, 2024.
Article in English | MEDLINE | ID: mdl-39114269

ABSTRACT

Background: Uncorrected refractive error is a major cause of vision impairment worldwide, and its increasing prevalence necessitates effective screening and management strategies. Meanwhile, deep learning, a subset of artificial intelligence, has significantly advanced ophthalmological diagnostics by automating tasks that previously required extensive clinical expertise. Although recent studies have investigated the use of deep learning models for refractive power detection through various imaging techniques, a comprehensive systematic review on this topic has yet to be done. This review aims to summarise and evaluate the performance of ocular image-based deep learning models in predicting refractive errors.

Main text: We searched three databases (PubMed, Scopus, Web of Science) up to June 2023, focusing on deep learning applications in detecting refractive error from ocular images. We included studies that reported refractive error outcomes, regardless of publication year. We systematically extracted and evaluated the continuous outcomes (sphere, SE, cylinder) and categorical outcomes (myopia), ground truth measurements, ocular imaging modalities, deep learning models, and performance metrics, adhering to PRISMA guidelines. Nine studies were identified and categorised into three groups: retinal photo-based (n = 5), OCT-based (n = 1), and external ocular photo-based (n = 3). For high myopia prediction, retinal photo-based models achieved AUCs between 0.91 and 0.98, sensitivity between 85.10% and 97.80%, and specificity between 76.40% and 94.50%. For continuous prediction, retinal photo-based models reported MAE ranging from 0.31D to 2.19D and R² between 0.05 and 0.96. The OCT-based model achieved an AUC of 0.79-0.81, sensitivity of 82.30%-87.20%, and specificity of 61.70%-68.90%. For external ocular photo-based models, AUC ranged from 0.91 to 0.99, sensitivity from 81.13% to 84.00%, specificity from 74.00% to 86.42%, MAE from 0.07D to 0.18D, and accuracy from 81.60% to 96.70%. Collectively, the included papers showed promising performance, in particular the retinal photo-based and external eye photo-based deep learning models.

Conclusions: The integration of deep learning models and ocular imaging for refractive error detection appears promising. However, their real-world clinical utility in current screening workflows has yet to be evaluated and would require thoughtful consideration in design and implementation.
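For readers who want to see how the summary metrics reported above are typically computed, the following is a minimal Python sketch using scikit-learn. The arrays, predictions, and the -6.00 D high-myopia threshold are illustrative assumptions, not data or definitions taken from any of the reviewed studies.

```python
# Illustrative computation of the metrics summarised in this review: AUC,
# sensitivity, and specificity for high-myopia classification, plus MAE and R^2
# for continuous spherical-equivalent (SE) prediction. Values are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, mean_absolute_error, r2_score

# Hypothetical ground-truth SEs (dioptres) and model predictions.
true_se = np.array([-7.2, -1.5, -6.8, 0.5, -9.1, -2.0])
pred_se = np.array([-6.9, -1.8, -6.1, 0.2, -8.5, -2.6])

# Continuous outcomes: mean absolute error (in dioptres) and R^2.
mae = mean_absolute_error(true_se, pred_se)
r2 = r2_score(true_se, pred_se)

# Categorical outcome: high myopia defined here (as an assumption) as SE <= -6.00 D.
true_high = (true_se <= -6.0).astype(int)
scores = -pred_se  # more negative predicted SE -> higher "high myopia" score
auc = roc_auc_score(true_high, scores)

tn, fp, fn, tp = confusion_matrix(true_high, (pred_se <= -6.0).astype(int)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"MAE={mae:.2f} D, R2={r2:.2f}, AUC={auc:.2f}, "
      f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}")
```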

2.
Lancet Diabetes Endocrinol; 12(8): 569-595, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39054035

ABSTRACT

Artificial intelligence (AI) use in diabetes care is increasingly being explored to personalise care for people with diabetes and adapt treatments for complex presentations. However, the rapid advancement of AI also introduces challenges, such as potential biases, ethical considerations, and implementation barriers to ensuring that its deployment is equitable. Ensuring inclusive and ethical development of AI technology can empower both health-care providers and people with diabetes in managing the condition. In this Review, we explore and summarise the current and future prospects of AI across the diabetes care continuum, from enhancing screening and diagnosis to optimising treatment and predicting and managing complications.


Subject(s)
Artificial Intelligence, Diabetes Mellitus, Humans, Artificial Intelligence/trends, Diabetes Mellitus/therapy, Diabetes Mellitus/diagnosis
3.
EBioMedicine; 95: 104770, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37625267

ABSTRACT

BACKGROUND: Large language models (LLMs) are garnering wide interest due to their human-like and contextually relevant responses. However, LLMs' accuracy across specific medical domains has yet to be thoroughly evaluated. Myopia is a frequent topic on which patients and parents commonly seek information online. Our study evaluated the performance of three LLMs, namely ChatGPT-3.5, ChatGPT-4.0, and Google Bard, in delivering accurate responses to common myopia-related queries.

METHODS: We curated thirty-one commonly asked myopia care-related questions, which were categorised into six domains: pathogenesis, risk factors, clinical presentation, diagnosis, treatment and prevention, and prognosis. Each question was posed to the LLMs, and their responses were independently graded by three consultant-level paediatric ophthalmologists on a three-point accuracy scale (poor, borderline, good). A majority consensus approach was used to determine the final rating for each response. Responses rated 'good' were further evaluated for comprehensiveness on a five-point scale, whereas responses rated 'poor' were further prompted for self-correction and then re-evaluated for accuracy.

FINDINGS: ChatGPT-4.0 demonstrated superior accuracy, with 80.6% of responses rated as 'good', compared with 61.3% for ChatGPT-3.5 and 54.8% for Google Bard (Pearson's chi-squared test, all p ≤ 0.009). All three LLM chatbots showed high mean comprehensiveness scores (Google Bard: 4.35; ChatGPT-4.0: 4.23; ChatGPT-3.5: 4.11, out of a maximum score of 5). All LLM chatbots also demonstrated substantial self-correction capabilities: 66.7% (2 of 3) of ChatGPT-4.0's, 40% (2 of 5) of ChatGPT-3.5's, and 60% (3 of 5) of Google Bard's responses improved after self-correction. The LLM chatbots performed consistently across domains, except for 'treatment and prevention'. However, ChatGPT-4.0 still performed superiorly in this domain, receiving 70% 'good' ratings, compared with 40% for ChatGPT-3.5 and 45% for Google Bard (Pearson's chi-squared test, all p ≤ 0.001).

INTERPRETATION: Our findings underscore the potential of LLMs, particularly ChatGPT-4.0, for delivering accurate and comprehensive responses to myopia-related queries. Continuous strategies and evaluations to improve LLMs' accuracy remain crucial.

FUNDING: Dr Yih-Chung Tham was supported by the National Medical Research Council of Singapore (NMRC/MOH/HCSAINV21nov-0001).
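As an illustration of the grading and comparison steps described in this abstract, the following is a minimal Python sketch: a majority-consensus reduction of three graders' ratings, followed by a chi-squared comparison of the proportion of 'good' responses between two chatbots. The counts are back-calculated from the abstract's percentages (25/31 and 19/31 'good'); the paper's exact statistical procedure may differ, so this is an assumption-laden sketch, not the authors' analysis code.

```python
# Sketch of majority-consensus grading and a chi-squared comparison of 'good'
# rating proportions. Counts are derived from the abstract's percentages and
# serve only as an illustration.
from collections import Counter
from scipy.stats import chi2_contingency

def majority_consensus(ratings):
    """Return the rating given by at least two of the three graders, else 'no consensus'."""
    label, count = Counter(ratings).most_common(1)[0]
    return label if count >= 2 else "no consensus"

print(majority_consensus(["good", "good", "borderline"]))  # -> good

# 2x2 contingency table: 'good' vs 'not good' responses, ChatGPT-4.0 vs ChatGPT-3.5.
table = [[25, 31 - 25],   # ChatGPT-4.0: 25 of 31 rated 'good'
         [19, 31 - 19]]   # ChatGPT-3.5: 19 of 31 rated 'good'
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")
```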


Subject(s)
Benchmarking, Myopia, Humans, Child, Search Engine, Consensus, Language, Myopia/diagnosis, Myopia/epidemiology, Myopia/therapy