Results 1 - 2 of 2
1.
Br J Ophthalmol; 2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37726156

ABSTRACT

AIMS: To determine axial length (AL) elongation profiles in children aged 3-6 years in an Asian population.

METHODS: Eligible subjects were recruited from the Growing Up in Singapore Towards Healthy Outcomes birth cohort. AL was measured with the IOLMaster (Carl Zeiss Meditec, Jena, Germany) at 3 and 6 years. Anthropometric measurements at birth, cycloplegic refraction at 3 and 6 years, questionnaires on the children's behavioural habits at 2 years, and parental spherical equivalent refraction were also obtained. A multivariable linear regression model with generalised estimating equations was used to determine factors associated with AL elongation.

RESULTS: 273 eyes of 194 children were included. Mean AL increased from 21.72±0.59 mm at 3 years to 22.52±0.66 mm at 6 years (p<0.001). Eyes that were myopic at 6 years showed greater AL elongation (1.02±0.34 mm) than emmetropic eyes (0.85±0.25 mm, p=0.008) and hyperopic eyes (0.74±0.16 mm, p<0.001). The 95th percentile limit of AL elongation was 1.59 mm in myopes, 1.34 mm in emmetropes and 1.00 mm in hyperopes. Greater birth weight (per 100 g, β=0.010, p=0.02) was significantly associated with greater AL elongation from 3 to 6 years, whereas parental and behavioural factors assessed at 2 years were not (all p≥0.08).

CONCLUSION: In this preschool cohort, AL elongated by an average of 0.80 mm from 3 to 6 years, with myopes showing the greatest elongation. The differences in 95th percentile limits for AL elongation between myopes, emmetropes and hyperopes can be valuable in identifying myopia development in preschool children.
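To make the statistical approach concrete, the following is a minimal, hypothetical sketch in Python with statsmodels (not the authors' actual analysis code) of fitting a linear model with generalised estimating equations so that the two eyes of one child are treated as correlated observations. The variable names and the example data frame are invented for illustration only.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.genmod.cov_struct import Exchangeable

# Hypothetical long-format data: one row per eye, grouped by child ID.
df = pd.DataFrame({
    "child_id":       [1, 1, 2, 2, 3, 3, 4, 4],
    "al_elongation":  [0.82, 0.85, 1.05, 1.10, 0.70, 0.72, 0.95, 0.90],  # mm, 3 to 6 years
    "birth_weight":   [31.0, 31.0, 35.5, 35.5, 28.0, 28.0, 33.0, 33.0],  # units of 100 g
    "refractive_grp": ["emmetrope", "emmetrope", "myope", "myope",
                       "hyperope", "hyperope", "myope", "myope"],
})

# A GEE with an exchangeable working correlation accounts for within-child
# (between-eye) correlation, analogous to the model described in the abstract.
model = smf.gee(
    "al_elongation ~ birth_weight + C(refractive_grp)",
    groups="child_id",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=Exchangeable(),
)
result = model.fit()
print(result.summary())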

2.
EBioMedicine; 95: 104770, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37625267

ABSTRACT

BACKGROUND: Large language models (LLMs) are garnering wide interest owing to their human-like and contextually relevant responses. However, LLMs' accuracy across specific medical domains has not yet been thoroughly evaluated. Myopia is a frequent topic on which patients and parents commonly seek information online. Our study evaluated the performance of three LLMs, namely ChatGPT-3.5, ChatGPT-4.0 and Google Bard, in delivering accurate responses to common myopia-related queries.

METHODS: We curated thirty-one commonly asked myopia care-related questions, categorised into six domains: pathogenesis, risk factors, clinical presentation, diagnosis, treatment and prevention, and prognosis. Each question was posed to the LLMs, and their responses were independently graded by three consultant-level paediatric ophthalmologists on a three-point accuracy scale (poor, borderline, good). A majority-consensus approach determined the final rating for each response. Responses rated 'good' were further evaluated for comprehensiveness on a five-point scale; responses rated 'poor' were prompted for self-correction and then re-evaluated for accuracy.

FINDINGS: ChatGPT-4.0 demonstrated superior accuracy, with 80.6% of responses rated 'good', compared with 61.3% for ChatGPT-3.5 and 54.8% for Google Bard (Pearson's chi-squared test, all p ≤ 0.009). All three LLM chatbots showed high mean comprehensiveness scores (Google Bard: 4.35; ChatGPT-4.0: 4.23; ChatGPT-3.5: 4.11, out of a maximum of 5). All chatbots also demonstrated substantial self-correction capability: 66.7% (2 of 3) of ChatGPT-4.0's, 40% (2 of 5) of ChatGPT-3.5's and 60% (3 of 5) of Google Bard's 'poor' responses improved after self-correction. Performance was consistent across domains except 'treatment and prevention', where ChatGPT-4.0 nevertheless remained superior, receiving 70% 'good' ratings compared with 40% for ChatGPT-3.5 and 45% for Google Bard (Pearson's chi-squared test, all p ≤ 0.001).

INTERPRETATION: Our findings underscore the potential of LLMs, particularly ChatGPT-4.0, for delivering accurate and comprehensive responses to myopia-related queries. Continuous strategies and evaluations to improve LLMs' accuracy remain crucial.

FUNDING: Dr Yih-Chung Tham was supported by the National Medical Research Council of Singapore (NMRC/MOH/HCSAINV21nov-0001).
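As a concrete illustration of the accuracy comparison, the sketch below (Python with SciPy, hypothetical and not the authors' code) runs Pearson's chi-squared test on 'good' versus 'not good' counts reconstructed from the reported percentages. The paper's actual testing setup, for example how the three graders' ratings enter the comparison, may differ, so the resulting p-value is illustrative only.

from scipy.stats import chi2_contingency

# Counts of 'good' vs 'not good' ratings out of 31 questions per chatbot,
# reconstructed from the reported percentages (80.6%, 61.3%, 54.8%).
n_questions = 31
good = {"ChatGPT-4.0": 25, "ChatGPT-3.5": 19, "Google Bard": 17}

# One row per chatbot: [good, not good].
table = [[g, n_questions - g] for g in good.values()]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p:.3f}")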


Subject(s)
Benchmarking; Myopia; Humans; Child; Search Engine; Consensus; Language; Myopia/diagnosis; Myopia/epidemiology; Myopia/therapy