Results 1-20 of 23
1.
Ophthalmology ; 131(6): 692-699, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38160880

ABSTRACT

PURPOSE: Chronic kidney disease (CKD) may elevate susceptibility to age-related macular degeneration (AMD) because of shared risk factors, pathogenic mechanisms, and genetic polymorphisms. Given the inconclusive findings in prior studies, we investigated this association using extensive datasets in the Asian Eye Epidemiology Consortium. DESIGN: Cross-sectional study. PARTICIPANTS: Fifty-one thousand two hundred fifty-three participants from 10 distinct population-based Asian studies. METHODS: Age-related macular degeneration was defined using the Wisconsin Age-Related Maculopathy Grading System, the International Age-Related Maculopathy Epidemiological Study Group Classification, or the Beckman Clinical Classification. Chronic kidney disease was defined as an estimated glomerular filtration rate (eGFR) of less than 60 ml/min per 1.73 m². A pooled analysis using individual-level participant data was performed to examine the associations of CKD and eGFR with AMD (early and late), adjusting for age, sex, hypertension, diabetes, body mass index, smoking status, total cholesterol, and study groups. MAIN OUTCOME MEASURES: Odds ratio (OR) of early and late AMD. RESULTS: Among 51 253 participants (mean age, 54.1 ± 14.5 years), 5079 (9.9%) had CKD. The prevalence of early AMD was 9.0%, and that of late AMD was 0.71%. After adjusting for confounders, individuals with CKD had higher odds of late AMD (OR, 1.46; 95% confidence interval [CI], 1.11-1.93; P = 0.008). Similarly, poorer kidney function (per 10-unit eGFR decrease) was associated with late AMD (OR, 1.12; 95% CI, 1.05-1.19; P = 0.001). Nevertheless, CKD and eGFR were not associated significantly with early AMD (all P ≥ 0.149). CONCLUSIONS: Pooled analysis from 10 distinct Asian population-based studies revealed that CKD and compromised kidney function are significantly associated with late AMD. This finding further underscores the importance of ocular examinations in patients with CKD. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.


Subjects
Glomerular Filtration Rate; Macular Degeneration; Renal Insufficiency, Chronic; Humans; Male; Cross-Sectional Studies; Female; Middle Aged; Renal Insufficiency, Chronic/epidemiology; Renal Insufficiency, Chronic/physiopathology; Aged; Macular Degeneration/physiopathology; Macular Degeneration/epidemiology; Risk Factors; Asian People/ethnology; Adult; Odds Ratio; Prevalence; Aged, 80 and over
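
The adjusted odds ratios in entry 1 come from covariate-adjusted logistic regression on pooled individual-level data. The sketch below illustrates that kind of analysis on simulated data with hypothetical column names (statsmodels assumed); it is not the consortium's code, and the effect sizes are arbitrary.

```python
# Minimal sketch (not the consortium's code): covariate-adjusted logistic
# regression yielding an odds ratio with 95% CI, as reported in entry 1.
# All column names and the simulated data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(54, 14, n),
    "female": rng.integers(0, 2, n),
    "ckd": rng.integers(0, 2, n),        # CKD status (eGFR < 60)
    "diabetes": rng.integers(0, 2, n),
})
# Simulate late AMD with a positive CKD effect, for illustration only.
logit = -6 + 0.05 * df["age"] + 0.4 * df["ckd"] + 0.2 * df["diabetes"]
df["late_amd"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("late_amd ~ ckd + age + female + diabetes", data=df).fit(disp=0)
or_ckd = np.exp(model.params["ckd"])
ci_low, ci_high = np.exp(model.conf_int().loc["ckd"])
print(f"Adjusted OR for CKD: {or_ckd:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```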
2.
Singapore Med J ; 65(3): 159-166, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38527300

ABSTRACT

ABSTRACT: With the rise of generative artificial intelligence (AI) and AI-powered chatbots, the landscape of medicine and healthcare is on the brink of significant transformation. This perspective delves into the prospective influence of AI on medical education, residency training and the continuing education of attending physicians or consultants. We begin by highlighting the constraints of the current education model: limited faculty, the challenge of maintaining uniformity amidst burgeoning medical knowledge, and the limitations of 'traditional' linear knowledge acquisition. We introduce 'AI-assisted' and 'AI-integrated' paradigms for medical education and physician training, targeting a more universal, accessible, high-quality and interconnected educational journey. We differentiate between essential knowledge for all physicians, specialised insights for clinician-scientists and mastery-level proficiency for clinician-computer scientists. With its transformative potential in healthcare and service delivery, AI is poised to reshape the pedagogy of medical education and residency training.


Subjects
Education, Medical; Physicians; Humans; Artificial Intelligence; Prospective Studies; Education, Continuing
3.
Adv Ophthalmol Pract Res ; 4(3): 120-127, 2024.
Article in English | MEDLINE | ID: mdl-38846624

ABSTRACT

Background: The convergence of smartphone technology and artificial intelligence (AI) has revolutionized the landscape of ophthalmic care, offering unprecedented opportunities for diagnosis, monitoring, and management of ocular conditions. Nevertheless, there is a lack of systematic studies discussing the integration of smartphones and AI in this field. Main text: This review includes 52 studies and explores the integration of smartphones and AI in ophthalmology, delineating their collective impact on screening methodologies, disease detection, telemedicine initiatives, and patient management. The collective findings from the curated studies indicate promising performance of smartphone-based AI screening for various ocular diseases, encompassing major retinal diseases, glaucoma, cataract, visual impairment in children and ocular surface diseases. Moreover, smartphone-based imaging modalities, coupled with AI algorithms, can provide timely, efficient and cost-effective screening for ocular pathologies. This modality can also facilitate patient self-monitoring and remote patient monitoring, and enhance accessibility to eye care services, particularly in underserved regions. Challenges involving data privacy, algorithm validation, regulatory frameworks and issues of trust still need to be addressed. Furthermore, evaluation of real-world implementation is also imperative, yet real-world prospective studies are currently lacking. Conclusions: Smartphone ocular imaging merged with AI enables earlier, precise diagnoses, personalized treatments, and enhanced service accessibility in eye care. Collaboration is crucial to navigate ethical and data security challenges while responsibly leveraging these innovations, promising a potential revolution in care access and global eye health equity.

4.
Ophthalmol Sci ; 4(5): 100538, 2024.
Article in English | MEDLINE | ID: mdl-39051044

ABSTRACT

Objective: Our objective was to determine the effects of lipids and complement proteins on early and intermediate age-related macular degeneration (AMD) stages using machine learning models integrating metabolomic and proteomic data. Design: Nested case-control study. Subjects and Controls: The analyses were performed in a subset of the Singapore Indian Chinese Cohort (SICC) Eye Study. Among the 6753 participants, we randomly selected 155 Indian and 155 Chinese cases of AMD and matched them with 310 controls on age, sex, and ethnicity. Methods: We measured 35 complement proteins and 56 lipids using mass spectrometry and nuclear magnetic resonance, respectively. We first selected the lipids and complement proteins contributing most to early and intermediate AMD using random forest models. Then, we estimated their effects using a multinomial model adjusted for potential confounders. Main Outcome Measures: Age-related macular degeneration was classified using the Beckman classification system. Results: Among the 310 individuals with AMD, 166 (53.5%) had early AMD and 144 (46.5%) had intermediate AMD. First, high-density lipoprotein (HDL) particle diameter was positively associated with both early and intermediate AMD (odds ratio [OR]early = 1.69; 95% confidence interval [CI], 1.11-2.55 and ORintermediate = 1.72; 95% CI, 1.11-2.66 per 1-standard deviation increase in HDL diameter). Second, complement protein 2 (C2), complement C1 inhibitor (IC1), complement protein 6 (C6), complement protein 1QC (C1QC) and complement factor H-related protein 1 (FHR1) were associated with AMD. C2 was positively associated with both early and intermediate AMD (ORearly = 1.58; 95% CI, 1.08-2.30 and ORintermediate = 1.56; 95% CI, 1.04-2.34). C6 was positively associated with early AMD (ORearly = 1.41; 95% CI, 1.03-1.93). However, IC1 was negatively associated with early AMD (ORearly = 0.62; 95% CI, 0.38-0.99), whereas C1QC (ORintermediate = 0.63; 95% CI, 0.42-0.93) and FHR1 (ORintermediate = 0.73; 95% CI, 0.54-0.98) were both negatively associated with intermediate AMD. Conclusions: Although both HDL diameter and C2 levels show associations with both early and intermediate AMD, dysregulations of IC1, C6, C1QC, and FHR1 are only observed at specific stages of AMD. These findings underscore the complexity of complement system dysregulation in AMD, which appears to vary depending on disease severity. Financial Disclosures: The authors have no proprietary or commercial interest in any materials discussed in this article.
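
Entry 4 describes a two-step approach: random-forest ranking of candidate biomarkers followed by a multinomial model for a three-level outcome. The sketch below mimics that workflow on simulated data with made-up marker names (scikit-learn and statsmodels assumed); it is not the authors' pipeline.

```python
# Minimal sketch (hypothetical data and feature names): rank candidate
# biomarkers with a random forest, then estimate per-feature effects on a
# 3-level outcome (control / early AMD / intermediate AMD) with a
# multinomial logit.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 620
features = [f"marker_{i}" for i in range(10)]   # e.g., lipids, complement proteins
X = pd.DataFrame(rng.normal(size=(n, len(features))), columns=features)
y = rng.integers(0, 3, n)   # 0 = control, 1 = early AMD, 2 = intermediate AMD

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top = X.columns[np.argsort(rf.feature_importances_)[::-1][:5]]
print("Top-ranked markers:", list(top))

# Multinomial model on the selected markers (already on a standardised scale here).
mn = sm.MNLogit(y, sm.add_constant(X[top])).fit(disp=0)
print(np.exp(mn.params))   # odds ratios per 1-SD increase, vs. the control class
```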

5.
Front Med (Lausanne) ; 11: 1359073, 2024.
Article in English | MEDLINE | ID: mdl-39050528

ABSTRACT

Objective: The aim of this study was to evaluate the accuracy, comprehensiveness, and safety of a publicly available large language model (LLM), ChatGPT, in the sub-domain of glaucoma. Design: Evaluation of diagnostic test or technology. Subjects, participants and/or controls: We evaluated the responses of an artificial intelligence chatbot, ChatGPT (version GPT-3.5, OpenAI). Methods, intervention or testing: We curated 24 clinically relevant questions in the domain of glaucoma. The questions spanned four categories: diagnosis, treatment, surgeries, and ocular emergencies. Each question was posed to the LLM, and the responses obtained were graded by an expert panel of three glaucoma specialists with more than 30 years of combined experience in the field. For responses that performed poorly, the LLM was further prompted to self-correct. The subsequent responses were then re-evaluated by the expert panel. Main outcome measures: Accuracy, comprehensiveness, and safety of the responses of a public domain LLM. Results: There were 24 questions and three expert graders, giving a total of n = 72 responses. Scores ranged from 1 to 4, where 4 represents a complete and accurate response. The mean score of the expert panel was 3.29 with a standard deviation of 0.484. Of the 24 question-response pairs, seven (29.2%) had a mean inter-grader score of 3 or less. These seven poorly performing question-response pairs were given a chance to self-correct; their mean score rose from 2.96 to 3.58 after self-correction (z-score −3.27, p = 0.001, Mann-Whitney U). After self-correction, the proportion of responses obtaining a full score increased from 22/72 (30.6%) to 12/21 (57.1%) (p = 0.026, χ² test). Conclusion: LLMs show great promise in the realm of glaucoma, with additional capabilities of self-correction. The application of LLMs in glaucoma is still in its infancy and requires further research and validation.
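
Entry 5 compares grader scores before and after self-correction with a Mann-Whitney U test and compares proportions of full-score responses with a chi-square test. The sketch below reproduces those two comparisons with scipy.stats, using hypothetical grading scores alongside the proportions quoted in the abstract; it is not the study's analysis code.

```python
# Minimal sketch of the two statistical comparisons reported in entry 5,
# using made-up mean grading scores (scale 1-4, averaged over three graders).
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

# Scores for the 7 poorly performing question-response pairs, before and
# after prompting the model to self-correct (hypothetical values).
before = np.array([3, 3, 3, 3, 3, 3, 2.7])
after = np.array([4, 4, 3.7, 3.7, 3.3, 3.3, 3])
u, p = mannwhitneyu(before, after, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")

# Proportion of responses receiving a full score, before vs. after
# (counts as quoted in the abstract).
full_before, total_before = 22, 72
full_after, total_after = 12, 21
table = [[full_before, total_before - full_before],
         [full_after, total_after - full_after]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```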

6.
Ophthalmol Sci ; 4(6): 100552, 2024.
Article in English | MEDLINE | ID: mdl-39165694

ABSTRACT

Objective: Vision transformers (ViTs) have shown promising performance in various classification tasks previously dominated by convolutional neural networks (CNNs). However, the performance of ViTs in referable diabetic retinopathy (DR) detection is relatively underexplored. In this study, using retinal photographs, we evaluated the comparative performance of ViTs and CNNs in detecting referable DR. Design: Retrospective study. Participants: A total of 48 269 retinal images from the open-source Kaggle DR detection dataset, the Messidor-1 dataset and the Singapore Epidemiology of Eye Diseases (SEED) study were included. Methods: Using 41 614 retinal photographs from the Kaggle dataset, we developed 5 CNN models (Visual Geometry Group 19, ResNet50, InceptionV3, DenseNet201, and EfficientNetV2S) and 4 ViT models (VAN_small, CrossViT_small, ViT_small, and Hierarchical Vision Transformer using Shifted Windows [SWIN]_tiny) for the detection of referable DR. We defined the presence of referable DR as eyes with moderate or worse DR. The comparative performance of all 9 models was evaluated in the Kaggle internal test dataset (1045 study eyes) and in 2 external test sets, the SEED study (5455 study eyes) and Messidor-1 (1200 study eyes). Main Outcome Measures: Area under the receiver operating characteristic curve (AUC), specificity, and sensitivity. Results: Among all models, the SWIN transformer displayed the highest AUC of 95.7% on the internal test set, significantly outperforming the CNN models (all P < 0.001). The same observation was confirmed in the external test sets, with the SWIN transformer achieving AUCs of 97.3% in SEED and 96.3% in Messidor-1. When the specificity level was fixed at 80% for the internal test, the SWIN transformer achieved the highest sensitivity of 94.4%, significantly better than all the CNN models (sensitivity levels ranging between 76.3% and 83.8%; all P < 0.001). This trend was also consistently observed in both external test sets. Conclusions: Our findings demonstrate that ViTs provide superior performance over CNNs in detecting referable DR from retinal photographs. These results point to the potential of utilizing ViT models to improve and optimize retinal photo-based deep learning for referable DR detection. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
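
The headline metrics in entry 6 are AUC and sensitivity at a fixed 80% specificity. A minimal sketch of how such an operating point can be read off a ROC curve is shown below, using simulated labels and scores (scikit-learn assumed); it is not the study's evaluation code.

```python
# Minimal sketch: AUC and the sensitivity at a fixed 80% specificity operating
# point from model scores, as in the comparison in entry 6. Labels and scores
# are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 1000)                       # 1 = referable DR
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, 1000), 0, 1)

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
target_specificity = 0.80
idx = np.argmax(fpr >= (1 - target_specificity))        # first point at/above 20% FPR
print(f"AUC = {auc:.3f}, sensitivity at 80% specificity ≈ {tpr[idx]:.3f}")
```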

7.
Asia Pac J Ophthalmol (Phila) ; : 100090, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39128549

ABSTRACT

The emergence of generative artificial intelligence (AI) has revolutionized various fields. In ophthalmology, generative AI has the potential to enhance efficiency, accuracy, personalization and innovation in clinical practice and medical research, through processing data, streamlining medical documentation, facilitating patient-doctor communication, aiding in clinical decision-making, and simulating clinical trials. This review focuses on the development and integration of generative AI models into clinical workflows and the scientific research of ophthalmology. It outlines the need for a standard framework for comprehensive assessments, robust evidence, and exploration of the potential of multimodal capabilities and intelligent agents. Additionally, the review addresses the risks of AI model development and application in the clinical service and research of ophthalmology, including data privacy, data bias, adaptation friction, overdependence, and job replacement, based on which we summarize a risk management framework to mitigate these concerns. This review highlights the transformative potential of generative AI in enhancing patient care and improving operational efficiency in clinical service and research in ophthalmology, and advocates for a balanced approach to its adoption.

8.
Eye Vis (Lond) ; 11(1): 17, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38711111

ABSTRACT

BACKGROUND: Artificial intelligence (AI) that utilizes deep learning (DL) has potential for systemic disease prediction using retinal imaging. The retina's unique features enable non-invasive visualization of the central nervous system and microvascular circulation, aiding early detection and personalized treatment plans. This review explores the value of retinal assessment, AI-based retinal biomarkers, and the importance of longitudinal prediction models in personalized care. MAIN TEXT: This narrative review extensively surveys the literature in PubMed and Google Scholar for relevant studies investigating the application of AI-based retinal biomarkers in predicting systemic diseases from retinal fundus photography. The study settings, sample sizes, AI models used and corresponding results were extracted and analysed. This review highlights the substantial potential of AI-based retinal biomarkers in predicting neurodegenerative, cardiovascular, and chronic kidney diseases. Notably, DL algorithms have demonstrated effectiveness in identifying retinal image features associated with cognitive decline, dementia, Parkinson's disease, and cardiovascular risk factors. Furthermore, longitudinal prediction models leveraging retinal images have shown potential for continuous disease risk assessment and early detection. AI-based retinal biomarkers are non-invasive, accurate, and efficient for disease forecasting and personalized care. CONCLUSION: AI-based retinal imaging holds promise for transforming primary care and systemic disease management. Together, the retina's unique features and the power of AI enable early detection and risk stratification, and could help revolutionize disease management. However, to fully realize the potential of AI in this domain, further research and validation in real-world settings are essential.

9.
Adv Ophthalmol Pract Res ; 4(3): 164-172, 2024.
Article in English | MEDLINE | ID: mdl-39114269

ABSTRACT

Background: Uncorrected refractive error is a major cause of vision impairment worldwide, and its increasing prevalence necessitates effective screening and management strategies. Meanwhile, deep learning, a subset of artificial intelligence, has significantly advanced ophthalmological diagnostics by automating tasks that previously required extensive clinical expertise. Although recent studies have investigated the use of deep learning models for refractive power detection through various imaging techniques, a comprehensive systematic review on this topic has yet to be done. This review aims to summarise and evaluate the performance of ocular image-based deep learning models in predicting refractive errors. Main text: We searched three databases (PubMed, Scopus, Web of Science) up to June 2023, focusing on deep learning applications in detecting refractive error from ocular images. We included studies that reported refractive error outcomes, regardless of publication year. We systematically extracted and evaluated the continuous outcomes (sphere, SE, cylinder) and categorical outcomes (myopia), ground truth measurements, ocular imaging modalities, deep learning models, and performance metrics, adhering to PRISMA guidelines. Nine studies were identified and categorised into three groups: retinal photo-based (n = 5), OCT-based (n = 1), and external ocular photo-based (n = 3). For high myopia prediction, retinal photo-based models achieved AUCs between 0.91 and 0.98, sensitivity levels between 85.10% and 97.80%, and specificity levels between 76.40% and 94.50%. For continuous prediction, retinal photo-based models reported MAEs ranging from 0.31 D to 2.19 D, and R² between 0.05 and 0.96. The OCT-based model achieved an AUC of 0.79-0.81, sensitivity of 82.30%-87.20% and specificity of 61.70%-68.90%. For external ocular photo-based models, the AUC ranged from 0.91 to 0.99, sensitivity from 81.13% to 84.00%, specificity from 74.00% to 86.42%, MAE from 0.07 D to 0.18 D, and accuracy from 81.60% to 96.70%. The reported papers collectively showed promising performance, in particular the retinal photo-based and external eye photo-based DL models. Conclusions: The integration of deep learning models and ocular imaging for refractive error detection appears promising. However, their real-world clinical utility in current screening workflows has yet to be evaluated and will require thoughtful consideration in design and implementation.
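
Entry 9 summarises continuous refractive-error predictions with MAE (in dioptres) and R². The sketch below computes those two metrics with scikit-learn on simulated spherical-equivalent values; variable names are hypothetical.

```python
# Minimal sketch of the regression metrics summarised in entry 9 (MAE in
# dioptres and R²) for predicted vs. measured spherical equivalent.
# Values are simulated; variable names are hypothetical.
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(3)
se_true = rng.normal(-2.0, 2.5, 500)                 # measured SE (dioptres)
se_pred = se_true + rng.normal(0, 0.8, 500)          # model prediction with error

print(f"MAE = {mean_absolute_error(se_true, se_pred):.2f} D")
print(f"R^2 = {r2_score(se_true, se_pred):.2f}")
```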

10.
Br J Ophthalmol ; 2024 May 15.
Article in English | MEDLINE | ID: mdl-38749531

ABSTRACT

BACKGROUND/AIMS: To compare the performance of generative versus retrieval-based chatbots in answering patient inquiries regarding age-related macular degeneration (AMD) and diabetic retinopathy (DR). METHODS: We evaluated four chatbots: generative models (ChatGPT-4, ChatGPT-3.5 and Google Bard) and a retrieval-based model (OcularBERT) in a cross-sectional study. Their response accuracy to 45 questions (15 AMD, 15 DR and 15 others) was evaluated and compared. Three masked retinal specialists graded the responses using a three-point Likert scale: either 2 (good, error-free), 1 (borderline) or 0 (poor with significant inaccuracies). The scores were aggregated, ranging from 0 to 6. Based on majority consensus among the graders, the responses were also classified as 'Good', 'Borderline' or 'Poor' quality. RESULTS: Overall, ChatGPT-4 and ChatGPT-3.5 outperformed the other chatbots, both achieving median scores (IQR) of 6 (1), compared with 4.5 (2) for Google Bard and 2 (1) for OcularBERT (all p ≤ 8.4×10⁻³). Based on the consensus approach, 83.3% of ChatGPT-4's responses and 86.7% of ChatGPT-3.5's were rated as 'Good', surpassing Google Bard (50%) and OcularBERT (10%) (all p ≤ 1.4×10⁻²). ChatGPT-4 and ChatGPT-3.5 had no 'Poor' rated responses. Google Bard produced 6.7% Poor responses, and OcularBERT produced 20%. Across question types, ChatGPT-4 outperformed Google Bard only for AMD, while ChatGPT-3.5 outperformed Google Bard for DR and others. CONCLUSION: ChatGPT-4 and ChatGPT-3.5 demonstrated superior performance, followed by Google Bard and OcularBERT. Generative chatbots are potentially capable of answering domain-specific questions outside their original training. Further validation studies are still required prior to real-world implementation.
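
Entry 10 aggregates three graders' 0-2 Likert scores into a 0-6 total and a majority-consensus label. A minimal pandas sketch of that aggregation, on made-up grades, is shown below; it is not the study's grading code.

```python
# Minimal sketch of the grading scheme in entry 10: three masked graders score
# each response 0/1/2; scores are summed (0-6) and a majority-consensus label
# ('Good'/'Borderline'/'Poor') is derived. Data are made up.
import pandas as pd

grades = pd.DataFrame(
    {"grader1": [2, 2, 1, 0], "grader2": [2, 1, 1, 0], "grader3": [2, 2, 2, 1]},
    index=["Q1", "Q2", "Q3", "Q4"],
)
grades["total"] = grades.sum(axis=1)                       # aggregate score, 0-6

label = {2: "Good", 1: "Borderline", 0: "Poor"}

def consensus(row):
    mode = row[["grader1", "grader2", "grader3"]].mode()   # majority grade(s)
    return label[mode.iloc[0]] if len(mode) == 1 else "No consensus"

grades["consensus"] = grades.apply(consensus, axis=1)
print(grades)
```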

11.
Prog Retin Eye Res ; : 101290, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39173942

ABSTRACT

Alzheimer's disease (AD) is the leading cause of dementia worldwide. Current diagnostic modalities for AD generally focus on detecting the presence of amyloid β and tau protein in the brain (for example, positron emission tomography [PET] and cerebrospinal fluid testing), but these are limited by their high cost, invasiveness, and the expertise they require. Retinal imaging exhibits potential in AD screening and risk stratification, as the retina provides a platform for the optical visualization of the central nervous system in vivo, with vascular and neuronal changes that mirror brain pathology. Given the paradigm shift brought by advances in artificial intelligence and the emergence of disease-modifying therapies, this article aims to summarize and review the current literature to highlight 8 trends in an evolving landscape regarding the role and potential value of retinal imaging in AD screening.

12.
J Am Med Inform Assoc ; 31(3): 776-783, 2024 02 16.
Article in English | MEDLINE | ID: mdl-38269644

ABSTRACT

OBJECTIVES: To provide balanced consideration of the opportunities and challenges associated with integrating Large Language Models (LLMs) throughout the medical school continuum. PROCESS: Narrative review of published literature contextualized by current reports of LLM application in medical education. CONCLUSIONS: LLMs like OpenAI's ChatGPT can potentially revolutionize traditional teaching methodologies. LLMs offer several potential advantages to students, including direct access to vast information, facilitation of personalized learning experiences, and enhancement of clinical skills development. For faculty and instructors, LLMs can facilitate innovative approaches to teaching complex medical concepts and foster student engagement. Notable challenges of LLM integration include the risk of fostering academic misconduct, inadvertent overreliance on AI, potential dilution of critical thinking skills, concerns regarding the accuracy and reliability of LLM-generated content, and possible implications for teaching staff.


Subjects
Clinical Competence; Education, Medical; Humans; Reproducibility of Results; Language; Learning
13.
Lancet Diabetes Endocrinol ; 12(8): 569-595, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39054035

ABSTRACT

Artificial intelligence (AI) use in diabetes care is increasingly being explored to personalise care for people with diabetes and adapt treatments for complex presentations. However, the rapid advancement of AI also introduces challenges such as potential biases, ethical considerations, and implementation challenges in ensuring that its deployment is equitable. Ensuring inclusive and ethical developments of AI technology can empower both health-care providers and people with diabetes in managing the condition. In this Review, we explore and summarise the current and future prospects of AI across the diabetes care continuum, from enhancing screening and diagnosis to optimising treatment and predicting and managing complications.


Subjects
Artificial Intelligence; Diabetes Mellitus; Humans; Artificial Intelligence/trends; Diabetes Mellitus/therapy; Diabetes Mellitus/diagnosis
14.
Patterns (N Y) ; 5(3): 100929, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38487802

ABSTRACT

We describe the "DRAC - Diabetic Retinopathy Analysis Challenge", held in conjunction with the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2022). Within this challenge, we provided the DRAC dataset, an ultra-wide optical coherence tomography angiography (UW-OCTA) dataset (1,103 images), addressing three primary clinical tasks: diabetic retinopathy (DR) lesion segmentation, image quality assessment, and DR grading. The scientific community responded positively to the challenge, with 11, 12, and 13 teams submitting different solutions for these three tasks, respectively. This paper presents a concise summary and analysis of the top-performing solutions and results across all challenge tasks. These solutions could provide practical guidance for developing accurate classification and segmentation models for image quality assessment and DR diagnosis using UW-OCTA images, potentially improving the diagnostic capabilities of healthcare professionals. The dataset has been released to support the development of computer-aided diagnostic systems for DR evaluation.
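
Lesion-segmentation tasks such as the one in the DRAC challenge (entry 14) are commonly scored with the Dice similarity coefficient. The sketch below shows a plain NumPy implementation on toy masks; it is illustrative only and is not the challenge's official evaluation code.

```python
# Minimal sketch: the Dice similarity coefficient commonly used to score lesion
# segmentation, illustrated on tiny toy masks rather than UW-OCTA data.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
truth = np.array([[0, 1, 1], [0, 0, 0], [0, 1, 0]])
print(f"Dice = {dice(pred, truth):.3f}")   # 2*2 / (3+3) ≈ 0.667
```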

15.
Br J Ophthalmol ; 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39033014

ABSTRACT

AIMS: To develop and externally test deep learning (DL) models for assessing the image quality of three-dimensional (3D) macular scans from Cirrus and Spectralis optical coherence tomography devices. METHODS: We retrospectively collected two data sets comprising 2277 Cirrus 3D scans and 1557 Spectralis 3D scans, respectively, for training (70%), fine-tuning (10%) and internal validation (20%) from electronic medical and research records at The Chinese University of Hong Kong Eye Centre and the Hong Kong Eye Hospital. Scans with various eye diseases (eg, diabetic macular oedema, age-related macular degeneration, polypoidal choroidal vasculopathy and pathological myopia) and scans of normal eyes from adults and children were included. Two graders labelled each 3D scan as gradable or ungradable according to standardised criteria. We used a 3D version of the residual network (ResNet)-18 for Cirrus 3D scans and a multiple-instance learning pipeline with ResNet-18 for Spectralis 3D scans. The two DL models were further tested on three unseen Cirrus data sets from Singapore and five unseen Spectralis data sets from India, Australia and Hong Kong, respectively. RESULTS: In the internal validation, the models achieved areas under the curve (AUCs) of 0.930 (0.885-0.976) and 0.906 (0.863-0.948) for assessing the Cirrus 3D scans and Spectralis 3D scans, respectively. In the external testing, the models showed robust performance with AUCs ranging from 0.832 (0.730-0.934) to 0.930 (0.906-0.953) and from 0.891 (0.836-0.945) to 0.962 (0.918-1.000), respectively. CONCLUSIONS: Our models could be used to filter out ungradable 3D scans and could be further incorporated with a disease-detection DL model, enabling a fully automated eye disease detection workflow.
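
Entry 15 uses a 3D ResNet-18 to classify volumetric OCT scans as gradable or ungradable. The sketch below shows one plausible way to set up such a model with torchvision's video ResNet-18; the single-channel stem, input shape and binary head are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): adapting torchvision's
# 3D ResNet-18 to a binary gradable/ungradable output for volumetric OCT scans.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

model = r3d_18()                              # 3D ResNet-18 backbone, random init
model.stem[0] = nn.Conv3d(                    # accept single-channel OCT volumes
    1, 64, kernel_size=(3, 7, 7), stride=(1, 2, 2), padding=(1, 3, 3), bias=False
)
model.fc = nn.Linear(model.fc.in_features, 2)   # gradable vs. ungradable

volume = torch.randn(2, 1, 32, 128, 128)      # (batch, channel, depth, H, W) - illustrative
logits = model(volume)
print(logits.shape)                           # torch.Size([2, 2])
```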

16.
Asia Pac J Ophthalmol (Phila) ; 13(1): 100030, 2024.
Article in English | MEDLINE | ID: mdl-38233300

ABSTRACT

PURPOSE: There are major gaps in our knowledge of hereditary ocular conditions in the Asia-Pacific population, which comprises approximately 60% of the world's population. Therefore, a concerted regional effort is urgently needed to close this critical knowledge gap and apply precision medicine technology to improve the quality of lives of these patients in the Asia-Pacific region. DESIGN: Multi-national, multi-center collaborative network. METHODS: The Research Standing Committee of the Asia-Pacific Academy of Ophthalmology and the Asia-Pacific Society of Eye Genetics fostered this research collaboration, which brings together renowned institutions and experts on inherited eye diseases in the Asia-Pacific region. The immediate priority of the network will be inherited retinal diseases (IRDs), for which detailed characterization and established registries are lacking. RESULTS: The network comprises 55 members from 35 centers, spanning 12 countries and regions, including Australia, China, India, Indonesia, Japan, South Korea, Malaysia, Nepal, Philippines, Singapore, Taiwan, and Thailand. The steering committee comprises ophthalmologists with experience in consortia for eye diseases in the Asia-Pacific region, leading ophthalmologists and vision scientists in the field of IRDs internationally, and ophthalmic geneticists. CONCLUSIONS: The Asia Pacific Inherited Eye Disease (APIED) network aims to (1) improve genotyping capabilities and expertise to increase early and accurate genetic diagnosis of IRDs, (2) harmonise deep phenotyping practices and the utilization of ontological terms, and (3) establish high-quality, multi-user, federated disease registries that will facilitate patient care, genetic counseling, and research of IRDs regionally and internationally.


Subjects
Developing Countries; Humans; Philippines; China; Thailand; Malaysia
17.
Nat Med ; 30(2): 584-594, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38177850

ABSTRACT

Diabetic retinopathy (DR) is the leading cause of preventable blindness worldwide. The risk of DR progression is highly variable among different individuals, making it difficult to predict risk and personalize screening intervals. We developed and validated a deep learning system (DeepDR Plus) to predict time to DR progression within 5 years solely from fundus images. First, we used 717,308 fundus images from 179,327 participants with diabetes to pretrain the system. Subsequently, we trained and validated the system with a multiethnic dataset comprising 118,868 images from 29,868 participants with diabetes. For predicting time to DR progression, the system achieved concordance indexes of 0.754-0.846 and integrated Brier scores of 0.153-0.241 for all times up to 5 years. Furthermore, we validated the system in real-world cohorts of participants with diabetes. The integration with clinical workflow could potentially extend the mean screening interval from 12 months to 31.97 months, and the percentage of participants recommended to be screened at 1-5 years was 30.62%, 20.00%, 19.63%, 11.85% and 17.89%, respectively, while delayed detection of progression to vision-threatening DR was 0.18%. Altogether, the DeepDR Plus system could predict individualized risk and time to DR progression over 5 years, potentially allowing personalized screening intervals.


Subjects
Deep Learning; Diabetes Mellitus; Diabetic Retinopathy; Humans; Diabetic Retinopathy/diagnosis; Blindness
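
DeepDR Plus (entry 17) is evaluated with survival-analysis metrics such as the concordance index for predicted time to DR progression. The sketch below computes Harrell's concordance index with the lifelines package (an assumed dependency) on simulated progression times with censoring at 5 years; it is not the study's code.

```python
# Minimal sketch of the discrimination metric reported for DeepDR Plus in
# entry 17: Harrell's concordance index for predicted time to DR progression.
# Times, events and predictions are simulated.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(4)
n = 300
true_time = rng.exponential(3.0, n)                  # years to DR progression
observed_time = np.minimum(true_time, 5.0)           # administrative censoring at 5 years
event = (true_time <= 5.0).astype(int)               # 1 = progression observed
predicted_time = true_time + rng.normal(0, 1.0, n)   # model's predicted time (noisy)

c_index = concordance_index(observed_time, predicted_time, event)
print(f"Concordance index = {c_index:.3f}")
```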
18.
Br J Ophthalmol ; 2023 Dec 11.
Article in English | MEDLINE | ID: mdl-38164563

ABSTRACT

BACKGROUND: Large language models (LLMs) are fast emerging as potent tools in healthcare, including ophthalmology. This systematic review offers a twofold contribution: it summarises current trends in ophthalmology-related LLM research and projects future directions for this burgeoning field. METHODS: We systematically searched across various databases (PubMed, Europe PMC, Scopus and Web of Science) for articles related to LLM use in ophthalmology, published between 1 January 2022 and 31 July 2023. Selected articles were summarised, and categorised by type (editorial, commentary, original research, etc) and their research focus (eg, evaluating ChatGPT's performance in ophthalmology examinations or clinical tasks). FINDINGS: We identified 32 articles meeting our criteria, published between January and July 2023, with a peak in June (n=12). Most were original research evaluating LLMs' proficiency in clinically related tasks (n=9). Studies demonstrated that ChatGPT-4.0 outperformed its predecessor, ChatGPT-3.5, in ophthalmology exams. Furthermore, ChatGPT excelled in constructing discharge notes (n=2), evaluating diagnoses (n=2) and answering general medical queries (n=6). However, it struggled with generating scientific articles or abstracts (n=3) and answering specific subdomain questions, especially those regarding specific treatment options (n=2). ChatGPT's performance relative to other LLMs (Google's Bard, Microsoft's Bing) varied by study design. Ethical concerns such as data hallucination (n=27), authorship (n=5) and data privacy (n=2) were frequently cited. INTERPRETATION: While LLMs hold transformative potential for healthcare and ophthalmology, concerns over accountability, accuracy and data security remain. Future research should focus on application programming interface integration, comparative assessments of popular LLMs, their ability to interpret image-based data and the establishment of standardised evaluation frameworks.

19.
Commun Med (Lond) ; 3(1): 184, 2023 Dec 16.
Article in English | MEDLINE | ID: mdl-38104223

ABSTRACT

BACKGROUND: Cataract diagnosis typically requires in-person evaluation by an ophthalmologist. However, color fundus photography (CFP) is widely performed outside ophthalmology clinics, which could be exploited to increase the accessibility of cataract screening by automated detection. METHODS: DeepOpacityNet was developed to detect cataracts from CFP and highlight the most relevant CFP features associated with cataracts. We used 17,514 CFPs from 2573 AREDS2 participants curated from the Age-Related Eye Diseases Study 2 (AREDS2) dataset, of which 8681 CFPs were labeled with cataracts. The ground truth labels were transferred from slit-lamp examination of nuclear cataracts and reading center grading of anterior segment photographs for cortical and posterior subcapsular cataracts. DeepOpacityNet was internally validated on an independent test set (20%), compared to three ophthalmologists on a subset of the test set (100 CFPs), externally validated on three datasets obtained from the Singapore Epidemiology of Eye Diseases study (SEED), and visualized to highlight important features. RESULTS: Internally, DeepOpacityNet achieved a superior accuracy of 0.66 (95% confidence interval (CI): 0.64-0.68) and an area under the curve (AUC) of 0.72 (95% CI: 0.70-0.74), compared to that of other state-of-the-art methods. DeepOpacityNet achieved an accuracy of 0.75, compared to an accuracy of 0.67 for the ophthalmologist with the highest performance. Externally, DeepOpacityNet achieved AUC scores of 0.86, 0.88, and 0.89 on SEED datasets, demonstrating the generalizability of our proposed method. Visualizations show that the visibility of blood vessels could be characteristic of cataract absence while blurred regions could be characteristic of cataract presence. CONCLUSIONS: DeepOpacityNet could detect cataracts from CFPs in AREDS2 with performance superior to that of ophthalmologists and generate interpretable results. The code and models are available at https://github.com/ncbi/DeepOpacityNet ( https://doi.org/10.5281/zenodo.10127002 ).


Cataracts are cloudy areas in the eye that impact sight. Diagnosis typically requires in-person evaluation by an ophthalmologist. In this study, a computer program was developed that can identify cataracts from specialist photographs of the eye. The computer program successfully identified cataracts and was better able to identify these than ophthalmologists. This computer program could be introduced to improve the diagnosis of cataracts in eye clinics.
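
DeepOpacityNet (entry 19) reports AUCs with 95% confidence intervals. A simple, commonly used way to obtain such an interval is the bootstrap, sketched below on simulated labels and scores (scikit-learn assumed); this is not the authors' statistical code.

```python
# Minimal sketch: a bootstrap 95% confidence interval for AUC, the style of
# metric reported for DeepOpacityNet in entry 19. Labels and scores are
# simulated stand-ins for cataract presence and model output.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
y_true = rng.integers(0, 2, 2000)
y_score = np.clip(0.5 * y_true + rng.normal(0.25, 0.2, 2000), 0, 1)

aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), len(y_true))      # resample with replacement
    if len(np.unique(y_true[idx])) < 2:                  # need both classes for AUC
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC = {roc_auc_score(y_true, y_score):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```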
