Results 1 - 20 of 176
1.
Ocul Immunol Inflamm ; : 1-9, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38842198

ABSTRACT

The aim of this perspective is to promote the theory of salutogenesis as a novel approach to addressing ophthalmologic inflammatory conditions, illustrating several concepts on which it is based and how they can be applied to medical practice. This theory helps contextualize why patients with similar demographics and exposures differ in their clinical presentations. Stressors in daily life can contribute to a state of ill health, and various factors help alleviate their negative impact. These alleviating factors are significantly impaired in people with poor vision, one of the most common presentations of ophthalmologic conditions. Salutogenic principles can guide the treatment of eye conditions to be more respectful of patient autonomy amidst shifting expectations of the doctor-patient relationship. Patients who are able to take ownership of their health and who feel that their cultural beliefs have been considered show better compliance and, in turn, better outcomes. Population-level policy interventions could also use salutogenic principles to identify previously overlooked domains. We identified several papers on salutogenesis in an ophthalmological context, acknowledge the relatively small number of studies on this topic to date, and offer directions for further exploration in subsequent studies.

3.
Invest Ophthalmol Vis Sci ; 65(6): 21, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38864811

ABSTRACT

Data are the cornerstone of AI models because their performance depends directly on the diversity, quantity, and quality of the data used for training. AI presents unique potential, particularly in data-rich medical applications such as ophthalmology, which encompasses a variety of imaging methods, medical records, and eye-tracking data. However, sharing medical data comes with challenges because of regulatory issues and privacy concerns. This review explores traditional and nontraditional data sharing methods in medicine, focusing on previous works in ophthalmology. Traditional methods involve direct data transfer, whereas newer approaches prioritize security and privacy by sharing derived datasets, creating secure research environments, or using model-to-data strategies. We examine each method's mechanisms, variations, recent applications in ophthalmology, and their respective advantages and disadvantages. By empowering medical researchers with insights into data sharing methods and considerations, this review aims to assist informed decision-making while upholding ethical standards and patient privacy in medical AI development.


Subject(s)
Artificial Intelligence, Information Dissemination, Ophthalmology, Humans
4.
Br J Ophthalmol ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38834291

ABSTRACT

Foundation models represent a paradigm shift in artificial intelligence (AI), evolving from narrow models designed for specific tasks to versatile, generalisable models adaptable to a myriad of diverse applications. Ophthalmology as a specialty has the potential to act as an exemplar for other medical specialties, offering a blueprint for integrating foundation models broadly into clinical practice. This review hopes to serve as a roadmap for eyecare professionals seeking to better understand foundation models, while equipping readers with the tools to explore the use of foundation models in their own research and practice. We begin by outlining the key concepts and technological advances which have enabled the development of these models, providing an overview of novel training approaches and modern AI architectures. Next, we summarise existing literature on the topic of foundation models in ophthalmology, encompassing progress in vision foundation models, large language models and large multimodal models. Finally, we outline major challenges relating to privacy, bias and clinical validation, and propose key steps forward to maximise the benefit of this powerful technology.

5.
Clin Ophthalmol ; 18: 1257-1266, 2024.
Article in English | MEDLINE | ID: mdl-38741584

ABSTRACT

Purpose: Understanding sociodemographic factors associated with poor visual outcomes in children with juvenile idiopathic arthritis-associated uveitis may help inform practice patterns. Patients and Methods: Retrospective cohort study of patients <18 years old diagnosed with both juvenile idiopathic arthritis and uveitis, based on International Classification of Diseases, tenth edition codes, in the Intelligent Research in Sight Registry through December 2020. Surgical history was extracted using Current Procedural Terminology codes. The primary outcome was the incidence of blindness (20/200 or worse) in at least one eye in association with sociodemographic factors. Secondary outcomes included cataract and glaucoma surgery following uveitis diagnosis. Hazard ratios were calculated using multivariable-adjusted Cox proportional hazards models. Results: Median age at juvenile idiopathic arthritis-associated uveitis diagnosis was 11 years (interquartile range: 8 to 15). In the Cox models adjusting for sociodemographic and insurance factors, the hazard ratios of best-corrected visual acuity of 20/200 or worse were higher in males compared with females (HR 2.15; 95% CI: 1.45-3.18), in Black or African American patients compared with White patients (2.54; 1.44-4.48), and in Medicaid-insured patients compared with commercially insured patients (2.23; 1.48-3.37). Conclusion: Sociodemographic factors and insurance coverage were associated with varying levels of risk for poor visual outcomes in children with juvenile idiopathic arthritis-associated uveitis.
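The multivariable Cox analysis described above can be illustrated with standard survival tooling. The following is a minimal sketch, not the registry analysis itself: it assumes the lifelines package and synthetic data with hypothetical column names (time_years, blind_event, male, black, medicaid).

```python
# Minimal sketch of a multivariable-adjusted Cox proportional hazards model.
# Synthetic data only; columns are hypothetical stand-ins for registry variables.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
male = rng.integers(0, 2, n)
black = rng.integers(0, 2, n)
medicaid = rng.integers(0, 2, n)
# Simulated time to 20/200-or-worse acuity, shorter for higher-risk groups.
time_years = rng.exponential(8.0, n) / np.exp(0.7 * male + 0.9 * black + 0.8 * medicaid)
event = (time_years < 5.0).astype(int)      # event observed within follow-up
time_years = np.minimum(time_years, 5.0)    # administrative censoring at 5 years

df = pd.DataFrame({"time_years": time_years, "blind_event": event,
                   "male": male, "black": black, "medicaid": medicaid})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="blind_event")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs, as reported above
```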

6.
Br J Ophthalmol ; 2024 May 15.
Article in English | MEDLINE | ID: mdl-38749531

ABSTRACT

BACKGROUND/AIMS: To compare the performance of generative versus retrieval-based chatbots in answering patient inquiries regarding age-related macular degeneration (AMD) and diabetic retinopathy (DR). METHODS: We evaluated four chatbots: generative models (ChatGPT-4, ChatGPT-3.5 and Google Bard) and a retrieval-based model (OcularBERT) in a cross-sectional study. Their response accuracy to 45 questions (15 AMD, 15 DR and 15 others) was evaluated and compared. Three masked retinal specialists graded the responses using a three-point Likert scale: either 2 (good, error-free), 1 (borderline) or 0 (poor with significant inaccuracies). The scores were aggregated, ranging from 0 to 6. Based on majority consensus among the graders, the responses were also classified as 'Good', 'Borderline' or 'Poor' quality. RESULTS: Overall, ChatGPT-4 and ChatGPT-3.5 outperformed the other chatbots, both achieving median scores (IQR) of 6 (1), compared with 4.5 (2) in Google Bard, and 2 (1) in OcularBERT (all p ≤ 8.4×10⁻³). Based on the consensus approach, 83.3% of ChatGPT-4's responses and 86.7% of ChatGPT-3.5's were rated as 'Good', surpassing Google Bard (50%) and OcularBERT (10%) (all p ≤ 1.4×10⁻²). ChatGPT-4 and ChatGPT-3.5 had no 'Poor' rated responses. Google Bard produced 6.7% Poor responses, and OcularBERT produced 20%. Across question types, ChatGPT-4 outperformed Google Bard only for AMD, and ChatGPT-3.5 outperformed Google Bard for DR and others. CONCLUSION: ChatGPT-4 and ChatGPT-3.5 demonstrated superior performance, followed by Google Bard and OcularBERT. Generative chatbots are potentially capable of answering domain-specific questions outside their original training. Further validation studies are still required prior to real-world implementation.
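For readers unfamiliar with this grading scheme, the short sketch below reproduces its mechanics on hypothetical grades: three graders each assign 0, 1 or 2, the scores are summed to a 0-6 aggregate, and the consensus label is the majority rating.

```python
# Sketch of the grading scheme described above; grader scores are hypothetical.
from collections import Counter

grades = {  # response_id -> scores from the three masked graders
    "amd_q1": [2, 2, 1],
    "dr_q3":  [1, 1, 0],
    "oth_q7": [2, 2, 2],
}
label = {0: "Poor", 1: "Borderline", 2: "Good"}

for response_id, scores in grades.items():
    total = sum(scores)                                  # aggregate score, 0-6
    most_common, count = Counter(scores).most_common(1)[0]
    consensus = label[most_common] if count >= 2 else "No majority"
    print(f"{response_id}: total={total}, consensus={consensus}")
```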

7.
Commun Med (Lond) ; 4(1): 72, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38605245

ABSTRACT

BACKGROUND: Sensory changes due to aging or disease can impact brain tissue. This study aims to investigate the link between glaucoma, a leading cause of blindness, and alterations in brain connections. METHODS: We analyzed diffusion MRI measurements of white matter tissue in a large group, consisting of 905 glaucoma patients (aged 49-80) and 5292 healthy individuals (aged 45-80) from the UK Biobank. Confounds due to group differences were mitigated by matching a sub-sample of controls to glaucoma subjects. We compared classification of glaucoma using convolutional neural networks (CNNs) focusing on the optic radiations, which are the primary visual connection to the cortex, against those analyzing non-visual brain connections. As a control, we evaluated the performance of regularized linear regression models. RESULTS: We showed that CNNs using information from the optic radiations exhibited higher accuracy in classifying subjects with glaucoma when contrasted with CNNs relying on information from non-visual brain connections. Regularized linear regression models were also tested, and showed significantly weaker classification performance. Additionally, the CNN was unable to generalize to the classification of age-group or of age-related macular degeneration. CONCLUSIONS: Our findings indicate a distinct and potentially non-linear signature of glaucoma in the tissue properties of optic radiations. This study enhances our understanding of how glaucoma affects brain tissue and opens avenues for further research into how diseases that affect sensory input may also affect brain aging.


In this study, we explored the relationship between glaucoma, a leading cause of blindness, and changes within the brain. We used data from diffusion MRI, a measurement method that assesses the properties of brain connections. We examined 905 individuals with glaucoma alongside 5292 healthy people. We refined the test cohort to be closely matched in age, sex, ethnicity, and socioeconomic background. The use of deep learning neural networks allowed accurate detection of glaucoma by focusing on the tissue properties of the optic radiations, a major brain pathway that transmits visual information, rather than on the other brain pathways used for comparison. Our work provides additional evidence that brain connections may age differently based on varying sensory inputs.
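A toy illustration of the comparison reported above (CNN versus regularized linear model) is sketched below. It uses PyTorch and scikit-learn on synthetic 1-D "tract profile" arrays standing in for diffusion-MRI tissue properties along the optic radiations; it is not the authors' pipeline.

```python
# Illustrative sketch: small 1D CNN vs an L2-regularized logistic regression
# baseline on synthetic tract-profile inputs. Shapes and labels are hypothetical.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_points = 400, 100                       # subjects x nodes along the tract
X = rng.normal(size=(n, 1, n_points)).astype("float32")
y = (X[:, 0, 40:60].mean(axis=1) + 0.5 * rng.normal(size=n) > 0).astype("float32")

cnn = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(8, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 1),
)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
xb, yb = torch.from_numpy(X), torch.from_numpy(y).unsqueeze(1)
for _ in range(50):                          # brief full-batch training loop
    opt.zero_grad()
    loss = loss_fn(cnn(xb), yb)
    loss.backward()
    opt.step()
cnn_acc = ((cnn(xb) > 0).float() == yb).float().mean().item()

lr = LogisticRegression(C=1.0, max_iter=1000).fit(X.reshape(n, -1), y)
print(f"CNN train acc: {cnn_acc:.2f} | regularized LR train acc: {lr.score(X.reshape(n, -1), y):.2f}")
```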

8.
JAMA Ophthalmol ; 142(3): 226-233, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38329740

ABSTRACT

Importance: Deep learning image analysis often depends on large, labeled datasets, which are difficult to obtain for rare diseases. Objective: To develop a self-supervised approach for automated classification of macular telangiectasia type 2 (MacTel) on optical coherence tomography (OCT) with limited labeled data. Design, Setting, and Participants: This was a retrospective comparative study. OCT images were collected by the Lowy Medical Research Institute, La Jolla, California, from May 2014 to May 2019, and by the University of Washington, Seattle, from January 2016 to October 2022. Clinical diagnoses of patients with and without MacTel were confirmed by retina specialists. Data were analyzed from January to September 2023. Exposures: Two convolutional neural networks were pretrained using the Bootstrap Your Own Latent algorithm on unlabeled training data and fine-tuned with labeled training data to predict MacTel (self-supervised method). ResNet18 and ResNet50 models were also trained using all labeled data (supervised method). Main Outcomes and Measures: The ground-truth MacTel (yes vs no) diagnosis was determined by retina specialists based on spectral-domain OCT. The models' predictions were compared against human graders using accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), area under the precision recall curve (AUPRC), and area under the receiver operating characteristic curve (AUROC). Uniform manifold approximation and projection was performed for dimension reduction, and GradCAM visualizations were generated for the supervised and self-supervised methods. Results: A total of 2636 OCT scans from 780 patients with MacTel and 131 patients without MacTel were included from the MacTel Project (mean [SD] age, 60.8 [11.7] years; 63.8% female), and another 2564 from 1769 patients without MacTel from the University of Washington (mean [SD] age, 61.2 [18.1] years; 53.4% female). The self-supervised approach fine-tuned on 100% of the labeled training data with ResNet50 as the feature extractor performed the best, achieving an AUPRC of 0.971 (95% CI, 0.969-0.972), an AUROC of 0.970 (95% CI, 0.970-0.973), accuracy of 0.898, sensitivity of 0.898, specificity of 0.949, PPV of 0.935, and NPV of 0.919. With only 419 OCT volumes (185 MacTel patients; 10% of the labeled training dataset), the ResNet18 self-supervised model achieved comparable performance, with an AUPRC of 0.958 (95% CI, 0.957-0.960), an AUROC of 0.966 (95% CI, 0.964-0.967), and accuracy, sensitivity, specificity, PPV, and NPV of 90.2%, 0.884, 0.916, 0.896, and 0.906, respectively. The self-supervised models showed better agreement with the more experienced human expert graders. Conclusions and Relevance: The findings suggest that self-supervised learning may improve the accuracy of automated MacTel vs non-MacTel binary classification on OCT with limited labeled training data, and these approaches may be applicable to other rare diseases, although further research is warranted.
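The fine-tuning stage of this self-supervised workflow can be sketched as follows. The example uses a torchvision ResNet50 with random weights as a stand-in for a BYOL-pretrained encoder and random tensors in place of OCT data; it only illustrates the head replacement, fine-tuning loop, and AUROC/AUPRC evaluation.

```python
# Sketch of fine-tuning a (hypothetically) self-supervised-pretrained ResNet50
# for binary MacTel classification. Inputs and labels are random placeholders.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from sklearn.metrics import roc_auc_score, average_precision_score

backbone = resnet50(weights=None)                     # would load BYOL-pretrained weights here
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # binary MacTel head

images = torch.randn(8, 3, 224, 224)                  # placeholder B-scan batch
labels = torch.tensor([1, 0, 1, 0, 1, 1, 0, 0], dtype=torch.float32)

optimizer = torch.optim.AdamW(backbone.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()
backbone.train()
for _ in range(2):                                    # token fine-tuning loop
    optimizer.zero_grad()
    loss = loss_fn(backbone(images).squeeze(1), labels)
    loss.backward()
    optimizer.step()

backbone.eval()
with torch.no_grad():
    probs = torch.sigmoid(backbone(images)).squeeze(1).numpy()
print("AUROC:", roc_auc_score(labels.numpy(), probs))
print("AUPRC:", average_precision_score(labels.numpy(), probs))
```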


Subject(s)
Deep Learning, Retinal Telangiectasis, Humans, Female, Middle Aged, Male, Optical Coherence Tomography/methods, Retrospective Studies, Rare Diseases, Retinal Telangiectasis/diagnostic imaging, Supervised Machine Learning
10.
Ophthalmology ; 131(2): 219-226, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37739233

ABSTRACT

PURPOSE: Deep learning (DL) models have achieved state-of-the-art medical diagnosis classification accuracy. Current models are limited by discrete diagnosis labels but could yield more information with diagnosis on a continuous scale. We developed a novel continuous severity scaling system for macular telangiectasia (MacTel) type 2 by combining a DL classification model with uniform manifold approximation and projection (UMAP). DESIGN: We used a DL network to learn a feature representation of MacTel severity from discrete severity labels and applied UMAP to embed this feature representation into 2 dimensions, thereby creating a continuous MacTel severity scale. PARTICIPANTS: A total of 2003 OCT volumes were analyzed from 1089 MacTel Project participants. METHODS: We trained a multiview DL classifier using multiple B-scans from OCT volumes to learn a previously published discrete 7-step MacTel severity scale. The classifier's last feature layer was extracted as input for UMAP, which embedded these features into a continuous 2-dimensional manifold. The DL classifier was assessed in terms of test accuracy. Rank correlation of the continuous UMAP scale against the previously published scale was calculated. Additionally, the UMAP scale was assessed for κ agreement with 5 clinical experts on 100 pairs of patient volumes. For each pair of patient volumes, clinical experts were asked to select the volume with more severe MacTel disease, and their selections were compared against the UMAP scale. MAIN OUTCOME MEASURES: Classification accuracy for the DL classifier and κ agreement versus clinical experts for UMAP. RESULTS: The multiview DL classifier achieved top 1 accuracy of 63.3% (186/294) on held-out test OCT volumes. The UMAP metric showed a clear continuous gradation of MacTel severity, with a Spearman rank correlation of 0.84 with the previously published scale. Furthermore, the continuous UMAP metric achieved κ agreements of 0.56 to 0.63 with 5 clinical experts, which was comparable with interobserver κ values. CONCLUSIONS: Our UMAP embedding generated a continuous MacTel severity scale without requiring continuous training labels. This technique can be applied to other diseases and may lead to more accurate diagnosis, improved understanding of disease progression, and identification of key imaging features of pathologic characteristics. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
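A minimal sketch of the embedding step, assuming the umap-learn and scipy packages and synthetic classifier features, is shown below; the published scale is derived more carefully, but the feature-to-manifold-to-rank-correlation workflow has the same shape.

```python
# Sketch: reduce classifier features to 2-D with UMAP and compare a continuous
# coordinate against discrete severity grades. Features here are synthetic.
import numpy as np
import umap                                   # umap-learn package
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 300
grades = rng.integers(0, 7, size=n)           # discrete 7-step severity labels
# Synthetic 64-dim "last layer" features that drift with severity grade.
features = rng.normal(size=(n, 64)) + grades[:, None] * 0.5

embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(features)

# Use the first embedding axis as a crude continuous severity score; the paper's
# scale would be derived more carefully along the manifold.
continuous_score = embedding[:, 0]
rho, p = spearmanr(continuous_score, grades)
print(f"Spearman rank correlation with the discrete scale: {rho:.2f}")
```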


Subject(s)
Deep Learning, Diabetic Retinopathy, Retinal Telangiectasis, Humans, Retinal Telangiectasis/diagnosis, Fluorescein Angiography/methods, Disease Progression, Optical Coherence Tomography/methods
11.
Rev. panam. salud pública ; 48: e13, 2024. tab, graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1536672



ABSTRACT The CONSORT 2010 statement provides minimum guidelines for reporting randomized trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human-AI interaction and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions. It will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes.



12.
Rev. panam. salud pública ; 48: e12, 2024. tab, graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1536674



ABSTRACT The SPIRIT 2013 statement aims to improve the completeness of clinical trial protocol reporting by providing evidence-based recommendations for the minimum set of items to be addressed. This guidance has been instrumental in promoting transparent evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate their impact on health outcomes. The SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trial protocols evaluating interventions with an AI component. It was developed in parallel with its companion statement for trial reports: CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 26 candidate items, which were consulted upon by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The SPIRIT-AI extension includes 15 new items that were considered sufficiently important for clinical trial protocols of AI interventions. These new items should be routinely reported in addition to the core SPIRIT 2013 items. SPIRIT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention will be integrated, considerations for the handling of input and output data, the human-AI interaction and analysis of error cases. SPIRIT-AI will help promote transparency and completeness for clinical trial protocols for AI interventions. Its use will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the design and risk of bias for a planned clinical trial.



13.
Ophthalmol Sci ; 4(1): 100352, 2024.
Article in English | MEDLINE | ID: mdl-37869025

ABSTRACT

Objective: To describe visual acuity data representation in the American Academy of Ophthalmology Intelligent Research in Sight (IRIS) Registry and present a data-cleaning strategy. Design: Reliability and validity study. Participants: Patients with visual acuity records from 2018 in the IRIS Registry. Methods: Visual acuity measurements and metadata were identified and characterized from 2018 IRIS Registry records. Metadata, including laterality, assessment method (distance, near, and unspecified), correction (corrected, uncorrected, and unspecified), and flags for refraction or pinhole assessment, were compared between the Rome (frozen April 20, 2020) and Chicago (frozen December 24, 2021) versions. We developed a data-cleaning strategy to infer patients' corrected distance visual acuity in their better-seeing eye. Main Outcome Measures: Visual acuity data characteristics in the IRIS Registry. Results: The IRIS Registry Chicago data set contains 168 920 049 visual acuity records among 23 001 531 unique patients and 49 968 974 unique patient visit dates in 2018. Visual acuity records were associated with refraction in 5.3% of cases and with pinhole in 11.0%. The mean (standard deviation) of all measurements was 0.26 (0.41) logarithm of the minimum angle of resolution (logMAR), with a range of -0.3 to 4.0. A plurality of visual acuity records were labeled corrected (corrected visual acuity [CVA], 39.1%), followed by unspecified (37.6%) and uncorrected (uncorrected visual acuity [UCVA], 23.4%). Corrected visual acuity measurements were paradoxically worse than same-day UCVA 15% of the time. In aggregate, mean and median values were similar for CVA and unspecified visual acuity. Most visual acuity measurements were at distance (59.8%, vs. 32.1% unspecified and 8.2% near). Rome contained more duplicate visual acuity records than Chicago (10.8% vs. 1.4%). Near visual acuity was classified with Jaeger notation and (in Chicago only) also assigned logMAR values by Verana Health. LogMAR values for hand motion and light perception visual acuity were lower in Chicago than in Rome. The impact of data entry errors or outliers on analyses may be reduced by filtering and averaging visual acuity per eye over time. Conclusions: The IRIS Registry includes similar visual acuity metadata in Rome and Chicago. Although fewer duplicate records were found in Chicago, both versions include duplicate and atypical measurements (i.e., CVA worse than UCVA on the same day). Analyses may benefit from using algorithms to filter outliers and average visual acuity measurements over time. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
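A small pandas sketch of the cleaning ideas described above (de-duplication, flagging paradoxical same-day CVA worse than UCVA, and averaging logMAR per eye before selecting the better-seeing eye) is given below; the records and column names are hypothetical, not the registry's actual pipeline.

```python
# Illustrative cleaning sketch on hypothetical IRIS-like visual acuity records.
import pandas as pd

va = pd.DataFrame({
    "patient": [1, 1, 1, 1, 2, 2],
    "eye":     ["OD", "OD", "OS", "OS", "OD", "OD"],
    "date":    pd.to_datetime(["2018-01-05"] * 4 + ["2018-03-10", "2018-03-10"]),
    "type":    ["CVA", "UCVA", "CVA", "CVA", "CVA", "CVA"],
    "logmar":  [0.3, 0.1, 0.2, 0.2, 1.0, 1.0],
})

va = va.drop_duplicates()                              # remove duplicate records

# Flag days where corrected acuity is paradoxically worse than uncorrected.
wide = va.pivot_table(index=["patient", "eye", "date"], columns="type",
                      values="logmar", aggfunc="min")
paradoxical = wide[wide.get("CVA") > wide.get("UCVA")]
print("Paradoxical CVA-worse-than-UCVA days:\n", paradoxical)

# Average corrected logMAR per eye over the year, then take the better-seeing
# eye per patient (lower logMAR is better).
per_eye = va[va["type"] == "CVA"].groupby(["patient", "eye"])["logmar"].mean()
better_eye = per_eye.groupby("patient").min()
print("Better-seeing eye mean logMAR per patient:\n", better_eye)
```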

14.
Article in English | MEDLINE | ID: mdl-37949472

ABSTRACT

INTRODUCTION: The English Diabetic Eye Screening Programme (DESP) offers people living with diabetes (PLD) annual eye screening. We examined the incidence and determinants of sight-threatening diabetic retinopathy (STDR) in a sociodemographically diverse, multi-ethnic population. RESEARCH DESIGN AND METHODS: North East London DESP cohort data (January 2012 to December 2021) for 137 591 PLD with no retinopathy, or non-STDR at baseline in one or both eyes, were used to calculate STDR incidence rates by sociodemographic factors, diabetes type, and duration. HRs from Cox models examined associations with STDR. RESULTS: There were 16 388 incident STDR cases over a median of 5.4 years (IQR 2.8-8.2; STDR rate 2.214, 95% CI 2.214 to 2.215 per 100 person-years). Compared with people with no retinopathy at baseline, the hazard of STDR was higher in those with non-STDR in one eye (HR 3.03, 95% CI 2.91 to 3.15, p<0.001) and in both eyes (HR 7.88, 95% CI 7.59 to 8.18, p<0.001). Black and South Asian individuals had higher STDR hazards than white individuals (HR 1.57, 95% CI 1.50 to 1.64 and HR 1.36, 95% CI 1.31 to 1.42, respectively). Additionally, every 5-year increase in age at inclusion was associated with an 8% reduction in the STDR hazard (p<0.001). CONCLUSIONS: Ethnic disparities exist in a health system limited by capacity rather than patient economic circumstances. Diabetic retinopathy at first screen is a strong determinant of STDR development. Using basic demographic characteristics, screening programmes or clinical practices can stratify risk of sight-threatening diabetic retinopathy development.
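For readers who want to reproduce the headline metric, the snippet below shows how an incidence rate per 100 person-years and an approximate Poisson log-scale confidence interval are computed; the person-time value is hypothetical, not the cohort's actual total.

```python
# Back-of-the-envelope incidence rate per 100 person-years with an approximate
# Poisson (log-scale) 95% CI. The person-years figure is hypothetical.
import math

events = 16388            # incident STDR cases (from the abstract)
person_years = 740000.0   # total follow-up time (hypothetical)

rate = events / person_years * 100            # per 100 person-years
se_log = 1 / math.sqrt(events)                # SE of log(rate) under Poisson counts
lo = rate * math.exp(-1.96 * se_log)
hi = rate * math.exp(+1.96 * se_log)
print(f"STDR incidence: {rate:.3f} (95% CI {lo:.3f} to {hi:.3f}) per 100 person-years")
```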


Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Humans, Retrospective Studies, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/epidemiology, Mass Screening, Incidence, London/epidemiology, Diabetes Mellitus/diagnosis, Diabetes Mellitus/epidemiology
15.
Lancet Digit Health ; 5(12): e917-e924, 2023 12.
Article in English | MEDLINE | ID: mdl-38000875

ABSTRACT

The advent of generative artificial intelligence and large language models has ushered in transformative applications within medicine. Specifically in ophthalmology, large language models offer unique opportunities to revolutionise digital eye care, address clinical workflow inefficiencies, and enhance patient experiences across diverse global eye care landscapes. Yet alongside these prospects lie tangible and ethical challenges, encompassing data privacy, security, and the intricacies of embedding large language models into clinical routines. This Viewpoint highlights the promising applications of large language models in ophthalmology, while weighing up the practical and ethical barriers towards their real-world implementation. This Viewpoint seeks to stimulate broader discourse on the potential of large language models in ophthalmology and to galvanise both clinicians and researchers into tackling the prevailing challenges and optimising the benefits of large language models while curtailing the associated risks.


Subject(s)
Medicine, Ophthalmology, Humans, Artificial Intelligence, Language, Privacy
16.
Br J Ophthalmol ; 107(12): 1839-1845, 2023 11 22.
Article in English | MEDLINE | ID: mdl-37875374

ABSTRACT

BACKGROUND/AIMS: The English Diabetic Eye Screening Programme (DESP) offers people living with diabetes (PLD) annual screening. Less frequent screening has been advocated among PLD without diabetic retinopathy (DR), but evidence for each ethnic group is limited. We examined the potential effect of biennial versus annual screening on the detection of sight-threatening diabetic retinopathy (STDR) and proliferative diabetic retinopathy (PDR) among PLD without DR from a large urban multi-ethnic English DESP. METHODS: PLD in North-East London DESP (January 2012 to December 2021) with no DR on two prior consecutive screening visits with up to 8 years of follow-up were examined. Annual STDR and PDR incidence rates, overall and by ethnicity, were quantified. Delays in identification of STDR and PDR events had 2-year screening intervals been used were determined. FINDINGS: Among 82 782 PLD (37% white, 36% South Asian, and 16% black people), there were 1788 incident STDR cases over mean (SD) 4.3 (2.4) years (STDR rate 0.51, 95% CI 0.47 to 0.55 per 100-person-years). STDR incidence rates per 100-person-years by ethnicity were 0.55 (95% CI 0.48 to 0.62) for South Asian, 0.34 (95% CI 0.29 to 0.40) for white, and 0.77 (95% CI 0.65 to 0.90) for black people. Biennial screening would have delayed diagnosis by 1 year for 56.3% (1007/1788) with STDR and 43.6% (45/103) with PDR. Standardised cumulative rates of delayed STDR per 100 000 persons for each ethnic group were 1904 (95% CI 1683 to 2154) for black people, 1276 (95% CI 1153 to 1412) for South Asian people, and 844 (95% CI 745 to 955) for white people. INTERPRETATION: Biennial screening would have delayed detection of some STDR and PDR by 1 year, especially among those of black ethnic origin, leading to healthcare inequalities.
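The counterfactual in this analysis can be illustrated with a toy calculation: under biennial screening, STDR first detected at what would have been an odd-numbered annual visit is assumed to be picked up one year later. The sketch below uses made-up visit numbers and a deliberately simplified rule.

```python
# Toy counterfactual: which annual detections would be delayed by one year if
# only every second (even-numbered) visit occurred? Visit data are hypothetical.
detection_visit = [1, 2, 3, 3, 4, 5, 2, 1, 6, 5]   # annual visit at which STDR was found

delayed = [v for v in detection_visit if v % 2 == 1]   # odd visits dropped under biennial screening
proportion_delayed = len(delayed) / len(detection_visit)
print(f"{proportion_delayed:.0%} of STDR detections would be delayed by one year")
```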


Subject(s)
Type 2 Diabetes Mellitus, Diabetic Retinopathy, Humans, Asian People, Type 2 Diabetes Mellitus/complications, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/epidemiology, Diabetic Retinopathy/etiology, Ethnicity, Mass Screening, Retrospective Studies, White Population, Black Population
17.
J Am Med Inform Assoc ; 30(12): 1904-1914, 2023 11 17.
Article in English | MEDLINE | ID: mdl-37659103

ABSTRACT

OBJECTIVE: To develop a deep learning algorithm (DLA) to detect diabetic kidney disease (DKD) from retinal photographs of patients with diabetes, and to evaluate performance in multiethnic populations. MATERIALS AND METHODS: We trained 3 models: (1) an image-only model; (2) a risk factor (RF)-only multivariable logistic regression (LR) model adjusted for age, sex, ethnicity, diabetes duration, HbA1c, and systolic blood pressure; and (3) a hybrid multivariable LR model combining RF data and standardized z-scores from the image-only model. Data from the Singapore Integrated Diabetic Retinopathy Program (SiDRP) were used to develop (6066 participants with diabetes, primary-care-based) and internally validate (5-fold cross-validation) the models. External testing was performed on 2 independent datasets in Singapore: (1) the Singapore Epidemiology of Eye Diseases (SEED) study (1885 participants with diabetes, population-based); and (2) the Singapore Macroangiopathy and Microvascular Reactivity in Type 2 Diabetes (SMART2D) study (439 participants with diabetes, cross-sectional). Supplementary external testing was performed on 2 Caucasian cohorts: (3) the Australian Eye and Heart Study (AHES) (460 participants with diabetes, cross-sectional) and (4) the Northern Ireland Cohort for the Longitudinal Study of Ageing (NICOLA) (265 participants with diabetes, cross-sectional). RESULTS: In SiDRP validation, the area under the curve (AUC) was 0.826 (95% CI 0.818-0.833) for image-only, 0.847 (0.840-0.854) for RF-only, and 0.866 (0.859-0.872) for hybrid. Estimates with SEED were 0.764 (0.743-0.785) for image-only, 0.802 (0.783-0.822) for RF-only, and 0.828 (0.810-0.846) for hybrid. In SMART2D, the AUC was 0.726 (0.686-0.765) for image-only, 0.701 (0.660-0.741) for RF-only, and 0.761 (0.724-0.797) for hybrid. DISCUSSION AND CONCLUSION: There is potential for a DLA using retinal images as a screening adjunct for DKD among individuals with diabetes. This could add value to existing DLA systems that diagnose diabetic retinopathy from retinal images, facilitating primary screening for DKD.
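The hybrid-model idea, combining a standardized image-model score with clinical risk factors in a logistic regression and comparing AUCs, can be sketched as follows with synthetic data and hypothetical feature names.

```python
# Sketch: compare an RF-only logistic regression against a hybrid model that
# adds a z-scored image-model output. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
risk_factors = rng.normal(size=(n, 4))     # e.g. age, HbA1c, SBP, duration (standardized)
image_score = rng.normal(size=(n, 1))      # z-scored output of an image-only model
logit = 0.8 * image_score[:, 0] + risk_factors @ np.array([0.5, 0.6, 0.3, 0.4])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

rf_only = LogisticRegression(max_iter=1000).fit(risk_factors, y)
hybrid = LogisticRegression(max_iter=1000).fit(np.hstack([risk_factors, image_score]), y)

print("RF-only AUC:", roc_auc_score(y, rf_only.predict_proba(risk_factors)[:, 1]))
print("Hybrid AUC:",  roc_auc_score(y, hybrid.predict_proba(np.hstack([risk_factors, image_score]))[:, 1]))
```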


Subject(s)
Deep Learning, Type 2 Diabetes Mellitus, Diabetic Nephropathies, Diabetic Retinopathy, Humans, Diabetic Retinopathy/diagnosis, Type 2 Diabetes Mellitus/complications, Cross-Sectional Studies, Longitudinal Studies, Australia, Algorithms
18.
Nature ; 622(7981): 156-163, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37704728

ABSTRACT

Medical artificial intelligence (AI) offers great potential for recognizing signs of health conditions in retinal images and expediting the diagnosis of eye diseases and systemic disorders1. However, the development of AI models requires substantial annotation and models are usually task-specific with limited generalizability to different clinical applications2. Here, we present RETFound, a foundation model for retinal images that learns generalizable representations from unlabelled retinal images and provides a basis for label-efficient model adaptation in several applications. Specifically, RETFound is trained on 1.6 million unlabelled retinal images by means of self-supervised learning and then adapted to disease detection tasks with explicit labels. We show that adapted RETFound consistently outperforms several comparison models in the diagnosis and prognosis of sight-threatening eye diseases, as well as incident prediction of complex systemic disorders such as heart failure and myocardial infarction with fewer labelled data. RETFound provides a generalizable solution to improve model performance and alleviate the annotation workload of experts to enable broad clinical AI applications from retinal imaging.
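Label-efficient adaptation of a foundation model is commonly done by freezing the pretrained encoder and training only a small head. The sketch below illustrates that pattern with a torchvision ResNet18 standing in for a retinal foundation model such as RETFound, and random tensors in place of labelled retinal images; it is not the RETFound code.

```python
# Sketch of a linear probe: frozen pretrained encoder + small trainable head.
import torch
import torch.nn as nn
from torchvision.models import resnet18

encoder = resnet18(weights=None)      # would load foundation-model weights here
encoder.fc = nn.Identity()            # expose 512-dim features
for p in encoder.parameters():
    p.requires_grad = False           # freeze the backbone

head = nn.Linear(512, 2)              # small task-specific classification head
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(16, 3, 224, 224)  # placeholder labelled batch
labels = torch.randint(0, 2, (16,))

encoder.eval()
with torch.no_grad():
    features = encoder(images)          # (16, 512) frozen features
for _ in range(20):                     # train the head only
    optimizer.zero_grad()
    loss = loss_fn(head(features), labels)
    loss.backward()
    optimizer.step()
print("final linear-probe training loss:", loss.item())
```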


Subject(s)
Artificial Intelligence, Eye Diseases, Retina, Humans, Eye Diseases/complications, Eye Diseases/diagnostic imaging, Heart Failure/complications, Heart Failure/diagnosis, Myocardial Infarction/complications, Myocardial Infarction/diagnosis, Retina/diagnostic imaging, Supervised Machine Learning
19.
Diabetes Care ; 46(10): 1728-1739, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37729502

ABSTRACT

Current guidelines recommend that individuals with diabetes receive yearly eye exams for detection of referable diabetic retinopathy (DR), one of the leading causes of new-onset blindness. To address the immense screening burden, artificial intelligence (AI) algorithms have been developed to autonomously screen for DR from fundus photography without human input. Over the last 10 years, many AI algorithms have achieved good sensitivity and specificity (>85%) for detection of referable DR compared with human graders; however, many questions still remain. In this narrative review on AI in DR screening, we discuss key concepts in AI algorithm development as a background for understanding the algorithms. We present the AI algorithms that have been prospectively validated against human graders and demonstrate the variability of reference standards and cohort demographics. We review the limited head-to-head validation studies in which investigators attempt to directly compare the available algorithms. Next, we discuss the literature regarding cost-effectiveness, equity and bias, and medicolegal considerations, all of which play a role in the implementation of these AI algorithms in clinical practice. Lastly, we highlight ongoing efforts to bridge gaps in AI model data sets to pursue equitable development and delivery.
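The sensitivity and specificity figures quoted above are computed against a human-grader reference standard; a minimal sketch with hypothetical labels is shown below.

```python
# Sketch: sensitivity and specificity of an AI grader versus a human-graded
# reference standard. Labels are hypothetical.
from sklearn.metrics import confusion_matrix

reference = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # human-graded referable DR (1 = referable)
ai_output = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # AI algorithm output

tn, fp, fn, tp = confusion_matrix(reference, ai_output).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"Sensitivity: {sensitivity:.0%}  Specificity: {specificity:.0%}")
```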


Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Humans, Artificial Intelligence, Diabetic Retinopathy/diagnosis, Prospective Studies, Cost-Benefit Analysis, Algorithms
20.
JAMA Ophthalmol ; 141(8): 776-783, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37471084

ABSTRACT

Importance: Recently, several states have granted optometrists privileges to perform select laser procedures (laser peripheral iridotomy, selective laser trabeculoplasty, and YAG laser capsulotomy) with the aim of increasing access. However, whether these changes are associated with increased access to these procedures among each state's Medicare population has not been evaluated. Objective: To compare patient access to laser surgery eye care by estimated travel time and 30-minute proximity to an optometrist or ophthalmologist. Design, Setting, and Participants: This retrospective cohort database study used Medicare Part B claims data from 2016 through 2020 for patients accessing new patient or laser eye care (laser peripheral iridotomy, selective laser trabeculoplasty, YAG) from optometrists or ophthalmologists in Oklahoma, Kentucky, Louisiana, Arkansas, and Missouri. Analysis took place between December 2021 and March 2023. Main Outcomes and Measures: Percentage of each state's Medicare population within a 30-minute travel time (isochrone) of an optometrist or ophthalmologist, based on US census block group population and estimated travel time from patient to health care professional. Results: The analytic cohort consisted of 1 564 307 individual claims. Isochrones show that optometrists performing laser eye surgery cover a geographic area similar to that covered by ophthalmologists. Less than 5% of the population had only optometrists (no ophthalmologists) within a 30-minute drive in every state except for Oklahoma for YAG (301 470 [7.6%]) and selective laser trabeculoplasty (371 097 [9.4%]). Patients had a longer travel time to receive all laser procedures from optometrists than ophthalmologists in Kentucky: the shortest median (IQR) drive time for an optometrist-performed procedure was 49.0 (18.4-71.7) minutes for YAG, and the longest median (IQR) drive time for an ophthalmologist-performed procedure was 22.8 (12.1-41.4) minutes, also for YAG. The median (IQR) driving time for YAG in Oklahoma was 26.6 (12.2-56.9) minutes for optometrists vs 22.0 (11.2-40.8) minutes for ophthalmologists, and in Arkansas it was 90.0 (16.2-93.2) minutes for optometrists vs 26.5 (11.8-51.6) minutes for ophthalmologists. In Louisiana, the longest median (IQR) travel time to receive laser procedures from optometrists was for YAG, at 18.5 (7.6-32.6) minutes, and the shortest drive to receive procedures from ophthalmologists was for YAG, at 20.5 (11.7-39.7) minutes. Conclusions and Relevance: Although this study did not assess impact on quality of care, expansion of laser eye surgery privileges to optometrists was not found to lead to shorter travel times to receive care or to a meaningful increase in the percentage of the population with nearby health care professionals.
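A greatly simplified sketch of the 30-minute-proximity measure is given below: it approximates drive time with straight-line (haversine) distance at an assumed average speed, whereas the study used road-network travel times and isochrones. Coordinates, populations, and the speed are hypothetical.

```python
# Simplified proximity sketch: share of block-group population within an
# approximate 30-minute drive of the nearest clinician. All values are hypothetical.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

block_groups = [  # (population, lat, lon): hypothetical block-group centroids
    (1200, 35.47, -97.52), (800, 36.15, -95.99), (450, 34.60, -98.39),
]
providers = [(35.49, -97.55), (36.10, -96.00)]   # hypothetical clinician locations
avg_speed_kmh = 60.0                             # assumed average driving speed

covered = 0
total = sum(pop for pop, _, _ in block_groups)
for pop, lat, lon in block_groups:
    minutes = min(haversine_km(lat, lon, plat, plon) / avg_speed_kmh * 60
                  for plat, plon in providers)
    if minutes <= 30:
        covered += pop
print(f"Share of population within a 30-minute drive: {covered / total:.0%}")
```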


Subject(s)
Health Equity, Laser Therapy, Medicare Part B, Optometrists, Aged, Humans, United States, Retrospective Studies