Results 1 - 20 of 55
1.
JMIR Diabetes ; 9: e59867, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39226095

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) affects about 25% of people with diabetes in Canada. Early detection of DR is essential for preventing vision loss. OBJECTIVE: We evaluated the real-world performance of an artificial intelligence (AI) system that analyzes fundus images for DR screening in a Quebec tertiary care center. METHODS: We prospectively recruited adult patients with diabetes at the Centre hospitalier de l'Université de Montréal (CHUM) in Montreal, Quebec, Canada. Patients underwent dual-pathway screening: first by the Computer Assisted Retinal Analysis (CARA) AI system (index test), then by standard ophthalmological examination (reference standard). We measured the AI system's sensitivity and specificity for detecting referable disease at the patient level, along with its performance for detecting any retinopathy and diabetic macular edema (DME) at the eye level, and potential cost savings. RESULTS: This study included 115 patients. CARA demonstrated a sensitivity of 87.5% (95% CI 71.9-95.0) and specificity of 66.2% (95% CI 54.3-76.3) for detecting referable disease at the patient level. For any retinopathy detection at the eye level, CARA showed 88.2% sensitivity (95% CI 76.6-94.5) and 71.4% specificity (95% CI 63.7-78.1). For DME detection, CARA had 100% sensitivity (95% CI 64.6-100) and 81.9% specificity (95% CI 75.6-86.8). Potential yearly savings from implementing CARA at the CHUM were estimated at CAD $245,635 (US $177,643.23, as of July 26, 2024) considering 5000 patients with diabetes. CONCLUSIONS: Our study indicates that integrating a semiautomated AI system for DR screening demonstrates high sensitivity for detecting referable disease in a real-world setting. This system has the potential to improve screening efficiency and reduce costs at the CHUM, but more work is needed to validate it.
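The patient-level estimates above come down to four confusion counts. The sketch below uses hypothetical counts (back-calculated so the sensitivity matches the reported 87.5% over 115 patients; the paper's exact counts and CI method are not given) and the Wilson score interval, one common choice for a binomial 95% CI:

```python
from math import sqrt

def wilson_ci(successes, total, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

# Hypothetical screening outcome for 115 patients:
# tp = referable patients flagged, fn = referable patients missed,
# tn = non-referable patients cleared, fp = non-referable patients flagged.
tp, fn, tn, fp = 28, 4, 55, 28

sensitivity = tp / (tp + fn)              # 0.875
specificity = tn / (tn + fp)              # ~0.663
sens_lo, sens_hi = wilson_ci(tp, tp + fn)
```

With these counts the sensitivity CI works out to roughly 71.9%-95.0%, the same shape as the interval quoted in the abstract.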

2.
Ophthalmol Sci ; 4(6): 100566, 2024.
Article in English | MEDLINE | ID: mdl-39139546

ABSTRACT

Objective: Recent developments in artificial intelligence (AI) have positioned it to transform several stages of the clinical trial process. In this study, we explore the role of AI in clinical trial recruitment of individuals with geographic atrophy (GA), an advanced stage of age-related macular degeneration, amidst numerous ongoing clinical trials for this condition. Design: Cross-sectional study. Subjects: Retrospective dataset from the INSIGHT Health Data Research Hub at Moorfields Eye Hospital in London, United Kingdom, including 306 651 patients (602 826 eyes) with suspected retinal disease who underwent OCT imaging between January 1, 2008 and April 10, 2023. Methods: A deep learning model was trained on OCT scans to identify patients potentially eligible for GA trials, using AI-generated segmentations of retinal tissue. This method's efficacy was compared against a traditional keyword-based electronic health record (EHR) search. A clinical validation with fundus autofluorescence (FAF) images was performed to calculate the positive predictive value of this approach, by comparing AI predictions with expert assessments. Main Outcome Measures: The primary outcomes included the positive predictive value of AI in identifying trial-eligible patients, and the secondary outcome was the intraclass correlation between GA areas segmented on FAF by experts and AI-segmented OCT scans. Results: The AI system shortlisted a larger number of eligible patients with greater precision (1139, positive predictive value: 63%; 95% confidence interval [CI]: 54%-71%) compared with the EHR search (693, positive predictive value: 40%; 95% CI: 39%-42%). A combined AI-EHR approach identified 604 eligible patients with a positive predictive value of 86% (95% CI: 79%-92%). Intraclass correlation of GA area segmented on FAF versus AI-segmented area on OCT was 0.77 (95% CI: 0.68-0.84) for cases meeting trial criteria. 
The AI also adjusts to the distinct imaging criteria from several clinical trials, generating tailored shortlists ranging from 438 to 1817 patients. Conclusions: This study demonstrates the potential for AI in facilitating automated prescreening for clinical trials in GA, enabling site feasibility assessments, data-driven protocol design, and cost reduction. Once treatments are available, similar AI systems could also be used to identify individuals who may benefit from treatment. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
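The precision gain from the combined AI-EHR approach is essentially a set intersection: a patient only stays on the shortlist if both screens flag them, so a false positive must fool both. A toy sketch with invented patient IDs (not the study's data):

```python
# Hypothetical shortlists of patient IDs from two independent, noisy screens.
ai_shortlist = {1, 2, 3, 4, 5, 6, 7, 8}    # flagged by the imaging model
ehr_shortlist = {3, 4, 5, 6, 9, 10}        # flagged by keyword search
truly_eligible = {2, 3, 4, 5, 9}           # expert-confirmed eligibility

def ppv(shortlist, eligible):
    """Positive predictive value: confirmed cases / shortlisted cases."""
    return len(shortlist & eligible) / len(shortlist)

ppv_ai = ppv(ai_shortlist, truly_eligible)                        # 4/8 = 0.50
ppv_ehr = ppv(ehr_shortlist, truly_eligible)                      # 4/6 ~ 0.67
ppv_combined = ppv(ai_shortlist & ehr_shortlist, truly_eligible)  # 3/4 = 0.75
```

The intersection shortlist is smaller but purer, mirroring the 63%/40% single-screen versus 86% combined PPVs reported above.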

3.
Asia Pac J Ophthalmol (Phila) ; 13(4): 100087, 2024.
Article in English | MEDLINE | ID: mdl-39069106

ABSTRACT

PURPOSE: Saliency maps (SMs) allow clinicians to better understand the opaque decision-making process of artificial intelligence (AI) models by visualising the features that drive predictions, ultimately improving interpretability and confidence. In this work, we review the use case for SMs, exploring their impact on clinicians' understanding of, and trust in, AI models. We use the following ophthalmic conditions as examples: (1) glaucoma, (2) myopia, (3) age-related macular degeneration (AMD), and (4) diabetic retinopathy (DR). METHOD: A multi-field search of MEDLINE, Embase, and Web of Science was conducted using specific keywords. Only studies on the use of SMs in glaucoma, myopia, AMD, or DR were considered for inclusion. RESULTS: Findings reveal that SMs are often used to validate AI models and advocate for their adoption, potentially leading to biased claims. The technical limitations of SMs are frequently overlooked, and assessments of their quality and relevance tend to be superficial. Uncertainties persist regarding the role of saliency maps in building trust in AI. It is crucial to improve understanding of SMs' technical constraints and to evaluate their quality, impact, and suitability for specific tasks more rigorously. Establishing a standardised framework for selecting and assessing SMs, as well as exploring their relationship with other sources of reliability (e.g. safety and generalisability), is essential for enhancing clinicians' trust in AI. CONCLUSION: We conclude that SMs, in their current forms, are not beneficial for interpretability and trust-building purposes. Instead, SMs may confer benefits for model debugging, model performance enhancement, and hypothesis testing (e.g. novel biomarkers).
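For readers unfamiliar with how saliency maps are produced, one of the simplest model-agnostic approaches is occlusion: mask each region in turn and record the drop in the model's score. The sketch below uses a toy scoring function (not a real retinal model) purely to illustrate the mechanics, and incidentally the faithfulness question the review raises, since here we can verify the map highlights the only pixel the model uses:

```python
def occlusion_saliency(image, model, baseline=0.0):
    """Crude occlusion map: score drop when each pixel is masked.

    `image` is a 2D list of floats; `model` maps an image to a scalar score.
    Larger saliency values mean the region mattered more to the prediction.
    """
    rows, cols = len(image), len(image[0])
    base_score = model(image)
    saliency = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            masked = [row[:] for row in image]   # copy, then occlude one pixel
            masked[r][c] = baseline
            saliency[r][c] = base_score - model(masked)
    return saliency

# Toy "model": the prediction depends only on the top-left pixel,
# so a faithful saliency map should highlight exactly that pixel.
toy_model = lambda img: img[0][0] * 2.0
img = [[1.0, 1.0], [1.0, 1.0]]
sal = occlusion_saliency(img, toy_model)   # [[2.0, 0.0], [0.0, 0.0]]
```

Real systems typically use gradient-based methods (e.g. Grad-CAM) rather than exhaustive occlusion, but the interpretive caveats discussed above apply to both.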


Subject(s)
Artificial Intelligence , Ophthalmologists , Humans , Trust , Glaucoma/physiopathology
4.
Br J Ophthalmol ; 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38925907

ABSTRACT

The rapid advancements in generative artificial intelligence are set to significantly influence the medical sector, particularly ophthalmology. Generative adversarial networks and diffusion models enable the creation of synthetic images, aiding the development of deep learning models tailored for specific imaging tasks. Additionally, the advent of multimodal foundational models, capable of generating images, text and videos, presents a broad spectrum of applications within ophthalmology. These range from enhancing diagnostic accuracy to improving patient education and training healthcare professionals. Despite the promising potential, this area of technology is still in its infancy, and there are several challenges to be addressed, including data bias, safety concerns and the practical implementation of these technologies in clinical settings.

5.
Int Ophthalmol ; 44(1): 254, 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38909150

ABSTRACT

PURPOSE: To assess the quality of hydroxychloroquine (HCQ)-induced retinopathy screening at a Canadian tertiary center, focusing on risk factor documentation within the electronic health record, in accordance with the 2016 American Academy of Ophthalmology (AAO) guidelines. METHODS: We performed a retrospective quality assessment study based on chart review of patients who underwent screening for HCQ-induced retinopathy at the Centre Hospitalier de l'Université de Montréal (CHUM) from 2016 to 2019. We evaluated four key risk factors for HCQ-induced retinopathy: daily dose, duration of use, renal disease, and tamoxifen use, using a three-tier grading system (ideal, adequate, inadequate) for documentation assessment. Pareto and root cause analyses were conducted to identify potential improvement solutions. RESULTS: Documentation quality varied: daily dose documentation was 33% ideal, 31% adequate, and 36% inadequate. Duration of use documentation was 75% ideal, 2% adequate, and 24% inadequate. Renal disease documentation was only 6% ideal, with 62% adequate and 32% of charts lacking any past medical history. Among women's charts, tamoxifen use was not documented at all, although 65% adequately documented medication lists. Pareto analysis indicated that addressing renal disease and tamoxifen documentation could eliminate 64% of non-ideal records, and additionally improving daily dose documentation could raise this figure to 90%. CONCLUSION: Accurate documentation of key risk factors is critical for HCQ-induced retinopathy screening, as it determines both when exams are initiated and how often they are repeated. Our study identifies potential improvements in the screening process at the hospital, referring physician, and ophthalmologist levels. Implementing integrated pathways could enhance patient experience and screening effectiveness.
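A Pareto analysis of the kind described simply ranks causes of non-ideal records and accumulates their share until the "vital few" emerge. The counts below are hypothetical, chosen only so the cumulative percentages echo the 64% and 90% figures mentioned:

```python
# Hypothetical counts of non-ideal records attributable to each cause.
causes = {"renal disease": 90, "tamoxifen": 70, "daily dose": 65, "duration": 25}

def pareto(counts):
    """Rank causes by frequency and compute cumulative percentage share."""
    total = sum(counts.values())
    running, rows = 0, []
    for cause, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        running += n
        rows.append((cause, n, round(100 * running / total, 1)))
    return rows

table = pareto(causes)
# With these counts, the top two causes cover 64% of non-ideal records,
# and the top three cover 90%.
```

In a quality-improvement context the cumulative column tells you where fixing documentation practices yields the largest return.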


Asunto(s)
Antirreumáticos , Hospitales de Enseñanza , Hidroxicloroquina , Enfermedades de la Retina , Humanos , Hidroxicloroquina/efectos adversos , Hidroxicloroquina/administración & dosificación , Estudios Retrospectivos , Femenino , Enfermedades de la Retina/inducido químicamente , Enfermedades de la Retina/diagnóstico , Masculino , Persona de Mediana Edad , Antirreumáticos/efectos adversos , Antirreumáticos/administración & dosificación , Canadá , Anciano , Factores de Riesgo , Tamizaje Masivo/métodos , Adulto
6.
Br J Ophthalmol ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38834291

ABSTRACT

Foundation models represent a paradigm shift in artificial intelligence (AI), evolving from narrow models designed for specific tasks to versatile, generalisable models adaptable to a myriad of diverse applications. Ophthalmology as a specialty has the potential to act as an exemplar for other medical specialties, offering a blueprint for integrating foundation models broadly into clinical practice. This review hopes to serve as a roadmap for eyecare professionals seeking to better understand foundation models, while equipping readers with the tools to explore the use of foundation models in their own research and practice. We begin by outlining the key concepts and technological advances which have enabled the development of these models, providing an overview of novel training approaches and modern AI architectures. Next, we summarise existing literature on the topic of foundation models in ophthalmology, encompassing progress in vision foundation models, large language models and large multimodal models. Finally, we outline major challenges relating to privacy, bias and clinical validation, and propose key steps forward to maximise the benefit of this powerful technology.

7.
JAMA Ophthalmol ; 142(6): 573-576, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38696177

ABSTRACT

Importance: Vision-language models (VLMs) are a novel artificial intelligence technology capable of processing image and text inputs. While demonstrating strong generalist capabilities, their performance in ophthalmology has not been extensively studied. Objective: To assess the performance of the Gemini Pro VLM in expert-level tasks for macular diseases from optical coherence tomography (OCT) scans. Design, Setting, and Participants: This was a cross-sectional diagnostic accuracy study evaluating a generalist VLM on ophthalmology-specific tasks using the open-source Optical Coherence Tomography Image Database. The dataset included OCT B-scans from 50 unique patients: healthy individuals and those with macular hole, diabetic macular edema, central serous chorioretinopathy, and age-related macular degeneration. Each OCT scan was labeled for 10 key pathological features, referral recommendations, and treatments. The images were captured using a Cirrus high definition OCT machine (Carl Zeiss Meditec) at Sankara Nethralaya Eye Hospital, Chennai, India, and the dataset was published in December 2018. Image acquisition dates were not specified. Exposures: Gemini Pro, using a standard prompt to extract structured responses on December 15, 2023. Main Outcomes and Measures: The primary outcome was model responses compared against expert labels, calculating F1 scores for each pathological feature. Secondary outcomes included accuracy in diagnosis, referral urgency, and treatment recommendation. The model's internal concordance was evaluated by measuring the alignment between referral and treatment recommendations, independent of diagnostic accuracy. Results: The mean F1 score was 10.7% (95% CI, 2.4-19.2). Measurable F1 scores were obtained for macular hole (36.4%; 95% CI, 0-71.4), pigment epithelial detachment (26.1%; 95% CI, 0-46.2), subretinal hyperreflective material (24.0%; 95% CI, 0-45.2), and subretinal fluid (20.0%; 95% CI, 0-45.5). 
A correct diagnosis was achieved in 17 of 50 cases (34%; 95% CI, 22-48). Referral recommendations varied: 28 of 50 were correct (56%; 95% CI, 42-70), 10 of 50 were overcautious (20%; 95% CI, 10-32), and 12 of 50 were undercautious (24%; 95% CI, 12-36). Referral and treatment concordance were very high, with 48 of 50 (96%; 95% CI, 90-100) and 48 of 49 (98%; 95% CI, 94-100) correct answers, respectively. Conclusions and Relevance: In this study, a generalist VLM demonstrated limited vision capabilities for feature detection and management of macular disease. However, it showed low self-contradiction, suggesting strong language capabilities. As VLMs continue to improve, validating their performance on large benchmarking datasets will help ascertain their potential in ophthalmology.
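Per-feature F1 scores of the kind reported here come straight from true-positive, false-positive, and false-negative counts over the labeled features. A minimal sketch with hypothetical labels (feature names abbreviated, e.g. MH for macular hole, SRF for subretinal fluid; not the study's data):

```python
def f1_per_feature(y_true, y_pred):
    """F1 for each binary pathological feature.

    y_true / y_pred: lists of dicts mapping feature name -> 0/1.
    Returns feature -> F1, using 0.0 when the feature is never present
    and never predicted (a conservative convention).
    """
    scores = {}
    for f in y_true[0]:
        tp = sum(t[f] and p[f] for t, p in zip(y_true, y_pred))
        fp = sum((not t[f]) and p[f] for t, p in zip(y_true, y_pred))
        fn = sum(t[f] and (not p[f]) for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        scores[f] = 2 * tp / denom if denom else 0.0
    return scores

# Hypothetical expert labels vs. model outputs for two features on 3 scans
truth = [{"MH": 1, "SRF": 0}, {"MH": 0, "SRF": 1}, {"MH": 1, "SRF": 1}]
preds = [{"MH": 1, "SRF": 1}, {"MH": 1, "SRF": 1}, {"MH": 0, "SRF": 1}]
scores = f1_per_feature(truth, preds)   # MH: 0.5, SRF: 0.8
```

F1 balances precision and recall per feature, which is why a model can score near zero on rare features even while its overall accuracy looks reasonable.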


Asunto(s)
Tomografía de Coherencia Óptica , Tomografía de Coherencia Óptica/métodos , Humanos , Estudios Transversales , Inteligencia Artificial , Edema Macular/diagnóstico , Edema Macular/diagnóstico por imagen , Mácula Lútea/diagnóstico por imagen , Mácula Lútea/patología , Femenino , Reproducibilidad de los Resultados , Masculino , Retinopatía Diabética/diagnóstico , Enfermedades de la Retina/diagnóstico , Coriorretinopatía Serosa Central/diagnóstico , Degeneración Macular/diagnóstico , Perforaciones de la Retina/diagnóstico , Perforaciones de la Retina/diagnóstico por imagen
8.
Br J Ophthalmol ; 2024 May 07.
Article in English | MEDLINE | ID: mdl-38719344

ABSTRACT

Foundation models are the next generation of artificial intelligence, with the potential to provide novel use cases for healthcare. Large language models (LLMs), a type of foundation model, are capable of comprehending language and generating human-like text. Researchers and developers have been tuning LLMs to optimise their performance in specific tasks, such as medical challenge problems. Until recently, tuning required technical programming expertise, but the release of custom generative pre-trained transformers (GPTs) by OpenAI has allowed users to tune their own GPTs with natural language. This has the potential to democratise access to high-quality bespoke LLMs globally. In this review, we provide an overview of LLMs, how they are tuned and how custom GPTs work. We provide three use cases of custom GPTs in ophthalmology to demonstrate the versatility and effectiveness of these tools. First, we present 'EyeTeacher', an educational aid that generates questions from clinical guidelines to facilitate learning. Second, we build 'EyeAssistant', a clinical support tool that is tuned with clinical guidelines to respond to various physician queries. Lastly, we design 'The GPT for GA', which offers clinicians a comprehensive summary of emerging management strategies for geographic atrophy by analysing peer-reviewed documents. The review underscores the significance of custom instructions and information retrieval in tuning GPTs for specific tasks in ophthalmology. We also discuss the evaluation of LLM responses and address critical aspects such as privacy and accountability in their clinical application. Finally, we discuss their potential in ophthalmic education and clinical practice.
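The information-retrieval step that custom GPTs lean on can be pictured as: find the guideline passage most relevant to the query, then prepend it to the prompt before the model answers. The sketch below substitutes simple word overlap for the embedding search a real system would use, and the guideline snippets are invented for illustration:

```python
def retrieve(query, documents, k=1):
    """Rank snippets by word overlap with the query.

    A toy stand-in for the embedding-based retrieval a production
    system (e.g. a custom GPT's document search) would perform.
    """
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

# Invented guideline snippets, one per condition
guidelines = [
    "geographic atrophy monitor with fundus autofluorescence",
    "diabetic retinopathy screen annually with fundus photography",
    "glaucoma measure intraocular pressure at every visit",
]
context = retrieve("how often to screen diabetic retinopathy", guidelines)
# context[0] would be prepended to the prompt so the model's answer
# is grounded in the guideline text rather than its own recall.
```

Grounding answers in retrieved guideline text, rather than the model's parametric memory, is what makes the "tuning with clinical guidelines" described above possible without any gradient updates.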

9.
Am J Ophthalmol ; 265: 147-155, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38642698

ABSTRACT

PURPOSE: An increase in fungal and particularly filamentous keratitis has been observed in many geographic areas, mostly in contact lens wearers. This study seeks to characterize long-term trends in fungal keratitis in a continental climate area to provide guidance for diagnosis and treatment. DESIGN: Retrospective multicentric case series. METHODS: Cases of microbiology-confirmed fungal keratitis from 2003 to 2022 presenting to tertiary care centers across Canada were included. Charts were reviewed for patient demographics, risk factors, visual acuity, and treatments undertaken. RESULTS: A total of 138 patients were identified: 75 had yeast keratitis while 63 had filamentous keratitis. Patients with yeast keratitis had more ocular surface disease (79% vs 28%) while patients with filamentous keratitis wore more refractive contact lenses (78% vs 19%). Candida species accounted for 96% of all yeast identified, while Aspergillus (32%) and Fusarium (26%) were the most common filamentous fungi species. The mean duration of treatment was 81 ± 96 days. Patients with yeast keratitis did not have significantly improved visual acuity with medical treatment (1.8 ± 1 LogMAR to 1.9 ± 1.5 LogMAR, P = .9980), in contrast to patients with filamentous keratitis (1.4 ± 1.2 LogMAR to 1.1 ± 1.3 LogMAR, P = .0093). CONCLUSIONS: Fungal keratitis is increasing in incidence, with contact lenses emerging as one of the leading risk factors. Significant differences in the risk factors and visual outcomes exist between yeast keratitis and filamentous keratitis which may guide diagnosis and treatment.


Asunto(s)
Antifúngicos , Infecciones Fúngicas del Ojo , Agudeza Visual , Humanos , Estudios Retrospectivos , Infecciones Fúngicas del Ojo/epidemiología , Infecciones Fúngicas del Ojo/microbiología , Infecciones Fúngicas del Ojo/diagnóstico , Infecciones Fúngicas del Ojo/tratamiento farmacológico , Masculino , Femenino , Canadá/epidemiología , Agudeza Visual/fisiología , Persona de Mediana Edad , Antifúngicos/uso terapéutico , Adulto , Hongos/aislamiento & purificación , Queratitis/epidemiología , Queratitis/microbiología , Queratitis/diagnóstico , Úlcera de la Córnea/microbiología , Úlcera de la Córnea/epidemiología , Úlcera de la Córnea/diagnóstico , Factores de Riesgo , Anciano , Incidencia , Adulto Joven
10.
Int J Retina Vitreous ; 10(1): 37, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38671486

ABSTRACT

BACKGROUND: Code-free deep learning (CFDL) is a novel tool in artificial intelligence (AI). This study directly compared the discriminative performance of CFDL models designed by ophthalmologists without coding experience against bespoke models designed by AI experts in detecting retinal pathologies from optical coherence tomography (OCT) videos and fovea-centered images. METHODS: Using the same internal dataset of 1,173 OCT macular videos and fovea-centered images, model development was performed simultaneously but independently by an ophthalmology resident (CFDL models) and a postdoctoral researcher with expertise in AI (bespoke models). We designed a multi-class model to categorize videos and fovea-centered images into five labels: normal retina, macular hole, epiretinal membrane, wet age-related macular degeneration and diabetic macular edema. We qualitatively compared point estimates of the performance metrics of the CFDL and bespoke models. RESULTS: For videos, the CFDL model demonstrated excellent discriminative performance, even outperforming the bespoke models on some metrics: the area under the precision-recall curve was 0.984 (vs. 0.901), precision and sensitivity were both 94.1% (vs. 94.2%) and accuracy was 94.1% (vs. 96.7%). The fovea-centered CFDL model performed better overall than the video-based model and was as accurate as the best bespoke model. CONCLUSION: This comparative study demonstrated that code-free models created by clinicians without coding expertise classify various retinal pathologies from OCT videos and images as accurately as expert-designed bespoke models. CFDL represents a step towards the democratization of AI in medicine, although its numerous limitations must be carefully addressed to ensure its effective application in healthcare.
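The multi-class metrics being compared derive from a confusion matrix over the five labels. A hedged sketch with an invented matrix (rows are true labels, columns are predictions; the label abbreviations and counts are illustrative, not the study's data):

```python
def per_class_metrics(confusion, labels):
    """Per-class precision/recall and overall accuracy from a square
    confusion matrix (rows = true label, columns = predicted label)."""
    n = len(labels)
    metrics = {}
    for i, lab in enumerate(labels):
        tp = confusion[i][i]
        fp = sum(confusion[r][i] for r in range(n)) - tp   # column minus diagonal
        fn = sum(confusion[i]) - tp                        # row minus diagonal
        metrics[lab] = {
            "precision": tp / (tp + fp) if tp + fp else 0.0,
            "recall": tp / (tp + fn) if tp + fn else 0.0,
        }
    accuracy = sum(confusion[i][i] for i in range(n)) / sum(map(sum, confusion))
    return metrics, accuracy

labels = ["normal", "MH", "ERM", "wet AMD", "DME"]
# Hypothetical confusion matrix for a 5-class OCT classifier, 100 scans
cm = [
    [18, 1, 1, 0, 0],
    [0, 19, 0, 1, 0],
    [1, 0, 18, 0, 1],
    [0, 0, 0, 20, 0],
    [0, 0, 1, 0, 19],
]
metrics, acc = per_class_metrics(cm, labels)   # acc = 0.94
```

Point estimates like these are what a CFDL platform reports automatically, which is why the comparison above could be made "qualitatively" without access to either model's internals.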

11.
Transl Vis Sci Technol ; 13(4): 5, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564199

ABSTRACT

Purpose: To develop and validate RetinaVR, an affordable, portable, and fully immersive virtual reality (VR) simulator for vitreoretinal surgery training. Methods: We built RetinaVR as a standalone app on the Meta Quest 2 VR headset. It simulates core vitrectomy, peripheral shaving, membrane peeling, and endolaser application. In a validation study (n = 20 novices and experts), we measured efficiency, safety, and module-specific performance. We first explored unadjusted performance differences through an effect size analysis. Then, a linear mixed-effects model was used to isolate the impact of age, sex, expertise, and experimental run on performance. Results: Experts were significantly safer in membrane peeling, but not when controlling for other factors. Experts were significantly better in core vitrectomy, even when controlling for other factors (P = 0.014). Heatmap analysis of endolaser applications showed more consistent retinopexy among experts. Age had no impact on performance, but male subjects were faster in peripheral shaving (P = 0.036) and membrane peeling (P = 0.004). A learning curve was demonstrated, with improving efficiency at each experimental run for all modules. Repetition also led to improved safety during membrane peeling (P = 0.003), and better task-specific performance during core vitrectomy (P = 0.038), peripheral shaving (P = 0.011), and endolaser application (P = 0.043). User experience was rated favorable to excellent across all domains. Conclusions: RetinaVR demonstrates potential as an affordable, portable training tool for vitreoretinal surgery. Its construct validity is established, with performance varying with experimental run, sex, and level of expertise. Translational Relevance: Fully immersive VR technology could revolutionize surgical training, making it more accessible, especially in developing nations.
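The unadjusted effect-size analysis mentioned here typically means something like Cohen's d: the difference between group means expressed in units of pooled standard deviation. A sketch with invented task-completion times (the mixed-effects modeling step would require a stats package and is not shown):

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled = sqrt(((na - 1) * stdev(group_a) ** 2 +
                   (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2))
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical task-completion times in seconds
novices = [130.0, 140.0, 150.0, 120.0]
experts = [100.0, 110.0, 105.0, 95.0]
d = cohens_d(novices, experts)   # positive: novices were slower
```

By convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 or more large; unlike a p-value, it does not shrink just because the sample is small.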


Asunto(s)
Realidad Virtual , Cirugía Vitreorretiniana , Humanos , Masculino
12.
Graefes Arch Clin Exp Ophthalmol ; 262(9): 2785-2798, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38446200

ABSTRACT

AIM: Code-free deep learning (CFDL) allows clinicians without coding expertise to build high-quality artificial intelligence (AI) models without writing code. In this review, we comprehensively examine the advantages that CFDL offers over bespoke, expert-designed deep learning (DL). As exemplars, we use the following tasks: (1) diabetic retinopathy screening, (2) retinal multi-disease classification, (3) surgical video classification, (4) oculomics and (5) resource management. METHODS: We searched MEDLINE (through PubMed) for studies reporting CFDL applications in ophthalmology from inception to June 25, 2023, using the keywords 'autoML' AND 'ophthalmology'. After identifying 5 CFDL studies addressing our target tasks, we performed a subsequent search to find corresponding bespoke DL studies focused on the same tasks. Only English-language articles with full text available were included. Reviews, editorials, protocols and case reports or case series were excluded. We identified ten relevant studies for this review. RESULTS: Overall, the studies were optimistic about CFDL's advantages over bespoke DL in the five ophthalmological tasks. However, much of this discussion was one-dimensional and left wide applicability gaps. A rigorous assessment of whether CFDL is preferable to bespoke DL requires a context-specific, weighted consideration of clinician intent, patient acceptance and cost-effectiveness. We conclude that CFDL and bespoke DL each have unique assets and that neither can replace the other; their benefits must be weighed on a case-by-case basis. Future studies should analyse both techniques along multiple dimensions and address limitations such as suboptimal dataset quality, poorly characterised applicability and unregulated study designs. CONCLUSION: For clinicians without DL expertise and without easy access to AI experts, CFDL allows the prototyping of novel clinical AI systems. CFDL models may complement bespoke models, depending on the task at hand. A multidimensional, weighted evaluation of the factors involved in implementing these models for a designated task is warranted.


Asunto(s)
Aprendizaje Profundo , Oftalmología , Humanos
13.
Br J Ophthalmol ; 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38365427

ABSTRACT

BACKGROUND/AIMS: This study assesses the proficiency of Generative Pre-trained Transformer (GPT)-4 in answering questions about complex clinical ophthalmology cases. METHODS: We tested GPT-4 on 422 Journal of the American Medical Association Ophthalmology Clinical Challenges, and prompted the model to determine the diagnosis (open-ended question) and identify the next step (multiple-choice question). We generated responses using two zero-shot prompting strategies, including zero-shot plan-and-solve+ (PS+), to improve the reasoning of the model. We compared the best-performing model to human graders in a benchmarking effort. RESULTS: Using PS+ prompting, GPT-4 achieved mean accuracies of 48.0% (95% CI (43.1% to 52.9%)) and 63.0% (95% CI (58.2% to 67.6%)) in diagnosis and next step, respectively. Next-step accuracy did not significantly differ by subspecialty (p=0.44). However, diagnostic accuracy in pathology and tumours was significantly higher than in uveitis (p=0.027). When the diagnosis was accurate, 75.2% (95% CI (68.6% to 80.9%)) of the next steps were correct. Conversely, when the diagnosis was incorrect, 50.2% (95% CI (43.8% to 56.6%)) of the next steps were accurate. The next step was three times more likely to be accurate when the initial diagnosis was correct (p<0.001). No significant differences were observed in diagnostic accuracy and decision-making between board-certified ophthalmologists and GPT-4. Among trainees, senior residents outperformed GPT-4 in diagnostic accuracy (p≤0.001 and 0.049) and in accuracy of next step (p=0.002 and 0.020). CONCLUSION: Improved prompting enhances GPT-4's performance in complex clinical situations, although it does not surpass ophthalmology trainees in our context. Specialised large language models hold promise for future assistance in medical decision-making and diagnosis.
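The conditional result (next-step accuracy split by whether the diagnosis was right) is a simple stratification of paired outcomes. A sketch over invented case outcomes, not the study's data:

```python
def conditional_next_step_accuracy(cases):
    """Split next-step accuracy by whether the diagnosis was correct.

    `cases` is a list of (diagnosis_correct, next_step_correct) booleans.
    """
    given_right = [s for d, s in cases if d]
    given_wrong = [s for d, s in cases if not d]
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return rate(given_right), rate(given_wrong)

# Hypothetical outcomes for 8 cases
cases = [(True, True), (True, True), (True, False), (True, True),
         (False, True), (False, False), (False, False), (False, True)]
acc_right, acc_wrong = conditional_next_step_accuracy(cases)   # 0.75, 0.5
```

Comparing the two rates is what supports a claim like "the next step was more likely to be accurate when the initial diagnosis was correct."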

14.
Ocul Immunol Inflamm ; : 1-7, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38411944

ABSTRACT

PURPOSE: Automated machine learning (AutoML) allows clinicians without coding experience to build their own deep learning (DL) models. This study assesses the performance of AutoML in detecting and localizing ocular toxoplasmosis (OT) lesions in fundus images and compares it to expert-designed models. METHODS: Ophthalmology trainees without coding experience designed AutoML models using 304 labelled fundus images. We designed a binary model to differentiate OT from normal and an object detection model to visually identify OT lesions. RESULTS: The AutoML model had an area under the precision-recall curve (AuPRC) of 0.945, sensitivity of 100%, specificity of 83% and accuracy of 93.5% (vs. 94%, 86% and 91% for the bespoke models). The AutoML object detection model had an AuPRC of 0.600 with a precision of 93.3% and recall of 56%. Using a diversified external validation dataset, our model correctly labeled 15 normal fundus images (100%) and 15 OT fundus images (100%), with a mean confidence score of 0.965 and 0.963, respectively. CONCLUSION: AutoML models created by ophthalmologists without coding experience were comparable or better than expert-designed bespoke models trained on the same dataset. By creatively using AutoML to identify OT lesions on fundus images, our approach brings the whole spectrum of DL model design into the hands of clinicians.

16.
Saudi J Ophthalmol ; 37(3): 200-206, 2023.
Article in English | MEDLINE | ID: mdl-38074296

ABSTRACT

PURPOSE: Automated machine learning (AutoML) allows clinicians without coding experience to build their own deep learning (DL) models. This study assesses the performance of AutoML in diagnosing trachoma from field-collected conjunctival images and compares it to expert-designed DL models. METHODS: Two ophthalmology trainees without coding experience carried out AutoML model design using a publicly available image data set of field-collected conjunctival images (1656 labeled images). We designed two binary models to differentiate trachomatous inflammation-follicular (TF) and trachomatous inflammation-intense (TI) from normal. We then integrated an Edge model into an Android application using Google Firebase to make offline diagnoses. RESULTS: The AutoML models showed high diagnostic properties in the classification tasks that were comparable or better than the bespoke DL models. The TF model had an area under the precision-recall curve (AuPRC) of 0.945, sensitivity of 87%, specificity of 88%, and accuracy of 88%. The TI model had an AuPRC of 0.975, sensitivity of 95%, specificity of 92%, and accuracy of 93%. Through the Android app and using an external dataset, the AutoML model had an AuPRC of 0.875, sensitivity of 83%, specificity of 81%, and accuracy of 83%. CONCLUSION: AutoML models created by ophthalmologists without coding experience were comparable or better than bespoke models trained on the same dataset. Using AutoML to create models and edge computing to deploy them into smartphone-based apps, our approach brings the whole spectrum of DL model design into the hands of clinicians. This approach has the potential to democratize access to artificial intelligence.

17.
Article in English | MEDLINE | ID: mdl-38100770

ABSTRACT

PURPOSE: To demonstrate the role of optical coherence tomography angiography (OCT-A) in the management of dome-shaped maculopathy (DSM). METHODS: Retrospective case review. RESULTS: A 52-year-old woman was referred to our retina service for blurry vision and potential bilateral choroidal neovascular membrane (CNVM). Initial spectacle-corrected visual acuity (VA) was 20/30-2 in the right eye (RE) and 20/30+2 in the left eye (LE). DSM was diagnosed on OCT. In both eyes, an OCT B-scan passing through the fovea showed shallow, irregular retinal pigment epithelium (RPE) elevation (SIRE) suspicious for occult (type 1) CNVM. The outer retina and choriocapillaris angiograms showed a zone of nonexudative CNVM in the RE and exudative CNVM in the LE. Given the persistent subretinal fluid (SRF) with CNVM in the LE, we elected to perform intravitreal injections of ranibizumab 0.5 mg on a treat-and-extend regimen. At the most recent follow-up, the best-corrected VA had improved to 20/20 in the LE with no persisting SRF. CONCLUSION: We present a case in which assessing disease progression, detecting the development of CNVM and evaluating treatment efficacy were all achieved through the application of novel OCT-A technology. This diagnostic tool may help guide clinicians in their management of DSM, as demonstrated by our experience. OCT-A can also reveal nonexudative CNVM lesions that may be missed on traditional imaging assessments.

18.
Case Rep Ophthalmol ; 14(1): 591-595, 2023.
Article in English | MEDLINE | ID: mdl-37915517

ABSTRACT

Paracentral acute middle maculopathy (PAMM) has recently been described following episodes of migraine. In this report, we present a case of PAMM and describe the role of en face optical coherence tomography (OCT). A 75-year-old woman presented with subjective vision loss in the right eye over a 2-week period. She had a history of migraines with aura, presenting with progressive spreading of positive and negative visual phenomena that usually resolved in under an hour. Her recent migraine episode was "atypical," as it lasted 3 days. She also experienced a monocular central scotoma with "black spots and jagged, zig-zag edges." The positive auras resolved spontaneously, whereas the central scotoma persisted. Spectral-domain OCT showed an area of perifoveal hyperreflectivity extending from the inner plexiform to the outer plexiform layer, consistent with PAMM. The mid-retinal en face OCT and OCT angiography demonstrated an ovoid focal patch of hyperreflectivity with flow interruption, characteristic of globular PAMM. We diagnosed her with migraine with aura and presumed retinal vasospasm, complicated by retinal ischemia in the form of globular PAMM. Acute retinal ischemia, which may require urgent neurovascular workup and evaluation for giant cell arteritis, must be considered in patients with migraine and persistent visual changes. Diagnosing PAMM requires a high index of suspicion, since it can present without significant changes in visual acuity, visual fields, or fundus photographs. With en face OCT in the clinician's diagnostic armamentarium, even subtle retinal ischemic changes such as PAMM become evident.

19.
Br J Ophthalmol ; 2023 Nov 03.
Article in English | MEDLINE | ID: mdl-37923374

ABSTRACT

BACKGROUND: Evidence on the performance of Generative Pre-trained Transformer 4 (GPT-4), a large language model (LLM), in the ophthalmology question-answering domain is needed. METHODS: We tested GPT-4 on two 260-question multiple-choice question sets from the Basic and Clinical Science Course (BCSC) Self-Assessment Program and the OphthoQuestions question banks. We compared the accuracy of GPT-4 models with varying temperatures (a creativity setting) and evaluated their responses on a subset of questions. We also compared the best-performing GPT-4 model with GPT-3.5 and with historical human performance. RESULTS: GPT-4-0.3 (GPT-4 with a temperature of 0.3) achieved the highest accuracy among GPT-4 models, with 75.8% on the BCSC set and 70.0% on the OphthoQuestions set. The combined accuracy was 72.9%, an 18.3-percentage-point raw improvement over GPT-3.5 (p<0.001). Human graders preferred responses from models with a temperature higher than 0 (more creative). Exam section, question difficulty, and cognitive level were all predictive of GPT-4-0.3 answer accuracy. GPT-4-0.3's performance was numerically superior to human performance on the BCSC (75.8% vs 73.3%) and OphthoQuestions (70.0% vs 63.0%) sets, but the differences were not statistically significant (p=0.55 and p=0.09). CONCLUSION: GPT-4, an LLM not trained specifically on ophthalmology data, performs significantly better than its predecessor on simulated ophthalmology board-style exams. Remarkably, its performance tended to exceed historical human performance, although the difference was not statistically significant in our study.
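The abstract reports the 18.3-point GPT-4 vs GPT-3.5 gap as significant at p<0.001 but does not state which statistical test was used. As a rough, unpaired illustration (a paired test such as McNemar's would be more appropriate when both models answer the same 520 questions), here is a stdlib-only two-proportion z-test on hypothetical correct-answer counts that match the reported accuracies:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: ~72.9% vs ~54.6% correct out of 520 questions each,
# matching the reported accuracies but not the actual per-question data.
z, p = two_proportion_z(379, 520, 284, 520)
print(f"z={z:.2f}, p={p:.1e}")
```

Even under this conservative unpaired assumption, a gap of that size on 520 questions is far beyond chance, consistent with the reported p<0.001.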

20.
Int J Med Inform ; 178: 105178, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37657204

ABSTRACT

BACKGROUND AND OBJECTIVE: The detection of retinal diseases using optical coherence tomography (OCT) images and videos is a concrete example of a data classification problem. In recent years, Transformer architectures have been successfully applied to a variety of real-world classification problems. Although they have shown impressive discriminative abilities compared with other state-of-the-art models, improving their performance is essential, especially in healthcare-related problems. METHODS: This paper presents an effective technique named model-based transformer (MBT). It builds on popular pre-trained transformer models, namely the Vision Transformer and Swin Transformer for OCT image classification and the Multiscale Vision Transformer for OCT video classification. The proposed approach represents OCT data using an approximate sparse representation technique, then estimates the optimal features and performs data classification. RESULTS: The experiments were carried out on three real-world retinal datasets. The results on OCT image and OCT video datasets show that the proposed method outperforms existing state-of-the-art deep learning approaches in terms of classification accuracy, precision, recall, F1-score, kappa, AUC-ROC, and AUC-PR. It can also boost the performance of existing transformer models, including the Vision Transformer and Swin Transformer for OCT image classification and the Multiscale Vision Transformer for OCT video classification. CONCLUSIONS: This work presents an approach for the automated detection of retinal diseases. Although deep neural networks have shown great potential in ophthalmology applications, our findings demonstrate, for the first time, a way to identify retinal pathologies using OCT videos instead of images. Moreover, our proposal can help researchers enhance the discriminative capacity of a variety of powerful deep learning models presented in published papers, which can be valuable for future directions in medical research and clinical practice.
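Of the metrics listed, Cohen's kappa is the least self-explanatory: it measures agreement between predicted and true labels beyond what the marginal class frequencies alone would produce by chance. A stdlib-only sketch with toy OCT-style class names (illustrative only; not the paper's data):

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts, pred_counts = Counter(y_true), Counter(y_pred)
    # Chance agreement expected from the marginal label frequencies.
    expected = sum(true_counts[c] * pred_counts[c] for c in true_counts) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy 3-class predictions (hypothetical OCT labels).
y_true = ["normal", "dme", "drusen", "normal", "dme", "drusen", "normal", "dme"]
y_pred = ["normal", "dme", "drusen", "normal", "drusen", "drusen", "normal", "dme"]
print(round(cohens_kappa(y_true, y_pred), 3))  # 0.814
```

A kappa near 1 indicates near-perfect agreement; a value near 0 means the classifier does no better than guessing from the class frequencies.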
