Results 1 - 20 of 50
1.
Br J Ophthalmol; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38834291

ABSTRACT

Foundation models represent a paradigm shift in artificial intelligence (AI), evolving from narrow models designed for specific tasks to versatile, generalisable models adaptable to a myriad of diverse applications. Ophthalmology as a specialty has the potential to act as an exemplar for other medical specialties, offering a blueprint for integrating foundation models broadly into clinical practice. This review hopes to serve as a roadmap for eyecare professionals seeking to better understand foundation models, while equipping readers with the tools to explore the use of foundation models in their own research and practice. We begin by outlining the key concepts and technological advances which have enabled the development of these models, providing an overview of novel training approaches and modern AI architectures. Next, we summarise existing literature on the topic of foundation models in ophthalmology, encompassing progress in vision foundation models, large language models and large multimodal models. Finally, we outline major challenges relating to privacy, bias and clinical validation, and propose key steps forward to maximise the benefit of this powerful technology.

2.
JAMA Ophthalmol; 2024 May 02.
Article in English | MEDLINE | ID: mdl-38696177

ABSTRACT

Importance: Vision-language models (VLMs) are a novel artificial intelligence technology capable of processing image and text inputs. While demonstrating strong generalist capabilities, their performance in ophthalmology has not been extensively studied. Objective: To assess the performance of the Gemini Pro VLM in expert-level tasks for macular diseases from optical coherence tomography (OCT) scans. Design, Setting, and Participants: This was a cross-sectional diagnostic accuracy study evaluating a generalist VLM on ophthalmology-specific tasks using the open-source Optical Coherence Tomography Image Database. The dataset included OCT B-scans from 50 unique patients: healthy individuals and those with macular hole, diabetic macular edema, central serous chorioretinopathy, and age-related macular degeneration. Each OCT scan was labeled for 10 key pathological features, referral recommendations, and treatments. The images were captured using a Cirrus high definition OCT machine (Carl Zeiss Meditec) at Sankara Nethralaya Eye Hospital, Chennai, India, and the dataset was published in December 2018. Image acquisition dates were not specified. Exposures: Gemini Pro, using a standard prompt to extract structured responses on December 15, 2023. Main Outcomes and Measures: The primary outcome was model responses compared against expert labels, calculating F1 scores for each pathological feature. Secondary outcomes included accuracy in diagnosis, referral urgency, and treatment recommendation. The model's internal concordance was evaluated by measuring the alignment between referral and treatment recommendations, independent of diagnostic accuracy. Results: The mean F1 score was 10.7% (95% CI, 2.4-19.2). Measurable F1 scores were obtained for macular hole (36.4%; 95% CI, 0-71.4), pigment epithelial detachment (26.1%; 95% CI, 0-46.2), subretinal hyperreflective material (24.0%; 95% CI, 0-45.2), and subretinal fluid (20.0%; 95% CI, 0-45.5). A correct diagnosis was achieved in 17 of 50 cases (34%; 95% CI, 22-48). Referral recommendations varied: 28 of 50 were correct (56%; 95% CI, 42-70), 10 of 50 were overcautious (20%; 95% CI, 10-32), and 12 of 50 were undercautious (24%; 95% CI, 12-36). Referral and treatment concordance were very high, with 48 of 50 (96%; 95% CI, 90-100) and 48 of 49 (98%; 95% CI, 94-100) correct answers, respectively. Conclusions and Relevance: In this study, a generalist VLM demonstrated limited vision capabilities for feature detection and management of macular disease. However, it showed low self-contradiction, suggesting strong language capabilities. As VLMs continue to improve, validating their performance on large benchmarking datasets will help ascertain their potential in ophthalmology.
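To make the primary outcome concrete, the sketch below computes a per-feature F1 score with a percentile-bootstrap 95% CI, assuming binary expert labels and model outputs for one pathological feature across 50 scans; the data and feature name are hypothetical stand-ins, not the study's records.

```python
# Minimal sketch (not the study's code): per-feature F1 with a percentile-bootstrap 95% CI.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def f1_with_bootstrap_ci(y_true, y_pred, n_boot=2000, alpha=0.05):
    """Point-estimate F1 plus a percentile bootstrap confidence interval."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    point = f1_score(y_true, y_pred, zero_division=0)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample scans with replacement
        boot.append(f1_score(y_true[idx], y_pred[idx], zero_division=0))
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)

# Hypothetical expert labels and model outputs for one feature (e.g., subretinal fluid).
expert = rng.integers(0, 2, 50)
model = rng.integers(0, 2, 50)
print(f1_with_bootstrap_ci(expert, model))
```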

3.
Br J Ophthalmol; 2024 May 07.
Article in English | MEDLINE | ID: mdl-38719344

ABSTRACT

Foundation models are the next generation of artificial intelligence that has the potential to provide novel use cases for healthcare. Large language models (LLMs), a type of foundation model, are capable of language comprehension and the ability to generate human-like text. Researchers and developers have been tuning LLMs to optimise their performance in specific tasks, such as medical challenge problems. Until recently, tuning required technical programming expertise, but the release of custom generative pre-trained transformers (GPTs) by OpenAI has allowed users to tune their own GPTs with natural language. This has the potential to democratise access to high-quality bespoke LLMs globally. In this review, we provide an overview of LLMs, how they are tuned and how custom GPTs work. We provide three use cases of custom GPTs in ophthalmology to demonstrate the versatility and effectiveness of these tools. First, we present 'EyeTeacher', an educational aid that generates questions from clinical guidelines to facilitate learning. Second, we built 'EyeAssistant', a clinical support tool that is tuned with clinical guidelines to respond to various physician queries. Lastly, we design 'The GPT for GA', which offers clinicians a comprehensive summary of emerging management strategies for geographic atrophy by analysing peer-reviewed documents. The review underscores the significance of custom instructions and information retrieval in tuning GPTs for specific tasks in ophthalmology. We also discuss the evaluation of LLM responses and address critical aspects such as privacy and accountability in their clinical application. Finally, we discuss their potential in ophthalmic education and clinical practice.
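The tuning mechanics described here, custom instructions plus retrieval over uploaded documents, can be sketched without any proprietary tooling. The toy below is an assumption-heavy stand-in (TF-IDF retrieval in place of whatever retrieval custom GPTs perform internally; the instruction text and guideline snippets are invented) meant only to illustrate the pattern.

```python
# Toy sketch of "custom instruction + document retrieval" prompting, standing in for the
# retrieval a custom GPT performs over uploaded files. All text below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

CUSTOM_INSTRUCTION = (
    "You are EyeAssistant, a clinical support tool. Answer only from the retrieved "
    "guideline excerpt and say so when the excerpt does not cover the question."
)

guideline_chunks = [
    "Anti-VEGF therapy is first-line for centre-involving diabetic macular oedema.",
    "Acute angle-closure glaucoma requires urgent pressure-lowering treatment.",
]

def build_prompt(query: str) -> str:
    """Retrieve the most relevant guideline chunk and prepend the custom instruction."""
    vec = TfidfVectorizer().fit(guideline_chunks + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(guideline_chunks))[0]
    best_chunk = guideline_chunks[sims.argmax()]
    return f"{CUSTOM_INSTRUCTION}\n\nGuideline excerpt: {best_chunk}\n\nQuestion: {query}"

print(build_prompt("What is the first-line treatment for diabetic macular oedema?"))
```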

4.
Transl Vis Sci Technol; 13(4): 5, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564199

ABSTRACT

Purpose: The purpose of this study was to develop and validate RetinaVR, an affordable, portable, and fully immersive virtual reality (VR) simulator for vitreoretinal surgery training. Methods: We built RetinaVR as a standalone app on the Meta Quest 2 VR headset. It simulates core vitrectomy, peripheral shaving, membrane peeling, and endolaser application. In a validation study (n = 20 novices and experts), we measured: efficiency, safety, and module-specific performance. We first explored unadjusted performance differences through an effect size analysis. Then, a linear mixed-effects model was used to isolate the impact of age, sex, expertise, and experimental run on performance. Results: Experts were significantly safer in membrane peeling but not when controlling for other factors. Experts were significantly better in core vitrectomy, even when controlling for other factors (P = 0.014). Heatmap analysis of endolaser applications showed more consistent retinopexy among experts. Age had no impact on performance, but male subjects were faster in peripheral shaving (P = 0.036) and membrane peeling (P = 0.004). A learning curve was demonstrated with improving efficiency at each experimental run for all modules. Repetition also led to improved safety during membrane peeling (P = 0.003), and better task-specific performance during core vitrectomy (P = 0.038), peripheral shaving (P = 0.011), and endolaser application (P = 0.043). User experience was favorable to excellent in all spheres. Conclusions: RetinaVR demonstrates potential as an affordable, portable training tool for vitreoretinal surgery. Its construct validity is established, showing varying performance in a way that correlates with experimental runs, age, sex, and level of expertise. Translational Relevance: Fully immersive VR technology could revolutionize surgical training, making it more accessible, especially in developing nations.
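The mixed-effects analysis described above can be sketched with statsmodels: fixed effects for age, sex, expertise and experimental run, and a random intercept per participant. The data below are simulated, and all column names and effect sizes are assumptions, not RetinaVR data.

```python
# Sketch of a linear mixed-effects model: performance regressed on age, sex, expertise and
# run, with a random intercept per participant. Simulated data; column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subjects, n_runs = 20, 5
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_runs),
    "run": np.tile(np.arange(1, n_runs + 1), n_subjects),
    "age": np.repeat(rng.integers(25, 60, n_subjects), n_runs),
    "sex": np.repeat(rng.choice(["M", "F"], n_subjects), n_runs),
    "expert": np.repeat(rng.integers(0, 2, n_subjects), n_runs),
})
# Hypothetical efficiency score that improves with repetition and expertise.
df["efficiency"] = (
    50 + 5 * df["run"] + 10 * df["expert"]
    + np.repeat(rng.normal(0, 5, n_subjects), n_runs)  # subject-level random intercept
    + rng.normal(0, 3, len(df))
)

model = smf.mixedlm("efficiency ~ age + C(sex) + expert + run", df, groups=df["subject"])
print(model.fit().summary())
```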


Subjects
Virtual Reality, Vitreoretinal Surgery, Humans, Male
5.
Am J Ophthalmol; 265: 147-155, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38642698

ABSTRACT

PURPOSE: An increase in fungal and particularly filamentous keratitis has been observed in many geographic areas, mostly in contact lens wearers. This study seeks to characterize long-term trends in fungal keratitis in a continental climate area to provide guidance for diagnosis and treatment. DESIGN: Retrospective multicentric case series. METHODS: Cases of microbiology-confirmed fungal keratitis from 2003 to 2022 presenting to tertiary care centers across Canada were included. Charts were reviewed for patient demographics, risk factors, visual acuity, and treatments undertaken. RESULTS: A total of 138 patients were identified: 75 had yeast keratitis while 63 had filamentous keratitis. Patients with yeast keratitis had more ocular surface disease (79% vs 28%) while patients with filamentous keratitis wore more refractive contact lenses (78% vs 19%). Candida species accounted for 96% of all yeast identified, while Aspergillus (32%) and Fusarium (26%) were the most common filamentous fungi species. The mean duration of treatment was 81 ± 96 days. Patients with yeast keratitis did not have significantly improved visual acuity with medical treatment (1.8 ± 1 LogMAR to 1.9 ± 1.5 LogMAR, P = .9980), in contrast to patients with filamentous keratitis (1.4 ± 1.2 LogMAR to 1.1 ± 1.3 LogMAR, P = .0093). CONCLUSIONS: Fungal keratitis is increasing in incidence, with contact lenses emerging as one of the leading risk factors. Significant differences in the risk factors and visual outcomes exist between yeast keratitis and filamentous keratitis which may guide diagnosis and treatment.

6.
Int J Retina Vitreous; 10(1): 37, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38671486

ABSTRACT

BACKGROUND: Code-free deep learning (CFDL) is a novel tool in artificial intelligence (AI). This study directly compared the discriminative performance of CFDL models designed by ophthalmologists without coding experience against bespoke models designed by AI experts in detecting retinal pathologies from optical coherence tomography (OCT) videos and fovea-centered images. METHODS: Using the same internal dataset of 1,173 OCT macular videos and fovea-centered images, model development was performed simultaneously but independently by an ophthalmology resident (CFDL models) and a postdoctoral researcher with expertise in AI (bespoke models). We designed a multi-class model to categorize videos and fovea-centered images into five labels: normal retina, macular hole, epiretinal membrane, wet age-related macular degeneration and diabetic macular edema. We qualitatively compared point estimates of the performance metrics of the CFDL and bespoke models. RESULTS: For videos, the CFDL model demonstrated excellent discriminative performance, even outperforming the bespoke models for some metrics: area under the precision-recall curve was 0.984 (vs. 0.901), precision and sensitivity were both 94.1% (vs. 94.2%) and accuracy was 94.1% (vs. 96.7%). The fovea-centered CFDL model overall performed better than the video-based model and was as accurate as the best bespoke model. CONCLUSION: This comparative study demonstrated that code-free models created by clinicians without coding expertise perform as accurately as expert-designed bespoke models at classifying various retinal pathologies from OCT videos and images. CFDL represents a step forward towards the democratization of AI in medicine, although its numerous limitations must be carefully addressed to ensure its effective application in healthcare.
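For reference, the point estimates being compared (accuracy, macro precision, sensitivity and macro area under the precision-recall curve for a five-class problem) can be computed as in the sketch below; the labels and predicted probabilities are simulated placeholders rather than study outputs.

```python
# Illustrative metric computation for a five-class OCT problem
# (normal, macular hole, epiretinal membrane, wet AMD, DME). Simulated data, not study outputs.
import numpy as np
from sklearn.metrics import accuracy_score, average_precision_score, precision_score, recall_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(2)
y_true = rng.integers(0, 5, 200)                 # ground-truth class per case
y_prob = rng.dirichlet(np.ones(5), 200)          # predicted class probabilities
y_pred = y_prob.argmax(axis=1)

print("accuracy       ", accuracy_score(y_true, y_pred))
print("macro precision", precision_score(y_true, y_pred, average="macro"))
print("macro recall   ", recall_score(y_true, y_pred, average="macro"))
print("macro AuPRC    ", average_precision_score(
    label_binarize(y_true, classes=np.arange(5)), y_prob, average="macro"))
```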

7.
Article in English | MEDLINE | ID: mdl-38446200

ABSTRACT

AIM: Code-free deep learning (CFDL) allows clinicians without coding expertise to build high-quality artificial intelligence (AI) models without writing code. In this review, we comprehensively examine the advantages that CFDL offers over bespoke expert-designed deep learning (DL). As exemplars, we use the following tasks: (1) diabetic retinopathy screening, (2) retinal multi-disease classification, (3) surgical video classification, (4) oculomics and (5) resource management. METHODS: We performed a search for studies reporting CFDL applications in ophthalmology in MEDLINE (through PubMed) from inception to June 25, 2023, using the keywords 'autoML' AND 'ophthalmology'. After identifying 5 CFDL studies looking at our target tasks, we performed a subsequent search to find corresponding bespoke DL studies focused on the same tasks. Only English-written articles with full text available were included. Reviews, editorials, protocols and case reports or case series were excluded. We identified ten relevant studies for this review. RESULTS: Overall, studies were optimistic towards CFDL's advantages over bespoke DL in the five ophthalmological tasks. However, much of this discussion was one-dimensional and left wide applicability gaps. A rigorous assessment of whether CFDL is preferable to bespoke DL warrants a context-specific, weighted consideration of clinician intent, patient acceptance and cost-effectiveness. We conclude that CFDL and bespoke DL each have unique strengths and that neither can replace the other; their benefits must be weighed on a case-by-case basis. Future studies are warranted to perform a multidimensional analysis of both techniques and to address limitations such as suboptimal dataset quality, unclear applicability and non-standardised study designs. CONCLUSION: For clinicians without DL expertise or easy access to AI experts, CFDL allows the prototyping of novel clinical AI systems. CFDL models can complement bespoke models, depending on the task at hand. A multidimensional, weighted evaluation of the factors involved in the implementation of those models for a designated task is warranted.

8.
Ocul Immunol Inflamm; 1-7, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38411944

ABSTRACT

PURPOSE: Automated machine learning (AutoML) allows clinicians without coding experience to build their own deep learning (DL) models. This study assesses the performance of AutoML in detecting and localizing ocular toxoplasmosis (OT) lesions in fundus images and compares it to expert-designed models. METHODS: Ophthalmology trainees without coding experience designed AutoML models using 304 labelled fundus images. We designed a binary model to differentiate OT from normal and an object detection model to visually identify OT lesions. RESULTS: The AutoML model had an area under the precision-recall curve (AuPRC) of 0.945, sensitivity of 100%, specificity of 83% and accuracy of 93.5% (vs. 94%, 86% and 91% for the bespoke models). The AutoML object detection model had an AuPRC of 0.600 with a precision of 93.3% and recall of 56%. Using a diversified external validation dataset, our model correctly labeled 15 normal fundus images (100%) and 15 OT fundus images (100%), with a mean confidence score of 0.965 and 0.963, respectively. CONCLUSION: AutoML models created by ophthalmologists without coding experience were comparable to or better than expert-designed bespoke models trained on the same dataset. By creatively using AutoML to identify OT lesions on fundus images, our approach brings the whole spectrum of DL model design into the hands of clinicians.

9.
Br J Ophthalmol; 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38365427

ABSTRACT

BACKGROUND/AIMS: This study assesses the proficiency of Generative Pre-trained Transformer (GPT)-4 in answering questions about complex clinical ophthalmology cases. METHODS: We tested GPT-4 on 422 Journal of the American Medical Association Ophthalmology Clinical Challenges, and prompted the model to determine the diagnosis (open-ended question) and identify the next-step (multiple-choice question). We generated responses using two zero-shot prompting strategies, including zero-shot plan-and-solve+ (PS+), to improve the reasoning of the model. We compared the best-performing model to human graders in a benchmarking effort. RESULTS: Using PS+ prompting, GPT-4 achieved mean accuracies of 48.0% (95% CI (43.1% to 52.9%)) and 63.0% (95% CI (58.2% to 67.6%)) in diagnosis and next step, respectively. Next-step accuracy did not significantly differ by subspecialty (p=0.44). However, diagnostic accuracy in pathology and tumours was significantly higher than in uveitis (p=0.027). When the diagnosis was accurate, 75.2% (95% CI (68.6% to 80.9%)) of the next steps were correct. Conversely, when the diagnosis was incorrect, 50.2% (95% CI (43.8% to 56.6%)) of the next steps were accurate. The next step was three times more likely to be accurate when the initial diagnosis was correct (p<0.001). No significant differences were observed in diagnostic accuracy and decision-making between board-certified ophthalmologists and GPT-4. Among trainees, senior residents outperformed GPT-4 in diagnostic accuracy (p≤0.001 and 0.049) and in accuracy of next step (p=0.002 and 0.020). CONCLUSION: Improved prompting enhances GPT-4's performance in complex clinical situations, although it does not surpass ophthalmology trainees in our context. Specialised large language models hold promise for future assistance in medical decision-making and diagnosis.
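As an illustration of the prompting strategy named above, the sketch below builds a zero-shot plan-and-solve+ style prompt around a clinical vignette and sends it through the OpenAI chat API. The instruction wording, model name and example vignette are assumptions modeled on the published PS+ template, not the authors' exact prompt or pipeline.

```python
# Hedged sketch of zero-shot plan-and-solve+ (PS+) prompting for a clinical vignette.
# The PS+ instruction wording and the model name are assumptions, not the study's exact prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PS_PLUS_INSTRUCTION = (
    "Let's first understand the case and extract the relevant clinical variables. "
    "Then let's devise a complete plan, carry it out step by step while paying attention "
    "to key findings, and finish by stating the single most likely diagnosis."
)

def diagnose(case_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user", "content": f"{case_text}\n\n{PS_PLUS_INSTRUCTION}"}],
    )
    return response.choices[0].message.content

print(diagnose("A 62-year-old presents with acute, painless vision loss in one eye ..."))
```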

11.
Saudi J Ophthalmol; 37(3): 200-206, 2023.
Article in English | MEDLINE | ID: mdl-38074296

ABSTRACT

PURPOSE: Automated machine learning (AutoML) allows clinicians without coding experience to build their own deep learning (DL) models. This study assesses the performance of AutoML in diagnosing trachoma from field-collected conjunctival images and compares it to expert-designed DL models. METHODS: Two ophthalmology trainees without coding experience carried out AutoML model design using a publicly available image data set of field-collected conjunctival images (1656 labeled images). We designed two binary models to differentiate trachomatous inflammation-follicular (TF) and trachomatous inflammation-intense (TI) from normal. We then integrated an Edge model into an Android application using Google Firebase to make offline diagnoses. RESULTS: The AutoML models showed high diagnostic properties in the classification tasks that were comparable to or better than those of the bespoke DL models. The TF model had an area under the precision-recall curve (AuPRC) of 0.945, sensitivity of 87%, specificity of 88%, and accuracy of 88%. The TI model had an AuPRC of 0.975, sensitivity of 95%, specificity of 92%, and accuracy of 93%. Through the Android app and using an external dataset, the AutoML model had an AuPRC of 0.875, sensitivity of 83%, specificity of 81%, and accuracy of 83%. CONCLUSION: AutoML models created by ophthalmologists without coding experience were comparable to or better than bespoke models trained on the same dataset. Using AutoML to create models and edge computing to deploy them into smartphone-based apps, our approach brings the whole spectrum of DL model design into the hands of clinicians. This approach has the potential to democratize access to artificial intelligence.

12.
Article in English | MEDLINE | ID: mdl-38100770

ABSTRACT

PURPOSE: To demonstrate the role of optical coherence tomography angiography (OCT-A) in the management of dome-shaped maculopathy (DSM). METHODS: Retrospective case review. RESULTS: A 52-year-old woman was referred to our retina service for potential bilateral choroidal neovascular membrane (CNVM) and blurry vision bilaterally. Initial spectacle-corrected visual acuity (VA) was 20/30-2 in the right eye (RE) and 20/30+2 in the left eye (LE). DSM was diagnosed on OCT. In both eyes, OCT B-scan passing through the fovea showed shallow, irregular RPE elevation (SIRE) suspicious of occult (type 1) CNVM. The outer retina and choriocapillaris angiograms showed a zone of nonexudative CNVM in the RE and exudative CNVM in the LE. Given the persistent SRF with CNVM in the LE, we elected to perform intravitreal injections of ranibizumab 0.5 mg on a treat and extend regimen. Upon the most recent follow-up, the best corrected VA improved to 20/20 in the LE with no persisting SRF. CONCLUSION: We present a case where assessing disease progression, the development of CNVM and evaluating the efficiency of therapies were realized through the application of novel OCT-A technology. This diagnostic tool may be used to guide clinicians in their management of DSM, as demonstrated through our experience. OCT-A can also make it possible to visualize nonexudative CNVM lesions that may be missed on traditional imaging assessments.

13.
Case Rep Ophthalmol; 14(1): 591-595, 2023.
Article in English | MEDLINE | ID: mdl-37915517

ABSTRACT

Paracentral acute middle maculopathy (PAMM) has recently been described following episodes of migraine. In this report, we present a case of PAMM and describe the role of en face optical coherence tomography (OCT). A 75-year-old woman presented with subjective vision loss over a 2-week period in the right eye. She was known for migraines with aura that presented with progressive spreading of positive and negative visual phenomena which usually resolved in under an hour. Her recent migraine episode was "atypical," as it lasted 3 days. She also experienced a monocular central scotoma with "black spots and jagged, zig-zag edges." The positive auras resolved spontaneously, whereas the central scotoma persisted. Spectral domain OCT showed an area of perifoveal hyperreflectivity from the inner plexiform to the outer plexiform layers consistent with PAMM. The mid-retina en face OCT and OCT angiography demonstrated an ovoid focal patch of hyperreflectivity with flow interruption, characteristic of globular PAMM. We diagnosed her with migraines with aura and presumed retinal vasospasm, complicated by retinal ischemia in the form of globular PAMM. Acute retinal ischemia, which may require urgent neurovascular workup and giant cell arteritis evaluation, must be considered in patients with migraines alongside persistent visual changes. Diagnosing PAMM requires a high level of suspicion since it can present without significant changes in visual acuity, visual fields, and fundus photographs. With the inclusion of en face OCT in the clinicians' diagnostic armamentarium, the slightest signs of retinal ischemic changes, such as PAMM, become evident.

14.
Br J Ophthalmol; 2023 Nov 03.
Article in English | MEDLINE | ID: mdl-37923374

ABSTRACT

BACKGROUND: Evidence on the performance of Generative Pre-trained Transformer 4 (GPT-4), a large language model (LLM), in the ophthalmology question-answering domain is needed. METHODS: We tested GPT-4 on two 260-question multiple choice question sets from the Basic and Clinical Science Course (BCSC) Self-Assessment Program and the OphthoQuestions question banks. We compared the accuracy of GPT-4 models with varying temperatures (creativity setting) and evaluated their responses in a subset of questions. We also compared the best-performing GPT-4 model to GPT-3.5 and to historical human performance. RESULTS: GPT-4-0.3 (GPT-4 with a temperature of 0.3) achieved the highest accuracy among GPT-4 models, with 75.8% on the BCSC set and 70.0% on the OphthoQuestions set. The combined accuracy was 72.9%, which represents an 18.3% raw improvement in accuracy compared with GPT-3.5 (p<0.001). Human graders preferred responses from models with a temperature higher than 0 (more creative). Exam section, question difficulty and cognitive level were all predictive of GPT-4-0.3 answer accuracy. GPT-4-0.3's performance was numerically superior to human performance on the BCSC (75.8% vs 73.3%) and OphthoQuestions (70.0% vs 63.0%), but the difference was not statistically significant (p=0.55 and p=0.09). CONCLUSION: GPT-4, an LLM trained on non-ophthalmology-specific data, performs significantly better than its predecessor on simulated ophthalmology board-style exams. Remarkably, its performance tended to be superior to historical human performance, but that difference was not statistically significant in our study.
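A temperature ("creativity") comparison like the one reported can be sketched as a simple sweep that scores answers against a key. Everything below — question text, answer key, model name — is a hypothetical placeholder, not BCSC or OphthoQuestions material.

```python
# Sketch of sweeping the temperature setting and scoring multiple-choice accuracy.
# Questions, answer key and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

questions = ["Which retinal layer contains the photoreceptor nuclei? A) ... B) ... C) ... D) ..."]
answer_key = ["A"]

def ask(question: str, temperature: float) -> str:
    r = client.chat.completions.create(
        model="gpt-4",
        temperature=temperature,
        messages=[{"role": "user", "content": question + "\nAnswer with a single letter."}],
    )
    return r.choices[0].message.content.strip()[:1].upper()

def accuracy(temperature: float) -> float:
    correct = sum(ask(q, temperature) == a for q, a in zip(questions, answer_key))
    return correct / len(questions)

for t in (0.0, 0.3, 0.7, 1.0):
    print(f"temperature={t}: accuracy={accuracy(t):.2f}")
```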

15.
Int J Med Inform; 178: 105178, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37657204

ABSTRACT

BACKGROUND AND OBJECTIVE: The detection of retinal diseases using optical coherence tomography (OCT) images and videos is a concrete example of a data classification problem. In recent years, Transformer architectures have been successfully applied to solve a variety of real-world classification problems. Although they have shown impressive discriminative abilities compared to other state-of-the-art models, improving their performance is essential, especially in healthcare-related problems. METHODS: This paper presents an effective technique named model-based transformer (MBT). It is based on popular pre-trained transformer models, particularly the Vision Transformer and Swin Transformer for OCT image classification and the Multiscale Vision Transformer for OCT video classification. The proposed approach is designed to represent OCT data by taking advantage of an approximate sparse representation technique. It then estimates the optimal features and performs data classification. RESULTS: The experiments are carried out using three real-world retinal datasets. The experimental results on OCT image and OCT video datasets show that the proposed method outperforms existing state-of-the-art deep learning approaches in terms of classification accuracy, precision, recall, F1-score, kappa, AUC-ROC, and AUC-PR. It can also boost the performance of existing transformer models, including the Vision Transformer and Swin Transformer for OCT image classification and the Multiscale Vision Transformer for OCT video classification. CONCLUSIONS: This work presents an approach for the automated detection of retinal diseases. Although deep neural networks have proven great potential in ophthalmology applications, our findings demonstrate for the first time a new way to identify retinal pathologies using OCT videos instead of images. Moreover, our proposal can help researchers enhance the discriminative capacity of a variety of powerful deep learning models presented in published papers. This can be valuable for future directions in medical research and clinical practice.
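The general pattern this work builds on — a pretrained vision transformer fine-tuned for OCT classes — is sketched below. This does not reproduce the sparse-representation ("model-based") step of MBT; the class count, hyperparameters and choice of timm checkpoint are illustrative assumptions.

```python
# Minimal fine-tuning sketch of a pretrained Vision Transformer for OCT image classes.
# This is the generic baseline, not the MBT method; class count and settings are assumptions.
import torch
import timm

NUM_CLASSES = 4  # e.g., normal, CNV, DME, drusen

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=NUM_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of 224x224, 3-channel OCT B-scan tensors."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical batch of 8 B-scans resized to 224x224.
print(train_step(torch.randn(8, 3, 224, 224), torch.randint(0, NUM_CLASSES, (8,))))
```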

16.
Cureus; 15(7): e42168, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37602079

ABSTRACT

This article describes a case of untreated optic neuritis occurring in the setting of coronavirus disease 2019 (COVID-19) infection and provides new insights into the natural history of this condition. A 29-year-old male patient with no known ocular or systemic disease presented with pain on extraocular movements and sudden loss of vision in the inferior visual field affecting the right eye. He had tested positive for COVID-19 six days prior after experiencing mild upper respiratory symptoms. On examination, visual acuity was 20/20, and color vision was normal. A relative afferent pupillary defect was observed in the right eye. Fundoscopy revealed mild optic disc edema in the same eye. Optical coherence tomography showed increased retinal nerve fiber layer thickness of the right optic nerve head and visual field testing revealed an inferonasal defect. Extensive laboratory and imaging investigations failed to reveal an underlying etiology, supporting a diagnosis of COVID-19-associated optic neuritis. The patient improved spontaneously without treatment. At the five-month follow-up, minor optic atrophy and a small residual visual field defect remained.

17.
Ophthalmology; 130(12): 1313-1326, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37541626

ABSTRACT

PURPOSE: Individuals with Zellweger spectrum disorder (ZSD) manifest a spectrum of clinical phenotypes but almost all have retinal degeneration leading to blindness. The onset, extent, and progression of retinal findings have not been well described. It is crucial to understand the natural history of vision loss in ZSD to define reliable endpoints for future interventional trials. Herein, we describe ophthalmic findings in the largest number of ZSD patients to date. DESIGN: Retrospective review of longitudinal data from medical charts and review of cross-sectional data from the literature. PARTICIPANTS: Sixty-six patients with ZSD in the retrospective cohort and 119 patients reported in the literature, divided into 4 disease phenotypes based on genotype or clinical severity. METHODS: We reviewed ophthalmology records collected from the retrospective cohort (Clinicaltrials.gov NCT01668186) and performed a scoping review of the literature for ophthalmic findings in patients with ZSD. We extracted available ophthalmic data and analyzed by age and disease severity. MAIN OUTCOME MEASURES: Visual acuity (VA), posterior and anterior segment descriptions, nystagmus, refraction, electroretinography findings, visual evoked potentials, and OCT results and images. RESULTS: Visual acuity was worse at younger ages in those with severe disease compared with older patients with intermediate to mild disease for all 78 participants analyzed, with a median VA of 0.93 logarithm of the minimum angle of resolution (Snellen 20/320). Longitudinal VA data revealed slow loss over time and legal blindness onset at an average age of 7.8 years. Funduscopy showed retinal pigmentation, macular abnormalities, small or pale optic discs, and attenuated vessels with higher prevalence in milder severity groups and did not change with age. Electroretinography waveforms were diminished in 91% of patients, 46% of which were extinguished and did not change with age. OCT in milder patients revealed schitic changes in 18 of 23 individuals (age range 1.8 to 30 years), with evolution or stable macular edema. CONCLUSIONS: In ZSD, VA slowly deteriorates and is associated with disease severity, serial electroretinography is not useful for documenting vision loss progression, and intraretinal schitic changes may be common. Multiple systematic measures are required to assess retinal dystrophy accurately in ZSD, including functional vision measures. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.


Subjects
Visual Evoked Potentials, Zellweger Syndrome, Humans, Child, Infant, Preschool Child, Adolescent, Young Adult, Adult, Cross-Sectional Studies, Retrospective Studies, Blindness, Retina
18.
Retina; 43(9): e53-e55, 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37490754
19.
Ophthalmol Sci; 3(4): 100324, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37334036

ABSTRACT

Purpose: Foundation models are a novel type of artificial intelligence algorithms, in which models are pretrained at scale on unannotated data and fine-tuned for a myriad of downstream tasks, such as generating text. This study assessed the accuracy of ChatGPT, a large language model (LLM), in the ophthalmology question-answering space. Design: Evaluation of diagnostic test or technology. Participants: ChatGPT is a publicly available LLM. Methods: We tested 2 versions of ChatGPT (January 9 "legacy" and ChatGPT Plus) on 2 popular multiple choice question banks commonly used to prepare for the high-stakes Ophthalmic Knowledge Assessment Program (OKAP) examination. We generated two 260-question simulated exams from the Basic and Clinical Science Course (BCSC) Self-Assessment Program and the OphthoQuestions online question bank. We carried out logistic regression to determine the effect of the examination section, cognitive level, and difficulty index on answer accuracy. We also performed a post hoc analysis using Tukey's test to decide if there were meaningful differences between the tested subspecialties. Main Outcome Measures: We reported the accuracy of ChatGPT for each examination section in percentage correct by comparing ChatGPT's outputs with the answer key provided by the question banks. We presented logistic regression results with a likelihood ratio (LR) chi-square. We considered differences between examination sections statistically significant at a P value of < 0.05. Results: The legacy model achieved 55.8% accuracy on the BCSC set and 42.7% on the OphthoQuestions set. With ChatGPT Plus, accuracy increased to 59.4% ± 0.6% and 49.2% ± 1.0%, respectively. Accuracy improved with easier questions when controlling for the examination section and cognitive level. Logistic regression analysis of the legacy model showed that the examination section (LR, 27.57; P = 0.006) followed by question difficulty (LR, 24.05; P < 0.001) were most predictive of ChatGPT's answer accuracy. Although the legacy model performed best in general medicine and worst in neuro-ophthalmology (P < 0.001) and ocular pathology (P = 0.029), similar post hoc findings were not seen with ChatGPT Plus, suggesting more consistent results across examination sections. Conclusion: ChatGPT has encouraging performance on a simulated OKAP examination. Specializing LLMs through domain-specific pretraining may be necessary to improve their performance in ophthalmic subspecialties. Financial Disclosures: Proprietary or commercial disclosure may be found after the references.
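The logistic regression with a likelihood-ratio (LR) chi-square described above can be sketched with statsmodels: fit the full model, drop the examination-section term, and compare log-likelihoods. The data frame below is simulated and every column name is an assumption, not the study's dataset.

```python
# Sketch of a logistic regression of answer correctness on examination section, cognitive
# level and difficulty, with a likelihood-ratio chi-square for the section term.
# Simulated data; column names and the outcome model are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 260
df = pd.DataFrame({
    "section": rng.choice(["retina", "glaucoma", "uveitis", "cornea"], n),
    "cognitive_level": rng.choice(["recall", "application"], n),
    "difficulty": rng.uniform(0.2, 0.9, n),  # proportion of human examinees answering correctly
})
df["correct"] = rng.binomial(1, df["difficulty"])  # hypothetical: easier questions answered correctly more often

full = smf.logit("correct ~ C(section) + C(cognitive_level) + difficulty", data=df).fit(disp=0)
reduced = smf.logit("correct ~ C(cognitive_level) + difficulty", data=df).fit(disp=0)

lr_stat = 2 * (full.llf - reduced.llf)      # likelihood-ratio statistic for the section term
df_diff = full.df_model - reduced.df_model  # degrees of freedom added by the section term
print(f"LR chi-square = {lr_stat:.2f}, p = {stats.chi2.sf(lr_stat, df_diff):.3f}")
```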

20.
BMJ Case Rep; 16(5), 2023 May 29.
Article in English | MEDLINE | ID: mdl-37247951

ABSTRACT

We report the case of a woman in her 50s who underwent, 5 years prior, a total gastrectomy after neoadjuvant chemotherapy for diffuse-type gastric cancer diagnosed during a workup for isolated gastric primary light chain (AL) amyloidosis. At the time of diagnosis, immunoglobulin light chain measurements and bone marrow biopsy were performed to rule out multiple myeloma and came back normal. Three years later, the patient developed systemic amyloidosis involving the heart and the lungs, after which she developed multiple myeloma. Isolated amyloid deposits in the stomach are a rare finding. While AL amyloidosis is frequently found in concomitance with multiple myeloma, late progression of primary AL amyloidosis to systemic amyloidosis and multiple myeloma is uncommon.


Subjects
Amyloidosis, Immunoglobulin Light-chain Amyloidosis, Linitis Plastica, Multiple Myeloma, Stomach Neoplasms, Female, Humans, Multiple Myeloma/complications, Multiple Myeloma/diagnosis, Multiple Myeloma/drug therapy, Immunoglobulin Light-chain Amyloidosis/complications, Immunoglobulin Light-chain Amyloidosis/diagnosis, Linitis Plastica/complications, Linitis Plastica/diagnosis, Amyloidosis/complications, Amyloidosis/diagnosis, Amyloidosis/pathology, Stomach Neoplasms/complications, Stomach Neoplasms/diagnosis