ABSTRACT
PURPOSE: To develop and validate the performance of a high myopia (HM)-specific normative database of peripapillary retinal nerve fiber layer (pRNFL) thickness in differentiating HM from highly myopic glaucoma (HMG). DESIGN: Cross-sectional multicenter study. PARTICIPANTS: A total of 1367 Chinese participants (2325 eyes) with nonpathologic HM or HMG were included from 4 centers. After quality control, 1108 eyes from 694 participants with HM were included in the normative database; 459 eyes from 408 participants (323 eyes with HM and 136 eyes with HMG) and 322 eyes from 197 participants (131 eyes with HM and 191 eyes with HMG) were included in the internal and external validation sets, respectively. Only HMG eyes with an intraocular pressure > 21 mmHg were included. METHODS: The pRNFL thickness was measured with swept-source (SS) OCT. Four threshold strategies were examined: global and quadrantic pRNFL thickness below the fifth or the first percentile of the normative database. MAIN OUTCOME MEASURES: The accuracy, sensitivity, and specificity of the HM-specific normative database for detecting HMG. RESULTS: With the fifth percentile of the global pRNFL thickness as the threshold, the HM-specific normative database achieved accuracies of 0.93 (95% confidence interval [CI], 0.90-0.95) and 0.85 (95% CI, 0.81-0.89), and with the first percentile as the threshold, it achieved accuracies of 0.85 (95% CI, 0.81-0.88) and 0.70 (95% CI, 0.65-0.75) in detecting HMG in the internal and external validation sets, respectively. The fifth percentile of the global pRNFL thickness achieved high sensitivities of 0.75 (95% CI, 0.67-0.82) and 0.75 (95% CI, 0.68-0.81) and specificities of 1.00 (95% CI, 0.99-1.00) and 1.00 (95% CI, 0.97-1.00) in the internal and external validation datasets, respectively.
Compared with the built-in database of the OCT device, the HM-specific normative database showed higher sensitivity and specificity at the corresponding fifth- and first-percentile pRNFL thickness thresholds (P < 0.001 for all). CONCLUSIONS: The HM-specific normative database detects HMG eyes more accurately than the SS OCT built-in database and may serve as an effective tool for the differential diagnosis of HMG versus HM. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
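The percentile-threshold strategy described above reduces to three small steps: derive the lower-tail cutoff from the normative sample, flag eyes whose measurement falls below it, and score the flags against the reference diagnosis. A minimal sketch of that logic, not the study's actual pipeline; all function and variable names are hypothetical:

```python
import numpy as np

def normative_cutoffs(normal_thickness_um, percentiles=(5, 1)):
    """Lower-tail cutoffs (e.g. 5th and 1st percentiles) of global pRNFL
    thickness derived from the normative (healthy highly myopic) sample."""
    return {p: float(np.percentile(normal_thickness_um, p)) for p in percentiles}

def flag_glaucoma_suspect(thickness_um, cutoff_um):
    """An eye is flagged when its thickness falls below the normative cutoff."""
    return thickness_um < cutoff_um

def sensitivity_specificity(flags, has_glaucoma):
    """Score binary flags against the reference diagnosis."""
    flags = np.asarray(flags, dtype=bool)
    truth = np.asarray(has_glaucoma, dtype=bool)
    tp = int(np.sum(flags & truth))
    fn = int(np.sum(~flags & truth))
    tn = int(np.sum(~flags & ~truth))
    fp = int(np.sum(flags & ~truth))
    return tp / (tp + fn), tn / (tn + fp)
```

Moving from the fifth to the stricter first percentile flags fewer eyes, trading sensitivity for specificity, which is the trade-off behind the two accuracy figures reported above.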
Subject(s)
Glaucoma, Myopia, Humans, Cross-Sectional Studies, East Asian People, Myopia/diagnosis, Retina, Glaucoma/diagnosis, Nerve Fibers
ABSTRACT
Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides rapid, high-resolution, cross-sectional morphology of the macular area and optic nerve head for the diagnosis and management of different eye diseases. However, interpreting OCT images requires expertise in both OCT imaging and eye diseases, since many factors, such as artefacts and concomitant diseases, can affect the accuracy of quantitative measurements made by post-processing algorithms. Currently, there is a growing interest in applying deep learning (DL) methods to analyse OCT images automatically. This review summarises the trends in DL-based OCT image analysis in ophthalmology, discusses the current gaps, and provides potential research directions. DL in OCT analysis shows promising performance in several tasks: (1) layer and feature segmentation and quantification; (2) disease classification; (3) disease progression and prognosis; and (4) referral triage level prediction. Different studies and trends in the development of DL-based OCT image analysis are described, and the following challenges are identified: (1) public OCT data are scarce and scattered; (2) models show performance discrepancies in real-world settings; (3) models lack transparency; (4) there is a lack of societal acceptance and regulatory standards; and (5) OCT is still not widely available in underprivileged areas. More work is needed to tackle these challenges and gaps before DL is further applied in OCT image analysis for clinical use.
Subject(s)
Deep Learning, Eye Diseases, Optic Disk, Humans, Optical Coherence Tomography/methods, Cross-Sectional Studies, Eye Diseases/diagnostic imaging
ABSTRACT
BACKGROUND: To establish the independent association between blood pressure (BP) and retinal vascular caliber, especially retinal venular caliber, in a population of 12-year-old Chinese children. METHODS: We examined 1501 students in the 7th grade with a mean age of 12.7 years. A non-mydriatic fundus camera (Canon CR-2, Tokyo, Japan) was used to capture 45° fundus images of the right eyes. Retinal vascular caliber was measured using a computer-based program (IVAN). BP was measured using an automated sphygmomanometer (HEM-907, Omron, Kyoto, Japan). RESULTS: The mean retinal arteriolar caliber was 145.3 µm (95% confidence interval [CI], 110.6-189.6 µm) and the mean venular caliber was 212.7 µm (95% CI, 170.6-271.3 µm). After controlling for age, sex, axial length, BMI, waist circumference, spherical equivalent, birth weight, gestational age, and fellow retinal vessel caliber, children in the highest quartile of BP had significantly narrower retinal arteriolar caliber than those in lower quartiles (P for trend < 0.05). Each 10-mmHg increase in BP was associated with narrowing of the retinal arterioles by 3.00 µm (multivariable-adjusted P < 0.001), and the results were consistent across the three BP measurements. The association between BP measures and retinal venular caliber did not persist after adjusting for fellow arteriolar caliber, and there was no significant interaction between BP and sex, age, BMI, or birth status. CONCLUSIONS: In a large population of adolescent Chinese children, higher BP was associated with narrower retinal arterioles, but not with retinal venules. Sex and other confounding factors had no effect on the relationship between BP and retinal vessel diameter.
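The "per 10-mmHg" effect quoted above is a least-squares slope rescaled to a 10-unit change in the predictor. A minimal unadjusted sketch (the study's -3.00 µm estimate came from a multivariable-adjusted model; names here are hypothetical):

```python
import numpy as np

def caliber_change_per_10mmHg(bp_mmHg, caliber_um):
    """Ordinary least-squares slope of retinal arteriolar caliber on BP,
    rescaled to micrometres of change per 10 mmHg of BP.

    Unadjusted illustration only: the study's model also controlled for
    age, sex, axial length, BMI, and other covariates.
    """
    slope_per_mmHg, _intercept = np.polyfit(
        np.asarray(bp_mmHg, dtype=float),
        np.asarray(caliber_um, dtype=float),
        deg=1,
    )
    return 10.0 * slope_per_mmHg
```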
Subject(s)
Arterioles/physiology, Blood Pressure/physiology, Retinal Vessels/physiology, Venules/physiology, Adolescent, Axial Length, Eye/physiology, Body Mass Index, Child, China, Cross-Sectional Studies, Female, Humans, Male, Regression Analysis, Sex Factors
ABSTRACT
Light is a major environmental factor that affects metabolic pathways and stimulates the production of secondary metabolites in potato. However, adaptive changes in potato metabolic pathways and physiological functions triggered by light are only partly explained by gene expression changes. Regulation of secondary metabolic pathways in potato has been extensively studied at the transcriptional level, but little is known about the mechanisms of post-transcriptional regulation by miRNAs. To identify light-responsive miRNAs/mRNAs and construct putative metabolic pathways regulated by the miRNA-mRNA pairs, an integrated omics (sRNAome and transcriptome) analysis was performed on potato under light stimulus. A total of 31 and 48 miRNAs were identified as differentially expressed in the leaves and tubers, respectively. Among the differentially expressed genes (DEGs), 1353 genes in the leaves and 1841 genes in the tubers were upregulated, while 1595 genes in the leaves and 897 genes in the tubers were downregulated by light. MapMan enrichment analyses showed that genes related to the MVA pathway and to alkaloid-like, phenylpropanoid, flavonoid, and carotenoid metabolism were significantly upregulated, while genes associated with major CHO metabolism were repressed in the leaves and tubers. Integrated miRNA and mRNA profiles revealed that light-responsive miRNAs are important regulators of alkaloid metabolism, UMP salvage, lipid biosynthesis, and cellulose catabolism. Moreover, several miRNAs may participate in glycoalkaloid metabolism via the JA signaling pathway, UDP-glucose biosynthesis, and hydroxylation reactions. This study provides a global view of miRNA and mRNA expression profiles in potato in response to light. Our results suggest that miRNAs might play important roles in secondary metabolic pathways, especially in glycoalkaloid biosynthesis. The findings will enlighten us on the genetic regulation of secondary metabolite pathways and pave the way for future applications of genetically engineered potato.
Subject(s)
Solanum tuberosum/genetics, Solanum tuberosum/metabolism, Transcriptome, Gene Expression Regulation, Plant, Light, Metabolic Networks and Pathways, MicroRNAs/genetics, Plant Leaves/genetics, Plant Tubers/genetics, RNA, Messenger/genetics, RNA, Plant/genetics, Secondary Metabolism
ABSTRACT
Optical coherence tomography angiography (OCTA) can visualize retinal microvasculature and is important for qualitatively and quantitatively identifying potential biomarkers for different retinal diseases. However, the resolution of optical coherence tomography (OCT) angiograms inevitably decreases when increasing the field-of-view (FOV) given a fixed acquisition time. To address this issue, we propose a novel reference-based super-resolution (RefSR) framework to preserve the resolution of the OCT angiograms while increasing the scanning area. Specifically, textures from the normal RefSR pipeline are used to train a learnable texture generator (LTG), which is designed to generate textures according to the input. The key difference between the proposed method and traditional RefSR models is that the textures used during inference are generated by the LTG instead of being searched from a single reference (Ref) image. Since the LTG is optimized throughout the whole training process, the available texture space is significantly enlarged and no longer limited to a single Ref image, but extends to all textures contained in the training samples. Moreover, our proposed LTGNet does not require a Ref image at the inference phase, making it robust to the selection of the Ref image. Both experimental and visual results show that LTGNet has competitive performance and robustness over state-of-the-art methods, indicating good reliability and promise for real-life deployment. The source code is available at https://github.com/RYY0722/LTGNet.
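The distinction the abstract draws, searching textures in a single Ref image versus generating them, can be made concrete. A conventional RefSR step retrieves the most similar reference patch for each query patch; LTGNet replaces this retrieval with a learned generator, so no Ref image is needed at inference. A toy version of the retrieval step only (hypothetical names, squared-L2 similarity on flattened patches):

```python
def nearest_texture(query_patch, ref_patches):
    """Classic RefSR texture search: return the reference patch closest to
    the query patch under squared L2 distance on flattened pixel values.

    In LTGNet this lookup is replaced by the learnable texture generator
    (LTG), so the texture space comes from all training samples rather
    than one Ref image.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(ref_patches, key=lambda p: sq_dist(p, query_patch))
```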
ABSTRACT
BACKGROUND: Diabetic macular edema (DME) is the leading cause of vision loss in people with diabetes. Application of artificial intelligence (AI) in interpreting fundus photography (FP) and optical coherence tomography (OCT) images allows prompt detection and intervention. PURPOSE: To evaluate the performance of AI in detecting DME from FP or OCT images and identify potential factors affecting model performance. DATA SOURCES: We searched seven electronic libraries up to 12 February 2023. STUDY SELECTION: We included studies using AI to detect DME from FP or OCT images. DATA EXTRACTION: We extracted study characteristics and performance parameters. DATA SYNTHESIS: Fifty-three studies were included in the meta-analysis. FP-based algorithms of 25 studies yielded pooled area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity of 0.964, 92.6%, and 91.1%, respectively. OCT-based algorithms of 28 studies yielded pooled AUROC, sensitivity, and specificity of 0.985, 95.9%, and 97.9%, respectively. Potential factors improving model performance included deep learning techniques and larger, more diverse training data sets. Models demonstrated better performance when validated internally than externally, and those trained with multiple data sets showed better results upon external validation. LIMITATIONS: Analyses were limited by unstandardized algorithm outcomes and insufficient data on patient demographics, OCT volumetric scans, and external validation. CONCLUSIONS: This meta-analysis demonstrates satisfactory performance of AI in detecting DME from FP or OCT images. External validation is warranted for future studies to evaluate model generalizability. Further investigations may estimate optimal sample size, the effect of class balance and patient demographics, and the additional benefits of OCT volumetric scans.
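Pooled sensitivities and specificities like those above are conventionally combined on the logit scale. A minimal fixed-effect inverse-variance sketch for intuition only; a meta-analysis of diagnostic accuracy would typically fit a bivariate random-effects model instead, and the names here are illustrative:

```python
import math

def pool_proportion(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.

    For sensitivity, events[i] = TP and totals[i] = TP + FN in study i;
    for specificity, events[i] = TN and totals[i] = TN + FP.
    """
    logit = lambda p: math.log(p / (1.0 - p))
    num, den = 0.0, 0.0
    for e, n in zip(events, totals):
        p = e / n
        var = 1.0 / e + 1.0 / (n - e)  # approximate variance of logit(p)
        w = 1.0 / var                  # inverse-variance weight
        num += w * logit(p)
        den += w
    return 1.0 / (1.0 + math.exp(-num / den))  # back-transform to a proportion
```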
Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Macular Edema, Humans, Diabetic Retinopathy/diagnostic imaging, Diabetic Retinopathy/complications, Macular Edema/diagnostic imaging, Macular Edema/etiology, Artificial Intelligence, Optical Coherence Tomography/methods, Photography/methods
ABSTRACT
Alzheimer's disease (AD) is the leading cause of dementia worldwide. Current diagnostic modalities for AD generally focus on detecting the presence of amyloid β and tau protein in the brain (for example, positron emission tomography [PET] and cerebrospinal fluid testing), but these are limited by their high cost, invasiveness, and the specialized expertise they require. Retinal imaging exhibits potential in AD screening and risk stratification, as the retina provides a platform for the optical visualization of the central nervous system in vivo, with vascular and neuronal changes that mirror brain pathology. Given the paradigm shift brought by advances in artificial intelligence and the emergence of disease-modifying therapies, this article aims to summarize and review the current literature to highlight 8 trends in an evolving landscape regarding the role and potential value of retinal imaging in AD screening.
ABSTRACT
AIMS: To develop and externally test deep learning (DL) models for assessing the image quality of three-dimensional (3D) macular scans from Cirrus and Spectralis optical coherence tomography devices. METHODS: We retrospectively collected two data sets including 2277 Cirrus 3D scans and 1557 Spectralis 3D scans, respectively, for training (70%), fine-tuning (10%) and internal validation (20%) from electronic medical and research records at The Chinese University of Hong Kong Eye Centre and the Hong Kong Eye Hospital. Scans with various eye diseases (eg, diabetic macular oedema, age-related macular degeneration, polypoidal choroidal vasculopathy and pathological myopia), and scans of normal eyes from adults and children were included. Two graders labelled each 3D scan as gradable or ungradable, according to standardised criteria. We used a 3D version of the residual network (ResNet)-18 for Cirrus 3D scans and a multiple-instance learning pipeline with ResNet-18 for Spectralis 3D scans. The two DL models were further tested on three unseen Cirrus data sets from Singapore and five unseen Spectralis data sets from India, Australia and Hong Kong. RESULTS: In the internal validation, the models achieved areas under the curve (AUCs) of 0.930 (0.885-0.976) and 0.906 (0.863-0.948) for assessing the Cirrus 3D scans and Spectralis 3D scans, respectively. In the external testing, the models showed robust performance with AUCs ranging from 0.832 (0.730-0.934) to 0.930 (0.906-0.953) and from 0.891 (0.836-0.945) to 0.962 (0.918-1.000), respectively. CONCLUSIONS: Our models could be used for filtering out ungradable 3D scans and could be further incorporated with a disease-detection DL model, allowing a fully automated eye disease detection workflow.
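The multiple-instance learning pipeline mentioned for Spectralis scans treats each 3D volume as a bag of B-scan "instances": a 2D backbone (ResNet-18 here) scores each slice, and the slice scores are pooled into one scan-level score. A minimal sketch of only the pooling step, with the backbone abstracted away and all names hypothetical:

```python
def mil_bag_score(instance_scores, pooling="max"):
    """Aggregate per-B-scan probabilities (e.g. of being ungradable) into a
    single scan-level score. Max-pooling follows the standard MIL assumption
    that one positive instance makes the whole bag positive."""
    if not instance_scores:
        raise ValueError("empty bag")
    if pooling == "max":
        return max(instance_scores)
    if pooling == "mean":
        return sum(instance_scores) / len(instance_scores)
    raise ValueError(f"unknown pooling: {pooling}")
```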
Subject(s)
Deep Learning, Imaging, Three-Dimensional, Macula Lutea, Optical Coherence Tomography, Humans, Optical Coherence Tomography/methods, Optical Coherence Tomography/standards, Retrospective Studies, Male, Female, Macula Lutea/diagnostic imaging, Macula Lutea/pathology, Imaging, Three-Dimensional/methods, Middle Aged, Adult, Aged, Retinal Diseases/diagnostic imaging, Retinal Diseases/diagnosis, Reproducibility of Results, Child
ABSTRACT
BACKGROUND: Early identification and prevention of frailty are very important for patients with cirrhosis. METHODS: This study was the first to use the Liver Frailty Index in outpatients with cirrhosis in China and to analyze its influencing factors. The study included 387 patients with cirrhosis. Frailty was diagnosed using the Liver Frailty Index, and multivariable logistic regression models were used to analyze factors influencing frailty. RESULTS: Frailty was diagnosed in 9.6% of patients and prefrailty in 54.8%. Age, sex, BMI, education level, monthly income, number of unplanned hospital admissions in the past year, cause of cirrhosis, Child-Pugh classification, nutritional risk, physical activity, gait speed and Activities of Daily Living (ADL) scale score differed significantly among the frailty, prefrailty and non-frailty groups. Age (OR, 1.103; CI, 0.064-0.132), BMI (OR, 0.817; CI, -0.302 to -0.104), education level (OR, 4.321; CI, 0.754-2.173), physical activity (OR, 3.580; CI, 0.534-2.016) and gait speed (OR, 0.001; CI, -8.188 to -4.972) were influencing factors for frailty in outpatients with cirrhosis. CONCLUSION: Outpatients with cirrhosis have a high incidence of frailty and prefrailty. Older age, reduced gait speed, physical inactivity and lower education level are risk factors for frailty and prefrailty, and early identification and intervention are warranted.
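The odds ratios above come from logistic-regression coefficients: OR = exp(β), and the OR-scale confidence interval is obtained by exponentiating the coefficient's interval. Notably, the intervals printed in this abstract appear to be on the coefficient (log-odds) scale, e.g. exponentiating 0.064-0.132 gives roughly 1.07-1.14, bracketing the stated OR of 1.103. A sketch of the conversion, with hypothetical names:

```python
import math

def odds_ratio_ci(beta, ci_low, ci_high):
    """Convert a logistic-regression coefficient and its confidence limits
    (log-odds scale) to an odds ratio with an OR-scale interval."""
    return math.exp(beta), math.exp(ci_low), math.exp(ci_high)
```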
Subject(s)
Frail Elderly, Frailty, Humans, Aged, Cross-Sectional Studies, Outpatients, Frailty/diagnosis, Frailty/epidemiology, Liver Cirrhosis
ABSTRACT
Alzheimer's disease (AD) remains a global health challenge in the 21st century due to its increasing prevalence as the major cause of dementia. State-of-the-art artificial intelligence (AI)-based tests could potentially improve population-based strategies to detect and manage AD. Current retinal imaging demonstrates immense potential as a non-invasive screening measure for AD, by studying qualitative and quantitative changes in the neuronal and vascular structures of the retina that are often associated with degenerative changes in the brain. On the other hand, the tremendous success of AI, especially deep learning, in recent years has encouraged its incorporation with retinal imaging for predicting systemic diseases. Further development in deep reinforcement learning (DRL), defined as a subfield of machine learning that combines deep learning and reinforcement learning, also prompts the question of how it can work hand in hand with retinal imaging as a viable tool for automated prediction of AD. This review aims to discuss potential applications of DRL in using retinal imaging to study AD, and their synergistic application to unlock other possibilities, such as AD detection and prediction of AD progression. Challenges and future directions, such as the use of inverse DRL in defining reward functions, the lack of standardization in retinal imaging, and limited data availability, will also be addressed to bridge gaps for its transition into clinical use.
Subject(s)
Alzheimer Disease, Humans, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/complications, Artificial Intelligence, Magnetic Resonance Imaging/methods, Retina/diagnostic imaging, Machine Learning
ABSTRACT
Optical coherence tomography angiography (OCT-A) provides depth-resolved visualization of the retinal microvasculature without intravenous dye injection. It facilitates investigations of various retinal vascular diseases and glaucoma through non-invasive, individual, and efficient assessment of qualitative and quantitative microvascular changes in the different retinal layers and the radial peripapillary capillary layer. Deep learning (DL), a subset of artificial intelligence (AI) based on deep neural networks, has been applied in OCT-A image analysis in recent years and has achieved good performance for different tasks, such as image quality control, segmentation, and classification. DL technologies have further facilitated the potential implementation of OCT-A in eye clinics in an automated and efficient manner and enhanced its clinical value for detecting and evaluating various vascular retinopathies. Nevertheless, the deployment of this combination in real-world clinics is still in the "proof-of-concept" stage due to several limitations, such as small training sample sizes, lack of standardized data preprocessing, insufficient testing in external datasets, and absence of standardized interpretation of results. In this review, we introduce the existing applications of DL in OCT-A, summarize the potential challenges of clinical deployment, and discuss future research directions.
ABSTRACT
INTRODUCTION: Generative pretrained transformer-4 (GPT-4) has gained widespread attention from society, and its potential has been extensively evaluated in many areas. However, investigation of GPT-4's use in medicine, especially in ophthalmology, is still limited. This study aims to evaluate GPT-4's capability to identify rare ophthalmic diseases in three simulated scenarios for different end-users, including patients, family physicians, and junior ophthalmologists. METHODS: We selected ten treatable rare ophthalmic disease cases from the publicly available EyeRounds service. We gradually increased the amount of information fed into GPT-4 to simulate the scenarios of a patient, a family physician, and a junior ophthalmologist using GPT-4. GPT-4's responses were evaluated on two aspects: suitability (appropriate or inappropriate) and accuracy (right or wrong) by senior ophthalmologists (> 10 years' experience). RESULTS: Among the 30 responses, 83.3% were considered "appropriate" by senior ophthalmologists. In the simulated patient, family physician, and junior ophthalmologist scenarios, seven (70%), ten (100%), and eight (80%) responses, respectively, were graded as "appropriate". However, compared with the ground truth, GPT-4 generally output only several possible diseases, without "right" responses, in the simulated patient scenarios. In contrast, in the simulated family physician scenario, 50% of GPT-4's responses were "right," and in the simulated junior ophthalmologist scenario, the model achieved a higher "right" rate of 90%. CONCLUSION: To our knowledge, this is the first proof-of-concept study that evaluates GPT-4's capacity to identify rare eye diseases in simulated scenarios involving patients, family physicians, and junior ophthalmologists.
The results indicate that GPT-4 has the potential to serve as a consultation assisting tool for patients and family physicians to receive referral suggestions and an assisting tool for junior ophthalmologists to diagnose rare eye diseases. However, it is important to approach GPT-4 with caution and acknowledge the need for verification and careful referrals in clinical settings.
ABSTRACT
Diagnosis and detection of progression of glaucoma remains challenging. Artificial intelligence-based tools have the potential to improve and standardize the assessment of glaucoma but development of these algorithms is difficult given the multimodal and variable nature of the diagnosis. Currently, most algorithms are focused on a single imaging modality, specifically screening and diagnosis based on fundus photos or optical coherence tomography images. Use of anterior segment optical coherence tomography and goniophotographs is limited. The majority of algorithms designed for disease progression prediction are based on visual fields. No studies in our literature search assessed the use of artificial intelligence for treatment response prediction and no studies conducted prospective testing of their algorithms. Additional challenges to the development of artificial intelligence-based tools include scarcity of data and a lack of consensus in diagnostic criteria. Although research in the use of artificial intelligence for glaucoma is promising, additional work is needed to develop clinically usable tools.
Subject(s)
Deep Learning, Glaucoma, Humans, Artificial Intelligence, Prospective Studies, Glaucoma/diagnosis, Algorithms
ABSTRACT
AIMS: We investigated the demographic, ocular, diabetes-related and systemic factors associated with a binary outcome of diabetic macular ischaemia (DMI), as assessed by optical coherence tomography angiography (OCTA) evaluation of non-perfusion at the level of the superficial capillary plexus (SCP) and deep capillary plexus (DCP), in a cohort of patients with diabetes mellitus (DM). MATERIALS AND METHODS: 617 patients with DM were recruited from July 2015 to December 2020 at the Chinese University of Hong Kong Eye Centre. Image quality (gradable or ungradable for assessing DMI) and DMI (present or absent) were assessed at the level of the SCP and DCP by OCTA. RESULTS: 1107 eyes from 593 subjects were included in the final analysis. 560 (50.59%) eyes had DMI at the level of the SCP, and 647 (58.45%) eyes had DMI at the level of the DCP. Among eyes without diabetic retinopathy (DR), DMI was observed in 19.40% and 24.13% of eyes at the SCP and DCP, respectively. In the multivariable logistic regression models, older age, poorer visual acuity, thinner ganglion cell-inner plexiform layer thickness, worse DR severity, higher haemoglobin A1c level, lower estimated glomerular filtration rate and higher low-density lipoprotein cholesterol level were associated with SCP-DMI. In addition to the aforementioned factors, presence of diabetic macular oedema and shorter axial length were associated with DCP-DMI. CONCLUSION: We report a series of factors associated with SCP-DMI and DCP-DMI. A binary DMI outcome might enable a simplified OCTA-based evaluation before subsequent quantitative analysis of DMI extent, and supports the call for an updated diabetic retinal disease staging system incorporating OCTA.
Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Humans, Fluorescein Angiography/methods, Retinal Vessels, Retina, Diabetic Retinopathy/diagnosis, Optical Coherence Tomography/methods, Ischemia/diagnosis
ABSTRACT
BACKGROUND: Deep learning (DL) is promising for detecting glaucoma. However, patients' privacy and data security are major concerns when pooling all data for model development. We developed a privacy-preserving DL model using the federated learning (FL) paradigm to detect glaucoma from optical coherence tomography (OCT) images. METHODS: This is a multicentre study. The FL paradigm consisted of a 'central server' and seven eye centres in Hong Kong, the USA and Singapore. Each centre first trained a model locally with its own OCT optic disc volumetric dataset and then uploaded its model parameters to the central server. The central server used the FedProx algorithm to aggregate all centres' model parameters. Subsequently, the aggregated parameters were redistributed to each centre for local model optimisation. We experimented with three three-dimensional (3D) networks to evaluate the stability of the FL paradigm. Lastly, we tested the FL model on two prospectively collected unseen datasets. RESULTS: We used 9326 volumetric OCT scans from 2785 subjects. The FL model performed consistently well with the three networks across the 7 centres (accuracies 78.3%-98.5%, 75.9%-97.0%, and 78.3%-97.5%, respectively) and stably on the 2 unseen datasets (accuracies 84.8%-87.7%, 81.3%-84.8%, and 86.0%-87.8%, respectively). The FL model achieved non-inferior performance in classifying glaucoma compared with the traditional model and significantly outperformed the individual models. CONCLUSION: The 3D FL model could leverage all the datasets and achieve generalisable performance without data exchange across centres. This study demonstrated an OCT-based FL paradigm for glaucoma identification with ensured patient privacy and data security, charting another course toward the real-world transition of artificial intelligence in ophthalmology.
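The server-side step described above, aggregating centre models and redistributing the result, amounts to a dataset-size-weighted parameter average. FedProx keeps this FedAvg-style aggregation; its distinguishing feature is a proximal term (μ/2·||w − w_global||²) added to each centre's local training objective to limit client drift. A minimal sketch of the aggregation only; names are hypothetical, and real parameters would be network weight tensors:

```python
import numpy as np

def aggregate_round(client_params, client_sizes):
    """One FL aggregation round: dataset-size-weighted average of each
    client's list of parameter arrays. The averaged parameters are then
    redistributed to every client for the next round of local training."""
    total = float(sum(client_sizes))
    n_tensors = len(client_params[0])
    averaged = []
    for t in range(n_tensors):
        acc = sum(
            (n / total) * np.asarray(params[t], dtype=float)
            for params, n in zip(client_params, client_sizes)
        )
        averaged.append(acc)
    return averaged
```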
ABSTRACT
The advent of generative artificial intelligence and large language models has ushered in transformative applications within medicine. Specifically in ophthalmology, large language models offer unique opportunities to revolutionise digital eye care, address clinical workflow inefficiencies, and enhance patient experiences across diverse global eye care landscapes. Yet alongside these prospects lie tangible and ethical challenges, encompassing data privacy, security, and the intricacies of embedding large language models into clinical routines. This Viewpoint highlights the promising applications of large language models in ophthalmology, while weighing up the practical and ethical barriers towards their real-world implementation. This Viewpoint seeks to stimulate broader discourse on the potential of large language models in ophthalmology and to galvanise both clinicians and researchers into tackling the prevailing challenges and optimising the benefits of large language models while curtailing the associated risks.
Subject(s)
Medicine, Ophthalmology, Humans, Artificial Intelligence, Language, Privacy
ABSTRACT
Advances in artificial intelligence, particularly deep learning (DL), have made tremendous impacts on the field of ocular imaging over the last few years. Specifically, DL has been utilised to detect and classify various ocular diseases on retinal photographs, optical coherence tomography (OCT) images, and OCT-angiography images. To achieve good robustness and generalisability of model performance, DL training strategies traditionally require extensive and diverse training datasets from various sites to be transferred and pooled into a "centralised location". However, such a data transfer process raises practical concerns related to data security and patient privacy. Federated learning (FL) is a distributed collaborative learning paradigm which enables the coordination of multiple collaborators without the need for sharing confidential data. This distributed training approach has great potential to ensure data privacy among different institutions and reduce the potential risk of data leakage from data pooling or centralisation. This review article aims to introduce the concept of FL, provide current evidence of FL in ocular imaging, and discuss potential challenges as well as future applications.
ABSTRACT
Purpose: To develop a three-dimensional (3D) deep learning algorithm to detect glaucoma using spectral-domain optical coherence tomography (SD-OCT) optic nerve head (ONH) cube scans and to validate its performance on ethnically diverse real-world datasets and on cropped ONH scans. Methods: In total, 2461 Cirrus SD-OCT ONH scans of 1012 eyes were obtained from the Glaucoma Clinic Imaging Database at the Byers Eye Institute, Stanford University, from March 2010 to December 2017. A 3D deep neural network was trained and tested on this unique raw OCT cube dataset to identify glaucoma under a multimodal definition, excluding eyes with concomitant retinal disease or other optic neuropathies. A total of 1022 scans of 363 glaucomatous eyes (207 patients) and 542 scans of 291 normal eyes (167 patients) from Stanford were included in training, and 142 scans of 48 glaucomatous eyes (27 patients) and 61 scans of 39 normal eyes (23 patients) were included in the validation set. A total of 3371 scans (Cirrus SD-OCT) from four different countries were used for evaluation of the model. The nonoverlapping test dataset from Stanford (USA) consisted of 694 scans: 241 scans from 113 normal eyes of 66 patients and 453 scans of 157 glaucomatous eyes of 89 patients. The datasets from Hong Kong (1625 scans: 666 scans from 196 normal eyes of 99 patients and 959 scans of 277 glaucomatous eyes of 155 patients), India (672 scans: 211 scans from 147 normal eyes of 98 patients and 461 scans from 171 glaucomatous eyes of 101 patients), and Nepal (380 scans: 158 scans from 143 normal eyes of 89 patients and 222 scans from 174 glaucomatous eyes of 109 patients) were used for external evaluation. The performance of the model was then evaluated on manually cropped scans from Stanford using a new algorithm called DiagFind.
The ONH region was cropped by identifying the appropriate zone of the image in the expected location relative to the Bruch's membrane opening (BMO) using commercially available imaging software. Subgroup analyses were performed in groups stratified by eye and by myopia severity, and on a set of glaucoma cases without visual field defects. Saliency maps were generated to highlight the areas the model used to make a prediction. The model's performance was compared with that of a glaucoma specialist using all available information on a subset of cases. Results: The 3D deep learning system achieved area under the curve (AUC) values of 0.91 (95% CI, 0.90-0.92), 0.80 (95% CI, 0.78-0.82), 0.94 (95% CI, 0.93-0.96), and 0.87 (95% CI, 0.85-0.90) on the Stanford, Hong Kong, India, and Nepal datasets, respectively, for detecting perimetric glaucoma; AUC values of 0.99 (95% CI, 0.97-1.00), 0.96 (95% CI, 0.93-1.00), and 0.92 (95% CI, 0.89-0.95) on severe, moderate, and mild myopia cases, respectively; and an AUC of 0.77 on cropped scans. The model achieved an AUC value of 0.92 (95% CI, 0.90-0.93) versus 0.91 for the human grader on the same subset of scans (P = 0.99). The recall of the model on glaucoma cases without field defects was 0.76 (0.68-0.85). Saliency maps highlighted the lamina cribrosa in glaucomatous eyes versus the superficial retina in normal eyes as the regions associated with classification. Conclusions: A 3D convolutional neural network (CNN) trained on SD-OCT ONH cubes can distinguish glaucoma from normal cases in diverse datasets obtained from four different countries. The model trained with additional random-cropping data augmentation performed reasonably on manually cropped scans, indicating the importance of the lamina cribrosa in glaucoma detection.
Translational Relevance: A 3D CNN trained on SD-OCT ONH cubes was developed to detect glaucoma in diverse datasets obtained from four different countries and on cropped scans. The model identified lamina cribrosa as the region associated with glaucoma detection.
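The AUC values reported above can be understood through the rank interpretation of the AUROC: the probability that a randomly chosen glaucomatous scan receives a higher model score than a randomly chosen normal scan. A minimal sketch (not the authors' code; the scores below are hypothetical):

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the normalized Mann-Whitney U statistic: the fraction of
    (positive, negative) score pairs in which the positive ranks higher,
    counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model outputs (probability of glaucoma), for illustration only:
glaucoma_scores = [0.9, 0.8, 0.75, 0.4]
normal_scores = [0.3, 0.5, 0.2, 0.8]
print(round(auroc(glaucoma_scores, normal_scores), 3))  # 0.781
```

An AUC of 0.91, as on the Stanford test set, thus means a randomly drawn glaucomatous scan outranks a randomly drawn normal scan 91% of the time.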
Subject(s)
Deep Learning, Glaucoma, Myopia, Optic Disk, Optic Nerve Diseases, Glaucoma/diagnosis, Humans, Optic Disk/diagnostic imaging, Optic Nerve Diseases/diagnosis
ABSTRACT
Importance: Myopia in school-aged children is a public health issue worldwide; consequently, effective interventions to prevent onset and progression are required. Objective: To investigate whether SMS text messages to parents increase light exposure and time outdoors in school-aged children and provide effective myopia control. Design, Setting, and Participants: This randomized clinical trial was conducted in China from May 2017 to May 2018, with participants observed for 3 years. Of 528 965 primary school-aged children from Anyang, 3113 were randomly selected. Of these, 268 grade 2 schoolchildren were selected and randomly assigned to SMS and control groups. Data were analyzed from June to December 2021. Interventions: Parents of children in the SMS group were sent text messages twice daily for 1 year to take their children outdoors. All children wore portable light meters to record light exposure on 3 randomly selected days (2 weekdays and 1 weekend day) before and after the intervention. Main Outcomes and Measures: The co-primary outcomes were change in axial length (axial elongation) and change in spherical equivalent refraction (myopic shift) from baseline, measured at the end of the intervention and 3 years later. A secondary outcome was myopia prevalence. Results: Of 268 grade 2 schoolchildren, 121 (45.1%) were girls, and the mean (SD) age was 8.4 (0.3) years. Compared with the control group, the SMS intervention group demonstrated greater light exposure and more time outdoors during weekends, and the intervention had a significant effect on axial elongation (coefficient, 0.09; 95% CI, 0.02-0.17; P = .01). Axial elongation was lower in the SMS group than in the control group during the intervention (0.27 mm [95% CI, 0.24-0.30] vs 0.31 mm [95% CI, 0.29-0.34]; P = .03) and at year 2 (0.39 mm [95% CI, 0.35-0.42] vs 0.46 mm [95% CI, 0.42-0.50]; P = .009) and year 3 (0.30 mm [95% CI, 0.27-0.33] vs 0.35 mm [95% CI, 0.33-0.37]; P = .005) after the intervention.
Myopic shift was lower in the SMS group than in the control group at year 2 (-0.69 diopters [D] [95% CI, -0.78 to -0.60] vs -0.82 D [95% CI, -0.91 to -0.73]; P = .04) and year 3 (-0.47 D [95% CI, -0.54 to -0.39] vs -0.60 D [95% CI, -0.67 to -0.53]; P = .01) after the intervention, as was myopia prevalence (year 2: 38.3% [51 of 133] vs 51.1% [68 of 133]; year 3: 46.6% [62 of 133] vs 65.4% [87 of 133]). Conclusions and Relevance: In this randomized clinical trial, SMS text messages to parents resulted in lower axial elongation and myopia progression in schoolchildren over 3 years, possibly through increased outdoor time and light exposure, showing promise for reducing myopia prevalence. Trial Registration: Chinese Clinical Trial Registry Identifier: ChiCTR-IOC-17010525.
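The prevalence figures above are simple proportions of myopic children among those examined at each visit. As a quick arithmetic check of the reported percentages (using the counts stated in the abstract):

```python
def prevalence(cases, total):
    """Myopia prevalence as a percentage, rounded to one decimal place."""
    return round(100.0 * cases / total, 1)

# Counts reported at years 2 and 3 (SMS group vs control group, n = 133 each):
print(prevalence(51, 133), prevalence(68, 133))  # year 2: 38.3 vs 51.1
print(prevalence(62, 133), prevalence(87, 133))  # year 3: 46.6 vs 65.4
```

Each reported percentage matches its fraction, e.g. 62 of 133 is 46.6%.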
Subject(s)
Myopia, Text Messaging, Child, Female, Humans, Male, Myopia/epidemiology, Myopia/prevention & control, Ocular Refraction, Prevalence, Parents, Disease Progression
ABSTRACT
Purpose: We aim to develop a multi-task three-dimensional (3D) deep learning (DL) model to detect glaucomatous optic neuropathy (GON) and myopic features (MF) simultaneously from spectral-domain optical coherence tomography (SDOCT) volumetric scans. Methods: Each volumetric scan was labelled as GON according to the criteria of retinal nerve fibre layer (RNFL) thinning, with a structural defect that correlated in position with the visual field defect (i.e., the reference standard). MF were graded from the SDOCT en face images, defined as the presence of peripapillary atrophy (PPA), optic disc tilting, or fundus tessellation. The multi-task DL model was built on a ResNet architecture with two binary outputs: yes/no GON and yes/no MF. SDOCT scans were collected in a tertiary eye hospital (Hong Kong SAR, China) for training (80%), tuning (10%), and internal validation (10%). External testing was performed on five independent datasets from eye centres in Hong Kong, the United States, and Singapore. For GON detection, we compared the model to the average RNFL thickness measurement generated by the SDOCT device. To investigate whether MF can affect the model's performance on GON detection, we conducted subgroup analyses in groups stratified by yes/no MF. The area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, and accuracy were reported. Results: A total of 8,151 SDOCT volumetric scans from 3,609 eyes were collected. For detecting GON, in the internal validation, the proposed 3D model had a significantly higher AUROC than average RNFL thickness in discriminating GON from normal (0.949 vs. 0.913, p < 0.001). In the external testing, the two approaches had comparable performance. In the subgroup analysis, the multi-task DL model performed significantly better in the "no MF" group (0.883 vs. 0.965, p < 0.001) in one external testing dataset, with no significant difference in the internal validation and the other external testing datasets.
The multi-task DL model's performance in detecting MF also generalized across all datasets, with AUROC values ranging from 0.855 to 0.896. Conclusion: The proposed multi-task 3D DL model demonstrated high generalizability across all datasets, and, in general, the presence of MF did not affect the accuracy of GON detection.
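Alongside AUROC, the abstracts above report sensitivity, specificity, and accuracy, which all derive from a single confusion matrix once a decision threshold is fixed. A minimal sketch of those definitions (the counts below are hypothetical, not from the study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts.

    tp/fn: GON eyes classified correctly/incorrectly;
    fp/tn: normal eyes classified incorrectly/correctly.
    """
    sensitivity = tp / (tp + fn)                # recall on diseased eyes
    specificity = tn / (tn + fp)                # recall on normal eyes
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall correct fraction
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration only:
sens, spec, acc = diagnostic_metrics(tp=90, fp=5, tn=95, fn=10)
print(sens, spec, acc)  # 0.9 0.95 0.925
```

Unlike AUROC, these three metrics depend on the chosen threshold, which is why threshold-free AUROC is the primary comparison between the DL model and average RNFL thickness.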