1.
Br J Ophthalmol ; 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38839251

ABSTRACT

BACKGROUND/AIMS: The aim of this study was to develop and evaluate digital ray, based on preoperative and postoperative image pairs using style transfer generative adversarial networks (GANs), to enhance cataractous fundus images for improved retinopathy detection. METHODS: For eligible cataract patients, preoperative and postoperative colour fundus photographs (CFP) and ultra-wide field (UWF) images were captured. Then, both the original CycleGAN and a modified CycleGAN (C2ycleGAN) framework were adopted for image generation and quantitatively compared using Fréchet Inception Distance (FID) and Kernel Inception Distance (KID). Additionally, CFP and UWF images from another cataract cohort were used to test model performance. Different panels of ophthalmologists evaluated the quality, authenticity and diagnostic efficacy of the generated images. RESULTS: A total of 959 CFP and 1009 UWF image pairs were included in model development. FID and KID indicated that images generated by C2ycleGAN presented significantly improved quality. Based on ophthalmologists' average ratings, the percentages of inadequate-quality images decreased from 32% to 18.8% for CFP, and from 18.7% to 14.7% for UWF. Only 24.8% and 13.8% of generated CFP and UWF images, respectively, could be recognised as synthetic. The accuracy of retinopathy detection significantly increased from 78% to 91% for CFP and from 91% to 93% for UWF. For retinopathy subtype diagnosis, the accuracies also increased from 87%-94% to 91%-100% for CFP and from 87%-95% to 93%-97% for UWF. CONCLUSION: Digital ray could generate realistic postoperative CFP and UWF images with enhanced quality and accuracy for overall detection and subtype diagnosis of retinopathies, especially for CFP. TRIAL REGISTRATION NUMBER: This study was registered with ClinicalTrials.gov (NCT05491798).
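A minimal sketch of the quantitative comparison step described above, assuming the torchmetrics package: FID and KID are computed between real postoperative photographs and GAN-generated images. The loader names and uint8 preprocessing are illustrative assumptions, not the authors' pipeline.

```python
# Sketch only: compare real postoperative images with GAN-generated images
# using FID and KID (torchmetrics). Loaders are assumed to yield uint8
# tensors of shape (N, 3, H, W); names are placeholders.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance

def evaluate_generated_images(real_loader, fake_loader, device="cuda"):
    fid = FrechetInceptionDistance(feature=2048).to(device)
    # subset_size must not exceed the number of evaluated images
    kid = KernelInceptionDistance(subset_size=50).to(device)

    for real_imgs in real_loader:              # real postoperative CFP/UWF photos
        real_imgs = real_imgs.to(device)
        fid.update(real_imgs, real=True)
        kid.update(real_imgs, real=True)

    for fake_imgs in fake_loader:              # CycleGAN / C2ycleGAN outputs
        fake_imgs = fake_imgs.to(device)
        fid.update(fake_imgs, real=False)
        kid.update(fake_imgs, real=False)

    kid_mean, kid_std = kid.compute()
    return fid.compute().item(), kid_mean.item(), kid_std.item()
```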

2.
JAMA Ophthalmol ; 141(11): 1045-1051, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37856107

ABSTRACT

Importance: Retinal diseases are the leading cause of irreversible blindness worldwide, and timely detection contributes to prevention of permanent vision loss, especially for patients in rural areas with limited medical resources. Deep learning systems (DLSs) based on fundus images with a 45° field of view have been extensively applied in population screening, while the feasibility of using ultra-widefield (UWF) fundus image-based DLSs to detect retinal lesions in patients in rural areas warrants exploration. Objective: To explore the performance of a DLS for multiple retinal lesion screening using UWF fundus images from patients in rural areas. Design, Setting, and Participants: In this diagnostic study, a previously developed DLS based on UWF fundus images was used to screen for 5 retinal lesions (retinal exudates or drusen, glaucomatous optic neuropathy, retinal hemorrhage, lattice degeneration or retinal breaks, and retinal detachment) in 24 villages of Yangxi County, China, between November 17, 2020, and March 30, 2021. Interventions: The captured images were analyzed by the DLS and ophthalmologists. Main Outcomes and Measures: The performance of the DLS in rural screening was compared with that of the internal validation in the previous model development stage. The image quality, lesion proportion, and complexity of lesion composition were compared between the model development stage and the rural screening stage. Results: A total of 6222 eyes in 3149 participants (1685 women [53.5%]; mean [SD] age, 70.9 [9.1] years) were screened. The DLS achieved a mean (SD) area under the receiver operating characteristic curve (AUC) of 0.918 (0.021) (95% CI, 0.892-0.944) for detecting 5 retinal lesions in the entire data set when applied for patients in rural areas, which was lower than that reported at the model development stage (AUC, 0.998 [0.002] [95% CI, 0.995-1.000]; P < .001). Compared with the fundus images in the model development stage, the fundus images in this rural screening study had an increased frequency of poor quality (13.8% [860 of 6222] vs 0%), increased variation in lesion proportions (0.1% [6 of 6222]-36.5% [2271 of 6222] vs 14.0% [2793 of 19 891]-21.3% [3433 of 16 138]), and an increased complexity of lesion composition. Conclusions and Relevance: This diagnostic study suggests that the DLS exhibited excellent performance using UWF fundus images as a screening tool for 5 retinal lesions in patients in a rural setting. However, poor image quality, diverse lesion proportions, and a complex set of lesions may have reduced the performance of the DLS; these factors in targeted screening scenarios should be taken into consideration in the model development stage to ensure good performance.
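The evaluation above reports per-lesion discrimination as AUCs with 95% CIs. A hedged sketch of one way to compute such per-lesion metrics with a bootstrap CI is shown below; the array names and the bootstrap approach are assumptions, not the study's actual statistical code.

```python
# Sketch only: per-lesion AUC with a bootstrap 95% CI for a multi-label
# classifier. y_true / y_score are assumed arrays of shape (n_eyes, n_lesions).
import numpy as np
from sklearn.metrics import roc_auc_score

def lesion_auc_with_ci(y_true, y_score, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    n, n_lesions = y_true.shape
    results = {}
    for j in range(n_lesions):
        point = roc_auc_score(y_true[:, j], y_score[:, j])
        boots = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)
            if len(np.unique(y_true[idx, j])) < 2:   # resample lacks one class
                continue
            boots.append(roc_auc_score(y_true[idx, j], y_score[idx, j]))
        lo, hi = np.percentile(boots, [2.5, 97.5])
        results[j] = {"auc": point, "ci95": (lo, hi)}
    return results
```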


Subjects
Deep Learning, Retinal Diseases, Humans, Female, Aged, Sensitivity and Specificity, Fundus Oculi, Retina/diagnostic imaging, Retina/pathology, Retinal Diseases/diagnostic imaging, Retinal Diseases/pathology
3.
STAR Protoc ; 4(4): 102565, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-37733597

ABSTRACT

Data quality issues have been acknowledged as one of the greatest obstacles in medical artificial intelligence research. Here, we present DeepFundus, which employs deep learning techniques to perform multidimensional classification of fundus image quality and provide real-time guidance for on-site image acquisition. We describe steps for data preparation, model training, model inference, model evaluation, and the visualization of results using heatmaps. This protocol can be implemented in Python using either the suggested dataset or a customized dataset. For complete details on the use and execution of this protocol, please refer to Liu et al.¹
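As a rough illustration of the "model inference" and "heatmap visualization" steps (the protocol itself is implemented in Python; this is not the released DeepFundus code), a plain PyTorch Grad-CAM-style sketch might look like the following. The model, target layer, and preprocessing are placeholders.

```python
# Sketch only: run a trained fundus-quality classifier on one preprocessed
# image and produce a Grad-CAM-style heatmap from a chosen convolutional layer.
import torch
import torch.nn.functional as F

def gradcam_heatmap(model, image, target_layer, class_idx=None):
    """image: (1, 3, H, W) float tensor; target_layer: a conv module inside `model`."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()     # predicted quality class
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()

    weights = grads[0].mean(dim=(2, 3), keepdim=True)            # GAP of gradients
    cam = F.relu((weights * feats[0]).sum(dim=1, keepdim=True))  # weighted feature maps
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # rescale to [0, 1]
    return class_idx, cam[0, 0]
```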


Subjects
Biomedical Research, Deep Learning, Artificial Intelligence
4.
Cell Rep Med ; 4(2): 100912, 2023 02 21.
Article in English | MEDLINE | ID: mdl-36669488

ABSTRACT

Medical artificial intelligence (AI) has been moving from the research phase to clinical implementation. However, most AI-based models are built using high-quality images preprocessed in the laboratory, which are not representative of real-world settings. This dataset bias has proved to be a major driver of AI system dysfunction. Inspired by the design of flow cytometry, DeepFundus, a deep-learning-based fundus image classifier, is developed to provide automated and multidimensional image sorting to address this data quality gap. DeepFundus achieves areas under the receiver operating characteristic curve (AUCs) above 0.9 for classifying overall quality, clinical quality factors, and structural quality on both the internal test and national validation datasets. Additionally, DeepFundus can be integrated into both model development and clinical application of AI diagnostics to significantly enhance model performance for detecting multiple retinopathies. DeepFundus can be used to construct a data-driven paradigm for improving the entire life cycle of medical AI practice.
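A hedged sketch of the integration idea, assuming a trained quality classifier with an adequate/inadequate output head (the label order, threshold, and loader format are assumptions): incoming fundus photographs are sorted by predicted quality before they reach a downstream retinopathy model.

```python
# Sketch only: keep images whose predicted probability of adequate quality
# exceeds a threshold before training or running a retinopathy model.
# Assumes class index 0 = "adequate" and a loader yielding (images, ids).
import torch

@torch.no_grad()
def filter_by_quality(quality_model, dataloader, threshold=0.5, device="cuda"):
    quality_model.eval().to(device)
    kept_ids = []
    for images, ids in dataloader:
        probs = torch.softmax(quality_model(images.to(device)), dim=1)
        adequate = probs[:, 0] >= threshold
        kept_ids.extend(i for i, ok in zip(ids.tolist(), adequate.tolist()) if ok)
    return kept_ids
```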


Subjects
Artificial Intelligence, Flow Cytometry, ROC Curve, Area Under Curve
5.
Br J Ophthalmol ; 2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36428006

ABSTRACT

AIMS: To characterise retinal microvascular alterations in the eyes of pregnant patients with anaemia (PA) and to compare the alterations with those in healthy controls (HC) using optical coherence tomography angiography (OCTA). METHODS: This nested case-control study included singleton PA and HC from the Eye Health in Pregnancy Study. Foveal avascular zone (FAZ) metrics, perfusion density (PD) in the superficial capillary plexus and deep capillary plexus, and flow deficit (FD) density in the choriocapillaris (CC) were quantified using FIJI software. Linear regressions were conducted to evaluate the differences in OCTA metrics between PA and HC. Subgroup analyses were performed based on comparisons between PA diagnosed in the early or late trimester and HC. RESULTS: In total, 99 eyes of 99 PA and 184 eyes of 184 HC were analysed. PA had a significantly reduced FAZ perimeter (β coefficient=-0.310, p<0.001) and area (β coefficient=-0.121, p=0.001) and increased circularity (β coefficient=0.037, p<0.001) compared with HC. Furthermore, higher PD in the central (β coefficient=0.327, p=0.001) and outer (β coefficient=0.349, p=0.007) regions was observed in PA. PA diagnosed in the first trimester had more extensive central FD (β coefficient=4.199, p=0.003) in the CC, indicating impaired perfusion in the CC. CONCLUSION: Anaemia during pregnancy was associated with macular microvascular abnormalities, which differed in PA as pregnancy progressed. The results suggest that quantitative OCTA metrics may be useful for risk evaluation before clinical diagnosis. TRIAL REGISTRATION NUMBERS: 2021KYPJ098 and ChiCTR2100049850.
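A minimal sketch of the regression comparison described above (a β coefficient for the group effect on one OCTA metric), assuming a pandas DataFrame with one row per eye and illustrative column names; this uses statsmodels and is not the study's analysis script.

```python
# Sketch only: group comparison of one OCTA metric (e.g. FAZ perimeter)
# between PA (group = 1) and HC (group = 0), adjusted for age.
# Column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

def compare_octa_metric(df: pd.DataFrame, metric: str = "faz_perimeter"):
    model = smf.ols(f"{metric} ~ group + age", data=df).fit()
    beta = model.params["group"]      # adjusted group difference (beta coefficient)
    pval = model.pvalues["group"]
    return beta, pval
```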

8.
Asia Pac J Ophthalmol (Phila) ; 10(3): 234-243, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-34224468

ABSTRACT

ABSTRACT: Teleophthalmology, a subfield of telemedicine, has recently been widely applied in ophthalmic disease management, accelerated by ubiquitous connectivity via mobile computing and communication applications. Teleophthalmology has strengths in overcoming geographic barriers and broadening access to medical resources, as a supplement to face-to-face clinical settings. The eye, especially the anterior segment, is one of the most researched superficial parts of the human body. Therefore, ophthalmic images, easily captured by portable devices, have been widely applied in teleophthalmology, boosted by advancements in software and hardware in recent years. This review surveys current teleophthalmology applications in the anterior segment and other ocular diseases from a temporal and spatial perspective, and summarizes common scenarios in teleophthalmology, including screening, diagnosis, treatment, monitoring, postoperative follow-up, and tele-education of patients and clinical practitioners. Further, challenges in the current application of teleophthalmology and its future development are discussed.


Subjects
Eye Diseases, Ophthalmology, Telemedicine, Eye, Eye Diseases/diagnosis, Eye Diseases/therapy, Humans, Mass Screening
9.
Invest Ophthalmol Vis Sci ; 62(2): 35, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33620373

ABSTRACT

Purpose: To investigate environmental factors associated with corneal morphologic changes. Methods: A cross-sectional study was conducted, which enrolled adults of the Han ethnicity aged 18 to 44 years from 20 cities. Cornea-related morphology was measured using an ocular anterior segment analysis system. The geographic indexes of each city and meteorological indexes of daily city-level data from the past 40 years (1980-2019) were obtained. Correlation analyses at the city level and multilevel model analyses at the eye level were performed. Results: In total, 114,067 eyes were used for analysis. In the correlation analyses at the city level, corneal thickness was positively correlated with the mean values of precipitation (highest r [correlation coefficient]: >0.700), temperature, and relative humidity (RH), as well as the amount of annual variation in precipitation (r: 0.548 to 0.721), and negatively correlated with the mean daily difference in temperature (DIF T), duration of sunshine, and variance in RH (r: -0.694 to -0.495). In contrast, the anterior chamber (AC) volume was negatively correlated with the mean values of precipitation, temperature, RH, and the amount of annual variation in precipitation (r: -0.672 to -0.448), and positively associated with the mean DIF T (r = 0.570) and variance in temperature (r = 0.507). In total, 19,988 eyes were analyzed at the eye level. After adjusting for age, precipitation was the major explanatory factor among the environmental factors for the variability in corneal thickness and AC volume. Conclusions: Individuals who were raised in warm and wet environments had thicker corneas and smaller AC volumes than those from cold and dry ambient environments. Our findings demonstrate the role of local environmental factors in cornea-related morphology.
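A hedged sketch of the two analysis levels mentioned above, a city-level correlation and an eye-level multilevel model, using SciPy and statsmodels; all column names and the random-effects structure are assumptions for illustration.

```python
# Sketch only: city-level Pearson correlation and an eye-level mixed-effects
# model with a random intercept per city. Column names are assumptions.
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

def city_level_correlation(city_df: pd.DataFrame):
    """One row per city: mean precipitation vs. mean central corneal thickness."""
    return pearsonr(city_df["mean_precipitation"], city_df["mean_cct"])

def eye_level_multilevel(eye_df: pd.DataFrame):
    """One row per eye: corneal thickness ~ precipitation + age, city as random intercept."""
    model = smf.mixedlm("cct ~ precipitation + age", data=eye_df,
                        groups=eye_df["city"])
    return model.fit()
```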


Subjects
Cornea/anatomy & histology, Corneal Diseases/diagnosis, Environmental Exposure, Adolescent, Adult, China/epidemiology, Corneal Diseases/epidemiology, Cross-Sectional Studies, Female, Humans, Incidence, Male, Young Adult
12.
Ann Transl Med ; 8(11): 708, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32617328

ABSTRACT

BACKGROUND: To investigate attitudes towards, and formal suggestions on, talent cultivation in the field of medical artificial intelligence (AI). METHODS: An electronic questionnaire was distributed to respondents in both medical-related and non-medical fields using the WenJuanXing web application via social media. The questionnaire was designed to collect: (I) demographic information; (II) perception of medical AI; (III) willingness to participate in medical AI-related teaching activities; (IV) teaching content of medical AI; (V) the role of medical AI teaching; (VI) future career planning. Respondents' anonymity was ensured. RESULTS: A total of 710 respondents provided valid answers to the questionnaire (57.75% from medical-related fields, 42.25% from non-medical fields). About 73.8% of respondents acquired related information from online networks and social platforms. More than half of the respondents had a basic perception of AI application scenarios and specialties in medicine and were willing to participate in related general science activities (conferences and lectures). Respondents from medical and healthcare-related fields, those with higher academic qualifications, and male respondents demonstrated significantly better understanding and stronger willingness (P<0.05). The majority agreed that medical AI courses should be offered as major electives (42.82%) during the undergraduate stage (89.58%), involving both medical and computer science content. An overwhelming majority of respondents (>80%) acknowledged the potential roles of medical AI teaching. Surgeon, ophthalmologist, physician and researcher were the top-tier considerations for an ideal career regardless of AI influence. Radiology and clinical laboratory subjects were more preferred considering the development of medical AI (P>0.05). CONCLUSIONS: The potential role of medical AI talent cultivation is widely acknowledged by the public. Medical-related professionals demonstrated a higher level of perception and stronger willingness to take part in medical AI educational events. Subjects merging with medical AI, such as radiology and clinical laboratory medicine, are preferred, with broad demand for talent and bright prospects.

13.
Transl Vis Sci Technol ; 9(2): 3, 2020 01.
Article in English | MEDLINE | ID: mdl-32518708

ABSTRACT

Purpose: To develop and evaluate a deep learning (DL) system for retinal hemorrhage (RH) screening using ultra-widefield fundus (UWF) images. Methods: A total of 16,827 UWF images from 11,339 individuals were used to develop the DL system. Three experienced retina specialists were recruited to grade UWF images independently. Three independent data sets from 3 different institutions were used to validate the effectiveness of the DL system. The data set from Zhongshan Ophthalmic Center (ZOC) was selected to compare the classification performance of the DL system and general ophthalmologists. A heatmap was generated to identify the most important area used by the DL model to classify RH and to discern whether the RH involved the anatomical macula. Results: In the three independent data sets, the DL model for detecting RH achieved areas under the curve of 0.997, 0.998, and 0.999, with sensitivities of 97.6%, 96.7%, and 98.9% and specificities of 98.0%, 98.7%, and 99.4%. In the ZOC data set, the sensitivity of the DL model was better than that of the general ophthalmologists, although the general ophthalmologists had slightly higher specificities. The heatmaps highlighted RH regions in all true-positive images, and the RH within the anatomical macula was determined based on heatmaps. Conclusions: Our DL system showed reliable performance for detecting RH and could be used to screen for RH-related diseases. Translational Relevance: As a screening tool, this automated system may aid early diagnosis and management of RH-related retinal and systemic diseases by allowing timely referral.
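Two of the reported evaluation steps, operating-point sensitivity/specificity and a heatmap-based check of macular involvement, could be sketched as below; the macula mask, activation threshold, and array conventions are assumptions, not the published implementation.

```python
# Sketch only: operating-point sensitivity/specificity for the RH classifier
# and a heatmap-based check of whether the detected RH involves the macula.
import numpy as np

def sensitivity_specificity(y_true, y_prob, threshold=0.5):
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return tp / (tp + fn), tn / (tn + fp)   # sensitivity, specificity

def macula_involved(heatmap, macula_mask, activation_threshold=0.5):
    """heatmap: HxW array in [0, 1]; macula_mask: boolean HxW array."""
    return bool(np.any(heatmap[macula_mask] >= activation_threshold))
```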


Subjects
Deep Learning, Retinal Diseases, Retinal Hemorrhage, Adolescent, Adult, Aged, Aged, 80 and over, Child, Child, Preschool, Female, Fundus Oculi, Humans, Male, Middle Aged, Retina/diagnostic imaging, Retinal Hemorrhage/diagnostic imaging, Young Adult
14.
Nat Biomed Eng ; 4(8): 767-777, 2020 08.
Article in English | MEDLINE | ID: mdl-32572198

ABSTRACT

The development of artificial intelligence algorithms typically demands abundant high-quality data. In medicine, the datasets that are required to train the algorithms are often collected for a single task, such as image-level classification. Here, we report a workflow for the segmentation of anatomical structures and the annotation of pathological features in slit-lamp images, and the use of the workflow to improve the performance of a deep-learning algorithm for diagnosing ophthalmic disorders. We used the workflow to generate 1,772 general classification labels, 13,404 segmented anatomical structures and 8,329 pathological features from 1,772 slit-lamp images. The algorithm that was trained with the image-level classification labels and the anatomical and pathological labels showed better diagnostic performance than the algorithm that was trained with only the image-level classification labels, performed similarly to three ophthalmologists across four clinically relevant retrospective scenarios, and correctly diagnosed most of the consensus outcomes of 615 clinical reports in prospective datasets for the same four scenarios. The dense anatomical annotation of medical images may improve their use for automated classification and detection tasks.
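A minimal sketch of the multi-level supervision idea, an image-level classification loss combined with segmentation losses for anatomical structures and pathological features; the loss types and weights are illustrative assumptions, not the published architecture.

```python
# Sketch only: a combined objective with an image-level diagnosis term plus
# segmentation terms for anatomical structures and pathological features.
import torch.nn as nn

class MultiTaskLoss(nn.Module):
    def __init__(self, w_cls=1.0, w_anat=1.0, w_path=1.0):
        super().__init__()
        self.cls_loss = nn.CrossEntropyLoss()    # image-level label
        self.anat_loss = nn.CrossEntropyLoss()   # per-pixel anatomical classes
        self.path_loss = nn.BCEWithLogitsLoss()  # per-pixel pathological features
        self.w = (w_cls, w_anat, w_path)

    def forward(self, cls_logits, anat_logits, path_logits,
                cls_target, anat_target, path_target):
        return (self.w[0] * self.cls_loss(cls_logits, cls_target)
                + self.w[1] * self.anat_loss(anat_logits, anat_target)
                + self.w[2] * self.path_loss(path_logits, path_target))
```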


Subjects
Deep Learning, Eye Diseases/diagnosis, Image Interpretation, Computer-Assisted/methods, Slit Lamp Microscopy/methods, Algorithms, Eye Diseases/pathology, Humans, Image Processing, Computer-Assisted, Prospective Studies, Retrospective Studies