1.
World J Diabetes ; 15(4): 697-711, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38680694

ABSTRACT

BACKGROUND: Numerous studies have reported the importance of age in the development of ocular conditions. Diabetes may have different associations with different stages of ocular conditions, and the duration of diabetes may affect the development of diabetic eye disease. While there is a dose-response relationship between the age at diagnosis of diabetes and the risk of cardiovascular disease and mortality, whether the age at diagnosis of diabetes is associated with incident ocular conditions remains to be explored. It is also unclear which types of diabetes are more predictive of ocular conditions. AIM: To examine associations between the age at diabetes diagnosis and the incidence of cataract, glaucoma, age-related macular degeneration (AMD), and visual acuity. METHODS: Our analysis used data from the UK Biobank. The cohort included 8709 diabetic participants and 17418 controls for the ocular condition analysis, and 6689 diabetic participants and 13378 controls for the vision analysis. Ocular diseases were identified using inpatient records until January 2021. Visual acuity was assessed using a chart. RESULTS: During a median follow-up of 11.0 years, 3874, 665, and 616 new cases of cataract, glaucoma, and AMD, respectively, were identified. The association between diabetes and incident ocular conditions was stronger when diabetes had been diagnosed at a younger age. Individuals with type 2 diabetes (T2D) diagnosed at < 45 years [HR (95%CI): 2.71 (1.49-4.93)], 45-49 years [2.57 (1.17-5.65)], 50-54 years [1.85 (1.13-3.04)], or 55-59 years of age [1.53 (1.00-2.34)] had a higher risk of AMD, independent of glycated haemoglobin. T2D diagnosed at < 45 years [HR (95%CI): 2.18 (1.71-2.79)], 45-49 years [1.54 (1.19-2.01)], 50-54 years [1.60 (1.31-1.96)], or 55-59 years of age [1.21 (1.02-1.43)] was associated with an increased cataract risk. T2D diagnosed at < 45 years of age only was associated with an increased risk of glaucoma [HR (95%CI): 1.76 (1.00-3.12)]. HRs (95%CIs) for AMD, cataract, and glaucoma associated with type 1 diabetes (T1D) were 4.12 (1.99-8.53), 2.95 (2.17-4.02), and 2.40 (1.09-5.31), respectively. In multivariable-adjusted analysis, individuals with T2D diagnosed at < 45 years of age [β (95%CI): 0.025 (0.009, 0.040)] had a larger increase in LogMAR. The β (95%CI) for LogMAR associated with T1D was 0.044 (0.014, 0.073). CONCLUSION: A younger age at the diagnosis of diabetes is associated with a greater relative risk of incident ocular diseases and greater vision loss.
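The abstract's analysis code is not provided, but the bracketed figures all follow one convention: a hazard ratio is the exponential of the Cox model's log-hazard coefficient, and its 95% CI comes from the coefficient's standard error. A minimal sketch of that conversion (the coefficient and standard error below are invented for illustration, not taken from the study):

```python
import math

def hazard_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Convert a Cox log-hazard coefficient and its standard error
    into a hazard ratio with a 95% confidence interval."""
    hr = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return hr, lower, upper

# Illustrative numbers only (not from the study): beta = 0.78, SE = 0.31
hr, lower, upper = hazard_ratio_ci(0.78, 0.31)
print(f"HR {hr:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

A CI whose lower bound touches 1.00, as in the glaucoma estimate above, corresponds to a coefficient whose lower confidence bound touches zero on the log scale.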

2.
Ophthalmol Retina ; 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38280426

ABSTRACT

OBJECTIVE: We aimed to develop a deep learning system capable of quickly and easily identifying subjects with cognitive impairment based on multimodal ocular images. DESIGN: Cross-sectional study. SUBJECTS: Participants of the Beijing Eye Study 2011 and patients attending the Beijing Tongren Eye Center and Beijing Tongren Hospital Physical Examination Center. METHODS: We trained and validated a deep learning algorithm to assess cognitive impairment using retrospectively collected data from the Beijing Eye Study 2011. Cognitive impairment was defined as a Mini-Mental State Examination score < 24. Based on fundus photographs and OCT images, we developed 5 models using the following sets of images: macula-centered fundus photographs, optic disc-centered fundus photographs, fundus photographs of both fields, OCT images, and fundus photographs of both fields with OCT (multimodal). The performance of the models was evaluated and compared in an external validation data set collected from patients attending the Beijing Tongren Eye Center and Beijing Tongren Hospital Physical Examination Center. MAIN OUTCOME MEASURES: Area under the curve (AUC). RESULTS: A total of 9424 retinal photographs and 4712 OCT images were used to develop the model. The external validation sets from each center included 1180 fundus photographs and 590 OCT images. Model comparison revealed that the multimodal model performed best, achieving an AUC of 0.820 in the internal validation set, 0.786 in external validation set 1, and 0.784 in external validation set 2. We evaluated the performance of the multimodal model across sexes and age groups and found no significant differences. Heatmap analysis showed that the multimodal model used signals around the optic disc in fundus photographs, and the retina and choroid around the macular and optic disc regions in OCT images, to identify participants with cognitive impairment. CONCLUSIONS: Fundus photographs and OCT can provide valuable information on cognitive function. Multimodal models provide richer information than single-modality models. Deep learning algorithms based on multimodal retinal images may be capable of screening for cognitive impairment. This technique has potential value for broader implementation in community-based screening or clinic settings. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
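The main outcome measure above, AUC, has a direct probabilistic reading: the chance that a randomly chosen impaired subject receives a higher model score than a randomly chosen unimpaired one. A self-contained sketch of that rank-based computation (illustrative only; the authors' evaluation pipeline is not described):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outscores a random
    negative, with ties counting half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 of 4 positive/negative pairs correctly ordered
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

An AUC of 0.820, as reported for the multimodal model internally, means the model correctly orders an impaired/unimpaired pair about 82% of the time.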

3.
Front Cell Dev Biol ; 10: 906042, 2022.
Article in English | MEDLINE | ID: mdl-35938155

ABSTRACT

Background: Cataract is the leading cause of blindness worldwide. To achieve large-scale cataract screening with strong performance, several studies have applied artificial intelligence (AI) to cataract detection based on fundus images. However, the fundus images they used were acquired under normal optical conditions, which is impractical because poor-quality fundus images arise from inappropriate optical conditions in real-world scenarios. Furthermore, such poor-quality images are easily mistaken for cataracts, because both show fuzzy imaging characteristics, which may degrade the performance of cataract detection. We therefore aimed to develop and validate an anti-interference AI model for rapid and efficient diagnosis based on fundus images. Materials and Methods: The datasets (including both cataract and noncataract labels) were derived from the Chinese PLA General Hospital. The anti-interference AI model consisted of two AI submodules: a quality recognition model for cataract labeling and a convolutional neural network-based model for cataract classification. The quality recognition model distinguished poor-quality images from normal-quality images and generated quality-related pseudo labels for the noncataract class. In this way, the original binary label (cataract vs. noncataract) was expanded into three categories (cataract, noncataract with normal-quality images, and noncataract with poor-quality images), which guided the model in distinguishing true cataracts from poor-quality fundus images that merely resemble them. In the cataract classification stage, the convolutional neural network-based model classified cataracts using the labels from the previous stage. The performance of the model was internally validated and externally tested in real-world settings; the evaluation indicators included area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE).
Results: In the internal and external validation, the anti-interference AI model showed robust performance in cataract diagnosis (three-class classification with AUCs > 91%, ACCs > 84%, SENs > 71%, and SPEs > 89%). Compared with a model trained on the binary labels, the anti-interference cataract model improved performance by 10%. Conclusion: We propose an efficient anti-interference AI model for cataract diagnosis that achieves accurate cataract screening even in the presence of poor-quality images and can help governments formulate more accurate aid policies.
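The evaluation indicators named above (ACC, SEN, SPE) all derive from a binary confusion matrix. A minimal sketch of those formulas, with invented counts chosen only to mirror the magnitudes reported (the study's actual counts are not given):

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sen = tp / (tp + fn)
    spe = tn / (tn + fp)
    return acc, sen, spe

# Illustrative counts only (not from the study)
acc, sen, spe = binary_metrics(tp=71, fp=11, tn=89, fn=29)
print(f"ACC {acc:.2f}, SEN {sen:.2f}, SPE {spe:.2f}")
```

The trade-off visible here is the one the paper targets: poor-quality images misread as cataracts inflate false positives, lowering specificity, which is why the pseudo-label stage separates them out.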

4.
Front Cell Dev Biol ; 9: 653692, 2021.
Article in English | MEDLINE | ID: mdl-33898450

ABSTRACT

This study aimed to develop an automated computer-based algorithm to estimate axial length and subfoveal choroidal thickness (SFCT) based on color fundus photographs. In the population-based Beijing Eye Study 2011, we took fundus photographs and measured SFCT by optical coherence tomography (OCT) and axial length by optical low-coherence reflectometry. Using 6394 color fundus images taken from 3468 participants, we trained and evaluated a deep-learning-based algorithm for estimation of axial length and SFCT. The algorithm had a mean absolute error (MAE) for estimating axial length and SFCT of 0.56 mm [95% confidence interval (CI): 0.53, 0.61] and 49.20 µm (95% CI: 45.83, 52.54), respectively. Estimated values and measured data showed coefficients of determination of r² = 0.59 (95% CI: 0.50, 0.65) for axial length and r² = 0.62 (95% CI: 0.57, 0.67) for SFCT. Bland-Altman plots revealed mean differences in axial length and SFCT of -0.16 mm (95% CI: -1.60, 1.27 mm) and -4.40 µm (95% CI: -131.8, 122.9 µm), respectively. For the estimation of axial length, heat map analysis showed that signals predominantly from the overall macular region, the foveal region, and the extrafoveal region were used in eyes with an axial length of < 22 mm, 22-26 mm, and > 26 mm, respectively. For the estimation of SFCT, the convolutional neural network (CNN) used mostly the central part of the macular region, the fovea or perifovea, independent of the SFCT. Our study shows that deep-learning-based algorithms may be helpful in estimating axial length and SFCT based on conventional color fundus images. They may be a further step toward the semiautomatic assessment of the eye.
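The error statistics reported above (MAE, Bland-Altman mean difference and limits of agreement) are simple functions of paired predicted and measured values. A hedged sketch with toy axial-length data (the study's data are not available; numbers below are invented):

```python
import statistics

def mae(pred, true):
    """Mean absolute error between predicted and measured values."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def bland_altman(pred, true):
    """Mean difference and 95% limits of agreement (mean ± 1.96 SD
    of the paired differences), as plotted in a Bland-Altman chart."""
    diffs = [p - t for p, t in zip(pred, true)]
    m = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return m, m - 1.96 * sd, m + 1.96 * sd

# Toy axial-length measurements in mm (not from the study)
measured = [23.1, 24.0, 22.5, 26.2]
predicted = [23.4, 23.7, 22.9, 26.0]
print(mae(predicted, measured))
print(bland_altman(predicted, measured))
```

The wide limits reported for SFCT (-131.8 to 122.9 µm around a mean difference of only -4.40 µm) illustrate why the limits of agreement, not the mean difference alone, determine whether such an estimator is clinically interchangeable with OCT.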
