Results 1 - 8 of 8
1.
Br J Ophthalmol ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955480

ABSTRACT

AIM: To investigate the association between floor area ratio (FAR), an indicator of the built environment, and myopia onset. METHODS: This prospective cohort study recruited 136 753 children aged 6-10 years from 108 schools in Shenzhen, China at baseline (2016-2017). Refractive power was measured with non-cycloplegic autorefraction over a 2-year follow-up period. FAR was objectively evaluated using geographical information system technology. Mixed-effects logistic regression models were constructed to examine the association of FAR with the 2-year cumulative incidence of myopia among individuals without baseline myopia; multiple linear regression models were used to examine its association with the 2-year cumulative incidence rate of myopia at each school. RESULTS: Of 101 624 non-myopic children (56.3% boys; mean (SE) age, 7.657±1.182 years) included in the study, 26 391 (26.0%) developed myopia after 2 years. In the individual-level analysis adjusting for demographic, socioeconomic and greenness factors, an IQR increase in FAR was associated with a decreased risk of 2-year myopia incidence (OR 0.898, 95% CI 0.866 to 0.932, p<0.001). Similar findings were observed in the analysis additionally adjusted for genetic and behavioural factors (OR 0.821, 95% CI 0.766 to 0.880, p<0.001). In the school-level analysis, an IQR increase in FAR was associated with a 2.0% reduction in the 2-year incidence rate of myopia (95% CI 1.3% to 2.6%, p<0.001). CONCLUSIONS: Exposure to higher FAR was associated with decreased myopia incidence, providing insights into myopia prevention through school built environments in China.
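The individual-level effect sizes above are expressed per IQR increase in FAR. As a minimal sketch (not the authors' code), converting a fitted log-odds coefficient and its standard error into an odds ratio with a Wald 95% CI looks like this; the coefficient and standard error below are assumed values chosen so that the output roughly reproduces the reported OR of 0.898 (95% CI 0.866 to 0.932):

```python
import numpy as np

def or_per_iqr(beta, se, iqr=1.0, z=1.96):
    """Turn a logistic-regression coefficient (log-odds per unit of
    exposure) into an odds ratio per IQR increase, with a Wald CI."""
    b, s = beta * iqr, se * iqr                      # rescale to the IQR
    return tuple(np.exp([b, b - z * s, b + z * s]))  # (OR, lower, upper)

# Assumed inputs (illustrative, not the paper's fitted model):
odds, lower, upper = or_per_iqr(beta=-0.1076, se=0.0187)
```

With those assumed inputs the function returns approximately (0.898, 0.866, 0.932), matching the reported individual-level estimate.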

2.
Br J Ophthalmol ; 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38839251

ABSTRACT

BACKGROUND/AIMS: The aim of this study was to develop and evaluate digital ray, a style-transfer generative adversarial network (GAN) approach based on preoperative and postoperative image pairs, to enhance cataractous fundus images for improved retinopathy detection. METHODS: For eligible cataract patients, preoperative and postoperative colour fundus photographs (CFP) and ultra-wide field (UWF) images were captured. Then, both the original CycleGAN and a modified CycleGAN (C2ycleGAN) framework were adopted for image generation and quantitatively compared using Fréchet Inception Distance (FID) and Kernel Inception Distance (KID). Additionally, CFP and UWF images from another cataract cohort were used to test model performance. Different panels of ophthalmologists evaluated the quality, authenticity and diagnostic efficacy of the generated images. RESULTS: A total of 959 CFP and 1009 UWF image pairs were included in model development. FID and KID indicated that images generated by C2ycleGAN presented significantly improved quality. Based on ophthalmologists' average ratings, the percentages of inadequate-quality images decreased from 32% to 18.8% for CFP, and from 18.7% to 14.7% for UWF. Only 24.8% and 13.8% of generated CFP and UWF images could be recognised as synthetic. The accuracy of retinopathy detection significantly increased from 78% to 91% for CFP and from 91% to 93% for UWF. For retinopathy subtype diagnosis, the accuracies also increased from 87%-94% to 91%-100% for CFP and from 87%-95% to 93%-97% for UWF. CONCLUSION: Digital ray could generate realistic postoperative CFP and UWF images with enhanced quality and accuracy for overall detection and subtype diagnosis of retinopathies, especially for CFP. TRIAL REGISTRATION NUMBER: This study was registered with ClinicalTrials.gov (NCT05491798).
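The Fréchet Inception Distance used to compare CycleGAN and C2ycleGAN treats the Inception features of real and generated images as two Gaussians and measures the distance between them. A generic sketch of the standard formula (not this paper's implementation; feature extraction is omitted):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, cov1, mu2, cov2):
    """FID between Gaussians N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2))."""
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):   # discard numerical imaginary noise
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))
```

Identical feature distributions give a distance of 0; lower scores mean the generated images are statistically closer to the real ones, which is the sense in which C2ycleGAN "improved quality" here.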

3.
Nat Commun ; 15(1): 3650, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38688925

ABSTRACT

Utilization of digital technologies for cataract screening in primary care is a potential solution for addressing the dilemma between the growing aging population and unequally distributed resources. Here, we propose a digital technology-driven hierarchical screening (DH screening) pattern implemented in China to promote the equity and accessibility of healthcare. It consists of home-based mobile artificial intelligence (AI) screening, community-based AI diagnosis, and referral to hospitals. We utilize decision-analytic Markov models to evaluate the cost-effectiveness and cost-utility of different cataract screening strategies (no screening, telescreening, AI screening and DH screening). A simulated cohort of 100,000 individuals starting at age 50 is followed through 30 one-year Markov cycles. The primary outcomes are the incremental cost-effectiveness ratio and incremental cost-utility ratio. The results show that DH screening dominates no screening, telescreening and AI screening in urban and rural China. Annual DH screening emerges as the most economically effective strategy with 341 (338 to 344) and 1326 (1312 to 1340) years of blindness avoided compared with telescreening, and 37 (35 to 39) and 140 (131 to 148) years compared with AI screening in urban and rural settings, respectively. The findings remain robust across all sensitivity analyses conducted. Here, we report that DH screening is cost-effective in urban and rural China, and annual screening proves to be the most cost-effective option, providing an economic rationale for policymakers promoting public eye health in low- and middle-income countries.
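The decision-analytic machinery here is a standard cohort Markov model: each cycle, the cohort redistributes across health states via a transition matrix while accruing discounted costs and utilities, and strategies are compared by their incremental cost-effectiveness ratio. A minimal sketch with hypothetical states and numbers (the paper's actual states, probabilities, and costs are not reproduced here):

```python
import numpy as np

def run_markov(trans, costs, utilities, start, cycles=30, disc=0.03):
    """Cohort Markov simulation; returns discounted (total cost, total QALYs)."""
    state = np.asarray(start, dtype=float)
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + disc) ** t          # discount factor for cycle t
        total_cost += d * (state @ costs)
        total_qaly += d * (state @ utilities)
        state = state @ trans                # redistribute the cohort
    return total_cost, total_qaly

def icer(cost1, eff1, cost0, eff0):
    """Incremental cost-effectiveness ratio of strategy 1 vs strategy 0."""
    return (cost1 - cost0) / (eff1 - eff0)

# Hypothetical 3-state model: well / visually impaired / dead (numbers illustrative)
P = np.array([[0.92, 0.05, 0.03],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])
cost, qalys = run_markov(P, costs=np.array([15.0, 300.0, 0.0]),
                         utilities=np.array([0.95, 0.60, 0.0]),
                         start=np.array([1.0, 0.0, 0.0]))
```

A strategy "dominates" another, as DH screening does above, when it is both cheaper and more effective, making the ICER unnecessary.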


Subjects
Cataract, Cost-Benefit Analysis, Mass Screening, Humans, China/epidemiology, Cataract/economics, Cataract/diagnosis, Cataract/epidemiology, Middle Aged, Mass Screening/economics, Mass Screening/methods, Male, Digital Technology/economics, Female, Markov Chains, Aged, Artificial Intelligence, Telemedicine/economics, Telemedicine/methods
4.
Nat Commun ; 14(1): 7126, 2023 11 06.
Article in English | MEDLINE | ID: mdl-37932255

ABSTRACT

Age is closely related to human health and disease risks. However, chronologically defined age often disagrees with biological age, primarily due to genetic and environmental variables. Identifying effective indicators of biological age for clinical practice and self-monitoring is important but currently lacking. The human lens accumulates age-related changes that are amenable to rapid and objective assessment. Here, using lens photographs of individuals aged 20 to 96 years, we develop LensAge to reflect lens aging via deep learning. LensAge is closely correlated with the chronological age of relatively healthy individuals (R2 > 0.80, mean absolute errors of 4.25 to 4.82 years). Among the general population, we calculate the LensAge index by contrasting LensAge and chronological age to reflect the aging rate relative to peers. The LensAge index effectively reveals the risks of age-related eye and systemic disease occurrence, as well as all-cause mortality. It outperforms chronological age in reflecting age-related disease risks (p < 0.001). More importantly, our models can conveniently work based on smartphone photographs, suggesting suitability for routine self-examination of aging status. Overall, our study demonstrates that the LensAge index may serve as an ideal quantitative indicator for clinically assessing and self-monitoring biological age in humans.
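The LensAge index is derived by contrasting the model's predicted lens age with chronological age; its exact construction is in the paper. A common hypothetical formulation for such "age gap" indices is the residual after removing the linear trend on chronological age, so that positive values mean aging faster than peers:

```python
import numpy as np

def age_gap_index(predicted_age, chrono_age):
    """Residual of predicted biological age after removing the linear
    peer trend on chronological age (hypothetical construction; the
    paper's LensAge index may be defined differently)."""
    pred = np.asarray(predicted_age, dtype=float)
    chrono = np.asarray(chrono_age, dtype=float)
    slope, intercept = np.polyfit(chrono, pred, 1)   # expected age among peers
    return pred - (slope * chrono + intercept)
```

Regressing out the trend, rather than taking the raw difference, removes any systematic over- or under-prediction of the model across the age range.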


Subjects
Deep Learning, Crystalline Lens, Humans, Preschool Child, Aging/genetics
5.
NPJ Digit Med ; 6(1): 192, 2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37845275

ABSTRACT

Image quality variation is a prominent cause of performance degradation for intelligent disease diagnostic models in clinical applications. Quality issues are particularly pronounced in infantile fundus photography due to poor patient cooperation, which poses a high risk of misdiagnosis. Here, we developed a deep learning-based image quality assessment and enhancement system (DeepQuality) for infantile fundus images to improve infant retinopathy screening. DeepQuality can accurately detect various quality defects concerning integrity, illumination, and clarity, with area under the curve (AUC) values ranging from 0.933 to 0.995. It can also comprehensively score the overall quality of each fundus photograph. By analyzing 2,015,758 infantile fundus photographs from real-world settings using DeepQuality, we found that 58.3% of them had varying degrees of quality defects, and large variations were observed among different regions and categories of hospitals. Additionally, DeepQuality provides quality enhancement based on the results of quality assessment. After quality enhancement, the performance of retinopathy of prematurity (ROP) diagnosis by clinicians was significantly improved. Moreover, the integration of DeepQuality and AI diagnostic models can effectively improve model performance for detecting ROP. This study may be an important reference for the future development of other image-based intelligent disease screening systems.
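The AUC values reported for DeepQuality's defect detectors have a simple probabilistic reading: the chance that a randomly chosen defective image is scored higher than a randomly chosen clean one. A minimal sketch of that rank-based computation (generic, not DeepQuality's code):

```python
import numpy as np

def auc_score(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a random
    positive example outranks a random negative one (ties count 0.5)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.5 is chance-level ranking; the 0.933-0.995 range above means the detectors almost always rank a defective image above a clean one.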

6.
STAR Protoc ; 4(4): 102565, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-37733597

ABSTRACT

Data quality issues have been acknowledged as one of the greatest obstacles in medical artificial intelligence research. Here, we present DeepFundus, which employs deep learning techniques to perform multidimensional classification of fundus image quality and provide real-time guidance for on-site image acquisition. We describe steps for data preparation, model training, model inference, model evaluation, and the visualization of results using heatmaps. This protocol can be implemented in Python using either the suggested dataset or a customized dataset. For complete details on the use and execution of this protocol, please refer to Liu et al. [1].


Subjects
Biomedical Research, Deep Learning, Artificial Intelligence
8.
Cell Rep Med ; 4(2): 100912, 2023 02 21.
Article in English | MEDLINE | ID: mdl-36669488

ABSTRACT

Medical artificial intelligence (AI) has been moving from the research phase to clinical implementation. However, most AI-based models are mainly built using high-quality images preprocessed in the laboratory, which is not representative of real-world settings. This dataset bias is a major driver of AI system dysfunction. Inspired by the design of flow cytometry, DeepFundus, a deep-learning-based fundus image classifier, is developed to provide automated and multidimensional image sorting to address this data quality gap. DeepFundus achieves areas under the receiver operating characteristic curves (AUCs) over 0.9 in image classification concerning overall quality, clinical quality factors, and structural quality analysis on both the internal test and national validation datasets. Additionally, DeepFundus can be integrated into both model development and clinical application of AI diagnostics to significantly enhance model performance for detecting multiple retinopathies. DeepFundus can be used to construct a data-driven paradigm for improving the entire life cycle of medical AI practice.


Subjects
Artificial Intelligence, Flow Cytometry, ROC Curve, Area Under Curve