Results 1 - 6 of 6
1.
Lancet Healthy Longev ; : 100593, 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39362226

ABSTRACT

BACKGROUND: Biological ageing markers can stratify morbidity and mortality risk more precisely than chronological age. In this study, we aimed to develop a novel deep-learning-based biological ageing marker (referred to as RetiPhenoAge hereafter) using retinal images and PhenoAge, a composite biomarker of phenotypic age.
METHODS: We used retinal photographs from the UK Biobank dataset to train a deep-learning algorithm to predict the composite PhenoAge score. RetiPhenoAge was developed with a multi-layer deep convolutional neural network architecture, with the aim of identifying retinal patterns and features associated with chronological age and with variations in blood biomarkers related to renal, immune, and liver function, inflammation, and energy metabolism. We determined the performance of this biological ageing marker for the prediction of morbidity (cardiovascular disease and cancer events) and mortality (all-cause, cardiovascular disease, and cancer) in three independent cohorts (UK Biobank, the Singapore Epidemiology of Eye Diseases [SEED], and the Age-Related Eye Disease Study [AREDS] from the USA). We also compared the performance of RetiPhenoAge with two other known ageing biomarkers (hand grip strength and adjusted leukocyte telomere length) and one lifestyle factor (physical activity) for risk stratification of mortality and morbidity. We explored the underlying biology of RetiPhenoAge by assessing its associations with different systemic characteristics (eg, diabetes or hypertension) and blood metabolite levels. We also did a genome-wide association study to identify genetic variants associated with RetiPhenoAge, followed by expression quantitative trait loci mapping, a gene-based analysis, and a gene-set analysis. Cox proportional hazards models were used to estimate the hazard ratios (HRs) and corresponding 95% CIs for the associations between RetiPhenoAge and the different morbidity and mortality outcomes.
FINDINGS: Retinal photographs for 34 061 UK Biobank participants were used to train the model, and data for 9429 participants from the SEED cohort and for 3986 participants from the AREDS cohort were included in the study. RetiPhenoAge was associated with all-cause mortality (HR 1·92 [95% CI 1·42-2·61]), cardiovascular disease mortality (1·97 [1·02-3·82]), cancer mortality (2·07 [1·29-3·33]), and cardiovascular disease events (1·70 [1·17-2·47]), independent of PhenoAge and other possible confounders. Similar findings were observed in the two independent cohorts (HR 1·67 [1·21-2·31] for cardiovascular disease mortality in SEED and 2·07 [1·10-3·92] in AREDS). RetiPhenoAge had stronger associations with mortality and morbidity than did hand grip strength, telomere length, and physical activity. We identified two genetic variants significantly associated with RetiPhenoAge (single nucleotide polymorphisms rs3791224 and rs8001273), which were linked to expression quantitative trait loci in various tissues, including the heart, kidneys, and brain.
INTERPRETATION: Our new deep-learning-derived biological ageing marker is a robust predictor of mortality and morbidity outcomes and could be used as a novel non-invasive method to measure ageing.
FUNDING: Singapore National Medical Research Council and Agency for Science, Technology and Research, Singapore.
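The hazard ratios above come from Cox proportional hazards models. A minimal sketch of how such HRs and 95% CIs could be estimated with the lifelines library, assuming a hypothetical cohort table with follow-up time, event indicator, RetiPhenoAge, and covariate columns (all column and file names are illustrative, not the study's variables):

```python
# Minimal sketch: hazard ratios for a biological ageing marker with a Cox
# proportional hazards model (lifelines). Column names are placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # assumed columns: time_years, died, retiphenoage, phenoage, age, sex

cph = CoxPHFitter()
cph.fit(
    df[["time_years", "died", "retiphenoage", "phenoage", "age", "sex"]],
    duration_col="time_years",
    event_col="died",
)
# The exp(coef) column is the hazard ratio per unit of RetiPhenoAge,
# reported with 95% confidence intervals.
cph.print_summary()
```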

2.
EPMA J ; 13(4): 547-560, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36505893

ABSTRACT

Aims: Computer-aided detection systems for retinal fluid could benefit disease monitoring and management for patients with chronic age-related macular degeneration (AMD) and diabetic retinopathy (DR), assisting disease prevention through early detection before progression to "wet AMD" or diabetic macular edema (DME) requiring treatment. We propose a proof-of-concept AI-based app to help predict fluid via a "fluid score", prevent fluid progression, and provide personalized, serial monitoring, in the context of predictive, preventive, and personalized medicine (PPPM) for patients at risk of retinal fluid complications.
Methods: The app comprises a convolutional neural network-Vision Transformer (CNN-ViT)-based segmentation deep learning (DL) network, trained on a small dataset of 100 training images (augmented to 992 images) from the Singapore Epidemiology of Eye Diseases (SEED) study, together with a CNN-based classification network, trained on 8497 images, that distinguishes fluid from non-fluid optical coherence tomography (OCT) scans. Both networks were validated on external datasets.
Results: Internal testing of our segmentation network produced an IoU score of 83.0% (95% CI = 76.7-89.3%) and a DICE score of 90.4% (86.3-94.4%); on external testing, we obtained an IoU score of 66.7% (63.5-70.0%) and a DICE score of 78.7% (76.0-81.4%). Internal testing of our classification network produced an area under the receiver operating characteristic curve (AUC) of 99.18% with a Youden index threshold of 0.3806; on external testing, we obtained an AUC of 94.55%, with an accuracy of 94.98% and an F1 score of 85.73% at the Youden index threshold.
Conclusion: We have developed an AI-based app with an alternative transformer-based segmentation algorithm that could potentially be applied in the clinic with a PPPM approach for serial monitoring, and could allow the generation of retrospective data for research into the varied use of treatments for AMD and DR. The modular design of the app can be scaled to add more iterative features based on user feedback for more efficient monitoring. Further study and scaling up of the algorithm's dataset could potentially boost its usability in a real-world clinical setting.
Supplementary information: The online version contains supplementary material available at 10.1007/s13167-022-00301-5.
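The segmentation results are reported as IoU and Dice overlap scores. A minimal sketch of how these two metrics can be computed from binary masks with NumPy (the masks below are random toy data, purely to show the call signature):

```python
# Minimal sketch: IoU and Dice overlap metrics for binary segmentation masks.
import numpy as np

def iou_and_dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7):
    """pred and truth are boolean masks of the same shape (e.g. fluid vs. background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = intersection / (union + eps)
    dice = 2 * intersection / (pred.sum() + truth.sum() + eps)
    return iou, dice

# Toy example with random masks, only to demonstrate usage.
rng = np.random.default_rng(0)
pred = rng.random((256, 256)) > 0.5
truth = rng.random((256, 256)) > 0.5
print(iou_and_dice(pred, truth))
```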

3.
Br J Ophthalmol ; 106(12): 1642-1647, 2022 12.
Article in English | MEDLINE | ID: mdl-34244208

ABSTRACT

BACKGROUND/AIMS: To evaluate the performance of deep learning (DL) algorithms for detection of the presence and extent of pterygium, based on colour anterior segment photographs (ASPs) taken with slit-lamp and hand-held cameras.
METHODS: Referable pterygium was defined as extension towards the cornea from the limbus of >2.50 mm or base width at the limbus of >5.00 mm. 2503 images from the Singapore Epidemiology of Eye Diseases (SEED) study were used as the development set. Algorithms were validated on an internal set from the SEED cohort (629 images; 55.3% pterygium, 8.4% referable pterygium), and tested on two external clinic-based sets (set 1: 2610 images; 2.8% pterygium, 0.7% referable pterygium, from slit-lamp ASPs; set 2: 3701 images; 2.5% pterygium, 0.9% referable pterygium, from hand-held ASPs).
RESULTS: The algorithm's area under the receiver operating characteristic curve (AUROC) for detection of any pterygium was 99.5% (sensitivity=98.6%; specificity=99.0%) in the internal test set, 99.1% (sensitivity=95.9%; specificity=98.5%) in external test set 1, and 99.7% (sensitivity=100.0%; specificity=88.3%) in external test set 2. For referable pterygium, the algorithm's AUROC was 98.5% (sensitivity=94.0%; specificity=95.3%) in the internal test set, 99.7% (sensitivity=87.2%; specificity=99.4%) in external test set 1, and 99.0% (sensitivity=94.3%; specificity=98.0%) in external test set 2.
CONCLUSION: DL algorithms based on ASPs can detect the presence of pterygium and referable-level pterygium with high sensitivity and specificity. These algorithms, particularly if used with a hand-held camera, may potentially be used as a simple screening tool for detection of referable pterygium. Further validation in community settings is warranted.
SYNOPSIS/PRECIS: DL algorithms based on ASPs can detect the presence of pterygium and referable-level pterygium with high accuracy, and may be used as a simple screening tool for the detection of referable pterygium in community screenings.
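The referability criterion above is a simple rule on two measurements. A minimal sketch that restates that definition as code, assuming the corneal extension and base width are supplied in millimetres by some upstream measurement step (the function name is illustrative):

```python
# Minimal sketch: the referable-pterygium rule from the abstract, restated as code.
# Measurements are assumed to come from an upstream grading/measurement step.
def is_referable_pterygium(extension_from_limbus_mm: float, base_width_at_limbus_mm: float) -> bool:
    """Referable if extension towards the cornea >2.50 mm or base width at the limbus >5.00 mm."""
    return extension_from_limbus_mm > 2.50 or base_width_at_limbus_mm > 5.00

print(is_referable_pterygium(2.8, 4.0))  # True: extension exceeds 2.50 mm
print(is_referable_pterygium(1.2, 4.9))  # False: neither threshold exceeded
```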


Subjects
Deep Learning; Eye Diseases; Pterygium; Humans; Pterygium/diagnosis; Algorithms; Area Under Curve; Eye Diseases/diagnosis
4.
Comput Biol Med ; 137: 104675, 2021 10.
Article in English | MEDLINE | ID: mdl-34425417

ABSTRACT

BACKGROUND: Granular dystrophy is the most common stromal dystrophy. To perform automated segmentation of corneal stromal deposits, we trained and tested a deep learning (DL) algorithm on images from patients with corneal stromal dystrophy and compared its performance with human segmentation.
METHODS: In this retrospective cross-sectional study, we included slit-lamp photographs taken with sclerotic scatter from patients with corneal stromal dystrophy, as well as real-world slit-lamp photographs taken with various techniques (diffuse illumination, tangential illumination, and sclerotic scatter). Our dataset included 1007 slit-lamp photographs with semi-automatically generated handcrafted masks of granular and linear lesions from patients with corneal stromal dystrophy (806 for the training set and 201 for the test set). For external testing (140 photographs), we applied the DL algorithm and compared automated with human segmentation. Segmentation performance was assessed with intersection over union (IoU), global accuracy, and the boundary F1 (BF) score.
RESULTS: In the internal test set of 201 photographs, IoU, global accuracy, and BF score (with 95% confidence intervals) were 0.81 (0.79-0.82), 0.99 (0.98-0.99), and 0.93 (0.92-0.95), respectively. In the heterogeneous external test set of 140 real-world photographs, the corresponding values were 0.64 (0.61-0.67), 0.95 (0.94-0.96), and 0.70 (0.64-0.76) for the DL algorithm, and 0.56 (0.51-0.61), 0.95 (0.94-0.96), and 0.70 (0.65-0.74) for the human rater.
CONCLUSIONS: We developed a DL algorithm for automated segmentation of corneal stromal deposits in patients with corneal stromal dystrophy. Segmentation of corneal deposits was accurate on the well-controlled dataset and showed reasonable performance in a real-world setting. We suggest that such automatic segmentation could help monitor the disease and evaluate possible new treatments.
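Performance here is summarised with global accuracy and a boundary F1 (BF) score alongside IoU. A minimal sketch of how global (pixel) accuracy and a distance-tolerance BF score could be computed for binary masks with NumPy and SciPy; this is a generic illustration of the metric, not the study's evaluation code, and the tolerance value and helper names are assumptions:

```python
# Minimal sketch: global (pixel) accuracy and a boundary F1 (BF) score for
# binary segmentation masks, matching predicted and true boundary pixels
# within a pixel-distance tolerance. Tolerance value is an assumption.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def boundary(mask: np.ndarray) -> np.ndarray:
    """Boundary pixels: the mask minus its erosion."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def global_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    return float((pred.astype(bool) == truth.astype(bool)).mean())

def bf_score(pred: np.ndarray, truth: np.ndarray, tolerance: int = 2) -> float:
    pred_b, truth_b = boundary(pred), boundary(truth)
    if not pred_b.any() or not truth_b.any():
        return 0.0
    # Distance from every pixel to the nearest boundary pixel of the other mask.
    dist_to_truth = distance_transform_edt(~truth_b)
    dist_to_pred = distance_transform_edt(~pred_b)
    precision = (dist_to_truth[pred_b] <= tolerance).mean()
    recall = (dist_to_pred[truth_b] <= tolerance).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```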


Subjects
Corneal Dystrophies, Hereditary; Deep Learning; Algorithms; Corneal Dystrophies, Hereditary/diagnostic imaging; Cross-Sectional Studies; Humans; Retrospective Studies
5.
Lancet Digit Health ; 2(10): e526-e536, 2020 10.
Article in English | MEDLINE | ID: mdl-33328047

ABSTRACT

BACKGROUND: The application of deep learning to retinal photographs has yielded promising results in predicting age, sex, blood pressure, and haematological parameters. However, the broader applicability of retinal photograph-based deep learning for predicting other systemic biomarkers, and the generalisability of this approach to various populations, remain unexplored.
METHODS: With use of 236 257 retinal photographs from seven diverse Asian and European cohorts (two health screening centres in South Korea, the Beijing Eye Study, three cohorts in the Singapore Epidemiology of Eye Diseases study, and the UK Biobank), we evaluated the capacity of 47 deep-learning algorithms to predict 47 systemic biomarkers as outcome variables, including demographic factors (age and sex); body composition measurements; blood pressure; haematological parameters; lipid profiles; biochemical measures; biomarkers related to liver function, thyroid function, kidney function, and inflammation; and diabetes. The standard VGG16 neural network architecture was adopted for model development.
FINDINGS: In addition to previously reported systemic biomarkers, we showed that body composition indices (muscle mass, height, and bodyweight) and creatinine could be quantified from retinal photographs. Body muscle mass could be predicted with an R2 of 0·52 (95% CI 0·51-0·53) in the internal test set, and of 0·33 (0·30-0·35) in one external test set with muscle mass measurements available. The R2 value for the prediction of height was 0·42 (0·40-0·43), of bodyweight was 0·36 (0·34-0·37), and of creatinine was 0·38 (0·37-0·40) in the internal test set. However, performance was poorer in the external test sets (lowest in the European cohort), with R2 values ranging between 0·08 and 0·28 for height, 0·04 and 0·19 for bodyweight, and 0·01 and 0·26 for creatinine. Of the 47 systemic biomarkers, 37 could not be predicted well from retinal photographs via deep learning (R2≤0·14 across all external test sets).
INTERPRETATION: Our work provides new insights into the potential use of retinal photographs to predict systemic biomarkers, including body composition indices and serum creatinine, using deep learning in populations with a similar ethnic background. Further evaluations are warranted to validate these findings and evaluate the clinical utility of these algorithms.
FUNDING: Agency for Science, Technology, and Research and National Medical Research Council, Singapore; Korea Institute for Advancement of Technology.
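The abstract states that the standard VGG16 architecture was adopted to regress continuous biomarkers from retinal photographs. A minimal Keras sketch of one way such a VGG16-based regression model could be set up; the input size, head layers, and training settings are illustrative assumptions, not the study's configuration:

```python
# Minimal sketch: VGG16 backbone with a regression head for predicting a
# continuous systemic biomarker (e.g. creatinine) from fundus photographs.
# Input size, head layers, and optimiser settings are assumptions.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = True  # fine-tune the whole backbone (a choice, not the paper's stated setting)

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),  # single continuous output: the biomarker value
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse", metrics=["mae"])

# train_ds / val_ds would be tf.data.Dataset objects yielding (image, biomarker_value) pairs.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```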


Subjects
Algorithms; Body Composition; Creatinine/blood; Deep Learning; Image Processing, Computer-Assisted/methods; Models, Biological; Retina; Area Under Curve; Asia; Beijing; Biomarkers; Ethnicity; Europe; Female; Humans; Male; Middle Aged; Muscles; Neural Networks, Computer; Photography; ROC Curve; Republic of Korea; Singapore; United Kingdom