Results 1 - 6 of 6
1.
Nat Med ; 30(2): 584-594, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38177850

ABSTRACT

Diabetic retinopathy (DR) is the leading cause of preventable blindness worldwide. The risk of DR progression is highly variable among individuals, making it difficult to predict risk and personalize screening intervals. We developed and validated a deep learning system (DeepDR Plus) to predict time to DR progression within 5 years solely from fundus images. First, we used 717,308 fundus images from 179,327 participants with diabetes to pretrain the system. Subsequently, we trained and validated the system with a multiethnic dataset comprising 118,868 images from 29,868 participants with diabetes. For predicting time to DR progression, the system achieved concordance indexes of 0.754-0.846 and integrated Brier scores of 0.153-0.241 for all times up to 5 years. Furthermore, we validated the system in real-world cohorts of participants with diabetes. Integration into the clinical workflow could potentially extend the mean screening interval from 12 months to 31.97 months; the percentages of participants recommended for screening at 1, 2, 3, 4 and 5 years were 30.62%, 20.00%, 19.63%, 11.85% and 17.89%, respectively, while the rate of delayed detection of progression to vision-threatening DR was only 0.18%. Altogether, the DeepDR Plus system could predict individualized risk and time to DR progression over 5 years, potentially allowing personalized screening intervals.
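
For readers unfamiliar with the survival metrics above, the following is a minimal sketch of how a concordance index for right-censored time-to-progression data can be computed. The function and toy cohort are purely illustrative and are not taken from the DeepDR Plus paper.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored time-to-event data.

    times: observed time to DR progression or censoring (illustrative units)
    events: 1 if progression was observed, 0 if censored
    risk_scores: model output; a higher score should mean earlier progression
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    risk = np.asarray(risk_scores, dtype=float)

    concordant, tied, comparable = 0.0, 0.0, 0
    for i in range(len(times)):
        if events[i] != 1:
            continue  # only observed progressions can anchor a comparable pair
        for j in range(len(times)):
            # j is comparable if still event-free at the time i progresses
            if times[j] > times[i] or (times[j] == times[i] and events[j] == 0):
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# toy cohort: the model ranks earlier progressors as higher risk
times = [1.0, 2.5, 3.0, 4.0, 4.5, 5.0]
events = [1, 1, 0, 1, 0, 0]
risk = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]
print(concordance_index(times, events, risk))  # 1.0: this toy ranking is perfect
```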


Subjects
Deep Learning , Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnosis , Blindness
2.
Commun Med (Lond) ; 3(1): 184, 2023 Dec 16.
Article in English | MEDLINE | ID: mdl-38104223

ABSTRACT

BACKGROUND: Cataract diagnosis typically requires in-person evaluation by an ophthalmologist. However, color fundus photography (CFP) is widely performed outside ophthalmology clinics and could be exploited to increase the accessibility of cataract screening through automated detection. METHODS: DeepOpacityNet was developed to detect cataracts from CFPs and to highlight the CFP features most relevant to cataracts. We used 17,514 CFPs from 2573 participants curated from the Age-Related Eye Disease Study 2 (AREDS2) dataset, of which 8681 CFPs were labeled with cataracts. Ground-truth labels were transferred from slit-lamp examination for nuclear cataracts and from reading-center grading of anterior segment photographs for cortical and posterior subcapsular cataracts. DeepOpacityNet was internally validated on an independent test set (20%), compared against three ophthalmologists on a subset of the test set (100 CFPs), externally validated on three datasets obtained from the Singapore Epidemiology of Eye Diseases (SEED) study, and visualized to highlight important features. RESULTS: Internally, DeepOpacityNet achieved an accuracy of 0.66 (95% confidence interval (CI): 0.64-0.68) and an area under the curve (AUC) of 0.72 (95% CI: 0.70-0.74), superior to other state-of-the-art methods. Against the ophthalmologists, DeepOpacityNet achieved an accuracy of 0.75, compared to 0.67 for the best-performing ophthalmologist. Externally, DeepOpacityNet achieved AUC scores of 0.86, 0.88, and 0.89 on the SEED datasets, demonstrating the generalizability of the proposed method. Visualizations show that visible blood vessels could be characteristic of cataract absence, while blurred regions could be characteristic of cataract presence. CONCLUSIONS: DeepOpacityNet could detect cataracts from CFPs in AREDS2 with performance superior to that of ophthalmologists and generate interpretable results. The code and models are available at https://github.com/ncbi/DeepOpacityNet ( https://doi.org/10.5281/zenodo.10127002 ).


Cataracts are cloudy areas in the eye's lens that impair sight. Diagnosis typically requires in-person evaluation by an ophthalmologist. In this study, a computer program was developed to identify cataracts from specialist photographs of the eye. The program identified cataracts more accurately than ophthalmologists did, and could be introduced to improve the diagnosis of cataracts in eye clinics.
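
As a side note on the metrics reported above, here is a minimal sketch, assuming scikit-learn and a percentile bootstrap, of how an accuracy or AUC with a 95% confidence interval can be estimated. The data and helper function are invented for illustration and are not the authors' pipeline.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_ci(y_true, y_score, metric, n_boot=2000, alpha=0.05):
    """Point estimate plus percentile-bootstrap CI for a classification metric."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), size=len(y_true))
        if len(np.unique(y_true[idx])) < 2:
            continue  # AUC is undefined when a resample contains a single class
        stats.append(metric(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return metric(y_true, y_score), lo, hi

# invented stand-ins for per-photograph cataract labels and model probabilities
y_true = rng.integers(0, 2, size=500)
y_prob = np.clip(0.35 * y_true + rng.normal(0.4, 0.25, size=500), 0.0, 1.0)

auc, auc_lo, auc_hi = bootstrap_ci(y_true, y_prob, roc_auc_score)
acc, acc_lo, acc_hi = bootstrap_ci(y_true, (y_prob >= 0.5).astype(int), accuracy_score)
print(f"AUC {auc:.2f} (95% CI {auc_lo:.2f}-{auc_hi:.2f})")
print(f"accuracy {acc:.2f} (95% CI {acc_lo:.2f}-{acc_hi:.2f})")
```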

3.
Taiwan J Ophthalmol ; 13(2): 168-183, 2023.
Article in English | MEDLINE | ID: mdl-37484617

ABSTRACT

Artificial intelligence (AI) has been widely used in ophthalmology for disease detection and for monitoring progression. In glaucoma research, AI has been used to understand progression patterns and to forecast disease trajectory from clinical and imaging data, employing techniques such as machine learning, natural language processing, and deep learning. Results from studies using AI to forecast glaucoma progression, however, vary considerably due to dataset constraints, the lack of a standard definition of progression, and differences in methodology and approach. While glaucoma detection and screening have been the focus of most research published in the last few years, this narrative review focuses on studies that specifically address glaucoma progression. We also summarize the current evidence, highlight studies with translational potential, and offer suggestions for improving future research on glaucoma progression.

4.
EPMA J ; 13(4): 547-560, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36505893

ABSTRACT

Aims: Computer-aided detection systems for retinal fluid could benefit disease monitoring and management in patients with chronic age-related macular degeneration (AMD) and diabetic retinopathy (DR), assisting prevention via early detection before the disease progresses to a "wet AMD" pathology or diabetic macular edema (DME) requiring treatment. We propose a proof-of-concept AI-based app that predicts fluid via a "fluid score", helps prevent fluid progression, and provides personalized, serial monitoring, in the context of predictive, preventive, and personalized medicine (PPPM) for patients at risk of retinal fluid complications. Methods: The app comprises a convolutional neural network-Vision Transformer (CNN-ViT)-based segmentation deep learning (DL) network, trained on a small dataset of 100 training images (augmented to 992 images) from the Singapore Epidemiology of Eye Diseases (SEED) study, together with a CNN-based classification network, trained on 8497 images, that distinguishes fluid from non-fluid optical coherence tomography (OCT) scans. Both networks were validated on external datasets. Results: Internal testing of our segmentation network produced an IoU score of 83.0% (95% CI = 76.7-89.3%) and a DICE score of 90.4% (86.3-94.4%); on external testing, we obtained an IoU score of 66.7% (63.5-70.0%) and a DICE score of 78.7% (76.0-81.4%). Internal testing of our classification network produced an area under the receiver operating characteristic curve (AUC) of 99.18% with a Youden index threshold of 0.3806; on external testing, we obtained an AUC of 94.55%, with an accuracy of 94.98% and an F1 score of 85.73% at the Youden index threshold. Conclusion: We have developed an AI-based app with an alternative transformer-based segmentation algorithm that could potentially be applied in the clinic with a PPPM approach for serial monitoring, and that could generate retrospective data for research into the varied use of treatments for AMD and DR. The app's modular design can be scaled to add iterative features based on user feedback for more efficient monitoring. Further study and scaling up of the algorithm's dataset could boost its usability in real-world clinical settings. Supplementary information: The online version contains supplementary material available at 10.1007/s13167-022-00301-5.
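
For context on the segmentation and classification metrics above, here is a minimal sketch of IoU, DICE, and the Youden index threshold, using NumPy and scikit-learn on toy data; none of it comes from the paper's codebase.

```python
import numpy as np
from sklearn.metrics import roc_curve

def iou_dice(pred_mask, true_mask):
    """IoU and DICE for binary segmentation masks (e.g. fluid vs. background)."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    total = pred.sum() + true.sum()
    iou = inter / union if union else 1.0  # two empty masks agree perfectly
    dice = 2 * inter / total if total else 1.0
    return iou, dice

def youden_threshold(y_true, y_score):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]

# toy 4x4 masks: the prediction covers all 4 true fluid pixels plus 2 extra
true = np.zeros((4, 4), dtype=int); true[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1
print(iou_dice(pred, true))  # IoU = 4/6 ~ 0.667, DICE = 8/10 = 0.8

# toy classifier scores: 0.6 separates the two classes perfectly (J = 1)
print(youden_threshold(np.array([0, 0, 1, 1]), np.array([0.1, 0.3, 0.6, 0.8])))
```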

5.
JMIR Med Inform ; 9(8): e25165, 2021 Aug 17.
Article in English | MEDLINE | ID: mdl-34402800

ABSTRACT

BACKGROUND: Deep learning algorithms have been built to detect systemic and eye diseases from fundus photographs. The retina possesses features that can be affected by gender differences, and the extent to which these features are captured in a photograph depends on the retinal image field. OBJECTIVE: We aimed to compare the performance of deep learning algorithms in predicting gender from different fields of fundus photographs (optic disc-centered, macula-centered, and peripheral fields). METHODS: This retrospective cross-sectional study included 172,170 fundus photographs of 9956 adults aged ≥40 years from the Singapore Epidemiology of Eye Diseases Study. Optic disc-centered, macula-centered, and peripheral field fundus images served as input to a deep learning model for gender prediction. Performance was estimated at the individual level and at the image level, and receiver operating characteristic curves for binary classification were calculated. RESULTS: The deep learning algorithms predicted gender with an area under the receiver operating characteristic curve (AUC) of 0.94 at the individual level and 0.87 at the image level. Across the three image field types, performance was best with optic disc-centered field images (younger subgroups: AUC=0.91; older subgroups: AUC=0.86) and lowest with peripheral field images (younger subgroups: AUC=0.85; older subgroups: AUC=0.76). Across the three ethnic subgroups, performance on optic disc-centered images was lowest in the Indian subgroup (AUC=0.88) compared to the Malay (AUC=0.91) and Chinese (AUC=0.91) subgroups. Image-level performance in gender prediction was better in younger subgroups (aged <65 years; AUC=0.89) than in older subgroups (aged ≥65 years; AUC=0.82). CONCLUSIONS: We confirmed that gender in an Asian population can be predicted from fundus photographs using deep learning, and that performance differs by fundus photograph field, age subgroup, and ethnic group. Our work furthers the understanding of deep learning models for the prediction of gender-related diseases, though further validation of our findings is still needed.
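
The abstract distinguishes individual-level from image-level AUC. One common way to obtain an individual-level score is to average per-image probabilities across each participant's photographs before computing the AUC; the sketch below illustrates that aggregation on invented data and is an assumption, since the paper's exact aggregation rule is not stated here.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# invented per-image predictions: several fundus photographs per participant
df = pd.DataFrame({
    "participant_id": [1, 1, 1, 2, 2, 3, 3, 3, 4, 4],
    "is_male":        [1, 1, 1, 0, 0, 1, 1, 1, 0, 0],  # constant per participant
    "p_male":         [0.80, 0.60, 0.90, 0.30, 0.45, 0.70, 0.48, 0.85, 0.20, 0.50],
})

# image-level AUC: every photograph counts as one sample
image_auc = roc_auc_score(df["is_male"], df["p_male"])

# individual-level AUC: average each participant's per-image probabilities first
per_person = df.groupby("participant_id").agg(
    is_male=("is_male", "first"), p_male=("p_male", "mean"))
person_auc = roc_auc_score(per_person["is_male"], per_person["p_male"])

print(f"image-level AUC {image_auc:.2f}, individual-level AUC {person_auc:.2f}")
```

Averaging over several views of the same retina tends to smooth out per-image noise, which is one plausible reason an individual-level AUC can exceed the image-level AUC, as reported above.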

6.
J Pharm Sci ; 102(11): 4100-8, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24027112

ABSTRACT

Microneedles are fast being recognized as a useful alternative to injections for delivering drugs, vaccines, and cosmetics transdermally. Owing to the skin's inherent elasticity, microneedles require an optimal geometry for skin penetration, and in vitro studies that use rat skin to model in vivo microneedle penetration require substrates whose mechanical properties mimic the subcutaneous tissues of human skin. We tested the effect of these two parameters on microneedle penetration. Geometry, in terms of center-to-center needle spacing, was investigated for its effect on skin penetration when the skin was placed on substrates of different hardness. Hard (clay) and soft (polydimethylsiloxane, PDMS) substrates were placed underneath rat skin and full-thickness pig skin as animal models, with human skin used as the reference. Percentage penetration increased with increasing needle spacing. Microneedle penetration with PDMS supporting stretched rat skin correlated better with penetration of full-thickness human skin, while penetration was higher when clay was used as the substrate. We identify optimal geometries for efficient penetration, together with a recommended substrate that better mimics the mechanical properties of human subcutaneous tissues, for microneedles fabricated from poly(ethylene glycol)-based materials.


Subjects
Drug Delivery Systems/instrumentation , Microinjections/instrumentation , Needles , Skin/metabolism , Administration, Cutaneous , Anesthetics, Local/administration & dosage , Animals , Dimethylpolysiloxanes/chemistry , Equipment Design , Humans , Lidocaine/administration & dosage , Polyethylene Glycols/chemistry , Rats , Skin Absorption , Swine