Results 1 - 14 of 14
1.
STAR Protoc ; 5(3): 103134, 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38900632

ABSTRACT

Fundus fluorescein angiography (FFA) examinations are widely used to evaluate fundus disease and guide further treatment. Here, we present a protocol for performing deep learning-based FFA image analytics with classification and segmentation tasks. We describe steps for data preparation, model implementation, statistical analysis, and heatmap visualization. The protocol is implemented in Python, is applicable to customized data, and covers the whole process from diagnosis to treatment suggestion for ischemic retinal diseases. For complete details on the use and execution of this protocol, please refer to Zhao et al.1.
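
As a rough illustration of what such a pipeline looks like in Python, the sketch below runs a classifier and a segmentation network over one FFA image. The model files, input size, and class semantics are hypothetical placeholders, not the protocol's actual artifacts:

```python
# Minimal sketch of a classification + segmentation pass over one FFA image.
# Model files, labels, and input size are illustrative assumptions.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((512, 512)),                 # assumed input size
    transforms.Grayscale(num_output_channels=3),   # FFA frames are grayscale
    transforms.ToTensor(),
])

image = preprocess(Image.open("ffa_example.png")).unsqueeze(0)  # 1x3x512x512

classifier = torch.jit.load("ffa_classifier.pt").eval()  # hypothetical TorchScript file
segmenter = torch.jit.load("ffa_segmenter.pt").eval()    # hypothetical TorchScript file

with torch.no_grad():
    probs = torch.softmax(classifier(image), dim=1)  # e.g., diagnosis probabilities
    mask = segmenter(image).sigmoid() > 0.5          # binary non-perfusion mask

print("class probabilities:", probs.squeeze().tolist())
print("segmented pixels:", int(mask.sum()))
```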

2.
Nat Commun ; 14(1): 7126, 2023 11 06.
Article in English | MEDLINE | ID: mdl-37932255

ABSTRACT

Age is closely related to human health and disease risks. However, chronologically defined age often disagrees with biological age, primarily due to genetic and environmental variables. Effective indicators of biological age for clinical practice and self-monitoring are important but currently lacking. The human lens accumulates age-related changes that are amenable to rapid and objective assessment. Here, using lens photographs from individuals aged 20 to 96 years, we develop LensAge to reflect lens aging via deep learning. LensAge is closely correlated with the chronological age of relatively healthy individuals (R2 > 0.80; mean absolute errors of 4.25 to 4.82 years). For the general population, we calculate the LensAge index by contrasting LensAge with chronological age to reflect the aging rate relative to peers. The LensAge index effectively reveals the risks of age-related eye and systemic disease occurrence, as well as all-cause mortality, and it outperforms chronological age in reflecting age-related disease risks (p < 0.001). More importantly, our models can conveniently work with smartphone photographs, suggesting suitability for routine self-examination of aging status. Overall, our study demonstrates that the LensAge index may serve as an ideal quantitative indicator for clinically assessing and self-monitoring biological age in humans.
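
The abstract does not give the exact formula for the LensAge index; a minimal sketch of one plausible reading ("contrasting" taken as an age-normalized difference, purely an assumption) might look like:

```python
# One plausible reading of "contrasting LensAge and chronological age";
# the paper's exact definition is not given in this abstract.
def lens_age_index(lens_age: float, chronological_age: float) -> float:
    """Positive values suggest faster-than-peer lens aging (assumed form)."""
    return (lens_age - chronological_age) / chronological_age

print(lens_age_index(lens_age=60.0, chronological_age=50.0))  # 0.2
```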


Subject(s)
Deep Learning; Lens, Crystalline; Humans; Child, Preschool; Aging/genetics
3.
JAMA Ophthalmol ; 141(11): 1045-1051, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37856107

ABSTRACT

Importance: Retinal diseases are the leading cause of irreversible blindness worldwide, and timely detection helps prevent permanent vision loss, especially for patients in rural areas with limited medical resources. Deep learning systems (DLSs) based on fundus images with a 45° field of view have been extensively applied in population screening, while the feasibility of using ultra-widefield (UWF) fundus image-based DLSs to detect retinal lesions in patients in rural areas warrants exploration.

Objective: To explore the performance of a DLS for multiple retinal lesion screening using UWF fundus images from patients in rural areas.

Design, Setting, and Participants: In this diagnostic study, a previously developed DLS based on UWF fundus images was used to screen for 5 retinal lesions (retinal exudates or drusen, glaucomatous optic neuropathy, retinal hemorrhage, lattice degeneration or retinal breaks, and retinal detachment) in 24 villages of Yangxi County, China, between November 17, 2020, and March 30, 2021.

Interventions: The captured images were analyzed by the DLS and by ophthalmologists.

Main Outcomes and Measures: The performance of the DLS in rural screening was compared with that of the internal validation in the previous model development stage. Image quality, lesion proportions, and the complexity of lesion composition were compared between the model development stage and the rural screening stage.

Results: A total of 6222 eyes in 3149 participants (1685 women [53.5%]; mean [SD] age, 70.9 [9.1] years) were screened. The DLS achieved a mean (SD) area under the receiver operating characteristic curve (AUC) of 0.918 (0.021) (95% CI, 0.892-0.944) for detecting the 5 retinal lesions in the entire data set when applied to patients in rural areas, which was lower than that reported at the model development stage (AUC, 0.998 [0.002] [95% CI, 0.995-1.000]; P < .001). Compared with the fundus images in the model development stage, the fundus images in this rural screening study had a higher frequency of poor quality (13.8% [860 of 6222] vs 0%), greater variation in lesion proportions (0.1% [6 of 6222]-36.5% [2271 of 6222] vs 14.0% [2793 of 19 891]-21.3% [3433 of 16 138]), and a more complex lesion composition.

Conclusions and Relevance: This diagnostic study suggests that the DLS exhibited excellent performance as a screening tool for 5 retinal lesions on UWF fundus images from patients in a rural setting. However, poor image quality, diverse lesion proportions, and a complex set of lesions may have reduced its performance; these factors should be taken into consideration during model development to ensure good performance in targeted screening scenarios.
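
For readers reproducing the headline metric, a minimal sketch of how a mean (SD) AUC over the 5 lesion types could be computed with scikit-learn is shown below; the labels and scores are synthetic stand-ins, not study data:

```python
# Sketch: per-lesion AUCs and their mean (SD), as reported in the abstract.
# Arrays here are synthetic stand-ins for the study's labels and DLS scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
lesions = ["exudates_or_drusen", "GON", "hemorrhage",
           "lattice_or_breaks", "retinal_detachment"]

aucs = []
for lesion in lesions:
    y_true = rng.integers(0, 2, size=500)           # stand-in ground truth
    y_score = y_true * 0.6 + rng.random(500) * 0.4  # stand-in DLS scores
    aucs.append(roc_auc_score(y_true, y_score))

print(f"mean AUC {np.mean(aucs):.3f} (SD {np.std(aucs, ddof=1):.3f})")
```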


Subject(s)
Deep Learning; Retinal Diseases; Humans; Female; Aged; Sensitivity and Specificity; Fundus Oculi; Retina/diagnostic imaging; Retina/pathology; Retinal Diseases/diagnostic imaging; Retinal Diseases/pathology
4.
NPJ Digit Med ; 6(1): 192, 2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37845275

ABSTRACT

Image quality variation is a prominent cause of performance degradation for intelligent disease diagnostic models in clinical applications. Image quality issues are particularly acute in infantile fundus photography due to poor patient cooperation, which poses a high risk of misdiagnosis. Here, we developed a deep learning-based image quality assessment and enhancement system (DeepQuality) for infantile fundus images to improve infant retinopathy screening. DeepQuality accurately detects various quality defects concerning integrity, illumination, and clarity, with area under the curve (AUC) values ranging from 0.933 to 0.995, and it comprehensively scores the overall quality of each fundus photograph. By analyzing 2,015,758 infantile fundus photographs from real-world settings using DeepQuality, we found that 58.3% had quality defects of varying degrees, with large variations among regions and categories of hospitals. Additionally, DeepQuality provides quality enhancement based on the results of the quality assessment. After quality enhancement, clinicians' performance in diagnosing retinopathy of prematurity (ROP) improved significantly. Moreover, integrating DeepQuality with AI diagnostic models effectively improves model performance for detecting ROP. This study may serve as an important reference for the future development of other image-based intelligent disease screening systems.
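
The abstract does not describe DeepQuality's enhancement method; as a purely classical stand-in for the idea of quality enhancement, a CLAHE pass on the lightness channel (an assumption, not the paper's approach) could look like:

```python
# Classical stand-in for fundus quality enhancement: CLAHE on the L channel.
# The input file name is a hypothetical placeholder.
import cv2

bgr = cv2.imread("infant_fundus.jpg")
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
l_channel, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # local contrast
lab = cv2.merge((clahe.apply(l_channel), a, b))

cv2.imwrite("infant_fundus_enhanced.jpg", cv2.cvtColor(lab, cv2.COLOR_LAB2BGR))
```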

5.
STAR Protoc ; 4(4): 102565, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-37733597

ABSTRACT

Data quality issues have been acknowledged as one of the greatest obstacles in medical artificial intelligence research. Here, we present DeepFundus, which employs deep learning techniques to perform multidimensional classification of fundus image quality and provide real-time guidance for on-site image acquisition. We describe steps for data preparation, model training, model inference, model evaluation, and the visualization of results using heatmaps. This protocol can be implemented in Python using either the suggested dataset or a customized dataset. For complete details on the use and execution of this protocol, please refer to Liu et al.1.
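
As an illustration of the protocol's heatmap-visualization step, below is a minimal Grad-CAM sketch on a ResNet-style classifier; the architecture, class count, and target layer are assumptions rather than the protocol's actual configuration:

```python
# Grad-CAM sketch for quality-classification heatmaps; the backbone, class
# count, and target layer are illustrative assumptions, not the protocol's.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=4).eval()   # assumed 4 quality classes
activations, gradients = {}, {}

model.layer4.register_forward_hook(
    lambda m, i, o: activations.update(feat=o))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: gradients.update(feat=go[0]))

x = torch.randn(1, 3, 224, 224)          # stand-in preprocessed fundus image
logits = model(x)
logits[0, logits.argmax()].backward()    # gradient w.r.t. predicted class

weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)  # GAP over H, W
cam = F.relu((weights * activations["feat"]).sum(dim=1))    # 1 x h x w
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalize to [0, 1]
```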


Subject(s)
Biomedical Research; Deep Learning; Artificial Intelligence
6.
Cell Rep Med ; 4(10): 101197, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37734379

ABSTRACT

Ischemic retinal diseases (IRDs) are a series of common blinding diseases whose diagnosis and treatment depend on accurate interpretation of fundus fluorescein angiography (FFA) images. An artificial intelligence system (Ai-Doctor) was developed to interpret FFA images. Ai-Doctor performed well in image phase identification (area under the curve [AUC] range, 0.991-0.999), diabetic retinopathy (DR) and branch retinal vein occlusion (BRVO) diagnosis (AUC, 0.979-0.992), and non-perfusion area segmentation (Dice similarity coefficient [DSC], 89.7%-90.1%) and quantification. The segmentation model was extended to previously unencountered IRDs (central RVO and retinal vasculitis), with DSCs of 89.2% and 83.6%, respectively. A clinically applicable ischemia index (CAII) was proposed to evaluate the degree of ischemia; patients with CAII values exceeding 0.17 in BRVO and 0.08 in DR may have an increased likelihood of requiring laser therapy. Ai-Doctor is expected to achieve accurate FFA image interpretation for IRDs, potentially reducing reliance on retinal specialists.
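
A minimal sketch of the two reported quantities is given below: the Dice similarity coefficient is standard, while the CAII formula is not stated in the abstract, so the area-fraction form here is only an assumption:

```python
# Dice similarity coefficient for a predicted non-perfusion mask, plus a
# hedged reading of the CAII as a non-perfusion-area fraction (the abstract
# does not give the exact formula).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)

def caii(nonperfusion_mask: np.ndarray, retina_mask: np.ndarray) -> float:
    """Assumed form: ischemic area as a fraction of the analyzed retinal area."""
    return nonperfusion_mask.sum() / (retina_mask.sum() + 1e-8)

# e.g., flag BRVO eyes above the reported 0.17 threshold for possible laser therapy
```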


Subject(s)
Diabetic Retinopathy; Retinal Vein Occlusion; Humans; Artificial Intelligence; Fluorescein Angiography/methods; Retinal Vein Occlusion/diagnosis; Retinal Vein Occlusion/therapy; Diabetic Retinopathy/diagnostic imaging; Diabetic Retinopathy/therapy; Ischemia/diagnosis; Ischemia/therapy
7.
Cell Rep Med ; 4(2): 100912, 2023 02 21.
Article in English | MEDLINE | ID: mdl-36669488

ABSTRACT

Medical artificial intelligence (AI) has been moving from the research phase to clinical implementation. However, most AI-based models are built mainly using high-quality images preprocessed in the laboratory, which are not representative of real-world settings. This dataset bias is a major driver of AI system dysfunction. Inspired by the design of flow cytometry, DeepFundus, a deep-learning-based fundus image classifier, is developed to provide automated, multidimensional image sorting that addresses this data quality gap. DeepFundus achieves areas under the receiver operating characteristic curve (AUCs) over 0.9 in image classification concerning overall quality, clinical quality factors, and structural quality on both the internal test and national validation datasets. Additionally, DeepFundus can be integrated into both model development and the clinical application of AI diagnostics to significantly enhance model performance for detecting multiple retinopathies. DeepFundus can thus be used to construct a data-driven paradigm for improving the entire life cycle of medical AI practice.


Subject(s)
Artificial Intelligence; Flow Cytometry; ROC Curve; Area Under Curve
8.
Nat Med ; 29(2): 493-503, 2023 02.
Article in English | MEDLINE | ID: mdl-36702948

ABSTRACT

Early detection of visual impairment is crucial but is frequently missed in young children, who are capable of only limited cooperation with standard vision tests. Although certain features of visually impaired children, such as facial appearance and ocular movements, can assist ophthalmic practice, applying these features to real-world screening remains challenging. Here, we present a mobile health (mHealth) system, the smartphone-based Apollo Infant Sight (AIS), which identifies visually impaired children with any of 16 ophthalmic disorders by recording and analyzing their gazing behaviors and facial features under visual stimuli. Videos from 3,652 children (≤48 months in age; 54.5% boys) were prospectively collected to develop and validate this system. For detecting visual impairment, AIS achieved an area under the receiver operating characteristic curve (AUC) of 0.940 in an internal validation set and an AUC of 0.843 in an external validation set collected in multiple ophthalmology clinics across China. In a further test of AIS for at-home implementation by untrained parents or caregivers using their smartphones, the system was able to adapt to different testing conditions and achieved an AUC of 0.859. This mHealth system has the potential to be used by healthcare professionals, parents and caregivers to identify young children with visual impairment across a wide range of ophthalmic disorders.


Subject(s)
Deep Learning; Smartphone; Male; Infant; Humans; Child; Child, Preschool; Female; Eye; Health Personnel; Vision Disorders/diagnosis
9.
Br J Ophthalmol ; 2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36428006

ABSTRACT

AIMS: To characterise retinal microvascular alterations in the eyes of pregnant patients with anaemia (PA) and to compare them with those of healthy controls (HC) using optical coherence tomography angiography (OCTA).

METHODS: This nested case-control study included singleton PA and HC from the Eye Health in Pregnancy Study. Foveal avascular zone (FAZ) metrics, perfusion density (PD) in the superficial and deep capillary plexuses, and flow deficit (FD) density in the choriocapillaris (CC) were quantified using FIJI software. Linear regressions were conducted to evaluate differences in OCTA metrics between PA and HC. Subgroup analyses compared PA diagnosed in the early or late trimester with HC.

RESULTS: In total, 99 eyes of 99 PA and 184 eyes of 184 HC were analysed. PA had a significantly reduced FAZ perimeter (β coefficient=-0.310, p<0.001) and area (β coefficient=-0.121, p=0.001) and increased circularity (β coefficient=0.037, p<0.001) compared with HC. Furthermore, higher PD in the central (β coefficient=0.327, p=0.001) and outer (β coefficient=0.349, p=0.007) regions was observed in PA. PA diagnosed in the first trimester had more extensive central FD (β coefficient=4.199, p=0.003) in the CC, indicating impaired CC perfusion.

CONCLUSION: Anaemia during pregnancy was associated with macular microvascular abnormalities, which differed as pregnancy progressed. The results suggest that quantitative OCTA metrics may be useful for risk evaluation before clinical diagnosis.

TRIAL REGISTRATION NUMBERS: 2021KYPJ098 and ChiCTR2100049850.
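
A minimal sketch of the regression comparison (an OCTA metric regressed on a PA-vs-HC indicator, with age as an assumed covariate; the data and column names are illustrative) could look like:

```python
# Sketch of the group comparison: regress an OCTA metric on a PA-vs-HC
# indicator, yielding a β coefficient and p-value as in the abstract.
# The tiny data frame and the age covariate are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "faz_perimeter": [2.1, 2.3, 1.9, 2.4, 2.0, 2.2],  # stand-in values, mm
    "group": [1, 0, 1, 0, 1, 0],                      # 1 = PA, 0 = HC
    "age": [29, 31, 27, 33, 30, 28],
})

model = smf.ols("faz_perimeter ~ group + age", data=df).fit()
print(model.params["group"], model.pvalues["group"])  # β and p for PA vs. HC
```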

10.
J Diabetes Res ; 2022: 5779210, 2022.
Article in English | MEDLINE | ID: mdl-35493607

ABSTRACT

Purpose: To predict visual acuity (VA) 1 month after anti-vascular endothelial growth factor (VEGF) therapy in patients with diabetic macular edema (DME) using machine learning.

Methods: This retrospective study included 281 eyes with DME receiving intravitreal anti-VEGF treatment from January 1, 2019, to April 1, 2021. Eighteen features were extracted from electronic medical records and measurements from OCT images. Data obtained from January 1, 2019, to November 1, 2020, were used as the training set; data obtained from November 1, 2020, to April 1, 2021, were used as the validation set. Six machine learning algorithms were used to predict post-treatment VA. After the initial detailed investigation, we designed an optimized model for convenient application. The VA predicted by machine learning was compared with the ground truth.

Results: The ensemble algorithm (linear regression + random forest regressor) performed best in predicting VA and VA variance. In the validation set, the mean absolute errors (MAEs) of the 1-month VA predictions were 0.137-0.153 logMAR (within 7-8 letters), and the mean square errors (MSEs) were 0.033-0.045 logMAR (within 2-3 letters). For the prediction of VA variance at 1 month, the MAEs were 0.164-0.169 logMAR (within 9 letters), and the MSEs were 0.056-0.059 logMAR (within 3 letters).

Conclusions: Our machine learning models accurately predicted VA and VA variance in patients with DME 1 month after anti-VEGF therapy, which would be valuable for guiding precise, individualized interventions and managing expectations in clinical practice.
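
A minimal sketch of the best-performing ensemble (linear regression plus random forest regressor) with a chronological train/validation split is shown below; the feature matrix and targets are synthetic stand-ins:

```python
# Sketch of the reported ensemble (linear regression + random forest
# regressor) with MAE/MSE evaluation; the data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((281, 18))                  # 281 eyes x 18 features, as in the study
w = rng.random(18)
y = 0.1 * (X @ w) + rng.normal(0, 0.05, 281)  # stand-in 1-month logMAR VA

split = 220                                # chronological split, as in the study
ensemble = VotingRegressor([
    ("lr", LinearRegression()),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
]).fit(X[:split], y[:split])

pred = ensemble.predict(X[split:])
print("MAE:", mean_absolute_error(y[split:], pred))
print("MSE:", mean_squared_error(y[split:], pred))
```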


Subject(s)
Diabetes Mellitus; Diabetic Retinopathy; Macular Edema; Angiogenesis Inhibitors/therapeutic use; Diabetes Mellitus/drug therapy; Diabetic Retinopathy/drug therapy; Humans; Intravitreal Injections; Machine Learning; Macular Edema/drug therapy; Retrospective Studies; Visual Acuity
11.
J Dent ; 118: 103947, 2022 03.
Article in English | MEDLINE | ID: mdl-35021070

ABSTRACT

OBJECTIVES: This study aimed to establish and validate machine learning models for prognosis prediction in endodontic microsurgery, helping to avoid treatment failure and support clinical decision-making.

METHODS: A total of 234 teeth from 178 patients were included. We developed gradient boosting machine (GBM) and random forest (RF) models. For each model, 80% of the data were randomly selected for the training set and the remaining 20% were used as the test set. A stratified 5-fold cross-validation approach was used in model training and testing. Correlation analysis and importance ranking were conducted for feature selection. Predictive accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, and the area under the receiver operating characteristic (ROC) curve (AUC) were calculated to evaluate predictive performance.

RESULTS: There were eight important predictors: tooth type, lesion size, type of bone defect, root filling density, root filling length, apical extension of post, age, and sex. The GBM model achieved a predictive accuracy of 0.80, with a sensitivity of 0.92, specificity of 0.71, PPV of 0.71, NPV of 0.92, F1 of 0.80, and AUC of 0.88. The RF model achieved an accuracy of 0.80, with a sensitivity of 0.85, specificity of 0.76, PPV of 0.73, NPV of 0.87, F1 of 0.79, and AUC of 0.83.

CONCLUSIONS: The trained models, built from eight common variables, showed the potential to predict the prognosis of endodontic microsurgery. The GBM model slightly outperformed the RF model on our dataset.

CLINICAL SIGNIFICANCE: Clinicians can use machine learning models for preoperative analysis in endodontic microsurgery. The models are expected to improve the efficiency of clinical decision-making and assist in clinician-patient communication.
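
A minimal sketch of the GBM pipeline with stratified 5-fold cross-validation and the reported metric panel could look like the following; the data are synthetic stand-ins for the 234-tooth dataset:

```python
# Sketch of the GBM pipeline with stratified 5-fold CV and the metric panel
# from the abstract; the feature matrix and outcomes are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.random((234, 8))                   # 234 teeth x 8 predictors
y = (X[:, 0] + rng.normal(0, 0.3, 234) > 0.5).astype(int)

for train, test in StratifiedKFold(n_splits=5, shuffle=True,
                                   random_state=0).split(X, y):
    clf = GradientBoostingClassifier(random_state=0).fit(X[train], y[train])
    tn, fp, fn, tp = confusion_matrix(y[test], clf.predict(X[test])).ravel()
    auc = roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1])
    print(f"acc={(tp + tn) / len(test):.2f}",
          f"sens={tp / (tp + fn):.2f}", f"spec={tn / (tn + fp):.2f}",
          f"PPV={tp / (tp + fp):.2f}", f"NPV={tn / (tn + fn):.2f}",
          f"AUC={auc:.2f}")
```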


Subject(s)
Machine Learning; Microsurgery; Clinical Decision-Making; Humans; Predictive Value of Tests; Prognosis
12.
Front Bioeng Biotechnol ; 9: 651340, 2021.
Article in English | MEDLINE | ID: mdl-34805102

ABSTRACT

Subretinal fluid (SRF) can lead to irreversible visual loss in patients with central serous chorioretinopathy (CSC) if not absorbed in time. Early detection of and intervention for SRF can improve visual prognosis and reduce irreversible damage to the retina. As fundus photography is the most commonly used and most easily obtained examination for patients with CSC, we investigated whether, and to what extent, SRF depicted on fundus images can be assessed using deep learning technology. In this study, we developed a cascaded deep learning system based on fundus images for automated SRF detection and macula-on/off serous retinal detachment discernment. Our system performed reliably, and its SRF detection accuracy was higher than that of experienced retinal specialists. In addition, the system automatically indicates whether SRF progression involves the macula, providing guidance on urgency for patients. Implementing our deep learning system could effectively reduce the vision impairment resulting from SRF in patients with CSC by providing timely identification and referral.
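
A minimal sketch of the cascade logic (stage 1 detects SRF; only positives go to stage 2 for macula-on/off discernment) is shown below; the model files and thresholds are hypothetical placeholders:

```python
# Sketch of a two-stage cascade; the model files, single-logit outputs, and
# 0.5 thresholds are illustrative assumptions, not the paper's configuration.
import torch

detector = torch.jit.load("srf_detector.pt").eval()   # hypothetical stage 1
discerner = torch.jit.load("macula_onoff.pt").eval()  # hypothetical stage 2

def triage(image: torch.Tensor) -> str:
    """Takes a preprocessed fundus image tensor of shape 1xCxHxW."""
    with torch.no_grad():
        if detector(image).sigmoid().item() < 0.5:
            return "no SRF detected"
        macula_off = discerner(image).sigmoid().item() >= 0.5
    return "SRF, macula-off: urgent referral" if macula_off else "SRF, macula-on"
```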

13.
Front Bioeng Biotechnol ; 9: 662749, 2021.
Article in English | MEDLINE | ID: mdl-34295877

ABSTRACT

Aim: After neoadjuvant chemotherapy (NACT), tumor shrinkage pattern is a more reasonable outcome than pathological complete response (pCR) for deciding on possible breast-conserving surgery (BCS). The aim of this article was to establish a machine learning model combining radiomics features from multiparametric MRI (mpMRI) and clinicopathologic characteristics for early prediction of tumor shrinkage pattern prior to NACT in breast cancer.

Materials and Methods: This study included 199 patients with breast cancer who successfully completed NACT and underwent subsequent breast surgery. For each patient, 4,198 radiomics features were extracted from the segmented 3D regions of interest (ROIs) in mpMRI sequences, including T1-weighted dynamic contrast-enhanced imaging (T1-DCE), fat-suppressed T2-weighted imaging (T2WI), and the apparent diffusion coefficient (ADC) map. Feature selection and supervised machine learning algorithms were used to identify predictors of tumor shrinkage pattern as follows: (1) reducing the feature dimension using ANOVA and the least absolute shrinkage and selection operator (LASSO) with 10-fold cross-validation, (2) splitting the dataset into training and testing sets and constructing prediction models using 12 classification algorithms, and (3) assessing model performance through the area under the curve (AUC), accuracy, sensitivity, and specificity. We also compared the most discriminative model across molecular subtypes of breast cancer.

Results: The multilayer perceptron (MLP) neural network achieved higher AUC and accuracy than the other classifiers. The radiomics model achieved a mean AUC of 0.975 (accuracy = 0.912) on the training dataset and 0.900 (accuracy = 0.828) on the testing dataset with 30-round 6-fold cross-validation. When incorporating clinicopathologic characteristics, the mean AUC was 0.985 (accuracy = 0.930) on the training dataset and 0.939 (accuracy = 0.870) on the testing dataset. The model further achieved good AUCs on the testing dataset with 30-round 5-fold cross-validation in three molecular subtypes of breast cancer, as follows: (1) HR+/HER2-: 0.901 (accuracy = 0.816), (2) HER2+: 0.940 (accuracy = 0.865), and (3) TN: 0.837 (accuracy = 0.811).

Conclusions: Our machine learning model combining radiomics features and clinical characteristics could provide a feasible tool to predict tumor shrinkage patterns prior to NACT. This prediction model will be valuable in guiding NACT and surgical treatment in breast cancer.
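
A minimal sketch of the feature-selection chain (ANOVA filter, then LASSO with 10-fold cross-validation) feeding an MLP classifier is shown below; the radiomics matrix and labels are synthetic, and the retained-feature count is an assumption:

```python
# Sketch of ANOVA + LASSO (10-fold CV) feature selection feeding an MLP,
# mirroring the pipeline in the abstract; data and k=200 are assumptions.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LassoCV
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(199, 4198))                       # 199 patients x 4,198 features
y = (X[:, :5].sum(axis=1) + rng.normal(0, 0.5, 199) > 0).astype(int)

X = StandardScaler().fit_transform(X)
X = SelectKBest(f_classif, k=200).fit_transform(X, y)  # step 1a: ANOVA filter

lasso = LassoCV(cv=10, random_state=0).fit(X, y)       # step 1b: LASSO, 10-fold CV
X = X[:, lasso.coef_ != 0]                             # keep LASSO-selected features

mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                    random_state=0).fit(X, y)          # step 2: MLP classifier
print("selected features:", X.shape[1], "train accuracy:", mlp.score(X, y))
```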

14.
Front Bioeng Biotechnol ; 9: 646479, 2021.
Article in English | MEDLINE | ID: mdl-33748090

ABSTRACT

Visual prediction reflects the tendency and speed of visual development over a future period; on this basis, ophthalmologists and guardians can anticipate the potential visual prognosis, decide on an intervention plan, and support visual development. In our study, we developed an intelligent system based on features of optical coherence tomography images for long-term prediction of best corrected visual acuity (BCVA) 3 and 5 years in advance. Two hundred eyes of 132 patients were included, and six machine learning algorithms were applied. The BCVA predictions achieved small errors within two lines of the visual chart. The mean absolute errors (MAEs) between the predictions and the ground truth were 0.1482-0.2117 logMAR for 3-year predictions and 0.1198-0.1845 logMAR for 5-year predictions; the root mean square errors (RMSEs) were 0.1916-0.2942 logMAR for 3-year predictions and 0.1692-0.2537 logMAR for 5-year predictions. This is the first study to predict post-therapeutic BCVA in young children, and it establishes a reliable method to predict prognosis 5 years in advance. Applying this research will contribute to the design of visual intervention plans and the assessment of visual prognosis.
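
Because the paper reports errors in logMAR, a small helper for converting logMAR error to chart lines and letters (each ETDRS line = 0.1 logMAR = 5 letters) puts the reported MAEs in context:

```python
# logMAR-to-chart conversions: 1 ETDRS line = 0.1 logMAR = 5 letters,
# i.e., 0.02 logMAR per letter.
def logmar_to_lines(err: float) -> float:
    return err / 0.1

def logmar_to_letters(err: float) -> float:
    return err / 0.02

print(logmar_to_lines(0.2117))    # ~2.1 lines, matching "within two lines"
print(logmar_to_letters(0.1482))  # ~7.4 letters
```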
