Results 1 - 8 of 8
1.
J Hypertens; 42(4): 701-710, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38230614

ABSTRACT

INTRODUCTION: Early prediction of preeclampsia (PE) is of universal importance in controlling the disease process. Our study aimed to assess the feasibility of using retinal fundus images to predict preeclampsia via deep learning in singleton pregnancies. METHODS: This prospective cohort study was conducted at Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine. Eligible participants were women with singleton pregnancies who presented for prenatal visits before 14 weeks of gestation from September 1, 2020, to February 1, 2022. Retinal fundus images were obtained using a nonmydriatic digital retinal camera at the initial prenatal visit upon admission, before 20 weeks of gestation. In addition, we generated fundus scores, which indicated the predictive value for hypertension, using a hypertension detection model. To evaluate the predictive value of the retinal fundus image-based deep learning algorithm for preeclampsia, we conducted stratified analyses and measured the area under the curve (AUC), sensitivity, and specificity. We then conducted sensitivity analyses for validation. RESULTS: Our study analyzed a total of 1138 women, of whom 92 developed hypertensive disorders of pregnancy (HDP), including 26 cases of gestational hypertension and 66 cases of preeclampsia. The adjusted odds ratio (aOR) of the fundus score was 2.582 (95% CI, 1.883-3.616; P < 0.001). In the categories of prepregnancy BMI less than 28.0 and at least 28.0, the aORs were 3.073 (95% CI, 2.265-4.244; P < 0.001) and 5.866 (95% CI, 3.292-11.531; P < 0.001), respectively. In the categories of maternal age less than 35.0 and at least 35.0 years, the aORs were 2.845 (95% CI, 1.854-4.463; P < 0.001) and 2.884 (95% CI, 1.794-4.942; P < 0.001), respectively. The AUC of the fundus score combined with risk factors was 0.883 (sensitivity, 0.722; specificity, 0.934; 95% CI, 0.834-0.932) for predicting preeclampsia. CONCLUSION: Our study demonstrates that a deep learning algorithm based on retinal fundus images offers promising predictive value for the early detection of preeclampsia.
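The analysis described here, a fundus-derived score entered into a logistic model together with clinical risk factors and reported as adjusted odds ratios plus AUC/sensitivity/specificity, can be sketched as follows. This is an illustrative sketch only, not the authors' code: the synthetic data, variable names, and Youden-threshold choice are all assumptions.

```python
# Sketch: combine a hypothetical CNN-derived fundus score with clinical risk
# factors in a logistic model, then report an adjusted OR and AUC/sens/spec.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 1000
fundus_score = rng.normal(size=n)            # hypothetical model output per woman
bmi = rng.normal(23, 3, size=n)              # prepregnancy BMI (simulated)
age = rng.normal(31, 4, size=n)              # maternal age (simulated)
logit = -3.5 + 1.0 * fundus_score + 0.05 * (bmi - 23) + 0.03 * (age - 31)
pe = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # simulated preeclampsia outcome

# Adjusted odds ratio of the fundus score, controlling for BMI and age
X = sm.add_constant(np.column_stack([fundus_score, bmi, age]))
fit = sm.Logit(pe, X).fit(disp=False)
aor = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"aOR per unit fundus score: {aor:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")

# Discrimination of the combined model: AUC, plus sensitivity and specificity
# at the Youden-optimal threshold
pred = fit.predict(X)
auc = roc_auc_score(pe, pred)
fpr, tpr, thr = roc_curve(pe, pred)
best = np.argmax(tpr - fpr)
print(f"AUC {auc:.3f}, sensitivity {tpr[best]:.3f}, specificity {1 - fpr[best]:.3f}")
```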


Subject(s)
Deep Learning, Hypertension, Pregnancy-Induced, Pre-Eclampsia, Female, Pregnancy, Humans, Pre-Eclampsia/diagnostic imaging, Prospective Studies, China, Hypertension, Pregnancy-Induced/diagnosis
2.
Age Ageing; 51(12), 2022 Dec 05.
Article in English | MEDLINE | ID: mdl-36580391

ABSTRACT

BACKGROUND: The Cardiovascular Risk Factors, Aging, and Incidence of Dementia (CAIDE) dementia risk score is a recognised tool for dementia risk stratification. However, its application is limited by its requirements for multidimensional information and a fasting blood draw. Consequently, an effective and non-invasive tool for screening individuals with high dementia risk in large population-based settings is urgently needed. METHODS: A deep learning algorithm based on fundus photographs for estimating the CAIDE dementia risk score was developed and internally validated on a medical check-up dataset that included 271,864 participants from 19 province-level administrative regions of China, and externally validated on an independent dataset that included 20,690 check-up participants in Beijing. The performance for identifying individuals with high dementia risk (CAIDE dementia risk score ≥ 10 points) was evaluated by the area under the receiver operating characteristic curve (AUC) with its 95% confidence interval (CI). RESULTS: The algorithm achieved an AUC of 0.944 (95% CI: 0.939-0.950) in the internal validation group and 0.926 (95% CI: 0.913-0.939) in the external group. In addition, the estimated CAIDE dementia risk score derived from the algorithm was significantly associated with both comprehensive cognitive function and specific cognitive domains. CONCLUSIONS: The algorithm, trained on fundus photographs, identified individuals with high dementia risk well in a population setting and therefore has the potential to serve as a non-invasive and more expedient method for dementia risk stratification. It might also be adopted in dementia clinical trials, incorporated into the inclusion criteria to efficiently select eligible participants.
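The evaluation the abstract reports, discrimination of the high-risk group (reference CAIDE score ≥ 10) by the estimated score, summarised as an AUC with a 95% CI, could be computed along the lines of the sketch below. This is not the study's pipeline; the simulated scores and the percentile-bootstrap CI are assumptions for illustration.

```python
# Sketch: binarise the reference CAIDE score at the >= 10 threshold and
# evaluate the estimated score with an AUC plus a bootstrap 95% CI.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
true_caide = rng.normal(7, 3, size=n)                 # reference CAIDE scores (simulated)
estimated = true_caide + rng.normal(0, 1.5, size=n)   # hypothetical model estimates
high_risk = (true_caide >= 10).astype(int)            # high dementia risk label

auc = roc_auc_score(high_risk, estimated)

# Percentile bootstrap for the 95% CI of the AUC
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    if high_risk[idx].min() == high_risk[idx].max():   # resample must contain both classes
        continue
    boot.append(roc_auc_score(high_risk[idx], estimated[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```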


Subject(s)
Deep Learning, Dementia, Humans, Dementia/diagnosis, Dementia/epidemiology, Dementia/psychology, Aging/psychology, Risk Factors, Cognition
4.
Front Med (Lausanne); 9: 920716, 2022.
Article in English | MEDLINE | ID: mdl-35755054

ABSTRACT

Background: Thyroid-associated ophthalmopathy (TAO) is one of the most common orbital diseases; it seriously threatens visual function and significantly affects patients' appearance and ability to work. This study established an intelligent diagnostic system for TAO based on facial images. Methods: Patient images and data were obtained from the medical records of patients with TAO who visited Shanghai Changzheng Hospital from 2013 to 2018. Eyelid retraction, ocular dyskinesia, conjunctival congestion, and other signs were annotated on the images. Patients were classified according to the type, stage, and grade of TAO based on the diagnostic criteria. The diagnostic system consisted of multiple task-specific models. Results: The intelligent diagnostic system accurately diagnosed TAO in three stages. The built-in models pre-processed the facial images and identified multiple TAO signs, with average areas under the receiver operating characteristic curve exceeding 0.85 (F1 score > 0.80). Conclusion: The intelligent diagnostic system introduced in this study accurately identified several common signs of TAO.
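A per-sign evaluation of the kind summarised above (one task-specific binary model per clinical sign, reported as AUC and F1) might be tabulated as in the sketch below. The sign list, labels, and scores are simulated assumptions, not the study's data.

```python
# Sketch: report AUC and F1 separately for each TAO sign classifier.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(1)
signs = ["eyelid_retraction", "ocular_dyskinesia", "conjunctival_congestion"]
n = 800

for sign in signs:
    y_true = rng.binomial(1, 0.3, size=n)                            # ground-truth sign label
    score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, n), 0, 1)    # hypothetical model score
    y_pred = (score >= 0.5).astype(int)
    print(f"{sign}: AUC {roc_auc_score(y_true, score):.2f}, "
          f"F1 {f1_score(y_true, y_pred):.2f}")
```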

5.
JAMA Netw Open; 5(5): e229960, 2022 May 02.
Article in English | MEDLINE | ID: mdl-35503220

ABSTRACT

Importance: The lack of experienced ophthalmologists limits the early diagnosis of retinal diseases. Artificial intelligence can provide an efficient, real-time means of screening for retinal diseases. Objective: To develop and prospectively validate a deep learning (DL) algorithm that, based on ocular fundus images, recognizes numerous retinal diseases simultaneously in clinical practice. Design, Setting, and Participants: This multicenter diagnostic study at 65 public medical screening centers and hospitals in 19 Chinese provinces included individuals attending annual routine medical examinations and participants of population-based and community-based studies. Exposures: Based on 120 002 ocular fundus photographs, the Retinal Artificial Intelligence Diagnosis System (RAIDS) was developed to identify 10 retinal diseases. RAIDS was validated in a prospectively collected data set, and its performance was compared with that of ophthalmologists in the data sets of the population-based Beijing Eye Study and the community-based Kailuan Eye Study. Main Outcomes and Measures: The performance of each classifier was assessed by sensitivity, specificity, accuracy, F1 score, and Cohen κ score. Results: In the prospective validation data set of 208 758 images collected from 110 784 individuals (median [range] age, 42 [8-87] years; 115 443 [55.3%] female), RAIDS achieved a sensitivity of 89.8% (95% CI, 89.5%-90.1%) for detecting any of the 10 retinal diseases. RAIDS differentiated the 10 retinal diseases with accuracies ranging from 95.3% to 99.9%, without marked differences between medical screening centers or geographical regions in China. Compared with retinal specialists, RAIDS achieved a higher sensitivity for the detection of any retinal abnormality (RAIDS, 91.7% [95% CI, 90.6%-92.8%]; certified ophthalmologists, 83.7% [95% CI, 82.1%-85.1%]; junior retinal specialists, 86.4% [95% CI, 84.9%-87.7%]; and senior retinal specialists, 88.5% [95% CI, 87.1%-89.8%]). RAIDS reached a superior or similar diagnostic sensitivity compared with senior retinal specialists in the detection of 7 of the 10 retinal diseases (ie, referable diabetic retinopathy, referable possible glaucoma, macular hole, epiretinal macular membrane, hypertensive retinopathy, myelinated fibers, and retinitis pigmentosa), and a performance comparable with that of certified ophthalmologists in 2 diseases (ie, age-related macular degeneration and retinal vein occlusion). Compared with ophthalmologists, RAIDS needed 96% to 97% less time for image assessment. Conclusions and Relevance: In this diagnostic study, the DL system accurately distinguished 10 retinal diseases in real time. This technology may help overcome the lack of experienced ophthalmologists in underdeveloped areas.
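The outcome measures named in the abstract (per-disease sensitivity, specificity, accuracy, F1, and Cohen κ) can be computed from binary predictions as in the sketch below. This is not RAIDS; the disease subset, prevalences, and noise model are illustrative assumptions.

```python
# Sketch: per-disease classification metrics for a multi-label fundus screener.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, cohen_kappa_score

rng = np.random.default_rng(7)
diseases = ["referable_DR", "possible_glaucoma", "macular_hole", "RVO"]
n = 2000

for d in diseases:
    y_true = rng.binomial(1, 0.1, size=n)              # simulated ground truth
    flip = rng.random(n) < 0.05                        # hypothetical 5% disagreement
    y_pred = np.where(flip, 1 - y_true, y_true)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    acc = (tp + tn) / n
    print(f"{d}: sens {sens:.3f}, spec {spec:.3f}, acc {acc:.3f}, "
          f"F1 {f1_score(y_true, y_pred):.3f}, kappa {cohen_kappa_score(y_true, y_pred):.3f}")
```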


Subject(s)
Diabetic Retinopathy, Optic Nerve Diseases, Retinal Diseases, Adult, Artificial Intelligence, Diabetic Retinopathy/diagnosis, Female, Humans, Male, Retina/diagnostic imaging, Retinal Diseases/diagnostic imaging
6.
Sci Rep; 11(1): 19291, 2021 Sep 29.
Article in English | MEDLINE | ID: mdl-34588493

ABSTRACT

Epiretinal membrane (ERM) is a common ophthalmological disorder of high prevalence. Its symptoms include metamorphopsia, blurred vision, and decreased visual acuity. Early diagnosis and timely treatment of ERM are crucial to preventing vision loss. Although optical coherence tomography (OCT) is regarded as a de facto standard for ERM diagnosis because of its intuitiveness and high sensitivity, ophthalmoscopic examination and fundus photographs retain the advantages of price and accessibility. Artificial intelligence (AI) has been widely applied in the health care industry because of its robust performance in detecting various diseases. In this study, we validated the use of a previously trained deep neural network-based AI model for ERM detection on color fundus photographs. An independent test set of fundus photographs was labeled by a group of ophthalmologists according to the corresponding OCT images, which served as the gold standard. The test set was then interpreted by other ophthalmologists and by the AI model, both masked to the OCT results. Compared with manual diagnosis based on fundus photographs alone, the AI model had comparable accuracy (AI model 77.08% vs. integrated manual diagnosis 75.69%, χ2 = 0.038, P = 0.845, McNemar's test) and higher sensitivity (75.90% vs. 63.86%, χ2 = 4.500, P = 0.034, McNemar's test), at the cost of lower but still reasonable specificity (78.69% vs. 91.80%, χ2 = 6.125, P = 0.013, McNemar's test). Thus, our AI model can serve as a possible alternative to manual diagnosis in ERM screening.
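The paired comparison reported above (AI vs. manual grading of the same photographs, tested with McNemar's test) hinges on the discordant pairs of a 2x2 correct/incorrect table. The sketch below shows the mechanics on simulated reads; the case count, accuracy rates, and use of statsmodels are assumptions, not the paper's analysis.

```python
# Sketch: McNemar's test on paired AI vs. manual calls against a gold standard.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(3)
n = 288                                   # hypothetical number of paired test images

# Simulated per-case correctness for each reader (would come from OCT-based labels)
ai_correct = rng.random(n) < 0.77
manual_correct = rng.random(n) < 0.76

# 2x2 table of agreement; the off-diagonal (discordant) cells drive the test
table = np.array([
    [np.sum(ai_correct & manual_correct),  np.sum(ai_correct & ~manual_correct)],
    [np.sum(~ai_correct & manual_correct), np.sum(~ai_correct & ~manual_correct)],
])
result = mcnemar(table, exact=False, correction=True)
print(f"chi2 = {result.statistic:.3f}, p = {result.pvalue:.3f}")
```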


Subject(s)
Deep Learning, Epiretinal Membrane/diagnosis, Fundus Oculi, Image Processing, Computer-Assisted/methods, Aged, Aged, 80 and over, Datasets as Topic, Female, Humans, Male, Middle Aged, Ophthalmoscopy, Photography, Retrospective Studies, Tomography, Optical Coherence
7.
Lancet Digit Health; 3(8): e486-e495, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34325853

ABSTRACT

BACKGROUND: Medical artificial intelligence (AI) has entered the clinical implementation phase, although real-world performance of deep-learning systems (DLSs) for screening fundus disease remains unsatisfactory. Our study aimed to train a clinically applicable DLS for fundus diseases using data derived from the real world, and externally test the model using fundus photographs collected prospectively from the settings in which the model would most likely be adopted. METHODS: In this national real-world evidence study, we trained a DLS, the Comprehensive AI Retinal Expert (CARE) system, to identify the 14 most common retinal abnormalities using 207 228 colour fundus photographs derived from 16 clinical settings with different disease distributions. CARE was internally validated using 21 867 photographs and externally tested using 18 136 photographs prospectively collected from 35 real-world settings across China where CARE might be adopted, including eight tertiary hospitals, six community hospitals, and 21 physical examination centres. The performance of CARE was further compared with that of 16 ophthalmologists and tested using datasets with non-Chinese ethnicities and previously unused camera types. This study was registered with ClinicalTrials.gov, NCT04213430, and is currently closed. FINDINGS: The area under the receiver operating characteristic curve (AUC) in the internal validation set was 0·955 (SD 0·046). AUC values in the external test set were 0·965 (0·035) in tertiary hospitals, 0·983 (0·031) in community hospitals, and 0·953 (0·042) in physical examination centres. The performance of CARE was similar to that of ophthalmologists. Large variations in sensitivity were observed among the ophthalmologists in different regions and with varying experience. The system retained strong identification performance when tested using the non-Chinese dataset (AUC 0·960, 95% CI 0·957-0·964 in referable diabetic retinopathy). INTERPRETATION: Our DLS (CARE) showed satisfactory performance for screening multiple retinal abnormalities in real-world settings using prospectively collected fundus photographs, and so could allow the system to be implemented and adopted for clinical care. FUNDING: This study was funded by the National Key R&D Programme of China, the Science and Technology Planning Projects of Guangdong Province, the National Natural Science Foundation of China, the Natural Science Foundation of Guangdong Province, and the Fundamental Research Funds for the Central Universities. TRANSLATION: For the Chinese translation of the abstract see Supplementary Materials section.
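The external-test reporting above (an AUC summarised as mean with SD for each type of real-world setting) follows a per-setting, per-abnormality grouping that could be organised as in the sketch below. This is an assumed structure for illustration, not the CARE pipeline; labels, scores, and setting sizes are simulated.

```python
# Sketch: summarise AUC as mean (SD) across abnormality-specific classifiers,
# grouped by external setting type.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
settings = {"tertiary_hospital": 4000, "community_hospital": 2500, "exam_centre": 6000}
n_abnormalities = 14

for setting, n in settings.items():
    aucs = []
    for _ in range(n_abnormalities):
        y = rng.binomial(1, 0.08, size=n)                       # simulated labels
        s = np.clip(0.7 * y + rng.normal(0.2, 0.15, n), 0, 1)   # simulated scores
        aucs.append(roc_auc_score(y, s))
    print(f"{setting}: AUC {np.mean(aucs):.3f} (SD {np.std(aucs):.3f})")
```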


Subject(s)
Deep Learning, Expert Systems, Image Processing, Computer-Assisted/methods, Mass Screening/methods, Models, Biological, Retina, Retinal Diseases/diagnosis, Area Under Curve, Artificial Intelligence, Biomedical Technology, China, Diabetic Retinopathy/diagnosis, Fundus Oculi, Humans, Ophthalmologists, Photography, ROC Curve
8.
Br J Ophthalmol; 103(11): 1553-1560, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31481392

ABSTRACT

PURPOSE: To establish and validate a universal artificial intelligence (AI) platform for collaborative management of cataracts involving multilevel clinical scenarios, and to explore an AI-based medical referral pattern that improves collaborative efficiency and resource coverage. METHODS: The training and validation datasets were derived from the Chinese Medical Alliance for Artificial Intelligence, covering multilevel healthcare facilities and capture modes. The datasets were labelled using a three-step strategy: (1) capture mode recognition; (2) cataract diagnosis as a normal lens, cataract or postoperative eye; and (3) detection of referable cataracts with respect to aetiology and severity. Moreover, we integrated the cataract AI agent with a real-world multilevel referral pattern involving self-monitoring at home, primary healthcare and specialised hospital services. RESULTS: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance in the three-step tasks: (1) capture mode recognition (area under the curve (AUC) 99.28%-99.71%); (2) cataract diagnosis (normal lens, cataract or postoperative eye, with AUCs of 99.82%, 99.96% and 99.93% for the mydriatic-slit lamp mode and AUCs >99% for the other capture modes); and (3) detection of referable cataracts (AUCs >91% in all tests). In the real-world tertiary referral pattern, the agent suggested that 30.3% of people be 'referred', increasing the ophthalmologist-to-population service ratio 10.2-fold compared with the traditional pattern. CONCLUSIONS: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance and effective service for cataracts. Our AI-based medical referral pattern could be extended to other common disease conditions and resource-intensive situations.
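The three-step strategy described above (capture mode recognition, then lens-status diagnosis, then referable-cataract detection) is naturally expressed as a cascade, as in the hypothetical sketch below. Each stage would be a trained classifier in practice; here they are stubbed with placeholder callables, and all names are illustrative.

```python
# Sketch: a three-step cataract triage cascade with stubbed stage models.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CataractAgent:
    capture_mode_model: Callable[[object], str]        # e.g. "mydriatic_slit_lamp"
    diagnosis_model: Callable[[object, str], str]      # "normal" | "cataract" | "postoperative"
    referral_model: Callable[[object, str], bool]      # referable w.r.t. aetiology/severity

    def triage(self, image) -> dict:
        mode = self.capture_mode_model(image)                  # step 1: capture mode
        diagnosis = self.diagnosis_model(image, mode)          # step 2: lens status
        referable = (diagnosis == "cataract"
                     and self.referral_model(image, mode))     # step 3: only cataracts referred
        return {"mode": mode, "diagnosis": diagnosis, "refer": referable}

# Example with trivial stand-in models (a real system would use trained CNNs)
agent = CataractAgent(
    capture_mode_model=lambda img: "mydriatic_slit_lamp",
    diagnosis_model=lambda img, mode: "cataract",
    referral_model=lambda img, mode: True,
)
print(agent.triage(image=None))
```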


Subject(s)
Artificial Intelligence, Cataract/diagnosis, Intersectoral Collaboration, Adolescent, Adult, Aged, Aged, 80 and over, Area Under Curve, Cataract/classification, Cataract/epidemiology, Cataract Extraction, Female, Humans, Male, Mass Screening, Middle Aged, ROC Curve, Slit Lamp Microscopy, Vision Disorders/rehabilitation