Results 1 - 8 of 8
1.
Ultrasound Med Biol ; 49(12): 2497-2509, 2023 12.
Article in English | MEDLINE | ID: mdl-37730479

ABSTRACT

OBJECTIVE: The goal of this work was to develop and assess a deep learning-based model that could automatically segment anterior chamber angle (ACA) tissues; classify iris curvature (I-Curv), iris root insertion (IRI), and angle closure (AC); automatically locate the scleral spur; and measure ACA parameters in ultrasound biomicroscopy (UBM) images. METHODS: A total of 11,006 UBM images were obtained from 1,538 patients with primary angle-closure glaucoma admitted to the Eye Center of Renmin Hospital of Wuhan University (Wuhan, China) to build an imaging database. The UNet++ network was used to segment ACA tissues automatically. In addition, two support vector machine (SVM) algorithms were developed to classify I-Curv and AC, and a logistic regression (LR) algorithm was developed to classify IRI. An algorithm was also developed to automatically locate the scleral spur and measure ACA parameters. An external data set of 1,658 images from Huangshi Aier Eye Hospital was used to evaluate the performance of the model under different conditions, and an additional 439 images were collected to compare the performance of the model with that of experts. RESULTS: The model achieved accuracies of 95.2%, 88.9% and 85.6% in classification of AC, I-Curv and IRI, respectively. Compared with ophthalmologists, the model achieved an accuracy of 0.765 in classifying AC, I-Curv and IRI, as high as that of the ophthalmologists (p > 0.05). The average relative errors (AREs) of ACA parameters were smaller than 15% in the internal data sets. Intraclass correlation coefficients (ICCs) of all angle-related parameters were greater than 0.911, and ICCs of all iris thickness parameters were greater than 0.884. Accurate measurement of ACA parameters partly depended on accurate localization of the scleral spur (p < 0.001).
CONCLUSION: The model could evaluate the ACA effectively and accurately through fully automated analysis of UBM images, and it is a promising tool to assist ophthalmologists. The present study suggests that the deep learning model can be applied extensively to the evaluation of the ACA and AC-related biometric risk factors, and it may broaden the application of UBM imaging in clinical research on primary angle-closure glaucoma.
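The two measurement steps the abstract describes, computing an angle at the scleral spur from segmented boundary points and reporting average relative error (ARE) against a reference, can be sketched as below. The function names and the two-point parameterisation are illustrative assumptions, not the authors' implementation.

```python
import math

def angle_at_spur(spur, corneal_pt, iris_pt):
    """Angle (degrees) subtended at the scleral spur by one point on the
    corneoscleral boundary and one point on the iris surface."""
    v1 = (corneal_pt[0] - spur[0], corneal_pt[1] - spur[1])
    v2 = (iris_pt[0] - spur[0], iris_pt[1] - spur[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def average_relative_error(measured, reference):
    """Mean of |measured - reference| / reference, as in the reported AREs."""
    return sum(abs(m - r) / r for m, r in zip(measured, reference)) / len(measured)
```

For example, perpendicular boundary rays from the spur give a 90-degree angle, and a measurement of 1.1 against a reference of 1.0 gives an ARE of 10%, well under the 15% bound reported for the internal data sets.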


Subject(s)
Deep Learning , Glaucoma, Angle-Closure , Humans , Glaucoma, Angle-Closure/diagnostic imaging , Microscopy, Acoustic/methods , Gonioscopy , Tomography, Optical Coherence/methods , Anterior Chamber
2.
Front Med (Lausanne) ; 10: 1164188, 2023.
Article in English | MEDLINE | ID: mdl-37153082

ABSTRACT

Objective: To automatically and rapidly recognize the layers of corneal images obtained by in vivo confocal microscopy (IVCM) and classify them as normal or abnormal, a computer-aided diagnostic model based on deep learning was developed and tested to reduce physicians' workload. Methods: A total of 19,612 corneal images were retrospectively collected from 423 patients who underwent IVCM between January 2021 and August 2022 at Renmin Hospital of Wuhan University (Wuhan, China) and Zhongnan Hospital of Wuhan University (Wuhan, China). Images were reviewed and categorized by three corneal specialists before training and testing the models, which comprised a layer recognition model (epithelium, Bowman's membrane, stroma, and endothelium) and a diagnostic model, to identify the layers of corneal images and distinguish normal from abnormal images. In total, 580 database-independent IVCM images were used in a human-machine competition to assess the speed and accuracy of image recognition by 4 ophthalmologists and the artificial intelligence (AI) model. To evaluate the efficacy of the model, 8 trainees were asked to recognize these 580 images both with and without model assistance, and the results of the two evaluations were analyzed to explore the effect of model assistance. Results: The accuracy of the model reached 0.914, 0.957, 0.967, and 0.950 for recognition of the epithelium, Bowman's membrane, stroma, and endothelium in the internal test dataset, respectively, and 0.961, 0.932, 0.945, and 0.959 for recognition of normal/abnormal images at each layer, respectively. In the external test dataset, the accuracy of corneal layer recognition was 0.960, 0.965, 0.966, and 0.964, respectively, and the accuracy of normal/abnormal image recognition was 0.983, 0.972, 0.940, and 0.982, respectively.
In the human-machine competition, the model achieved an accuracy of 0.929, which was similar to that of the specialists and higher than that of the senior physicians, and its recognition speed was 237 times faster than that of the specialists. With model assistance, the accuracy of the trainees increased from 0.712 to 0.886. Conclusion: A computer-aided diagnostic model based on deep learning was developed for IVCM images; it rapidly recognizes the layers of corneal images and classifies them as normal or abnormal. This model can increase the efficiency of clinical diagnosis and assist physicians in training and learning for clinical purposes.
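The two-model design above, a layer recognizer followed by a per-layer normal/abnormal classifier, can be sketched as a dispatch pipeline. The function and variable names are hypothetical, and the stub predictors stand in for the trained networks.

```python
from typing import Callable, Dict

LAYERS = ("epithelium", "Bowman's membrane", "stroma", "endothelium")

def two_stage_diagnosis(image,
                        layer_model: Callable,
                        abnormality_models: Dict[str, Callable]):
    """First recognise the corneal layer, then apply that layer's own
    normal/abnormal classifier, mirroring the two-model design."""
    layer = layer_model(image)
    if layer not in abnormality_models:
        raise ValueError(f"unknown layer: {layer}")
    return layer, abnormality_models[layer](image)

# Stubs standing in for the trained networks (illustrative only).
layer_stub = lambda img: "stroma"
abn_stubs = {name: (lambda img: "normal") for name in LAYERS}
```

Routing each image through a classifier specialised to its layer is one plausible reading of the per-layer accuracies reported above; the abstract does not specify how the two models are chained.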

4.
Lancet Digit Health ; 3(11): e697-e706, 2021 11.
Article in English | MEDLINE | ID: mdl-34538736

ABSTRACT

BACKGROUND: Inadequate bowel preparation is associated with a decrease in adenoma detection rate (ADR). A deep learning-based bowel preparation assessment system based on the Boston bowel preparation scale (BBPS) was previously established to calculate an automatic BBPS (e-BBPS) score (range 0-20). The aims of this study were to investigate whether there was a statistically inverse relationship between the e-BBPS score and the ADR, and to determine the threshold e-BBPS score for adequate bowel preparation in screening colonoscopy. METHODS: In this prospective, observational study, we trained and internally validated the e-BBPS system using retrospective colonoscopy images and videos from the Endoscopy Center of Wuhan University, annotated by endoscopists. We externally validated the system using colonoscopy images and videos from the First People's Hospital of Yichang and the Third Hospital of Wuhan. To prospectively validate the system, we recruited consecutive patients aged between 18 and 75 years undergoing colonoscopy at Renmin Hospital of Wuhan University. The exclusion criteria were: contraindication to colonoscopy, familial polyposis syndrome, inflammatory bowel disease, history of colorectal surgery or colorectal cancer, known or suspected bowel obstruction or perforation, pregnancy or lactation, inability to achieve caecal intubation, and lumen obstruction. We performed the colonoscopy procedures and collected withdrawal videos, which were reviewed, and the e-BBPS system was applied to all colon segments. The primary outcome was ADR, defined as the proportion of patients with one or more conventional adenomas detected during colonoscopy. We calculated the ADR for each e-BBPS score and performed a correlation analysis using Spearman analysis. FINDINGS: From May 11 to Aug 10, 2020, 616 patients underwent screening colonoscopies, which were evaluated with the system.
There was a significant inverse correlation between the e-BBPS score and ADR (Spearman's rank correlation -0.976, p<0.010). The ADRs for e-BBPS scores 1-8 were 28.57%, 28.68%, 26.79%, 19.19%, 17.57%, 17.07%, 14.81%, and 0%, respectively. According to the 25% ADR standard for screening colonoscopy, an e-BBPS score of 3 was set as the threshold to guarantee an ADR of more than 25%, and thus high-quality endoscopy. Patients with scores of more than 3 had a significantly lower ADR than those with a score of 3 or less (ADR 15.93% vs 28.03%; odds ratio 0.43, 95% CI 0.28-0.66, p<0.001). INTERPRETATION: The e-BBPS system has the potential to provide a more objective and refined threshold for quantifying adequate bowel preparation. FUNDING: Project of Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision and Hubei Province Major Science and Technology Innovation Project.
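The reported correlation can be reproduced from the per-score ADRs above with the standard tie-free Spearman formula, rho = 1 - 6*sum(d_i^2) / (n*(n^2 - 1)); the helper below is a minimal stdlib sketch, not the authors' analysis code.

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for tie-free data:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

scores = list(range(1, 9))  # e-BBPS scores 1-8
adr = [28.57, 28.68, 26.79, 19.19, 17.57, 17.07, 14.81, 0.0]  # reported ADR (%)
rho = spearman_rho(scores, adr)  # rounds to -0.976, matching the paper
```

Only scores 1 and 2 swap rank order, which is why the coefficient sits so close to -1.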


Subject(s)
Adenoma/diagnosis , Colon , Colonoscopy/methods , Colorectal Neoplasms/diagnosis , Deep Learning , Mass Screening/methods , Models, Biological , Adolescent , Adult , Aged , Colon/pathology , Colorectal Neoplasms/pathology , Early Detection of Cancer/methods , Female , Humans , Middle Aged , Prospective Studies , Retrospective Studies , Young Adult
5.
Transl Vis Sci Technol ; 10(4): 22, 2021 04 01.
Article in English | MEDLINE | ID: mdl-34004002

ABSTRACT

Purpose: The purpose of this study was to construct a deep learning system for rapidly and accurately screening retinal detachment (RD), vitreous detachment (VD), and vitreous hemorrhage (VH) in ophthalmic ultrasound in real time. Methods: We used a deep convolutional neural network to develop a system to screen multiple abnormal findings in ophthalmic ultrasonography, with 3,580 images for classification and 941 images for segmentation. Sixty-two videos were used as the real-time test dataset. External data containing 598 images were also used for validation, and another 155 images were collected to compare the performance of the model with that of experts. In addition, a study was conducted to assess the effect of the model on improving lesion recognition by trainees. Results: The model achieved accuracies of 0.94, 0.90, 0.92, 0.94, and 0.91 in recognizing normal, VD, VH, RD, and other lesions, respectively. Compared with the ophthalmologists, the model achieved an accuracy of 0.73 in classifying RD, VD, and VH, performing better than most experts (P < 0.05). On the videos, the model had an accuracy of 0.81. With model assistance, the accuracy of the trainees improved from 0.84 to 0.94. Conclusions: The model could serve as a screening tool to rapidly identify patients with RD, VD, and VH, and it also has the potential to be a good tool to assist training. Translational Relevance: We developed a deep learning model to make ophthalmic ultrasound screening more accurate and efficient.
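A real-time video screen like the one above must turn per-frame classifications into a single video-level call. One simple aggregation rule is a majority vote over frames; the abstract does not state which rule the authors used, so this is an illustrative assumption.

```python
from collections import Counter

def video_verdict(frame_preds):
    """Aggregate per-frame labels into one video-level label by majority
    vote; ties break toward the label seen first (Counter insertion order)."""
    return Counter(frame_preds).most_common(1)[0][0]

preds = ["normal", "RD", "RD", "VH", "RD"]  # hypothetical per-frame output
```

Here `video_verdict(preds)` returns "RD", since retinal detachment is flagged in 3 of 5 frames. Smoothing over frames also suppresses single-frame misclassifications, which matters at video accuracies around 0.81.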


Subject(s)
Deep Learning , Retinal Detachment , Vitreous Detachment , Humans , Neural Networks, Computer , Ultrasonography , Vitreous Detachment/diagnostic imaging
6.
EBioMedicine ; 65: 103238, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33639404

ABSTRACT

BACKGROUND: Detailed evaluation of the bile duct (BD) is a main focus during endoscopic ultrasound (EUS). The aim of this study was to develop a system to augment EUS BD scanning. METHODS: The scanning procedure was divided into 4 stations. We developed a station classification model and a BD segmentation model with 10,681 images and 2,529 images, respectively. For internal validation, 1,704 images were used for classification and 667 for segmentation; for video validation, 264 and 517 video clips were used, respectively. For the man-machine contest, an independent dataset containing 120 images was used, and 799 images from two other hospitals were used for external validation. A crossover study was conducted to evaluate the system's effect on reducing the difficulty of ultrasound image interpretation. FINDINGS: For classification, the model achieved an accuracy of 93.3% on the image set and 90.1% on the video set. For segmentation, the model had a Dice coefficient of 0.77 on the image set, and a sensitivity of 89.48% and specificity of 82.3% on the video set. In external validation, the model achieved 82.6% accuracy in classification. In the man-machine contest, the models achieved 88.3% accuracy in classification and a Dice coefficient of 0.72 in BD segmentation, comparable to that of experts. In the crossover study, trainees' accuracy improved from 60.8% to 76.3% (P < 0.01, 95% CI 20.9-27.2). INTERPRETATION: We developed a deep learning-based system to augment EUS BD scanning. FUNDING: Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Hubei Province Major Science and Technology Innovation Project, National Natural Science Foundation of China.
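The Dice coefficient used to score the BD segmentation above is the standard overlap metric 2|A∩B| / (|A| + |B|); a minimal sketch on flattened binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as flat
    sequences of 0/1: 2|A ∩ B| / (|A| + |B|). Two empty masks score 1.0."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

pred  = [1, 1, 1, 0, 0]  # hypothetical predicted BD mask
truth = [0, 1, 1, 1, 0]  # hypothetical expert annotation
```

With this toy pair, `dice(pred, truth)` is 4/6 ≈ 0.667; the reported scores of 0.77 (image set) and 0.72 (contest) are on the same 0-1 scale, where 1.0 is perfect overlap.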


Subject(s)
Bile Ducts/diagnostic imaging , Deep Learning , Endosonography/methods , Bile Duct Diseases/diagnosis , Databases, Factual , Humans , Models, Educational
7.
Sci Rep ; 10(1): 19196, 2020 11 05.
Article in English | MEDLINE | ID: mdl-33154542

ABSTRACT

Computed tomography (CT) is the preferred imaging method for diagnosing 2019 novel coronavirus (COVID-19) pneumonia. We aimed to construct a deep learning-based system for detecting COVID-19 pneumonia on high-resolution CT. For model development and validation, 46,096 anonymized images from 106 admitted patients, including 51 patients with laboratory-confirmed COVID-19 pneumonia and 55 control patients with other diseases, were retrospectively collected at Renmin Hospital of Wuhan University. Twenty-seven consecutive prospective patients at Renmin Hospital of Wuhan University were enrolled to compare the efficiency of radiologists in diagnosing COVID-19 pneumonia with that of the model. An external test was conducted at Qianjiang Central Hospital to estimate the system's robustness. The model achieved a per-patient accuracy of 95.24% and a per-image accuracy of 98.85% on the internal retrospective dataset. For the 27 internal prospective patients, the system achieved performance comparable to that of an expert radiologist. On the external dataset, it achieved an accuracy of 96%. With the assistance of the model, the reading time of radiologists decreased by 65%. The deep learning model showed performance comparable to that of an expert radiologist and greatly improved the efficiency of radiologists in clinical practice.
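Reporting both per-image and per-patient accuracy implies some rule for rolling a patient's many CT slices up into one patient-level call. A simple fraction-of-flagged-slices rule is sketched below; the threshold and the rule itself are illustrative assumptions, since the abstract does not describe the aggregation scheme.

```python
def patient_positive(image_preds, threshold=0.5):
    """Call a patient positive when the fraction of their CT images flagged
    as COVID-19 pneumonia exceeds `threshold` (illustrative rule only)."""
    flagged = sum(1 for p in image_preds if p == "positive")
    return flagged / len(image_preds) > threshold
```

For example, a patient with 2 of 3 slices flagged is called positive, while one with 1 of 3 is not. Because each patient contributes hundreds of slices, per-patient accuracy (95.24%) and per-image accuracy (98.85%) can legitimately differ.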


Subject(s)
Coronavirus Infections/complications , Deep Learning , Image Processing, Computer-Assisted/methods , Pneumonia, Viral/complications , Pneumonia/complications , Pneumonia/diagnostic imaging , Signal-To-Noise Ratio , Tomography, X-Ray Computed , Adult , COVID-19 , Female , Humans , Male , Middle Aged , Pandemics , Retrospective Studies