Results 1 - 2 of 2
1.
Gastrointest Endosc; 93(1): 187-192, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32535191

ABSTRACT

BACKGROUND AND AIMS: Capsule endoscopy (CE) is an important modality for the diagnosis and follow-up of Crohn's disease (CD). Ulcer severity at endoscopy is significant for predicting the course of CD. Deep learning has proven accurate in detecting ulcers on CE; however, endoscopic classification of ulcers by deep learning has not been attempted. The aim of our study was to develop a deep learning algorithm for automated grading of CD ulcers on CE.

METHODS: We retrospectively collected CE images of CD ulcers from our CE database. In experiment 1, the severity of each ulcer was graded by 2 capsule readers based on the PillCam CD classification (grades 1-3, from mild to severe), and inter-reader variability was evaluated. In experiment 2, a consensus reading by 3 capsule readers was used to train an ordinal convolutional neural network (CNN) to automatically grade ulcer images, and the resulting algorithm was tested against the consensus reading. A pretraining stage included training the network on images of normal mucosa and ulcerated mucosa.

RESULTS: Overall, our dataset included 17,640 CE images from 49 patients: 7,391 images with mucosal ulcers and 10,249 normal images. A total of 2,598 randomly selected pathologic images were further graded from 1 to 3 according to ulcer severity in the 2 experiments. In experiment 1, overall inter-reader agreement occurred for 31% of the images (345 of 1,108) and for 76% (752 of 989) in distinguishing grades 1 and 3. In experiment 2, the algorithm was trained on 1,242 images. It achieved overall agreement with the consensus reading of 67% (166 of 248) and of 91% (158 of 173) in distinguishing grades 1 and 3. The classification accuracy of the algorithm was 0.91 (95% confidence interval [CI], 0.867-0.954) for grade 1 versus grade 3 ulcers, 0.78 (95% CI, 0.716-0.844) for grade 2 versus grade 3, and 0.624 (95% CI, 0.547-0.701) for grade 1 versus grade 2.

CONCLUSIONS: The CNN achieved high accuracy in detecting severe CD ulcerations. CNN-assisted CE reading in patients with CD can potentially facilitate and improve diagnosis and monitoring.
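The abstract does not specify how the ordinal CNN encodes its three severity grades. A common way to make a network ordinal (a sketch of one standard technique, not necessarily the authors' implementation) is to replace the 3-way softmax target with cumulative binary targets, so that grade 2 errors toward grade 1 or 3 cost less than grade 1 vs. 3 confusions. The helper names below are hypothetical:

```python
def ordinal_encode(grade, num_grades=3):
    """Encode a 1-based grade as cumulative binary targets
    ("is the grade above threshold t?" for each threshold):
    grade 1 -> [0, 0], grade 2 -> [1, 0], grade 3 -> [1, 1].
    These vectors would be the training targets for the CNN's
    sigmoid outputs instead of one-hot softmax labels."""
    return [1 if grade > t else 0 for t in range(1, num_grades)]

def ordinal_decode(threshold_probs, cutoff=0.5):
    """Map the network's per-threshold probabilities back to a grade:
    the predicted grade is 1 plus the number of consecutive
    thresholds the image is predicted to exceed."""
    grade = 1
    for p in threshold_probs:
        if p > cutoff:
            grade += 1
        else:
            break
    return grade
```

With this encoding, the binary sub-tasks mirror the pairwise accuracies reported above: the grade 1 vs. grade 3 distinction spans both thresholds, which is consistent with it being the easiest comparison for both readers and the algorithm.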


Subject(s)
Capsule Endoscopy, Crohn Disease, Crohn Disease/diagnostic imaging, Humans, Intestine, Small, Neural Networks, Computer, Retrospective Studies, Ulcer/diagnostic imaging
2.
Diagnostics (Basel); 12(10), 2022 Oct 14.
Article in English | MEDLINE | ID: mdl-36292178

ABSTRACT

BACKGROUND AND AIMS: The aim of our study was to create an accurate patient-level combined algorithm for identifying ulcers on capsule endoscopy (CE) images from two different capsules.

METHODS: We retrospectively collected CE images from the PillCam-SB3 capsule and the PillCam-Crohn's capsule. Machine learning (ML) algorithms were trained to classify small-bowel CE images as either normal or ulcerated mucosa: a separate model for each capsule type, a cross-domain model (trained on one capsule type and tested on the other), and a combined model.

RESULTS: The dataset included 33,100 CE images: 20,621 PillCam-SB3 images and 12,479 PillCam-Crohn's images, of which 3,582 were colonic images. There were 15,684 normal mucosa images and 17,416 ulcerated mucosa images. While the separate models for each capsule type achieved excellent accuracy (average AUC 0.95 and 0.98, respectively), the cross-domain model achieved a wide range of accuracies (0.569-0.88) with an AUC of 0.93. The combined model achieved the best results, with an average AUC of 0.99 and an average mean patient accuracy of 0.974.

CONCLUSIONS: A combined model for two different capsules provided high and consistent diagnostic accuracy. Creating a holistic AI model for automated capsule reading is an essential part of the refinement required to adapt ML models to clinical practice.
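The abstract reports a patient-level accuracy but does not state how per-image predictions are aggregated into a patient-level call. A minimal sketch of one plausible aggregation rule (the function name and both thresholds are illustrative assumptions, not the paper's method):

```python
def patient_level_call(image_scores, score_cutoff=0.5, positive_fraction=0.05):
    """Flag a patient as having ulcerated mucosa if at least a given
    fraction of their CE frames score above the per-image cutoff.
    Requiring a fraction of positive frames, rather than a single one,
    makes the patient-level call robust to isolated false-positive frames.
    """
    if not image_scores:
        raise ValueError("patient has no CE images")
    positives = sum(1 for s in image_scores if s >= score_cutoff)
    return positives / len(image_scores) >= positive_fraction
```

For example, a study with a single high-scoring frame out of 100 would be called negative under these settings, while a study where 10 of 100 frames score high would be called positive. Per-patient metrics like this are what the reported mean patient accuracy of 0.974 would be computed over.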
