1.
J Clin Gastroenterol ; 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38457410

ABSTRACT

BACKGROUND: Gastric structure recognition systems are increasingly needed for the accurate diagnosis of gastric lesions in capsule endoscopy. Deep learning, especially with transformer models, has shown great potential for recognizing gastrointestinal (GI) images owing to self-attention. This study aims to establish a model for identifying gastric structures in capsule endoscopy, improving the clinical applicability of deep learning to endoscopic image recognition.

METHODS: A total of 3343 wireless capsule endoscopy videos collected at Nanfang Hospital between 2011 and 2021 were used for unsupervised pretraining, while 2433 were used for training and 118 for validation. Fifteen upper GI structures were selected for quantifying examination quality. We also compared the classification performance of the artificial intelligence (AI) model with that of endoscopists using accuracy, sensitivity, specificity, and positive and negative predictive values.

RESULTS: The transformer-based AI model reached a high level of diagnostic accuracy in gastric structure recognition. In identifying the 15 upper GI structures, it achieved a macro-average accuracy of 99.6% (95% CI: 99.5-99.7), a macro-average sensitivity of 96.4% (95% CI: 95.3-97.5), and a macro-average specificity of 99.8% (95% CI: 99.7-99.9), with a high level of interobserver agreement with endoscopists.

CONCLUSIONS: The transformer-based AI model can evaluate gastric structure information from capsule endoscopy as accurately as endoscopists, which should greatly help doctors make diagnoses from large numbers of images and improve examination efficiency.
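The macro-averaged metrics reported above are per-class metrics averaged over all classes. A minimal numpy sketch, computing them from a multiclass confusion matrix (the function name and row/column convention are illustrative, not from the study):

```python
import numpy as np

def macro_metrics(conf):
    """Macro-averaged accuracy, sensitivity, and specificity from a
    square confusion matrix (rows = true class, columns = predicted)."""
    n = conf.shape[0]
    total = conf.sum()
    accs, sens, specs = [], [], []
    for c in range(n):
        tp = conf[c, c]                 # correctly predicted as class c
        fn = conf[c].sum() - tp         # class c missed
        fp = conf[:, c].sum() - tp      # other classes predicted as c
        tn = total - tp - fn - fp
        accs.append((tp + tn) / total)
        sens.append(tp / (tp + fn) if tp + fn else 0.0)
        specs.append(tn / (tn + fp) if tn + fp else 0.0)
    # macro average: unweighted mean over classes
    return float(np.mean(accs)), float(np.mean(sens)), float(np.mean(specs))
```

With 15 structure classes, `conf` would be a 15×15 matrix; macro averaging weights each anatomical structure equally regardless of how often it appears.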

2.
Front Oncol ; 12: 827991, 2022.
Article in English | MEDLINE | ID: mdl-35387126

ABSTRACT

Purpose: Accurate segmentation of the gross target volume (GTV) from computed tomography (CT) images is a prerequisite for radiotherapy of nasopharyngeal carcinoma (NPC). The task is very challenging because of the low contrast at tumor boundaries and the wide variety of tumor sizes and morphologies across stages. The data source also strongly affects segmentation results. In this paper, we propose a novel three-dimensional (3D) automatic segmentation algorithm that adopts cascaded multiscale local enhancement of convolutional neural networks (CNNs), and we conduct experiments on multi-institutional datasets to address these problems.

Materials and Methods: We retrospectively collected CT images of 257 NPC patients to test the proposed automatic segmentation model and conducted experiments on two additional multi-institutional datasets. The segmentation framework consists of three parts. First, it is built on a 3D Res-UNet backbone with strong segmentation performance. Second, a multiscale dilated convolution block enlarges the receptive field and focuses on the target area and boundary to improve segmentation. Finally, a central localization cascade model for local enhancement concentrates on the GTV region for fine segmentation, improving robustness. The Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average symmetric surface distance (ASSD), and 95% Hausdorff distance (HD95) are used as quantitative evaluation criteria.
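The receptive-field enhancement from dilated convolutions can be illustrated in one dimension: a dilated kernel skips `dilation - 1` samples between taps, so a small kernel covers a wide span without extra parameters. A minimal numpy sketch (the actual model uses 3D convolutions inside a Res-UNet; all names below are hypothetical):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1D convolution with a dilated kernel; the effective
    receptive field spans (len(kernel) - 1) * dilation + 1 samples."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

def multiscale_block(x, kernel, dilations=(1, 2, 4)):
    """Parallel dilated branches cropped to a common length and summed,
    loosely mirroring a multiscale dilated-convolution block."""
    outs = [dilated_conv1d(x, kernel, d) for d in dilations]
    m = min(len(o) for o in outs)
    return sum(o[:m] for o in outs)
```

The multiscale idea is that small dilations capture fine boundary detail while large dilations capture the surrounding context, and the branches are fused into one feature map.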
Results: The experimental results show that, compared with other state-of-the-art methods, our modified 3D Res-UNet backbone achieved the best results on the quantitative metrics DSC, PPV, ASSD, and HD95, reaching 74.49 ± 7.81%, 79.97 ± 13.90%, 1.49 ± 0.65 mm, and 5.06 ± 3.30 mm, respectively. Notably, the receptive-field enhancement mechanism and cascade architecture strongly contribute to stable, high-accuracy automatic segmentation, which is critical for a clinical algorithm. With both components, the final DSC, SEN, ASSD, and HD95 values improved to 76.23 ± 6.45%, 79.14 ± 12.48%, 1.39 ± 5.44 mm, and 4.72 ± 3.04 mm. In addition, the multi-institution experiments demonstrate that the model is robust and generalizable and achieves good performance through transfer learning.

Conclusions: The proposed algorithm accurately segments NPC in CT images from multi-institutional datasets and may thereby improve and facilitate clinical applications.
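The Dice similarity coefficient reported above measures the overlap between predicted and ground-truth masks. A generic numpy sketch of the standard definition (not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 |P ∩ G| / (|P| + |G|), in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks count as perfect agreement
    return 2.0 * intersection / denom if denom else 1.0
```

For 3D segmentation, `pred` and `gt` would be volumetric boolean arrays; the formula is unchanged because the sums run over all voxels.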
