Results 1 - 20 of 59
1.
BMC Gastroenterol ; 24(1): 257, 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39123140

ABSTRACT

BACKGROUND: To construct deep learning models for colonoscopy quality control using different architectures and explore their decision-making mechanisms. METHODS: A total of 4,189 colonoscopy images were collected from two medical centers, covering different levels of bowel cleanliness, the presence of polyps, and the cecum. Using these data, eight pre-trained models based on CNN and Transformer architectures underwent transfer learning and fine-tuning. Model performance was evaluated using metrics such as AUC, precision, and F1 score. Perceptual hash functions were employed to detect image changes, enabling real-time monitoring of colonoscopy withdrawal speed. Model interpretability was analyzed using techniques such as Grad-CAM and SHAP. Finally, the best-performing model was converted to ONNX format and deployed on device terminals. RESULTS: The EfficientNetB2 model outperformed the other CNN- and Transformer-based architectures on the validation set, achieving an accuracy of 0.992, with precision, recall, and F1 score of 0.991, 0.989, and 0.990, respectively. On the test set, it achieved an average AUC of 0.996, with a precision of 0.948 and a recall of 0.952. Interpretability analysis showed the specific image regions the model used for decision-making. The deployed ONNX model achieved an average inference speed of over 60 frames per second on device terminals. CONCLUSIONS: The AI-assisted quality system, based on the EfficientNetB2 model, integrates four key quality control indicators for colonoscopy. This integration enables medical institutions to comprehensively manage and enhance these indicators using a single model, showing promising potential for clinical application.
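The abstract does not give implementation details for the perceptual-hash step; as a rough illustration of the idea (not the authors' code), a difference hash can be computed per frame and the Hamming distance between consecutive frames used as a proxy for how quickly the scene changes during withdrawal. The `hash_size` and any speed threshold are placeholders.

```python
from PIL import Image

def dhash(frame, hash_size=8):
    """Difference hash: downscale, grayscale, then compare horizontally adjacent pixels."""
    img = frame.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits.append(1 if left > right else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A large hash distance between consecutive frames suggests rapid scene change;
# a sustained run of large distances could be flagged as excessive withdrawal speed.
```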


Subject(s)
Colonoscopy , Deep Learning , Quality Control , Colonoscopy/standards , Humans , Colonic Polyps/diagnostic imaging , Colonic Polyps/diagnosis
2.
Medicine (Baltimore) ; 103(27): e38752, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38968516

ABSTRACT

The JNET classification, combined with magnified narrowband imaging (NBI), is essential for predicting the histology of colorectal polyps and guiding personalized treatment strategies. Despite its recognized utility, the diagnostic efficacy of JNET classification using NBI with dual focus (DF) magnification requires exploration in the Vietnamese context. This study aimed to investigate the diagnostic performance of the JNET classification with the NBI-DF mode in predicting the histology of colorectal polyps in Vietnam. A cross-sectional study was conducted at the University Medical Center in Ho Chi Minh City, Vietnam. During real-time endoscopy, endoscopists evaluated the lesion characteristics and recorded optical diagnoses using the dual focus mode magnification according to the JNET classification. En bloc lesion resection (endoscopic or surgical) provided the final pathology, serving as the reference standard for optical diagnoses. A total of 739 patients with 1353 lesions were recruited between October 2021 and March 2023. The overall concordance with the JNET classification was 86.9%. Specificities and positive predictive values for JNET types were: type 1 (95.7%, 88.3%); type 2A (81.4%, 90%); type 2B (96.6%, 54.7%); and type 3 (99.9%, 93.3%). The sensitivity and negative predictive value for differentiating neoplastic from non-neoplastic lesions were 97.8% and 88.3%, respectively. However, the sensitivity for distinguishing malignant from benign neoplasia was lower at 64.1%, despite a specificity of 95.9%. Notably, the specificity and positive predictive value for identifying deep submucosal cancer were high at 99.8% and 93.3%. In Vietnam, applying the JNET classification with NBI-DF demonstrates significant value in predicting the histology of colorectal polyps. This classification guides treatment decisions and prevents unnecessary surgeries.
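For readers re-deriving figures like those above, the reported values follow from the standard 2x2 contingency table; a minimal sketch with hypothetical counts (not the study data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Example with made-up counts, purely illustrative:
print(diagnostic_metrics(tp=880, fp=120, fn=20, tn=333))
```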


Subject(s)
Colonic Polyps , Colonoscopy , Narrow Band Imaging , Humans , Narrow Band Imaging/methods , Cross-Sectional Studies , Vietnam , Female , Male , Middle Aged , Colonic Polyps/diagnostic imaging , Colonic Polyps/classification , Colonic Polyps/diagnosis , Colonic Polyps/pathology , Colonoscopy/methods , Aged , Adult , Sensitivity and Specificity , Colorectal Neoplasms/diagnosis , Colorectal Neoplasms/diagnostic imaging , Colorectal Neoplasms/classification , Colorectal Neoplasms/pathology , Predictive Value of Tests , Southeast Asian People , East Asian People
3.
Sci Rep ; 14(1): 15478, 2024 07 05.
Article in English | MEDLINE | ID: mdl-38969765

ABSTRACT

Colorectal cancer (CRC) is a common digestive system tumor with high morbidity and mortality worldwide. Computer-assisted colonoscopy technology for detecting polyps is relatively mature, but it still faces challenges such as missed or false detections. Improving the accuracy of polyp detection is therefore key to effective colonoscopy. To address this problem, this paper proposes an improved YOLOv5-based polyp detection method for colorectal cancer. The method incorporates a new structure, called P-C3, into the backbone and neck networks of the model to enhance feature expression. In addition, a contextual feature augmentation module is introduced at the bottom of the backbone network to enlarge the receptive field for multi-scale feature information and to focus on polyp features through a coordinate attention mechanism. The experimental results show that, compared with several traditional object detection algorithms, the proposed model offers significant advantages in polyp detection accuracy, especially in recall, largely addressing the problem of missed polyps. This study should help improve endoscopists' polyp/adenoma detection rates during colonoscopy and has practical significance for clinical work.
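The paper's P-C3 and contextual feature augmentation modules are not specified in the abstract; the PyTorch sketch below only illustrates the coordinate-attention idea it mentions (pool along height and width separately, then re-weight the feature map) and is not the authors' implementation. The reduction ratio is a placeholder.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Sketch of a coordinate-attention block: direction-aware pooling followed by re-weighting."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        xh = self.pool_h(x)                               # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)           # (B, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))               # attention along height
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # attention along width
        return x * ah * aw
```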


Subject(s)
Algorithms , Colonic Polyps , Colonoscopy , Colorectal Neoplasms , Humans , Colonoscopy/methods , Colonic Polyps/diagnosis , Colonic Polyps/diagnostic imaging , Colonic Polyps/pathology , Colorectal Neoplasms/diagnosis , Neural Networks, Computer , Semantics , Image Interpretation, Computer-Assisted/methods
4.
Comput Biol Med ; 179: 108930, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39067285

ABSTRACT

Colorectal polyps serve as potential precursors of colorectal cancer and automating polyp segmentation aids physicians in accurately identifying potential polyp regions, thereby reducing misdiagnoses and missed diagnoses. However, existing models often fall short in accurately segmenting polyps due to the high degree of similarity between polyp regions and surrounding tissue in terms of color, texture, and shape. To address this challenge, this study proposes a novel three-stage polyp segmentation network, named Reverse Attention Feature Purification with Pyramid Vision Transformer (RAFPNet), which adopts an iterative feedback UNet architecture to refine polyp saliency maps for precise segmentation. Initially, a Multi-Scale Feature Aggregation (MSFA) module is introduced to generate preliminary polyp saliency maps. Subsequently, a Reverse Attention Feature Purification (RAFP) module is devised to effectively suppress low-level surrounding tissue features while enhancing high-level semantic polyp information based on the preliminary saliency maps. Finally, the UNet architecture is leveraged to further refine the feature maps in a coarse-to-fine approach. Extensive experiments conducted on five widely used polyp segmentation datasets and three video polyp segmentation datasets demonstrate the superior performance of RAFPNet over state-of-the-art models across multiple evaluation metrics.
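As a rough PyTorch sketch of the reverse-attention idea described above (invert the coarse saliency map so the block concentrates on regions it missed, then refine residually), assuming logits-valued saliency maps; the channel sizes are placeholders and this is not the RAFPNet code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttentionRefine(nn.Module):
    """Minimal reverse-attention refinement step over one feature level."""
    def __init__(self, in_channels):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
        )

    def forward(self, features, coarse_logits):
        coarse = F.interpolate(coarse_logits, size=features.shape[2:],
                               mode="bilinear", align_corners=False)
        reverse = 1.0 - torch.sigmoid(coarse)            # highlight regions the coarse map missed
        return self.refine(features * reverse) + coarse  # residual refinement of the saliency map
```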


Subject(s)
Colonic Polyps , Humans , Colonic Polyps/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Algorithms
5.
Eur J Gastroenterol Hepatol ; 36(9): 1087-1092, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38916233

ABSTRACT

Colon capsule endoscopy (CCE) is a well-known method for the detection of colorectal lesions. Nevertheless, there are no studies reporting the accuracy of TOP 100, a CCE software tool, for the automatic detection of colorectal lesions in CCE. We aimed to evaluate the performance of TOP 100 in detecting colorectal lesions, compared with classic reading, in patients who underwent CCE for incomplete colonoscopy. This was a retrospective cohort study of adult patients who underwent CCE (PillCam COLON 2; Medtronic) for incomplete colonoscopy. Blinded to each other's evaluations, one experienced reader analyzed the TOP 100 images and the other performed classic reading to identify colorectal lesions. Detection of colorectal lesions, namely polyps, angioectasia, blood, diverticula, erosions/ulcers, neoplasia, and subepithelial lesions, was assessed, and TOP 100 performance was evaluated against the gold standard (classic reading). A total of 188 CCEs were included. The prevalences of colorectal lesions, polyps, angioectasia, blood, diverticula, erosions/ulcers, neoplasia, and subepithelial lesions were 77.7%, 54.3%, 8.5%, 1.6%, 50.0%, 0.5%, 0.5%, and 1.1%, respectively. TOP 100 had a sensitivity of 92.5%, specificity of 69.1%, negative predictive value of 72.5%, positive predictive value of 91.2%, and accuracy of 87.2% for detecting colorectal lesions. TOP 100 had a sensitivity of 89.2%, specificity of 84.9%, negative predictive value of 86.9%, positive predictive value of 87.5%, and accuracy of 87.2% in detecting polyps. All colorectal lesions other than polyps were identified with 100% accuracy by TOP 100. TOP 100 has been shown to be a simple and useful tool in assisting the reader in the prompt identification of colorectal lesions in CCE.


Subject(s)
Capsule Endoscopy , Colonoscopy , Colorectal Neoplasms , Predictive Value of Tests , Humans , Capsule Endoscopy/methods , Female , Retrospective Studies , Male , Middle Aged , Aged , Colorectal Neoplasms/diagnosis , Colonoscopy/methods , Colonic Polyps/diagnosis , Colonic Polyps/pathology , Colonic Polyps/diagnostic imaging , Adult , Software , Reproducibility of Results , Sensitivity and Specificity , Image Interpretation, Computer-Assisted , Aged, 80 and over
6.
Ann Intern Med ; 177(7): 919-928, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38768453

ABSTRACT

BACKGROUND: Computer-aided diagnosis (CADx) allows prediction of polyp histology during colonoscopy, which may reduce unnecessary removal of nonneoplastic polyps. However, the potential benefits and harms of CADx are still unclear. PURPOSE: To quantify the benefit and harm of using CADx in colonoscopy for the optical diagnosis of small (≤5-mm) rectosigmoid polyps. DATA SOURCES: Medline, Embase, and Scopus were searched for articles published before 22 December 2023. STUDY SELECTION: Histologically verified diagnostic accuracy studies that evaluated the real-time performance of physicians in predicting neoplastic change of small rectosigmoid polyps without or with CADx assistance during colonoscopy. DATA EXTRACTION: The clinical benefit and harm were estimated on the basis of accuracy values of the endoscopist before and after CADx assistance. The certainty of evidence was assessed using the GRADE (Grading of Recommendations Assessment, Development and Evaluation) framework. The outcome measure for benefit was the proportion of polyps predicted to be nonneoplastic that would avoid removal with the use of CADx. The outcome measure for harm was the proportion of neoplastic polyps that would be not resected and left in situ due to an incorrect diagnosis with the use of CADx. Histology served as the reference standard for both outcomes. DATA SYNTHESIS: Ten studies, including 3620 patients with 4103 small rectosigmoid polyps, were analyzed. The studies that assessed the performance of CADx alone (9 studies; 3237 polyps) showed a sensitivity of 87.3% (95% CI, 79.2% to 92.5%) and specificity of 88.9% (CI, 81.7% to 93.5%) in predicting neoplastic change. In the studies that compared histology prediction performance before versus after CADx assistance (4 studies; 2503 polyps), there was no difference in the proportion of polyps predicted to be nonneoplastic that would avoid removal (55.4% vs. 58.4%; risk ratio [RR], 1.06 [CI, 0.96 to 1.17]; moderate-certainty evidence) or in the proportion of neoplastic polyps that would be erroneously left in situ (8.2% vs. 7.5%; RR, 0.95 [CI, 0.69 to 1.33]; moderate-certainty evidence). LIMITATION: The application of optical diagnosis was only simulated, potentially altering the decision-making process of the operator. CONCLUSION: Computer-aided diagnosis provided no incremental benefit or harm in the management of small rectosigmoid polyps during colonoscopy. PRIMARY FUNDING SOURCE: European Commission. (PROSPERO: CRD42023402197).
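The risk ratios and confidence intervals quoted above are standard quantities; for a single 2x2 comparison they can be reproduced with the log-scale (Katz) interval, as in this illustrative sketch (the counts are hypothetical, not the pooled study data):

```python
import math

def risk_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Risk ratio of group A vs. group B with an approximate 95% CI on the log scale."""
    rr = (events_a / total_a) / (events_b / total_b)
    se_log = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts, purely illustrative:
print(risk_ratio_ci(events_a=584, total_a=1000, events_b=554, total_b=1000))
```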


Subject(s)
Colonic Polyps , Colonoscopy , Diagnosis, Computer-Assisted , Humans , Colonic Polyps/pathology , Colonic Polyps/diagnostic imaging , Colorectal Neoplasms/pathology , Colorectal Neoplasms/diagnosis
7.
J Pak Med Assoc ; 74(4 (Supple-4)): S165-S170, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38712427

ABSTRACT

In the last few years, Artificial Intelligence (AI) has emerged as a valuable tool in colorectal cancer care, transforming its management at different stages. In early detection and diagnosis, AI leverages its strength in imaging analysis, scrutinizing CT scans, MRI, and colonoscopy views to identify polyps and tumors; this ability enables timely and accurate diagnoses and earlier initiation of treatment. AI has also aided personalized treatment planning through its ability to integrate diverse patient data, including tumor characteristics, medical history, and genetic information. Integrating AI into clinical decision support systems supports evidence-based treatment strategy suggestions in multidisciplinary clinical settings, thereby improving patient outcomes. This narrative review explores the multifaceted role of AI, spanning early detection of colorectal cancer, personalized treatment planning, polyp detection, lymph node evaluation, cancer staging, robotic colorectal surgery, and training of colorectal surgeons.


Subject(s)
Artificial Intelligence , Colorectal Neoplasms , Humans , Colorectal Neoplasms/pathology , Colorectal Neoplasms/therapy , Colorectal Neoplasms/diagnosis , Early Detection of Cancer/methods , Neoplasm Staging , Robotic Surgical Procedures/methods , Colonoscopy/methods , Colonic Polyps/pathology , Colonic Polyps/diagnostic imaging , Colonic Polyps/diagnosis , Magnetic Resonance Imaging/methods , Decision Support Systems, Clinical
8.
Comput Med Imaging Graph ; 115: 102390, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38714018

ABSTRACT

Colonoscopy is the procedure of choice to diagnose, screen for, and treat cancer of the colon and rectum, from early detection of small precancerous lesions (polyps) to confirmation of malignant masses. However, the high variability of the organ's appearance and the complex shape of both the colon wall and the structures of interest make this exploration difficult. Learned visuospatial and perceptual abilities mitigate technical limitations in clinical practice through proper estimation of intestinal depth. This work introduces a novel methodology to estimate colon depth maps for single frames from monocular colonoscopy videos. The generated depth map is inferred from the shading variation of the colon wall with respect to the light source, as learned from a realistic synthetic database. Briefly, a classic convolutional neural network architecture is trained from scratch to estimate the depth map, improving sharp depth estimation at haustral folds and polyps through a custom loss function that minimizes the estimation error at edges and curvatures. The network was trained on a custom synthetic colonoscopy database constructed and released here, composed of 248,400 frames (47 videos) with pixel-level depth annotations. This collection comprises 5 subsets of videos with progressively higher levels of visual complexity. Evaluation of the depth estimation on the synthetic database reached a threshold accuracy of 95.65% and a mean RMSE of 0.451 cm, while a qualitative assessment with a real database showed consistent depth estimations, visually evaluated by the expert gastroenterologist coauthoring this paper. Finally, the method achieved competitive performance with respect to another state-of-the-art method on a public synthetic database and comparable results against five other state-of-the-art methods on a set of images. Additionally, three-dimensional reconstructions demonstrated useful approximations of the gastrointestinal tract geometry. Code for reproducing the reported results and the dataset are available at https://github.com/Cimalab-unal/ColonDepthEstimation.
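The threshold accuracy and RMSE reported above are the usual monocular-depth metrics; a minimal NumPy sketch of how they are typically computed (assuming valid, positive depth maps in the same units):

```python
import numpy as np

def depth_metrics(pred, gt, thresh=1.25):
    """RMSE (in the maps' units, e.g. cm) and threshold accuracy:
    fraction of pixels with max(pred/gt, gt/pred) < thresh."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    ratio = np.maximum(pred / gt, gt / pred)
    delta = float(np.mean(ratio < thresh))
    return rmse, delta
```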


Subject(s)
Colon , Colonoscopy , Databases, Factual , Humans , Colonoscopy/methods , Colon/diagnostic imaging , Neural Networks, Computer , Colonic Polyps/diagnostic imaging , Image Processing, Computer-Assisted/methods
9.
Med Image Anal ; 96: 103195, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38815359

ABSTRACT

Colorectal cancer is one of the most common cancers in the world. While colonoscopy is an effective screening technique, navigating an endoscope through the colon to detect polyps is challenging. A 3D map of the observed surfaces could enhance the identification of unscreened colon tissue and serve as a training platform. However, reconstructing the colon from video footage remains difficult. Learning-based approaches hold promise as robust alternatives, but necessitate extensive datasets. Establishing a benchmark dataset, the 2022 EndoVis sub-challenge SimCol3D aimed to facilitate data-driven depth and pose prediction during colonoscopy. The challenge was hosted as part of MICCAI 2022 in Singapore. Six teams from around the world and representatives from academia and industry participated in the three sub-challenges: synthetic depth prediction, synthetic pose prediction, and real pose prediction. This paper describes the challenge, the submitted methods, and their results. We show that depth prediction from synthetic colonoscopy images is robustly solvable, while pose estimation remains an open research question.


Subject(s)
Colonoscopy , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Colorectal Neoplasms/diagnostic imaging , Colonic Polyps/diagnostic imaging
10.
Comput Biol Med ; 177: 108569, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38781640

ABSTRACT

Accurate segmentation of polyps in colonoscopy images has gained significant attention in recent years, given its crucial role in automated colorectal cancer diagnosis. Many existing deep learning-based methods follow a one-stage processing pipeline, often involving feature fusion across different levels or utilizing boundary-related attention mechanisms. Drawing on the success of applying Iterative Feedback Units (IFU) in image polyp segmentation, this paper proposes FlowICBNet by extending the IFU to the domain of video polyp segmentation. By harnessing the unique capabilities of IFU to propagate and refine past segmentation results, our method proves effective in mitigating challenges linked to the inherent limitations of endoscopic imaging, notably the presence of frequent camera shake and frame defocusing. Furthermore, in FlowICBNet, we introduce two pivotal modules: Reference Frame Selection (RFS) and Flow Guided Warping (FGW). These modules play a crucial role in filtering and selecting the most suitable historical reference frames for the task at hand. The experimental results on a large video polyp segmentation dataset demonstrate that our method can significantly outperform state-of-the-art methods by notable margins achieving an average metrics improvement of 7.5% on SUN-SEG-Easy and 7.4% on SUN-SEG-Hard. Our code is available at https://github.com/eraserNut/ICBNet.
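The Flow Guided Warping module is not detailed in the abstract; the sketch below shows one common way to warp a previous-frame mask to the current frame with a dense optical-flow field via `torch.nn.functional.grid_sample`. The flow convention (current-to-previous offsets, in pixels) is an assumption, and this is not the FlowICBNet code.

```python
import torch
import torch.nn.functional as F

def warp_previous_mask(prev_mask, flow):
    """Warp a previous-frame mask (B,1,H,W) to the current frame.
    flow: (B,2,H,W) pixel offsets from each current-frame pixel to its
    corresponding location in the previous frame (x offset first)."""
    b, _, h, w = prev_mask.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=prev_mask.device, dtype=prev_mask.dtype),
        torch.arange(w, device=prev_mask.device, dtype=prev_mask.dtype),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).unsqueeze(0)   # (1,2,H,W), x first
    coords = base + flow                               # sampling locations in the previous frame
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0            # normalise to [-1, 1] for grid_sample
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)               # (B,H,W,2)
    return F.grid_sample(prev_mask, grid, mode="bilinear",
                         padding_mode="zeros", align_corners=True)
```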


Subject(s)
Colonic Polyps , Humans , Colonic Polyps/diagnostic imaging , Colonoscopy/methods , Deep Learning , Image Interpretation, Computer-Assisted/methods , Video Recording , Colorectal Neoplasms/diagnostic imaging , Algorithms , Image Processing, Computer-Assisted/methods
11.
Sci Rep ; 14(1): 11678, 2024 05 22.
Article in English | MEDLINE | ID: mdl-38778219

ABSTRACT

Polyps are abnormal tissue clumps growing primarily on the inner linings of the gastrointestinal tract. While such clumps are generally harmless, they can potentially evolve into pathological tumors and thus require long-term observation and monitoring. Polyp segmentation in gastrointestinal endoscopy images is an important stage for polyp monitoring and subsequent treatment. However, this segmentation task faces multiple challenges: the low contrast of polyp boundaries, varied polyp appearance, and the co-occurrence of multiple polyps. In this paper, an implicit edge-guided cross-layer fusion network (IECFNet) is therefore proposed for polyp segmentation. The encoder-decoder pair generates an initial saliency map, the implicit edge-enhanced context attention module aggregates the feature maps output by the encoder and decoder to generate a rough prediction, and the multi-scale feature reasoning module generates the final predictions. Polyp segmentation experiments were conducted on five popular polyp image datasets (Kvasir, CVC-ClinicDB, ETIS, CVC-ColonDB, and CVC-300), and the experimental results show that the proposed method significantly outperforms a conventional method, especially with an accuracy margin of 7.9% on the ETIS dataset.
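Results on these benchmarks are usually reported with region-overlap metrics such as Dice and IoU; a minimal NumPy sketch for binarized masks, not tied to the paper's exact evaluation protocol:

```python
import numpy as np

def dice_and_iou(pred_mask, gt_mask, eps=1e-7):
    """Dice coefficient and IoU between two binary masks of the same shape."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, gt).sum() + eps)
    return float(dice), float(iou)
```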


Subject(s)
Colonic Polyps , Humans , Colonic Polyps/pathology , Colonic Polyps/diagnostic imaging , Algorithms , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Image Interpretation, Computer-Assisted/methods , Polyps/pathology , Polyps/diagnostic imaging , Endoscopy, Gastrointestinal/methods
12.
J Gastroenterol Hepatol ; 39(8): 1623-1635, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38744667

ABSTRACT

BACKGROUND AND AIM: False positives (FPs) pose a significant challenge in the application of artificial intelligence (AI) for polyp detection during colonoscopy. The study aimed to quantitatively evaluate the impact of the FPs of computer-aided polyp detection (CADe) systems on endoscopists. METHODS: The model's FPs were categorized into four gradients: 0-5, 5-10, 10-15, and 15-20 FPs per minute (FPPM). Fifty-six colonoscopy videos were collected for a crossover study involving 10 endoscopists. The polyp miss rate (PMR) was set as the primary outcome. Subsequently, to further verify the impact of FPPM on the assistance capability of AI in clinical environments, a secondary analysis was conducted on a prospective randomized controlled trial (RCT) from Renmin Hospital of Wuhan University in China from July 1 to October 15, 2020, with the adenoma detection rate (ADR) as the primary outcome. RESULTS: Compared with the routine group, CADe reduced PMR when FPPM was less than 5. However, as FPPM increased, the beneficial effect of CADe gradually weakened. In the secondary analysis of the RCT, a total of 956 patients were enrolled. In the AI-assisted group, ADR was higher when FPPM ≤ 5 compared with FPPM > 5 (CADe group: 27.78% vs 11.90%; P = 0.014; odds ratio [OR], 0.351; 95% confidence interval [CI], 0.152-0.812; COMBO group: 38.40% vs 23.46%; P = 0.029; OR, 0.427; 95% CI, 0.199-0.916). After AI intervention, ADR increased when FPPM ≤ 5 (27.78% vs 14.76%; P = 0.001; OR, 0.399; 95% CI, 0.231-0.690), but no statistically significant difference was found when FPPM > 5 (11.90% vs 14.76%; P = 0.788; OR, 1.111; 95% CI, 0.514-2.403). CONCLUSION: The level of FPs from CADe does affect its effectiveness as an aid to endoscopists, with the best effect when FPPM is less than 5.
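The odds ratios and 95% CIs above are standard 2x2-table quantities; as an illustration only, the Woolf log-scale interval can be computed as follows (the counts are hypothetical, not the trial data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a,b = outcome present/absent in group 1; c,d = outcome present/absent in group 2."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, purely illustrative:
print(odds_ratio_ci(a=30, b=78, c=25, d=185))
```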


Subject(s)
Colonic Polyps , Colonoscopy , Diagnosis, Computer-Assisted , Humans , Colonoscopy/methods , Colonic Polyps/diagnosis , Colonic Polyps/diagnostic imaging , Diagnosis, Computer-Assisted/methods , False Positive Reactions , Male , Prospective Studies , Artificial Intelligence , Female , Middle Aged , Cross-Over Studies , Adenoma/diagnosis , Adenoma/diagnostic imaging
14.
J Gastroenterol Hepatol ; 39(8): 1613-1622, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38710592

ABSTRACT

BACKGROUND AND AIM: The study aims to introduce a novel indicator, effective withdrawal time (WTS), which measures the time spent actively searching for suspicious lesions during colonoscopy, and to compare WTS with the conventional withdrawal time (WT). METHODS: Colonoscopy video data from 472 patients across two hospitals were retrospectively analyzed. WTS was computed through a combination of artificial intelligence (AI) and manual verification, and the manually verified values were compared with those generated by the AI system. Patients were categorized into four groups based on the presence of polyps and whether resections or biopsies were performed. Bland-Altman plots were used to compare AI-computed WTS with manually verified WTS, scatterplots to illustrate WTS within the four groups, between hospitals, and across physicians, and a parallel box plot to depict the proportion of WTS relative to WT within each of the four groups. RESULTS: The study included 472 patients, with a median age of 55 years; 57.8% were male. AI-computed WTS correlated strongly with manually verified WTS (r = 0.918). The parallel box plot revealed significant differences in WTS/WT among the four groups (P < 0.001); the group with no detected polyps had the highest WTS/WT, with a median of 0.69 (interquartile range: 0.40, 0.97). WTS patterns varied between the two hospitals and between senior and junior physicians. CONCLUSIONS: AI-assisted computation of WTS offers a promising alternative to traditional WT for quality control and training assessment in colonoscopy.
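Bland-Altman agreement between AI-computed and manually verified WTS reduces to the mean difference and its 95% limits of agreement; a small NumPy sketch (the input arrays are placeholders, not the study data):

```python
import numpy as np

def bland_altman_limits(ai_wts, manual_wts):
    """Mean bias and 95% limits of agreement between paired measurements (e.g. minutes of WTS)."""
    diff = np.asarray(ai_wts, dtype=float) - np.asarray(manual_wts, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```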


Subject(s)
Artificial Intelligence , Colonoscopy , Humans , Colonoscopy/methods , Male , Middle Aged , Female , Retrospective Studies , Time Factors , Colonic Polyps/diagnosis , Colonic Polyps/pathology , Colonic Polyps/diagnostic imaging , Aged , Adult , Video Recording
16.
Digestion ; 105(4): 280-290, 2024.
Article in English | MEDLINE | ID: mdl-38631318

ABSTRACT

INTRODUCTION: We investigated coexisting lesion types in patients with invasive colorectal cancer (CRC) in a multinational study to better understand the adenoma-carcinoma and serrated pathways in the development of CRC. METHODS: We retrospectively reviewed 3,050 patients enrolled in an international randomized controlled trial (the ATLAS study) evaluating the colorectal polyp detection performance of image-enhanced endoscopy at 11 institutions in four Asian countries/regions. In the current study, a subgroup analysis of the ATLAS study, 92 CRC patients were extracted and compared with 2,958 patients without CRC to examine the effects of age, sex, and coexisting lesion types (high-grade adenoma [HGA], low-grade adenoma with villous component [LGAV], 10 adenomas, adenoma ≥10 mm, sessile serrated lesions [SSLs], and SSLs with dysplasia [SSLD]). Additional analyses of coexisting lesion types were performed according to sex and location of CRC (right- or left-sided). RESULTS: A multivariate analysis showed that HGA (odds ratio [95% confidence interval] 4.29 [2.16-8.18]; p < 0.01), LGAV (3.02 [1.16-7.83], p = 0.02), and age (1.04 [1.01-1.06], p = 0.01) were independently associated with CRC. By sex, the coexisting lesion types significantly associated with CRC were LGAV (5.58 [1.94-16.0], p < 0.01) and HGA (4.46 [1.95-10.20], p < 0.01) in males and HGA (4.82 [1.47-15.80], p < 0.01) in females. Regarding the location of CRC, SSLD (21.9 [1.31-365.0], p = 0.03) was significant for right-sided CRC, and HGA (5.22 [2.39-11.4], p < 0.01) and LGAV (3.46 [1.13-10.6], p = 0.02) were significant for left-sided CRC. CONCLUSIONS: The significant coexisting lesions in CRC differed according to sex and location. These findings may contribute to understanding the pathogenesis of CRC.
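The adjusted odds ratios above come from a multivariate model; as a sketch under the assumption of a standard logistic regression (using statsmodels, with hypothetical covariate columns for age, sex, and the coexisting-lesion indicators), adjusted ORs and 95% CIs could be obtained as follows:

```python
import numpy as np
import statsmodels.api as sm

def adjusted_odds_ratios(X, y):
    """X: (n, p) covariate matrix (e.g. age, sex, HGA, LGAV, SSL indicators); y: (n,) 0/1 CRC status."""
    X = sm.add_constant(np.asarray(X, dtype=float))
    result = sm.Logit(np.asarray(y, dtype=float), X).fit(disp=0)
    odds_ratios = np.exp(result.params)    # per-covariate adjusted ORs
    conf_int = np.exp(result.conf_int())   # 95% CIs on the OR scale
    return odds_ratios, conf_int
```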


Subject(s)
Adenoma , Colonoscopy , Colorectal Neoplasms , Humans , Male , Female , Colorectal Neoplasms/pathology , Middle Aged , Retrospective Studies , Aged , Adenoma/pathology , Adenoma/diagnostic imaging , Adenoma/complications , Colonoscopy/statistics & numerical data , Colonic Polyps/pathology , Colonic Polyps/diagnostic imaging , Colonic Polyps/complications , Sex Factors , Adult , Age Factors
18.
Comput Biol Med ; 172: 108267, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38479197

ABSTRACT

Early detection of colon adenomatous polyps is pivotal in reducing colon cancer risk. In this context, accurately distinguishing between adenomatous polyp subtypes, especially tubular and tubulovillous, from hyperplastic variants is crucial. This study introduces a cutting-edge computer-aided diagnosis system optimized for this task. Our system employs advanced Supervised Contrastive learning to ensure precise classification of colon histopathology images. Significantly, we have integrated the Big Transfer model, which has gained prominence for its exemplary adaptability to visual tasks in medical imaging. Our novel approach discerns between in-class and out-of-class images, thereby elevating its discriminatory power for polyp subtypes. We validated our system using two datasets: a specially curated one and the publicly accessible UniToPatho dataset. The results reveal that our model markedly surpasses traditional deep convolutional neural networks, registering classification accuracies of 87.1% and 70.3% for the custom and UniToPatho datasets, respectively. Such results emphasize the transformative potential of our model in polyp classification endeavors.
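Supervised Contrastive learning, as referenced above, pulls same-label embeddings together while pushing other samples apart; a compact PyTorch sketch of a SupCon-style loss (a generic formulation, not necessarily the paper's exact variant):

```python
import torch
import torch.nn.functional as F

def supcon_loss(embeddings, labels, temperature=0.07):
    """embeddings: (N, D) feature vectors; labels: (N,) class ids.
    Anchors without any positive in the batch are skipped."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                               # (N, N) scaled cosine similarities
    n = z.size(0)
    not_self = ~torch.eye(n, dtype=torch.bool, device=z.device)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    # log-probability of each pair, normalised over all non-self pairs
    exp_sim = torch.exp(sim) * not_self.float()
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))
    pos_counts = positives.sum(dim=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (log_prob * positives.float()).sum(dim=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```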


Subject(s)
Adenomatous Polyps , Colonic Polyps , Humans , Colonic Polyps/diagnostic imaging , Neural Networks, Computer , Diagnosis, Computer-Assisted/methods , Diagnostic Imaging
19.
IEEE J Biomed Health Inform ; 28(7): 4118-4131, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38536686

ABSTRACT

Colon polyps in colonoscopy images exhibit significant differences in color, size, shape, appearance, and location, posing significant challenges to accurate polyp segmentation. In this paper, a Weighted Dual-branch Feature Fusion Network for polyp segmentation, named WDFF-Net, is proposed, adopting HarDNet68 as the backbone network. First, a dual-branch feature fusion architecture is constructed, which includes a shared feature extractor and two feature fusion branches: a Progressive Feature Fusion (PFF) branch and a Scale-aware Feature Fusion (SFF) branch. The branches fuse deep features from multiple layers for different purposes and with different fusion strategies. The PFF branch addresses the under-segmentation and over-segmentation of flat polyps with low edge contrast by iteratively fusing features from low, middle, and high layers. The SFF branch tackles the problem of drastic variations in polyp size and shape, especially missed segmentation of small polyps. These two branches are complementary and play different roles in improving segmentation accuracy. Second, an Object-aware Attention Mechanism (OAM) is proposed to enhance the features of target regions and suppress those of background regions that would otherwise interfere with segmentation performance. Third, a weighted dual-branch segmentation loss function is specifically designed, which dynamically assigns weight factors to the two branch losses to optimize their collaborative training. Experimental results on five public colon polyp datasets demonstrate that the proposed WDFF-Net achieves superior segmentation performance with lower model complexity and faster inference speed, while maintaining good generalization ability.
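The exact form of the weighted dual-branch loss is not given in the abstract; the sketch below shows one plausible formulation (BCE plus soft Dice per branch, combined with a weight factor `w_pff` that training could adjust dynamically), purely as an illustration rather than the paper's loss:

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1.0):
    """Soft Dice loss on sigmoid probabilities; target is a {0,1} mask of shape (B,1,H,W)."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(2, 3))
    union = prob.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

def weighted_dual_branch_loss(pff_logits, sff_logits, target, w_pff=0.5):
    """Weighted sum of the two branch losses; w_pff stands in for the dynamic weight factor."""
    loss_pff = F.binary_cross_entropy_with_logits(pff_logits, target) + dice_loss(pff_logits, target)
    loss_sff = F.binary_cross_entropy_with_logits(sff_logits, target) + dice_loss(sff_logits, target)
    return w_pff * loss_pff + (1.0 - w_pff) * loss_sff
```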


Subject(s)
Colonic Polyps , Image Interpretation, Computer-Assisted , Humans , Colonic Polyps/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Colonoscopy/methods , Algorithms , Neural Networks, Computer , Deep Learning
20.
J Appl Clin Med Phys ; 25(6): e14351, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38551396

ABSTRACT

BACKGROUND: Polyp detection and localization are essential tasks in colonoscopy. U-shaped convolutional neural networks have achieved remarkable segmentation performance on biomedical images, but their lack of long-range dependency modeling limits their receptive fields. PURPOSE: Our goal was to develop and test a novel architecture for polyp segmentation that combines the learning of local information with long-range dependency modeling. METHODS: A novel architecture for polyp segmentation was developed, combining a multi-scale nested UNet structure with an integrated transformer. The proposed network takes advantage of both CNNs and transformers to extract distinct feature information. The transformer layer is embedded between the encoder and decoder of a U-shaped network to learn explicit global context and long-range semantic information. To address the challenge of varying polyp sizes, an MSFF unit was proposed to fuse features at multiple resolutions. RESULTS: Four public datasets and one in-house dataset were used to train and test the model. An ablation study was also conducted to verify each component of the model. On the Kvasir-SEG and CVC-ClinicDB datasets, the proposed model achieved mean Dice scores of 0.942 and 0.950, respectively, more accurate than the other methods. To assess the generalization of the different methods, two cross-dataset validations were performed, in which the proposed model achieved the highest mean Dice score. The results demonstrate that the proposed network has powerful learning and generalization capability, significantly improving segmentation accuracy and outperforming state-of-the-art methods. CONCLUSIONS: The proposed model produced more accurate polyp segmentation than current methods on four public and one in-house datasets. Its ability to segment polyps of different sizes shows its potential for clinical application.
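As a hedged PyTorch sketch of the general pattern the abstract describes (a transformer embedded between a U-shaped encoder and decoder: flatten the encoder's feature map into tokens, run a transformer encoder, reshape back), not the paper's actual architecture; the head count and depth are placeholders:

```python
import torch
import torch.nn as nn

class TransformerBottleneck(nn.Module):
    """Flatten CNN bottleneck features to tokens, apply a transformer encoder, reshape back."""
    def __init__(self, channels, nhead=8, depth=4):
        super().__init__()
        # channels must be divisible by nhead
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                       # x: (B, C, H, W) from the CNN encoder
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C)
        tokens = self.encoder(tokens)           # global context over all spatial positions
        return tokens.transpose(1, 2).reshape(b, c, h, w)
```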


Subject(s)
Colonic Polyps , Colonoscopy , Neural Networks, Computer , Humans , Colonic Polyps/diagnostic imaging , Colonoscopy/methods , Algorithms , Image Processing, Computer-Assisted/methods , Colorectal Neoplasms/diagnostic imaging , Colorectal Neoplasms/pathology , Image Interpretation, Computer-Assisted/methods , Databases, Factual