Results 1 - 20 of 180
1.
Article in English | MEDLINE | ID: mdl-38744667

ABSTRACT

BACKGROUND AND AIM: False positives (FPs) pose a significant challenge in the application of artificial intelligence (AI) for polyp detection during colonoscopy. This study aimed to quantitatively evaluate the impact of the FPs of computer-aided polyp detection (CADe) systems on endoscopists. METHODS: The model's FPs were categorized into four gradients: 0-5, 5-10, 10-15, and 15-20 FPs per minute (FPPM). Fifty-six colonoscopy videos were collected for a crossover study involving 10 endoscopists. The polyp miss rate (PMR) was the primary outcome. To further verify the impact of FPPM on the assistance capability of AI in clinical environments, a secondary analysis was conducted on a prospective randomized controlled trial (RCT) from Renmin Hospital of Wuhan University in China, conducted from July 1 to October 15, 2020, with the adenoma detection rate (ADR) as the primary outcome. RESULTS: Compared with the routine group, CADe reduced the PMR when FPPM was less than 5; however, as FPPM increased, the beneficial effect of CADe gradually weakened. In the secondary analysis of the RCT, a total of 956 patients were enrolled. In the AI-assisted groups, ADR was higher when FPPM ≤ 5 than when FPPM > 5 (CADe group: 27.78% vs 11.90%; P = 0.014; odds ratio [OR], 0.351; 95% confidence interval [CI], 0.152-0.812; COMBO group: 38.40% vs 23.46%; P = 0.029; OR, 0.427; 95% CI, 0.199-0.916). After AI intervention, ADR increased when FPPM ≤ 5 (27.78% vs 14.76%; P = 0.001; OR, 0.399; 95% CI, 0.231-0.690), but no statistically significant difference was found when FPPM > 5 (11.90% vs 14.76%; P = 0.788; OR, 1.111; 95% CI, 0.514-2.403). CONCLUSION: The FP level of CADe affects its effectiveness as an aid to endoscopists, with the greatest benefit when FPPM is less than 5.
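The comparisons above are reported as odds ratios with 95% confidence intervals. As a hedged illustration of how such figures are typically derived from a 2 x 2 table (the counts below are placeholders, not the study's data), the standard Woolf log method can be sketched as:

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and approximate 95% CI (Woolf log method) for a 2 x 2 table:
        a = events in group 1, b = non-events in group 1,
        c = events in group 2, d = non-events in group 2."""
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi

    # Illustrative counts only (not taken from the paper):
    print(odds_ratio_ci(10, 62, 25, 65))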

2.
Sci Transl Med ; 16(743): eadk5395, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38630847

ABSTRACT

Endoscopy is the primary modality for detecting asymptomatic esophageal squamous cell carcinoma (ESCC) and precancerous lesions. Improving the detection rate remains challenging. We developed a system based on deep convolutional neural networks (CNNs) for detecting esophageal cancer and precancerous lesions [high-risk esophageal lesions (HrELs)] and validated its efficacy in improving the HrEL detection rate in clinical practice (trial registration ChiCTR2100044126 at www.chictr.org.cn). Between April 2021 and March 2022, 3117 patients ≥50 years old were consecutively recruited from Taizhou Hospital, Zhejiang Province, and randomly assigned 1:1 to an experimental group (CNN-assisted endoscopy) or a control group (unassisted endoscopy) based on block randomization. The primary endpoint was the HrEL detection rate. In the intention-to-treat population, the HrEL detection rate was significantly higher in the experimental group [28 of 1556 (1.8%)] than in the control group [14 of 1561 (0.9%), P = 0.029], twice that of the control group. Similar findings were observed between the experimental and control groups [28 of 1524 (1.9%) versus 13 of 1534 (0.9%), respectively; P = 0.021]. The system's sensitivity, specificity, and accuracy for detecting HrELs were 89.7%, 98.5%, and 98.2%, respectively. No adverse events occurred. The proposed system thus improved the HrEL detection rate during endoscopy and was safe. Deep learning assistance may enhance early diagnosis and treatment of esophageal cancer and may become a useful tool for esophageal cancer screening.
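The primary endpoint above is a difference between two detection rates. A minimal sketch of such a comparison, using the intention-to-treat counts quoted in the abstract and an uncorrected chi-square test (the trial's exact statistical method is not stated here, so the test choice is an assumption):

    from scipy.stats import chi2_contingency

    # Intention-to-treat counts from the abstract: detected / not detected
    experimental = [28, 1556 - 28]
    control = [14, 1561 - 14]

    chi2, p, dof, expected = chi2_contingency([experimental, control], correction=False)
    print(f"HrEL detection rate: {28/1556:.1%} vs {14/1561:.1%}, p = {p:.3f}")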


Subject(s)
Deep Learning, Esophageal Neoplasms, Esophageal Squamous Cell Carcinoma, Precancerous Conditions, Humans, Middle Aged, Esophageal Neoplasms/diagnosis, Esophageal Neoplasms/epidemiology, Esophageal Neoplasms/pathology, Esophageal Squamous Cell Carcinoma/pathology, Prospective Studies, Precancerous Conditions/pathology
3.
Gastrointest Endosc ; 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38583541

ABSTRACT

BACKGROUND AND STUDY AIMS: The impact of various categories of information on the prediction of post-endoscopic retrograde cholangiopancreatography pancreatitis (PEP) remains uncertain. We aimed to comprehensively investigate the risk factors associated with PEP by constructing and validating a model incorporating multi-modal data through multiple steps. PATIENTS AND METHODS: A total of 1,916 cases that underwent ERCP were retrospectively collected from multiple centers for model construction. Through a literature review, 49 electronic health record (EHR) features and one image feature related to PEP were identified. The EHR features were categorized into baseline, diagnosis, technique, and prevention strategies, covering the pre-ERCP, intra-ERCP, and peri-ERCP phases. We first incrementally constructed models 1-4 incorporating these four feature categories, then added the image feature to models 1-4 to develop models 5-8. All models were tested and compared using both internal and external test sets. Once the optimal model was selected, we compared multiple machine learning algorithms. RESULTS: Compared with model 2, which incorporated baseline and diagnosis features, adding technique and prevention strategies (model 4) greatly improved sensitivity (63.89% vs 83.33%, p<0.05) and specificity (75.00% vs 85.92%, p<0.001). A similar tendency was observed in the internal and external tests. In model 4, the top three features ranked by weight were previous pancreatitis, NSAIDs, and difficult cannulation. The image-based feature had the highest weight in models 5-8. Finally, model 8, which employed a Random Forest algorithm, showed the best performance. CONCLUSIONS: We developed the first multi-modal prediction model for identifying PEP with clinically acceptable performance. The image and technique features are crucial for PEP prediction.
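The best-performing model described above combines categorized EHR features with an image-derived feature in a Random Forest. A minimal sketch under assumed names (the actual feature set, preprocessing, and hyperparameters are not given in the abstract, and the data below are placeholders):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Assumed layout: 49 EHR features (baseline, diagnosis, technique, prevention
    # strategies) plus 1 image-derived score per case, with a binary PEP label.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1916, 50))          # placeholder data, not the study's
    y = rng.integers(0, 2, size=1916)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Feature weights analogous to the abstract's ranking of risk factors.
    top = np.argsort(model.feature_importances_)[::-1][:3]
    print("top-3 feature indices:", top)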

4.
Gastrointest Endosc ; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38636818

ABSTRACT

BACKGROUND AND AIMS: Accurate bowel preparation assessment is essential for determining colonoscopy screening intervals. Patients with suboptimal bowel preparation are at high risk of missed >5 mm adenomas and should undergo an early repeat colonoscopy. In this study, we employed artificial intelligence (AI) to evaluate bowel preparation and validated the system's ability to accurately identify patients at high risk of missed >5 mm adenomas due to inadequate bowel preparation. PATIENTS AND METHODS: This prospective, single-center, observational study was conducted at the Eighth Affiliated Hospital, Sun Yat-sen University, from October 8, 2021, to November 9, 2022. Eligible patients undergoing screening colonoscopy were consecutively enrolled. The AI assessed bowel preparation using the electronic Boston Bowel Preparation Scale (e-BBPS), while endoscopists evaluated it using the BBPS. If both the BBPS and e-BBPS deemed preparation adequate, the patient immediately underwent a second colonoscopy; otherwise, the patient underwent bowel re-cleansing before the second colonoscopy. RESULTS: Among the 393 patients, 72 >5 mm adenomas were detected and 27 >5 mm adenomas were missed. In AI-unqualified patients, the >5 mm adenoma miss rate (AMR) was significantly higher than in AI-qualified patients (35.71% vs 13.19%, p = 0.0056; OR 0.2734, 95% CI 0.1139-0.6565), as were the overall AMR (50.89% vs 20.79%, p < 0.001; OR 0.2532, 95% CI 0.1583-0.4052) and the >5 mm polyp miss rate (35.82% vs 19.48%, p = 0.0152; OR 0.4335, 95% CI 0.2288-0.8213). CONCLUSIONS: This study confirmed that patients classified as inadequate by AI showed an unacceptable >5 mm AMR, providing key evidence for implementing AI to guide bowel re-cleansing and potentially standardizing future colonoscopy screening. (ClinicalTrials.gov, NCT05145712.)

5.
Article in English | MEDLINE | ID: mdl-38414305

ABSTRACT

BACKGROUND AND AIM: Early whitish gastric neoplasms are easily misdiagnosed, and the differential diagnosis of whitish gastric lesions remains a challenge. We aimed to build a deep learning (DL) model to diagnose whitish gastric neoplasms and to explore the effect of adding domain knowledge to model construction. METHODS: We collected 4558 images from two institutions to train and test the models. We first developed two sole DL models (1 and 2) using supervised and semi-supervised algorithms. We then selected diagnosis-related features through literature research and developed feature-extraction models to determine features including boundary, surface, roundness, depression, and location. The predictions of the five feature-extraction models and the sole DL model were then combined and input into seven machine learning (ML)-based fitting-diagnosis models. The optimal model was selected as ENDOANGEL-WD (whitish diagnosis) and compared with endoscopists. RESULTS: Sole DL 2 had higher sensitivity than sole DL 1 (83.12% vs 68.67%, Bonferroni-adjusted P = 0.024). With domain knowledge added, the decision tree performed best among the seven ML models, achieving higher specificity than DL 1 (84.38% vs 72.27%, Bonferroni-adjusted P < 0.05) and higher accuracy than DL 2 (80.47%, Bonferroni-adjusted P < 0.001), and was selected as ENDOANGEL-WD. ENDOANGEL-WD showed better accuracy than 10 endoscopists (75.70%, P < 0.001). CONCLUSIONS: We developed a novel system, ENDOANGEL-WD, combining domain knowledge and traditional DL to detect whitish gastric neoplasms. Adding domain knowledge improved the performance of traditional DL, potentially providing a novel solution for establishing diagnostic models for other rare diseases.
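The fitting-diagnosis step described above stacks the outputs of the feature-extraction models and the sole DL model into a single classifier. A minimal sketch, assuming each upstream model emits one score per image (names and data here are placeholders, not the authors' pipeline):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Assumed columns: boundary, surface, roundness, depression, location, sole-DL score.
    rng = np.random.default_rng(1)
    upstream_scores = rng.uniform(size=(4558, 6))   # placeholder predictions
    labels = rng.integers(0, 2, size=4558)          # 1 = neoplasm, 0 = non-neoplasm

    fitting_model = DecisionTreeClassifier(max_depth=4, random_state=0)
    fitting_model.fit(upstream_scores, labels)

    new_case = rng.uniform(size=(1, 6))
    print("predicted class:", fitting_model.predict(new_case)[0])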

6.
BMC Gastroenterol ; 24(1): 10, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38166722

ABSTRACT

BACKGROUND: Double-balloon enteroscopy (DBE) is a standard method for diagnosing and treating small bowel disease. However, DBE may yield false-negative results due to oversight or inexperience. We aimed to develop a computer-aided diagnostic (CAD) system for the automatic detection and classification of small bowel abnormalities in DBE. DESIGN AND METHODS: A total of 5201 images were collected from Renmin Hospital of Wuhan University to construct a detection model for localizing lesions during DBE, and 3021 images were collected to construct a classification model assigning lesions to four classes: protruding lesion, diverticulum, erosion & ulcer, and angioectasia. The performance of the two models was evaluated using 1318 normal images, 915 abnormal images, and 65 videos from independent patients and then compared with that of 8 endoscopists, with expert consensus as the reference standard. RESULTS: For the image test set, the detection model achieved a sensitivity of 92% (843/915) and an area under the curve (AUC) of 0.947, and the classification model achieved an accuracy of 86%. For the video test set, the accuracy of the system was significantly better than that of the endoscopists (85% vs. 77 ± 6%, p < 0.01); the system was superior to novices and comparable to experts. CONCLUSIONS: We established a real-time CAD system, ENDOANGEL-DBE, for detecting and classifying small bowel lesions in DBE with favourable performance. It has the potential to help endoscopists, especially novices, in clinical practice and may reduce the miss rate of small bowel lesions.


Subject(s)
Deep Learning, Intestinal Diseases, Humans, Double-Balloon Enteroscopy/methods, Intestine, Small/diagnostic imaging, Intestine, Small/pathology, Intestinal Diseases/diagnostic imaging, Abdomen/pathology, Endoscopy, Gastrointestinal/methods, Retrospective Studies
7.
Dig Liver Dis ; 2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38246825

ABSTRACT

BACKGROUND AND AIMS: The diagnosis and stratification of gastric atrophy (GA) predict a patient's risk of progression to gastric cancer and determine the endoscopic surveillance interval. We aimed to construct an artificial intelligence (AI) system for endoscopic identification and risk stratification of GA based on the Kimura-Takemoto classification. METHODS: We constructed the system using two trained models and verified its performance. First, we retrospectively collected 869 images and 119 videos to compare its performance with that of endoscopists in identifying GA. We then included original image cases of 102 patients to validate the system for stratifying GA and to compare it with endoscopists of different experience levels. RESULTS: The sensitivity of model 1 was higher than that of endoscopists at the image level (92.72% vs. 76.85%) and higher than that of experts at the video level (94.87% vs. 85.90%). The system outperformed experts in stratifying GA (overall accuracy: 81.37% vs. 73.04%, p = 0.045). The accuracy of the system in classifying non-GA, mild GA, moderate GA, and severe GA was 80.00%, 77.42%, 83.33%, and 85.71%, respectively, comparable to that of experts and better than that of seniors and novices. CONCLUSIONS: We established an expert-level system for endoscopic identification and risk stratification of GA. It has great potential for endoscopic assessment and surveillance determinations.

8.
Dig Endosc ; 36(1): 5-15, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37522555

ABSTRACT

Esophagogastroduodenoscopy (EGD) screening is being implemented in countries with a high incidence of upper gastrointestinal (UGI) cancer. High-quality EGD screening ensures the yield of early diagnosis, prevents suffering from advanced UGI cancer, and minimizes procedure-related discomfort. However, performance varies dramatically among endoscopists, and quality control for EGD screening remains suboptimal. Guidelines have recommended potential measures for endoscopy quality improvement, and research has been conducted to provide supporting evidence. Moreover, artificial intelligence offers a promising solution for computer-aided diagnosis and quality control during EGD examinations. In this review, we summarize the key points for quality assurance in EGD screening based on current guidelines and evidence. We also outline the latest evidence, limitations, and future prospects of the emerging role of artificial intelligence in EGD quality control, aiming to provide a foundation for improving the quality of EGD screening.


Subject(s)
Gastrointestinal Neoplasms, Upper Gastrointestinal Tract, Humans, Artificial Intelligence, Endoscopy, Digestive System, Endoscopy, Gastrointestinal, Gastrointestinal Neoplasms/diagnosis
9.
Endoscopy ; 56(4): 260-270, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37827513

ABSTRACT

BACKGROUND: The choice of polypectomy device and surveillance intervals for colorectal polyps are primarily determined by polyp size. We developed a deep learning-based system (ENDOANGEL-CPS) to estimate colorectal polyp size in real time. METHODS: ENDOANGEL-CPS calculates polyp size by estimating the distance from the endoscope lens to the polyp using the parameters of the lens. The depth-estimation network was developed on 7297 images from five virtually produced colon videos and tested on 730 images from seven virtual colon videos. The performance of the system was first evaluated in nine videos of a simulated colon with polyps attached, then tested in 157 real-world prospective videos from three hospitals, with the outcomes compared with those of nine endoscopists over 69 videos. Inappropriate surveillance recommendations caused by incorrect estimation of polyp size were also analyzed. RESULTS: The relative error of depth estimation was 11.3% (SD 6.0%) in successive virtual colon images. The concordance correlation coefficients (CCCs) between system estimates and ground truth were 0.89 and 0.93 in images of the simulated colon and in the multicenter videos of 157 polyps, respectively. The mean CCC of ENDOANGEL-CPS surpassed that of all endoscopists (0.89 vs. 0.41 [SD 0.29]; P < 0.001). The relative accuracy of ENDOANGEL-CPS was significantly higher than that of the endoscopists (89.9% vs. 54.7%; P < 0.001). The system's rate of inappropriate surveillance recommendations was also lower than that of the endoscopists (1.5% vs. 16.6%; P < 0.001). CONCLUSIONS: ENDOANGEL-CPS could potentially improve the accuracy of colorectal polyp size measurements and size-based surveillance intervals.
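The size calculation described above converts an on-screen extent into a physical measurement using an estimated lens-to-polyp distance and the lens parameters. A hedged sketch of the underlying pinhole-camera relation (the system's actual geometry and calibration are not given in the abstract):

    def polyp_size_mm(extent_px: float, depth_mm: float, focal_length_px: float) -> float:
        """Pinhole-camera approximation: real-world size = pixel extent * depth / focal length.
        extent_px        -- polyp extent measured in image pixels
        depth_mm         -- estimated lens-to-polyp distance (e.g. from a depth network)
        focal_length_px  -- lens focal length expressed in pixels (from calibration)
        """
        return extent_px * depth_mm / focal_length_px

    # Illustrative numbers only:
    print(f"{polyp_size_mm(extent_px=120, depth_mm=25.0, focal_length_px=600.0):.1f} mm")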


Subject(s)
Colonic Polyps, Colorectal Neoplasms, Deep Learning, Humans, Colonic Polyps/diagnostic imaging, Colonoscopy/methods, Colorectal Neoplasms/diagnostic imaging
10.
Gastrointest Endosc ; 99(1): 91-99.e9, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37536635

ABSTRACT

BACKGROUND AND AIMS: The efficacy and safety of colonoscopy performed by artificial intelligence (AI)-assisted novices remain unknown. The aim of this study was to compare the lesion detection capability of novices, AI-assisted novices, and experts. METHODS: This multicenter, randomized, noninferiority tandem study was conducted across 3 hospitals in China from May 1, 2022, to November 11, 2022. Eligible patients were randomized into 1 of 3 groups: the CN group (control novice group, withdrawal performed by a novice independently), the AN group (AI-assisted novice group, withdrawal performed by a novice with AI assistance), or the CE group (control expert group, withdrawal performed by an expert independently). Participants underwent a repeat colonoscopy conducted by an AI-assisted expert to evaluate the lesion miss rate and ensure lesion detection. The primary outcome was the adenoma miss rate (AMR). RESULTS: A total of 685 eligible patients were analyzed: 229 in the CN group, 227 in the AN group, and 229 in the CE group. Both the AMR and the polyp miss rate were lower in the AN group than in the CN group (18.82% vs 43.69% [P < .001] and 21.23% vs 35.38% [P < .001], respectively). The noninferiority margin was met between the AN and CE groups for both the AMR and the polyp miss rate (18.82% vs 26.97% [P = .202] and 21.23% vs 24.10% [P = .249]). CONCLUSIONS: AI-assisted colonoscopy lowered the AMR of novices, making them noninferior to experts. The withdrawal technique of new endoscopists can be enhanced by AI-assisted colonoscopy. (Clinical trial registration number: NCT05323279.)
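The noninferiority comparison above asks whether the AI-assisted novices' miss rate is not worse than the experts' by more than a pre-specified margin. A minimal sketch of such a check using a normal-approximation confidence bound for the difference in proportions (the margin value, counts, and method below are assumptions for illustration, not the trial's statistical analysis plan):

    import math

    def noninferior(miss_an, n_an, miss_ce, n_ce, margin=0.10, z=1.645):
        """One-sided noninferiority check on the difference in miss rates.

        Declares noninferiority if the upper bound of the one-sided 95% CI for
        (rate_AN - rate_CE) stays below the margin. The 10% margin is an assumption.
        """
        p_an, p_ce = miss_an / n_an, miss_ce / n_ce
        se = math.sqrt(p_an * (1 - p_an) / n_an + p_ce * (1 - p_ce) / n_ce)
        upper = (p_an - p_ce) + z * se
        return upper < margin, upper

    # Illustrative counts only (per-lesion denominators are not given in the abstract):
    ok, upper = noninferior(miss_an=19, n_an=100, miss_ce=27, n_ce=100)
    print(f"noninferior: {ok}, upper bound of difference: {upper:.3f}")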


Subject(s)
Adenoma, Colonic Polyps, Colorectal Neoplasms, Polyps, Humans, Artificial Intelligence, Prospective Studies, Colonoscopy/methods, Research Design, Adenoma/diagnosis, Adenoma/pathology, Colonic Polyps/diagnostic imaging, Colorectal Neoplasms/diagnosis
11.
Endosc Ultrasound ; 12(5): 417-423, 2023.
Article in English | MEDLINE | ID: mdl-37969169

ABSTRACT

Background and Objectives: EUS is a crucial diagnostic and therapeutic modality for many anatomical regions, especially in the evaluation of mediastinal diseases and related pathologies. Rapidly finding the standard stations is key to achieving efficient and complete mediastinal EUS imaging; however, it requires substantial technical skill and extensive knowledge of mediastinal anatomy. We constructed a system, named EUS-MPS (EUS mediastinal position system), for real-time recognition of mediastinal EUS stations. Methods: The standard scan of the mediastinum in EUS was divided into 7 stations. A total of 33,010 images from mediastinal EUS examinations were collected to construct a station classification model. We then used 151 video clips for video validation and 1212 EUS images from 2 other hospitals for external validation. An independent data set containing 230 EUS images was used for the man-machine contest. We conducted a crossover study to evaluate the effectiveness of this system in reducing the difficulty of interpreting mediastinal ultrasound images. Results: For station classification, the model achieved an accuracy of 90.49% in image validation and 83.80% in video validation. In external validation, the model achieved 89.85% accuracy. In the man-machine contest, the model achieved an accuracy of 84.78%, comparable to that of experts (83.91%). The accuracy of the trainees' station recognition was significantly improved in the crossover study, with an increase of 13.26% (95% confidence interval, 11.04%-15.48%; P < 0.05). Conclusions: This deep learning-based system shows great performance in mediastinal station localization and has the potential to play an important role in shortening the learning curve and establishing standard mediastinal scanning in the future.

12.
JAMA Netw Open ; 6(9): e2334822, 2023 09 05.
Article in English | MEDLINE | ID: mdl-37728926

ABSTRACT

Importance: The adherence of physicians and patients to published colorectal postpolypectomy surveillance guidelines varies greatly, and patient follow-up is critical but time consuming. Objectives: To evaluate the accuracy of an automatic surveillance (AS) system in identifying patients after polypectomy, assigning surveillance intervals for patients at different risk levels, and proactively following up with patients on time. Design, Setting, and Participants: In this diagnostic/prognostic study, endoscopic and pathological reports of 47 544 patients undergoing colonoscopy at 3 hospitals between January 1, 2017, and June 30, 2022, were collected to develop an AS system based on natural language processing. The performance of the AS system was fully evaluated in internal and external tests according to 5 guidelines worldwide and compared with that of physicians. A multireader, multicase (MRMC) trial was conducted to evaluate use of the AS system and physician guideline adherence, and prospective data were collected to evaluate the success rate in contacting patients and the association with reduced human workload. Data analysis was conducted from July to September 2022. Exposures: Assistance of the AS system. Main Outcomes and Measures: The accuracy of the system in identifying patients after polypectomy, stratifying patient risk levels, and assigning surveillance intervals in the internal (Renmin Hospital of Wuhan University), external 1 (Wenzhou Central Hospital), and external 2 (The First People's Hospital of Yichang) test sets; the accuracy of physicians and their time burden with and without system assistance; and the system's rate of successfully informing patients. Results: Test sets for 16 106 patients undergoing colonoscopy (mean [SD] age, 51.90 [13.40] years; 7690 females [47.75%]) were evaluated. In the internal, external 1, and external 2 test sets, the system had an overall accuracy of 99.91% (95% CI, 99.83%-99.95%), 99.54% (95% CI, 99.30%-99.70%), and 99.77% (95% CI, 99.41%-99.91%), respectively, for identifying types of patients, and achieved an overall accuracy of at least 99.30% (95% CI, 98.67%-99.63%) in the internal test set, 98.89% (95% CI, 98.33%-99.27%) in external test set 1, and 98.56% (95% CI, 95.86%-99.51%) in external test set 2 for stratifying patient risk levels and assigning surveillance intervals according to the 5 guidelines. The system was associated with increased mean (SD) accuracy among physicians vs no AS system in 105 patients (98.67% [1.28%] vs 78.10% [18.01%]; P = .04) in the MRMC trial. In a prospective trial, the AS system successfully informed 82 of 88 patients (93.18%) and was associated with a reduced follow-up time burden vs no AS system (0 vs 2.86 h). Conclusions and Relevance: This study found that an AS system was associated with improved guideline adherence among physicians and reduced workload among physicians and nurses.
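Downstream of the natural language processing step, the surveillance-interval assignment described above is essentially a rule lookup from the patient's stratified risk level to a guideline interval. A highly simplified sketch with made-up risk categories and interval values that stand in for any one of the five guidelines (they are not the system's actual rules):

    # Illustrative mapping only; real guidelines differ in categories and intervals.
    SURVEILLANCE_YEARS = {
        "no_polyp": 10,
        "low_risk_adenoma": 7,
        "high_risk_adenoma": 3,
        "piecemeal_or_large_resection": 1,
    }

    def assign_interval(risk_level: str) -> int:
        """Return the recommended surveillance interval (years) for a stratified risk level."""
        try:
            return SURVEILLANCE_YEARS[risk_level]
        except KeyError as exc:
            raise ValueError(f"unknown risk level: {risk_level}") from exc

    print(assign_interval("high_risk_adenoma"))  # -> 3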


Subject(s)
Colonoscopy, Colorectal Neoplasms, Female, Humans, Middle Aged, Follow-Up Studies, Prospective Studies, Data Analysis
13.
J Gastroenterol ; 58(10): 978-989, 2023 10.
Article in English | MEDLINE | ID: mdl-37515597

ABSTRACT

BACKGROUND: Artificial intelligence (AI) performs variably across test sets of different diversity because of sample selection bias, which can be a stumbling block for AI applications. We previously tested an AI system named ENDOANGEL for diagnosing early gastric cancer (EGC) on single-center videos in a man-machine competition. We aimed to re-test ENDOANGEL on multi-center videos to explore the challenges of applying AI across multiple centers, then upgrade ENDOANGEL and explore solutions to these challenges. METHODS: ENDOANGEL was re-tested on multi-center videos retrospectively collected from 12 institutions, and its performance was compared with that on the previously reported single-center videos. We then upgraded ENDOANGEL to ENDOANGEL-2022 with more training samples and novel algorithms and conducted a competition between ENDOANGEL-2022 and endoscopists. ENDOANGEL-2022 was then tested on the single-center videos and compared with its performance on the multi-center videos; the two AI systems were also compared with each other and with endoscopists. RESULTS: Forty-six EGCs and 54 non-cancers were included in the multi-center video cohort. In diagnosing EGCs, compared with the single-center videos, ENDOANGEL showed stable sensitivity (97.83% vs. 100.00%) but sharply decreased specificity (61.11% vs. 82.54%); ENDOANGEL-2022 showed a similar tendency while achieving significantly higher specificity (79.63%, p < 0.01) and making fewer mistakes on typical lesions than ENDOANGEL. In detecting gastric neoplasms, both AI systems showed stable sensitivity but sharply decreased specificity. Nevertheless, both AI systems outperformed endoscopists in the two competitions. CONCLUSIONS: A marked increase in false positives is a prominent challenge when applying EGC diagnostic AI across multiple centers, owing to the high heterogeneity of negative cases. Optimizing AI by adding samples and using novel algorithms is promising for overcoming this challenge.


Subject(s)
Artificial Intelligence, Stomach Neoplasms, Humans, Algorithms, Research Design, Retrospective Studies, Stomach Neoplasms/diagnosis
14.
Am J Clin Pathol ; 160(4): 394-403, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37279532

ABSTRACT

OBJECTIVES: The histopathologic diagnosis of colorectal sessile serrated lesions (SSLs) and hyperplastic polyps (HPs) shows low consistency among pathologists. This study aimed to develop and validate a deep learning (DL)-based logical anthropomorphic pathology diagnostic system (LA-SSLD) for the differential diagnosis of colorectal SSLs and HPs. METHODS: The diagnostic framework of the LA-SSLD system was constructed according to current guidelines and consisted of 4 DL models. Deep convolutional neural network (DCNN) 1 was the mucosal layer segmentation model, DCNN 2 the muscularis mucosae segmentation model, DCNN 3 the glandular lumen segmentation model, and DCNN 4 the glandular lumen classification (aberrant or regular) model. A total of 175 HP and 127 SSL sections were collected from Renmin Hospital of Wuhan University between November 2016 and November 2022. The performance of the LA-SSLD system was compared with that of 11 pathologists of different qualifications in a human-machine contest. RESULTS: The Dice scores of DCNNs 1, 2, and 3 were 93.66%, 58.38%, and 74.04%, respectively. The accuracy of DCNN 4 was 92.72%. In the human-machine contest, the accuracy, sensitivity, and specificity of the LA-SSLD system were 85.71%, 86.36%, and 85.00%, respectively. In comparison with experts (pathologist D: accuracy 83.33%, sensitivity 90.91%, specificity 75.00%; pathologist E: accuracy 85.71%, sensitivity 90.91%, specificity 80.00%), LA-SSLD achieved expert-level accuracy and outperformed all senior and junior pathologists. CONCLUSIONS: This study proposed a logical anthropomorphic diagnostic system for the differential diagnosis of colorectal SSLs and HPs. The diagnostic performance of the system is comparable to that of experts, and it has the potential to become a powerful diagnostic tool for SSLs in the future. Notably, a logical anthropomorphic system can achieve expert-level accuracy with fewer samples, providing potential ideas for the development of other artificial intelligence models.
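The segmentation models above are scored with the Dice coefficient, i.e. twice the overlap between the predicted and reference masks divided by their total area. A minimal sketch for binary masks (placeholder arrays, not the study's data):

    import numpy as np

    def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
        """Dice = 2 * |pred AND target| / (|pred| + |target|) for binary masks."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Toy 4x4 masks for illustration:
    pred = np.array([[0, 1, 1, 0]] * 4)
    target = np.array([[0, 0, 1, 1]] * 4)
    print(f"Dice: {dice_score(pred, target):.2f}")  # -> 0.50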


Subject(s)
Colonic Polyps, Colorectal Neoplasms, Deep Learning, Humans, Colonic Polyps/diagnosis, Colonic Polyps/pathology, Artificial Intelligence, Neural Networks, Computer, Colorectal Neoplasms/diagnosis, Colorectal Neoplasms/pathology
16.
Clin Transl Gastroenterol ; 14(10): e00606, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37289447

ABSTRACT

INTRODUCTION: Endoscopic evaluation is crucial for predicting the invasion depth of esophageal squamous cell carcinoma (ESCC) and selecting appropriate treatment strategies. Our study aimed to develop and validate an interpretable artificial intelligence-based invasion depth prediction system (AI-IDPS) for ESCC. METHODS: We reviewed PubMed for eligible studies and collected potential visual feature indices associated with invasion depth. Multicenter data comprising 5,119 narrow-band imaging magnifying endoscopy images from 581 patients with ESCC were collected from 4 hospitals between April 2016 and November 2021. Thirteen models for feature extraction and 1 model for feature fitting were developed for AI-IDPS. The efficiency of AI-IDPS was evaluated on 196 images and 33 consecutively collected videos and compared with a pure deep learning model and with the performance of endoscopists. A crossover study and a questionnaire survey were conducted to investigate the system's impact on endoscopists' understanding of the AI predictions. RESULTS: For differentiating SM2-3 lesions, AI-IDPS demonstrated a sensitivity, specificity, and accuracy of 85.7%, 86.3%, and 86.2% in image validation and 87.5%, 84.0%, and 84.9% in consecutively collected videos, respectively. The pure deep learning model showed significantly lower sensitivity, specificity, and accuracy (83.7%, 52.1%, and 60.0%, respectively). After AI-IDPS assistance, the endoscopists had significantly improved accuracy (from 79.7% to 84.9% on average, P = 0.03) and comparable sensitivity (from 37.5% to 55.4% on average, P = 0.27) and specificity (from 93.1% to 94.3% on average, P = 0.75). DISCUSSION: Based on domain knowledge, we developed an interpretable system for predicting ESCC invasion depth. This anthropomorphic approach demonstrates the potential to outperform deep learning architectures in practice.


Subject(s)
Carcinoma, Squamous Cell, Esophageal Neoplasms, Esophageal Squamous Cell Carcinoma, Humans, Esophageal Squamous Cell Carcinoma/diagnosis, Esophageal Squamous Cell Carcinoma/pathology, Esophageal Neoplasms/diagnostic imaging, Esophageal Neoplasms/pathology, Carcinoma, Squamous Cell/diagnostic imaging, Carcinoma, Squamous Cell/pathology, Esophagoscopy/methods, Artificial Intelligence, Cross-Over Studies, Sensitivity and Specificity, Multicenter Studies as Topic
17.
Trials ; 24(1): 323, 2023 May 11.
Article in English | MEDLINE | ID: mdl-37170280

ABSTRACT

BACKGROUND: This protocol is for a multi-centre randomised controlled trial to determine whether the computer-aided system ENDOANGEL-GC improves the detection rates of gastric neoplasms and early gastric cancer (EGC) in routine oesophagogastroduodenoscopy (EGD). METHODS: Study design: Prospective, single-blind, parallel-group, multi-centre randomised controlled trial. Settings: The computer-aided system ENDOANGEL-GC is used to monitor blind spots, detect gastric abnormalities, and identify gastric neoplasms during EGD. Participants: Adults undergoing screening, diagnostic, or surveillance EGD. Randomisation groups: (1) experimental group, EGD examinations with the assistance of ENDOANGEL-GC; (2) control group, EGD examinations without the assistance of ENDOANGEL-GC. Randomisation: Block randomisation, stratified by centre. Primary outcomes: Detection rates of gastric neoplasms and EGC. Secondary outcomes: Detection rate of premalignant gastric lesions, biopsy rate, observation time, and number of blind spots on EGD. Blinding: Outcomes are assessed by blinded assessors. Sample size: Based on previously published findings and our pilot study, the detection rate of gastric neoplasms in the control group is estimated to be 2.5% and that of the experimental group is expected to be 4.0%. With a two-sided α level of 0.05 and power of 80%, allowing for a 10% drop-out rate, the sample size is calculated as 4858. The detection rate of EGC in the control group is estimated to be 20% and that of the experimental group is expected to be 35%. With a two-sided α level of 0.05 and power of 80%, a total of 270 cases of gastric cancer are needed. Assuming the proportion of gastric cancer to be 1% in patients undergoing EGD and allowing for a 10% drop-out rate, the sample size is calculated as 30,000. Considering the larger of the sample sizes calculated from the two primary endpoints, the required sample size is determined to be 30,000. DISCUSSION: The results of this trial will help determine the effectiveness of ENDOANGEL-GC in clinical settings. TRIAL REGISTRATION: ChiCTR (Chinese Clinical Trial Registry), ChiCTR2100054449, registered 17 December 2021.
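The first sample-size figure above follows from a standard two-proportion comparison. A hedged sketch of that calculation (the protocol's exact formula and rounding are not stated, so the result may differ slightly from the quoted 4858):

    from math import ceil, sqrt
    from scipy.stats import norm

    def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.80, dropout=0.10):
        """Approximate per-arm and total sample size for detecting p1 vs p2."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        p_bar = (p1 + p2) / 2
        num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
               + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        n_per_arm = ceil(num / (p1 - p2) ** 2)
        total = ceil(2 * n_per_arm / (1 - dropout))   # inflate for drop-out
        return n_per_arm, total

    # Control 2.5% vs experimental 4.0%, as in the protocol:
    print(two_proportion_sample_size(0.025, 0.040))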


Subject(s)
COVID-19, Stomach Neoplasms, Adult, Humans, Computers, Multicenter Studies as Topic, Pilot Projects, Prospective Studies, SARS-CoV-2, Single-Blind Method, Stomach Neoplasms/diagnosis, Treatment Outcome, Randomized Controlled Trials as Topic
18.
NPJ Digit Med ; 6(1): 64, 2023 Apr 12.
Article in English | MEDLINE | ID: mdl-37045949

ABSTRACT

White light endoscopy is the most pivotal tool for detecting early gastric neoplasms. Previous artificial intelligence (AI) systems were largely unexplainable, affecting their clinical credibility and acceptability. We aimed to develop an explainable AI, named ENDOANGEL-ED (explainable diagnosis), to solve this problem. A total of 4482 images and 296 videos with focal lesions from 3279 patients at eight hospitals were used for training, validating, and testing ENDOANGEL-ED. A traditional sole deep learning (DL) model was trained using the same dataset. The performance of ENDOANGEL-ED and the sole DL model was evaluated at six levels: internal and external images, internal and external videos, consecutive videos, and a man-machine comparison with 77 endoscopists on videos. Furthermore, a multi-reader, multi-case study was conducted to evaluate ENDOANGEL-ED's effectiveness. A scale was used to compare endoscopists' overall acceptance of the traditional and explainable AI systems. ENDOANGEL-ED showed high performance in the image and video tests. In the man-machine comparison, the accuracy of ENDOANGEL-ED was significantly higher than that of all endoscopists on internal (81.10% vs. 70.61%, p < 0.001) and external videos (88.24% vs. 78.49%, p < 0.001). With ENDOANGEL-ED's assistance, the accuracy of endoscopists significantly improved (70.61% vs. 79.63%, p < 0.001). Compared with the traditional AI, the explainable AI increased endoscopists' trust and acceptance (4.42 vs. 3.74, p < 0.001; 4.52 vs. 4.00, p < 0.001). In conclusion, we developed a real-time explainable AI that showed high performance and higher clinical credibility and acceptance than traditional DL models and greatly improved the diagnostic ability of endoscopists.

19.
Therap Adv Gastroenterol ; 16: 17562848231155023, 2023.
Article in English | MEDLINE | ID: mdl-36895279

ABSTRACT

Background: Changes in the gastric mucosa caused by Helicobacter pylori (H. pylori) infection affect the observation of early gastric cancer under endoscopy. Although previous research has reported that computer-aided diagnosis (CAD) systems have great potential in the diagnosis of H. pylori infection, their explainability remains a challenge. Objective: We aimed to develop an explainable artificial intelligence system for diagnosing H. pylori infection (EADHI) and providing a diagnostic basis under endoscopy. Design: A case-control study. Methods: We retrospectively obtained 47,239 images from 1826 patients between 1 June 2020 and 31 July 2021 at Renmin Hospital of Wuhan University for the development of EADHI. EADHI was developed based on feature extraction combining ResNet-50 and long short-term memory (LSTM) networks. Nine endoscopic features were used for diagnosing H. pylori infection. EADHI's performance was evaluated and compared with that of endoscopists. An external test was conducted at Wenzhou Central Hospital to evaluate its robustness. A gradient-boosting decision tree model was used to examine the contributions of different mucosal features to the diagnosis of H. pylori infection. Results: The system extracted mucosal features for diagnosing H. pylori infection with an overall accuracy of 78.3% [95% confidence interval (CI): 76.2-80.3]. The accuracy of EADHI for diagnosing H. pylori infection (91.1%, 95% CI: 85.7-94.6) was significantly higher than that of endoscopists (by 15.5%, 95% CI: 9.7-21.3) in the internal test, and it showed a good accuracy of 91.9% (95% CI: 85.6-95.7) in the external test. Mucosal edema was the most important diagnostic feature for H. pylori positivity, whereas a regular arrangement of collecting venules was the most important feature for H. pylori negativity. Conclusion: EADHI discerns H. pylori gastritis with high accuracy and good explainability, which may improve endoscopists' trust in and acceptance of CAD systems. Plain language summary: An explainable AI system for Helicobacter pylori with good diagnostic performance. Helicobacter pylori (H. pylori) is the main risk factor for gastric cancer (GC), and changes in the gastric mucosa caused by H. pylori infection affect the observation of early GC under endoscopy. It is therefore necessary to identify H. pylori infection under endoscopy. Although previous research showed that computer-aided diagnosis (CAD) systems have great potential in diagnosing H. pylori infection, their generalization and explainability are still a challenge. Herein, we constructed an explainable artificial intelligence system for diagnosing H. pylori infection (EADHI) using images grouped by case. We integrated ResNet-50 and long short-term memory (LSTM) networks into the system: ResNet-50 is used for feature extraction, and the LSTM classifies H. pylori infection status based on these features. Furthermore, we added information on mucosal features for each case when training the system, so that EADHI could identify and output which mucosal features are present in a case. EADHI achieved good diagnostic performance with an accuracy of 91.1% [95% confidence interval (CI): 85.7-94.6], significantly higher than that of endoscopists (by 15.5%, 95% CI: 9.7-21.3%) in the internal test, and it showed a good diagnostic accuracy of 91.9% (95% CI: 85.6-95.7) in an external test. EADHI discerns H. pylori gastritis with high accuracy and good explainability, which may improve endoscopists' trust in and acceptance of CAD systems. However, we only used data from a single center to develop EADHI, and it was not effective in identifying past H. pylori infection. Future multicenter, prospective studies are needed to demonstrate the clinical applicability of CAD systems.
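The architecture described above (ResNet-50 extracting per-image features, an LSTM aggregating them across a patient's images to classify infection status) can be sketched roughly as follows; layer sizes, pooling choices, and training details here are assumptions, not the published configuration:

    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    class CaseLevelHpClassifier(nn.Module):
        """Per-image ResNet-50 features -> LSTM over the image sequence -> H. pylori status."""

        def __init__(self, hidden_size: int = 256, num_classes: int = 2):
            super().__init__()
            backbone = resnet50(weights=None)            # pretrained weights omitted in this sketch
            backbone.fc = nn.Identity()                   # expose the 2048-d feature vector
            self.backbone = backbone
            self.lstm = nn.LSTM(input_size=2048, hidden_size=hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, num_classes)

        def forward(self, images: torch.Tensor) -> torch.Tensor:
            # images: (batch, sequence, 3, H, W), i.e. all endoscopic images of one case
            b, s, c, h, w = images.shape
            feats = self.backbone(images.view(b * s, c, h, w)).view(b, s, -1)
            _, (hidden, _) = self.lstm(feats)
            return self.head(hidden[-1])                  # logits per case

    model = CaseLevelHpClassifier()
    dummy_case = torch.randn(1, 8, 3, 224, 224)           # one case with 8 images
    print(model(dummy_case).shape)                         # torch.Size([1, 2])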

20.
Dig Endosc ; 35(4): 422-429, 2023 May.
Article in English | MEDLINE | ID: mdl-36749036

ABSTRACT

The number of artificial intelligence (AI) tools for colonoscopy on the market is increasing, with supporting clinical evidence. Nevertheless, their implementation is not going smoothly, for a variety of reasons, including a lack of data on clinical benefits and cost-effectiveness, a lack of trustworthy guidelines, uncertain indications, and the cost of implementation. To address this issue and better guide practitioners, the World Endoscopy Organization (WEO) has provided its perspective on the status of AI in colonoscopy as a position statement. WEO Position Statement: Statement 1.1: Computer-aided detection (CADe) for colorectal polyps is likely to improve colonoscopy effectiveness by reducing adenoma miss rates and thus increase adenoma detection; Statement 1.2: In the short term, use of CADe is likely to increase health-care costs by detecting more adenomas; Statement 1.3: In the long term, the increased cost of CADe could be balanced by savings in costs related to cancer treatment (surgery, chemotherapy, palliative care) due to CADe-related cancer prevention; Statement 1.4: Health-care delivery systems and authorities should evaluate the cost-effectiveness of CADe to support its use in clinical practice; Statement 2.1: Computer-aided diagnosis (CADx) for diminutive polyps (≤5 mm), when it has sufficient accuracy, is expected to reduce health-care costs by reducing polypectomies, pathological examinations, or both; Statement 2.2: Health-care delivery systems and authorities should evaluate the cost-effectiveness of CADx to support its use in clinical practice; Statement 3: We recommend that a broad range of high-quality cost-effectiveness research should be undertaken to understand whether AI implementation benefits populations and societies in different health-care systems.


Subject(s)
Colonic Polyps, Colorectal Neoplasms, Humans, Artificial Intelligence, Colonoscopy, Endoscopy, Gastrointestinal, Diagnosis, Computer-Assisted, Colonic Polyps/diagnosis, Colorectal Neoplasms/diagnosis, Colorectal Neoplasms/prevention & control