Results 1 - 20 of 86
1.
Dig Liver Dis ; 56(7): 1156-1163, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38763796

ABSTRACT

Recognition of gastric conditions during endoscopy exams, including gastric cancer, usually requires specialized training and a long learning curve. In addition, interobserver variability is frequently high owing to the varied morphological characteristics of the lesions and grades of mucosal inflammation. In this sense, artificial intelligence tools based on deep learning models have been developed to support physicians in detecting, classifying, and predicting gastric lesions more efficiently. Even though a growing number of studies exists in the literature, there are multiple challenges to bringing a model into practice in this field, such as the need for more robust validation studies and regulatory hurdles. Therefore, the aim of this review is to provide a comprehensive assessment of the current use of artificial intelligence applied to endoscopic imaging to evaluate gastric precancerous and cancerous lesions, and of the barriers to widespread implementation of this technology in clinical routine.


Subjects
Artificial Intelligence , Deep Learning , Stomach Neoplasms , Humans , Stomach Neoplasms/diagnostic imaging , Stomach Neoplasms/diagnosis , Stomach Neoplasms/pathology , Precancerous Conditions/diagnostic imaging , Precancerous Conditions/diagnosis , Precancerous Conditions/pathology , Gastroscopy/methods
2.
J Clin Med ; 13(7)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38610762

ABSTRACT

Background: Cases of Barrett's esophagus and esophageal adenocarcinoma are increasing as gastroesophageal reflux disease becomes more common. Using artificial intelligence (AI) and linked color imaging (LCI), our aim was to establish a method of diagnosis for short-segment Barrett's esophagus (SSBE). Methods: We retrospectively selected 624 consecutive patients in total at our hospital, treated between May 2017 and March 2020, who underwent esophagogastroduodenoscopy with white light imaging (WLI) and LCI. Images were randomly chosen as training data: from WLI, 542 (SSBE+/- 348/194) of 696 (SSBE+/- 444/252); and from LCI, 643 (SSBE+/- 446/197) of 805 (SSBE+/- 543/262). Using a Vision Transformer (ViT-B/16-384) to diagnose SSBE, we established two AI systems, one for WLI and one for LCI. Finally, 126 WLI (SSBE+/- 77/49) and 137 LCI (SSBE+/- 81/56) images were used for verification. The diagnostic accuracy of six endoscopists was compared with that of the AI. Results: Study participants were aged 68.2 ± 12.3 years, M/F 330/294, SSBE+/- 409/215. The accuracy/sensitivity/specificity (%) of the AI were 84.1/89.6/75.5 for WLI and 90.5/90.1/91.1 for LCI; those of experts and trainees were 88.6/88.7/88.4 and 85.7/87.0/83.7 for WLI, and 93.4/92.6/94.6 and 84.7/88.1/79.8 for LCI, respectively. Conclusions: AI diagnosis of SSBE was similar in accuracy to diagnosis by a specialist. Our findings may aid the diagnosis of SSBE in the clinic.
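Accuracy, sensitivity, and specificity figures such as those above all derive from the same four confusion-matrix counts. A minimal sketch of the arithmetic (the counts below are illustrative, not taken from this study):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts.

    tp/fn: diseased cases called positive/negative;
    tn/fp: healthy cases called negative/positive.
    """
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall agreement
    return sensitivity, specificity, accuracy

# Hypothetical reader grading 200 images: 90 TP, 20 FP, 80 TN, 10 FN
sens, spec, acc = confusion_metrics(90, 20, 80, 10)
```

With these counts the sketch yields sensitivity 0.90, specificity 0.80, and accuracy 0.85.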

3.
J Gastroenterol Hepatol ; 39(1): 157-164, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37830487

ABSTRACT

BACKGROUND AND AIM: Convolutional neural network (CNN) systems that automatically detect abnormalities from small-bowel capsule endoscopy (SBCE) images are still experimental, and no studies have directly compared the clinical usefulness of different systems. We compared endoscopist readings using an existing and a novel CNN system in a real-world SBCE setting. METHODS: Thirty-six complete SBCE videos, including 43 abnormal lesions (18 mucosal breaks, 8 angioectasias, and 17 protruding lesions), were retrospectively prepared. Three reading processes were compared: (A) endoscopist readings without CNN screening, (B) endoscopist readings after screening by an existing CNN, and (C) endoscopist readings after screening by a novel CNN. RESULTS: The mean number of small-bowel images was 14 747 per patient. Of these images, the existing and novel CNN systems automatically captured 24.3% and 9.4%, respectively. In this process, both systems extracted all 43 abnormal lesions. Next, we focused on clinical usefulness. The detection rates of abnormalities by trainee endoscopists were not significantly different across the three processes: A, 77%; B, 67%; and C, 79%. The mean reading time of the trainees was shortest in process C (10.1 min per patient), followed by processes B (23.1 min per patient) and A (33.6 min per patient). The mean psychological stress score while reading videos (scale, 1-5) was lowest in process C (1.8) but was not significantly different between processes B (2.8) and A (3.2). CONCLUSIONS: Our novel CNN system significantly reduced endoscopist reading time and psychological stress while maintaining the detectability of abnormalities. CNN performance directly affects clinical utility and should be carefully assessed.


Subjects
Capsule Endoscopy , Deep Learning , Humans , Capsule Endoscopy/methods , Retrospective Studies , Intestine, Small/diagnostic imaging , Intestine, Small/pathology , Neural Networks, Computer
4.
J Gastroenterol Hepatol ; 38(9): 1587-1591, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37408330

ABSTRACT

OBJECTIVES: Artificial intelligence (AI) uses deep learning functionalities that may enhance the detection of early gastric cancer during endoscopy. An AI-based endoscopic system for upper endoscopy was recently developed in Japan. We aimed to validate this AI-based system in a Singaporean cohort. METHODS: Three hundred de-identified still images were prepared from endoscopy video files of subjects who underwent gastroscopy at National University Hospital (NUH). Five specialists and 6 non-specialists (trainees) from NUH were assigned to read and categorize the images as "neoplastic" or "non-neoplastic." Results were then compared with the readings performed by the endoscopic AI system. RESULTS: The mean accuracy, sensitivity, and specificity for the 11 endoscopists were 0.847, 0.525, and 0.872, respectively. These values for the AI-based system were 0.777, 0.591, and 0.791, respectively. Although the AI did not perform better than the endoscopists overall, in the subgroup of high-grade dysplastic lesions, only 29.1% were identified by endoscopist rating, whereas 80% were classified as neoplastic by the AI (P = 0.0011). The average diagnostic time was also shorter for the AI than for the endoscopists (42.02 s vs 677.1 s; P < 0.001). CONCLUSION: We demonstrated that an AI system developed in another health system was comparable in diagnostic accuracy in the evaluation of static images. AI systems are faster, are not subject to fatigue, and may have a role in augmenting human diagnosis during endoscopy. With further advances in AI and larger studies to support its efficacy, it will likely play a larger role in screening endoscopy in the future.


Subjects
Stomach Neoplasms , Humans , Stomach Neoplasms/diagnostic imaging , Artificial Intelligence , Gastroscopy , Asian People , Fatigue
5.
Gastrointest Endosc ; 98(6): 968-976.e3, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37482106

ABSTRACT

BACKGROUND AND AIMS: Capsule endoscopy (CE) is useful in evaluating disease surveillance for primary small-bowel follicular lymphoma (FL), but some cases are difficult to evaluate objectively. This study evaluated the usefulness of a deep convolutional neural network (CNN) system using CE images for disease surveillance of primary small-bowel FL. METHODS: We enrolled 26 consecutive patients with primary small-bowel FL diagnosed between January 2011 and January 2021 who underwent CE before and after a watch-and-wait strategy or chemotherapy. Disease surveillance by the CNN system was evaluated by the percentage of FL-detected images among all CE images of the small-bowel mucosa. RESULTS: Eighteen cases (69%) were managed with a watch-and-wait approach, and 8 cases (31%) were treated with chemotherapy. Among the 18 cases managed with the watch-and-wait approach, the outcome of lesion evaluation by the CNN system was almost the same in 13 cases (72%), aggravation in 4 (22%), and improvement in 1 (6%). Among the 8 cases treated with chemotherapy, the outcome of lesion evaluation by the CNN system was improvement in 5 cases (63%), almost the same in 2 (25%), and aggravation in 1 (12%). The physician and CNN system reported similar results regarding disease surveillance evaluation in 23 of 26 cases (88%), whereas a discrepancy between the 2 was found in the remaining 3 cases (12%), attributed to poor small-bowel cleansing level. CONCLUSIONS: Disease surveillance evaluation of primary small-bowel FL using CE images by the developed CNN system was useful under the condition of excellent small-bowel cleansing level.


Subjects
Capsule Endoscopy , Lymphoma, Follicular , Humans , Capsule Endoscopy/methods , Lymphoma, Follicular/diagnostic imaging , Lymphoma, Follicular/drug therapy , Neural Networks, Computer , Intestine, Small/diagnostic imaging , Intestine, Small/pathology , Duodenum
6.
BMC Gastroenterol ; 23(1): 184, 2023 May 25.
Article in English | MEDLINE | ID: mdl-37231330

ABSTRACT

BACKGROUND: Several pre-clinical studies have reported the usefulness of artificial intelligence (AI) systems in the diagnosis of esophageal squamous cell carcinoma (ESCC). We conducted this study to evaluate the usefulness of an AI system for real-time diagnosis of ESCC in a clinical setting. METHODS: This study followed a single-center prospective single-arm non-inferiority design. Patients at high risk for ESCC were recruited, and real-time diagnosis by the AI system was compared with that of endoscopists for lesions suspected to be ESCC. The primary outcomes were the diagnostic accuracy of the AI system and endoscopists. The secondary outcomes were sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and adverse events. RESULTS: A total of 237 lesions were evaluated. The accuracy, sensitivity, and specificity of the AI system were 80.6%, 68.2%, and 83.4%, respectively. The accuracy, sensitivity, and specificity of the endoscopists were 85.7%, 61.4%, and 91.2%, respectively. The difference between the accuracy of the AI system and that of the endoscopists was -5.1%, and the lower limit of the 90% confidence interval was less than the non-inferiority margin. CONCLUSIONS: The non-inferiority of the AI system in comparison with endoscopists in the real-time diagnosis of ESCC in a clinical setting was not proven. TRIAL REGISTRATION: Japan Registry of Clinical Trials (jRCTs052200015, 18/05/2020).
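The trial's verdict hinges on whether the lower bound of the confidence interval for the accuracy difference clears the prespecified margin. A rough sketch using a Wald-type interval (the z value and the illustrative proportions are assumptions; the trial's own estimator may differ):

```python
import math

def noninferiority(p_test, n_test, p_ref, n_ref, margin, z=1.6449):
    """Difference in two proportions with a Wald-style lower bound.

    z = 1.6449 corresponds to a two-sided 90% (one-sided 95%) interval.
    Non-inferiority is claimed only if the lower bound exceeds -margin.
    """
    diff = p_test - p_ref
    se = math.sqrt(p_test * (1 - p_test) / n_test
                   + p_ref * (1 - p_ref) / n_ref)
    lower = diff - z * se
    return diff, lower, lower > -margin

# Illustrative accuracies resembling the abstract (both arms n = 237)
diff, lower, ok = noninferiority(0.806, 237, 0.857, 237, margin=0.05)
```

With these inputs the lower bound falls below -5%, so non-inferiority would not be declared, which is consistent with the abstract's conclusion.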


Subjects
Esophageal Neoplasms , Esophageal Squamous Cell Carcinoma , Humans , Artificial Intelligence , Esophageal Neoplasms/diagnosis , Esophageal Neoplasms/pathology , Esophageal Squamous Cell Carcinoma/diagnosis , Esophageal Squamous Cell Carcinoma/pathology , Esophagoscopy , Prospective Studies
7.
Ann Nucl Med ; 37(7): 410-418, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37160863

ABSTRACT

OBJECTIVES: The standardised uptake value ratio (SUVR) is usually obtained by dividing the SUV of the region of interest (ROI) by that of the cerebellar cortex. However, the cerebellar cortex is not a valid reference in cases where amyloid β deposition or lesions are present, and only a few studies have evaluated the use of other regions as references. We compared the validity of the pons and corpus callosum as reference regions for the quantitative evaluation of brain positron emission tomography (PET) using 11C-PiB, relative to the cerebellar cortex. METHODS: We retrospectively evaluated data from 86 subjects with or without Alzheimer's disease (AD). All subjects underwent magnetic resonance imaging, PET imaging, and cognitive function testing. For the quantitative analysis, three-dimensional ROIs were placed automatically, and SUV and SUVR were obtained. We compared these values between the AD and healthy control (HC) groups. RESULTS: SUVR data obtained using the pons and corpus callosum as reference regions correlated strongly with those obtained using the cerebellar cortex. Sensitivity and specificity were high when either the pons or the corpus callosum was used as the reference region. However, the SUV values of the corpus callosum differed between AD and HC (p < 0.01). CONCLUSIONS: Our data suggest that the pons and corpus callosum might be valid reference regions.
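The SUVR at the core of this comparison is a plain ratio of uptake values; a minimal sketch (the numbers are illustrative, not from the study):

```python
def suvr(suv_roi, suv_reference):
    """Standardised uptake value ratio: SUV in the target region divided
    by SUV in the reference region (cerebellar cortex, pons, or corpus
    callosum in the study above)."""
    if suv_reference <= 0:
        raise ValueError("reference SUV must be positive")
    return suv_roi / suv_reference

ratio = suvr(2.4, 1.2)  # -> 2.0
```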


Subjects
Alzheimer Disease , Humans , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Amyloid beta-Peptides/metabolism , Corpus Callosum/metabolism , Corpus Callosum/pathology , Retrospective Studies , Positron-Emission Tomography/methods , Brain/metabolism , Pons/diagnostic imaging , Pons/metabolism , Pons/pathology , Aniline Compounds
8.
Dig Endosc ; 35(4): 483-491, 2023 May.
Article in English | MEDLINE | ID: mdl-36239483

ABSTRACT

OBJECTIVES: Endoscopists' abilities to diagnose early gastric cancers (EGCs) vary, especially between specialists and nonspecialists. We developed an artificial intelligence (AI)-based diagnostic support tool "Tango" to differentiate EGCs and compared its performance with that of endoscopists. METHODS: The diagnostic performances of Tango and endoscopists (34 specialists, 42 nonspecialists) were compared using still images of 150 neoplastic and 165 non-neoplastic lesions. Neoplastic lesions included EGCs and adenomas. The primary outcome was to show the noninferiority of Tango (based on sensitivity) over specialists. The secondary outcomes were the noninferiority of Tango (based on accuracy) over specialists and the superiority of Tango (based on sensitivity and accuracy) over nonspecialists. The lower limit of the 95% confidence interval (CI) of the difference between Tango and the specialists for sensitivity was calculated, with >-10% defined as noninferiority and >0% defined as superiority in the primary outcome. The corresponding differences between Tango and the endoscopists for each performance measure were calculated, with >10% defined as superiority and >0% defined as noninferiority in the secondary outcomes. RESULTS: Tango achieved superiority over the specialists based on sensitivity (84.7% vs. 65.8%, difference 18.9%, 95% CI 12.3-25.3%) and demonstrated noninferiority based on accuracy (70.8% vs. 67.4%). Tango achieved superiority over the nonspecialists based on sensitivity (84.7% vs. 51.0%) and accuracy (70.8% vs. 58.4%). CONCLUSIONS: The AI-based diagnostic support tool for EGCs demonstrated a robust performance and may be useful for reducing misdiagnosis.


Subjects
Artificial Intelligence , Stomach Neoplasms , Humans , Retrospective Studies , Stomach Neoplasms/diagnosis
9.
Diagnostics (Basel) ; 12(12)2022 Dec 13.
Article in English | MEDLINE | ID: mdl-36553160

ABSTRACT

Artificial intelligence (AI) is gradually being utilized in various fields as its performance has improved with the development of deep learning methods, the availability of big data, and the progression of computer processing units. In the field of medicine, AI is mainly implemented in image recognition, such as in radiographic and pathologic diagnoses. In the realm of gastrointestinal endoscopy, although AI-based computer-assisted detection/diagnosis (CAD) systems have been applied in some areas, such as colorectal polyp detection and diagnosis, their implementation in real-world clinical settings has so far been limited. The accurate detection or diagnosis of gastric cancer (GC) is one of the challenges in which performance varies greatly depending on the endoscopist's skill. The diagnosis of early GC is especially challenging, partly because early GC mimics atrophic gastritis in the background mucosa. Therefore, several CAD systems for GC are being actively developed. Developing a CAD system for GC is considered challenging because it requires a large number of GC images. In particular, early-stage GC images are rarely available, partly because gastric cancer is difficult to diagnose at an early stage. Additionally, the training image data should be of sufficiently high quality for proper CAD training. Recently, several AI systems for GC that exhibit robust performance, owing to being trained on a large number of high-quality images, have been reported. This review outlines the current status and prospects of AI use in esophagogastroduodenoscopy (EGDS), focusing on the diagnosis of GC.

10.
JGH Open ; 6(10): 704-710, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36262541

ABSTRACT

Background and Aim: Gastric atrophy is a precancerous lesion. We aimed to clarify whether gastric atrophy determined by artificial intelligence (AI) correlates with the diagnosis made by expert endoscopists using several endoscopic classifications, with the Operative Link on Gastritis Assessment (OLGA) classification based on histological findings, and with genotypes associated with gastric atrophy and cancer. Methods: Two hundred seventy Helicobacter pylori-positive outpatients were enrolled. All patients' endoscopy data were retrospectively evaluated based on the Kimura-Takemoto, modified Kyoto, and OLGA classifications. The AI-trained neural network generated a continuous number between 0 and 1 for gastric atrophy. Nucleotide variance of candidate genes was confirmed or selectively assessed for a variety of genotypes, including the COX-2 1195, IL-1β 511, and mPGES-1 genotypes. Results: There were significant correlations between determinations of gastric atrophy by AI and by expert endoscopists using not only the Kimura-Takemoto classification (P < 0.001) but also the modified Kyoto classification (P = 0.046 and P < 0.001 for the two criteria). Moreover, there was a significant correlation with the OLGA classification (P = 0.009). Nucleotide variance of the COX-2, IL-1β, and mPGES-1 genes was not significantly associated with gastric atrophy determined by AI. The area under the curve values of the combinations of AI and the modified Kyoto classification (0.746) and of AI and the OLGA classification (0.675) were higher than that of AI alone (0.665). Conclusion: Combinations of AI with the modified Kyoto classification or with the OLGA classification could be useful tools for evaluating gastric atrophy, and thus gastric cancer risk, in patients with H. pylori infection.

11.
DEN Open ; 2(1): e72, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35873509

ABSTRACT

The application of artificial intelligence (AI) using deep learning has significantly expanded in the field of esophagogastric endoscopy. Recent studies have shown promising results in detecting and differentiating early gastric cancer using AI tools built using white light, magnified, or image-enhanced endoscopic images. Some studies have reported the use of AI tools to predict the depth of early gastric cancer based on endoscopic images. Similarly, studies based on using AI for detecting early esophageal cancer have also been reported, with an accuracy comparable to that of endoscopy specialists. Moreover, an AI system, developed to diagnose pharyngeal cancer, has shown promising performance with high sensitivity. These reports suggest that, if introduced for regular use in clinical settings, AI systems can significantly reduce the burden on physicians. This review summarizes the current status of AI applications in the upper gastrointestinal tract and presents directions for clinical practice implementation and future research.

13.
J Clin Med ; 11(9)2022 Apr 30.
Article in English | MEDLINE | ID: mdl-35566653

ABSTRACT

Subjective symptoms associated with eosinophilic esophagitis (EoE), such as dysphagia, are not specific, thus the endoscopic identification of suggestive EoE findings is quite important for facilitating endoscopic biopsy sampling. However, poor inter-observer agreement among endoscopists regarding diagnosis has become a complicated issue, especially with inexperienced practitioners. Therefore, we constructed a computer-assisted diagnosis (CAD) system using a convolutional neural network (CNN) and evaluated its performance as a diagnostic utility. A CNN-based CAD system was developed based on ResNet50 architecture. The CNN was trained using a total of 1192 characteristic endoscopic images of 108 patients histologically proven to be in an active phase of EoE (≥15 eosinophils per high power field) as well as 1192 normal esophagus images. To evaluate diagnostic accuracy, an independent test set of 756 endoscopic images from 35 patients with EoE and 96 subjects with a normal esophagus was examined with the constructed CNN. The CNN correctly diagnosed EoE in 94.7% using a diagnosis per image analysis, with an overall sensitivity of 90.8% and specificity of 96.6%. For each case, the CNN correctly diagnosed 37 of 39 EoE cases with overall sensitivity and specificity of 94.9% and 99.0%, respectively. These findings indicate the usefulness of CNN for diagnosing EoE, especially for aiding inexperienced endoscopists during medical check-up screening.

14.
BMC Gastroenterol ; 22(1): 237, 2022 May 12.
Article in English | MEDLINE | ID: mdl-35549679

ABSTRACT

BACKGROUND: Endocytoscopy (ECS) aids early gastric cancer (EGC) diagnosis by visualization of cells. However, it is difficult for non-experts to accurately diagnose EGC using ECS. In this study, we developed and evaluated a convolutional neural network (CNN)-based system for ECS-aided EGC diagnosis. METHODS: We constructed a CNN based on a residual neural network with a training dataset comprising 906 images from 61 EGC cases and 717 images from 65 noncancerous gastric mucosa (NGM) cases. To evaluate diagnostic ability, we used an independent test dataset comprising 313 images from 39 EGC cases and 235 images from 33 NGM cases. The test dataset was further evaluated by three endoscopists, and their findings were compared with CNN-based results. RESULTS: The trained CNN required 7.0 s to analyze the test dataset. The area under the curve of the total ECS images was 0.93. The CNN produced 18 false positives from 7 NGM lesions and 74 false negatives from 28 EGC lesions. In the per-image analysis, the accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 83.2%, 76.4%, 92.3%, 93.0%, and 74.6%, respectively, with the CNN and 76.8%, 73.4%, 81.3%, 83.9%, and 69.6%, respectively, for the endoscopist-derived values. The CNN-based findings had significantly higher specificity than the findings determined by all endoscopists. In the per-lesion analysis, the accuracy, sensitivity, specificity, PPV, and NPV of the CNN-based findings were 86.1%, 82.1%, 90.9%, 91.4%, and 81.1%, respectively, and those of the results calculated by the endoscopists were 82.4%, 79.5%, 85.9%, 86.9%, and 78.0%, respectively. CONCLUSIONS: Compared with three endoscopists, our CNN for ECS demonstrated higher specificity for EGC diagnosis. Using the CNN in ECS-based EGC diagnosis may improve the diagnostic performance of endoscopists.
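The area under the curve reported above can be computed without drawing the ROC curve: it equals the probability that a randomly chosen positive image receives a higher score than a randomly chosen negative one (the Mann-Whitney form), with ties counted as one half. A minimal sketch with made-up scores:

```python
def auc_from_scores(pos_scores, neg_scores):
    """Empirical ROC AUC via the Mann-Whitney pairwise comparison:
    the fraction of (positive, negative) pairs ranked correctly."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5   # ties count one half
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical CNN scores for 3 cancer images and 2 normal images:
# 5 of the 6 pairs are ordered correctly
auc = auc_from_scores([0.9, 0.8, 0.4], [0.3, 0.5])
```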


Subjects
Stomach Neoplasms , Early Detection of Cancer/methods , Gastric Mucosa/diagnostic imaging , Gastric Mucosa/pathology , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Stomach Neoplasms/diagnostic imaging , Stomach Neoplasms/pathology
15.
J Appl Clin Med Phys ; 23(7): e13626, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35536775

ABSTRACT

PURPOSE: Accurate tracer accumulation evaluation is difficult owing to the partial volume effect (PVE). We proposed a novel semi-quantitative approach for measuring the accumulation amount by examining an approximate image. Using a striatal phantom, we verified the validity of the newly proposed method for accurately evaluating tracer accumulations in the caudate and putamen separately, and we compared the proposed method with conventional methods. METHODS: The left and right caudate/putamen regions, and the whole brain region as background, were identified in computed tomography (CT) images obtained by single-photon emission computed tomography (SPECT)/CT, and the positional information of each region was acquired. SPECT-like images were generated by assigning assumed accumulation amounts to each region. The SPECT-like image approximating the actual measured SPECT image was found by changing the assumed accumulation amounts assigned to each region. When the generated SPECT-like image most closely approximated the actual measured SPECT image, the assumed accumulation amounts were taken as the accumulation amounts in each region. We evaluated the correlation between the count density calculated by the proposed method and the actual count density of the 123I solution filling the phantom. Conventional methods (CT-guide method, geometric transfer matrix [GTM] method, region-based voxel-wise [RBV] method, and Southampton method) were also evaluated. The significance of differences between the correlation coefficients of the various methods (except the Southampton method) was evaluated. RESULTS: The correlation coefficients between the actual count density and the SPECT count densities were 0.997, 0.973, 0.951, 0.950, and 0.996 for the proposed method, CT-guide method, GTM method, RBV method, and Southampton method, respectively. The correlation of the proposed method was significantly higher than those of the other methods. CONCLUSIONS: The proposed method can calculate accurate accumulation amounts in the caudate and putamen separately while accounting for the PVE.
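The search described above — assigning trial accumulation amounts to region masks, simulating the blurred image, and keeping the best match — can be illustrated with a toy one-dimensional sketch. The smoothing kernel, masks, and candidate amounts below are illustrative stand-ins, not values from the paper:

```python
import itertools

def blur(signal, kernel=(0.25, 0.5, 0.25)):
    """3-tap smoothing as a crude stand-in for the scanner's point-spread
    function, which is what causes the partial volume effect."""
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            k = i + j - 1
            if 0 <= k < len(signal):
                acc += w * signal[k]
        out.append(acc)
    return out

def fit_amounts(measured, masks, candidates):
    """Try every combination of candidate amounts per region and keep the
    one whose blurred SPECT-like image best matches the measurement
    (least squares)."""
    best, best_err = None, float("inf")
    for amounts in itertools.product(candidates, repeat=len(masks)):
        model = [0.0] * len(measured)
        for amount, mask in zip(amounts, masks):
            for i in mask:
                model[i] += amount
        err = sum((m - x) ** 2 for m, x in zip(measured, blur(model)))
        if err < best_err:
            best, best_err = amounts, err
    return best
```

On a synthetic phantom whose true amounts are among the candidates, the search recovers them exactly, which is the sense in which the generated SPECT-like image "most approximates" the measurement.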


Subjects
Dopamine Plasma Membrane Transport Proteins , Tomography, Emission-Computed, Single-Photon , Brain , Dopamine Plasma Membrane Transport Proteins/metabolism , Humans , Phantoms, Imaging , Tomography, Emission-Computed, Single-Photon/methods
16.
Sci Rep ; 12(1): 6677, 2022 04 23.
Article in English | MEDLINE | ID: mdl-35461350

ABSTRACT

Previous reports have shown favorable performance of artificial intelligence (AI) systems for diagnosing esophageal squamous cell carcinoma (ESCC) compared with endoscopists. However, these findings do not reflect performance in clinical situations, as endoscopists classify lesions based on both magnified and non-magnified videos, while AI systems often use only a few magnified narrow band imaging (NBI) still images. We evaluated the performance of our AI system in simulated clinical situations. We used 25,048 images from 1433 superficial ESCCs and 4746 images from 410 noncancerous esophagi to construct the AI system. For the validation dataset, we took NBI videos of suspected superficial ESCCs. The AI system made its diagnosis from one magnified still image taken from each video, while 19 endoscopists used the whole videos. We used 147 videos and still images, including 83 superficial ESCC and 64 non-ESCC lesions. The accuracy, sensitivity, and specificity for the classification of ESCC were, respectively, 80.9% [95% CI 73.6-87.0], 85.5% [76.1-92.3], and 75.0% [62.6-85.0] for the AI system and 69.2% [66.4-72.1], 67.5% [61.4-73.6], and 71.5% [61.9-81.0] for the endoscopists. The AI system correctly classified all ESCCs invading the muscularis mucosa or submucosa and 96.8% of lesions ≥ 20 mm, whereas even the experts diagnosed some of them as non-ESCCs. Our AI system showed higher accuracy for classifying ESCC and non-ESCC than endoscopists. It may provide valuable diagnostic support to endoscopists.


Subjects
Esophageal Neoplasms , Esophageal Squamous Cell Carcinoma , Artificial Intelligence , Esophageal Neoplasms/diagnostic imaging , Esophageal Neoplasms/pathology , Esophageal Squamous Cell Carcinoma/diagnosis , Esophageal Squamous Cell Carcinoma/pathology , Humans , Narrow Band Imaging
17.
Dis Esophagus ; 35(9)2022 Sep 14.
Article in English | MEDLINE | ID: mdl-35292794

ABSTRACT

Endocytoscopy (EC) facilitates real-time histological diagnosis of esophageal lesions in vivo. We developed a deep-learning artificial intelligence (AI) system for analysis of EC images and compared its diagnostic ability with that of an expert pathologist and nonexpert endoscopists. Our new AI was based on a vision transformer model (DeiT) and trained using 7983 EC images of the esophagus (2368 malignant and 5615 nonmalignant). The AI evaluated 114 randomly arranged EC pictures (33 ESCC and 81 nonmalignant lesions) from 38 consecutive cases. An expert pathologist and two nonexpert endoscopists also analyzed the same image set according to the modified type classification (adding four EC features of nonmalignant lesions to our previous classification). The area under the curve calculated from the receiver-operating characteristic curve for the AI analysis was 0.92. In per-image analysis, the overall accuracy of the AI, pathologist, and two endoscopists was 91.2%, 91.2%, 85.9%, and 83.3%, respectively. The kappa value between the pathologist and the AI, and between the two endoscopists and the AI showed moderate concordance; that between the pathologist and the two endoscopists showed poor concordance. In per-patient analysis, the overall accuracy of the AI, pathologist, and two endoscopists was 94.7%, 92.1%, 86.8%, and 89.5%, respectively. The modified type classification aided high overall diagnostic accuracy by the pathologist and nonexpert endoscopists. The diagnostic ability of the AI was equal or superior to that of the experienced pathologist. AI is expected to support endoscopists in diagnosing esophageal lesions based on EC images.


Subjects
Artificial Intelligence , Endoscopy , Endoscopy/methods , Esophagus/diagnostic imaging , Humans , Image Processing, Computer-Assisted , ROC Curve
18.
J Clin Lab Anal ; 36(1): e24122, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34811809

ABSTRACT

BACKGROUND AND AIM: Gastrointestinal endoscopy and biopsy-based pathological findings are needed to diagnose early gastric cancer. However, the information obtained from a biopsy specimen is limited because of the topical procedure; therefore, pathologists sometimes make a diagnosis of gastric indefinite for dysplasia (GIN). METHODS: We compared the accuracy of physician-performed endoscopy (trainees, n = 3; specialists, n = 3), artificial intelligence (AI)-based endoscopy, and/or molecular markers (DNA methylation: BARHL2, MINT31, TET1, miR-148a, miR-124a-3, NKX6-1; mutations: TP53; and microsatellite instability) in diagnosing GIN lesions. We enrolled 24,388 patients who underwent endoscopy, and 71 patients were diagnosed with GIN lesions. Of the 71 GIN lesions, 32 were treated by endoscopic submucosal dissection (ESD), and the 32 endoscopically resected tissues were assessed by endoscopists, AI, and molecular markers to classify lesions as benign or malignant. RESULTS: The board-certified endoscopic physicians group showed the highest accuracy by receiver operating characteristic curve analysis (area under the curve [AUC]: 0.931), followed by the combination of AI and miR-148a DNA methylation (AUC: 0.825), and finally the trainee endoscopists (AUC: 0.588). CONCLUSION: AI combined with miR-148a DNA methylation-based diagnosis is a potential modality for diagnosing GIN.


Subjects
Artificial Intelligence , Diagnosis, Computer-Assisted/methods , Endoscopy, Gastrointestinal , MicroRNAs/genetics , Stomach Neoplasms , Aged , Aged, 80 and over , Biomarkers, Tumor/genetics , DNA Methylation/genetics , Early Detection of Cancer , Endoscopic Mucosal Resection , Female , Humans , Male , Middle Aged , Stomach/pathology , Stomach/surgery , Stomach Neoplasms/diagnosis , Stomach Neoplasms/genetics , Stomach Neoplasms/pathology , Stomach Neoplasms/surgery
19.
Endoscopy ; 54(8): 780-784, 2022 08.
Article in English | MEDLINE | ID: mdl-34607377

ABSTRACT

AIMS: To compare the rate of gastric cancer diagnosis from endoscopic images between artificial intelligence (AI) and expert endoscopists. PATIENTS AND METHODS: We used retrospective data from 500 patients, including 100 with gastric cancer, matched 1:1 to diagnosis by AI or expert endoscopists. We retrospectively evaluated the noninferiority (prespecified margin 5%) of the per-patient rate of gastric cancer diagnosis by AI and compared the per-image rate of gastric cancer diagnosis. RESULTS: Gastric cancer was diagnosed in 49 of 49 patients (100%) in the AI group and 48 of 51 patients (94.12%) in the expert endoscopist group (difference 5.88%, 95% confidence interval: -0.58 to 12.3). The per-image rate of gastric cancer diagnosis was higher in the AI group (99.87%, 747/748 images) than in the expert endoscopist group (88.17%, 693/786 images) (difference 11.7%). CONCLUSIONS: Noninferiority of the rate of gastric cancer diagnosis by AI was demonstrated, but superiority was not.


Subjects
Artificial Intelligence , Stomach Neoplasms , Endoscopy , Endoscopy, Gastrointestinal/methods , Humans , Retrospective Studies , Stomach Neoplasms/diagnostic imaging
20.
Gastroenterol Rep (Oxf) ; 9(3): 226-233, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34316372

ABSTRACT

BACKGROUND: A colonoscopy can detect colorectal diseases, including cancers, polyps, and inflammatory bowel diseases. A computer-aided diagnosis (CAD) system using deep convolutional neural networks (CNNs) that can recognize anatomical locations during a colonoscopy could efficiently assist practitioners. We aimed to construct a CAD system using a CNN to distinguish colorectal images from parts of the cecum, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum. METHOD: We constructed a CNN by training it on 9,995 colonoscopy images and tested its performance on 5,121 independent colonoscopy images categorized according to seven anatomical locations: the terminal ileum; the cecum; the ascending to transverse colon; the descending to sigmoid colon; the rectum; the anus; and indistinguishable parts. We examined images taken during total colonoscopies performed between January 2017 and November 2017 at a single center. We evaluated the concordance between the diagnoses of the endoscopists and those of the CNN. The main outcomes of the study were the sensitivity and specificity of the CNN for the anatomical categorization of colonoscopy images. RESULTS: The constructed CNN recognized the anatomical locations of colonoscopy images with the following areas under the curve: 0.979 for the terminal ileum; 0.940 for the cecum; 0.875 for the ascending to transverse colon; 0.846 for the descending to sigmoid colon; 0.835 for the rectum; and 0.992 for the anus. During testing, the CNN correctly recognized 66.6% of images. CONCLUSION: We constructed a new CNN system with clinically relevant performance for recognizing the anatomical locations of colonoscopy images, a first step toward a CAD system that supports practitioners during colonoscopy and provides assurance of the quality of the procedure.
