Results 1 - 20 of 88
1.
Endoscopy ; 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39317205

ABSTRACT

Background Artificial intelligence (AI) has made remarkable progress in image recognition using deep learning systems and has been used to detect esophageal squamous cell carcinoma (ESCC). However, previous reports evaluated AI only retrospectively, not in clinical settings. We therefore conducted this trial to determine how AI can help endoscopists detect ESCC in clinical settings. Methods This was a prospective, single-center, exploratory, randomized controlled trial. Patients at high risk for ESCC undergoing screening or surveillance esophagogastroduodenoscopy were enrolled and randomly assigned to either the AI or the control group. In the AI group, the endoscopists watched both the AI monitor, which detected ESCC with on-screen annotation, and the normal monitor simultaneously, whereas in the control group, the endoscopists watched only the normal monitor. In both groups, the endoscopists observed the esophagus using white-light imaging (WLI), followed by narrow-band imaging (NBI) and iodine staining. The primary endpoint was an improved ESCC detection rate by non-experts using AI. The detection rate was defined as the ratio of WLI/NBI-detected ESCCs to all ESCCs detected by iodine staining. Results A total of 320 patients were included in this analysis. The ESCC detection rate among non-experts was 47% in the AI group and 45% in the control group (p=0.93), with no significant difference; likewise, no significant difference was found among experts (87% vs. 57%, p=0.20) or among all endoscopists (57% vs. 50%, p=0.70). Conclusions This study could not demonstrate an improvement in the esophageal cancer detection rate using the AI diagnostic support system for ESCC.
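
To make the detection-rate endpoint above concrete, here is a minimal Python sketch of how such a rate and the between-group comparison could be computed; the lesion counts are invented placeholders, not figures from this trial.

```python
from scipy.stats import fisher_exact

def detection_rate(wli_nbi_detected: int, iodine_detected: int) -> float:
    """Detection rate = ESCCs found on WLI/NBI divided by all ESCCs found by iodine staining."""
    return wli_nbi_detected / iodine_detected

# Hypothetical per-group lesion counts (placeholders only).
ai_hit, ai_total = 14, 30          # AI group
ctrl_hit, ctrl_total = 13, 29      # control group

table = [[ai_hit, ai_total - ai_hit],
         [ctrl_hit, ctrl_total - ctrl_hit]]
odds_ratio, p_value = fisher_exact(table)

print(f"AI group: {detection_rate(ai_hit, ai_total):.0%}, "
      f"control: {detection_rate(ctrl_hit, ctrl_total):.0%}, p = {p_value:.2f}")
```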

2.
J Gastroenterol Hepatol ; 39(1): 157-164, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37830487

ABSTRACT

BACKGROUND AND AIM: Convolutional neural network (CNN) systems that automatically detect abnormalities from small-bowel capsule endoscopy (SBCE) images are still experimental, and no studies have directly compared the clinical usefulness of different systems. We compared endoscopist readings using an existing and a novel CNN system in a real-world SBCE setting. METHODS: Thirty-six complete SBCE videos, including 43 abnormal lesions (18 mucosal breaks, 8 angioectasia, and 17 protruding lesions), were retrospectively prepared. Three reading processes were compared: (A) endoscopist readings without CNN screening, (B) endoscopist readings after an existing CNN screening, and (C) endoscopist readings after a novel CNN screening. RESULTS: The mean number of small-bowel images was 14 747 per patient. Among these images, existing and novel CNN systems automatically captured 24.3% and 9.4% of the images, respectively. In this process, both systems extracted all 43 abnormal lesions. Next, we focused on the clinical usefulness. The detection rates of abnormalities by trainee endoscopists were not significantly different across the three processes: A, 77%; B, 67%; and C, 79%. The mean reading time of the trainees was the shortest during process C (10.1 min per patient), followed by processes B (23.1 min per patient) and A (33.6 min per patient). The mean psychological stress score while reading videos (scale, 1-5) was the lowest in process C (1.8) but was not significantly different between processes B (2.8) and A (3.2). CONCLUSIONS: Our novel CNN system significantly reduced endoscopist reading time and psychological stress while maintaining the detectability of abnormalities. CNN performance directly affects clinical utility and should be carefully assessed.


Subject(s)
Capsule Endoscopy; Deep Learning; Humans; Capsule Endoscopy/methods; Retrospective Studies; Intestine, Small/diagnostic imaging; Intestine, Small/pathology; Neural Networks, Computer
3.
Digestion ; : 1-17, 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39068926

ABSTRACT

BACKGROUND: Artificial intelligence (AI) using deep learning systems has recently been utilized in various medical fields. In the field of gastroenterology, AI is primarily implemented in image recognition and utilized in the realm of gastrointestinal (GI) endoscopy. In GI endoscopy, computer-aided detection/diagnosis (CAD) systems assist endoscopists in GI neoplasm detection or differentiation of cancerous or noncancerous lesions. Several AI systems for colorectal polyps have already been applied in colonoscopy clinical practices. In esophagogastroduodenoscopy, a few CAD systems for upper GI neoplasms have been launched in Asian countries. The usefulness of these CAD systems in GI endoscopy has been gradually elucidated. SUMMARY: In this review, we outline recent articles on several studies of endoscopic AI systems for GI neoplasms, focusing on esophageal squamous cell carcinoma (ESCC), esophageal adenocarcinoma (EAC), gastric cancer (GC), and colorectal polyps. In ESCC and EAC, computer-aided detection (CADe) systems were mainly developed, and a recent meta-analysis study showed sensitivities of 91.2% and 93.1% and specificities of 80% and 86.9%, respectively. In GC, a recent meta-analysis study on CADe systems demonstrated that their sensitivity and specificity were as high as 90%. A randomized controlled trial (RCT) also showed that the use of the CADe system reduced the miss rate. Regarding computer-aided diagnosis (CADx) systems for GC, although RCTs have not yet been conducted, most studies have demonstrated expert-level performance. In colorectal polyps, multiple RCTs have shown the usefulness of the CADe system for improving the polyp detection rate, and several CADx systems have been shown to have high accuracy in colorectal polyp differentiation. KEY MESSAGES: Most analyses of endoscopic AI systems suggested that their performance was better than that of nonexpert endoscopists and equivalent to that of expert endoscopists. Thus, endoscopic AI systems may be useful for reducing the risk of overlooking lesions and improving the diagnostic ability of endoscopists.

4.
Gastrointest Endosc ; 98(6): 968-976.e3, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37482106

ABSTRACT

BACKGROUND AND AIMS: Capsule endoscopy (CE) is useful in evaluating disease surveillance for primary small-bowel follicular lymphoma (FL), but some cases are difficult to evaluate objectively. This study evaluated the usefulness of a deep convolutional neural network (CNN) system using CE images for disease surveillance of primary small-bowel FL. METHODS: We enrolled 26 consecutive patients with primary small-bowel FL diagnosed between January 2011 and January 2021 who underwent CE before and after a watch-and-wait strategy or chemotherapy. Disease surveillance by the CNN system was evaluated by the percentage of FL-detected images among all CE images of the small-bowel mucosa. RESULTS: Eighteen cases (69%) were managed with a watch-and-wait approach, and 8 cases (31%) were treated with chemotherapy. Among the 18 cases managed with the watch-and-wait approach, the outcome of lesion evaluation by the CNN system was almost the same in 13 cases (72%), aggravation in 4 (22%), and improvement in 1 (6%). Among the 8 cases treated with chemotherapy, the outcome of lesion evaluation by the CNN system was improvement in 5 cases (63%), almost the same in 2 (25%), and aggravation in 1 (12%). The physician and CNN system reported similar results regarding disease surveillance evaluation in 23 of 26 cases (88%), whereas a discrepancy between the 2 was found in the remaining 3 cases (12%), attributed to poor small-bowel cleansing level. CONCLUSIONS: Disease surveillance evaluation of primary small-bowel FL using CE images by the developed CNN system was useful under the condition of excellent small-bowel cleansing level.


Subject(s)
Capsule Endoscopy; Lymphoma, Follicular; Humans; Capsule Endoscopy/methods; Lymphoma, Follicular/diagnostic imaging; Lymphoma, Follicular/drug therapy; Neural Networks, Computer; Intestine, Small/diagnostic imaging; Intestine, Small/pathology; Duodenum
5.
BMC Gastroenterol ; 23(1): 184, 2023 May 25.
Article in English | MEDLINE | ID: mdl-37231330

ABSTRACT

BACKGROUND: Several pre-clinical studies have reported the usefulness of artificial intelligence (AI) systems in the diagnosis of esophageal squamous cell carcinoma (ESCC). We conducted this study to evaluate the usefulness of an AI system for real-time diagnosis of ESCC in a clinical setting. METHODS: This study followed a single-center prospective single-arm non-inferiority design. Patients at high risk for ESCC were recruited, and real-time diagnosis by the AI system was compared with that of endoscopists for lesions suspected to be ESCC. The primary outcomes were the diagnostic accuracy of the AI system and endoscopists. The secondary outcomes were sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and adverse events. RESULTS: A total of 237 lesions were evaluated. The accuracy, sensitivity, and specificity of the AI system were 80.6%, 68.2%, and 83.4%, respectively. The accuracy, sensitivity, and specificity of endoscopists were 85.7%, 61.4%, and 91.2%, respectively. The difference between the accuracy of the AI system and that of the endoscopists was -5.1%, and the lower limit of the 90% confidence interval was below the non-inferiority margin. CONCLUSIONS: The non-inferiority of the AI system in comparison with endoscopists in the real-time diagnosis of ESCC in a clinical setting was not proven. TRIAL REGISTRATION: Japan Registry of Clinical Trials (jRCTs052200015, 18/05/2020).
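
As a rough illustration of the non-inferiority analysis described above, the sketch below computes a Wald 90% confidence interval for the difference between two accuracies and checks its lower limit against a margin. The shared sample size and the -10% margin are assumptions for illustration only, not the trial's actual statistical plan.

```python
import math
from scipy.stats import norm

def proportion_diff_ci(p1: float, n1: int, p2: float, n2: int, conf: float = 0.90):
    """Wald confidence interval for the difference of two independent proportions (p1 - p2)."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = norm.ppf(1 - (1 - conf) / 2)
    return diff, diff - z * se, diff + z * se

# Reported accuracies (AI 80.6%, endoscopists 85.7%); per-group n and margin are assumed.
diff, lower, upper = proportion_diff_ci(0.806, 237, 0.857, 237)
margin = -0.10
print(f"difference = {diff:+.1%}, 90% CI [{lower:+.1%}, {upper:+.1%}]")
print("non-inferior" if lower > margin else "non-inferiority not shown")
```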


Subject(s)
Esophageal Neoplasms; Esophageal Squamous Cell Carcinoma; Humans; Artificial Intelligence; Esophageal Neoplasms/diagnosis; Esophageal Neoplasms/pathology; Esophageal Squamous Cell Carcinoma/diagnosis; Esophageal Squamous Cell Carcinoma/pathology; Esophagoscopy; Prospective Studies
6.
J Gastroenterol Hepatol ; 38(9): 1587-1591, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37408330

ABSTRACT

OBJECTIVES: Artificial intelligence (AI) uses deep learning functionalities that may enhance the detection of early gastric cancer during endoscopy. An AI-based endoscopic system for upper endoscopy was recently developed in Japan. We aimed to validate this AI-based system in a Singaporean cohort. METHODS: Three hundred de-identified still images were prepared from endoscopy video files of subjects who underwent gastroscopy at the National University Hospital (NUH). Five specialists and 6 non-specialists (trainees) from NUH were assigned to read the images and categorize them as "neoplastic" or "non-neoplastic." The results were then compared with the readings performed by the endoscopic AI system. RESULTS: The mean accuracy, sensitivity, and specificity for the 11 endoscopists were 0.847, 0.525, and 0.872, respectively. The corresponding values for the AI-based system were 0.777, 0.591, and 0.791. Although AI did not perform better than the endoscopists overall, in the subgroup of high-grade dysplastic lesions only 29.1% were identified by the endoscopists, whereas 80% were classified as neoplastic by AI (P = 0.0011). The mean diagnostic time was also shorter with AI than with the endoscopists (42.02 s vs. 677.1 s; P < 0.001). CONCLUSION: We demonstrated that an AI system developed in another health system had comparable diagnostic accuracy in the evaluation of static images. AI systems are fast and not fatigable and may have a role in augmenting human diagnosis during endoscopy. With further advances in AI and larger studies to support its efficacy, AI will likely play a greater role in screening endoscopy in the future.


Subject(s)
Stomach Neoplasms; Humans; Stomach Neoplasms/diagnostic imaging; Artificial Intelligence; Gastroscopy; Asian People; Fatigue
7.
Dig Endosc ; 35(4): 483-491, 2023 May.
Article in English | MEDLINE | ID: mdl-36239483

ABSTRACT

OBJECTIVES: Endoscopists' abilities to diagnose early gastric cancers (EGCs) vary, especially between specialists and nonspecialists. We developed an artificial intelligence (AI)-based diagnostic support tool "Tango" to differentiate EGCs and compared its performance with that of endoscopists. METHODS: The diagnostic performances of Tango and endoscopists (34 specialists, 42 nonspecialists) were compared using still images of 150 neoplastic and 165 non-neoplastic lesions. Neoplastic lesions included EGCs and adenomas. The primary outcome was to show the noninferiority of Tango (based on sensitivity) over specialists. The secondary outcomes were the noninferiority of Tango (based on accuracy) over specialists and the superiority of Tango (based on sensitivity and accuracy) over nonspecialists. The lower limit of the 95% confidence interval (CI) of the difference between Tango and the specialists for sensitivity was calculated, with >-10% defined as noninferiority and >0% defined as superiority in the primary outcome. The corresponding differences between Tango and the endoscopists for each performance measure were calculated, with >10% defined as superiority and >0% defined as noninferiority for the secondary outcomes. RESULTS: Tango achieved superiority over the specialists based on sensitivity (84.7% vs. 65.8%, difference 18.9%, 95% CI 12.3-25.3%) and demonstrated noninferiority based on accuracy (70.8% vs. 67.4%). Tango achieved superiority over the nonspecialists based on sensitivity (84.7% vs. 51.0%) and accuracy (70.8% vs. 58.4%). CONCLUSIONS: The AI-based diagnostic support tool for EGCs demonstrated a robust performance and may be useful to reduce misdiagnosis.


Subject(s)
Artificial Intelligence; Stomach Neoplasms; Humans; Retrospective Studies; Stomach Neoplasms/diagnosis
8.
Endoscopy ; 54(8): 780-784, 2022 08.
Article in English | MEDLINE | ID: mdl-34607377

ABSTRACT

AIMS: To compare the rate of gastric cancer diagnosis from endoscopic images between artificial intelligence (AI) and expert endoscopists. PATIENTS AND METHODS: We used retrospective data from 500 patients, including 100 with gastric cancer, matched 1:1 to diagnosis by AI or by expert endoscopists. We retrospectively evaluated the noninferiority (prespecified margin 5%) of the per-patient rate of gastric cancer diagnosis by AI and compared the per-image rate of gastric cancer diagnosis. RESULTS: Gastric cancer was diagnosed in 49 of 49 patients (100%) in the AI group and 48 of 51 patients (94.12%) in the expert endoscopist group (difference 5.88, 95% confidence interval: -0.58 to 12.3). The per-image rate of gastric cancer diagnosis was higher in the AI group (99.87%, 747/748 images) than in the expert endoscopist group (88.17%, 693/786 images) (difference 11.7%). CONCLUSIONS: Noninferiority of the rate of gastric cancer diagnosis by AI was demonstrated, but superiority was not.


Subject(s)
Artificial Intelligence; Stomach Neoplasms; Endoscopy; Endoscopy, Gastrointestinal/methods; Humans; Retrospective Studies; Stomach Neoplasms/diagnostic imaging
9.
BMC Gastroenterol ; 22(1): 237, 2022 May 12.
Article in English | MEDLINE | ID: mdl-35549679

ABSTRACT

BACKGROUND: Endocytoscopy (ECS) aids early gastric cancer (EGC) diagnosis by visualization of cells. However, it is difficult for non-experts to accurately diagnose EGC using ECS. In this study, we developed and evaluated a convolutional neural network (CNN)-based system for ECS-aided EGC diagnosis. METHODS: We constructed a CNN based on a residual neural network with a training dataset comprising 906 images from 61 EGC cases and 717 images from 65 noncancerous gastric mucosa (NGM) cases. To evaluate diagnostic ability, we used an independent test dataset comprising 313 images from 39 EGC cases and 235 images from 33 NGM cases. The test dataset was further evaluated by three endoscopists, and their findings were compared with CNN-based results. RESULTS: The trained CNN required 7.0 s to analyze the test dataset. The area under the curve of the total ECS images was 0.93. The CNN produced 18 false positives from 7 NGM lesions and 74 false negatives from 28 EGC lesions. In the per-image analysis, the accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 83.2%, 76.4%, 92.3%, 93.0%, and 74.6%, respectively, with the CNN and 76.8%, 73.4%, 81.3%, 83.9%, and 69.6%, respectively, for the endoscopist-derived values. The CNN-based findings had significantly higher specificity than the findings determined by all endoscopists. In the per-lesion analysis, the accuracy, sensitivity, specificity, PPV, and NPV of the CNN-based findings were 86.1%, 82.1%, 90.9%, 91.4%, and 81.1%, respectively, and those of the results calculated by the endoscopists were 82.4%, 79.5%, 85.9%, 86.9%, and 78.0%, respectively. CONCLUSIONS: Compared with three endoscopists, our CNN for ECS demonstrated higher specificity for EGC diagnosis. Using the CNN in ECS-based EGC diagnosis may improve the diagnostic performance of endoscopists.
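
The per-image figures above follow directly from the confusion-matrix counts given in the abstract (313 EGC and 235 NGM test images, 74 false negatives, 18 false positives). The short sketch below reproduces them, as a reminder of how these metrics are defined.

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic metrics from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# Counts from the abstract: 313 EGC images with 74 missed, 235 NGM images with 18 false alarms.
metrics = diagnostic_metrics(tp=313 - 74, fp=18, tn=235 - 18, fn=74)
print({k: f"{v:.1%}" for k, v in metrics.items()})
# -> accuracy 83.2%, sensitivity 76.4%, specificity 92.3%, PPV 93.0%, NPV 74.6%
```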


Subject(s)
Stomach Neoplasms; Early Detection of Cancer/methods; Gastric Mucosa/diagnostic imaging; Gastric Mucosa/pathology; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Stomach Neoplasms/diagnostic imaging; Stomach Neoplasms/pathology
10.
J Clin Lab Anal ; 36(1): e24122, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34811809

ABSTRACT

BACKGROUND AND AIM: Gastrointestinal endoscopy and biopsy-based pathological findings are needed to diagnose early gastric cancer. However, the information provided by a biopsy specimen is limited because of the focal nature of the procedure; pathologists therefore sometimes render a diagnosis of gastric indefinite for dysplasia (GIN). METHODS: We compared the accuracy of physician-performed endoscopy (trainees, n = 3; specialists, n = 3), artificial intelligence (AI)-based endoscopy, and/or molecular markers (DNA methylation: BARHL2, MINT31, TET1, miR-148a, miR-124a-3, NKX6-1; mutations: TP53; and microsatellite instability) in diagnosing GIN lesions. We enrolled 24,388 patients who underwent endoscopy, of whom 71 were diagnosed with GIN lesions. Of the 71 GIN lesions, 32 underwent endoscopic submucosal dissection (ESD), and these 32 endoscopically resected tissues were assessed by endoscopists, AI, and molecular markers to classify the lesions as benign or malignant. RESULTS: The board-certified endoscopic physicians group showed the highest accuracy by receiver operating characteristic analysis (area under the curve [AUC]: 0.931), followed by the combination of AI and miR-148a DNA methylation (AUC: 0.825), and finally the trainee endoscopists (AUC: 0.588). CONCLUSION: AI combined with miR-148a DNA methylation-based diagnosis is a potential modality for diagnosing GIN.
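
A toy version of the ROC analysis above: combine an AI probability with a binary methylation call into a single score and measure discrimination by AUC. All data below are synthetic; nothing is taken from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y_true = np.array([0] * 16 + [1] * 16)                         # benign vs. malignant (synthetic)
ai_prob = np.clip(0.4 * y_true + rng.normal(0.4, 0.15, 32), 0, 1)
methylated = (rng.random(32) < 0.25 + 0.5 * y_true).astype(float)
combined = 0.5 * ai_prob + 0.5 * methylated                    # simple score fusion

print(f"AUC, AI alone:           {roc_auc_score(y_true, ai_prob):.3f}")
print(f"AUC, AI + methylation:   {roc_auc_score(y_true, combined):.3f}")
```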


Subject(s)
Artificial Intelligence; Diagnosis, Computer-Assisted/methods; Endoscopy, Gastrointestinal; MicroRNAs/genetics; Stomach Neoplasms; Aged; Aged, 80 and over; Biomarkers, Tumor/genetics; DNA Methylation/genetics; Early Detection of Cancer; Endoscopic Mucosal Resection; Female; Humans; Male; Middle Aged; Stomach/pathology; Stomach/surgery; Stomach Neoplasms/diagnosis; Stomach Neoplasms/genetics; Stomach Neoplasms/pathology; Stomach Neoplasms/surgery
11.
Dis Esophagus ; 35(9)2022 Sep 14.
Article in English | MEDLINE | ID: mdl-35292794

ABSTRACT

Endocytoscopy (EC) facilitates real-time histological diagnosis of esophageal lesions in vivo. We developed a deep-learning artificial intelligence (AI) system for analysis of EC images and compared its diagnostic ability with that of an expert pathologist and nonexpert endoscopists. Our new AI was based on a vision transformer model (DeiT) and trained using 7983 EC images of the esophagus (2368 malignant and 5615 nonmalignant). The AI evaluated 114 randomly arranged EC pictures (33 ESCC and 81 nonmalignant lesions) from 38 consecutive cases. An expert pathologist and two nonexpert endoscopists also analyzed the same image set according to the modified type classification (adding four EC features of nonmalignant lesions to our previous classification). The area under the curve calculated from the receiver-operating characteristic curve for the AI analysis was 0.92. In per-image analysis, the overall accuracy of the AI, pathologist, and two endoscopists was 91.2%, 91.2%, 85.9%, and 83.3%, respectively. The kappa value between the pathologist and the AI, and between the two endoscopists and the AI showed moderate concordance; that between the pathologist and the two endoscopists showed poor concordance. In per-patient analysis, the overall accuracy of the AI, pathologist, and two endoscopists was 94.7%, 92.1%, 86.8%, and 89.5%, respectively. The modified type classification aided high overall diagnostic accuracy by the pathologist and nonexpert endoscopists. The diagnostic ability of the AI was equal or superior to that of the experienced pathologist. AI is expected to support endoscopists in diagnosing esophageal lesions based on EC images.
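
A small sketch of the concordance statistic mentioned above: Cohen's kappa between two raters' per-image calls. The labels are invented for illustration; only the computation is the point.

```python
from sklearn.metrics import cohen_kappa_score

# Invented per-image calls (1 = malignant, 0 = nonmalignant) for two raters,
# e.g. the AI and the pathologist.
ai_calls          = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
pathologist_calls = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0]

kappa = cohen_kappa_score(ai_calls, pathologist_calls)
print(f"kappa = {kappa:.2f}")  # Landis-Koch: 0.41-0.60 moderate, 0.61-0.80 substantial
```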


Subject(s)
Artificial Intelligence; Endoscopy; Endoscopy/methods; Esophagus/diagnostic imaging; Humans; Image Processing, Computer-Assisted; ROC Curve
12.
J Appl Clin Med Phys ; 23(7): e13626, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35536775

ABSTRACT

PURPOSE: Accurate tracer accumulation evaluation is difficult owing to the partial volume effect (PVE). We proposed a novel semi-quantitative approach for measuring the accumulation amount by examining an approximated image. Using a striatal phantom, we verified the validity of the newly proposed method to accurately evaluate the tracer accumulations in the caudate and putamen separately. Moreover, we compared the proposed method with conventional methods. METHODS: The left and right caudate/putamen regions and the whole brain region as background were identified in computed tomography (CT) images obtained by single-photon emission computed tomography (SPECT)/CT, and the positional information of each region was acquired. SPECT-like images were generated by assigning assumed accumulation amounts to each region. The SPECT-like image, approximated to the actual measured SPECT image, was examined by changing the assumed accumulation amounts assigned to each region. When the generated SPECT-like image most closely approximated the actual measured SPECT image, the assumed accumulation amounts were taken as the accumulation amounts in each region. We evaluated the correlation between the count density calculated by the proposed method and the actual count density of the 123I solution filled in the phantom. Conventional methods (CT-guide method, geometric transfer matrix [GTM] method, region-based voxel-wise [RBV] method, and Southampton method) were also evaluated. The significance of differences between the correlation coefficients of the various methods (except the Southampton method) was evaluated. RESULTS: The correlation coefficients between the actual count density and the SPECT count densities were 0.997, 0.973, 0.951, 0.950, and 0.996 for the proposed method, CT-guide method, GTM method, RBV method, and Southampton method, respectively. The correlation of the proposed method was significantly higher than those of the other methods. CONCLUSIONS: The proposed method could calculate accurate accumulation amounts in the caudate and putamen separately, considering the PVE.
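
The core idea above, assigning trial activities to CT-defined regions, blurring them with the system resolution, and keeping the activities whose synthetic image best matches the measured SPECT image, can be sketched in a few lines. The geometry, point-spread width, and activity values below are invented; this illustrates the principle, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import nnls

shape = (64, 64)
regions = np.zeros(shape, dtype=int)          # 0 = background ("whole brain")
regions[20:30, 18:26] = 1                     # stand-in for the caudate
regions[34:46, 22:32] = 2                     # stand-in for the putamen

true_activity = np.array([1.0, 6.0, 8.0])     # background, caudate, putamen (arbitrary units)
psf_sigma = 3.0                               # assumed system resolution (pixels)

# "Measured" SPECT image: the ideal activity map blurred by the PSF.
measured = gaussian_filter(true_activity[regions], psf_sigma)

# Columns = blurred indicator images of each region; solve for the activities
# whose SPECT-like image best approximates the measured image.
A = np.stack([gaussian_filter((regions == r).astype(float), psf_sigma).ravel()
              for r in range(3)], axis=1)
estimated, _ = nnls(A, measured.ravel())
print("estimated activities:", np.round(estimated, 2))   # ~ [1.0, 6.0, 8.0]
```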


Subject(s)
Dopamine Plasma Membrane Transport Proteins; Tomography, Emission-Computed, Single-Photon; Brain; Dopamine Plasma Membrane Transport Proteins/metabolism; Humans; Phantoms, Imaging; Tomography, Emission-Computed, Single-Photon/methods
13.
Gastrointest Endosc ; 93(1): 165-173.e1, 2021 01.
Article in English | MEDLINE | ID: mdl-32417297

ABSTRACT

BACKGROUND AND AIMS: A deep convolutional neural network (CNN) system could be a high-level screening tool for capsule endoscopy (CE) reading but has not been established for targeting various abnormalities. We aimed to develop a CNN-based system and compare it with the existing QuickView mode in terms of their ability to detect various abnormalities. METHODS: We trained a CNN system using 66,028 CE images (44,684 images of abnormalities and 21,344 normal images). The detection rate of the CNN for various abnormalities was assessed per patient, using an independent test set of 379 consecutive small-bowel CE videos from 3 institutions. Mucosal breaks, angioectasia, protruding lesions, and blood content were present in 94, 29, 81, and 23 patients, respectively. The detection capability of the CNN was compared with that of QuickView mode. RESULTS: The CNN picked up 1,135,104 images (22.5%) from the 5,050,226 test images, and thus, the sampling rate of QuickView mode was set to 23% in this study. In total, the detection rate of the CNN for abnormalities per patient was significantly higher than that of QuickView mode (99% vs 89%, P < .001). The detection rates of the CNN for mucosal breaks, angioectasia, protruding lesions, and blood content were 100% (94 of 94), 97% (28 of 29), 99% (80 of 81), and 100% (23 of 23), respectively, and those of QuickView mode were 91%, 97%, 80%, and 96%, respectively. CONCLUSIONS: We developed and tested a CNN-based detection system for various abnormalities using multicenter CE videos. This system could serve as an alternative high-level screening tool to QuickView mode.
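
The per-patient detection rate used above treats an abnormality as detected when at least one of its images is picked up by the CNN. A minimal sketch of that aggregation, with invented records:

```python
from collections import defaultdict

# (patient_id, finding, picked_up_by_cnn) -- invented records for illustration.
image_results = [
    ("p01", "mucosal_break", False), ("p01", "mucosal_break", True),
    ("p02", "angioectasia",  True),
    ("p03", "protruding",    False), ("p03", "protruding", False),
    ("p04", "blood",         True),
]

detected = defaultdict(bool)
for patient, finding, picked_up in image_results:
    detected[(patient, finding)] |= picked_up    # any flagged image counts

rate = sum(detected.values()) / len(detected)
print(f"per-patient detection rate: {sum(detected.values())}/{len(detected)} = {rate:.0%}")
```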


Subject(s)
Capsule Endoscopy; Deep Learning; Humans; Intestine, Small/diagnostic imaging; Neural Networks, Computer
14.
Endoscopy ; 53(11): 1105-1113, 2021 11.
Article in English | MEDLINE | ID: mdl-33540446

ABSTRACT

BACKGROUND: It is known that an esophagus with multiple Lugol-voiding lesions (LVLs) after iodine staining is at high risk for esophageal cancer; however, it is preferable to identify high-risk cases without staining because iodine causes discomfort and prolongs examination times. This study assessed the capability of an artificial intelligence (AI) system to predict multiple LVLs from images taken without iodine staining and thereby to identify patients at high risk for esophageal cancer. METHODS: We constructed the AI system by preparing a training set of 6634 images from white-light and narrow-band imaging in 595 patients before they underwent endoscopic examination with iodine staining. Diagnostic performance was evaluated on an independent validation dataset (667 images from 72 patients) and compared with that of 10 experienced endoscopists. RESULTS: The sensitivity, specificity, and accuracy of the AI system to predict multiple LVLs were 84.4%, 70.0%, and 76.4%, respectively, compared with 46.9%, 77.5%, and 63.9%, respectively, for the endoscopists. The AI system had significantly higher sensitivity than 9 of the 10 experienced endoscopists. We also identified six endoscopic findings that were significantly more frequent in patients with multiple LVLs; however, the AI system had greater sensitivity than these findings for the prediction of multiple LVLs. Moreover, patients with AI-predicted multiple LVLs had significantly more cancers in the esophagus and head and neck than patients without predicted multiple LVLs. CONCLUSION: The AI system could predict multiple LVLs with high sensitivity from images without iodine staining. The system could enable endoscopists to apply iodine staining more judiciously.


Subject(s)
Esophageal Neoplasms; Esophageal Squamous Cell Carcinoma; Artificial Intelligence; Esophageal Neoplasms/diagnostic imaging; Esophagoscopy; Humans; Narrow Band Imaging
15.
J Nucl Cardiol ; 28(4): 1438-1445, 2021 Aug.
Article in English | MEDLINE | ID: mdl-31435883

ABSTRACT

BACKGROUND: Nearly one-third of patients with advanced heart failure (HF) do not benefit from cardiac resynchronization therapy (CRT). We developed a novel approach for optimizing CRT via simultaneous assessment of myocardial viability and an appropriate lead position, using a fusion technique combining CT coronary venography and myocardial perfusion imaging. METHODS AND RESULTS: Myocardial viability and coronary venous anatomy were evaluated by resting Tc-99m-tetrofosmin myocardial perfusion imaging (MPI) and contrast CT venography, respectively. Using fusion images reconstructed from MPI and CT coronary venography, the pacing site and lead length were determined for appropriate CRT device implantation in 4 HF patients. The efficacy of this method was assessed using symptomatic and echocardiographic functional parameters. In all patients, fusion images using MPI and CT coronary venograms were successfully reconstructed without any misregistration and contributed to effective CRT. Before surgery, this method enabled the operators to precisely identify the optimal indwelling site, which exhibited myocardial viability and allowed a lead length appropriate for device implantation. CONCLUSIONS: The fusion imaging technique using myocardial perfusion imaging and CT coronary venography is clinically feasible and promising for CRT optimization and for enhancing patient safety in patients with advanced HF.


Subject(s)
Cardiac Resynchronization Therapy; Heart Failure/diagnostic imaging; Myocardial Perfusion Imaging; Phlebography; Tomography, Emission-Computed, Single-Photon; Tomography, X-Ray Computed; Aged; Aged, 80 and over; Cardiac Resynchronization Therapy Devices; Coronary Angiography; Female; Heart Failure/therapy; Humans; Imaging, Three-Dimensional; Male; Middle Aged; Tissue Survival
16.
J Gastroenterol Hepatol ; 36(2): 482-489, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32681536

ABSTRACT

BACKGROUND AND AIM: Magnifying endoscopy with narrow-band imaging (ME-NBI) has made a huge contribution to clinical practice. However, acquiring skill at ME-NBI diagnosis of early gastric cancer (EGC) requires considerable expertise and experience. Recently, artificial intelligence (AI), using deep learning and a convolutional neural network (CNN), has made remarkable progress in various medical fields. Here, we constructed an AI-assisted CNN computer-aided diagnosis (CAD) system, based on ME-NBI images, to diagnose EGC and evaluated the diagnostic accuracy of the AI-assisted CNN-CAD system. METHODS: The AI-assisted CNN-CAD system (ResNet50) was trained and validated on a dataset of 5574 ME-NBI images (3797 EGCs, 1777 non-cancerous mucosa and lesions). To evaluate the diagnostic accuracy, a separate test dataset of 2300 ME-NBI images (1430 EGCs, 870 non-cancerous mucosa and lesions) was assessed using the AI-assisted CNN-CAD system. RESULTS: The AI-assisted CNN-CAD system required 60 s to analyze 2300 test images. The overall accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the CNN were 98.7%, 98%, 100%, 100%, and 96.8%, respectively. All misdiagnosed images of EGCs were of low-quality or of superficially depressed and intestinal-type intramucosal cancers that were difficult to distinguish from gastritis, even by experienced endoscopists. CONCLUSIONS: The AI-assisted CNN-CAD system for ME-NBI diagnosis of EGC could process many stored ME-NBI images in a short period of time and had a high diagnostic ability. This system may have great potential for future application to real clinical settings, which could facilitate ME-NBI diagnosis of EGC in practice.
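
For readers unfamiliar with how such a ResNet50-based CAD system is typically built, the sketch below fine-tunes an ImageNet-pretrained ResNet50 for a two-class (EGC vs. non-cancerous) ME-NBI classifier in PyTorch. The training loop, data pipeline, and hyperparameters are generic placeholders and are not the authors' configuration (requires torchvision >= 0.13 for the weights API).

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet50 with its final layer replaced for two classes.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of ME-NBI images (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with random tensors standing in for a real data-loader batch.
dummy_loss = train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0]))
```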


Subject(s)
Artificial Intelligence; Early Detection of Cancer/methods; Endoscopy, Gastrointestinal/methods; Narrow Band Imaging/methods; Neural Networks, Computer; Stomach Neoplasms/diagnosis; Adult; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged; Predictive Value of Tests; Sensitivity and Specificity; Stomach Neoplasms/diagnostic imaging; Stomach Neoplasms/pathology
17.
J Gastroenterol Hepatol ; 36(1): 131-136, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32511793

ABSTRACT

BACKGROUND AND AIM: Conventional endoscopy for the early detection of esophageal and esophagogastric junctional adenocarcinoma (E/J cancer) is limited because early lesions are asymptomatic and the associated changes in the mucosa are subtle. There are no reports on artificial intelligence (AI) diagnosis for E/J cancer from Asian countries. Therefore, we aimed to develop a computerized image analysis system using deep learning for the detection of E/J cancers. METHODS: A total of 1172 images from 166 pathologically proven superficial E/J cancer cases and 2271 images of normal esophagogastric junctional mucosa from 219 cases were used as the training image data. A total of 232 images from 36 cancer cases and 43 non-cancerous cases were used as the validation test data. The same validation test data were diagnosed by 15 board-certified specialists (experts). RESULTS: The sensitivity, specificity, and accuracy of the AI system were 94%, 42%, and 66%, respectively, and those of the experts were 88%, 43%, and 63%, respectively. The sensitivity of the AI system was favorable, while its specificity for non-cancerous lesions was similar to that of the experts. Interobserver agreement among the experts for detecting superficial E/J cancer was fair (Fleiss' kappa = 0.26, z = 20.4, P < 0.001). CONCLUSIONS: Our AI system achieved high sensitivity and acceptable specificity for the detection of E/J cancers and may be a good supporting tool for the screening of E/J cancers.
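
Fleiss' kappa, used above to quantify agreement among the 15 experts, generalizes Cohen's kappa to many raters. A toy sketch with invented ratings:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = images, columns = 15 raters; 1 = cancer, 0 = non-cancer (invented data).
ratings = np.array([
    [1] * 12 + [0] * 3,
    [0] * 10 + [1] * 5,
    [1] * 15,
    [0] * 14 + [1] * 1,
    [1] * 8  + [0] * 7,
])
table, _ = aggregate_raters(ratings)     # per-image counts of each category
print(f"Fleiss' kappa = {fleiss_kappa(table):.2f}")
```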


Subject(s)
Adenocarcinoma/diagnostic imaging; Artificial Intelligence; Deep Learning; Early Detection of Cancer/methods; Esophageal Neoplasms/diagnostic imaging; Esophagogastric Junction/diagnostic imaging; Image Processing, Computer-Assisted/methods; Stomach Neoplasms/diagnostic imaging; Adult; Aged; Aged, 80 and over; Asia; Female; Humans; Male; Middle Aged; Sensitivity and Specificity
18.
Dig Endosc ; 33(2): 254-262, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33222330

ABSTRACT

In recent years, artificial intelligence (AI) has become useful to physicians for image recognition thanks to three elements: deep learning (that is, convolutional neural networks, CNNs), high-performance computing, and large amounts of digitized data. In the field of gastrointestinal endoscopy, Japanese endoscopists produced the world's first CNN-based AI systems for detecting gastric and esophageal cancers. This article reviews papers on CNN-based AI for gastrointestinal cancers and discusses the future of this technology in clinical practice. Employing AI-based endoscopes would enable earlier cancer detection. The diagnostic ability of AI may be particularly beneficial for early gastrointestinal cancers, for which endoscopists' diagnostic ability and accuracy vary. AI coupled with the expertise of endoscopists would increase the accuracy of endoscopic diagnosis.


Subject(s)
Esophageal Neoplasms; Upper Gastrointestinal Tract; Artificial Intelligence; Endoscopy, Gastrointestinal; Esophageal Neoplasms/diagnosis; Humans; Neural Networks, Computer; Upper Gastrointestinal Tract/diagnostic imaging
19.
Dig Endosc ; 33(1): 141-150, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32282110

ABSTRACT

OBJECTIVES: Detecting early gastric cancer is difficult, and it may even be overlooked by experienced endoscopists. Recently, artificial intelligence based on deep learning through convolutional neural networks (CNNs) has enabled significant advancements in the field of gastroenterology. However, it remains unclear whether a CNN can outperform endoscopists. In this study, we evaluated whether the performance of a CNN in detecting early gastric cancer is better than that of endoscopists. METHODS: The CNN was constructed using 13,584 endoscopic images from 2639 lesions of gastric cancer. Subsequently, its diagnostic ability was compared to that of 67 endoscopists using an independent test dataset (2940 images from 140 cases). RESULTS: The average diagnostic times for analyzing the 2940 test endoscopic images were 45.5 ± 1.8 s for the CNN and 173.0 ± 66.0 min for the endoscopists. The sensitivity, specificity, and positive and negative predictive values for the CNN were 58.4%, 87.3%, 26.0%, and 96.5%, respectively. These values for the 67 endoscopists were 31.9%, 97.2%, 46.2%, and 94.9%, respectively. The CNN had a significantly higher sensitivity than the endoscopists (by 26.5%; 95% confidence interval, 14.9-32.5%). CONCLUSION: The CNN detected more early gastric cancer cases in a shorter time than the endoscopists. The CNN needs further training to achieve higher diagnostic accuracy. However, a diagnostic support tool for gastric cancer using a CNN will be realized in the near future.


Subject(s)
Stomach Neoplasms; Artificial Intelligence; Early Detection of Cancer; Humans; Neural Networks, Computer; Stomach Neoplasms/diagnostic imaging
20.
Dig Endosc ; 33(4): 569-576, 2021 May.
Article in English | MEDLINE | ID: mdl-32715508

ABSTRACT

OBJECTIVES: We aimed to develop an artificial intelligence (AI) system for the real-time diagnosis of pharyngeal cancers. METHODS: Endoscopic video images and still images of pharyngeal cancer treated in our facility were collected. A total of 4559 images of pathologically proven pharyngeal cancer (1243 using white light imaging and 3316 using narrow-band imaging/blue laser imaging) from 276 patients were used as a training dataset. The AI system used a convolutional neural network (CNN) model typical of the type used to analyze visual imagery. Supervised learning was used to train the CNN. The AI system was evaluated using an independent validation dataset of 25 video images of pharyngeal cancer and 36 video images of normal pharynx taken at our hospital. RESULTS: The AI system diagnosed 23/25 (92%) pharyngeal cancers as cancers and 17/36 (47%) non-cancers as non-cancers. The processing speed of the AI system was 0.03 s per image, which meets the required speed for real-time diagnosis. The sensitivity, specificity, and accuracy for the detection of cancer were 92%, 47%, and 66%, respectively. CONCLUSIONS: Our single-institution study showed that our AI system for diagnosing cancers of the pharyngeal region had promising performance, with high sensitivity and acceptable specificity. Further training and improvement of the system are required with a larger dataset including multiple centers.
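
The 0.03 s per image quoted above corresponds to roughly 33 frames per second, which is what makes the system viable for live video. A one-line arithmetic check, assuming an endoscopic video rate of about 30 fps:

```python
per_image_seconds = 0.03
video_fps = 30                      # assumed typical endoscopic video frame rate
throughput = 1 / per_image_seconds  # ~33 images per second
print(f"throughput = {throughput:.0f} fps, real-time capable: {throughput >= video_fps}")
```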


Subject(s)
Artificial Intelligence; Pharyngeal Neoplasms; Endoscopy; Humans; Narrow Band Imaging; Neural Networks, Computer; Pharyngeal Neoplasms/diagnostic imaging