Results 1-20 of 86
1.
J Gastroenterol Hepatol; 39(1): 157-164, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37830487

ABSTRACT

BACKGROUND AND AIM: Convolutional neural network (CNN) systems that automatically detect abnormalities from small-bowel capsule endoscopy (SBCE) images are still experimental, and no studies have directly compared the clinical usefulness of different systems. We compared endoscopist readings using an existing and a novel CNN system in a real-world SBCE setting. METHODS: Thirty-six complete SBCE videos, including 43 abnormal lesions (18 mucosal breaks, 8 angioectasia, and 17 protruding lesions), were retrospectively prepared. Three reading processes were compared: (A) endoscopist readings without CNN screening, (B) endoscopist readings after an existing CNN screening, and (C) endoscopist readings after a novel CNN screening. RESULTS: The mean number of small-bowel images was 14 747 per patient. Among these images, existing and novel CNN systems automatically captured 24.3% and 9.4% of the images, respectively. In this process, both systems extracted all 43 abnormal lesions. Next, we focused on the clinical usefulness. The detection rates of abnormalities by trainee endoscopists were not significantly different across the three processes: A, 77%; B, 67%; and C, 79%. The mean reading time of the trainees was the shortest during process C (10.1 min per patient), followed by processes B (23.1 min per patient) and A (33.6 min per patient). The mean psychological stress score while reading videos (scale, 1-5) was the lowest in process C (1.8) but was not significantly different between processes B (2.8) and A (3.2). CONCLUSIONS: Our novel CNN system significantly reduced endoscopist reading time and psychological stress while maintaining the detectability of abnormalities. CNN performance directly affects clinical utility and should be carefully assessed.


Subjects
Capsule Endoscopy; Deep Learning; Humans; Capsule Endoscopy/methods; Retrospective Studies; Intestine, Small/diagnostic imaging; Intestine, Small/pathology; Neural Networks, Computer
2.
Gastrointest Endosc; 98(6): 968-976.e3, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37482106

ABSTRACT

BACKGROUND AND AIMS: Capsule endoscopy (CE) is useful in evaluating disease surveillance for primary small-bowel follicular lymphoma (FL), but some cases are difficult to evaluate objectively. This study evaluated the usefulness of a deep convolutional neural network (CNN) system using CE images for disease surveillance of primary small-bowel FL. METHODS: We enrolled 26 consecutive patients with primary small-bowel FL diagnosed between January 2011 and January 2021 who underwent CE before and after a watch-and-wait strategy or chemotherapy. Disease surveillance by the CNN system was evaluated by the percentage of FL-detected images among all CE images of the small-bowel mucosa. RESULTS: Eighteen cases (69%) were managed with a watch-and-wait approach, and 8 cases (31%) were treated with chemotherapy. Among the 18 cases managed with the watch-and-wait approach, the outcome of lesion evaluation by the CNN system was almost the same in 13 cases (72%), aggravation in 4 (22%), and improvement in 1 (6%). Among the 8 cases treated with chemotherapy, the outcome of lesion evaluation by the CNN system was improvement in 5 cases (63%), almost the same in 2 (25%), and aggravation in 1 (12%). The physician and CNN system reported similar results regarding disease surveillance evaluation in 23 of 26 cases (88%), whereas a discrepancy between the 2 was found in the remaining 3 cases (12%), attributed to poor small-bowel cleansing level. CONCLUSIONS: Disease surveillance evaluation of primary small-bowel FL using CE images by the developed CNN system was useful under the condition of excellent small-bowel cleansing level.


Subjects
Capsule Endoscopy; Lymphoma, Follicular; Humans; Capsule Endoscopy/methods; Lymphoma, Follicular/diagnostic imaging; Lymphoma, Follicular/drug therapy; Neural Networks, Computer; Intestine, Small/diagnostic imaging; Intestine, Small/pathology; Duodenum
3.
BMC Gastroenterol; 23(1): 184, 2023 May 25.
Article in English | MEDLINE | ID: mdl-37231330

ABSTRACT

BACKGROUND: Several pre-clinical studies have reported the usefulness of artificial intelligence (AI) systems in the diagnosis of esophageal squamous cell carcinoma (ESCC). We conducted this study to evaluate the usefulness of an AI system for real-time diagnosis of ESCC in a clinical setting. METHODS: This study followed a single-center prospective single-arm non-inferiority design. Patients at high risk for ESCC were recruited, and real-time diagnosis by the AI system was compared with that of endoscopists for lesions suspected to be ESCC. The primary outcomes were the diagnostic accuracy of the AI system and endoscopists. The secondary outcomes were sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and adverse events. RESULTS: A total of 237 lesions were evaluated. The accuracy, sensitivity, and specificity of the AI system were 80.6%, 68.2%, and 83.4%, respectively. The accuracy, sensitivity, and specificity of endoscopists were 85.7%, 61.4%, and 91.2%, respectively. The difference between the accuracy of the AI system and that of the endoscopists was -5.1%, and the lower limit of the 90% confidence interval was less than the non-inferiority margin. CONCLUSIONS: The non-inferiority of the AI system in comparison with endoscopists in the real-time diagnosis of ESCC in a clinical setting was not proven. TRIAL REGISTRATION: Japan Registry of Clinical Trials (jRCTs052200015, 18/05/2020).


Subjects
Esophageal Neoplasms; Esophageal Squamous Cell Carcinoma; Humans; Artificial Intelligence; Esophageal Neoplasms/diagnosis; Esophageal Neoplasms/pathology; Esophageal Squamous Cell Carcinoma/diagnosis; Esophageal Squamous Cell Carcinoma/pathology; Esophagoscopy; Prospective Studies
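
The noninferiority comparison in record 3 above turns on whether the lower limit of a confidence interval for the accuracy difference clears a prespecified margin. The sketch below illustrates that decision rule in a minimal way; the 10% margin, the unpaired Wald-type interval, and the example counts are assumptions for illustration, not the trial's actual statistical analysis plan (a real analysis of paired readings would use a method that accounts for the pairing).

    import math

    def noninferiority_accuracy(correct_ai, correct_ref, n, margin=-0.10, conf_z=1.645):
        """Illustrative Wald-type CI check for a difference in accuracies.

        correct_ai / correct_ref: numbers of correctly diagnosed lesions by the AI
        and by the reference readers on the same n lesions (treated here, purely
        for illustration, as independent proportions).
        conf_z = 1.645 corresponds to a two-sided 90% CI (one-sided 5% test).
        """
        p_ai, p_ref = correct_ai / n, correct_ref / n
        diff = p_ai - p_ref
        se = math.sqrt(p_ai * (1 - p_ai) / n + p_ref * (1 - p_ref) / n)
        lower = diff - conf_z * se
        # Noninferiority is claimed only if the lower CI limit exceeds the margin.
        return diff, lower, lower > margin

    # Hypothetical counts roughly matching the accuracies quoted in the abstract.
    diff, lower, ok = noninferiority_accuracy(correct_ai=191, correct_ref=203, n=237)
    print(f"difference={diff:.3f}, lower CI limit={lower:.3f}, noninferior={ok}")
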
4.
J Gastroenterol Hepatol; 38(9): 1587-1591, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37408330

ABSTRACT

OBJECTIVES: Artificial intelligence (AI) uses deep learning functionalities that may enhance the detection of early gastric cancer during endoscopy. An AI-based endoscopic system for upper endoscopy was recently developed in Japan. We aimed to validate this AI-based system in a Singaporean cohort. METHODS: Three hundred de-identified still images were prepared from endoscopy video files of subjects who underwent gastroscopy at the National University Hospital (NUH). Five specialists and six non-specialists (trainees) from NUH were assigned to read and categorize the images as "neoplastic" or "non-neoplastic." Their results were then compared with the readings of the endoscopic AI system. RESULTS: The mean accuracy, sensitivity, and specificity for the 11 endoscopists were 0.847, 0.525, and 0.872, respectively. The corresponding values for the AI-based system were 0.777, 0.591, and 0.791. Although the AI system did not outperform the endoscopists overall, in the subgroup of high-grade dysplastic lesions only 29.1% were identified by endoscopist rating, whereas 80% were classified as neoplastic by the AI (P = 0.0011). The average diagnostic time was also shorter for the AI system than for the endoscopists (42.02 s vs 677.1 s; P < 0.001). CONCLUSION: We demonstrated that an AI system developed in another health system achieved comparable diagnostic accuracy in the evaluation of static images. AI systems are fast and not fatigable and may have a role in augmenting human diagnosis during endoscopy. With further advances in AI and larger studies to support its efficacy, it will likely play a greater role in screening endoscopy in the future.


Subjects
Stomach Neoplasms; Humans; Stomach Neoplasms/diagnostic imaging; Artificial Intelligence; Gastroscopy; Asian People; Fatigue
5.
Dig Endosc; 35(4): 483-491, 2023 May.
Article in English | MEDLINE | ID: mdl-36239483

ABSTRACT

OBJECTIVES: Endoscopists' abilities to diagnose early gastric cancers (EGCs) vary, especially between specialists and nonspecialists. We developed an artificial intelligence (AI)-based diagnostic support tool "Tango" to differentiate EGCs and compared its performance with that of endoscopists. METHODS: The diagnostic performances of Tango and endoscopists (34 specialists, 42 nonspecialists) were compared using still images of 150 neoplastic and 165 non-neoplastic lesions. Neoplastic lesions included EGCs and adenomas. The primary outcome was to show the noninferiority of Tango (based on sensitivity) over specialists. The secondary outcomes were the noninferiority of Tango (based on accuracy) over specialists and the superiority of Tango (based on sensitivity and accuracy) over nonspecialists. The lower limit of the 95% confidence interval (CI) of the difference between Tango and the specialists for sensitivity was calculated, with >-10% defined as noninferiority and >0% defined as superiority in the primary outcome. The comparable differences between Tango and the endoscopists for each performance were calculated, with >10% defined as superiority and >0% defined as noninferiority in the secondary outcomes. RESULTS: Tango achieved superiority over the specialists based on sensitivity (84.7% vs. 65.8%, difference 18.9%, 95% CI 12.3-25.3%) and demonstrated noninferiority based on accuracy (70.8% vs. 67.4%). Tango achieved superiority over the nonspecialists based on sensitivity (84.7% vs. 51.0%) and accuracy (70.8% vs. 58.4%). CONCLUSIONS: The AI-based diagnostic support tool for EGCs demonstrated a robust performance and may be useful to reduce misdiagnosis.


Subjects
Artificial Intelligence; Stomach Neoplasms; Humans; Retrospective Studies; Stomach Neoplasms/diagnosis
6.
Endoscopy; 54(8): 780-784, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34607377

ABSTRACT

AIMS: To compare the rate of gastric cancer diagnosis from endoscopic images between artificial intelligence (AI) and expert endoscopists. PATIENTS AND METHODS: We used retrospective data from 500 patients, including 100 with gastric cancer, matched 1:1 to diagnosis by AI or by expert endoscopists. We retrospectively evaluated the noninferiority (prespecified margin 5%) of the per-patient rate of gastric cancer diagnosis by AI and compared the per-image rate of gastric cancer diagnosis. RESULTS: Gastric cancer was diagnosed in 49 of 49 patients (100%) in the AI group and 48 of 51 patients (94.12%) in the expert endoscopist group (difference 5.88%, 95% confidence interval -0.58% to 12.3%). The per-image rate of gastric cancer diagnosis was higher in the AI group (99.87%, 747/748 images) than in the expert endoscopist group (88.17%, 693/786 images) (difference 11.7%). CONCLUSIONS: Noninferiority of the rate of gastric cancer diagnosis by AI was demonstrated, but superiority was not.


Subjects
Artificial Intelligence; Stomach Neoplasms; Endoscopy; Endoscopy, Gastrointestinal/methods; Humans; Retrospective Studies; Stomach Neoplasms/diagnostic imaging
7.
BMC Gastroenterol; 22(1): 237, 2022 May 12.
Article in English | MEDLINE | ID: mdl-35549679

ABSTRACT

BACKGROUND: Endocytoscopy (ECS) aids early gastric cancer (EGC) diagnosis by visualization of cells. However, it is difficult for non-experts to accurately diagnose EGC using ECS. In this study, we developed and evaluated a convolutional neural network (CNN)-based system for ECS-aided EGC diagnosis. METHODS: We constructed a CNN based on a residual neural network with a training dataset comprising 906 images from 61 EGC cases and 717 images from 65 noncancerous gastric mucosa (NGM) cases. To evaluate diagnostic ability, we used an independent test dataset comprising 313 images from 39 EGC cases and 235 images from 33 NGM cases. The test dataset was further evaluated by three endoscopists, and their findings were compared with CNN-based results. RESULTS: The trained CNN required 7.0 s to analyze the test dataset. The area under the curve of the total ECS images was 0.93. The CNN produced 18 false positives from 7 NGM lesions and 74 false negatives from 28 EGC lesions. In the per-image analysis, the accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 83.2%, 76.4%, 92.3%, 93.0%, and 74.6%, respectively, with the CNN and 76.8%, 73.4%, 81.3%, 83.9%, and 69.6%, respectively, for the endoscopist-derived values. The CNN-based findings had significantly higher specificity than the findings determined by all endoscopists. In the per-lesion analysis, the accuracy, sensitivity, specificity, PPV, and NPV of the CNN-based findings were 86.1%, 82.1%, 90.9%, 91.4%, and 81.1%, respectively, and those of the results calculated by the endoscopists were 82.4%, 79.5%, 85.9%, 86.9%, and 78.0%, respectively. CONCLUSIONS: Compared with three endoscopists, our CNN for ECS demonstrated higher specificity for EGC diagnosis. Using the CNN in ECS-based EGC diagnosis may improve the diagnostic performance of endoscopists.


Subjects
Stomach Neoplasms; Early Detection of Cancer/methods; Gastric Mucosa/diagnostic imaging; Gastric Mucosa/pathology; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Stomach Neoplasms/diagnostic imaging; Stomach Neoplasms/pathology
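
All of the per-image metrics in record 7 above (accuracy, sensitivity, specificity, PPV, NPV) follow from a 2x2 confusion matrix. The short sketch below reproduces that arithmetic from the counts stated in the abstract (313 EGC images with 74 false negatives, 235 NGM images with 18 false positives); the helper function is illustrative, not code from the study.

    def binary_metrics(tp, fp, tn, fn):
        """Standard diagnostic metrics from a 2x2 confusion matrix."""
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        sensitivity = tp / (tp + fn)   # recall on cancer images
        specificity = tn / (tn + fp)   # recall on noncancerous images
        ppv = tp / (tp + fp)           # positive predictive value
        npv = tn / (tn + fn)           # negative predictive value
        return accuracy, sensitivity, specificity, ppv, npv

    # Per-image counts implied by the abstract: 313 EGC images with 74 false
    # negatives, and 235 NGM images with 18 false positives.
    tp, fn = 313 - 74, 74
    tn, fp = 235 - 18, 18
    print([round(100 * m, 1) for m in binary_metrics(tp, fp, tn, fn)])
    # -> [83.2, 76.4, 92.3, 93.0, 74.6], matching the reported CNN values.
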
8.
J Clin Lab Anal; 36(1): e24122, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34811809

ABSTRACT

BACKGROUND AND AIM: Gastrointestinal endoscopy and biopsy-based pathological findings are needed to diagnose early gastric cancer. However, the information provided by a biopsy specimen is limited because the procedure samples only a small area of the lesion; therefore, pathologists sometimes render a diagnosis of gastric indefinite for dysplasia (GIN). METHODS: We compared the accuracy of physician-performed endoscopy (trainees, n = 3; specialists, n = 3), artificial intelligence (AI)-based endoscopy, and/or molecular markers (DNA methylation: BARHL2, MINT31, TET1, miR-148a, miR-124a-3, NKX6-1; mutations: TP53; and microsatellite instability) in diagnosing GIN lesions. We enrolled 24,388 patients who underwent endoscopy, of whom 71 were diagnosed with GIN lesions. Of these 71 GIN lesions, 32 were removed by endoscopic submucosal dissection (ESD), and the 32 endoscopically resected tissues were assessed by endoscopists, AI, and molecular markers to classify the lesions as benign or malignant. RESULTS: The board-certified endoscopic physicians group showed the highest accuracy in the receiver operating characteristic analysis (area under the curve [AUC]: 0.931), followed by the combination of AI and miR-148a DNA methylation (AUC: 0.825), and finally the trainee endoscopists (AUC: 0.588). CONCLUSION: AI combined with miR-148a DNA methylation-based diagnosis is a potential modality for diagnosing GIN.


Subjects
Artificial Intelligence; Diagnosis, Computer-Assisted/methods; Endoscopy, Gastrointestinal; MicroRNAs/genetics; Stomach Neoplasms; Aged; Aged, 80 and over; Biomarkers, Tumor/genetics; DNA Methylation/genetics; Early Detection of Cancer; Endoscopic Mucosal Resection; Female; Humans; Male; Middle Aged; Stomach/pathology; Stomach/surgery; Stomach Neoplasms/diagnosis; Stomach Neoplasms/genetics; Stomach Neoplasms/pathology; Stomach Neoplasms/surgery
9.
Dis Esophagus; 35(9), 2022 Sep 14.
Article in English | MEDLINE | ID: mdl-35292794

ABSTRACT

Endocytoscopy (EC) facilitates real-time histological diagnosis of esophageal lesions in vivo. We developed a deep-learning artificial intelligence (AI) system for the analysis of EC images and compared its diagnostic ability with that of an expert pathologist and nonexpert endoscopists. Our new AI was based on a vision transformer model (DeiT) and trained using 7983 EC images of the esophagus (2368 malignant and 5615 nonmalignant). The AI evaluated 114 randomly ordered EC images (33 esophageal squamous cell carcinomas [ESCCs] and 81 nonmalignant lesions) from 38 consecutive cases. An expert pathologist and two nonexpert endoscopists also analyzed the same image set according to the modified type classification (which adds four EC features of nonmalignant lesions to our previous classification). The area under the receiver-operating characteristic curve for the AI analysis was 0.92. In the per-image analysis, the overall accuracies of the AI, the pathologist, and the two endoscopists were 91.2%, 91.2%, 85.9%, and 83.3%, respectively. The kappa values between the pathologist and the AI and between the two endoscopists and the AI showed moderate concordance; that between the pathologist and the two endoscopists showed poor concordance. In the per-patient analysis, the overall accuracies of the AI, the pathologist, and the two endoscopists were 94.7%, 92.1%, 86.8%, and 89.5%, respectively. The modified type classification supported high overall diagnostic accuracy by the pathologist and the nonexpert endoscopists. The diagnostic ability of the AI was equal to or better than that of the experienced pathologist. AI is expected to support endoscopists in diagnosing esophageal lesions based on EC images.


Subjects
Artificial Intelligence; Endoscopy; Endoscopy/methods; Esophagus/diagnostic imaging; Humans; Image Processing, Computer-Assisted; ROC Curve
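
Record 9 above summarizes rater concordance with kappa statistics. A minimal sketch of computing per-image agreement with scikit-learn's cohen_kappa_score is shown below; the label arrays are made-up placeholders, since the study's image-level ratings are not published here.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical per-image labels (1 = malignant, 0 = nonmalignant) from two
    # raters over the same image set, e.g. the AI system and the pathologist.
    ai_labels = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
    pathologist_labels = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]

    kappa = cohen_kappa_score(ai_labels, pathologist_labels)
    # Values in the 0.41-0.60 band are conventionally read as moderate agreement.
    print(f"Cohen's kappa: {kappa:.2f}")
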
10.
J Appl Clin Med Phys; 23(7): e13626, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35536775

ABSTRACT

PURPOSE: Accurate evaluation of tracer accumulation is difficult owing to the partial volume effect (PVE). We proposed a novel semi-quantitative approach that measures the accumulation amounts by searching for the simulated image that best approximates the measured one. Using a striatal phantom, we verified the validity of the newly proposed method for accurately evaluating the tracer accumulations in the caudate and putamen separately, and we compared the proposed method with conventional methods. METHODS: The left and right caudate/putamen regions, and the whole brain region as background, were identified in computed tomography (CT) images obtained by single-photon emission computed tomography (SPECT)/CT, and the positional information of each region was acquired. SPECT-like images were generated by assigning assumed accumulation amounts to each region. The SPECT-like image was matched to the actual measured SPECT image by varying the assumed accumulation amounts assigned to each region. When the generated SPECT-like image best approximated the actual measured SPECT image, the assumed amounts were taken as the accumulation amounts in each region. We evaluated the correlation between the count density calculated by the proposed method and the actual count density of the ¹²³I solution used to fill the phantom. Conventional methods (CT-guide method, geometric transfer matrix [GTM] method, region-based voxel-wise [RBV] method, and Southampton method) were also evaluated. The significance of differences between the correlation coefficients of the various methods (except the Southampton method) was evaluated. RESULTS: The correlation coefficients between the actual count density and the SPECT count densities were 0.997, 0.973, 0.951, 0.950, and 0.996 for the proposed method, CT-guide method, GTM method, RBV method, and Southampton method, respectively. The correlation of the proposed method was significantly higher than those of the other methods. CONCLUSIONS: The proposed method could calculate accurate accumulation amounts in the caudate and putamen separately while accounting for the PVE.


Subjects
Dopamine Plasma Membrane Transport Proteins; Tomography, Emission-Computed, Single-Photon; Brain; Dopamine Plasma Membrane Transport Proteins/metabolism; Humans; Phantoms, Imaging; Tomography, Emission-Computed, Single-Photon/methods
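
The method in record 10 above searches for the per-region activity values whose simulated, blur-degraded image best approximates the measured SPECT image. The sketch below captures that idea as a non-negative least-squares fit of PSF-blurred region templates; the Gaussian PSF, scipy's nnls solver, and the toy volume are assumptions for illustration, not the published implementation (which varies the assumed amounts iteratively).

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.optimize import nnls

    def estimate_region_activities(measured, masks, psf_sigma=2.0):
        """Fit one activity per region so the blurred templates match the image.

        measured: 3-D SPECT volume (counts).
        masks:    list of boolean 3-D region masks (caudate, putamen, background, ...).
        """
        # Each column is a PSF-blurred region template, flattened to a vector.
        templates = np.column_stack(
            [gaussian_filter(m.astype(float), psf_sigma).ravel() for m in masks]
        )
        activities, _residual = nnls(templates, measured.ravel())
        return activities

    # Toy example: two small "regions" in a 32^3 volume with known activities.
    rng = np.random.default_rng(0)
    shape = (32, 32, 32)
    caudate = np.zeros(shape, bool); caudate[10:14, 10:14, 14:18] = True
    putamen = np.zeros(shape, bool); putamen[18:24, 18:22, 14:18] = True
    truth = 5.0 * caudate + 2.0 * putamen
    measured = gaussian_filter(truth, 2.0) + 0.01 * rng.standard_normal(shape)
    print(estimate_region_activities(measured, [caudate, putamen]))  # ~[5.0, 2.0]
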
11.
Gastrointest Endosc; 93(1): 165-173.e1, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32417297

ABSTRACT

BACKGROUND AND AIMS: A deep convolutional neural network (CNN) system could be a high-level screening tool for capsule endoscopy (CE) reading but has not been established for targeting various abnormalities. We aimed to develop a CNN-based system and compare it with the existing QuickView mode in terms of their ability to detect various abnormalities. METHODS: We trained a CNN system using 66,028 CE images (44,684 images of abnormalities and 21,344 normal images). The detection rate of the CNN for various abnormalities was assessed per patient, using an independent test set of 379 consecutive small-bowel CE videos from 3 institutions. Mucosal breaks, angioectasia, protruding lesions, and blood content were present in 94, 29, 81, and 23 patients, respectively. The detection capability of the CNN was compared with that of QuickView mode. RESULTS: The CNN picked up 1,135,104 images (22.5%) from the 5,050,226 test images, and thus, the sampling rate of QuickView mode was set to 23% in this study. In total, the detection rate of the CNN for abnormalities per patient was significantly higher than that of QuickView mode (99% vs 89%, P < .001). The detection rates of the CNN for mucosal breaks, angioectasia, protruding lesions, and blood content were 100% (94 of 94), 97% (28 of 29), 99% (80 of 81), and 100% (23 of 23), respectively, and those of QuickView mode were 91%, 97%, 80%, and 96%, respectively. CONCLUSIONS: We developed and tested a CNN-based detection system for various abnormalities using multicenter CE videos. This system could serve as an alternative high-level screening tool to QuickView mode.


Subjects
Capsule Endoscopy; Deep Learning; Humans; Intestine, Small/diagnostic imaging; Neural Networks, Computer
12.
Endoscopy; 53(11): 1105-1113, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33540446

ABSTRACT

BACKGROUND: It is known that an esophagus with multiple Lugol-voiding lesions (LVLs) after iodine staining is at high risk for esophageal cancer; however, it is preferable to identify high-risk cases without staining because iodine causes discomfort and prolongs examination times. This study assessed the capability of an artificial intelligence (AI) system to predict multiple LVLs, and thus patients at high risk for esophageal cancer, from images that had not been stained with iodine. METHODS: We constructed the AI system by preparing a training set of 6634 images from white-light and narrow-band imaging in 595 patients before they underwent endoscopic examination with iodine staining. Diagnostic performance was evaluated on an independent validation dataset (667 images from 72 patients) and compared with that of 10 experienced endoscopists. RESULTS: The sensitivity, specificity, and accuracy of the AI system in predicting multiple LVLs were 84.4%, 70.0%, and 76.4%, respectively, compared with 46.9%, 77.5%, and 63.9%, respectively, for the endoscopists. The AI system had significantly higher sensitivity than 9 of the 10 experienced endoscopists. We also identified six endoscopic findings that were significantly more frequent in patients with multiple LVLs; however, the AI system had greater sensitivity than these findings for the prediction of multiple LVLs. Moreover, patients with AI-predicted multiple LVLs had significantly more cancers in the esophagus and head and neck than patients without predicted multiple LVLs. CONCLUSION: The AI system could predict multiple LVLs with high sensitivity from images without iodine staining. The system could enable endoscopists to apply iodine staining more judiciously.


Subjects
Esophageal Neoplasms; Esophageal Squamous Cell Carcinoma; Artificial Intelligence; Esophageal Neoplasms/diagnostic imaging; Esophagoscopy; Humans; Narrow Band Imaging
13.
J Nucl Cardiol; 28(4): 1438-1445, 2021 Aug.
Article in English | MEDLINE | ID: mdl-31435883

ABSTRACT

BACKGROUND: Nearly one-third of patients with advanced heart failure (HF) do not benefit from cardiac resynchronization therapy (CRT). We developed a novel approach for optimizing CRT via simultaneous assessment of myocardial viability and the appropriate lead position, using a fusion technique combining CT coronary venography and myocardial perfusion imaging. METHODS AND RESULTS: Myocardial viability and coronary venous anatomy were evaluated by resting Tc-99m-tetrofosmin myocardial perfusion imaging (MPI) and contrast CT venography, respectively. Using fusion images reconstructed from MPI and CT coronary venography, the pacing site and lead length were determined for appropriate CRT device implantation in 4 HF patients. The efficacy of this method was assessed using symptomatic and echocardiographic functional parameters. In all patients, fusion images of MPI and CT coronary venograms were successfully reconstructed without any misregistration and contributed to effective CRT. Before surgery, this method enabled the operators to precisely identify the optimal indwelling site, which exhibited myocardial viability and allowed a lead length suitable for appropriate device implantation. CONCLUSIONS: The fusion image technique using myocardial perfusion imaging and CT coronary venography is clinically feasible and promising for optimizing CRT and enhancing safety in patients with advanced HF.


Subjects
Cardiac Resynchronization Therapy; Heart Failure/diagnostic imaging; Myocardial Perfusion Imaging; Phlebography; Tomography, Emission-Computed, Single-Photon; Tomography, X-Ray Computed; Aged; Aged, 80 and over; Cardiac Resynchronization Therapy Devices; Coronary Angiography; Female; Heart Failure/therapy; Humans; Imaging, Three-Dimensional; Male; Middle Aged; Tissue Survival
14.
J Gastroenterol Hepatol; 36(1): 131-136, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32511793

ABSTRACT

BACKGROUND AND AIM: Conventional endoscopy for the early detection of esophageal and esophagogastric junctional adenocarcinoma (E/J cancer) is limited because early lesions are asymptomatic, and the associated changes in the mucosa are subtle. There are no reports on artificial intelligence (AI) diagnosis for E/J cancer from Asian countries. Therefore, we aimed to develop a computerized image analysis system using deep learning for the detection of E/J cancers. METHODS: A total of 1172 images from 166 pathologically proven superficial E/J cancer cases and 2271 images of normal esophagogastric junctional mucosa from 219 cases were used as training image data. A total of 232 images from 36 cancer cases and 43 non-cancerous cases were used as the validation test data. The same validation test data were diagnosed by 15 board-certified specialists (experts). RESULTS: The sensitivity, specificity, and accuracy of the AI system were 94%, 42%, and 66%, respectively, and those of the experts were 88%, 43%, and 63%, respectively. The sensitivity of the AI system was favorable, while its specificity for non-cancerous lesions was similar to that of the experts. Interobserver agreement among the experts for detecting superficial E/J cancer was fair (Fleiss' kappa = 0.26, z = 20.4, P < 0.001). CONCLUSIONS: Our AI system achieved high sensitivity and acceptable specificity for the detection of E/J cancers and may be a good supporting tool for the screening of E/J cancers.


Subjects
Adenocarcinoma/diagnostic imaging; Artificial Intelligence; Deep Learning; Early Detection of Cancer/methods; Esophageal Neoplasms/diagnostic imaging; Esophagogastric Junction/diagnostic imaging; Image Processing, Computer-Assisted/methods; Stomach Neoplasms/diagnostic imaging; Adult; Aged; Aged, 80 and over; Asia; Female; Humans; Male; Middle Aged; Sensitivity and Specificity
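
Record 14 above reports interobserver agreement among 15 experts as Fleiss' kappa. A minimal sketch of that computation with statsmodels is shown below; the ratings matrix is a random placeholder (so the resulting kappa will be near zero, unlike the study's reported 0.26).

    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    rng = np.random.default_rng(2)
    # Placeholder ratings: rows = 232 validation images, columns = 15 experts,
    # values 0/1 (non-cancer / cancer). The study's per-image ratings are not given.
    ratings = rng.integers(0, 2, size=(232, 15))

    table, _categories = aggregate_raters(ratings)  # per-image counts per category
    print(f"Fleiss' kappa: {fleiss_kappa(table, method='fleiss'):.2f}")
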
15.
J Gastroenterol Hepatol; 36(2): 482-489, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32681536

ABSTRACT

BACKGROUND AND AIM: Magnifying endoscopy with narrow-band imaging (ME-NBI) has made a huge contribution to clinical practice. However, acquiring skill at ME-NBI diagnosis of early gastric cancer (EGC) requires considerable expertise and experience. Recently, artificial intelligence (AI), using deep learning and a convolutional neural network (CNN), has made remarkable progress in various medical fields. Here, we constructed an AI-assisted CNN computer-aided diagnosis (CAD) system, based on ME-NBI images, to diagnose EGC and evaluated the diagnostic accuracy of the AI-assisted CNN-CAD system. METHODS: The AI-assisted CNN-CAD system (ResNet50) was trained and validated on a dataset of 5574 ME-NBI images (3797 EGCs, 1777 non-cancerous mucosa and lesions). To evaluate the diagnostic accuracy, a separate test dataset of 2300 ME-NBI images (1430 EGCs, 870 non-cancerous mucosa and lesions) was assessed using the AI-assisted CNN-CAD system. RESULTS: The AI-assisted CNN-CAD system required 60 s to analyze 2300 test images. The overall accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the CNN were 98.7%, 98%, 100%, 100%, and 96.8%, respectively. All misdiagnosed images of EGCs were of low-quality or of superficially depressed and intestinal-type intramucosal cancers that were difficult to distinguish from gastritis, even by experienced endoscopists. CONCLUSIONS: The AI-assisted CNN-CAD system for ME-NBI diagnosis of EGC could process many stored ME-NBI images in a short period of time and had a high diagnostic ability. This system may have great potential for future application to real clinical settings, which could facilitate ME-NBI diagnosis of EGC in practice.


Subjects
Artificial Intelligence; Early Detection of Cancer/methods; Endoscopy, Gastrointestinal/methods; Narrow Band Imaging/methods; Neural Networks, Computer; Stomach Neoplasms/diagnosis; Adult; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged; Predictive Value of Tests; Sensitivity and Specificity; Stomach Neoplasms/diagnostic imaging; Stomach Neoplasms/pathology
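
Record 15 above fine-tunes a ResNet50 CNN to classify ME-NBI images as EGC versus non-cancerous. The sketch below shows a generic transfer-learning setup in that spirit using torchvision; the folder layout, hyperparameters, and training loop are illustrative assumptions, not the study's actual configuration.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Assumed folder layout: me_nbi/train/{egc,non_cancerous}/*.jpg
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    train_ds = datasets.ImageFolder("me_nbi/train", transform=tfm)
    loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

    # Start from ImageNet weights and replace the classification head (2 classes).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, 2)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):  # illustrative epoch count
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
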
16.
Dig Endosc; 33(2): 254-262, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33222330

ABSTRACT

In recent years, artificial intelligence (AI) has proved useful to physicians in the field of image recognition, owing to three elements: deep learning (that is, convolutional neural networks, CNNs), high-performance computers, and large amounts of digitized data. In the field of gastrointestinal endoscopy, Japanese endoscopists have developed the world's first CNN-based AI systems for detecting gastric and esophageal cancers. This article reviews studies on CNN-based AI for gastrointestinal cancers and discusses the future of this technology in clinical practice. Employing AI-based endoscopes would enable earlier cancer detection. The superior diagnostic ability of AI may be particularly beneficial for early gastrointestinal cancers, for which endoscopists' diagnostic abilities and accuracy vary. AI coupled with the expertise of endoscopists would increase the accuracy of endoscopic diagnosis.


Subjects
Esophageal Neoplasms; Upper Gastrointestinal Tract; Artificial Intelligence; Endoscopy, Gastrointestinal; Esophageal Neoplasms/diagnosis; Humans; Neural Networks, Computer; Upper Gastrointestinal Tract/diagnostic imaging
17.
Dig Endosc; 33(1): 141-150, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32282110

ABSTRACT

OBJECTIVES: Detecting early gastric cancer is difficult, and it may even be overlooked by experienced endoscopists. Recently, artificial intelligence based on deep learning through convolutional neural networks (CNNs) has enabled significant advancements in the field of gastroenterology. However, it remains unclear whether a CNN can outperform endoscopists. In this study, we evaluated whether the performance of a CNN in detecting early gastric cancer is better than that of endoscopists. METHODS: The CNN was constructed using 13,584 endoscopic images from 2639 lesions of gastric cancer. Subsequently, its diagnostic ability was compared to that of 67 endoscopists using an independent test dataset (2940 images from 140 cases). RESULTS: The average diagnostic times for analyzing the 2940 test endoscopic images were 45.5 ± 1.8 s for the CNN and 173.0 ± 66.0 min for the endoscopists. The sensitivity, specificity, and positive and negative predictive values for the CNN were 58.4%, 87.3%, 26.0%, and 96.5%, respectively. These values for the 67 endoscopists were 31.9%, 97.2%, 46.2%, and 94.9%, respectively. The CNN had a significantly higher sensitivity than the endoscopists (by 26.5%; 95% confidence interval, 14.9-32.5%). CONCLUSION: The CNN detected more early gastric cancer cases in a shorter time than the endoscopists. The CNN needs further training to achieve higher diagnostic accuracy. However, a diagnostic support tool for gastric cancer using a CNN will be realized in the near future.


Subjects
Stomach Neoplasms; Artificial Intelligence; Early Detection of Cancer; Humans; Neural Networks, Computer; Stomach Neoplasms/diagnostic imaging
18.
Dig Endosc; 33(4): 569-576, 2021 May.
Article in English | MEDLINE | ID: mdl-32715508

ABSTRACT

OBJECTIVES: We aimed to develop an artificial intelligence (AI) system for the real-time diagnosis of pharyngeal cancers. METHODS: Endoscopic video images and still images of pharyngeal cancer treated in our facility were collected. A total of 4559 images of pathologically proven pharyngeal cancer (1243 using white light imaging and 3316 using narrow-band imaging/blue laser imaging) from 276 patients were used as a training dataset. The AI system used a convolutional neural network (CNN) model typical of the type used to analyze visual imagery. Supervised learning was used to train the CNN. The AI system was evaluated using an independent validation dataset of 25 video images of pharyngeal cancer and 36 video images of normal pharynx taken at our hospital. RESULTS: The AI system diagnosed 23/25 (92%) pharyngeal cancers as cancers and 17/36 (47%) non-cancers as non-cancers. The transaction speed of the AI system was 0.03 s per image, which meets the required speed for real-time diagnosis. The sensitivity, specificity, and accuracy for the detection of cancer were 92%, 47%, and 66% respectively. CONCLUSIONS: Our single-institution study showed that our AI system for diagnosing cancers of the pharyngeal region had promising performance with high sensitivity and acceptable specificity. Further training and improvement of the system are required with a larger dataset including multiple centers.


Subjects
Artificial Intelligence; Pharyngeal Neoplasms; Endoscopy; Humans; Narrow Band Imaging; Neural Networks, Computer; Pharyngeal Neoplasms/diagnostic imaging
19.
Dig Endosc; 33(7): 1101-1109, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33502046

ABSTRACT

OBJECTIVES: Artificial intelligence (AI) systems have shown favorable performance in the detection of esophageal squamous cell carcinoma (ESCC). However, previous studies were limited by the quality of their validation methods. In this study, we evaluated the performance of an AI system with videos simulating situations in which ESCC has been overlooked. METHODS: We used 17,336 images from 1376 superficial ESCCs and 1461 images from 196 noncancerous and normal esophagi to construct the AI system. To record validation videos, the endoscope was passed through the esophagus at a constant speed without focusing on the lesion to simulate situations in which ESCC has been missed. Validation videos were evaluated by the AI system and 21 endoscopists. RESULTS: We prepared 100 video datasets, including 50 superficial ESCCs, 22 noncancerous lesions, and 28 normal esophagi. The AI system had sensitivity of 85.7% (54 of 63 ESCCs) and specificity of 40%. Initial evaluation by endoscopists conducted with plain video (without AI support) had average sensitivity of 75.0% (47.3 of 63 ESCC) and specificity of 91.4%. Subsequent evaluation by endoscopists was conducted with AI assistance, which improved their sensitivity to 77.7% (P = 0.00696) without changing their specificity (91.6%, P = 0.756). CONCLUSIONS: Our AI system had high sensitivity for the detection of ESCC. As a support tool, the system has the potential to enhance detection of ESCC without reducing specificity. (UMIN000039645).


Subjects
Esophageal Neoplasms; Esophageal Squamous Cell Carcinoma; Artificial Intelligence; Esophageal Neoplasms/diagnostic imaging; Humans
20.
Gastrointest Endosc; 92(4): 848-855, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32505685

ABSTRACT

BACKGROUND AND AIMS: Narrow-band imaging (NBI) is currently regarded as the standard modality for diagnosing esophageal squamous cell carcinoma (SCC). We developed a computerized image-analysis system for diagnosing esophageal SCC by NBI and estimated its performance with video images. METHODS: Altogether, 23,746 images from 1544 pathologically proven superficial esophageal SCCs and 4587 images from 458 noncancerous and normal tissue were used to construct an artificial intelligence (AI) system. Five- to 9-second video clips from 144 patients captured by NBI or blue-light imaging were used as the validation dataset. These video images were diagnosed by the AI system and 13 board-certified specialists (experts). RESULTS: The diagnostic process was divided into 2 parts: detection (identify suspicious lesions) and characterization (differentiate cancer from noncancer). The sensitivities, specificities, and accuracies for the detection of SCC were, respectively, 91%, 51%, and 63% for the AI system and 79%, 72%, and 75% for the experts. The sensitivity of the AI system was significantly higher than that of the experts, but its specificity was significantly lower. Sensitivities, specificities, and accuracy for the characterization of SCC were, respectively, 86%, 89%, and 88% for the AI system and 74%, 76%, and 75% for the experts. The receiver operating characteristic curve showed that the AI system had significantly better diagnostic performance than the experts. CONCLUSIONS: Our AI system showed significantly higher sensitivity for detecting SCC and higher accuracy for characterizing SCC from noncancerous tissue than endoscopic experts.


Subjects
Esophageal Neoplasms; Esophageal Squamous Cell Carcinoma; Head and Neck Neoplasms; Artificial Intelligence; Esophageal Neoplasms/diagnostic imaging; Humans; Narrow Band Imaging
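
Several of the abstracts above, including record 20, compare diagnostic performance via ROC analysis and report sensitivity/specificity at an operating point. The sketch below shows how such quantities are typically computed with scikit-learn; the score and label arrays are random placeholders, not data from any of the studies.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(1)
    # Placeholder data: 1 = SCC frame, 0 = noncancerous frame, with model scores.
    labels = rng.integers(0, 2, size=500)
    scores = np.clip(labels * 0.35 + rng.normal(0.4, 0.25, size=500), 0, 1)

    auc = roc_auc_score(labels, scores)
    fpr, tpr, thresholds = roc_curve(labels, scores)

    # Sensitivity/specificity at a fixed operating threshold of 0.5.
    pred = scores >= 0.5
    sensitivity = (pred & (labels == 1)).sum() / (labels == 1).sum()
    specificity = (~pred & (labels == 0)).sum() / (labels == 0).sum()
    print(f"AUC={auc:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
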