1.
Clin Transl Gastroenterol; 14(10): e00643, 2023 Oct 1.
Article in English | MEDLINE | ID: mdl-37800683

ABSTRACT

INTRODUCTION: A convolutional neural network applied during endoscopy may facilitate evaluation of Helicobacter pylori infection without the need for gastric biopsies. The aim of this study was to evaluate the diagnostic accuracy of a computer-aided decision support system for H. pylori infection (CADSS-HP), based on a convolutional neural network, under white-light endoscopy.

METHODS: Archived video recordings of upper endoscopy with white-light examinations performed at Sir Run Run Shaw Hospital (January 2019-September 2020) were used to develop CADSS-HP. Patients undergoing endoscopy were prospectively enrolled (August 2021-August 2022) from 3 centers to assess diagnostic performance. The accuracy of CADSS-HP for H. pylori infection was also compared with endoscopic impression, urea breath test (URT), and histopathology. H. pylori infection was defined as a positive result on histopathology and/or URT.

RESULTS: Video recordings of 599 patients who received endoscopy were used to develop CADSS-HP. Subsequently, 456 patients participated in the prospective evaluation, including 189 (41.4%) with H. pylori infection. With a threshold of 0.5, CADSS-HP achieved an area under the curve of 0.95 (95% confidence interval [CI], 0.93-0.97), with sensitivity and specificity of 91.5% (95% CI 86.4%-94.9%) and 88.8% (95% CI 84.2%-92.2%), respectively. CADSS-HP demonstrated higher sensitivity (91.5% vs 78.3%; mean difference = 13.2%, 95% CI 5.7%-20.7%) and accuracy (89.9% vs 83.8%; mean difference = 6.1%, 95% CI 1.6%-10.7%) than endoscopic diagnosis by endoscopists. The sensitivity of CADSS-HP in diagnosing H. pylori was comparable with URT (91.5% vs 95.2%; mean difference = 3.7%, 95% CI -1.8% to 9.4%) and better than histopathology (91.5% vs 82.0%; mean difference = 9.5%, 95% CI 2.3%-16.8%).

DISCUSSION: CADSS-HP achieved high sensitivity for the diagnosis of H. pylori infection in real-time testing, outperforming endoscopic diagnosis by endoscopists and performing comparably with URT. Trial registration: Clinicaltrials.gov; ChiCTR2000030724.
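The reported sensitivity, specificity, and AUC follow directly from thresholding the network's per-patient output probabilities at 0.5. The sketch below illustrates that evaluation step only; CADSS-HP itself is not public, so the inputs are hypothetical stand-ins for the paper's pipeline.

```python
# Minimal sketch of the metric computation described above: given per-patient
# H. pylori probabilities from a CNN, apply the 0.5 threshold and derive
# sensitivity, specificity, accuracy, and AUC. Only the evaluation logic
# mirrors the paper; the probabilities here would come from the trained model.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate_cadss(probabilities: np.ndarray, labels: np.ndarray, threshold: float = 0.5) -> dict:
    """probabilities: model scores in [0, 1]; labels: 1 = infected
    (positive histopathology and/or urea breath test)."""
    predictions = (probabilities >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(labels, predictions).ravel()
    return {
        "auc": roc_auc_score(labels, probabilities),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```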


Subject(s)
Helicobacter Infections , Helicobacter pylori , Humans , Helicobacter Infections/diagnosis , Helicobacter Infections/pathology , Gastroscopy , Endoscopy, Gastrointestinal , Neural Networks, Computer
2.
Comput Biol Med; 143: 105255, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35151153

ABSTRACT

Deep learning-based computer-aided diagnosis techniques have demonstrated encouraging performance in endoscopic lesion identification and detection, and have reduced the rate of missed and false detections of disease during endoscopy. However, the interpretability of model-based results has not been adequately addressed by existing methods: even models with good recognition accuracy can exhibit severe feature localization errors, particularly for lesions with subtle morphological features, and this unsatisfactory performance hinders clinical deployment. To alleviate this problem, we propose a method for reducing the localization bias in the feature representations of cancer recognition models, targeting lesions that are difficult to accurately label and identify in clinical practice. Optimization is performed during the training phase through a proposed data augmentation method and an auxiliary loss function based on clinical priors. The data augmentation method, called partial jigsaw, "breaks" the spatial structure of lesion-independent image blocks and enriches the data feature space, decoupling the interference of background features and focusing the model on fine-grained lesion features. The annotation-based auxiliary loss function uses class activation maps for sample distribution correction, driving the model's localization toward the gold-standard annotations in the visualization maps. With these improvements, the model reached an average precision of 92.79%, an F1-score of 92.61%, and an accuracy of 95.56% on a dataset constructed from 23 hospitals. In addition, we quantitatively evaluated the visualization feature maps: the improved model yielded significant offset correction over the baseline, with average visualization-weighted positive coverage improving from 51.85% to 83.76%. The proposed approach does not change the deployment capability or inference speed of the original model and can be incorporated into any state-of-the-art neural network. It thus has the potential to provide more accurate localization inference and to assist clinical examinations during endoscopy.
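To make the augmentation concrete, here is a minimal sketch of a partial-jigsaw step under stated assumptions: the image is divided into a uniform grid, and only patches containing no annotated lesion pixels are permuted, so background layout is destroyed while the lesion and its local context stay intact. The grid size and overlap test are illustrative; the paper's exact implementation is not described in the abstract.

```python
# Hedged sketch of a "partial jigsaw"-style augmentation: shuffle only the
# image patches that do not overlap the annotated lesion mask, leaving
# lesion patches in place so the model cannot rely on background layout.
import numpy as np

def partial_jigsaw(image: np.ndarray, lesion_mask: np.ndarray, grid: int = 4) -> np.ndarray:
    """Permute lesion-independent patches of an HxWxC image."""
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    out = image.copy()

    # Coordinates of grid cells that contain no lesion pixels.
    background = [
        (r, c)
        for r in range(grid)
        for c in range(grid)
        if not lesion_mask[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw].any()
    ]

    # Permute background patches among themselves; lesion patches stay put.
    patches = [image[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw].copy() for r, c in background]
    for (r, c), idx in zip(background, np.random.permutation(len(background))):
        out[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = patches[idx]
    return out
```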

3.
Bioinformatics; 36(9): 2888-2895, 2020 May 1.
Article in English | MEDLINE | ID: mdl-31985775

ABSTRACT

MOTIVATION: As a highly heterogeneous disease, clear cell renal cell carcinoma (ccRCC) exhibits highly variable clinical behavior. Prognostic biomarkers play a crucial role in stratifying patients with ccRCC to avoid over- and under-treatment. Studies based on hand-crafted features and single-modal data have been widely conducted to predict the prognosis of ccRCC. However, these experience-dependent methods, which neglect the synergy among multimodal data, have limited capacity to perform accurate prediction. Inspired by the complementary information among multimodal data and the successful application of convolutional neural networks (CNNs) in medical image analysis, we propose a novel framework to improve prediction performance.

RESULTS: We propose a cross-modal feature-based integrative framework in which deep features extracted from computed tomography/histopathological images using CNNs are combined with eigengenes generated from functional genomic data to construct a prognostic model for ccRCC. Results show that the proposed model can stratify high- and low-risk subgroups with a significant difference (P-value < 0.05) and outperforms models based on single-modality features in the independent testing cohort [C-index, 0.808 (0.728-0.888)]. We also explored the relationship between deep image features and eigengenes and attempted to explain the deep image features from the perspective of genomic data. Notably, the integrative framework is applicable to prognosis prediction for other cancers with matched multimodal data.

AVAILABILITY AND IMPLEMENTATION: https://github.com/zhang-de-lab/zhang-lab?from=singlemessage.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
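As an illustration of the fusion-then-survival-model pattern described above, the sketch below concatenates hypothetical CNN-derived image features with eigengenes and fits a Cox proportional hazards model to stratify risk groups. All data, dimensions, and the use of the lifelines library are assumptions for demonstration; only the overall integration scheme follows the paper.

```python
# Illustrative sketch of cross-modal integration: deep image features
# (e.g., from a CNN applied to CT/histopathology) are concatenated with
# genomic eigengenes, then a Cox proportional hazards model stratifies
# patients by risk. Feature extraction and dimensions are assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n_patients = 100

# Hypothetical stand-ins for the two modalities.
deep_image_features = rng.normal(size=(n_patients, 16))  # CNN-derived
eigengenes = rng.normal(size=(n_patients, 4))            # functional genomics

# Cross-modal fusion by simple concatenation.
fused = np.hstack([deep_image_features, eigengenes])

df = pd.DataFrame(fused, columns=[f"f{i}" for i in range(fused.shape[1])])
df["time"] = rng.exponential(scale=60, size=n_patients)  # survival time (months)
df["event"] = rng.integers(0, 2, size=n_patients)        # 1 = death observed

cph = CoxPHFitter(penalizer=0.1).fit(df, duration_col="time", event_col="event")
risk = cph.predict_partial_hazard(df)

# Median split into high- vs low-risk subgroups, as in the paper's evaluation.
high_risk = risk > risk.median()
print("High-risk patients:", int(high_risk.sum()))
print("C-index on training data:", round(cph.concordance_index_, 3))
```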


Subject(s)
Carcinoma, Renal Cell , Kidney Neoplasms , Carcinoma, Renal Cell/diagnostic imaging , Carcinoma, Renal Cell/genetics , Genome , Humans , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/genetics , Neural Networks, Computer , Tomography, X-Ray Computed