Results 1 - 2 of 2

1.
Endoscopy; 55(8): 701-708, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36754065

ABSTRACT

BACKGROUND: Deep learning models have previously been established to predict the histopathology and invasion depth of gastric lesions from endoscopic images. This study aimed to establish and validate a deep learning-based clinical decision support system (CDSS) for the automated detection and classification (diagnosis and invasion-depth prediction) of gastric neoplasms in real-time endoscopy.

METHODS: The same 5017 endoscopic images that were used to establish the previous models served as the training data. The primary outcomes were: (i) the lesion detection rate for the detection model, and (ii) the lesion classification accuracy for the classification model. For performance validation of the lesion detection model, 2524 real-time procedures were tested in a randomized pilot study: consecutive patients were allocated to either CDSS-assisted or conventional screening endoscopy, and the lesion detection rate was compared between the groups. For performance validation of the lesion classification model, a prospective multicenter external test was conducted using 3976 novel images from five institutions.

RESULTS: The lesion detection rate was 95.6% (internal test). On performance validation, CDSS-assisted endoscopy showed a higher lesion detection rate than conventional screening endoscopy, although the difference was not statistically significant (2.0% vs. 1.3%; P = 0.21) (randomized study). The lesion classification accuracy was 89.7% in the four-class classification (advanced gastric cancer, early gastric cancer, dysplasia, and non-neoplastic) and 89.2% in the invasion-depth prediction (mucosa-confined or submucosa-invaded; internal test). On performance validation, the CDSS reached 81.5% accuracy in the four-class classification and 86.4% accuracy in the binary classification (prospective multicenter external test).

CONCLUSIONS: The CDSS demonstrated high performance in detecting and classifying gastric lesions, and showed its potential for real-life clinical application.
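The randomized comparison of detection rates (2.0% vs. 1.3%, P = 0.21) can be sanity-checked with a standard pooled two-proportion z-test. The per-arm counts below are an assumption (a roughly even split of the 2524 procedures, with detections matching the reported rates); the abstract does not give exact counts, so this sketch will not reproduce P = 0.21 exactly:

```python
from math import sqrt, erfc

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided pooled two-proportion z-test; returns (z, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))  # erfc(|z|/sqrt(2)) is the two-sided normal p-value

# Assumed counts: ~1262 procedures per arm, detection rates ~2.0% vs ~1.3%
z, p = two_proportion_ztest(25, 1262, 16, 1262)
print(round(z, 2), round(p, 3))  # → 1.42 0.156
```

With groups of this size, a difference of this magnitude is indeed compatible with chance, consistent with the abstract's conclusion of no statistical significance.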


Subjects
Decision Support Systems, Clinical , Deep Learning , Stomach Neoplasms , Humans , Stomach Neoplasms/diagnostic imaging , Stomach Neoplasms/pathology , Pilot Projects , Prospective Studies , Endoscopy/methods , Endoscopy, Gastrointestinal
2.
J Pers Med; 11(4), 2021 Mar 24.
Article in English | MEDLINE | ID: mdl-33805171

ABSTRACT

Auto-detection of cerebral aneurysms using convolutional neural networks (CNNs) is being increasingly reported. However, few studies to date have attempted to predict the rupture risk itself, rather than the diagnosis. We developed a multi-view CNN to predict the rupture risk of small unruptured intracranial aneurysms (UIAs) based on three-dimensional (3D) digital subtraction angiography (DSA). The performance of a multi-view CNN-ResNet50 in predicting the rupture risk (high vs. non-high) of anterior-circulation UIAs smaller than 7 mm was compared with that of other CNN architectures (AlexNet and VGG16), networks of the same type but different depths (ResNet101 and ResNet152), and a single-image-based CNN (single-view ResNet50). The sensitivity, specificity, and overall accuracy of risk prediction were estimated and compared across architectures. The study included 364 UIAs in the training and 93 in the test datasets. The multi-view CNN-ResNet50 exhibited a sensitivity of 81.82% (66.76-91.29%), a specificity of 81.63% (67.50-90.76%), and an overall accuracy of 81.72% (66.98-90.92%) for risk prediction. AlexNet, VGG16, ResNet101, ResNet152, and the single-view CNN-ResNet50 showed similar specificity, but lower sensitivity and overall accuracy (AlexNet, 63.64% and 76.34%; VGG16, 68.18% and 74.19%; ResNet101, 68.18% and 73.12%; ResNet152, 54.55% and 72.04%; single-view CNN-ResNet50, 50.00% and 64.52%). The F1 score was highest for the multi-view CNN-ResNet50 (80.90% (67.29-91.81%)). Our study suggests that a multi-view CNN-ResNet50 may be feasible for assessing the rupture risk of small UIAs.
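The reported percentages for the multi-view CNN-ResNet50 are mutually consistent with a single 2x2 confusion matrix on the 93-aneurysm test set. The counts below (36 TP, 9 FP, 8 FN, 40 TN, i.e. 44 high-risk and 49 non-high-risk aneurysms) are reconstructed from the percentages as one consistent split, not figures stated in the abstract. A minimal sketch of the metric computation:

```python
def risk_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy, and F1 from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)                  # recall on high-risk aneurysms
    specificity = tn / (tn + fp)                  # correct rejection of non-high-risk
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f1

# Counts reconstructed (assumed) from the reported percentages:
sens, spec, acc, f1 = risk_metrics(tp=36, fp=9, fn=8, tn=40)
print(f"{sens:.2%} {spec:.2%} {acc:.2%} {f1:.2%}")  # → 81.82% 81.63% 81.72% 80.90%
```

The F1 value recovered from this split matches the abstract's 80.90%, which supports (but does not prove) the assumed class counts.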
