1.
Eur Radiol ; 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37973631

ABSTRACT

OBJECTIVE: This study aims to develop a weakly supervised deep learning (DL) model for vertebral-level vertebral compression fracture (VCF) classification using image-level labelled data.

METHODS: The training set included 815 patients, either normal (n = 507, 62%) or with VCFs (n = 308, 38%). The proposed model was trained on image-level labelled data for vertebral-level classification. A separate supervised DL model was trained with vertebral-level labelled data for comparison.

RESULTS: The test set included 227 patients, either normal (n = 117, 52%) or with VCFs (n = 110, 48%). For a fair comparison, the sensitivities of the two models were compared at matched specificities. At an overall L1-L5 specificity of 0.981, the proposed model tended to outperform the vertebral-level supervised model in sensitivity (0.770 vs 0.705, p = 0.080). For the vertebral-level analysis, the specificities for L1-L5 were 0.974, 0.973, 0.970, 0.991, and 0.995, respectively. The proposed model yielded the same or better sensitivity than the vertebral-level supervised model for L1 (0.750 vs 0.694, p = 0.480), L3 (0.793 vs 0.586, p < 0.05), L4 (0.833 vs 0.667, p = 0.480), and L5 (0.600 vs 0.600, p = 1.000). For L2, the proposed model showed lower sensitivity, but the difference was not significant (0.775 vs 0.825, p = 0.617).

CONCLUSIONS: The proposed model may have comparable or better performance than the supervised model in vertebral-level VCF classification.

CLINICAL RELEVANCE STATEMENT: Vertebral-level VCF classification aids in devising patient-specific treatment plans by identifying the precise vertebrae affected by compression fractures.

KEY POINTS: • The proposed weakly supervised method may have comparable or better performance than the supervised method for vertebral-level VCF classification. • The weakly supervised model could classify cases with multiple VCFs at the vertebral level, even though it was trained with image-level labels. • The proposed method could help reduce radiologists' labelling labour because it enables vertebral-level classification from image-level labels.
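The central idea of producing vertebral-level predictions from image-level labels resembles multiple-instance learning, in which an image is treated as positive if any of its vertebrae is fractured. The following minimal PyTorch sketch illustrates that pattern under stated assumptions; the backbone, feature dimensions, and max-pooling aggregation are illustrative choices, not the paper's actual architecture.

# Hypothetical sketch of weakly supervised training; shapes and layers are assumptions.
import torch
import torch.nn as nn

class WeaklySupervisedVCFNet(nn.Module):
    """Predicts a fracture probability per vertebra (L1-L5) while being trained
    only with an image-level label (any VCF present vs. none)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        # Placeholder per-vertebra feature extractor; a real model would share
        # a CNN backbone over the radiograph.
        self.encoder = nn.Sequential(nn.Linear(256, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, vert_feats):                       # (batch, 5, 256) per-vertebra features
        logits = self.head(self.encoder(vert_feats)).squeeze(-1)  # (batch, 5)
        image_logit, _ = logits.max(dim=1)               # image is positive if any vertebra is
        return logits, image_logit

model = WeaklySupervisedVCFNet()
criterion = nn.BCEWithLogitsLoss()
vert_feats = torch.randn(8, 5, 256)                      # dummy batch
image_labels = torch.randint(0, 2, (8,)).float()         # only image-level labels are needed
_, image_logit = model(vert_feats)
loss = criterion(image_logit, image_labels)
loss.backward()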

2.
Nat Commun ; 13(1): 5815, 2022 Oct 3.
Article in English | MEDLINE | ID: mdl-36192403

ABSTRACT

A wearable silent speech interface (SSI) is a promising platform that enables verbal communication without vocalization. The most widely studied methodology for SSI is surface electromyography (sEMG). However, sEMG suffers from low scalability because of signal quality-related issues, including signal-to-noise ratio and inter-electrode interference. Hence, we present a novel SSI that uses crystalline-silicon-based strain sensors combined with a 3D convolutional deep learning algorithm. Two perpendicularly placed strain gauges with a minimized cell dimension (<0.1 mm²) could effectively capture biaxial strain information with high reliability. We attached four strain sensors near each subject's mouth and collected strain data for an unprecedentedly large word set (100 words), which our SSI classifies with high accuracy (87.53%). Several analysis methods were used to verify the system's reliability, and the system was compared with another SSI using sEMG electrodes of the same dimensions, which exhibited a considerably lower accuracy (42.60%).


Subjects
Deep Learning, Speech, Algorithms, Electromyography/methods, Reproducibility of Results, Silicon
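As a rough illustration of the classification stage, the sketch below shows a small 3D convolutional classifier over a (sensor, strain axis, time) volume. The layer sizes, input shaping, and window length are assumptions made for illustration and do not reproduce the published network.

# Illustrative sketch only; channel counts, kernel sizes, and input layout are assumptions.
import torch
import torch.nn as nn

class StrainWordClassifier(nn.Module):
    """Classifies a window of multi-channel strain signals into one of 100 words
    using 3D convolutions over a (sensor, axis, time) volume."""
    def __init__(self, n_words=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 1, 2)),                     # downsample along time only
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global pooling to a feature vector
        )
        self.classifier = nn.Linear(32, n_words)

    def forward(self, x):                                # x: (batch, 1, sensors, axes, time)
        return self.classifier(self.features(x).flatten(1))

model = StrainWordClassifier()
dummy = torch.randn(2, 1, 4, 2, 128)                     # 4 sensors x 2 strain axes x 128 samples
print(model(dummy).shape)                                 # torch.Size([2, 100])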
3.
Diagnostics (Basel) ; 12(8), 2022 Jul 31.
Article in English | MEDLINE | ID: mdl-36010210

ABSTRACT

By automatically classifying the stomach, small bowel, and colon, the reading time of wireless capsule endoscopy (WCE) can be reduced. Localizing the small bowel is also an essential preprocessing step before applying deep learning-based automated small bowel lesion detection algorithms. The purpose of this study was to develop an automated small bowel detection method for long, untrimmed WCE videos; the stomach and colon can thereby also be distinguished. The proposed method is based on a convolutional neural network (CNN) with temporal filtering of the probabilities predicted by the CNN. For the CNN, we use a ResNet50 model to classify three organs: stomach, small bowel, and colon. A hybrid temporal filter consisting of a Savitzky-Golay filter and a median filter is applied to the temporal probabilities of the "small bowel" class. After filtering, the small bowel and the other two organs are differentiated by thresholding. The study was conducted on a dataset of 200 patients (100 normal and 100 abnormal WCE cases), divided into a training set of 140 cases, a validation set of 20 cases, and a test set of 40 cases. On the test set of 40 patients (20 normal and 20 abnormal WCE cases), the proposed method showed an accuracy of 99.8% in binary classification of the small bowel. Transition-time errors were only 38.8 ± 25.8 s for the stomach-to-small bowel transition and 32.0 ± 19.1 s for the small bowel-to-colon transition, compared with ground-truth organ transition points marked by two experienced gastroenterologists.
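The described post-processing of the per-frame "small bowel" probabilities (Savitzky-Golay filtering, median filtering, then thresholding) can be sketched as follows. The filter ordering, window lengths, polynomial order, and threshold below are assumptions, since the abstract does not give the actual parameter values.

# Hedged sketch of the temporal-filtering stage; all parameter values are assumptions.
import numpy as np
from scipy.signal import savgol_filter, medfilt

def smooth_and_segment(probs, savgol_window=301, polyorder=2,
                       median_window=151, threshold=0.5):
    """Smooth per-frame small-bowel probabilities with a Savitzky-Golay filter
    followed by a median filter, then threshold to obtain a binary small-bowel
    mask and the organ transition frames."""
    smoothed = savgol_filter(probs, savgol_window, polyorder)
    smoothed = medfilt(smoothed, kernel_size=median_window)
    mask = smoothed > threshold
    frames = np.flatnonzero(mask)              # first/last small-bowel frame = transitions
    return mask, ((frames[0], frames[-1]) if frames.size else None)

# Example on synthetic probabilities for a 10,000-frame video.
probs = np.concatenate([np.random.rand(3000) * 0.3,        # stomach
                        0.7 + np.random.rand(4000) * 0.3,  # small bowel
                        np.random.rand(3000) * 0.3])       # colon
mask, (start, end) = smooth_and_segment(probs)
print(f"Small bowel spans frames {start}..{end}")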
