Results 1 - 3 of 3
1.
Quant Imaging Med Surg ; 13(12): 8370-8382, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38106318

ABSTRACT

Background: Early preoperative evaluation of cervical lymph node metastasis (LNM) in papillary thyroid carcinoma (PTC) is critical for further surgical treatment. However, the insufficient accuracy of ultrasound-based LNM prediction for PTC is a problem that urgently needs to be resolved. This study aimed to clarify the role of convolutional neural networks (CNNs) in predicting LNM for PTC based on multimodality ultrasound.

Methods: The data of 308 patients who were clinically diagnosed with PTC and had LNM status confirmed via postoperative pathology at Beijing Tiantan Hospital, Capital Medical University, from August 2018 to April 2022 were used for CNN algorithm development and evaluation. Of these patients, 80% were randomly assigned to the training set and 20% to the test set. Ultrasound examination of the cervical lymph nodes was performed to assess possible metastasis. Residual network 50 (ResNet50) was employed for feature extraction from the B-mode and contrast-enhanced ultrasound (CEUS) images. For each case, features were extracted from the B-mode and CEUS images separately and concatenated with the cervical LNM examination data to produce a final multimodality LNM prediction. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were used to evaluate the performance of the predictive model. Heatmaps were further developed to visualize the attention regions of the best-performing model.

Results: Of the 308 patients with PTC included in the analysis, 158 (51.3%) were diagnosed as LNM and 150 (48.7%) as non-LNM. In the test set, the triple-modality method (B-mode image, CEUS image, and ultrasound examination of cervical LNM) maximized accuracy at 80.65% (AUC = 0.831; sensitivity = 80.65%; specificity = 82.26%), outperforming both B-mode alone (accuracy = 69.00%; AUC = 0.720; sensitivity = 70.00%; specificity = 73.00%) and the dual-modality method of B-mode plus CEUS (accuracy = 75.81%; AUC = 0.742; sensitivity = 74.19%; specificity = 77.42%). The heatmaps of the triple-modality model highlighted its probable focus areas and revealed its flaws.

Conclusions: The PTC lymph node prediction model based on triple-modality features significantly outperformed all the other feature configurations. This deep learning model mimics the workflow of a human expert and leverages multimodal data from patients with PTC, thus further supporting clinical decision-making.
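The fusion step described in this abstract (per-modality features concatenated with the cervical LNM exam data before classification) can be sketched as a simple late-fusion operation. The feature dimensions and the 3-element exam encoding below are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np

def fuse_features(bmode_feat, ceus_feat, lnm_exam):
    """Late fusion: concatenate per-modality feature vectors into a
    single vector for the final LNM classifier."""
    return np.concatenate([bmode_feat, ceus_feat, lnm_exam])

# Illustrative dimensions: a ResNet50 global-average-pooled feature is
# 2048-D per image; the cervical LNM exam findings are encoded here as
# a hypothetical 3-D binary vector.
rng = np.random.default_rng(0)
bmode = rng.random(2048)
ceus = rng.random(2048)
exam = np.array([1.0, 0.0, 1.0])

fused = fuse_features(bmode, ceus, exam)
print(fused.shape)  # (4099,)
```

Concatenation keeps each modality's evidence intact and lets the downstream classifier learn how to weight them, which is why dropping a modality (as in the B-mode-only and dual-modality comparisons) changes performance.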

2.
BMC Med Imaging ; 23(1): 163, 2023 10 19.
Article in English | MEDLINE | ID: mdl-37858039

ABSTRACT

INTRODUCTION: Parameters such as left ventricular ejection fraction, peak strain dispersion, and global longitudinal strain are influential and clinically interpretable for the detection of cardiac disease, but their manual measurement requires laborious steps and expertise. In this study, we evaluated a video-based deep learning method for hypertensive cardiomyopathy detection that depends only on echocardiographic videos from four apical chamber views.

METHODS: One hundred eighty-five hypertensive cardiomyopathy (HTCM) patients and 112 healthy normal controls (N) were enrolled in this diagnostic study. We collected 297 de-identified subjects' echo videos for training and testing of an end-to-end video-based pipeline comprising snippet proposal, snippet feature extraction by a three-dimensional (3D) convolutional neural network (CNN), a weakly supervised temporally correlated feature ensemble, and a final classification module. The snippet proposal step requires a preliminarily trained end-systole and end-diastole timing detection model to produce snippets that begin at end-diastole and cover the contraction and dilatation of a complete cardiac cycle. A domain adversarial neural network was introduced to systematically address the appearance variability of echo videos in terms of noise, blur, transducer depth, contrast, etc., and thereby improve the generalization of the deep learning algorithms. In contrast to previous image-based cardiac disease detection architectures, video-based approaches integrate spatial and temporal information better through a more powerful 3D convolutional operator.

RESULTS: Our proposed model achieved an accuracy (ACC) of 92%, an area under the receiver operating characteristic (ROC) curve (AUC) of 0.90, a sensitivity (SEN) of 97%, and a specificity (SPE) of 84% with respect to subjects for hypertensive cardiomyopathy detection in the test data set, outperforming the corresponding 3D CNN (vanilla I3D: ACC 0.90, AUC 0.89, SEN 0.94, SPE 0.84). On the whole, the video-based methods appeared markedly superior to the image-based methods, although a few metrics of the image-based methods (ES/ED and random) remained more compelling (sensitivity of 93% and negative predictive value of 100%).

CONCLUSION: The results support the feasibility of an end-to-end video-based deep learning method for the automated diagnosis of hypertensive cardiomyopathy in echocardiography to augment and assist clinicians.

TRIAL REGISTRATION: Current Controlled Trials ChiCTR1900025325, August 24, 2019. Retrospectively registered.
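The domain adversarial component mentioned in this abstract is typically built around a gradient reversal layer: an identity map in the forward pass whose backward pass negates the gradient, so the feature extractor is trained to fool a domain classifier and thus learns appearance-invariant features. A minimal NumPy sketch of that layer (the `lam` scale and the class interface are illustrative, not the authors' implementation):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; negates (and scales by lam) the
    gradient in the backward pass. Placed between the feature extractor
    and the domain classifier, it pushes features toward being
    indistinguishable across acquisition conditions (noise, blur,
    transducer depth, contrast)."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed, scaled gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
out = grl.forward(x)
grad = grl.backward(np.ones_like(x))
print(out, grad)
```

Because the reversal happens only in the backward pass, the domain classifier still trains normally on its own loss while the upstream features drift toward domain invariance.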


Subjects
Cardiomyopathies; Ventricular Function, Left; Humans; Stroke Volume; Heart; Neural Networks, Computer; Cardiomyopathies/diagnostic imaging
3.
Chin Med J (Engl) ; 134(4): 415-424, 2021 Jan 07.
Article in English | MEDLINE | ID: mdl-33617184

ABSTRACT

BACKGROUND: Current deep learning diagnosis of breast masses mainly addresses the distinction between benign and malignant lesions. In China, breast masses are divided into four categories according to the treatment method: inflammatory masses, adenosis, benign tumors, and malignant tumors. These categories are important for guiding clinical treatment. In this study, we aimed to develop a convolutional neural network (CNN) for the classification of these four breast mass types using ultrasound (US) images.

METHODS: Taking breast biopsy or pathological examination as the reference standard, CNNs were used to establish models for the four-way classification of 3623 breast cancer patients from 13 centers. The patients were randomly divided into training and test groups (n = 1810 vs. n = 1813). Separate models were created for two-dimensional (2D) images only, 2D plus color Doppler flow imaging (2D-CDFI), and 2D-CDFI plus pulsed wave Doppler (2D-CDFI-PW) images. The performance of these three models was compared using sensitivity, specificity, area under the receiver operating characteristic curve (AUC), positive (PPV) and negative (NPV) predictive values, and positive (LR+) and negative (LR-) likelihood ratios. The performance of the 2D model was further compared between masses of different sizes using the above statistics, between images from different hospitals using AUC, and against the performance of 37 radiologists.

RESULTS: The accuracies of the 2D, 2D-CDFI, and 2D-CDFI-PW models on the test set were 87.9%, 89.2%, and 88.7%, respectively. The AUCs for the classification of benign tumors, malignant tumors, inflammatory masses, and adenosis were 0.90, 0.91, 0.90, and 0.89, respectively (95% confidence intervals [CIs], 0.87-0.91, 0.89-0.92, 0.87-0.91, and 0.86-0.90). The 2D-CDFI model thus showed better accuracy (89.2%) on the test set than the 2D (87.9%) and 2D-CDFI-PW (88.7%) models. The 2D model showed an accuracy of 81.7% on breast masses ≤1 cm and 82.3% on breast masses >1 cm, a significant difference between the two groups (P < 0.001). The accuracy of the CNN classifications on the test set (89.2%) was significantly higher than that of all the radiologists (30%).

CONCLUSIONS: The CNN may achieve high accuracy in the classification of US images of breast masses and may perform significantly better than human radiologists.

TRIAL REGISTRATION: Chictr.org, ChiCTR1900021375; http://www.chictr.org.cn/showproj.aspx?proj=33139.
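The evaluation statistics this abstract reports (sensitivity, specificity, PPV, NPV, LR+, LR-) all derive from a one-vs-rest confusion matrix per class. A small sketch of those formulas, with hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Per-class (one-vs-rest) diagnostic statistics from a binary
    confusion matrix: the quantities reported for each model."""
    sens = tp / (tp + fn)        # sensitivity (true positive rate)
    spec = tn / (tn + fp)        # specificity (true negative rate)
    ppv = tp / (tp + fp)         # positive predictive value
    npv = tn / (tn + fn)         # negative predictive value
    lr_pos = sens / (1 - spec)   # LR+: how much a positive result raises the odds
    lr_neg = (1 - sens) / spec   # LR-: how much a negative result lowers the odds
    return {"SEN": sens, "SPE": spec, "PPV": ppv,
            "NPV": npv, "LR+": lr_pos, "LR-": lr_neg}

# Hypothetical counts for one class treated as "positive":
m = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)
print(m)
```

Unlike accuracy, the likelihood ratios are independent of class prevalence, which is why they are useful alongside PPV/NPV when the four mass types occur at different rates across the 13 centers.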


Subjects
Breast Neoplasms; Deep Learning; Area Under Curve; Breast/diagnostic imaging; Breast Neoplasms/diagnostic imaging; China; Humans; ROC Curve; Sensitivity and Specificity