1.
Bioengineering (Basel); 11(5), 2024 May 02.
Article in English | MEDLINE | ID: mdl-38790320

ABSTRACT

In recent years, deep convolutional neural networks (DCNNs) have shown promising performance in medical image analysis, including breast lesion classification in 2D ultrasound (US) images. Despite the outstanding performance of DCNN solutions, explaining their decisions remains an open research question, yet explainability has become essential for healthcare systems to accept and trust these models. This paper presents a novel framework for explaining DCNN classification decisions of lesions in ultrasound images, using saliency maps to link the DCNN decisions to known cancer characteristics in the medical domain. The proposed framework consists of three main phases. First, DCNN models for classification in ultrasound images are built. Next, selected visualization methods are applied to obtain saliency maps on the input images of the DCNN models. In the final phase, the visualization outputs are mapped to domain-known cancer characteristics. The paper then demonstrates the use of the framework for breast lesion classification from ultrasound images. We first follow the transfer learning approach and build two DCNN models. We then analyze the visualization outputs of the trained DCNN models using the EGrad-CAM and Ablation-CAM methods. Through the visualization outputs, we map the DCNN model decisions of benign and malignant lesions to characteristics such as echogenicity, calcification, shape, and margin. A retrospective dataset of 1298 US images collected from different hospitals is used to evaluate the effectiveness of the framework. The test results show that these characteristics contribute differently to benign and malignant lesion decisions. Our study provides a foundation for other researchers to explain DCNN classification decisions for other cancer types.
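
The abstract names EGrad-CAM and Ablation-CAM; as a hedged illustration of the same family of techniques, the sketch below computes a plain Grad-CAM saliency map with PyTorch. The backbone (resnet18), the hooked layer, and the random input are placeholder assumptions, not the paper's setup.

```python
# Minimal Grad-CAM sketch (illustrative; the paper uses EGrad-CAM and
# Ablation-CAM, whose implementations differ). Backbone and input are
# placeholders for a fine-tuned benign/malignant US classifier.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=2).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

layer = model.layer4                       # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed US image
score = model(x)[0].max()                  # logit of the predicted class
model.zero_grad()
score.backward()

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # GAP of gradients
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear")   # upsample to image size
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
```

The normalized map can then be overlaid on the input image and compared region by region against annotated characteristics such as calcification or margin.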

3.
Med Biol Eng Comput; 62(1): 135-149, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37735296

ABSTRACT

Deep convolutional neural networks (DCNNs) have demonstrated promising performance in classifying breast lesions in 2D ultrasound (US) images. Existing approaches typically use transfer learning with pre-trained models based on architectures designed for natural images; fewer attempts have been made to design customized architectures specifically for this purpose. This paper presents a comprehensive evaluation of transfer-learning-based solutions and automatically designed networks, analyzing the accuracy and robustness of different recognition models in three respects. First, we develop six DCNN models (BNet, GNet, SqNet, DsNet, RsNet, IncReNet) based on transfer learning. Second, we adapt the Bayesian optimization method to optimize a CNN (BONet) for classifying breast lesions. A retrospective dataset of 3034 US images collected from various hospitals is then used for evaluation. Extensive tests show that BONet outperforms the other models, exhibiting higher accuracy (83.33%), a lower generalization gap (1.85%), shorter training time (66 min), and lower model complexity (approximately 0.5 million weight parameters). We also compare the diagnostic performance of all models against that of three experienced radiologists. Finally, we explore the use of saliency maps to explain the classification decisions made by the different models. Our investigation shows that saliency maps can assist in comprehending the classification decisions.
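
The abstract does not disclose BONet's search space or optimizer settings; the sketch below is an illustrative Bayesian-style hyperparameter search using Optuna (whose default TPE sampler is a sequential model-based optimizer). The parameter ranges and the `train_and_validate` stub are hypothetical.

```python
# Hypothetical sketch of Bayesian hyperparameter search in the spirit of
# BONet. The search space is illustrative, not the paper's.
import optuna

def train_and_validate(params):
    # Placeholder: build a small CNN from `params`, train it on the
    # breast-lesion training split, and return validation accuracy.
    # A dummy score is returned here so the sketch runs end to end.
    return 1.0 - abs(params["dropout"] - 0.25) - params["lr"]

def objective(trial):
    params = {
        "num_blocks": trial.suggest_int("num_blocks", 2, 6),
        "base_filters": trial.suggest_categorical("base_filters", [16, 32, 64]),
        "dropout": trial.suggest_float("dropout", 0.0, 0.5),
        "lr": trial.suggest_float("lr", 1e-5, 1e-2, log=True),
    }
    return train_and_validate(params)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```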


Subject(s)
Machine Learning; Neural Networks, Computer; Retrospective Studies; Bayes Theorem
4.
Ultrason Imaging; 46(1): 41-55, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37865842

ABSTRACT

Thyroid cancer is one of the common types of cancer worldwide, and ultrasound (US) imaging is a modality commonly used for thyroid cancer diagnosis. The American College of Radiology Thyroid Imaging Reporting and Data System (ACR TIRADS) has been widely adopted to identify and classify US image characteristics of thyroid nodules. This paper presents novel methods for detecting the characteristic descriptors derived from TIRADS. Our methods return descriptions of nodule margin irregularity, margin smoothness, calcification, shape, and echogenicity using conventional computer vision and deep learning techniques. We evaluate our methods using datasets of 471 US images of thyroid nodules acquired from US machines of different makes and labeled by multiple radiologists. The proposed methods achieved overall accuracies of 88.00%, 93.18%, and 89.13% in classifying nodule calcification, margin irregularity, and margin smoothness, respectively. Further tests with limited data also show a promising overall accuracy of 90.60% for echogenicity and 100.00% for nodule shape. This study provides automated annotation of thyroid nodule characteristics from 2D ultrasound images, and the experimental results show promising performance of our methods for thyroid nodule analysis. Automatic detection of the correct characteristics not only offers supporting evidence for diagnosis but also enables rapid generation of patient reports, decreasing the workload of radiologists and enhancing productivity.
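
As a hedged sketch of the "conventional computer vision" side, the code below derives two TIRADS-style cues, taller-than-wide shape and margin irregularity, from a binary nodule mask using OpenCV contour geometry. The synthetic mask, thresholds, and descriptor choices (circularity, solidity) are illustrative assumptions, not the paper's published features.

```python
# Illustrative TIRADS-style shape/margin descriptors from a segmented
# nodule mask (assumed to come from an upstream segmentation step).
import cv2
import numpy as np

mask = np.zeros((256, 256), dtype=np.uint8)            # placeholder binary mask
cv2.ellipse(mask, (128, 128), (60, 40), 0, 0, 360, 255, -1)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)

x, y, w, h = cv2.boundingRect(cnt)
taller_than_wide = h > w                               # TIRADS shape cue

area = cv2.contourArea(cnt)
perimeter = cv2.arcLength(cnt, closed=True)
circularity = 4 * np.pi * area / (perimeter ** 2)      # 1.0 = perfect circle

hull = cv2.convexHull(cnt)
solidity = area / cv2.contourArea(hull)                # low solidity ~ irregular margin

print(taller_than_wide, round(circularity, 3), round(solidity, 3))
```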


Subject(s)
Calcinosis; Thyroid Neoplasms; Thyroid Nodule; Humans; Thyroid Nodule/diagnostic imaging; Retrospective Studies; Thyroid Neoplasms/diagnostic imaging; Ultrasonography/methods
5.
Ultrason Imaging; 46(1): 17-28, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37981781

ABSTRACT

Efficient Neural Architecture Search (ENAS) is a recent development in searching for optimal cell structures for Convolutional Neural Network (CNN) design. It has been used successfully in various applications, including ultrasound image classification for breast lesions. However, the existing ENAS approach optimizes only the cell structures, not the whole CNN architecture or its trainable hyperparameters. This paper presents a novel framework for automatic design of CNN architectures that combines the strengths of ENAS and Bayesian Optimization in two stages. First, we use ENAS to search for optimal normal and reduction cells. Second, with the optimal cells and a suitable hyperparameter search space, we adopt Bayesian Optimization to find the optimal depth of the network and the optimal configuration of the trainable hyperparameters. To test the validity of the proposed framework, a dataset of 1522 breast lesion ultrasound images is used for searching and modeling. We then evaluate the robustness of the proposed approach by testing the optimized CNN model on three external datasets consisting of 727 benign and 506 malignant lesion images. We further compare the CNN model with the default ENAS-based CNN model, and then with CNN models based on state-of-the-art architectures. The results (an error rate of no more than 20.6% on internal tests and 17.3% on average across external tests) show that the proposed framework generates robust and lightweight CNN models.
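
A hedged sketch of the second stage follows: with the ENAS-found cells held fixed, a Gaussian-process Bayesian optimizer (scikit-optimize's gp_minimize) searches network depth and trainable hyperparameters. The search space and the `evaluate_stacked_cells` stub are hypothetical stand-ins for building and training the cell-stacked CNN.

```python
# Hypothetical second-stage search: depth and trainable hyperparameters
# optimized by GP-based Bayesian optimization, with the ENAS cells fixed.
from skopt import gp_minimize
from skopt.space import Integer, Real

def evaluate_stacked_cells(params):
    depth, lr, weight_decay = params
    # Placeholder: stack `depth` copies of the ENAS normal/reduction cells,
    # train with the given lr and weight decay, and return validation error.
    # A dummy error surface is used here so the sketch runs end to end.
    return abs(depth - 4) * 0.02 + abs(lr - 1e-3) + weight_decay

space = [
    Integer(2, 8, name="depth"),                        # number of stacked cells
    Real(1e-5, 1e-2, prior="log-uniform", name="lr"),
    Real(1e-6, 1e-3, prior="log-uniform", name="weight_decay"),
]
result = gp_minimize(evaluate_stacked_cells, space, n_calls=30, random_state=0)
print(result.x, result.fun)                             # best params, best error
```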


Subject(s)
Neural Networks, Computer; Ultrasonography, Mammary; Female; Humans; Bayes Theorem; Ultrasonography; Breast/diagnostic imaging
6.
J Ultrasound Med; 41(8): 1961-1974, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34751458

ABSTRACT

BACKGROUND: This pilot study aims to exploit machine learning techniques to extract color Doppler ultrasound (CDUS) features and to build an artificial neural network (ANN) model based on these features for improving the diagnostic performance of thyroid cancer classification. METHODS: A total of 674 patients with 712 thyroid nodules (TNs) (512 from an internal dataset and 200 from an external dataset) were randomly selected in this retrospective study. We used an ANN to build a model (TDUS-Net) for classifying malignant and benign TNs using both automatically extracted quantitative CDUS features (whole ratio, intranodular ratio, peripheral ratio, and number of vessels) and gray-scale ultrasound (US) features defined by the American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS). We then compared the diagnostic performance of this model, of another ANN model based on the gray-scale US features alone (TUS-Net), and of radiologists. RESULTS: TDUS-Net (AUC: 0.898, 95% CI: 0.868-0.922) achieved a higher area under the curve (AUC) than TUS-Net (0.881, 95% CI: 0.850-0.908) in the internal tests. In the external tests, TDUS-Net (AUC: 0.925, 95% CI: 0.880-0.958) outperformed the radiologists (AUC: 0.810, 95% CI: 0.749-0.862). CONCLUSIONS: A machine learning model combining both gray-scale US features and CDUS features can achieve performance comparable to, or even higher than, that of radiologists in classifying TNs.
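
A minimal sketch of the feature-fusion idea behind TDUS-Net: quantitative CDUS features are concatenated with gray-scale TI-RADS scores and fed to a small ANN. The feature layout, network size, and random data below are placeholders; the paper's actual architecture is not specified in the abstract.

```python
# Illustrative fusion of CDUS and gray-scale TI-RADS features into an ANN.
# Feature columns and values are placeholders, not the study's data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: whole_ratio, intranodular_ratio, peripheral_ratio, vessel_count,
# then assumed TI-RADS scores (composition, echogenicity, shape, margin, foci).
X = np.random.rand(200, 9)                 # placeholder fused feature matrix
y = np.random.randint(0, 2, size=200)      # 0 = benign, 1 = malignant

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500))
clf.fit(X, y)
print(clf.predict_proba(X[:3]))            # per-nodule malignancy probabilities
```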


Subject(s)
Thyroid Neoplasms; Thyroid Nodule; Cohort Studies; Humans; Machine Learning; Pilot Projects; Retrospective Studies; Thyroid Neoplasms/diagnostic imaging; Thyroid Neoplasms/pathology; Thyroid Nodule/diagnostic imaging; Thyroid Nodule/pathology; Ultrasonography/methods
7.
Ultrasonics; 110: 106300, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33232887

ABSTRACT

Breast and thyroid cancers are two common cancers affecting women worldwide. Ultrasonography (US) is a commonly used non-invasive imaging modality for detecting breast and thyroid cancers, but its clinical diagnostic accuracy for these cancers is controversial. Thyroid and breast cancers share some similar high-frequency ultrasound characteristics, such as a taller-than-wide shape ratio, hypo-echogenicity, and ill-defined margins. This study aims to develop an automatic scheme for classifying thyroid and breast lesions in ultrasound images using deep convolutional neural networks (DCNNs). In particular, we propose a generic DCNN architecture with transfer learning and the same architectural parameter settings to train models for thyroid and breast cancers (TNet and BNet) respectively, and we test the viability of such a generic approach with ultrasound images collected from clinical practice. In addition, we investigate the potential of the thyroid model to learn the common features and its performance in classifying both breast and thyroid lesions. A retrospective dataset of 719 thyroid and 672 breast images, captured from US machines of different makes between October 2016 and December 2018, is used in this study. Test results show that both TNet and BNet, built on the same DCNN architecture, achieve good classification results (86.5% average accuracy for TNet and 89% for BNet). Furthermore, when TNet is used to classify breast lesions, it achieves a sensitivity of 86.6% and a specificity of 87.1%, indicating its capability to learn features commonly shared by thyroid and breast lesions. We further tested the diagnostic performance of the TNet model against that of three radiologists. The area under the curve (AUC) for thyroid nodule classification is 0.861 (95% CI: 0.792-0.929) for the TNet model and 0.757-0.854 (95% CI: 0.658-0.934) for the three radiologists. The AUC for breast cancer classification is 0.875 (95% CI: 0.804-0.947) for the TNet model and 0.698-0.777 (95% CI: 0.593-0.872) for the radiologists, indicating the model's potential to classify both breast and thyroid cancers with a higher level of accuracy than the radiologists.
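
A hedged sketch of the "one generic architecture, two models" recipe: the same transfer-learning constructor instantiated once for thyroid (TNet) and once for breast (BNet). The ResNet-50 backbone, frozen-feature strategy, and head size are assumptions for illustration; the paper's exact architecture and training settings are not given in the abstract.

```python
# Illustrative transfer-learning constructor shared by TNet and BNet.
# Backbone choice is an assumption; weights download on first use.
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

def build_lesion_classifier(num_classes: int = 2) -> nn.Module:
    model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)  # ImageNet init
    for p in model.parameters():
        p.requires_grad = False                 # freeze pre-trained features
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # new task head
    return model

tnet = build_lesion_classifier()  # to be fine-tuned on thyroid US images
bnet = build_lesion_classifier()  # to be fine-tuned on breast US images
```

Because both models come from the same constructor with identical settings, any performance difference is attributable to the training data rather than the architecture, which is the premise the study tests.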


Subject(s)
Breast Neoplasms/diagnostic imaging; Deep Learning; Thyroid Neoplasms/diagnostic imaging; Ultrasonography/methods; Datasets as Topic; Diagnosis, Differential; Female; Humans; Image Interpretation, Computer-Assisted; Middle Aged; Retrospective Studies; Sensitivity and Specificity; Ultrasonography, Mammary