Multi-Instance Classification of Breast Tumor Ultrasound Images Using Convolutional Neural Networks and Transfer Learning.
Ciobotaru, Alexandru; Bota, Maria Aurora; Goța, Dan Ioan; Miclea, Liviu Cristian.
Affiliation
  • Ciobotaru A; Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania.
  • Bota MA; Department of Advanced Computing Sciences, Faculty of Sciences and Engineering, Maastricht University, 6229 EN Maastricht, The Netherlands.
  • Goța DI; Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania.
  • Miclea LC; Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania.
Bioengineering (Basel); 10(12), 2023 Dec 13.
Article in English | MEDLINE | ID: mdl-38136010
ABSTRACT

BACKGROUND:

Breast cancer is one of the leading causes of death among women worldwide. Automating the early detection and classification of breast masses has been a prominent focus for researchers over the past decade. Ultrasound imaging is widely used in the diagnostic evaluation of breast cancer, but its predictive accuracy depends on the expertise of the specialist. There is therefore an urgent need for fast and reliable ultrasound image classification algorithms to address this issue.

METHODS:

This paper compares the efficiency of six state-of-the-art, fine-tuned deep learning models that classify breast tissue from ultrasound images into three classes (benign, malignant, and normal) using transfer learning. Additionally, the architecture of a custom model is introduced and trained from scratch on a public dataset containing 780 images, which was further augmented to 3900 and 7800 images, respectively. Furthermore, the custom model is validated on another, private dataset containing 163 ultrasound images divided into two classes (benign and malignant). The pre-trained architectures used in this work are ResNet-50, Inception-V3, Inception-ResNet-V2, MobileNet-V2, VGG-16, and DenseNet-121. The performance evaluation metrics used in this study are Precision, Recall, F1-Score, and Specificity.
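To make the transfer-learning setup concrete, the sketch below shows one common way to fine-tune a pre-trained backbone (here ResNet-50) for the three ultrasound classes. This is an illustrative example only, not the authors' code: the abstract does not state the framework, input size, augmentation transforms, or fine-tuning schedule, so TensorFlow/Keras, a 224x224 RGB input, and the augmentation layers shown are assumptions.

```python
# Illustrative transfer-learning sketch (assumed TensorFlow/Keras; not the paper's code).
import tensorflow as tf

NUM_CLASSES = 3  # benign, malignant, normal

# Load ResNet-50 pre-trained on ImageNet, without its original classification head.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the convolutional backbone for the initial training phase

# Attach a new classification head for the three ultrasound classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Example augmentation pipeline that could expand the 780-image dataset
# (the specific transformations used in the paper are not given in the abstract).
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])
```

After the frozen-backbone phase, a typical variant unfreezes some or all backbone layers and continues training at a lower learning rate; whether the authors did so is not specified in the abstract.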

RESULTS:

The experimental results show that the models trained on the augmented dataset with 7800 images obtained the best performance on the test set, achieving accuracies of 94.95 ± 0.64%, 97.69 ± 0.52%, 97.69 ± 0.13%, 97.77 ± 0.29%, 95.07 ± 0.41%, 98.11 ± 0.10%, and 96.75 ± 0.26% for ResNet-50, MobileNet-V2, Inception-ResNet-V2, VGG-16, Inception-V3, DenseNet-121, and our model, respectively.
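The reported figures combine accuracy with the per-class metrics named in the Methods. As a reference, the following sketch shows the standard one-vs-rest way to derive Precision, Recall, F1-Score, and Specificity from a multi-class confusion matrix; it is a generic NumPy computation, not code from the paper, and the example matrix values are hypothetical.

```python
# Standard one-vs-rest metric computation from a confusion matrix (illustrative only).
import numpy as np

def per_class_metrics(conf_mat: np.ndarray) -> dict:
    """conf_mat[i, j] = number of samples with true class i predicted as class j."""
    tp = np.diag(conf_mat).astype(float)
    fp = conf_mat.sum(axis=0) - tp   # predicted as class k, but true class differs
    fn = conf_mat.sum(axis=1) - tp   # true class k, but predicted otherwise
    tn = conf_mat.sum() - (tp + fp + fn)

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    accuracy = tp.sum() / conf_mat.sum()

    return {
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "specificity": specificity,
        "accuracy": accuracy,
    }

# Hypothetical 3x3 confusion matrix (rows/columns: benign, malignant, normal).
cm = np.array([[430, 10, 5],
               [ 12, 200, 3],
               [  4,   2, 114]])
print(per_class_metrics(cm))
```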

CONCLUSION:

Our proposed model obtains competitive results, outperforming some state-of-the-art models in terms of accuracy and training time.
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Bioengineering (Basel) Year: 2023 Document type: Article Country of affiliation: Romania
