Enhanced deep learning models for automatic fish species identification in underwater imagery.
D, Siri; Vellaturi, Gopikrishna; Shaik Ibrahim, Shaik Hussain; Molugu, Srikanth; Desanamukula, Venkata Subbaiah; Kocherla, Raviteja; Vatambeti, Ramesh.
Affiliations
  • D S; Department of CSE, Gokaraju Rangaraju Institute of Engineering and Technology, Hyderabad, India.
  • Vellaturi G; Department of Information Technology, MLR Institute of Technology, Hyderabad, India.
  • Shaik Ibrahim SH; Department of Computer Science and Engineering, Malla Reddy University, Hyderabad, 500043, India.
  • Molugu S; Department of Computer Science and Engineering, Malla Reddy University, Hyderabad, 500043, India.
  • Desanamukula VS; Department of Computer Science and Engineering, Lakireddy Bali Reddy College of Engineering, India.
  • Kocherla R; Department of Computer Science and Engineering, Malla Reddy University, Hyderabad, 500043, India.
  • Vatambeti R; School of Computer Science and Engineering, VIT-AP University, Vijayawada, 522237, India.
Heliyon ; 10(15): e35217, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39170344
ABSTRACT
Underwater cameras are crucial in marine ecology, but managing the imagery they produce requires automatic species identification. This study proposes a two-stage deep learning approach. First, the Unsharp Mask Filter (UMF) preprocesses images. Then, an enhanced region-based fully convolutional network (R-FCN) detects fish, using two-order integrals for position-sensitive score maps and precise region-of-interest (PS-Pr-RoI) pooling to improve accuracy. The second stage integrates ShuffleNetV2 with the Squeeze-and-Excitation (SE) module, forming the Improved ShuffleNetV2 model and sharpening its classification focus. Hyperparameters are optimized with the Enhanced Northern Goshawk Optimization (ENGO) algorithm. The improved R-FCN model achieves 99.94% accuracy, 99.58% precision and recall, and a 99.27% F-measure on the Fish4knowledge dataset. Evaluated on the same dataset, the ENGO-optimized ShuffleNetV2 model achieves 99.93% accuracy, 99.19% precision, 98.29% recall, and a 98.71% F-measure, highlighting its superior classification accuracy.
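The preprocessing step named in the abstract, unsharp masking, is a standard sharpening technique: the image is blurred, and the difference between the original and the blur is added back to emphasize edges. The sketch below is a minimal, generic illustration of that idea in pure NumPy, not the authors' implementation; the `radius` and `amount` parameters and the box-blur smoothing step are assumptions for the example (a Gaussian blur is more common in practice).

```python
import numpy as np

def unsharp_mask(img, radius=1, amount=1.0):
    """Sharpen a 2-D grayscale image: out = img + amount * (img - blurred).

    A simple separable box blur stands in for the smoothing step.
    """
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    # Edge-pad, then blur rows and columns separately ('valid' trims the pad).
    pad = np.pad(img.astype(float), radius, mode="edge")
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, pad)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, blurred)
    # Add the high-frequency residual back; clip to the valid intensity range.
    return np.clip(img + amount * (img - blurred), 0, 255)

# Usage: a bright spot on a flat background gets amplified, flat areas stay put.
img = np.full((5, 5), 100.0)
img[2, 2] = 200.0
out = unsharp_mask(img, radius=1, amount=1.0)
```

Because the blurred value at the bright pixel is lower than the original, the residual there is positive and the peak is boosted (clipped at 255), while uniform regions are unchanged.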
Full text: 1 Database: MEDLINE Language: English Publication year: 2024 Document type: Article
