CNN-BLSTM based deep learning framework for eukaryotic kinome classification: An explainability based approach.
John, Chinju; Sahoo, Jayakrushna; Sajan, Irish K; Madhavan, Manu; Mathew, Oommen K.
Affiliation
  • John C; Department of Computer Science and Engineering, Indian Institute of Information Technology Kottayam, Kottayam, 686635, Kerala, India. Electronic address: chinjuj.phd201001@iiitkottayam.ac.in.
  • Sahoo J; Department of Computer Science and Engineering, Indian Institute of Information Technology Kottayam, Kottayam, 686635, Kerala, India.
  • Sajan IK; Department of Computer Science and Engineering, Indian Institute of Information Technology Kottayam, Kottayam, 686635, Kerala, India.
  • Madhavan M; Department of Computer Science and Engineering, Indian Institute of Information Technology Kottayam, Kottayam, 686635, Kerala, India.
  • Mathew OK; Department of Computer Science and Engineering, Indian Institute of Information Technology Kottayam, Kottayam, 686635, Kerala, India.
Comput Biol Chem ; 112: 108169, 2024 Aug 08.
Article in En | MEDLINE | ID: mdl-39137619
ABSTRACT
Classification of protein families from their sequences is an enduring task in proteomics and related studies. Numerous deep-learning models have been built to tackle this challenge, but owing to their black-box character they still fall short in reliability. Here, we present a novel explainability pipeline that explains the pivotal decisions of a deep learning model for classification of the eukaryotic kinome. Based on a comparative and experimental analysis of state-of-the-art deep learning algorithms, the best-performing model, a CNN-BLSTM, was chosen to classify eukaryotic kinase sequences into their eight corresponding families. As a substitute for the conventional class-activation-map-based interpretation of CNN models in this domain, we cascaded the Grad-CAM and Integrated Gradients (IG) explainability methods for improved and more responsible results. To verify the trustworthiness of the classifier, we masked the kinase domain traces identified by the explainability pipeline and observed a class-specific drop in F1-score from 0.96 to 0.76. In compliance with the Explainable AI paradigm, our results are promising and contribute to enhancing the trustworthiness of deep learning models for biological sequence-associated studies.
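For illustration only, the sketch below outlines a minimal CNN-BLSTM sequence classifier of the kind described above, written in Python with Keras. It is not the authors' implementation: the sequence length, vocabulary size, embedding width, and layer sizes are assumptions, and the Grad-CAM/Integrated Gradients attribution and domain-masking steps are not shown.

    # Illustrative sketch (not the published model): a minimal CNN-BLSTM
    # classifier mapping encoded kinase sequences to eight family labels.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    MAX_LEN = 1000      # assumed maximum (padded) kinase sequence length
    VOCAB = 21          # 20 amino acids + padding token (assumption)
    N_FAMILIES = 8      # eight eukaryotic kinase families

    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        # Learn a dense representation of each residue token
        layers.Embedding(input_dim=VOCAB, output_dim=64),
        # Convolutional layer captures local sequence motifs
        layers.Conv1D(filters=128, kernel_size=9, activation="relu", padding="same"),
        layers.MaxPooling1D(pool_size=2),
        # Bidirectional LSTM captures longer-range context in both directions
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_FAMILIES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()

A trustworthiness check in the spirit of the abstract would then mask the residues flagged by the attribution pipeline in the test sequences and compare per-class F1-scores before and after masking.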
Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Comput Biol Chem Journal subject: BIOLOGIA / INFORMATICA MEDICA / QUIMICA Year: 2024 Document type: Article