Adversarial training improves model interpretability in single-cell RNA-seq analysis.
Sadria, Mehrshad; Layton, Anita; Bader, Gary D.
Affiliation
  • Sadria M; Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada.
  • Layton A; Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada.
  • Bader GD; Cheriton School of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada.
Bioinform Adv; 3(1): vbad166, 2023.
Article in En | MEDLINE | ID: mdl-38099262
ABSTRACT
Motivation:

Predictive computational models must be accurate, robust, and interpretable to be considered reliable in important areas such as biology and medicine. A sufficiently robust model should not have its output affected significantly by a slight change in the input. These models should also be able to explain how a decision is made, to support user trust in the results. Efforts have been made to improve the robustness and interpretability of predictive computational models independently; however, the interaction between robustness and interpretability is poorly understood.

Results:

As an example task, we explore the computational prediction of cell type based on single-cell RNA-seq data and show that it can be made more robust by adversarially training a deep learning model. Surprisingly, we find this also leads to improved model interpretability, as measured by identifying genes important for classification using a range of standard interpretability methods. Our results suggest that adversarial training may be generally useful for improving deep learning robustness and interpretability, and that it should be evaluated on a range of tasks.

Availability and implementation:

Our Python implementation of all analyses in this publication can be found at https://github.com/MehrshadSD/robustness-interpretability. The analysis was conducted using numpy 0.2.5, pandas 2.0.3, scanpy 1.9.3, tensorflow 2.10.0, matplotlib 3.7.1, seaborn 0.12.2, sklearn 1.1.1, shap 0.42.0, lime 0.2.0.1, and matplotlib_venn 0.11.9.
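To make the approach described in the Results concrete, the sketch below shows one common form of adversarial training (FGSM-style perturbations during training) for a dense cell-type classifier on expression profiles, followed by SHAP attribution of predictions to genes. This is a minimal illustration, not the authors' code: the network architecture, the perturbation budget `epsilon`, the use of `shap.GradientExplainer`, and the synthetic data are all assumptions made for the example.

```python
# Minimal sketch (assumed, not the authors' pipeline): FGSM-style adversarial
# training of a cell-type classifier, then SHAP-based per-gene attribution.
import numpy as np
import tensorflow as tf
import shap

n_genes, n_cell_types = 2000, 10
rng = np.random.default_rng(0)
X = rng.random((512, n_genes)).astype("float32")   # placeholder expression matrix
y = rng.integers(0, n_cell_types, size=512)        # placeholder cell-type labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_genes,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_cell_types),           # logits
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam(1e-3)
epsilon = 0.05                                     # assumed perturbation budget

@tf.function
def train_step(x, labels):
    # Craft FGSM perturbations: shift inputs along the sign of the loss gradient.
    with tf.GradientTape() as tape:
        tape.watch(x)
        adv_loss = loss_fn(labels, model(x, training=False))
    x_adv = x + epsilon * tf.sign(tape.gradient(adv_loss, x))
    # Update weights on a mix of clean and adversarial examples.
    with tf.GradientTape() as tape:
        loss = 0.5 * (loss_fn(labels, model(x, training=True))
                      + loss_fn(labels, model(x_adv, training=True)))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

dataset = tf.data.Dataset.from_tensor_slices((X, y)).shuffle(512).batch(64)
for epoch in range(5):
    for xb, yb in dataset:
        train_step(xb, yb)

# Attribute predictions to genes; rankings of per-gene importance can then be
# compared between a standard and an adversarially trained model.
explainer = shap.GradientExplainer(model, X[:100])
shap_values = explainer.shap_values(X[:10])
```

In this setup, the comparison reported in the paper would amount to running the same attribution step on models trained with and without the adversarial term and asking which one ranks biologically meaningful genes more highly.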

Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: Bioinform Adv Year of publication: 2023 Document type: Article