Explainable artificial intelligence for omics data: a systematic mapping study.
Toussaint, Philipp A; Leiser, Florian; Thiebes, Scott; Schlesner, Matthias; Brors, Benedikt; Sunyaev, Ali.
Affiliations
  • Toussaint PA; Department of Economics and Management, Karlsruhe Institute of Technology, Karlsruhe, Germany.
  • Leiser F; HIDSS4Health - Helmholtz Information and Data Science School for Health, Karlsruhe, Heidelberg, Germany.
  • Thiebes S; Department of Economics and Management, Karlsruhe Institute of Technology, Karlsruhe, Germany.
  • Schlesner M; Biomedical Informatics, Data Mining and Data Analytics, Faculty of Applied Computer Science and Medical Faculty, University of Augsburg, Augsburg, Germany.
  • Brors B; Division of Applied Bioinformatics, German Cancer Research Center (DKFZ), Heidelberg, Germany.
  • Sunyaev A; Department of Economics and Management, Karlsruhe Institute of Technology, Karlsruhe, Germany.
Brief Bioinform; 25(1), 2023 Nov 22.
Article in English | MEDLINE | ID: mdl-38113073
ABSTRACT
Researchers increasingly turn to explainable artificial intelligence (XAI) to analyze omics data and gain insights into the underlying biological processes. Yet, given the interdisciplinary nature of the field, many findings have only been shared within their respective research communities. An overview of XAI for omics data is needed to highlight promising approaches and help detect common issues. Toward this end, we conducted a systematic mapping study. To identify relevant literature, we queried Scopus, PubMed, Web of Science, bioRxiv, medRxiv and arXiv. Based on keywording, we developed a coding scheme with 10 facets regarding the studies' AI methods, explainability methods and omics data. Our mapping study resulted in 405 included papers published between 2010 and 2023. The inspected papers analyze DNA-based (mostly genomic), transcriptomic, proteomic or metabolomic data by means of neural networks, tree-based methods, statistical methods and further AI methods. The preferred post-hoc explainability methods are feature relevance (n = 166) and visual explanation (n = 52), while papers using interpretable approaches often resort to transparent models (n = 83) or architecture modifications (n = 72). With many research gaps still apparent for XAI for omics data, we deduced eight research directions and discuss their potential for the field. We also provide exemplary research questions for each direction. Many problems with the adoption of XAI for omics data in clinical practice are yet to be resolved. This systematic mapping study outlines extant research on the topic and provides research directions for researchers and practitioners.
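As a minimal illustration of the post-hoc feature-relevance explanations the abstract refers to (not code from any of the reviewed studies), the sketch below trains a tree-based classifier on a synthetic expression-style matrix and ranks features by SHAP attributions. The gene names and data are hypothetical placeholders, and the `shap` and scikit-learn packages are assumed to be installed.

```python
# Hedged sketch: post-hoc feature relevance on synthetic omics-style data.
# Assumes the shap and scikit-learn packages; gene names and data are made up.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "expression matrix": 200 samples x 50 genes; the binary phenotype
# depends mainly on the first two genes.
X = rng.normal(size=(200, 50))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200) > 0).astype(int)
feature_names = [f"gene_{i}" for i in range(X.shape[1])]

# Tree-based model (one of the AI-method families covered by the mapping study).
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Post-hoc explanation: SHAP values attribute each prediction to individual genes.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Depending on the shap version, per-class output is a list or a 3-D array;
# keep the attributions for the positive class.
if isinstance(shap_values, list):
    shap_values = shap_values[1]
elif shap_values.ndim == 3:
    shap_values = shap_values[..., 1]

# Global feature relevance: mean absolute contribution per gene.
relevance = np.abs(shap_values).mean(axis=0)
for i in np.argsort(relevance)[::-1][:5]:
    print(f"{feature_names[i]}: {relevance[i]:.3f}")
```

A visual explanation in the same spirit could be produced from these attributions, for example with shap.summary_plot(shap_values, X); which method is appropriate depends on the omics modality and model family at hand.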

Full text: 1 Database: MEDLINE Main subject: Artificial Intelligence / Proteomics Language: English Publication year: 2023 Document type: Article