The false hope of current approaches to explainable artificial intelligence in health care.
Ghassemi, Marzyeh; Oakden-Rayner, Luke; Beam, Andrew L.
Affiliation
  • Ghassemi M; Department of Electrical Engineering and Computer Science and Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA; Vector Institute, Toronto, ON, Canada.
  • Oakden-Rayner L; Australian Institute for Machine Learning, University of Adelaide, Adelaide, SA, Australia.
  • Beam AL; CAUSALab and Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA. Electronic address: andrew_beam@hms.harvard.edu.
Lancet Digit Health; 3(11): e745-e750, 2021 Nov.
Article in En | MEDLINE | ID: mdl-34711379
The black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine. It has been argued that explainable AI will engender trust with the health-care workforce, provide transparency into the AI decision-making process, and potentially mitigate various kinds of bias. In this Viewpoint, we argue that this belief represents a false hope for explainable AI, and that current explainability methods are unlikely to achieve these goals for patient-level decision support. We provide an overview of current explainability techniques and highlight how various failure cases can cause problems for decision making for individual patients. In the absence of suitable explainability methods, we advocate rigorous internal and external validation of AI models as a more direct means of achieving the goals often associated with explainability, and we caution against making explainability a requirement for clinically deployed models.
Subjects

Full text: 1 Database: MEDLINE Main subject: Artificial Intelligence / Communication / Delivery of Health Care / Dissent and Disputes / Trust / Comprehension Study type: Diagnostic_studies / Prognostic_studies Limits: Humans Language: En Journal: Lancet Digit Health Publication year: 2021 Document type: Article Country of affiliation: Canada
