Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives.
Aquino, Yves Saint James; Carter, Stacy M; Houssami, Nehmat; Braunack-Mayer, Annette; Win, Khin Than; Degeling, Chris; Wang, Lei; Rogers, Wendy A.
Affiliation
  • Aquino YSJ; Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia yaquino@uow.edu.au.
  • Carter SM; Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia.
  • Houssami N; School of Public Health, The University of Sydney, Sydney, New South Wales, Australia; The Daffodil Centre, Sydney, New South Wales, Australia.
  • Braunack-Mayer A; Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia.
  • Win KT; Centre for Persuasive Technology and Society, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia.
  • Degeling C; Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia.
  • Wang L; Centre for Artificial Intelligence, School of Computing and Information Technology, University of Wollongong, Wollongong, New South Wales, Australia.
  • Rogers WA; Department of Philosophy and School of Medicine, Macquarie University, Sydney, New South Wales, Australia.
J Med Ethics; 2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36823101
ABSTRACT

BACKGROUND:

There is growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage groups that are already under-represented and marginalised (eg, based on gender or race).

OBJECTIVES:

Our objectives are to canvass the range of strategies stakeholders endorse for mitigating algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias.

METHODOLOGY:

The study involves in-depth, semistructured interviews with healthcare workers, screening programme managers, consumer health representatives, regulators, data scientists and developers.

RESULTS:

Findings reveal considerably divergent views on three key issues. First, views on whether bias is a problem in healthcare AI varied: most participants agreed that bias is a problem (which we call the bias-critical view), a small number believed the opposite (the bias-denial view), and some argued that the benefits of AI outweigh any harms or wrongs arising from the bias problem (the bias-apologist view). Second, there was disagreement about strategies to mitigate bias, and about who is responsible for such strategies. Finally, there were divergent views on whether to include or exclude sociocultural identifiers (eg, race, ethnicity or gender-diverse identities) in the development of AI as a way to mitigate bias.

CONCLUSION/SIGNIFICANCE:

Based on the views of participants, we set out responses that stakeholders might pursue, including greater interdisciplinary collaboration; tailored stakeholder engagement activities; empirical studies to understand algorithmic bias; and strategies to modify dominant approaches in AI development, such as the use of participatory methods and increased diversity and inclusion in research teams and in research participant recruitment and selection.

Full text: 1 Collections: 01-international Database: MEDLINE Study type: Prognostic_studies / Qualitative_research Language: En Journal: J Med Ethics Publication year: 2023 Document type: Article Country of affiliation: Australia
