Risk of Injury in Moral Dilemmas With Autonomous Vehicles.
de Melo, Celso M; Marsella, Stacy; Gratch, Jonathan.
Affiliation
  • de Melo CM; CCDC US Army Research Laboratory, Playa Vista, CA, United States.
  • Marsella S; College of Computer and Information Science, Northeastern University, Boston, MA, United States.
  • Gratch J; Institute for Creative Technologies, University of Southern California, Playa Vista, CA, United States.
Front Robot AI; 7: 572529, 2020.
Article in En | MEDLINE | ID: mdl-34212006
ABSTRACT
As autonomous machines, such as automated vehicles (AVs) and robots, become pervasive in society, they will inevitably face moral dilemmas where they must make decisions that risk injuring humans. However, prior research has framed these dilemmas in starkly simple terms, i.e., casting decisions as matters of life and death and neglecting how the risk of injury to the involved parties influences the outcome. Here, we focus on this gap and present experimental work that systematically studies the effect of risk of injury on the decisions people make in these dilemmas. In four experiments, participants were asked to program their AVs to either save five pedestrians, which we refer to as the utilitarian choice, or save the driver, which we refer to as the nonutilitarian choice. The results indicate that most participants made the utilitarian choice but that this choice was moderated in important ways by perceived risk to the driver and to the pedestrians. As a second contribution, we demonstrate the value of formulating AV moral dilemmas in a game-theoretic framework that considers the possible influence of others' behavior. In the fourth experiment, we show that participants were more (less) likely to make the utilitarian choice the more utilitarian (nonutilitarian) other drivers behaved; furthermore, contrary to the game-theoretic prediction that decision-makers inevitably converge to nonutilitarianism, we found significant evidence of utilitarianism. We discuss theoretical implications for our understanding of human decision-making in moral dilemmas and practical guidelines for the design of autonomous machines that solve these dilemmas while, at the same time, being likely to be adopted in practice.
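The game-theoretic prediction referenced in the abstract can be made concrete with a minimal sketch. Assuming a simplified payoff structure in which each driver alone bears the cost of programming a utilitarian (self-sacrificing) AV while the benefit of others' utilitarian programming accrues to everyone as pedestrians, the nonutilitarian choice strictly dominates, and myopic best-response dynamics converge to all-nonutilitarian behavior. All names and parameter values below are illustrative assumptions, not the paper's actual model.

# Illustrative sketch only: assumed payoffs, not the study's experimental design.
# Drivers repeatedly choose how to program their AV:
#   "U" = utilitarian (save the five pedestrians), "N" = nonutilitarian (save the driver).
import random

POP = 1000            # population size (assumption)
ROUNDS = 50           # adaptation rounds (assumption)
DRIVER_COST_U = 1.0   # personal risk cost of a utilitarian program (assumption)
PED_BENEFIT = 0.8     # benefit, as a pedestrian, per share of utilitarian others (assumption)

def payoff(own_choice, frac_utilitarian):
    """Expected payoff: pedestrian benefit from others' choices minus own driver cost."""
    own_cost = DRIVER_COST_U if own_choice == "U" else 0.0
    return PED_BENEFIT * frac_utilitarian - own_cost

choices = [random.choice(["U", "N"]) for _ in range(POP)]
for _ in range(ROUNDS):
    frac_u = choices.count("U") / POP
    # Myopic best response: every driver adopts whichever choice pays more this round.
    best = "U" if payoff("U", frac_u) > payoff("N", frac_u) else "N"
    choices = [best] * POP

print("Final share utilitarian:", choices.count("U") / POP)
# Because DRIVER_COST_U > 0 and the pedestrian benefit is identical under either
# own choice, "N" strictly dominates and the population converges to 0% utilitarian,
# the baseline prediction the abstract contrasts with the observed persistence of
# utilitarian choices among participants.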
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Study type: Etiology_studies / Guideline / Prognostic_studies / Risk_factors_studies Language: En Year of publication: 2020 Document type: Article
