The consequences of AI training on human decision-making.
Treiman, Lauren S; Ho, Chien-Ju; Kool, Wouter.
Affiliation
  • Treiman LS; Division of Computational & Data Sciences, Washington University in St. Louis, St. Louis, MO 63130.
  • Ho CJ; Division of Computational & Data Sciences, Washington University in St. Louis, St. Louis, MO 63130.
  • Kool W; Division of Computer Science & Engineering, Washington University in St. Louis, St. Louis, MO 63130.
Proc Natl Acad Sci U S A ; 121(33): e2408731121, 2024 Aug 13.
Article in En | MEDLINE | ID: mdl-39106305
ABSTRACT
AI is now an integral part of everyday decision-making, assisting us in both routine and high-stakes choices. These AI models often learn from human behavior, assuming this training data is unbiased. However, we report five studies showing that people change their behavior to instill desired routines into AI, indicating this assumption is invalid. To demonstrate this behavioral shift, we recruited participants to play the ultimatum game, in which they decided whether to accept proposals of monetary splits made by either other human participants or AI. Some participants were informed their choices would be used to train an AI proposer, while others did not receive this information. Across five experiments, we found that people modified their behavior to train AI to make fair proposals, regardless of whether they could directly benefit from the AI training. After completing this task once, participants were invited to complete it again but were told their responses would not be used for AI training. People who had previously trained AI persisted in this behavioral shift, indicating that the new behavioral routine had become habitual. This work demonstrates that using human behavior as training data has broader consequences than previously recognized: it can cause AI to perpetuate human biases and lead people to form habits that deviate from how they would normally act. This work therefore underscores a problem for AI algorithms that aim to learn unbiased representations of human preferences.
Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Artificial Intelligence / Decision Making Limits: Adult / Female / Humans / Male Language: En Journal: Proc Natl Acad Sci U S A Year: 2024 Type: Article