Adversarial Dynamics in Centralized Versus Decentralized Intelligent Systems.
Brinkmann, Levin; Cebrian, Manuel; Pescetelli, Niccolò.
Affiliation
  • Brinkmann L; Center for Humans and Machines, Max Planck Institute for Human Development.
  • Cebrian M; Department of Statistics, Universidad Carlos III de Madrid.
  • Pescetelli N; UC3M-Santander Big Data Institute, Universidad Carlos III de Madrid.
Top Cogn Sci; 2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37902444
ABSTRACT
Artificial intelligence (AI) is often used to predict human behavior, potentially limiting individuals' and collectives' freedom to act. AI's most controversial and contested applications range from targeted advertisements to crime prevention, including the suppression of civil disorder. Scholars and civil society watchdogs are discussing the oppressive dangers of AI in the hands of centralized institutions, such as governments or private corporations. Some suggest that AI gives governments asymmetrical power over their citizens. Civil protests, on the other hand, often rely on distributed networks of activists without centralized leadership or planning. Such protests thus create an adversarial tension between centralized and decentralized intelligence, raising the question of how distributed human networks can collectively adapt to and outperform a hostile centralized AI that tries to anticipate and control their activities. This paper leverages multi-agent reinforcement learning to simulate the dynamics of a hybrid human-machine society. We ask how decentralized intelligent agents can collectively adapt when competing with a centralized predictive algorithm, where prediction serves to suppress coordination. In particular, we investigate an adversarial game between a collective of individual learners and a central predictive algorithm, each trained through deep Q-learning. We compare different predictive architectures and identify conditions under which the adversarial nature of this dynamic pushes each intelligence to increase its behavioral complexity in order to outperform its counterpart. We further show that a shared predictive algorithm drives decentralized agents to align their behavior. This work sheds light on the totalitarian danger posed by AI and provides evidence that decentrally organized humans can overcome its risks by developing increasingly complex coordination strategies.
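To make the setup concrete, the sketch below instantiates a much-simplified version of the adversarial game the abstract describes: a population of decentralized learners repeatedly chooses a coordination target while a central predictor tries to anticipate, and thereby suppress, the majority choice. This is not the authors' implementation; it replaces the paper's deep Q-networks with stateless tabular Q-learning, and every agent count, reward rule, and parameter is an illustrative assumption.

```python
# Minimal sketch of an adversarial game between decentralized learners and a
# central predictor, under strong simplifying assumptions: stateless tabular
# Q-learning (not the deep Q-networks used in the paper) and a shared discrete
# action space of coordination targets. All names and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 20      # decentralized learners
N_ACTIONS = 5      # possible coordination targets (e.g., locations)
EPS = 0.1          # epsilon-greedy exploration
ALPHA = 0.1        # learning rate
N_ROUNDS = 5000

# Action values: one row per decentralized agent, plus one for the central predictor.
q_agents = np.zeros((N_AGENTS, N_ACTIONS))
q_predictor = np.zeros(N_ACTIONS)

def eps_greedy(q, eps):
    """Pick argmax of q with probability 1 - eps, otherwise a random action."""
    if rng.random() < eps:
        return int(rng.integers(len(q)))
    return int(np.argmax(q))

for t in range(N_ROUNDS):
    # Each decentralized agent independently chooses a target.
    choices = np.array([eps_greedy(q_agents[i], EPS) for i in range(N_AGENTS)])
    # The central predictor tries to anticipate (and suppress) the majority target.
    prediction = eps_greedy(q_predictor, EPS)

    counts = np.bincount(choices, minlength=N_ACTIONS)
    majority = int(np.argmax(counts))

    # Agents are rewarded for coordinating on a target the predictor missed;
    # the predictor is rewarded for anticipating the majority.
    predictor_reward = 1.0 if prediction == majority else 0.0
    agent_rewards = np.where(
        (choices == majority) & (majority != prediction), 1.0, 0.0
    )

    # Stateless (bandit-style) Q-learning updates for both sides.
    for i in range(N_AGENTS):
        a = choices[i]
        q_agents[i, a] += ALPHA * (agent_rewards[i] - q_agents[i, a])
    q_predictor[prediction] += ALPHA * (predictor_reward - q_predictor[prediction])

print("Predictor action values:", np.round(q_predictor, 2))
print("Mean agent success in final round:", agent_rewards.mean())
```

Under these assumptions, neither side has a stable best response: whenever the predictor locks onto the agents' favored target, the agents are pushed to coordinate elsewhere, which is the kind of mutual escalation in behavioral complexity the abstract refers to.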
Full text: 1 Database: MEDLINE Language: English Journal: Top Cogn Sci Publication year: 2023 Document type: Article