Results 1 - 20 of 59
1.
Behav Brain Sci ; 47: e50, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38311444

ABSTRACT

We posit that, to succeed, research cartography will require high-throughput natural description to identify unknown unknowns in a particular design space. High-throughput natural description, the systematic collection and annotation of representative corpora of real-world stimuli, faces logistical challenges, but these can be overcome by solutions deployed in the later stages of integrative experiment design.

2.
Annu Rev Psychol ; 75: 653-675, 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-37722750

ABSTRACT

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.


Subjects
Artificial Intelligence, Morals, Animals, Humans, Intelligence
3.
Nat Hum Behav ; 7(11): 1855-1868, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37985914

ABSTRACT

The ability of humans to create and disseminate culture is often credited as the single most important factor of our success as a species. In this Perspective, we explore the notion of 'machine culture', culture mediated or generated by machines. We argue that intelligent machines simultaneously transform the cultural evolutionary processes of variation, transmission and selection. Recommender algorithms are altering social learning dynamics. Chatbots are forming a new mode of cultural transmission, serving as cultural models. Furthermore, intelligent machines are evolving as contributors in generating cultural traits, from game strategies and visual art to scientific results. We provide a conceptual framework for studying the present and anticipated future impact of machines on cultural evolution, and present a research agenda for the study of machine culture.


Subjects
Cultural Evolution, Hominidae, Humans, Animals, Culture, Learning
4.
Behav Brain Sci ; 46: e297, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37789540

ABSTRACT

Puritanism may evolve into a technological variant based on norms of delegation of actions and perceptions to artificial intelligence. Instead of training self-control, people may be expected to cede their agency to self-controlled machines. The cost-benefit balance of this machine puritanism may be less aversive to wealthy individualistic democracies than the old puritanism they have abandoned.


Subjects
Artificial Intelligence, Morals, Humans, Affect
5.
PNAS Nexus ; 2(6): pgad179, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37325024

ABSTRACT

Artificial intelligence (AI) can be harnessed to create sophisticated social and moral scoring systems, enabling people and organizations to form judgments of others at scale. However, it also poses significant ethical challenges and is, as a result, the subject of wide debate. As these technologies are developed and governing bodies face regulatory decisions, it is crucial that we understand people's attraction or resistance to AI moral scoring. Across four experiments, we show that the acceptability of moral scoring by AI is related to expectations about the quality of those scores, but that expectations about quality are compromised by people's tendency to see themselves as morally peculiar. We demonstrate that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and for this reason resist the introduction of moral scoring by AI.

6.
Nat Commun ; 14(1): 3108, 2023 05 30.
Article in English | MEDLINE | ID: mdl-37253759

ABSTRACT

With the progress of artificial intelligence and the emergence of global online communities, humans and machines increasingly participate in mixed collectives in which they can help or hinder each other. Human societies have had thousands of years to consolidate the social norms that promote cooperation, but mixed collectives often struggle to articulate the norms that hold when humans coexist with machines. In five studies involving 7,917 individuals, we document how people treat machines differently from humans in a stylized society of beneficiaries, helpers, punishers, and trustors. We show that helpers and punishers gain different amounts of trust when they follow norms than when they do not. We also demonstrate that the trust gained by norm-followers is associated with trustors' assessments of the consensual nature of cooperative norms over helping and punishing. Lastly, we establish that, under certain conditions, informing trustors about the norm consensus over helping tends to decrease the differential treatment of machines and of the people interacting with them. These results allow us to anticipate how humans may develop cooperative norms for human-machine collectives, specifically by relying on norms already extant in human-only groups. We also demonstrate that this evolution may be accelerated by making people aware of their emerging consensus.


Subjects
Cooperative Behavior, Trust, Humans, Artificial Intelligence, Consensus, Social Norms
7.
MDM Policy Pract ; 7(2): 23814683221113573, 2022.
Article in English | MEDLINE | ID: mdl-35911175

ABSTRACT

Objective. When medical resources are scarce, clinicians must make difficult triage decisions. When these decisions affect public trust and morale, as was the case during the COVID-19 pandemic, experts will benefit from knowing which triage metrics have citizen support. Design. We conducted an online survey in 20 countries, comparing support for 5 common metrics (prognosis, age, quality of life, past and future contribution as a healthcare worker) to a benchmark consisting of support for 2 no-triage mechanisms (first-come-first-served and random allocation). Results. We surveyed nationally representative samples of 1000 citizens in each of Brazil, France, Japan, and the United States and also self-selected samples from 20 countries (total N = 7599) obtained through a citizen science website (the Moral Machine). We computed the support for each metric by comparing its usability to the usability of the 2 no-triage mechanisms. We further analyzed the polarizing nature of each metric by considering its usability among participants who had a preference for no triage. In all countries, preferences were polarized, with the 2 largest groups preferring either no triage or extensive triage using all metrics. Prognosis was the least controversial metric. There was little support for giving priority to healthcare workers. Conclusions. It will be difficult to define triage guidelines that elicit public trust and approval. Given the importance of prognosis in triage protocols, it is reassuring that it is the least controversial metric. Experts will need to prepare strong arguments for other metrics if they wish to preserve public trust and morale during health crises.
Highlights: We collected citizen preferences regarding triage decisions about scarce medical resources from 20 countries. We find that citizen preferences are universally polarized. Citizens either prefer no triage (random allocation or first-come-first-served) or extensive triage using all common triage metrics, with "prognosis" being the least controversial. Experts will need to prepare strong arguments to preserve or elicit public trust in triage decisions.

8.
Proc Natl Acad Sci U S A ; 118(38)2021 09 21.
Article in English | MEDLINE | ID: mdl-34526400

ABSTRACT

How does the public want a COVID-19 vaccine to be allocated? We conducted a conjoint experiment asking 15,536 adults in 13 countries to evaluate 248,576 profiles of potential vaccine recipients who varied randomly on five attributes. Our sample includes diverse countries from all continents. The results suggest that in addition to giving priority to health workers and to those at high risk, the public favors giving priority to a broad range of key workers and to those with lower income. These preferences are similar across respondents of different education levels, incomes, and political ideologies, as well as across most surveyed countries. The public favored COVID-19 vaccines being allocated solely via government programs but was highly polarized in some developed countries on whether taking a vaccine should be mandatory. There is a consensus among the public on many aspects of COVID-19 vaccination, and this consensus needs to be taken into account when developing and communicating rollout strategies.


Subjects
COVID-19 Vaccines/administration & dosage, COVID-19/prevention & control, Public Health, Public Opinion, Vaccination/psychology, Adult, Health Personnel, Humans, SARS-CoV-2, Surveys and Questionnaires
9.
Nat Hum Behav ; 5(6): 679-685, 2021 06.
Article in English | MEDLINE | ID: mdl-34083752

ABSTRACT

As machines powered by artificial intelligence (AI) influence humans' behaviour in ways that are both like and unlike the ways humans influence each other, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioural science, human-computer interaction and AI research. We propose four main social roles through which both humans and machines can influence ethical behaviour. These are: role model, advisor, partner and delegate. When AI agents become influencers (role models or advisors), their corrupting power may not exceed the corrupting power of humans (yet). However, AI agents acting as enablers of unethical behaviour (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction. On the basis of these insights, we outline a research agenda to gain behavioural insights for better AI oversight.


Subjects
Artificial Intelligence, Behavior, Morals, Humans, User-Computer Interface
11.
J Exp Psychol Gen ; 150(6): 1081-1094, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33119351

ABSTRACT

Human interactions often involve a choice between acting selfishly (in one's own interest) and acting prosocially (in the interest of others). Fast and slow models of prosociality posit that people intuitively favor one of these choices (the selfish choice in some models, the prosocial choice in other models) and need to correct this intuition through deliberation to make the other choice. We present 7 studies that force us to reconsider this longstanding corrective dual-process view. Participants played various economic games in which they had to choose between a prosocial and a selfish option. We used a 2-response paradigm in which participants had to give their first, initial response under time pressure and cognitive load. Next, participants could take all the time they wanted to reflect on the problem and give a final response. This allowed us to identify the intuitively generated response that preceded the final response given after deliberation. Results consistently showed that both prosocial and selfish responses were predominantly made intuitively rather than after deliberate correction. Pace the deliberate correction view, the findings indicate that making prosocial and selfish choices typically relies not on different types of reasoning modes (intuition vs. deliberation) but rather on different types of intuitions. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subjects
Intuition, Problem Solving, Humans
12.
Trends Cogn Sci ; 24(12): 1019-1027, 2020 12.
Article in English | MEDLINE | ID: mdl-33129719

ABSTRACT

Machines do not 'think fast and slow' in the sense that humans do in dual-process models of cognition. However, the people who create the machines may attempt to emulate or simulate these fast and slow modes of thinking, which will in turn affect the way end users relate to these machines. In this opinion article we consider the complex interplay in the way various stakeholders (engineers, user experience designers, regulators, ethicists, and end users) can be inspired, challenged, or misled by the analogy between the fast and slow thinking of humans and the Fast and Slow Thinking of machines.


Subjects
Technology, Thinking, Humans, Machine Learning
16.
Proc Natl Acad Sci U S A ; 117(5): 2332-2337, 2020 02 04.
Article in English | MEDLINE | ID: mdl-31964849

ABSTRACT

When do people find it acceptable to sacrifice one life to save many? Cross-cultural studies suggested a complex pattern of universals and variations in the way people approach this question, but data were often based on small samples from a small number of countries outside of the Western world. Here we analyze responses to three sacrificial dilemmas by 70,000 participants in 10 languages and 42 countries. In every country, the three dilemmas displayed the same qualitative ordering of sacrifice acceptability, suggesting that this ordering is best explained by basic cognitive processes rather than cultural norms. The quantitative acceptability of each sacrifice, however, showed substantial country-level variations. We show that low relational mobility (where people are more cautious about not alienating their current social partners) is strongly associated with the rejection of sacrifices for the greater good (especially for Eastern countries), which may be explained by the signaling value of this rejection. We make our dataset fully available as a public resource for researchers studying universals and variations in human morality.


Subjects
Decision Making/ethics, Morals, Cognition/ethics, Cognition/physiology, Cross-Cultural Comparison, Decision Making/physiology, Ethical Theory, Humans, Social Mobility, Surveys and Questionnaires
17.
Nat Hum Behav ; 4(2): 134-143, 2020 02.
Article in English | MEDLINE | ID: mdl-31659321

ABSTRACT

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human-machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.


Subjects
Accidents, Traffic, Automation, Automobile Driving, Automobiles, Man-Machine Systems, Safety, Social Perception, Accidents, Traffic/legislation & jurisprudence, Adult, Automation/ethics, Automation/legislation & jurisprudence, Automobile Driving/legislation & jurisprudence, Automobiles/ethics, Automobiles/legislation & jurisprudence, Humans, Pedestrians/legislation & jurisprudence, Safety/legislation & jurisprudence
18.
Nat Hum Behav ; 3(5): 446-452, 2019 05.
Article in English | MEDLINE | ID: mdl-30936426

ABSTRACT

Bows and arrows, houses and kayaks are just a few examples of the highly optimized tools that humans have produced and used to colonize new environments [1,2]. Because there is much evidence that humans' cognitive abilities are unparalleled [3,4], many believe that such technologies resulted from our superior causal reasoning abilities [5-7]. However, others have stressed that the high dimensionality of human technologies makes them very difficult to understand causally [8]. Instead, they argue that optimized technologies emerge through the retention of small improvements across generations without requiring understanding of how these technologies work [1,9]. Here we show that a physical artefact becomes progressively optimized across generations of social learners in the absence of explicit causal understanding. Moreover, we find that the transmission of causal models across generations has no noticeable effect on the pace of cultural evolution. The reason is that participants do not spontaneously create multidimensional causal theories but, instead, mainly produce simplistic models related to a salient dimension. Finally, we show that the transmission of these inaccurate theories constrains learners' exploration and has downstream effects on their understanding. These results indicate that complex technologies need not result from enhanced causal reasoning but, instead, can emerge from the accumulation of improvements made across generations.


Subjects
Comprehension/physiology, Cultural Evolution, Problem Solving/physiology, Psychomotor Performance/physiology, Social Learning, Technology, Adolescent, Adult, Female, Humans, Male, Young Adult
19.
Nature ; 568(7753): 477-486, 2019 04.
Article in English | MEDLINE | ID: mdl-31019318

ABSTRACT

Machines powered by artificial intelligence increasingly mediate our social, cultural, economic and political interactions. Understanding the behaviour of artificial intelligence systems is essential to our ability to control their actions, reap their benefits and minimize their harms. Here we argue that this necessitates a broad scientific research agenda to study machine behaviour that incorporates and expands upon the discipline of computer science and includes insights from across the sciences. We first outline a set of questions that are fundamental to this emerging field and then explore the technical, legal and institutional constraints on the study of machine behaviour.


Subjects
Artificial Intelligence, Artificial Intelligence/legislation & jurisprudence, Artificial Intelligence/trends, Humans, Motivation, Robotics
20.
Nature ; 563(7729): 59-64, 2018 11.
Article in English | MEDLINE | ID: mdl-30356211

ABSTRACT

With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents' demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.


Subjects
Accidents, Traffic, Artificial Intelligence/ethics, Harm Reduction, Internet, Morals, Motor Vehicles, Public Opinion, Robotics/ethics, Data Collection, Decision Making, Female, Humans, Internationality, Male, Motor Vehicles/ethics, Pedestrians, Robotics/methods, Translating