1.
Science ; 385(6714): 1164-1165, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39265030

ABSTRACT

Conversation with a trained chatbot can reduce conspiratorial beliefs.


Subject(s)
Artificial Intelligence, Communication, Humans
2.
iScience ; 27(7): 110201, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39109173

ABSTRACT

Humans, aware of the social costs associated with false accusations, are generally hesitant to accuse others of lying. Our study shows how lie-detection algorithms disrupt this social dynamic. We develop a supervised machine-learning classifier that surpasses human accuracy and conduct a large-scale incentivized experiment manipulating the availability of this lie-detection algorithm. In the absence of algorithmic support, people are reluctant to accuse others of lying, but when the algorithm becomes available, a minority actively seeks its prediction and consistently relies on it for accusations. Although those who request machine predictions are not inherently more prone to accuse, they follow predictions that suggest accusation more willingly than do those who receive such predictions without actively seeking them.
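To make the method concrete, here is a minimal sketch of a supervised text classifier of the general kind the abstract describes, using scikit-learn. The features, model, and data below are illustrative assumptions; the study's actual classifier is not specified in this abstract.

```python
# A minimal, hypothetical sketch of a supervised lie/truth classifier.
# Everything here (features, model, data) is an illustrative assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data: 1 = lie, 0 = truth (entirely made up).
statements = [
    "I spent my whole vacation in Rome.",
    "I have never met her before.",
    "The report was finished on time.",
    "I paid the bill myself.",
]
labels = [1, 0, 0, 1]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(statements, labels)

# The fitted model returns a lie/truth prediction for new statements.
print(model.predict(["I was at home all evening."]))
```

In the experiment, such a prediction would be shown only to participants who actively request it, which is the manipulation the study exploits.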

3.
PNAS Nexus ; 3(6): pgae191, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38864006

ABSTRACT

Generative artificial intelligence (AI) has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the potential impacts of generative AI on (mis)information and three information-intensive domains: work, education, and healthcare. Our goal is to highlight how generative AI could worsen existing inequalities while illuminating how AI may help mitigate pervasive social problems. In the information domain, generative AI can democratize content creation and access but may dramatically expand the production and proliferation of misinformation. In the workplace, it can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning, but may widen the digital divide. In healthcare, it might improve diagnostics and accessibility, but could deepen pre-existing inequalities. In each section, we cover a specific topic, evaluate existing research, identify critical gaps, and recommend research directions, including explicit trade-offs that complicate the derivation of a priori hypotheses. We conclude with a section highlighting the role of policymaking to maximize generative AI's potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We propose several concrete policies that could promote shared prosperity through the advancement of generative AI. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI.

4.
Behav Brain Sci ; 47: e50, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38311444

ABSTRACT

To succeed, we posit that research cartography will require high-throughput natural description to identify unknown unknowns in a particular design space. High-throughput natural description, the systematic collection and annotation of representative corpora of real-world stimuli, faces logistical challenges, but these can be overcome by solutions that are deployed in the later stages of integrative experiment design.

5.
Annu Rev Psychol ; 75: 653-675, 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-37722750

ABSTRACT

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.


Subject(s)
Artificial Intelligence, Morals, Animals, Humans, Intelligence
6.
Nat Hum Behav ; 7(11): 1855-1868, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37985914

ABSTRACT

The ability of humans to create and disseminate culture is often credited as the single most important factor of our success as a species. In this Perspective, we explore the notion of 'machine culture', culture mediated or generated by machines. We argue that intelligent machines simultaneously transform the cultural evolutionary processes of variation, transmission and selection. Recommender algorithms are altering social learning dynamics. Chatbots are forming a new mode of cultural transmission, serving as cultural models. Furthermore, intelligent machines are evolving as contributors in generating cultural traits, from game strategies and visual art to scientific results. We provide a conceptual framework for studying the present and anticipated future impact of machines on cultural evolution, and present a research agenda for the study of machine culture.


Subject(s)
Cultural Evolution, Hominidae, Humans, Animals, Culture, Learning
7.
Behav Brain Sci ; 46: e297, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37789540

ABSTRACT

Puritanism may evolve into a technological variant based on norms of delegation of actions and perceptions to artificial intelligence. Instead of training self-control, people may be expected to cede their agency to self-controlled machines. The cost-benefit balance of this machine puritanism may be less aversive to wealthy individualistic democracies than the old puritanism they have abandoned.


Subject(s)
Artificial Intelligence, Morals, Humans, Affect
8.
PNAS Nexus ; 2(6): pgad179, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37325024

ABSTRACT

Artificial intelligence (AI) can be harnessed to create sophisticated social and moral scoring systems, enabling people and organizations to form judgments of others at scale. However, it also poses significant ethical challenges and is, consequently, the subject of wide debate. As these technologies are developed and governing bodies face regulatory decisions, it is crucial that we understand people's attraction or resistance to AI moral scoring. Across four experiments, we show that the acceptability of moral scoring by AI is related to expectations about the quality of those scores, but that expectations about quality are compromised by people's tendency to see themselves as morally peculiar. We demonstrate that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and resist the introduction of moral scoring by AI for this reason.

9.
Nat Commun ; 14(1): 3108, 2023 05 30.
Article in English | MEDLINE | ID: mdl-37253759

ABSTRACT

With the progress of artificial intelligence and the emergence of global online communities, humans and machines increasingly participate in mixed collectives in which they can help or hinder each other. Human societies have had thousands of years to consolidate the social norms that promote cooperation, but mixed collectives often struggle to articulate the norms that hold when humans coexist with machines. In five studies involving 7917 individuals, we document how people treat machines differently than humans in a stylized society of beneficiaries, helpers, punishers, and trustors. We show that helpers and punishers gain different amounts of trust when they follow norms than when they do not. We also demonstrate that the trust gained by norm-followers is associated with trustors' assessments of the consensual nature of cooperative norms of helping and punishing. Lastly, we establish that, under certain conditions, informing trustors about the norm consensus over helping tends to decrease the differential treatment of machines and of the people interacting with them. These results allow us to anticipate how humans may develop cooperative norms for human-machine collectives, specifically by relying on norms already extant in human-only groups. We also demonstrate that this evolution may be accelerated by making people aware of their emerging consensus.


Subject(s)
Cooperative Behavior, Trust, Humans, Artificial Intelligence, Consensus, Social Norms
10.
MDM Policy Pract ; 7(2): 23814683221113573, 2022.
Article in English | MEDLINE | ID: mdl-35911175

ABSTRACT

Objective. When medical resources are scarce, clinicians must make difficult triage decisions. When these decisions affect public trust and morale, as was the case during the COVID-19 pandemic, experts will benefit from knowing which triage metrics have citizen support.
Design. We conducted an online survey in 20 countries, comparing support for 5 common metrics (prognosis, age, quality of life, past and future contribution as a health care worker) to a benchmark consisting of support for 2 no-triage mechanisms (first-come-first-served and random allocation).
Results. We surveyed nationally representative samples of 1000 citizens in each of Brazil, France, Japan, and the United States, and also self-selected samples from 20 countries (total N = 7599) obtained through a citizen science website (the Moral Machine). We computed the support for each metric by comparing its usability to the usability of the 2 no-triage mechanisms. We further analyzed the polarizing nature of each metric by considering its usability among participants who had a preference for no triage. In all countries, preferences were polarized, with the 2 largest groups preferring either no triage or extensive triage using all metrics. Prognosis was the least controversial metric. There was little support for giving priority to health care workers.
Conclusions. It will be difficult to define triage guidelines that elicit public trust and approval. Given the importance of prognosis in triage protocols, it is reassuring that it is the least controversial metric. Experts will need to prepare strong arguments for other metrics if they wish to preserve public trust and morale during health crises.
Highlights:
- We collected citizen preferences regarding triage decisions about scarce medical resources from 20 countries.
- We find that citizen preferences are universally polarized.
- Citizens either prefer no triage (random allocation or first-come-first-served) or extensive triage using all common triage metrics, with "prognosis" being the least controversial.
- Experts will need to prepare strong arguments to preserve or elicit public trust in triage decisions.
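The support computation in the Results section can be illustrated in a few lines. Below is a minimal Python sketch, assuming "usability" means the share of respondents willing to use a given mechanism; all names and values are hypothetical, not the study's data.

```python
# A minimal sketch of comparing each triage metric's usability to a
# no-triage benchmark. All values below are hypothetical.
usability = {
    "prognosis": 0.62,
    "age": 0.45,
    "quality of life": 0.48,
    "past contribution": 0.40,
    "future contribution": 0.41,
    "first-come-first-served": 0.35,
    "random allocation": 0.33,
}

# Benchmark: mean usability of the two no-triage mechanisms.
benchmark = (usability["first-come-first-served"]
             + usability["random allocation"]) / 2

triage_metrics = ["prognosis", "age", "quality of life",
                  "past contribution", "future contribution"]

# Support for a metric = its usability relative to the benchmark.
for metric in triage_metrics:
    print(f"{metric}: support = {usability[metric] - benchmark:+.2f}")
```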

11.
Proc Natl Acad Sci U S A ; 118(38)2021 09 21.
Article in English | MEDLINE | ID: mdl-34526400

ABSTRACT

How does the public want a COVID-19 vaccine to be allocated? We conducted a conjoint experiment asking 15,536 adults in 13 countries to evaluate 248,576 profiles of potential vaccine recipients who varied randomly on five attributes. Our sample includes diverse countries from all continents. The results suggest that in addition to giving priority to health workers and to those at high risk, the public favors giving priority to a broad range of key workers and to those with lower income. These preferences are similar across respondents of different education levels, incomes, and political ideologies, as well as across most surveyed countries. The public favored COVID-19 vaccines being allocated solely via government programs but was highly polarized in some developed countries on whether taking a vaccine should be mandatory. There is a consensus among the public on many aspects of COVID-19 vaccination, which needs to be taken into account when developing and communicating rollout strategies.


Subject(s)
COVID-19 Vaccines/administration & dosage, COVID-19/prevention & control, Public Health, Public Opinion, Vaccination/psychology, Adult, Health Personnel, Humans, SARS-CoV-2, Surveys and Questionnaires
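A conjoint design of the kind this abstract describes, with recipient profiles randomized on five attributes, can be sketched briefly. The attributes and levels below are illustrative assumptions, not the study's actual design.

```python
# A minimal sketch of randomized conjoint profiles for vaccine
# recipients. The attributes and levels are illustrative assumptions.
import random

ATTRIBUTES = {
    "age": ["25", "45", "65"],
    "occupation": ["health worker", "key worker", "other"],
    "income": ["low", "middle", "high"],
    "covid risk": ["average risk", "high risk"],
    "prior infection": ["yes", "no"],
}

def draw_profile(rng: random.Random) -> dict:
    """Draw one recipient profile with every attribute randomized."""
    return {name: rng.choice(levels) for name, levels in ATTRIBUTES.items()}

rng = random.Random(0)
# Each respondent evaluates pairs of randomly drawn profiles,
# choosing which recipient should receive the vaccine first.
profile_a, profile_b = draw_profile(rng), draw_profile(rng)
print(profile_a)
print(profile_b)
```

Randomizing every attribute independently is what lets a conjoint analysis estimate the causal weight of each attribute on respondents' choices.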
12.
Nat Hum Behav ; 5(6): 679-685, 2021 06.
Article in English | MEDLINE | ID: mdl-34083752

ABSTRACT

As machines powered by artificial intelligence (AI) influence humans' behaviour in ways that are both like and unlike the ways humans influence each other, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioural science, human-computer interaction and AI research. We propose four main social roles through which both humans and machines can influence ethical behaviour. These are: role model, advisor, partner and delegate. When AI agents become influencers (role models or advisors), their corrupting power may not exceed the corrupting power of humans (yet). However, AI agents acting as enablers of unethical behaviour (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction. On the basis of these insights, we outline a research agenda to gain behavioural insights for better AI oversight.


Subject(s)
Artificial Intelligence, Behavior, Morals, Humans, User-Computer Interface
14.
J Exp Psychol Gen ; 150(6): 1081-1094, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33119351

ABSTRACT

Human interactions often involve a choice between acting selfishly (in one's own interest) and acting prosocially (in the interest of others). Fast and slow models of prosociality posit that people intuitively favor one of these choices (the selfish choice in some models, the prosocial choice in others) and need to correct this intuition through deliberation to make the other choice. We present 7 studies that force us to reconsider this longstanding corrective dual-process view. Participants played various economic games in which they had to choose between a prosocial and a selfish option. We used a 2-response paradigm in which participants had to give their first, initial response under time pressure and cognitive load. Next, participants could take all the time they wanted to reflect on the problem and give a final response. This allowed us to identify the intuitively generated response that preceded the final response given after deliberation. Results consistently showed that both prosocial and selfish responses were predominantly made intuitively rather than after deliberate correction. Pace the corrective view, the findings indicate that making prosocial and selfish choices typically relies not on different reasoning modes (intuition vs. deliberation) but rather on different types of intuitions.


Subject(s)
Intuition, Problem Solving, Humans
15.
Trends Cogn Sci ; 24(12): 1019-1027, 2020 12.
Article in English | MEDLINE | ID: mdl-33129719

ABSTRACT

Machines do not 'think fast and slow' in the sense that humans do in dual-process models of cognition. However, the people who create the machines may attempt to emulate or simulate these fast and slow modes of thinking, which will in turn affect the way end users relate to these machines. In this opinion article we consider the complex interplay in the way various stakeholders (engineers, user experience designers, regulators, ethicists, and end users) can be inspired, challenged, or misled by the analogy between the fast and slow thinking of humans and the Fast and Slow Thinking of machines.


Subject(s)
Technology, Thinking, Humans, Machine Learning
19.
Proc Natl Acad Sci U S A ; 117(5): 2332-2337, 2020 02 04.
Article in English | MEDLINE | ID: mdl-31964849

ABSTRACT

When do people find it acceptable to sacrifice one life to save many? Cross-cultural studies have suggested a complex pattern of universals and variations in the way people approach this question, but the data were often based on small samples from a small number of countries outside the Western world. Here we analyze responses to three sacrificial dilemmas by 70,000 participants in 10 languages and 42 countries. In every country, the three dilemmas displayed the same qualitative ordering of sacrifice acceptability, suggesting that this ordering is best explained by basic cognitive processes rather than cultural norms. The quantitative acceptability of each sacrifice, however, showed substantial country-level variation. We show that low relational mobility (where people are more cautious about not alienating their current social partners) is strongly associated with the rejection of sacrifices for the greater good (especially in Eastern countries), which may be explained by the signaling value of this rejection. We make our dataset fully available as a public resource for researchers studying universals and variations in human morality.


Subject(s)
Decision Making/ethics, Morals, Cognition/ethics, Cognition/physiology, Cross-Cultural Comparison, Decision Making/physiology, Ethical Theory, Humans, Social Mobility, Surveys and Questionnaires
20.
Nat Hum Behav ; 4(2): 134-143, 2020 02.
Article in English | MEDLINE | ID: mdl-31659321

ABSTRACT

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human-machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.


Subject(s)
Traffic Accidents, Automation, Automobile Driving, Automobiles, Man-Machine Systems, Safety, Social Perception, Traffic Accidents/legislation & jurisprudence, Adult, Automation/ethics, Automation/legislation & jurisprudence, Automobile Driving/legislation & jurisprudence, Automobiles/ethics, Automobiles/legislation & jurisprudence, Humans, Pedestrians/legislation & jurisprudence, Safety/legislation & jurisprudence