Results 1 - 5 of 5
1.
J Acad Mark Sci ; 50(6): 1257-1276, 2022.
Article in English | MEDLINE | ID: mdl-35221393

ABSTRACT

Marketers are adopting increasingly sophisticated ways to engage with customers throughout their journeys. We extend prior perspectives on the customer journey by introducing the role of digital signals that consumers emit throughout their activities. We argue that the ability to detect and act on consumer digital signals is a source of competitive advantage for firms. Technology enables firms to collect, interpret, and act on these signals to better manage the customer journey. While some consumers' desire for privacy can restrict the opportunities technology provides marketers, other consumers' desire for personalization can encourage the use of technology to inform marketing efforts. We posit that this difference in consumers' willingness to emit observable signals may hinge on the strength of their relationship with the firm. We next discuss factors that may shift consumer preferences and consequently affect the technology-enabled opportunities available to firms. We conclude with a research agenda that focuses on consumers, firms, and regulators.

2.
Cognition ; 254: 105937, 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39317021

ABSTRACT

The growing prevalence of artificial intelligence (AI) in our lives has brought the impact of AI-based decisions on human judgments to the forefront of academic scholarship and public debate. Despite growth in research on people's receptivity towards AI, little is known about how interacting with AI shapes subsequent interactions among people. We explore this question in the context of unfair decisions made by AI versus humans and focus on the spillover effects of experiencing such decisions on the propensity to act prosocially. Four experiments (combined N = 2425) show that receiving an unfair allocation from an AI (versus a human) actor leads to lower rates of prosocial behavior towards other humans in a subsequent decision, an effect we term AI-induced indifference. In Experiment 1, after receiving an unfair monetary allocation from an AI (versus a human) actor, people were less likely to act prosocially, defined as punishing an unfair human actor at a personal cost in a subsequent, unrelated decision. Experiments 2a and 2b provide evidence for the underlying mechanism: people blame AI actors less than their human counterparts for unfair behavior, decreasing people's desire to subsequently sanction injustice by punishing the unfair actor. In an incentive-compatible design, Experiment 3 shows that AI-induced indifference manifests even when the initial unfair decision and the subsequent interaction occur in different contexts. These findings illustrate the spillover effect of human-AI interaction on human-to-human interactions and suggest that interacting with unfair AI may desensitize people to the bad behavior of others, reducing their likelihood of acting prosocially. Implications for future research are discussed. All preregistrations, data, code, statistical outputs, stimuli (.qsf files), and the Supplementary Appendix are posted on OSF at: https://bit.ly/OSF_unfairAI.

3.
PNAS Nexus ; 3(6): pgae191, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38864006

ABSTRACT

Generative artificial intelligence (AI) has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the potential impacts of generative AI on (mis)information and three information-intensive domains: work, education, and healthcare. Our goal is to highlight how generative AI could worsen existing inequalities while illuminating how AI may help mitigate pervasive social problems. In the information domain, generative AI can democratize content creation and access but may dramatically expand the production and proliferation of misinformation. In the workplace, it can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning, but may widen the digital divide. In healthcare, it might improve diagnostics and accessibility, but could deepen pre-existing inequalities. In each section, we cover a specific topic, evaluate existing research, identify critical gaps, and recommend research directions, including explicit trade-offs that complicate the derivation of a priori hypotheses. We conclude with a section highlighting the role of policymaking to maximize generative AI's potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We propose several concrete policies that could promote shared prosperity through the advancement of generative AI. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI.

4.
Sci Data ; 10(1): 272, 2023 05 11.
Article in English | MEDLINE | ID: mdl-37169799

ABSTRACT

The COVID-19 pandemic has affected all domains of human life, including the economic and social fabric of societies. One of the central strategies for managing public health throughout the pandemic has been persuasive messaging and collective behaviour change. To help scholars better understand the social and moral psychology behind public health behaviour, we present a dataset comprising 51,404 individuals from 69 countries. This dataset was collected for the International Collaboration on Social & Moral Psychology of COVID-19 project (ICSMP COVID-19). This social science survey invited participants around the world to complete a series of moral and psychological measures and report their public health attitudes about COVID-19 during an early phase of the COVID-19 pandemic (between April and June 2020). The survey included seven broad categories of questions: COVID-19 beliefs and compliance behaviours; identity and social attitudes; ideology; health and well-being; moral beliefs and motivation; personality traits; and demographic variables. We report both raw and cleaned data, along with all survey materials, data visualisations, and psychometric evaluations of key variables.
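As a rough illustration of how a cross-national survey dataset like the one described above might be explored, the sketch below loads a cleaned data file and summarises coverage by country. It assumes Python with pandas; the file name (icsmp_covid19_cleaned.csv) and column names (country, compliance, moral_identity) are hypothetical placeholders, not the dataset's published schema.

# Exploratory sketch for a cross-national survey dataset such as ICSMP COVID-19.
# The file name and column names below are illustrative assumptions, not the
# dataset's published schema.
import pandas as pd

df = pd.read_csv("icsmp_covid19_cleaned.csv")

# Basic coverage checks: number of respondents and countries represented.
print(f"Respondents: {len(df)}")
print(f"Countries: {df['country'].nunique()}")

# Per-country sample sizes and means of two illustrative measures.
summary = (
    df.groupby("country")
      .agg(n=("country", "size"),
           mean_compliance=("compliance", "mean"),
           mean_moral_identity=("moral_identity", "mean"))
      .sort_values("n", ascending=False)
)
print(summary.head(10))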


Subjects
COVID-19, Humans, Attitude, COVID-19/psychology, Morals, Pandemics, Surveys and Questionnaires, Social Change, Socioeconomic Factors
5.
Nat Hum Behav ; 5(12): 1636-1642, 2021 12.
Article in English | MEDLINE | ID: mdl-34183800

ABSTRACT

Medical artificial intelligence is cost-effective and scalable and often outperforms human providers, yet people are reluctant to use it. We show that resistance to the utilization of medical artificial intelligence is driven by both the subjective difficulty of understanding algorithms (the perception that they are a 'black box') and by an illusory subjective understanding of human medical decision-making. In five pre-registered experiments (1-3B: N = 2,699), we find that people exhibit an illusory understanding of human medical decision-making (study 1). This leads people to believe they better understand decisions made by human than algorithmic healthcare providers (studies 2A,B), which makes them more reluctant to utilize algorithmic than human providers (studies 3A,B). Fortunately, brief interventions that increase subjective understanding of algorithmic decision processes increase willingness to utilize algorithmic healthcare providers (studies 3A,B). A sixth study on Google Ads for an algorithmic skin cancer detection app finds that the effectiveness of such interventions generalizes to field settings (study 4: N = 14,013).


Subjects
Artificial Intelligence, Clinical Decision-Making, Delivery of Health Care, Adult, Algorithms, Female, Humans, Male, Middle Aged