1.
Artif Intell Med ; 149: 102780, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462282

ABSTRACT

The rise of complex AI systems in healthcare and other sectors has led to a growing area of research called Explainable AI (XAI) designed to increase transparency. In this area, quantitative and qualitative studies focus on improving user trust and task performance by providing system- and prediction-level XAI features. We analyze stakeholder engagement events (interviews and workshops) on the use of AI for kidney transplantation. From these we identify themes, which we use to frame a scoping literature review on current XAI features. The stakeholder engagement process lasted over nine months, covering three stakeholder groups' workflows, determining where AI could intervene, and assessing a mock XAI decision support system. Based on the stakeholder engagement, we identify four major themes relevant to designing XAI systems: 1) use of AI predictions, 2) information included in AI predictions, 3) personalization of AI predictions for individual differences, and 4) customization of AI predictions for specific cases. Using these themes, our scoping literature review finds that providing AI predictions before, during, or after decision-making could be beneficial depending on the complexity of the stakeholder's task. Additionally, expert stakeholders like surgeons prefer minimal to no XAI features beyond the AI prediction and uncertainty estimates for easy use cases. However, almost all stakeholders prefer to have optional XAI features to review when needed, especially in hard-to-predict cases. The literature also suggests that providing both system- and prediction-level information is necessary for users to build an appropriate mental model of the system. Although XAI features improve users' trust in the system, human-AI team performance is not always enhanced. Overall, stakeholders prefer to have agency over the XAI interface to control the level of information based on their needs and task complexity.
We conclude with suggestions for future research, especially on customizing XAI features based on preferences and tasks.


Subject(s)
Kidney Transplantation , Surgeons , Humans , Trust , Uncertainty , Workflow
2.
J Exp Psychol Appl ; 29(3): 676-692, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36006713

ABSTRACT

The use of artificial intelligence (AI) to compose music is becoming mainstream. Yet, there is a concern that listeners may have biases against AIs. Here, we test the hypothesis that listeners will like music less if they think it was composed by an AI. In Study 1, participants listened to excerpts of electronic and classical music and rated how much they liked the excerpts and whether they thought they were composed by an AI or human. Participants were more likely to attribute an AI composer to electronic music and liked music less that they thought was composed by an AI. In Study 2, we directly manipulated composer identity by telling participants that the music they heard (electronic music) was composed by an AI or by a human, yet we found no effect of composer identity on liking. We hypothesized that this was due to the "AI-sounding" nature of electronic music. Therefore, in Study 3, we used a set of "human-sounding" classical music excerpts. Here, participants liked the music less when it was purportedly composed by an AI. We conclude with implications of the AI composer bias for understanding perception of AIs in arts and aesthetic processing theories more broadly. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Music , Humans , Artificial Intelligence , Auditory Perception , Emotions , Hearing
3.
Soc Sci Res ; 105: 102723, 2022 07.
Article in English | MEDLINE | ID: mdl-35659049

ABSTRACT

Stable impressions of how good, powerful, and active an organization is may be jointly shared with its employees, yet the impression produced by employees' behavior may be transferred back to the organization. Our first studies show that stable impressions, or sentiments, of organizations (e.g., a library) are fairly similar to those of their employees (e.g., an employee of a library), with organizations viewed as more powerful and morally extreme than their employees. Our principal studies, along with affect control theory simulations, show how the impressions created by an employee's behavior toward a customer (e.g., an employee of a library shouts at a customer) transfer to the employee's organization. Affect control theory simulations predict the impressions of an organization as well as they predict impressions of the individual employee. Regression and classification analyses support impression transfer, with the most transfer occurring for evaluation impressions, and more so for transferring bad impressions than good ones. This research therefore shows how a single behavior by a rank-and-file employee can shape outsiders' impressions of organizations, and the potential for applying affect control theory predictions to impressions of organizations.


Subject(s)
Emotions , Organizations , Attitude , Humans
4.
Hum Factors ; : 187208221100691, 2022 May 21.
Article in English | MEDLINE | ID: mdl-35603703

ABSTRACT

OBJECTIVE: This study manipulates the presence and reliability of AI recommendations for risky decisions to measure the effect on task performance, behavioral consequences of trust, and deviation from a probability matching collaborative decision-making model. BACKGROUND: Although AI decision support improves performance, people tend to underutilize AI recommendations, particularly when outcomes are uncertain. As AI reliability increases, task performance improves, largely due to higher rates of compliance (following action recommendations) and reliance (following no-action recommendations). METHODS: In a between-subjects design, participants were assigned to a high reliability AI, low reliability AI, or a control condition. Participants decided whether to bet that their team would win in a series of basketball games, tying compensation to performance. We evaluated task performance (in accuracy and signal detection terms) and the behavioral consequences of trust (via compliance and reliance). RESULTS: AI recommendations improved task performance, had limited impact on risk-taking behavior, and were undervalued by participants. Accuracy, sensitivity (d'), and reliance increased in the high reliability AI condition, but there was no effect on response bias (c) or compliance. Participant behavior was only consistent with a probability matching model for compliance in the low reliability condition. CONCLUSION: In a pay-off structure that incentivized risk-taking, the primary value of the AI recommendations was in determining when to perform no action (i.e., pass on bets). APPLICATION: In risky contexts, designers need to consider whether action or no-action recommendations will be more influential to design appropriate interventions.
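The sensitivity (d') and response bias (c) reported in the results are the standard signal-detection statistics. As a minimal illustration (not the study's analysis code; the hit and false-alarm rates are hypothetical), they can be computed from a participant's hit and false-alarm rates like this:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and response bias (c)
    from hit and false-alarm rates."""
    z = NormalDist().inv_cdf          # z-transform (inverse standard-normal CDF)
    d_prime = z(hit_rate) - z(fa_rate)        # separation of signal and noise
    c = -0.5 * (z(hit_rate) + z(fa_rate))     # criterion; 0 means unbiased
    return d_prime, c

# Hypothetical participant: hits on 80% of signal trials,
# false alarms on 20% of noise trials -> d' ~ 1.68, c = 0 (unbiased)
d, c = dprime_and_criterion(0.80, 0.20)
```

With symmetric hit and false-alarm rates the criterion comes out exactly zero, which is why reliability can move d' without moving c, as the abstract reports.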

5.
Clim Change ; 164(1): 4, 2021.
Article in English | MEDLINE | ID: mdl-33500596

ABSTRACT

In the era when human activities can fundamentally alter the planetary climate system, a stable climate is a global commons. However, the need to develop the economy to sustain the growing human population poses the Climate Commons Dilemma. Although citizens may need to support policies that forgo their country's economic growth, they may instead be motivated to grow their economy while freeriding on others' efforts to mitigate the ongoing climate change. To examine how to resolve the climate commons dilemma, we constructed a Climate Commons Game (CCG), an experimental analogue of the climate commons dilemma that embeds a simple model of the effects of economic activities on global temperature rise and its eventual adverse effects on the economy. The game includes multiple economic units, and each participant is tasked to manage one economic unit while keeping global temperature rise to a sustainable level. In two experiments, we show that people can manage the climate system and their economies better when they regard the goal of environmentally sustainable economic growth as a singular global goal that all economic units collectively pursue rather than a goal to be achieved by each unit individually. In addition, beliefs that everyone shares knowledge about the climate system help the group coordinate their economic activities better to mitigate global warming in the CCG. However, we also found that the resolution of the climate commons dilemma came at the cost of exacerbating inequality among the economic units in the current constraints of the CCG. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s10584-021-02989-2.
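The causal loop the CCG embeds (economic activity raises temperature; temperature rise damages every economy) can be sketched as a round-based update. The function and all parameter values below are hypothetical, chosen only to show the feedback structure, not the published CCG calibration:

```python
def play_round(economies, temperature, growth=0.05,
               warming_per_output=0.001, damage_per_degree=0.02):
    """One round of a toy climate-commons dynamic: total output of all
    units warms the shared climate, and warming feeds back as a damage
    term on every unit's growth (illustrative parameters)."""
    temperature += warming_per_output * sum(economies)  # shared commons
    damage = damage_per_degree * temperature            # adverse feedback
    economies = [e * (1 + growth - damage) for e in economies]
    return economies, temperature

# Two identical units of size 100 at baseline temperature:
economies, temperature = play_round([100.0, 100.0], 0.0)
```

Because the damage term depends on total output, each unit's best collective move differs from its best individual move, which is the commons structure the experiments manipulate.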

6.
Curr Transplant Rep ; 8(4): 263-271, 2021.
Article in English | MEDLINE | ID: mdl-35059280

ABSTRACT

PURPOSE OF REVIEW: A transdisciplinary systems approach to the design of an artificial intelligence (AI) decision support system can more effectively address the limitations of AI systems. By incorporating stakeholder input early in the process, the final product is more likely to improve decision-making and effectively reduce kidney discard. RECENT FINDINGS: Kidney discard is a complex problem that will require increased coordination between transplant stakeholders. An AI decision support system has significant potential, but there are challenges associated with overfitting, poor explainability, and inadequate trust. A transdisciplinary approach provides a holistic perspective that incorporates expertise from engineering, social science, and transplant healthcare. A systems approach leverages techniques for visualizing the system architecture to support solution design from multiple perspectives. SUMMARY: Developing a systems-based approach to AI decision support involves engaging in a cycle of documenting the system architecture, identifying pain points, developing prototypes, and validating the system. Early efforts have focused on describing process issues to prioritize tasks that would benefit from AI support.

7.
PLoS One ; 15(1): e0228445, 2020.
Article in English | MEDLINE | ID: mdl-31978170

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0184480.].

8.
Data Brief ; 25: 104220, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31367659

ABSTRACT

This article presents the data from two surveys that asked about everyday encounters with artificial intelligence (AI) systems that are perceived to have attributes of mind. In response to specific attribute prompts about an AI, the participants qualitatively described a personally known encounter with an AI. In survey 1 the prompts asked about an AI planning, having memory, controlling resources, or doing something surprising. In survey 2 the prompts asked about an AI experiencing emotion, expressing desires or beliefs, having human-like physical features, or being mistaken for a human. The original responses were culled based on the ratings of multiple coders to eliminate responses that did not adhere to the prompts. This article includes the qualitative responses, coded categories of those qualitative responses, quantitative measures of mind perception, and demographics. For interpretation of these data related to people's emotions, see "Feeling Our Way to Machine Minds: People's Emotions When Perceiving Mind in Artificial Intelligence" (Shank et al., 2019).

9.
J Pers Soc Psychol ; 117(1): 99-123, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30475008

ABSTRACT

Norm talk is verbal communication that explicitly states or implicitly implies a social norm. To investigate its ability to shape cultural dynamics, 2 types of norm talk were examined: injunction, which explicitly states what should be done, and gossip, which implies a norm by stating an action approved or disapproved of by the communicator. In 2 experiments, participants engaged in norm talk in repeated public goods games. Norm talk was found to help sustain cooperation relative to the control condition; immediately after every norm talk opportunity, cooperation spiked, followed by a gradual decline. Despite the macrolevel uniformity in their effects on cooperation, evidence suggests different microlevel mechanisms for the cooperation-enhancing effects of injunction and gossip. A 3rd study confirmed that both injunction and gossip sustain cooperation by making salient the norm of cooperation, but injunction also effects mutual verification of the communicated norm, whereas gossip emphasizes its reputational implications by linking cooperation to status conferral and noncooperation to reputational damage. A 4th experiment provided additional evidence that norm talk was superior to the promise of conditional cooperation in sustaining cooperation. Implications of the findings for cultural dynamics are discussed in terms of how feelings of shared morality, language-based interpersonal communication, and ritualization of norm communication contribute to social regulation. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Communication , Cooperative Behavior , Interpersonal Relations , Social Norms , Culture , Female , Humans , Male , Young Adult
10.
PLoS One ; 12(9): e0184480, 2017.
Article in English | MEDLINE | ID: mdl-28880945

ABSTRACT

Adopting successful climate change mitigation policies requires the public to choose how to balance the sometimes competing goals of managing CO2 emissions and achieving economic growth. It follows that collective action on climate change depends on members of the public being knowledgeable of the causes and economic ramifications of climate change. The existing literature, however, shows that people often struggle to correctly reason about the fundamental accumulation dynamics that drive climate change. Previous research has focused on using analogy to improve people's reasoning about accumulation, which has been met with some success. However, these existing studies have neglected the role economic factors might play in shaping people's decisions in relation to climate change. Here, we introduce a novel iterated decision task in which people attempt to achieve a specific economic goal by interacting with a causal dynamic system in which human economic activities, CO2 emissions, and warming are all causally interrelated. We show that when the causal links between these factors are highlighted, people's ability to achieve the economic goal of the task is enhanced in a way that approaches optimal responding and avoids dangerous levels of warming.
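The accumulation dynamic the abstract says people misjudge is a simple stock-flow relation: atmospheric CO2 rises whenever emissions exceed absorption, even while emissions are falling. A minimal sketch (hypothetical units and numbers, not the paper's task parameters):

```python
def co2_stock(stock, emissions, absorption):
    """Iterate the accumulation identity: each year the CO2 stock
    changes by inflow (emissions) minus outflow (absorption)."""
    history = [stock]
    for e, a in zip(emissions, absorption):
        stock += e - a          # net flow accumulates into the stock
        history.append(stock)
    return history

# Emissions fall from 10 to 6 while absorption stays at 5: the stock
# still rises every year -- the common reasoning error such tasks target.
path = co2_stock(800, [10, 9, 8, 7, 6], [5] * 5)
```

Declining emissions only slow the growth of the stock; the stock itself falls only once emissions drop below absorption.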


Subject(s)
Climate Change , Carbon Dioxide/analysis , Economic Development , Humans , Social Behavior
11.
PLoS One ; 10(3): e0120379, 2015.
Article in English | MEDLINE | ID: mdl-25799355

ABSTRACT

Empirical findings on public goods dilemmas indicate an unresolved dilemma: that increasing size-the number of people in the dilemma-sometimes increases, decreases, or does not influence cooperation. We clarify this dilemma by first classifying public goods dilemma properties that specify individual outcomes as individual properties (e.g., Marginal Per Capita Return) and group outcomes as group properties (e.g., public good multiplier), mathematically showing how only one set of properties can remain constant as the dilemma size increases. Underpinning decision-making regarding individual and group properties, we propose that individuals are motivated by both individual and group preferences based on a theory of collective rationality. We use Van Lange's integrated model of social value orientations to operationalize these preferences as an amalgamation of outcomes for self, outcomes for others, and equality of outcomes. Based on this model, we then predict how the public good's benefit and size, combined with controlling individual versus group properties, produce different levels of cooperation in public goods dilemmas. A two (low vs. high benefit) by three (2-person baseline vs. 5-person holding constant individual properties vs. 5-person holding constant group properties) factorial experiment (group n = 99; participant n = 390) confirms our hypotheses. The results indicate that when holding constant group properties, size decreases cooperation. Yet when holding constant individual properties, size increases cooperation when benefit is low and does not affect cooperation when benefit is high. Using agent-based simulations of individual and group preferences vis-à-vis the integrated model, we fit a weighted simulation model to the empirical data. This fitted model is sufficient to reproduce the empirical results, but only when both individual (self-interest) and group (other-interest and equality) preferences are included. Our research contributes to understanding how people's motivations and behaviors within public goods dilemmas interact with the properties of the dilemma to lead to collective outcomes.
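The claim that only one set of properties can stay constant as size grows follows from the definition of the Marginal Per Capita Return (MPCR = multiplier / group size). A minimal sketch with illustrative numbers (not the experiment's parameters):

```python
def mpcr(multiplier, n):
    """Marginal Per Capita Return: an individual's return on one unit
    contributed to a public good shared by n people."""
    return multiplier / n

# Holding the group property (multiplier = 1.6) constant, the individual
# property shrinks as the dilemma grows from 2 to 5 people:
small_group, large_group = mpcr(1.6, 2), mpcr(1.6, 5)   # 0.8 vs 0.32

# Holding the individual property (MPCR = 0.8) constant instead forces
# the group property to grow: the 5-person multiplier must be 0.8 * 5.
required_multiplier = 0.8 * 5
```

Since the multiplier and the MPCR are tied by division through n, fixing either one as n increases necessarily moves the other, which is the mathematical point the abstract summarizes.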


Subject(s)
Cooperative Behavior , Models, Biological , Humans , Motivation