Results 1 - 15 of 15
1.
Front Psychol ; 14: 1129369, 2023.
Article in English | MEDLINE | ID: mdl-37408965

ABSTRACT

The growing concern about the risk and safety of autonomous vehicles (AVs) has made it vital to understand driver trust and behavior when operating AVs. While research has uncovered human factors and design issues based on individual driver performance, there remains a lack of insight into how trust in automation evolves in groups of people who face risk and uncertainty while traveling in AVs. To this end, we conducted a naturalistic experiment with groups of participants who were encouraged to engage in conversation while riding a Tesla Model X on campus roads. This methodology was uniquely suited to uncovering how trust develops through naturalistic group interaction in a risky driving context. Conversations were analyzed, revealing several themes pertaining to trust in automation: (1) collective risk perception, (2) experimenting with automation, (3) group sense-making, (4) human-automation interaction issues, and (5) benefits of automation. Our findings highlight the untested and experimental nature of AVs and confirm serious concerns about the safety and readiness of this technology for on-road use. The process of determining appropriate trust and reliance in AVs will therefore be essential for drivers and passengers to ensure the safe use of this experimental and continuously changing technology. By revealing insights into social group-vehicle interaction, our results speak to the potential dangers and ethical challenges of AVs and provide theoretical insights into group trust processes with advanced technology.

2.
Sci Rep ; 13(1): 78, 2023 01 03.
Article in English | MEDLINE | ID: mdl-36596816

ABSTRACT

While some theoretical perspectives imply that the context of a virtual training should be customized to match the intended context where those skills would ultimately be applied, others suggest this might not be necessary for learning. It is important to determine whether manipulating context matters for performance in training applications, because customized virtual training systems made for specific use cases are more costly than generic "off-the-shelf" ones designed for a broader set of users. Accordingly, we report a study in which military cadets used a virtual platform to practice their negotiation skills and were randomly assigned to one of two virtual context conditions: military versus civilian. Of the 28 measures capturing performance in the negotiation, only one showed a significant result: cadets in the civilian condition politely asked the agent to make an offer significantly more often than those in the military condition. These results imply that, for this interpersonal skills application and perhaps ones like it, virtual context may matter very little for performance during social skills training, and that commercial systems may yield real benefits in military scenarios with little-to-no modification.


Subjects
Learning, Military Personnel, Social Skills, User-Computer Interface, Military Personnel/education, Military Personnel/psychology, Humans, Random Allocation
3.
Proc Natl Acad Sci U S A ; 119(6)2022 02 08.
Article in English | MEDLINE | ID: mdl-35131848

ABSTRACT

Across 11 studies involving six countries from four continents (n = 3,285), we extend insights from field investigations in conflict zones to offline and online surveys to show that personal spiritual formidability (a person's conviction and immaterial resources to fight: values, strength of beliefs, character) is positively associated with the will to fight and sacrifice for others. The physical formidability of groups in conflict has long been promoted as the primary factor in human decisions to fight or flee in times of conflict. Here, studies in Spain, Iraq, Lebanon, Palestine, and Morocco reveal that personal spiritual formidability, a construct distinct from religiosity, is more strongly associated with the willingness to fight and make costly self-sacrifices for the group than physical formidability is. A follow-on study among cadets of the US Air Force Academy further indicates that this effect is mediated by stronger loyalty to the group, a finding replicated in a separate study with a European sample. The results demonstrate that personal spiritual formidability is a primary determinant of the will to fight across cultures, and this individual-level factor, propelled by loyal bonds made with others, disposes citizens and combatants to fight at great personal risk.


Subjects
Negotiating/psychology, Social Perception/psychology, Adolescent, Adult, Aged, Cross-Cultural Comparison, Female, Humans, Male, Middle Aged, Personnel Loyalty, Religion, Surveys and Questionnaires, Young Adult
4.
Sensors (Basel) ; 22(3)2022 Feb 08.
Article in English | MEDLINE | ID: mdl-35162032

ABSTRACT

To understand how to improve interactions with dog-like robots, we evaluated the effects of "dog-like" framing and physical appearance on interaction, hypothesizing multiple interactive benefits of each. We assessed whether framing Aibo as a puppy (i.e., in need of development) versus simply a robot would result in more positive responses and interactions. We also predicted that adding fur to Aibo would make it appear more dog-like, likable, and interactive. Twenty-nine participants engaged with Aibo in a 2 × 2 (framing × appearance) design by issuing commands to the robot. Aibo's and participants' behaviors were coded per second and evaluated via an analysis of commands issued, an analysis of command blocks (i.e., chains of commands), and a T-pattern analysis of participant behavior. Participants were more likely to issue the "Come Here" command than other types of commands. When Aibo was framed as a puppy, participants used its dog name more often, praised it more, and exhibited more unique, interactive, and complex behavior with it. Participants smiled and laughed the most with Aibo framed as a puppy without fur. Across conditions, after interacting with Aibo, participants felt it was more trustworthy, intelligent, warm, and connected than at their initial meeting. This study shows the benefits of introducing a social robotic agent with a particular frame and an emphasis on realism (i.e., introducing the robot dog as a puppy) to encourage more interactive engagement.
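A hypothetical sketch of the kind of command-frequency comparison described above: the condition labels match the 2 × 2 design, but the counts and command categories are invented for illustration and are not the study's data.

```python
# Hypothetical command-count comparison across the 2 x 2
# (framing x appearance) conditions; all numbers are invented.
from scipy.stats import chi2_contingency

# Rows: puppy+fur, puppy+no fur, robot+fur, robot+no fur
# Columns: "Come Here", "Sit", "Shake", other commands
counts = [
    [34, 12, 9, 15],
    [41, 10, 7, 12],
    [28, 14, 11, 17],
    [30, 13, 8, 19],
]

# Test whether the command-type distribution depends on condition.
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```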


Subjects
Robotics, Animals, Dogs, Emotions, Friends, Humans
5.
Front Psychol ; 12: 604977, 2021.
Article in English | MEDLINE | ID: mdl-34737716

ABSTRACT

With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct that has sparked a multitude of measures and approaches to study and understand it. This comprehensive narrative review addresses known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work serves as a reference guide for researchers, listing available TiA measurement methods along with the model-derived constructs they capture, including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.

6.
Front Psychol ; 12: 625713, 2021.
Article in English | MEDLINE | ID: mdl-34135804

ABSTRACT

The anticipated social capabilities of robots may allow them to serve in authority roles as part of human-machine teams. To date, it is unclear if, and to what extent, human team members will comply with requests from their robotic teammates, and how such compliance compares to requests from human teammates. This research examined how the human-likeness and physical embodiment of a robot affect compliance with a robot's request to perseverate, using a novel task paradigm. Across two studies, participants performed a visual search task while receiving ambiguous performance feedback. Compliance was evaluated when the participant asked to stop the task and the coach repeatedly urged the participant to keep practicing. In the first study, the coach was either physically co-located with the participant or located remotely via live video. Coach type varied in human-likeness and included either a real human (confederate), a Nao robot, or a modified Roomba robot. The second study expanded on the first by including a Baxter robot as a coach and replicated the findings in a different sample population with a strict chain-of-command culture. Results from both studies showed that participants complied with a robot's requests for up to 11 min, although compliance was lower than with a human coach, and embodiment and human-likeness had only weak effects on compliance.

7.
Front Robot AI ; 8: 772141, 2021.
Article in English | MEDLINE | ID: mdl-35155588

ABSTRACT

The field of human-robot interaction (HRI) research is multidisciplinary and requires researchers to understand diverse fields, including computer science, engineering, informatics, philosophy, and psychology, among others. However, it is hard to be an expert in everything. To help HRI researchers develop methodological skills, especially in areas that are relatively new to them, we conducted a virtual workshop, Workshop Your Study Design (WYSD), at the 2021 International Conference on HRI. In this workshop, we grouped participants with mentors who are experts in areas like real-world studies, empirical lab studies, questionnaire design, interviews, participatory design, and statistics. During and after the workshop, participants discussed their proposed study methods, obtained feedback, and improved their work accordingly. In this paper, we present 1) workshop attendees' feedback about the workshop and 2) lessons that the participants learned during their discussions with mentors. Participants' responses about the workshop were positive, and future scholars who wish to run such a workshop can consider implementing their suggestions. The main contribution of this paper is the lessons learned section, which the workshop participants helped shape based on what they discovered during the workshop. We organize the lessons learned into four themes, reflecting the areas of the papers submitted to the workshop: 1) improving study design for HRI, 2) how to work with participants, especially children, 3) making the most of the study and the robot's limitations, and 4) how to collaborate well across fields. These themes include practical tips and guidelines to help researchers learn about fields of HRI research with which they have limited experience. We include specific examples, and researchers can adapt the tips and guidelines to their own areas to avoid common mistakes and pitfalls in their research.

8.
Ergonomics ; 63(4): 421-439, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32096445

ABSTRACT

Stereotypes are cognitive shortcuts that facilitate efficient social judgments about others. Just as causal attributions affect perceptions of people, they may similarly affect perceptions of technology, particularly anthropomorphic technology such as robots. In a scenario-based study, younger and older adults judged the performance and capability of an anthropomorphised robot that appeared young or old. In some cases the robot successfully performed a task, while at other times it failed. Results showed that older adult participants were more susceptible to aging stereotypes, as indicated by their trust. In addition, both younger and older adult participants succumbed to aging stereotypes when rating the perceived capability of the robots. Finally, causal reasoning results suggested that participants may have applied aging stereotypes to older-appearing robots: they were most likely to give credit to a properly functioning robot when it appeared young and performed a cognitive task. Our results tentatively suggest that human theories of social cognition do not wholly translate to technology-based contexts; future work may elaborate on these findings. Practitioner summary: Perceptions and expectations of the capabilities of robots may influence whether users, especially older users, accept and use them. The current results suggest that care must be taken in the design of these robots, as users may stereotype them.


Subjects
Age Factors, Robotics, Social Perception, Stereotyping, Adolescent, Adult, Aged, Female, Humans, Male, Young Adult
9.
Hum Factors ; 62(2): 194-210, 2020 03.
Article in English | MEDLINE | ID: mdl-31419163

ABSTRACT

OBJECTIVE: The present study aims to evaluate driver intervention behaviors during a partially automated parking task. BACKGROUND: Cars with partially automated parking features are becoming widely available. Although recent research has explored the use of automation features in partially automated cars, none has focused on partially automated parking. Recent incidents and research have demonstrated that drivers sometimes use partially automated features in unexpected, inefficient, and harmful ways. METHOD: Participants completed a series of partially automated parking trials with a Tesla Model X, and their behavioral interventions were recorded. Participants also completed a risk-taking behavior test and a post-experiment questionnaire that included questions about trust in the system, likelihood of using the Autopark feature, and preference for either the partially automated parking feature or self-parking. RESULTS: Initial intervention rates were over 50% but declined steeply in later trials. Responses to open-ended questions revealed that once participants understood what the system was doing, they were much more likely to trust it. Trust in the partially automated parking feature was predicted by a model including risk-taking behaviors, self-confidence, self-reported number of errors committed by the Tesla, and the proportion of trials in which the driver intervened. CONCLUSION: Using partially automated parking with little knowledge of its workings can lead to a high degree of initial distrust. Repeated exposure to partially automated features can greatly increase drivers' use of them. APPLICATION: Short tutorials and brief explanations of the workings of partially automated features may greatly improve trust in the system when drivers are first introduced to partially automated systems.
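As a rough illustration of the predictive model named in the RESULTS section, the sketch below fits a linear regression of trust on the four reported predictors. All variable names, the sample size, and the data are fabricated stand-ins, not the study's measures.

```python
# Illustrative only: a linear model of trust using the predictors the
# abstract names. Data, coefficients, and n are fabricated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 48  # hypothetical sample size
df = pd.DataFrame({
    "risk_taking": rng.normal(size=n),
    "self_confidence": rng.normal(size=n),
    "reported_errors": rng.poisson(2, size=n),
    "intervention_rate": rng.uniform(0, 1, size=n),
})
# Synthetic outcome so the example runs end to end.
df["trust"] = (0.4 * df["self_confidence"]
               - 0.5 * df["intervention_rate"]
               - 0.2 * df["reported_errors"]
               + rng.normal(scale=0.5, size=n))

fit = smf.ols("trust ~ risk_taking + self_confidence"
              " + reported_errors + intervention_rate", data=df).fit()
print(fit.params)
```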


Subjects
Automation, Automobile Driving/psychology, Automobiles, Man-Machine Systems, Trust, Adolescent, Humans, Male, Risk-Taking, Surveys and Questionnaires, Young Adult
10.
Front Hum Neurosci ; 12: 309, 2018.
Article in English | MEDLINE | ID: mdl-30147648

ABSTRACT

With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods to monitor AI in a transparent, human-aware manner. Decades of research have demonstrated that people who are not aware of the exact performance levels of automated algorithms often experience a mismatch in expectations. Consequently, they will often place either too little or too much trust in an algorithm. Detecting such a mismatch in expectations, and thus calibrating trust, remains a fundamental challenge in research investigating the use of automation. Due to the context-dependent nature of trust, universal measures of trust have not been established. Trust is a difficult construct to investigate because even the act of reflecting on how much a person trusts a certain agent can change the perception of that agent. We hypothesized that electroencephalograms (EEGs) could provide such a universal index of trust without the need for self-report. In this work, EEGs were recorded for 21 participants (mean age = 22.1; 13 females) while they observed a series of algorithms perform a modified version of a flanker task. Each algorithm's degree of credibility and reliability were manipulated. We hypothesized that neural markers of action monitoring, such as the observational error-related negativity (oERN) and observational error positivity (oPe), are potential candidates for monitoring computer algorithm performance. Our findings demonstrate that (1) it is possible to reliably elicit both the oERN and oPe while participants monitored these computer algorithms, (2) the oPe, as opposed to the oERN, significantly distinguished between high- and low-reliability algorithms, and (3) the oPe significantly correlated with subjective measures of trust. This work provides the first evidence for the utility of neural correlates of error monitoring for examining trust in computer algorithms.
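For orientation, here is a minimal sketch of how windowed ERP amplitudes like the oERN and oPe can be extracted from epoched single-channel EEG. The sampling rate, component windows, channel, and data are assumptions for illustration, not the paper's analysis parameters.

```python
# Minimal ERP-amplitude sketch; epochs, sampling rate, and component
# windows are illustrative assumptions, not the paper's parameters.
import numpy as np

fs = 250        # assumed sampling rate (Hz)
t0 = -0.2       # epoch start relative to the observed response (s)
# Placeholder epochs: (n_trials, n_samples) for one channel, e.g., FCz.
epochs = np.random.randn(120, int(0.8 * fs))

def mean_amplitude(epochs, start_s, end_s):
    """Mean amplitude of the trial-averaged ERP in a time window."""
    erp = epochs.mean(axis=0)                 # average across trials
    i0 = int(round((start_s - t0) * fs))
    i1 = int(round((end_s - t0) * fs))
    return erp[i0:i1].mean()

oern = mean_amplitude(epochs, 0.05, 0.15)     # assumed oERN window
ope = mean_amplitude(epochs, 0.20, 0.40)      # assumed oPe window
print(f"oERN: {oern:.2f}, oPe: {ope:.2f} (arbitrary units)")
```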

11.
Ergonomics ; 61(10): 1409-1427, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29578376

ABSTRACT

Modern interactions with technology are increasingly moving away from simple human use of computers as tools toward the establishment of human relationships with autonomous entities that carry out actions on our behalf. In a recent commentary, Peter Hancock issued a stark warning to the field of human factors that attention must be focused on the appropriate design of a new class of technology: highly autonomous systems. In this article, we heed the warning and propose a human-centred approach directly aimed at ensuring that future human-autonomy interactions remain focused on the user's needs and preferences. Adapting literature from industrial psychology, we propose a framework for infusing a unique human-like ability, building and actively repairing trust, into autonomous systems. We conclude by proposing a model to guide the design of future autonomy and a research agenda to explore current challenges in repairing trust between humans and autonomous systems. Practitioner Summary: This paper is a call to practitioners to re-cast our connection to technology as akin to a relationship between two humans rather than between a human and their tools. To that end, designing autonomy with trust repair abilities will help ensure that future technology maintains and repairs relationships with its human partners.


Subjects
Computers, Man-Machine Systems, Humans, Technology, Trust
12.
Hum Factors ; 59(1): 116-133, 2017 02.
Article in English | MEDLINE | ID: mdl-28146673

ABSTRACT

OBJECTIVE: We investigated the effects of exogenous oxytocin on trust, compliance, and team decision making with agents varying in anthropomorphism (computer, avatar, human) and reliability (100%, 50%). BACKGROUND: Authors of recent work have explored psychological similarities in how people trust humanlike automation compared with how they trust other humans. Exogenous administration of oxytocin, a neuropeptide associated with trust among humans, offers a unique opportunity to probe the anthropomorphism continuum of automation to infer when agents are trusted like another human or merely a machine. METHOD: Eighty-four healthy male participants collaborated with automated agents varying in anthropomorphism that provided recommendations in a pattern recognition task. RESULTS: Under placebo, participants exhibited less trust and compliance with automated aids as the anthropomorphism of those aids increased. Under oxytocin, participants interacted with aids at the extremes of the anthropomorphism continuum much as they did under placebo, but increased their trust, compliance, and performance with the avatar, an agent at the midpoint of the anthropomorphism continuum. CONCLUSION: This study provides the first evidence that administration of exogenous oxytocin affects trust, compliance, and team decision making with automated agents. These effects support the premise that oxytocin increases affinity for social stimuli in automated aids. APPLICATION: Designing automation to mimic basic human characteristics is sufficient to elicit behavioral trust outcomes that are driven by neurological processes typically observed in human-human interactions. Designers of automated systems should consider the task, the individual, and the level of anthropomorphism to achieve the desired outcome.
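Assuming, purely for illustration, a within-subject comparison of compliance across the three agent types, an analysis along these lines could be run with a repeated-measures ANOVA; the data and design details below are fabricated, not the study's.

```python
# Fabricated within-subject sketch: compliance with computer, avatar,
# and human agents compared via repeated-measures ANOVA.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = [
    {"pid": pid, "agent": agent,
     "compliance": rng.normal(loc=0.6, scale=0.1)}
    for pid in range(84)                 # hypothetical n = 84
    for agent in ("computer", "avatar", "human")
]
df = pd.DataFrame(rows)

result = AnovaRM(df, depvar="compliance", subject="pid",
                 within=["agent"]).fit()
print(result)
```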


Subjects
Automation, Cooperative Behavior, Decision Making/physiology, Man-Machine Systems, Oxytocin/pharmacology, Trust, Adult, Decision Making/drug effects, Humans, Male
13.
J Exp Psychol Appl ; 22(3): 331-49, 2016 09.
Article in English | MEDLINE | ID: mdl-27505048

ABSTRACT

We interact daily with computers that appear and behave like humans. Some researchers propose that people apply the same social norms to computers as they do to humans, suggesting that social psychological knowledge can be applied to our interactions with computers. In contrast, theories of human-automation interaction postulate that humans respond to machines in unique and specific ways. We believe that anthropomorphism, the degree to which an agent exhibits human characteristics, is the critical variable that may resolve this apparent contradiction across the formation, violation, and repair stages of trust. Three experiments were designed to examine these opposing viewpoints by varying the appearance and behavior of automated agents. Participants received advice that deteriorated gradually in reliability from a computer, avatar, or human agent. Our results showed (a) that anthropomorphic agents were associated with greater trust resilience, a higher resistance to breakdowns in trust; (b) that these effects were magnified by greater uncertainty; and (c) that incorporating human-like trust repair behavior largely erased differences between the agents. Automation anthropomorphism is therefore a critical variable that should be carefully incorporated into any general theory of human-agent trust as well as novel automation design.


Subjects
Artificial Intelligence, Cognition, Trust, User-Computer Interface, Adolescent, Adult, Automation, Computers, Female, Humans, Male, Young Adult
14.
Work ; 54(2): 351-66, 2016 May 24.
Article in English | MEDLINE | ID: mdl-27232057

ABSTRACT

BACKGROUND: Resilience to stress is critical in today's military service. Past work has shown that experts handle stress in more productive ways than novices. Training that specifically addresses stress regulation, such as the Graduated Stress Exposure paradigm, can build individual and unit resilience and adaptability, so that stressors trigger effective stress-coping skills rather than stress injury. OBJECTIVE: We developed the Stress Resilience Training System (SRTS), a product of Perceptronics Solutions Inc., to demonstrate that a software training app can provide an effective, individualized method for mitigating the negative effects of situational and mission-related stress while eliciting potentially positive effects on performance. METHODS: Seven separate evaluations, including a usability study, controlled experiments, and field evaluations, have been conducted to date. RESULTS: These studies have shown that the SRTS program effectively engages users in managing their stress, reduces stress symptoms, and improves job performance. CONCLUSIONS: The SRTS system is a highly effective method for individualized training that inoculates professionals against the negative consequences of stress while teaching them to harness its positive effects. SRTS can be widely applied across professions concerned with well-being. We discuss applications to law enforcement, athletics, personal fitness, and healthcare in the Appendix.


Subjects
Psychological Adaptation, Military Personnel/psychology, Mobile Applications, Psychological Resilience, Psychological Stress/prevention & control, Attitude, Formative Feedback, Humans, Internet, Military Personnel/education, Sense of Coherence, User-Computer Interface, Video Games/psychology
15.
Hum Factors ; 53(5): 517-27, 2011 Oct.
Article in English | MEDLINE | ID: mdl-22046724

ABSTRACT

OBJECTIVE: We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). BACKGROUND: To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice. METHOD: Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes. RESULTS: The overall correlational effect size for trust was r = +0.26, with an experimental effect size of d = +0.71. The effects of human, robot, and environmental characteristics were examined, with particular attention to performance-based and attribute-based robot factors. Robot performance and attributes were the largest contributors to the development of trust in HRI. Environmental factors played only a moderate role. CONCLUSION: Factors related to the robot itself, specifically its performance, had the greatest current association with trust, and environmental factors were moderately associated. There was little evidence for effects of human-related factors. APPLICATION: The findings provide quantitative estimates of human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, the results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified.
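When reading meta-analytic summaries like this one, it helps to relate the two effect-size metrics. The standard Cohen conversions below show how r and d map onto each other; note that the paper's d = +0.71 was estimated from the experimental studies directly, not converted from r = +0.26.

```python
# Standard effect-size conversions (Cohen); shown for orientation only.
import math

def r_to_d(r: float) -> float:
    """Convert a correlation r to Cohen's d."""
    return 2 * r / math.sqrt(1 - r ** 2)

def d_to_r(d: float) -> float:
    """Convert Cohen's d to r (equal group sizes assumed)."""
    return d / math.sqrt(d ** 2 + 4)

print(round(r_to_d(0.26), 2))  # ~0.54
print(round(d_to_r(0.71), 2))  # ~0.33
```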


Subjects
Interpersonal Relations, Robotics, Trust, Humans, Social Behavior