Results 1 - 20 of 22
1.
Front Psychol ; 14: 1129369, 2023.
Article in English | MEDLINE | ID: mdl-37408965

ABSTRACT

The growing concern about the risk and safety of autonomous vehicles (AVs) has made it vital to understand driver trust and behavior when operating AVs. While research has uncovered human factors and design issues based on individual driver performance, there remains a lack of insight into how trust in automation evolves in groups of people who face risk and uncertainty while traveling in AVs. To this end, we conducted a naturalistic experiment with groups of participants who were encouraged to engage in conversation while riding a Tesla Model X on campus roads. Our methodology was uniquely suited to uncover these issues through naturalistic interaction by groups in the face of a risky driving context. Conversations were analyzed, revealing several themes pertaining to trust in automation: (1) collective risk perception, (2) experimenting with automation, (3) group sense-making, (4) human-automation interaction issues, and (5) benefits of automation. Our findings highlight the untested and experimental nature of AVs and confirm serious concerns about the safety and readiness of this technology for on-road use. The process of determining appropriate trust and reliance in AVs will therefore be essential for drivers and passengers to ensure the safe use of this experimental and continuously changing technology. Revealing insights into social group-vehicle interaction, our results speak to the potential dangers and ethical challenges with AVs as well as provide theoretical insights on group trust processes with advanced technology.

2.
Sci Rep ; 13(1): 78, 2023 01 03.
Article in English | MEDLINE | ID: mdl-36596816

ABSTRACT

While some theoretical perspectives imply that the context of a virtual training should be customized to match the intended context where those skills would ultimately be applied, others suggest this might not be necessary for learning. It is important to determine whether manipulating context matters for performance in training applications because customized virtual training systems made for specific use cases are more costly than generic "off-the-shelf" ones designed for a broader set of users. Accordingly, we report a study where military cadets use a virtual platform to practice their negotiation skills, and are randomly assigned to one of two virtual context conditions: military versus civilian. Out of 28 measures capturing performance in the negotiation, there was only one significant result: cadets in the civilian condition politely asked the agent to make an offer significantly more often than those in the military condition. These results imply that, for this interpersonal skills application and perhaps ones like it, virtual context may matter very little for performance during social skills training, and that commercial systems may yield real benefits to military scenarios with little-to-no modification.


Subjects
Learning, Military Personnel, Social Skills, User-Computer Interface, Military Personnel/education, Military Personnel/psychology, Humans, Random Allocation
3.
Proc Natl Acad Sci U S A ; 119(6)2022 02 08.
Article in English | MEDLINE | ID: mdl-35131848

ABSTRACT

Across 11 studies involving six countries from four continents (n = 3,285), we extend insights from field investigations in conflict zones to offline and online surveys to show that personal spiritual formidability-the conviction and immaterial resources (values, strengths of beliefs, character) of a person to fight-is positively associated with the will to fight and sacrifice for others. The physical formidability of groups in conflict has long been promoted as the primary factor in human decisions to fight or flee in times of conflict. Here, studies in Spain, Iraq, Lebanon, Palestine, and Morocco reveal that personal spiritual formidability, a construct distinct from religiosity, is more strongly associated with the willingness to fight and make costly self-sacrifices for the group than physical formidability. A follow-on study among cadets of the US Air Force Academy further indicates that this effect is mediated by a stronger loyalty to the group, a finding replicated in a separate study with a European sample. The results demonstrate that personal spiritual formidability is a primary determinant of the will to fight across cultures, and this individual-level factor, propelled by loyal bonds made with others, disposes citizens and combatants to fight at great personal risk.


Subjects
Negotiation/psychology, Social Perception/psychology, Adolescent, Adult, Aged, Cross-Cultural Comparison, Female, Humans, Male, Middle Aged, Personnel Loyalty, Religion, Surveys and Questionnaires, Young Adult
4.
Sensors (Basel) ; 22(3)2022 Feb 08.
Article in English | MEDLINE | ID: mdl-35162032

ABSTRACT

To understand how to improve interactions with dog-like robots, we evaluated the importance of "dog-like" framing and physical appearance on interaction, hypothesizing multiple interactive benefits of each. We assessed whether framing Aibo as a puppy (i.e., in need of development) versus simply a robot would result in more positive responses and interactions. We also predicted that adding fur to Aibo would make it appear more dog-like, likable, and interactive. Twenty-nine participants engaged with Aibo in a 2 × 2 (framing × appearance) design by issuing commands to the robot. Aibo and participant behaviors were recorded each second and evaluated via an analysis of commands issued, an analysis of command blocks (i.e., chains of commands), and a T-pattern analysis of participant behavior. Participants were more likely to issue the "Come Here" command than other types of commands. When framed as a puppy, participants used Aibo's dog name more often, praised it more, and exhibited more unique, interactive, and complex behavior with Aibo. Participants exhibited the most smiling and laughing behaviors with Aibo framed as a puppy without fur. Across conditions, after interacting with Aibo, participants felt Aibo was more trustworthy, intelligent, warm, and connected than at their initial meeting. This study shows the benefits of introducing a social robotic agent with a particular frame that emphasizes realism (i.e., introducing the robot dog as a puppy) to promote more interactive engagement.


Subjects
Robotics, Animals, Dogs, Emotions, Friends, Humans
5.
Front Psychol ; 12: 604977, 2021.
Article in English | MEDLINE | ID: mdl-34737716

ABSTRACT

With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct which has sparked a multitude of measures and approaches to study and understand it. This comprehensive narrative review addresses known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work provides a reference guide for researchers, providing a list of available TiA measurement methods along with the model-derived constructs that they capture including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.

6.
Front Psychol ; 12: 625713, 2021.
Article in English | MEDLINE | ID: mdl-34135804

ABSTRACT

The anticipated social capabilities of robots may allow them to serve in authority roles as part of human-machine teams. To date, it is unclear if, and to what extent, human team members will comply with requests from their robotic teammates, and how such compliance compares to requests from human teammates. This research examined how the human-likeness and physical embodiment of a robot affect compliance with a robot's request to perseverate, using a novel task paradigm. Across a set of two studies, participants performed a visual search task while receiving ambiguous performance feedback. Compliance was evaluated when the participant requested to stop the task and the coach urged the participant to keep practicing multiple times. In the first study, the coach was either physically co-located with the participant or located remotely via live video. Coach type varied in human-likeness and included either a real human (confederate), a Nao robot, or a modified Roomba robot. The second study expanded on the first by including a Baxter robot as a coach and replicated the findings in a different sample population with a strict chain-of-command culture. Results from both studies showed that participants complied with the robot's requests for up to 11 min; compliance was lower than with a human coach, and embodiment and human-likeness had only weak effects on compliance.

7.
Front Robot AI ; 8: 772141, 2021.
Article in English | MEDLINE | ID: mdl-35155588

ABSTRACT

The field of human-robot interaction (HRI) research is multidisciplinary, requiring researchers to draw on computer science, engineering, informatics, philosophy, psychology, and other disciplines; it is hard to be an expert in all of them. To help HRI researchers develop methodological skills, especially in areas that are relatively new to them, we conducted a virtual workshop, Workshop Your Study Design (WYSD), at the 2021 International Conference on HRI. In this workshop, we grouped participants with mentors who are experts in areas such as real-world studies, empirical lab studies, questionnaire design, interviews, participatory design, and statistics. During and after the workshop, participants discussed their proposed study methods, obtained feedback, and improved their work accordingly. In this paper, we present (1) workshop attendees' feedback about the workshop and (2) lessons that the participants learned during their discussions with mentors. Participants' responses about the workshop were positive, and future scholars who wish to run such a workshop can consider implementing their suggestions. The main contribution of this paper is the lessons-learned section, which the workshop participants helped shape based on what they discovered during the workshop. We organize the lessons into four themes, corresponding to the areas of the papers submitted to the workshop: (1) improving study design for HRI, (2) working with participants, especially children, (3) making the most of the study's and robot's limitations, and (4) collaborating well across fields. These themes include practical tips and guidelines to help researchers learn about areas of HRI research in which they have limited experience. We include specific examples, and researchers can adapt the tips and guidelines to their own areas to avoid common mistakes and pitfalls.

8.
Ergonomics ; 63(4): 421-439, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32096445

ABSTRACT

Stereotypes are cognitive shortcuts that facilitate efficient social judgments about others. Just as causal attributions affect perceptions of people, they may similarly affect perceptions of technology, particularly anthropomorphic technology such as robots. In a scenario-based study, younger and older adults judged the performance and capability of an anthropomorphised robot that appeared young or old. In some cases, the robot successfully performed a task while at other times it failed. Results showed that older adult participants were more susceptible to aging stereotypes, as indicated by their trust ratings. In addition, both younger and older adult participants applied aging stereotypes when rating the perceived capability of the robots. Finally, a summary of causal reasoning results showed that our participants may have applied aging stereotypes to older-appearing robots: they were most likely to give credit to a properly functioning robot when it appeared young and performed a cognitive task. Our results tentatively suggest that human theories of social cognition do not wholly translate to technology-based contexts and that future work may elaborate on these findings. Practitioner summary: Perception and expectations of the capabilities of robots may influence whether users accept and use them, especially older users. The current results suggest that care must be taken in the design of these robots as users may stereotype them.


Subjects
Age Factors, Robotics, Social Perception, Stereotyping, Adolescent, Adult, Aged, Female, Humans, Male, Young Adult
9.
Hum Factors ; 62(2): 194-210, 2020 03.
Article in English | MEDLINE | ID: mdl-31419163

ABSTRACT

OBJECTIVE: The present study aims to evaluate driver intervention behaviors during a partially automated parking task. BACKGROUND: Cars with partially automated parking features are becoming widely available. Although recent research explores the use of automation features in partially automated cars, none has focused on partially automated parking. Recent incidents and research have demonstrated that drivers sometimes use partially automated features in unexpected, inefficient, and harmful ways. METHOD: Participants completed a series of partially automated parking trials with a Tesla Model X and their behavioral interventions were recorded. Participants also completed a risk-taking behavior test and a post-experiment questionnaire that included questions about trust in the system, likelihood of using the Autopark feature, and preference for either the partially automated parking feature or self-parking. RESULTS: Initial intervention rates were over 50%, but declined steeply in later trials. Responses to open-ended questions revealed that once participants understood what the system was doing, they were much more likely to trust it. Trust in the partially automated parking feature was predicted by a model including risk-taking behaviors, self-confidence, self-reported number of errors committed by the Tesla, and the proportion of trials in which the driver intervened. CONCLUSION: Using partially automated parking with little knowledge of its workings can lead to a high degree of initial distrust. Repeated exposure to partially automated features can greatly increase drivers' use of them. APPLICATION: Short tutorials and brief explanations of the workings of partially automated features may greatly improve trust in the system when drivers are first introduced to partially automated systems.


Subjects
Automation, Automobile Driving/psychology, Automobiles, Man-Machine Systems, Trust, Adolescent, Humans, Male, Risk-Taking, Surveys and Questionnaires, Young Adult
10.
Front Hum Neurosci ; 12: 309, 2018.
Article in English | MEDLINE | ID: mdl-30147648

ABSTRACT

With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods to monitor AI in a transparent, human-aware manner. Decades of research have demonstrated that people who are not aware of the exact performance levels of automated algorithms often experience a mismatch in expectations. Consequently, they will often place either too little or too much trust in an algorithm. Detecting such a mismatch in expectations, or trust calibration, remains a fundamental challenge in research investigating the use of automation. Due to the context-dependent nature of trust, universal measures of trust have not been established. Trust is a difficult construct to investigate because even the act of reflecting on how much a person trusts a certain agent can change the perception of that agent. We hypothesized that electroencephalograms (EEGs) would be able to provide such a universal index of trust without the need for self-report. In this work, EEGs were recorded for 21 participants (mean age = 22.1; 13 females) while they observed a series of algorithms perform a modified version of a flanker task. Each algorithm's degree of credibility and reliability were manipulated. We hypothesized that neural markers of action monitoring, such as the observational error-related negativity (oERN) and observational error positivity (oPe), are potential candidates for monitoring computer algorithm performance. Our findings demonstrate that (1) it is possible to reliably elicit both the oERN and oPe while participants monitored these computer algorithms, (2) the oPe, as opposed to the oERN, significantly distinguished between high and low reliability algorithms, and (3) the oPe significantly correlated with subjective measures of trust. This work provides the first evidence for the utility of neural correlates of error monitoring for examining trust in computer algorithms.
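The error-monitoring analysis described above rests on comparing averaged EEG epochs from observed error versus correct trials. A minimal sketch on synthetic single-channel data, not the study's pipeline: every amplitude, timing, trial count, and sampling rate below is an invented illustration of how an error-minus-correct difference wave is formed.

```python
import numpy as np

fs = 250                              # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)      # epoch window: -200 ms to +800 ms

rng = np.random.default_rng(1)

def make_epochs(n, ern_amp):
    """Fake single-channel epochs: noise plus an optional negative
    deflection peaking ~100 ms post-response (a crude oERN stand-in)."""
    noise = rng.normal(0.0, 2.0, size=(n, t.size))
    ern = ern_amp * np.exp(-((t - 0.10) ** 2) / (2 * 0.03 ** 2))
    return noise + ern

err_epochs = make_epochs(40, ern_amp=-5.0)   # observed-error trials
cor_epochs = make_epochs(40, ern_amp=0.0)    # observed-correct trials

# Difference wave between the two grand averages; its most negative
# point approximates the component's peak latency.
diff_wave = err_epochs.mean(axis=0) - cor_epochs.mean(axis=0)
peak_idx = int(np.argmin(diff_wave))
peak_ms = t[peak_idx] * 1000.0
```

Real analyses would add filtering, baseline correction, artifact rejection, and channel selection; the point here is only the trial-averaged difference-wave logic.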

11.
Ergonomics ; 61(10): 1409-1427, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29578376

ABSTRACT

Modern interactions with technology are increasingly moving away from simple human use of computers as tools to the establishment of human relationships with autonomous entities that carry out actions on our behalf. In a recent commentary, Peter Hancock issued a stark warning to the field of human factors that attention must be focused on the appropriate design of a new class of technology: highly autonomous systems. In this article, we heed the warning and propose a human-centred approach directly aimed at ensuring that future human-autonomy interactions remain focused on the user's needs and preferences. By adapting literature from industrial psychology, we propose a framework to infuse a unique human-like ability, building and actively repairing trust, into autonomous systems. We conclude by proposing a model to guide the design of future autonomy and a research agenda to explore current challenges in repairing trust between humans and autonomous systems. Practitioner Summary: This paper is a call to practitioners to re-cast our connection to technology as akin to a relationship between two humans rather than between a human and their tools. To that end, designing autonomy with trust-repair abilities will help ensure that future technology maintains and repairs relationships with its human partners.


Subjects
Computers, Man-Machine Systems, Humans, Technology, Trust
12.
Hum Factors ; 59(1): 116-133, 2017 02.
Article in English | MEDLINE | ID: mdl-28146673

ABSTRACT

OBJECTIVE: We investigated the effects of exogenous oxytocin on trust, compliance, and team decision making with agents varying in anthropomorphism (computer, avatar, human) and reliability (100%, 50%). BACKGROUND: Authors of recent work have explored psychological similarities in how people trust humanlike automation compared with how they trust other humans. Exogenous administration of oxytocin, a neuropeptide associated with trust among humans, offers a unique opportunity to probe the anthropomorphism continuum of automation to infer when agents are trusted like another human or merely a machine. METHOD: Eighty-four healthy male participants collaborated with automated agents varying in anthropomorphism that provided recommendations in a pattern recognition task. RESULTS: Under placebo, participants exhibited less trust and compliance with automated aids as the anthropomorphism of those aids increased. Under oxytocin, participants interacted with aids at the extremes of the anthropomorphism continuum much as they did under placebo, but increased their trust, compliance, and performance with the avatar, an agent at the midpoint of the anthropomorphism continuum. CONCLUSION: This study provides the first evidence that administration of exogenous oxytocin affected trust, compliance, and team decision making with automated agents. These effects provide support for the premise that oxytocin increases affinity for social stimuli in automated aids. APPLICATION: Designing automation to mimic basic human characteristics is sufficient to elicit behavioral trust outcomes that are driven by neurological processes typically observed in human-human interactions. Designers of automated systems should consider the task, the individual, and the level of anthropomorphism to achieve the desired outcome.


Subjects
Automation, Cooperative Behavior, Decision Making/physiology, Man-Machine Systems, Oxytocin/pharmacology, Trust, Adult, Decision Making/drug effects, Humans, Male
13.
Soc Neurosci ; 12(5): 570-581, 2017 10.
Article in English | MEDLINE | ID: mdl-27409387

ABSTRACT

As society becomes more reliant on machines and automation, understanding how people utilize advice is a necessary endeavor. Our objective was to reveal the underlying neural associations during advice utilization from expert human and machine agents with fMRI and multivariate Granger causality analysis. During an X-ray luggage-screening task, participants accepted or rejected good or bad advice from either the human or machine agent framed as experts with manipulated reliability (high miss rate). We showed that the machine-agent group decreased their advice utilization compared to the human-agent group and these differences in behaviors during advice utilization could be accounted for by high expectations of reliable advice and changes in attention allocation due to miss errors. Brain areas involved with the salience and mentalizing networks, as well as sensory processing involved with attention, were recruited during the task and the advice utilization network consisted of attentional modulation of sensory information with the lingual gyrus as the driver during the decision phase and the fusiform gyrus as the driver during the feedback phase. Our findings expand on the existing literature by showing that misses degrade advice utilization, which is represented in a neural network involving salience detection and self-processing with perceptual integration.


Subjects
Attitude to Computers, Brain/physiology, Decision Making/physiology, Social Behavior, Brain/diagnostic imaging, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Multivariate Analysis, Neural Pathways/diagnostic imaging, Neural Pathways/physiology, Neuropsychological Tests, Pattern Recognition, Visual/physiology, Random Allocation, Young Adult
14.
J Exp Psychol Appl ; 22(3): 331-49, 2016 09.
Article in English | MEDLINE | ID: mdl-27505048

ABSTRACT

We interact daily with computers that appear and behave like humans. Some researchers propose that people apply the same social norms to computers as they do to humans, suggesting that social psychological knowledge can be applied to our interactions with computers. In contrast, theories of human-automation interaction postulate that humans respond to machines in unique and specific ways. We believe that anthropomorphism, the degree to which an agent exhibits human characteristics, is the critical variable that may resolve this apparent contradiction across the formation, violation, and repair stages of trust. Three experiments were designed to examine these opposing viewpoints by varying the appearance and behavior of automated agents. Participants received advice that deteriorated gradually in reliability from a computer, avatar, or human agent. Our results showed (a) that anthropomorphic agents were associated with greater trust resilience, a higher resistance to breakdowns in trust; (b) that these effects were magnified by greater uncertainty; and (c) that incorporating human-like trust repair behavior largely erased differences between the agents. Automation anthropomorphism is therefore a critical variable that should be carefully incorporated into any general theory of human-agent trust as well as novel automation design.


Subjects
Artificial Intelligence, Cognition, Trust, User-Computer Interface, Adolescent, Adult, Automation, Computers, Female, Humans, Male, Young Adult
15.
Work ; 54(2): 351-66, 2016 May 24.
Article in English | MEDLINE | ID: mdl-27232057

ABSTRACT

BACKGROUND: Resilience to stress is critical in today's military service. Past work has shown that experts handle stress in more productive ways compared to novices. Training that specifically addresses stress regulation, such as the Graduated Stress Exposure paradigm, can build individual and unit resilience as well as adaptability so that stressors trigger effective stress coping skills rather than stress injury. OBJECTIVE: We developed the Stress Resilience Training System (SRTS), a product of Perceptronics Solutions Inc., to demonstrate that a software training app can provide an effective individualized method for mitigating the negative effects of situational and mission-related stress, at the same time eliciting potentially positive effects on performance. METHODS: Seven separate evaluations including a usability study, controlled experiments, and field evaluations have been conducted to date. RESULTS: These studies have shown that the SRTS program effectively engages users to manage their stress, effectively reduces stress symptoms, and improves job performance. CONCLUSIONS: The SRTS system is a highly effective method for individualized training to inoculate professionals against the negative consequences of stress, while teaching them to harness its positive effects. SRTS is a technology that can be widely applied to many professions that are concerned with well-being. We discuss applications to law enforcement, athletics, personal fitness and healthcare in the Appendix.


Subjects
Adaptation, Psychological, Military Personnel/psychology, Mobile Applications, Resilience, Psychological, Stress, Psychological/prevention & control, Attitude, Formative Feedback, Humans, Internet, Military Personnel/education, Sense of Coherence, User-Computer Interface, Video Games/psychology
16.
Hum Factors ; 56(3): 463-75, 2014 May.
Article in English | MEDLINE | ID: mdl-24930169

ABSTRACT

OBJECTIVE: Assess team performance within a networked supervisory control setting while manipulating automated decision aids and monitoring team communication and working memory ability. BACKGROUND: Networked systems such as multi-unmanned air vehicle (UAV) supervision have complex properties that make prediction of human-system performance difficult. Automated decision aids can provide valuable information to operators, individual abilities can limit or facilitate team performance, and team communication patterns can alter how effectively individuals work together. We hypothesized that reliable automation, higher working memory capacity, and increased communication rates of task-relevant information would offset performance decrements attributed to high task load. METHOD: Two-person teams performed a simulated air defense task with two levels of task load and three levels of automated aid reliability. Teams communicated and received decision aid messages via chat window text messages. RESULTS: Task Load x Automation effects were significant across all performance measures. Reliable automation limited the decline in team performance with increasing task load. Average team spatial working memory was a stronger predictor than other measures of team working memory. Frequency of team rapport and enemy location communications positively related to team performance, and word count was negatively related to team performance. CONCLUSION: Reliable decision aiding mitigated team performance decline during increased task load during multi-UAV supervisory control. Team spatial working memory, communication of spatial information, and team rapport predicted team success. APPLICATION: An automated decision aid can improve team performance under high task load. Assessment of spatial working memory and the communication of task-relevant information can help in operator and team selection in supervisory control systems.


Subjects
Aviation, Communication, Man-Machine Systems, Memory, Short-Term, Task Performance and Analysis, Adult, Automation, Decision Support Techniques, Female, Group Processes, Humans, Male, Text Messaging, User-Computer Interface, Young Adult
17.
Ergonomics ; 57(3): 295-318, 2014.
Article in English | MEDLINE | ID: mdl-24308716

ABSTRACT

This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models; and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.
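Two of the three modelling approaches the abstract names can be sketched side by side on synthetic data. This is a minimal illustration, not the study's analysis: the WM-performance relationship, sample size, kernel lengthscale, and variance values below are all invented, and the Bayesian-network approach is omitted because it requires a dedicated library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated operators (illustrative numbers only): WM capacity score vs.
# a supervisory-performance metric with additive noise.
wm = rng.uniform(2.0, 7.0, size=40)
perf = 10.0 + 4.0 * wm + rng.normal(0.0, 2.0, size=40)

# --- Classical linear regression (ordinary least squares) ---
A = np.column_stack([np.ones_like(wm), wm])          # design matrix
coef, *_ = np.linalg.lstsq(A, perf, rcond=None)
lin_pred = coef[0] + coef[1] * 5.0                   # prediction at WM = 5

# --- Non-parametric Gaussian-process regression (RBF kernel) ---
def rbf(a, b, length=1.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

sig_var, noise_var = 25.0, 4.0                       # assumed (co)variances
K = sig_var * rbf(wm, wm) + noise_var * np.eye(wm.size)
alpha = np.linalg.solve(K, perf - perf.mean())       # weights on centered data
k_star = sig_var * rbf(np.array([5.0]), wm)
gp_pred = perf.mean() + float(k_star @ alpha)        # posterior mean at WM = 5
```

The contrast the abstract draws shows up here: the linear model extrapolates a fixed trend, while the GP posterior mean reverts toward the data mean away from observed WM scores, which is what makes it more conservative beyond the experimental range.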


Subjects
Attention, Decision Making, Man-Machine Systems, Memory, Short-Term, Aircraft, Bayes Theorem, Computer Simulation, Female, Humans, Linear Models, Male, Regression Analysis, Robotics, Statistics, Nonparametric, Task Performance and Analysis, Workload/psychology
18.
Soc Cogn Affect Neurosci ; 8(5): 494-8, 2013 Jun.
Article in English | MEDLINE | ID: mdl-22368214

ABSTRACT

The neuropeptide oxytocin functions as a hormone and neurotransmitter and facilitates complex social cognition and approach behavior. Given that empathy is an essential ingredient for third-party decision-making in institutions of justice, we investigated whether exogenous oxytocin modulates empathy of an unaffected third-party toward offenders and victims of criminal offenses. Healthy male participants received intranasal oxytocin or placebo in a randomized, double-blind, placebo-controlled, between-subjects design. Participants were given a set of legal vignettes that described an event during which an offender engaged in criminal offenses against victims. As an unaffected third-party, participants were asked to rate those criminal offenses on the degree to which the offender deserved punishment and how much harm was inflicted on the victim. Exogenous oxytocin selectively increased third-party decision-makers' perceptions of harm for victims but not the desire to punish offenders of criminal offenses. We argue that oxytocin promoted empathic concern for the victim, which in turn increased the tendency for prosocial approach behavior regarding the interpersonal relationship between an unaffected third-party and a fictional victim in the criminal scenarios. Future research should explore the context- and person-dependent nature of exogenous oxytocin in individuals with antisocial personality disorder and psychopathy, in whom deficits in empathy feature prominently.


Subjects
Antisocial Personality Disorder/metabolism, Antisocial Personality Disorder/physiopathology, Crime Victims, Criminals, Empathy/drug effects, Oxytocin/pharmacology, Adolescent, Adult, Analysis of Variance, Crime Victims/psychology, Criminals/psychology, Decision Making/drug effects, Double-Blind Method, Humans, Male, Psychological Tests, Self Report, Young Adult
19.
PLoS One ; 7(6): e39675, 2012.
Article in English | MEDLINE | ID: mdl-22761865

ABSTRACT

Computerized aiding systems can assist human decision makers in complex tasks but can impair performance when they provide incorrect advice that humans erroneously follow, a phenomenon known as "automation bias." The extent to which people exhibit automation bias varies significantly and may reflect inter-individual variation in the capacity of working memory and the efficiency of executive function, both of which are highly heritable and under dopaminergic and noradrenergic control in prefrontal cortex. The dopamine beta hydroxylase (DBH) gene is thought to regulate the differential availability of dopamine and norepinephrine in prefrontal cortex. We therefore examined decision-making performance under imperfect computer aiding in 100 participants performing a simulated command and control task. Based on two single nucleotide polymorphisms (SNPs) of the DBH gene, -1041 C/T (rs1611115) and 444 G/A (rs1108580), participants were divided into groups of low and high DBH enzyme activity, where low enzyme activity is associated with greater dopamine relative to norepinephrine levels in cortex. Compared to those in the high DBH enzyme activity group, individuals in the low DBH enzyme activity group were more accurate and speedier in their decisions when incorrect advice was given and verified automation recommendations more frequently. These results indicate that a gene that regulates relative prefrontal cortex dopamine availability, DBH, can identify those individuals who are less susceptible to bias in using computerized decision-aiding systems.


Subjects
Decision Making, Computer-Assisted, Dopamine beta-Hydroxylase/genetics, Adult, Automation, Genotype, Humans, Polymorphism, Single Nucleotide
20.
Work ; 41 Suppl 1: 5877-9, 2012.
Article in English | MEDLINE | ID: mdl-22317716

ABSTRACT

The proliferation of portable communication and entertainment devices has introduced new dangers to the driving environment, particularly for young and inexperienced drivers. Graduate students from George Mason University illustrate a powerful, practical, and cost-effective program that has been successful in educating these drivers on the dangers of texting while driving, which can easily be adapted and implemented in other communities.


Subjects
Accidents, Traffic/prevention & control, Automobile Driving/education, Computer Simulation, Text Messaging, Video Games, Attention, Automobile Driving/psychology, District of Columbia, Female, Humans, Male, Program Evaluation, Students/psychology, Universities, Young Adult