Results 1 - 3 of 3
1.
Sci Rep; 14(1): 19751, 2024 Sep 4.
Article in English | MEDLINE | ID: mdl-39231986

ABSTRACT

This research explores prospective determinants of trust in the recommendations of artificial agents regarding decisions to kill, using a novel visual challenge paradigm simulating threat-identification (enemy combatants vs. civilians) under uncertainty. In Experiment 1, we compared trust in the advice of a physically embodied versus screen-mediated anthropomorphic robot, observing no effects of embodiment; in Experiment 2, we manipulated the relative anthropomorphism of virtual robots, observing modestly greater trust in the most anthropomorphic agent relative to the least. Across studies, when any version of the agent randomly disagreed, participants reversed their threat-identifications and decisions to kill in the majority of cases, substantially degrading their initial performance. Participants' subjective confidence in their decisions tracked whether the agent (dis)agreed, while both decision-reversals and confidence were moderated by appraisals of the agent's intelligence. The overall findings indicate a strong propensity to overtrust unreliable AI in life-or-death decisions made under uncertainty.


Subject(s)
Artificial Intelligence , Robotics , Trust , Humans , Robotics/methods , Male , Female , Adult , Decision Making , Young Adult , Uncertainty
2.
Auton Robots; 47(2): 249-265, 2023.
Article in English | MEDLINE | ID: mdl-36530466

ABSTRACT

Recognising intent in collaborative human-robot tasks can improve team performance and human perception of robots. Intent can differ from the observed outcome in the presence of mistakes, which are likely in physically dynamic tasks. We created a dataset of 1227 throws of a ball at a target from 10 participants and observed that 47% of throws were mistakes, with 16% completely missing the target. Our research leverages facial images capturing the person's reaction to the outcome of a throw to predict when the resulting throw is a mistake, and then determines the actual intent of the throw. The approach we propose for outcome prediction performs 38% better on front-on videos than the two-stream architecture previously used for this task. In addition, we propose a 1D-CNN model which is used in conjunction with priors learned from the frequency of mistakes to provide an end-to-end pipeline for outcome and intent recognition in this throwing task.
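
The abstract does not give implementation details, but the pipeline it describes (a 1D-CNN over facial-reaction features, combined with a prior learned from the frequency of mistakes, to recover the intended outcome) can be sketched roughly as follows. The feature dimensions, layer sizes, and the simple prior-blending step are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical sketch of the outcome/intent pipeline described in the abstract.
# Feature dimensions, layer sizes, and the prior-blending step are assumptions,
# not the authors' actual design.
import torch
import torch.nn as nn


class Outcome1DCNN(nn.Module):
    """1D-CNN over a temporal sequence of per-frame facial features,
    predicting whether the observed throw was a mistake."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 2)  # logits: [not a mistake, mistake]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, feat_dim) -> (batch, feat_dim, seq_len) for Conv1d
        x = x.transpose(1, 2)
        return self.head(self.conv(x).squeeze(-1))


def infer_intent(observed_target: str, p_mistake: float,
                 mistake_prior: float = 0.47) -> str:
    """Toy intent step: blend the CNN's mistake probability with the
    dataset-level mistake frequency (47% in the reported data) and decide
    whether the observed outcome matches the intended target."""
    blended = 0.5 * (p_mistake + mistake_prior)  # crude blending (assumption)
    if blended > 0.5:
        return "intended a different target than the one observed"
    return f"intended the observed target: {observed_target}"
```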

3.
Front Robot AI; 8: 701938, 2021.
Article in English | MEDLINE | ID: mdl-34336937

ABSTRACT

This paper conceptualizes the problem of emergency evacuation as a paradigm for investigating human-robot interaction. We argue that emergency evacuation offers unique and important perspectives on human-robot interaction while also demanding close attention to the ethical ramifications of the technologies developed. We present a series of approaches for developing emergency evacuation robots and detail several essential design considerations. This paper concludes with a discussion of the ethical implications of emergency evacuation robots and a roadmap for their development, implementation, and evaluation.
