Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-38083754

ABSTRACT

In a Human-Robot Collaboration setting, a robot may be controlled by a user directly or through a Brain-Computer Interface that detects user intention, and it may act as an autonomous agent. As such interaction increases in complexity, conflicts become inevitable. Goal conflicts can arise from different sources: for instance, interface mistakes, related to misinterpretation of the human's intention, or errors of the autonomous system in addressing the task and the human's expectations. Such conflicts evoke different spontaneous responses in the human brain, which could be used to regulate intrinsic task parameters and to improve the system's response to errors, leading to improved transparency, performance, and safety. To study the possibility of detecting interface and agent errors, we designed a virtual pick-and-place task with sequential human and robot responsibility and recorded the electroencephalography (EEG) activity of six participants. In the virtual environment, the robot either received a command from the participants through a computer keyboard or moved as an autonomous agent. In both cases, artificial errors were defined to occur in 20-25% of the trials. We found differences in the responses to interface and agent errors. From the EEG data, correct trials, interface errors, and agent errors were correctly predicted for 51.62% ± 9.99% (chance level 38.21%) of the pick movements and 46.84% ± 6.62% (chance level 36.99%) of the place movements in a pseudo-asynchronous fashion. Our study suggests that in a human-robot collaboration setting, one may improve the future performance of a system with intention detection and autonomous modes. Specific examples could be neural interfaces that replace and restore motor functions.


Subject(s)
Brain-Computer Interfaces , Robotics , Humans , Electroencephalography , Brain/physiology , Movement
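The decoding task described above (correct trial vs. interface error vs. agent error) can be illustrated with a minimal, hypothetical sketch. This is not the authors' pipeline: the features are simulated Gaussian data standing in for EEG epoch features, and chance level is taken as the majority-class proportion, consistent with the unbalanced error rates reported in the abstract.

```python
# Hypothetical 3-class decoding sketch (not the paper's pipeline).
# Features are simulated Gaussians, not real EEG epochs.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated class sizes mirror the ~20-25% error rate in the study:
# many correct trials, fewer interface and agent errors.
n_per_class = {"correct": 120, "interface_error": 40, "agent_error": 40}
n_features = 16  # e.g. mean amplitudes over channel/time windows

X_parts, y_parts = [], []
for label, n in enumerate(n_per_class.values()):
    # Shift each class mean slightly so the classes are separable.
    X_parts.append(rng.normal(loc=0.7 * label, scale=1.0, size=(n, n_features)))
    y_parts.append(np.full(n, label))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

# With unbalanced classes, chance level is the majority-class proportion.
chance = max(n_per_class.values()) / sum(n_per_class.values())

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2%} (chance: {chance:.2%})")
```

The point of reporting accuracy against the majority-class proportion, rather than 1/3, is that an unbalanced design (75-80% correct trials here) inflates naive accuracy.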
2.
PLoS One ; 18(7): e0287958, 2023.
Article in English | MEDLINE | ID: mdl-37432954

ABSTRACT

Human-robot interaction (HRI) describes scenarios in which human and robot work as partners, sharing the same environment or complementing each other on a joint task. HRI is characterized by the need for high adaptability and flexibility of robotic systems toward their human interaction partners. One of the major challenges in HRI is task planning with dynamic subtask assignment, which is particularly challenging when the human's subtask choices are not readily accessible to the robot. In the present work, we explore the feasibility of using electroencephalogram (EEG)-based neuro-cognitive measures for online robot learning of dynamic subtask assignment. To this end, we demonstrate in an experimental human-subject study, featuring a joint HRI task with a UR10 robotic manipulator, the presence of EEG measures indicative of a human partner anticipating a takeover situation from human to robot or vice versa. The present work further proposes a reinforcement learning-based algorithm employing these measures as a neuronal feedback signal from the human to the robot for dynamic learning of subtask assignment. The efficacy of this algorithm is validated in a simulation-based study. The simulation results reveal that even with relatively low decoding accuracies, successful robot learning of subtask assignment is feasible, with around 80% choice accuracy among four subtasks within 17 minutes of collaboration. The simulation results further reveal that scalability to more subtasks is feasible, mainly accompanied by longer robot learning times. These findings demonstrate the usability of EEG-based neuro-cognitive measures for mediating the complex and largely unsolved problem of human-robot collaborative task planning.


Subject(s)
Robotics , Humans , Brain , Learning , Algorithms , Computer Simulation
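The abstract's central claim, that subtask assignment can be learned from a noisy neural feedback signal, can be sketched as a simple bandit-style simulation. This is an illustrative stand-in, not the paper's algorithm: the "EEG decoder" is modeled as a binary feedback channel that reports the human's hidden preference correctly only with probability `DECODING_ACC`, and all names (`preference`, `DECODING_ACC`) are assumptions of this sketch.

```python
# Hypothetical bandit sketch (not the paper's algorithm): learning which
# partner should handle each subtask from a noisy binary feedback signal,
# standing in for a low-accuracy EEG-decoded takeover-anticipation measure.
import random

random.seed(1)

N_SUBTASKS = 4
DECODING_ACC = 0.65  # probability the decoded feedback has the right sign
# Hidden human preference per subtask: 0 = human handles it, 1 = robot does.
preference = [random.randint(0, 1) for _ in range(N_SUBTASKS)]

# One value estimate and pull count per (subtask, assignment choice).
q = [[0.0, 0.0] for _ in range(N_SUBTASKS)]
counts = [[0, 0] for _ in range(N_SUBTASKS)]

for trial in range(2000):
    s = trial % N_SUBTASKS
    # epsilon-greedy choice of who handles subtask s
    if random.random() < 0.1:
        a = random.randint(0, 1)
    else:
        a = 0 if q[s][0] >= q[s][1] else 1
    # True feedback: +1 if the choice matches the preference, else -1;
    # the noisy decoder flips its sign with probability 1 - DECODING_ACC.
    r = 1.0 if a == preference[s] else -1.0
    if random.random() > DECODING_ACC:
        r = -r
    counts[s][a] += 1
    q[s][a] += (r - q[s][a]) / counts[s][a]  # incremental mean update

learned = [0 if q[s][0] >= q[s][1] else 1 for s in range(N_SUBTASKS)]
choice_acc = sum(l == p for l, p in zip(learned, preference)) / N_SUBTASKS
print(f"learned assignment matches preference for {choice_acc:.0%} of subtasks")
```

Even at 65% decoding accuracy, the expected reward still has the correct sign (+0.3 for a matching choice, -0.3 otherwise), so averaging over repeated trials recovers the preferred assignment, which is the same intuition behind the abstract's claim that low decoding accuracies suffice.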
3.
Sci Rep ; 12(1): 20764, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36456595

ABSTRACT

When a human and a machine collaborate on a shared task, ambiguous events might occur that the human partner perceives as errors. In such events, spontaneous error-related potentials (ErrPs) are evoked in the human brain. Knowing whom the human perceived as responsible for an error would help a machine in co-adaptation and shared-control paradigms to better adapt to human preferences. We therefore ask whether self- and agent-related errors evoke different ErrPs. Eleven subjects participated in an electroencephalography human-agent collaboration experiment with a collaborative trajectory-following task on two collaboration levels, where movement errors occurred as trajectory deviations. Independently of the collaboration level, we observed a higher response amplitude at the midline central Cz electrode for self-related errors than for observed errors made by the agent. On average, Support Vector Machines classified self- and agent-related errors with 72.64% accuracy using subject-specific features. These results demonstrate that ErrPs can tell whether a person relates an error to themselves or to an external autonomous agent during collaboration. The collaborative machine thus receives more informed feedback on error attribution, allowing appropriate error identification, correction, and avoidance in future actions.


Subject(s)
Brain-Computer Interfaces , Humans , Electroencephalography , Support Vector Machine , Movement , Acclimatization
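The subject-specific SVM classification described above can be sketched in outline. This is not the authors' feature extraction or pipeline: the "epochs" below are simulated, with self-related errors given a higher mean simulated amplitude than agent-related ones, mirroring the reported Cz amplitude difference; the subject count (11) matches the abstract.

```python
# Hypothetical sketch (not the paper's pipeline): one SVM per subject on
# simulated error epochs, self-related vs. agent-related.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_subjects, n_epochs, n_features = 11, 60, 8

per_subject_acc = []
for _ in range(n_subjects):
    # Self-related errors get a higher simulated amplitude on average,
    # mirroring the reported Cz effect.
    self_err = rng.normal(1.0, 1.0, size=(n_epochs, n_features))
    agent_err = rng.normal(0.0, 1.0, size=(n_epochs, n_features))
    X = np.vstack([self_err, agent_err])
    y = np.array([1] * n_epochs + [0] * n_epochs)
    # Subject-specific classifier: fit and cross-validate within one subject.
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    per_subject_acc.append(cross_val_score(clf, X, y, cv=5).mean())

mean_acc = float(np.mean(per_subject_acc))
print(f"mean subject-specific accuracy: {mean_acc:.2%}")
```

Training per subject, as the abstract does, sidesteps inter-subject variability in ErrP morphology at the cost of needing calibration data from each new user.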