Results 1 - 20 of 63
1.
Sci Robot ; 9(91): eadj3665, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38924424

ABSTRACT

Sense of joint agency (SoJA) is the sense of control experienced by humans when acting with others to bring about changes in the shared environment. SoJA is proposed to arise from the sensorimotor predictive processes underlying action control and monitoring. Because SoJA is a ubiquitous phenomenon occurring when we perform actions with other humans, it is of great interest and importance to understand whether, and under what conditions, SoJA occurs in collaborative tasks with humanoid robots. In this study, using behavioral measures and neural responses measured by electroencephalography (EEG), we aimed to evaluate whether SoJA occurs in joint action with the humanoid robot iCub and whether its emergence is influenced by the perceived intentionality of the robot. Behavioral results show that participants experienced SoJA with the robot partner when it was presented as an intentional agent but not when it was presented as a mechanical artifact. EEG results show that the mechanism underlying the emergence of SoJA when the robot is presented as an intentional agent is the ability to form similarly accurate predictions about the sensory consequences of our own and others' actions, leading to similar modulatory activity over sensory processing. Together, our results shed light on the joint sensorimotor processing mechanisms underlying the emergence of SoJA in human-robot interaction and underscore the importance of attributing intentionality to the robot in human-robot collaboration.


Subject(s)
Electroencephalography , Intention , Robotics , Humans , Robotics/instrumentation , Male , Female , Adult , Young Adult , Cooperative Behavior , Psychomotor Performance/physiology
2.
Behav Res Methods ; 2024 May 23.
Article in English | MEDLINE | ID: mdl-38782872

ABSTRACT

In the last decade, scientists investigating human social cognition have started bringing traditional laboratory paradigms more "into the wild" to examine how socio-cognitive mechanisms of the human brain work in real-life settings. As this implies transferring 2D observational paradigms to 3D interactive environments, there is a risk of compromising experimental control. In this context, we propose a methodological approach which uses humanoid robots as proxies of social interaction partners and embeds them in experimental protocols that adapt classical paradigms of cognitive psychology to interactive scenarios. This allows for a relatively high degree of "naturalness" of interaction and excellent experimental control at the same time. Here, we present two case studies where our methods and tools were applied and replicated across two different laboratories, namely the Italian Institute of Technology in Genova (Italy) and the Agency for Science, Technology and Research in Singapore. In the first case study, we present a replication of an interactive version of a gaze-cueing paradigm reported in Kompatsiari et al. (J Exp Psychol Gen 151(1):121-136, 2022). The second case study presents a replication of a "shared experience" paradigm reported in Marchesi et al. (Technol Mind Behav 3(3):11, 2022). As both studies replicate results across labs and different cultures, we argue that our methods allow for reliable and replicable setups, even though the protocols are complex and involve social interaction. We conclude that our approach can be of benefit to the research field of social cognition and grant higher replicability, for example, in cross-cultural comparisons of social cognition mechanisms.

3.
Psychophysiology ; : e14587, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600626

ABSTRACT

Cognitive processes deal with contradictory demands in social contexts. On the one hand, social interactions imply a demand for cooperation, which requires processing social signals; on the other, demands for selective attention require ignoring irrelevant signals to avoid overload. We created a task with a humanoid robot displaying irrelevant social signals, imposing conflicting demands on selective attention. Participants interacted with the robot as a team (high social demand; n = 23) or as a passive co-actor (low social demand; n = 19). We observed that theta oscillations indexed conflict processing of social signals. Subsequently, alpha oscillations were sensitive to the conflicting social signals and the mode of interaction. These findings suggest that the brain has distinct mechanisms for dealing with the complexity of social interaction and that these mechanisms are engaged differently depending on the mode of interaction. Thus, how we process environmental stimuli depends on the beliefs we hold about our social context.

4.
Cogn Sci ; 47(12): e13393, 2023 12.
Article in English | MEDLINE | ID: mdl-38133602

ABSTRACT

In our daily lives, we are continually involved in decision-making situations, many of which take place in the context of social interaction. Despite the ubiquity of such situations, there remains a gap in our understanding of how decision-making unfolds in social contexts, and how communicative signals, such as social cues and feedback, impact the choices we make. Interestingly, there is a new social context to which humans are increasingly exposed: social interaction not only with other humans but also with artificial agents, such as robots or avatars. Given these new technological developments, it is of great interest to address the question of whether, and in what way, social signals exhibited by non-human agents influence decision-making. The present study aimed to examine whether robot non-verbal communicative behavior has an effect on human decision-making. To this end, we implemented a two-alternative-choice task in which participants were to guess which of two presented cups was covering a ball, an adaptation of the "Shell Game." A robot avatar acted as a game partner producing social cues and feedback. We manipulated the robot's cues (pointing toward one of the cups) before the participant's decision and its feedback ("thumb up" or no feedback) after the decision. We found that participants were slower (compared to other conditions) when cues were mostly invalid and the robot reacted positively to wins. We argue that this was due to the incongruence of the signals (cue vs. feedback), and thus a violation of expectations. In sum, our findings show that incongruence in pre- and post-decision social signals from a robot significantly influences task performance, highlighting the importance of understanding expectations toward social robots for effective human-robot interactions.


Subject(s)
Robotics , Humans , Motivation , Communication , Cues , Social Environment
5.
Cortex ; 169: 249-258, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37956508

ABSTRACT

Previous work shows that in some instances artificial agents, such as robots, can elicit higher-order socio-cognitive mechanisms similar to those elicited by humans. This suggests that socio-cognitive mechanisms such as mentalizing, originally developed for interaction with other humans, might be flexibly (re-)used, or "hijacked", for approaching this new category of interaction partners (Wykowska, 2020). In this study, we set out to identify neural markers of such flexible reuse of socio-cognitive mechanisms. We focused on fronto-parietal theta synchronization, as it has been proposed to be a substrate of cognitive flexibility in general (Fries, 2005). We analyzed EEG data from two experiments (Bossi et al., 2020; Roselli et al., submitted) in which participants completed a test measuring their individual likelihood of adopting the intentional stance towards robots, the intentional stance test (IST). Our results show that participants with higher scores on the IST, indicating a higher likelihood of adopting the intentional stance towards a robot, had significantly higher theta synchronization values than participants with lower scores on the IST. These results suggest that long-range synchronization in the theta band might be a marker of a socio-cognitive process that can be flexibly applied towards non-human agents, such as robots.
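As an illustration of the kind of long-range theta synchronization measure discussed in this abstract, the sketch below computes a theta-band phase-locking value (PLV) between one frontal and one parietal channel. The band limits, sampling rate, and data layout are illustrative assumptions; the study's actual synchronization metric and electrode selection are not specified here.

```python
# Sketch: theta-band (4-7 Hz) phase-locking value (PLV) between a frontal
# and a parietal electrode, as one possible index of fronto-parietal
# synchronization. Channel names, band edges, and sampling rate are
# illustrative assumptions, not parameters reported in the abstract.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_plv(frontal, parietal, fs=500.0, band=(4.0, 7.0)):
    """frontal, parietal: arrays of shape (n_trials, n_samples)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_f = np.angle(hilbert(filtfilt(b, a, frontal, axis=-1), axis=-1))
    phase_p = np.angle(hilbert(filtfilt(b, a, parietal, axis=-1), axis=-1))
    # PLV: consistency of the phase difference across trials, per sample.
    plv = np.abs(np.mean(np.exp(1j * (phase_f - phase_p)), axis=0))
    return plv.mean()  # collapse over time for a single summary value

# Comparing this value between high-IST and low-IST participant groups
# would follow the logic of the reported analysis.
```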


Subject(s)
Cognition , Theta Rhythm , Humans , Electroencephalography
6.
Sci Rep ; 13(1): 11689, 2023 07 19.
Article in English | MEDLINE | ID: mdl-37468517

ABSTRACT

Joint attention is a pivotal mechanism underlying the human ability to interact with one another. The fundamental nature of joint attention in the context of social cognition has led researchers to develop tasks that address this mechanism and operationalize it in a laboratory setting, in the form of a gaze-cueing paradigm. In the present study, we addressed the question of whether engaging in joint attention with a robot face is culture-specific. We adapted a classical gaze-cueing paradigm such that a robot avatar cued participants' gaze after either engaging them in eye contact or not. Our critical question of interest was whether the gaze cueing effect (GCE) is stable across different cultures, especially if cognitive resources to exert top-down control are reduced. To achieve the latter, we introduced a mathematical stress task orthogonally to the gaze cueing protocol. Results showed a larger GCE in the Singapore sample, relative to the Italian sample, independent of gaze type (eye contact vs. no eye contact) or amount of experienced stress, which translates into available cognitive resources. Moreover, after each block participants rated how engaged they felt with the robot avatar during the task: Italian participants rated the avatar as more engaging during eye contact blocks than during no eye contact blocks, whereas Singaporean participants showed no difference in engagement as a function of gaze type. We discuss the results in terms of cultural differences in robot-induced joint attention and engagement in eye contact, as well as the dissociation between implicit and explicit measures related to the processing of gaze.
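For illustration, a gaze cueing effect of the kind reported here is conventionally computed as the reaction-time difference between invalidly and validly cued trials. The sketch below assumes a hypothetical trial table with "participant", "validity", and "rt" columns; it is not the authors' analysis code.

```python
# Sketch: computing a gaze-cueing effect (GCE) per participant as the
# mean reaction-time difference between invalidly and validly cued trials.
# Column names ("participant", "validity", "rt") are hypothetical.
import pandas as pd

def gaze_cueing_effect(trials: pd.DataFrame) -> pd.Series:
    rt = trials.groupby(["participant", "validity"])["rt"].mean().unstack()
    return rt["invalid"] - rt["valid"]  # positive values = attention followed the gaze

# A cross-cultural comparison would then test these per-participant GCEs
# between samples (e.g., Singapore vs. Italy) with a standard group test.
```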


Subject(s)
Interpersonal Relations , Robotics , Humans , Attention , Cues , Emotions , Fixation, Ocular
7.
J Cogn Neurosci ; 35(10): 1670-1680, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37432740

ABSTRACT

Communicative gaze (e.g., mutual or averted) has been shown to affect attentional orienting. However, no study to date has clearly separated the neural basis of the purely social component that modulates attentional orienting in response to communicative gaze from other processes that might be a combination of attentional and social effects. We used TMS to isolate the purely social effects of communicative gaze on attentional orienting. Participants completed a gaze-cueing task with a humanoid robot that engaged either in mutual or in averted gaze before shifting its gaze. Before the task, participants received either sham stimulation (baseline), stimulation of the right temporoparietal junction (rTPJ), or stimulation of the dorsomedial prefrontal cortex (dmPFC). Results showed, as expected, that communicative gaze affected attentional orienting in the baseline condition. This effect was not evident after rTPJ stimulation; interestingly, rTPJ stimulation also canceled out attentional orienting altogether. On the other hand, dmPFC stimulation eliminated the socially driven difference in attentional orienting between the two gaze conditions while maintaining the basic, general attentional orienting effect. Thus, our results allowed us to separate the purely social effect of communicative gaze on attentional orienting from other processes that are a combination of social and generic attentional components.


Subject(s)
Attention , Prefrontal Cortex , Humans , Reaction Time/physiology , Attention/physiology , Communication , Cues , Fixation, Ocular
8.
Sci Rep ; 13(1): 10113, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37344497

ABSTRACT

Sense of Agency (SoA) is the feeling of control over one's actions and their outcomes. A well-established implicit measure of SoA is the temporal interval estimation paradigm, in which participants estimate the time interval between a voluntary action and its sensory consequence. In the present study, we aimed to investigate whether the valence of the action outcome modulated implicit SoA. Valence was manipulated through the interaction partner's (i) positive/negative facial expression or (ii) type of gaze (gaze contact or averted gaze). The interaction partner was the humanoid robot iCub. In Experiment 1, participants estimated the time interval between the onset of their action (a head movement towards the robot) and the robot's facial expression (happy vs. sad face). Experiment 2 was identical, except that the outcome of participants' action was the robot's type of gaze (gaze contact vs. averted). In Experiment 3, we assessed, in a within-subject design, the combined effect of the robot's type of facial expression and type of gaze. Results showed that, while the robot's facial expression did not affect participants' SoA (Experiment 1), the type of gaze affected SoA in both Experiment 2 and Experiment 3. Overall, our findings show that the robot's gaze is a more potent factor than its facial expression in modulating participants' implicit SoA.
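For readers unfamiliar with the interval estimation paradigm, the sketch below scores implicit SoA as the mean estimated action-outcome interval per participant and condition (shorter estimates are conventionally read as stronger SoA). The data layout and column names are assumptions for illustration only.

```python
# Sketch: scoring implicit Sense of Agency from a temporal interval
# estimation task. Shorter estimated action-outcome intervals are usually
# taken to index stronger SoA. Data layout and column names are assumptions.
import pandas as pd

def mean_interval_estimates(trials: pd.DataFrame) -> pd.DataFrame:
    # trials: one row per trial with columns
    #   "participant", "condition" (e.g., "gaze_contact" / "averted"),
    #   "estimated_ms" (participant's interval judgement in milliseconds)
    return (trials
            .groupby(["participant", "condition"])["estimated_ms"]
            .mean()
            .unstack())

# Comparing the per-participant means across conditions (e.g., paired t-test)
# would show whether the robot's gaze modulated implicit SoA.
```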


Subject(s)
Communication , Emotions , Facial Expression , Fixation, Ocular , Psychological Theory , Robotics , Adolescent , Adult , Female , Humans , Male , Middle Aged , Young Adult , Emotions/physiology , Robotics/methods , Time Perception , Happiness , Sadness
9.
Autism Res ; 16(5): 997-1008, 2023 05.
Article in English | MEDLINE | ID: mdl-36847354

ABSTRACT

The concept of scaffolding refers to the support that the environment provides in the acquisition and consolidation of new abilities. Technological advancements allow for support in the acquisition of cognitive capabilities, such as second language acquisition using simple smartphone applications. There is, however, one domain of cognition that has been scarcely addressed in the context of technologically assisted scaffolding: social cognition. We explored the possibility of supporting the acquisition of social competencies in a group of children with autism spectrum disorder engaged in a rehabilitation program (age = 5.8 ± 1.14, 10 females, 33 males) by designing two robot-assisted training protocols tailored to Theory of Mind (ToM) competencies. One protocol was performed with a humanoid robot and the other (control) with a non-anthropomorphic robot. We analyzed changes in NEPSY-II scores before and after the training using mixed-effects models. Our results showed that activities with the humanoid significantly improved NEPSY-II scores on the ToM scale. We claim that the motor repertoire of humanoids makes them ideal platforms for artificial scaffolding of social skills in individuals with autism, as they can evoke social mechanisms similar to those elicited in human-human interaction, without exerting the same social pressure that another human might.
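The mixed-effects analysis mentioned above could, in principle, be set up as in the sketch below, with a random intercept per child and a time-by-robot interaction testing the training effect. The formula, coding, and column names are illustrative assumptions rather than the authors' exact model.

```python
# Sketch: a linear mixed-effects model of pre/post NEPSY-II Theory of Mind
# scores with a random intercept per child, in the spirit of the analysis
# described above. Formula, column names, and coding are illustrative
# assumptions, not the authors' reported model.
import statsmodels.formula.api as smf

def fit_tom_model(df):
    # df columns: "score" (NEPSY-II ToM), "time" ("pre"/"post"),
    #             "robot" ("humanoid"/"non_anthropomorphic"), "child" (ID)
    model = smf.mixedlm("score ~ time * robot", data=df, groups=df["child"])
    return model.fit()

# The time-by-robot interaction term tests whether the pre-to-post improvement
# is larger in the humanoid condition than in the control condition.
```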


Subject(s)
Autism Spectrum Disorder , Robotics , Male , Child , Female , Humans , Autism Spectrum Disorder/psychology , Social Cognition , Robotics/methods , Interpersonal Relations , Cognition
10.
Sci Rep ; 12(1): 19073, 2022 11 09.
Article in English | MEDLINE | ID: mdl-36351956

ABSTRACT

In this paper, we investigate brain activity associated with complex visual tasks, showing that electroencephalography (EEG) data can help computer vision in reliably recognizing actions from the video footage used to stimulate human observers. Notably, we consider not only typical "explicit" video action benchmarks, but also more complex data sequences in which action concepts are only referred to implicitly. To this end, we consider a challenging action recognition benchmark dataset, Moments in Time, whose video sequences do not explicitly visualize actions, but only implicitly refer to them (e.g., fireworks in the sky as an extreme example of "flying"). We employ such videos as stimuli and involve a large sample of subjects to collect high-definition, multimodal EEG and video data designed for understanding action concepts. We discover an agreement among the brain activities of different subjects stimulated by the same video footage. We refer to this as "subjects consensus", and we design a computational pipeline to transfer knowledge from EEG to video, sharply boosting recognition performance.
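One simple way to operationalize the "subjects consensus" described above is the average pairwise correlation between different subjects' EEG-derived feature vectors for the same video, as in the hypothetical sketch below; the paper's actual pipeline is more elaborate and is not specified in the abstract.

```python
# Sketch: a basic consensus score -- the mean pairwise correlation between
# subjects' EEG feature vectors recorded for the same video clip. The feature
# representation is a placeholder, not the paper's method.
import numpy as np
from itertools import combinations

def subjects_consensus(features: np.ndarray) -> float:
    """features: array of shape (n_subjects, n_features) for one video."""
    pairs = combinations(range(features.shape[0]), 2)
    corrs = [np.corrcoef(features[i], features[j])[0, 1] for i, j in pairs]
    return float(np.mean(corrs))
```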


Subject(s)
Electroencephalography , Recognition, Psychology , Humans , Consensus , Brain
11.
Sci Rep ; 12(1): 14924, 2022 Sep 02.
Article in English | MEDLINE | ID: mdl-36056165

ABSTRACT

How individuals interpret robots' actions is a timely question in the context of the anticipated increase in robots' presence in human social environments in the decades to come. When facing robots, people might have a tendency to explain robots' actions in mentalistic terms, granting them intentions. However, how default or controllable this process is remains under debate. In four experiments, we asked participants to choose between mentalistic (intentional) and mechanistic (non-intentional) descriptions of depicted robot actions in various scenarios. Our results show the primacy of mentalistic descriptions, which were processed faster than mechanistic ones (experiment 1). This effect was even stronger under high vs. low cognitive load when people had to decide between the two alternatives (experiment 2). Interestingly, while there was no effect of cognitive load at the later stages of processing, arguing for controllability (experiment 3), imposing cognitive load on participants at an early stage of observation resulted in a faster attribution of mentalistic properties to the robot (experiment 4). We discuss these results in the context of the idea that social cognition is a default system.


Subject(s)
Mentalization , Robotics , Cognition , Humans , Social Environment , Social Perception
12.
J Cogn ; 5(1): 2, 2022.
Article in English | MEDLINE | ID: mdl-36072111

ABSTRACT

Robots are a new category of social agents that, thanks to their embodiment, can be used to train and support cognitive skills such as cognitive control. Several studies have shown that cognitive control mechanisms are sensitive to affective states induced by humor, mood, and symbolic feedback such as monetary rewards. In the present study, we investigated whether the social gaze of a humanoid robot can affect cognitive control mechanisms. To this end, we evaluated both conflict resolution and trial-by-trial adaptations during an auditory Simon task, as a function of the type of gaze feedback participants received in the previous trial from the iCub robot, namely mutual or avoiding gaze behaviour. Across three experiments, we compared the effect of mutual, avoiding (Exp1 and Exp2), and neutral (Exp3) gaze feedback between screen-based (Exp1) and physically embodied setups (Exp2 and Exp3). Results showed that iCub's social gaze feedback modulated conflict resolution, but not conflict adaptations. Specifically, the Simon effect was increased following mutual gaze feedback from iCub. Moreover, the modulatory effect was observed for the embodied setup, in which the robot could engage or avoid eye contact in real time (Exp2), but not for the screen-based setting (Exp1). Our findings show for the first time that social feedback in Human-Robot Interaction, such as social gaze, can be used to modulate cognitive control. The results highlight the advantage of using robots to evaluate and train complex cognitive skills in both healthy and clinical populations.
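As a pointer to how such an analysis is typically scored, the sketch below computes the Simon effect (incongruent minus congruent reaction times) separately for trials preceded by mutual versus averted gaze feedback. Column names and condition labels are hypothetical.

```python
# Sketch: computing the Simon effect (incongruent minus congruent RT)
# separately for trials preceded by mutual vs. averted gaze feedback.
# Column names ("prev_gaze", "congruency", "rt") are hypothetical.
import pandas as pd

def simon_effect_by_feedback(trials: pd.DataFrame) -> pd.Series:
    rt = (trials
          .groupby(["prev_gaze", "congruency"])["rt"]
          .mean()
          .unstack())
    return rt["incongruent"] - rt["congruent"]

# A larger value after "mutual" than after "averted" feedback would mirror
# the reported increase of the Simon effect following mutual gaze.
```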

13.
Front Robot AI ; 9: 863319, 2022.
Article in English | MEDLINE | ID: mdl-36093211

ABSTRACT

Anthropomorphism describes the tendency to ascribe human characteristics to nonhuman agents. Due to the increased interest in social robotics, anthropomorphism has become a core concept of human-robot interaction (HRI) studies. However, the wide use of this concept has led to inconsistencies in how it is defined. In the present study, we propose an integrative framework of anthropomorphism (IFA) encompassing three levels: cultural, individual general tendencies, and direct attributions of human-like characteristics to robots. We also acknowledge the Western bias of the state-of-the-art view of anthropomorphism and develop a cross-cultural approach. In two studies, participants from various cultures completed tasks and questionnaires assessing their animism beliefs and their individual tendencies to endow robots with mental properties and spirit and to consider them as more or less human. We also evaluated their attributions of mental anthropomorphic characteristics to robots (i.e., cognition, emotion, intention). Our results demonstrate, in both experiments, that a three-level model (as hypothesized in the IFA) reliably explains the collected data. We found an overall influence of animism (cultural level) on the two lower levels, and an influence of the individual tendencies to mentalize, spiritualize, and humanize (individual level) on the attribution of cognition, emotion, and intention. In addition, in Experiment 2, the analyses show a more anthropocentric view of the mind for Western than for East-Asian participants. As such, Western perception of robots depends more on humanization, whereas East-Asian perception depends more on mentalization. We further discuss these results in relation to the anthropomorphism literature and argue for the use of an integrative cross-cultural model in HRI research.

14.
Sci Rep ; 12(1): 13845, 2022 08 16.
Article in English | MEDLINE | ID: mdl-35974080

ABSTRACT

Sense of Agency (SoA) is the feeling of being in control of one's actions and their outcomes. In a social context, people can experience a "vicarious" SoA over another human's actions; however, it is still controversial whether the same occurs in Human-Robot Interaction (HRI). The present study aimed at understanding whether humanoid robots may elicit vicarious SoA in humans, and whether the emergence of this phenomenon depends on the attribution of intentionality towards robots. We asked adult participants to perform an Intentional Binding (IB) task alone and with the humanoid iCub robot, reporting the time of occurrence of both self- and iCub-generated actions. Before the experiment, participants' degree of attribution of intentionality towards robots was assessed. Results showed that participants experienced vicarious SoA over iCub-generated actions. Moreover, intentionality attribution positively predicted the magnitude of vicarious SoA. In conclusion, our results highlight the importance of factors such as human-likeness and attribution of intentionality for the emergence of vicarious SoA towards robots.
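For illustration, Intentional Binding is commonly quantified as the shift of time judgements in operant blocks (action plus outcome) relative to baseline blocks. The sketch below assumes a hypothetical trial table and block labels; it is not the authors' scoring script.

```python
# Sketch: a conventional way to quantify Intentional Binding (IB) for
# action judgements -- the shift of the judgement error in operant blocks
# (action followed by an outcome) relative to baseline blocks (action only).
# Column names and block labels are assumptions.
import pandas as pd

def action_binding(trials: pd.DataFrame) -> pd.Series:
    # "error_ms" = reported time minus actual event time, in milliseconds
    err = (trials
           .groupby(["participant", "agent", "block"])["error_ms"]
           .mean()
           .unstack("block"))
    return err["operant"] - err["baseline"]  # positive = binding toward the outcome

# Regressing the iCub-agent binding scores on an intentionality-attribution
# questionnaire score would test whether attribution predicts vicarious SoA.
```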


Subject(s)
Robotics , Adult , Emotions , Humans , Social Perception
15.
Acta Psychol (Amst) ; 228: 103660, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35779453

ABSTRACT

When we read fiction, we encounter characters that interact in the story. As such, we encode that information and comprehend the stories. Prior studies suggest that this comprehension process is facilitated by taking the perspective of characters during reading. Thus, two questions of interest are whether people take the perspective of characters that are not perceived as capable of experiencing perspectives (e.g., robots), and whether current models of language comprehension can explain these differences between human and nonhuman protagonists (or lack thereof) during reading. The study aims to (1) compare the situation model (i.e., a model that factors in a protagonist's perspective) and the RI-VAL model (which relies more on comparisons of newly acquired information with information stored in long term memory) and (2) investigate whether differences in accessibility of information differ based on adopting the intentional stance towards a robot. To address the aims of our study, we designed a preregistered experiment in which participants read stories about one of three protagonists (an intentional robot, a mechanistic robot and a human) and answered questions about objects that were either occluded or not occluded from the protagonist's view. Based on the situation model, we expected faster responses to items that were not occluded compared to those that were occluded (i.e., the occlusion effect). However, based on the RI-VAL model, we expected overall differences between the protagonists would arise due to inconsistency with general world knowledge. The results of the pre-registered analysis showed no differences between the protagonists, nor differences in occlusion. However, a post-hoc analysis showed that the occlusion effect was shown only for the intentional robot but not for the human, nor mechanistic robot. Results also showed that depending on the age of the readers, the RI-VAL or the situation model is able to explain the results such that older participants "simulated" the situation about which they read (situation model), while younger adults compared new information with information stored in long-term memory (RI-VAL model). This suggests that comparing to information in long term memory is cognitively more costly. Therefore, with older adults used less cognitively demanding strategy of simulation.


Subject(s)
Reading , Robotics , Aged , Comprehension/physiology , Humans
16.
Front Robot AI ; 9: 770165, 2022.
Article in English | MEDLINE | ID: mdl-35321344

ABSTRACT

Social robotics is an emerging field that is expected to grow rapidly in the near future. In fact, robots increasingly operate in close proximity to humans or even collaborate with them in joint tasks. In this context, how to endow a humanoid robot with the social behavioral skills typical of human-human interactions remains an open problem. Among the countless social cues needed to establish natural social attunement, this article reports our research toward the implementation of a mechanism for estimating gaze direction, focusing in particular on mutual gaze as a fundamental social cue in face-to-face interactions. We propose a learning-based framework to automatically detect eye contact events in online interactions with human partners. The proposed solution achieved high performance both in silico and in experimental scenarios. Our work is expected to be the first step toward an attentive architecture able to support scenarios in which robots are perceived as social partners.
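A minimal, hypothetical version of a learning-based eye-contact detector of the kind described above could look like the sketch below, using a logistic-regression classifier over precomputed gaze and head-pose features; the authors' actual model, features, and performance figures are not given in the abstract.

```python
# Sketch: a simple eye-contact classifier over per-frame gaze/head-pose
# features. The feature set and evaluation scheme are placeholders, not the
# framework proposed in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def train_eye_contact_detector(X: np.ndarray, y: np.ndarray):
    """X: (n_frames, n_features) gaze/head-pose features; y: 1 = eye contact."""
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    clf.fit(X, y)
    return clf, scores.mean()

# At run time the fitted detector can label each incoming camera frame, and a
# short run of positive frames can be treated as a mutual-gaze event.
```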

17.
J Exp Psychol Gen ; 151(1): 121-136, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34323536

ABSTRACT

Eye contact constitutes a strong communicative signal in human interactions and has been shown to modulate various cognitive processes and states. However, little is known about its impact on gaze-mediated attentional orienting in the context of its interplay with strategic top-down control. Here, we aimed at investigating how the social component of eye contact interacts with the top-down strategic control. To this end, we designed a gaze cuing paradigm with the iCub humanoid robot, in which iCub either established eye contact with the participants before averting its gaze or avoided their eyes. Across four experiments, we manipulated gaze cue validity to either elicit strategic top-down inhibitory activity (25% validity) or to allow for relaxing the control mechanisms (50% validity). Also, we manipulated the stimulus-onset-asynchrony (SOA) to examine the dynamics of the top-down modulatory effects. Our results showed that eye contact influenced the gaze cuing effect when the strategic control was not required, by prolonging the prioritized processing of the gazed-at locations. Thus, the effect was observed only when the measurement was taken after a sufficient amount of time (1,000 ms SOA). However, when inhibitory control was necessary (25% validity), the social component was not potent enough to exert influence over the gaze cuing effect independently. Overall, we propose that strategic top-down control is the primary driving force over the gaze cuing effect and that the social aspect plays a modulatory effect by prolonging prioritized processing of gazed-at locations. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Attention , Fixation, Ocular , Cues , Eye , Humans , Nonverbal Communication
18.
Q J Exp Psychol (Hove) ; 75(4): 616-632, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34472397

ABSTRACT

Sense of Agency (SoA) is the feeling of control over one's actions and their consequences. In social contexts, people experience a "vicarious" SoA over other humans' actions; however, the phenomenon disappears when the other agent is a computer. This study aimed to investigate the factors that determine when humans experience vicarious SoA in Human-Robot Interaction (HRI). To this end, in two experiments, we disentangled two potential contributing factors: (1) the possibility of representing the robot's actions and (2) the adoption of Intentional Stance towards robots. Participants performed an Intentional Binding (IB) task reporting the time of occurrence for self- or robot-generated actions or sensory outcomes. To assess the role of action representation, the robot either performed a physical keypress (Experiment 1) or "acted" by sending a command via Bluetooth (Experiment 2). Before the experiment, attribution of intentionality to the robot was assessed. Results showed that when participants judged the occurrence of the action, vicarious SoA was predicted by the degree of attributed intentionality, but only when the robot's action was physical. Conversely, digital actions elicited the reversed effect of vicarious IB, suggesting that disembodied actions of robots are perceived as non-intentional. When participants judged the occurrence of the sensory outcome, vicarious SoA emerged only when the causing action was physical. Notably, intentionality attribution predicted vicarious SoA for sensory outcomes independently of the nature of the causing event, physical or digital. In conclusion, both intentionality attribution and action representation play a crucial role for vicarious SoA in HRI.


Subject(s)
Robotics , Emotions , Humans , Intention , Psychomotor Performance , Social Perception
19.
Front Neuroergon ; 3: 838136, 2022.
Article in English | MEDLINE | ID: mdl-38235447

ABSTRACT

As technological advances progress, we find ourselves in situations where we need to collaborate with artificial agents (e.g., robots, autonomous machines, and virtual agents). For example, autonomous machines will be part of search and rescue missions, space exploration, and decision aids during monitoring tasks (e.g., baggage screening at the airport). Efficient communication in these scenarios would be crucial for fluent interaction. While studies have examined the positive and engaging effect of social signals (i.e., gaze communication) on human-robot interaction, little is known about the effects of conflicting robot signals on the human actor's cognitive load. Moreover, it is unclear from a social neuroergonomics perspective how different brain regions synchronize or communicate with one another to deal with the cognitive load induced by conflicting signals in social situations with robots. The present study asked whether neural oscillations that correlate with conflict processing are observed between brain regions when participants view conflicting robot signals. Participants classified different objects based on their color after a robot (i.e., iCub), presented on a screen, simulated handing over the object to them. The robot proceeded to cue participants (with a head shift) to the correct or incorrect target location. Since prior work has shown that unexpected cues can interfere with oculomotor planning and induce conflict, we expected that conflicting robot social signals would interfere with the execution of actions. Indeed, we found that conflicting social signals elicited neural correlates of cognitive conflict as measured by mid-frontal theta oscillations. More importantly, we found higher coherence values between mid-frontal and posterior occipital electrode locations in the theta-frequency band for incongruent vs. congruent cues, which suggests that theta-band synchronization between these two regions allows for communication between cognitive control systems and gaze-related attentional mechanisms. We also found correlations between coherence values and behavioral performance (reaction times), which were moderated by the congruency of the robot signal. In sum, the influence of irrelevant social signals during goal-oriented tasks can be indexed by behavioral, neural oscillation, and brain connectivity patterns. These data provide insights into a new measure of cognitive load, which can also be used to predict human interaction with autonomous machines.
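As an illustration of the coherence analysis described above, the sketch below estimates theta-band spectral coherence between one mid-frontal and one occipital channel with SciPy. The sampling rate, segment length, and channel choice are assumptions for illustration.

```python
# Sketch: theta-band (4-7 Hz) spectral coherence between a mid-frontal and a
# posterior occipital channel, one way to index the fronto-occipital
# communication described above. Channel selection, sampling rate, and
# segment length are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

def theta_coherence(frontal: np.ndarray, occipital: np.ndarray,
                    fs: float = 500.0, band=(4.0, 7.0)) -> float:
    f, cxy = coherence(frontal, occipital, fs=fs, nperseg=int(fs))
    mask = (f >= band[0]) & (f <= band[1])
    return float(cxy[mask].mean())

# Comparing this value between incongruent and congruent robot-cue trials,
# and correlating it with reaction times, follows the logic of the analysis
# reported above.
```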

20.
Front Robot AI ; 8: 666586, 2021.
Article in English | MEDLINE | ID: mdl-34692776

ABSTRACT

In human-robot interactions, people tend to attribute to robots mental states such as intentions or desires in order to make sense of their behaviour. This cognitive strategy is termed the "intentional stance". Adopting the intentional stance influences how one considers, engages with, and behaves towards robots. However, people differ in their likelihood of adopting the intentional stance towards robots. Therefore, it seems crucial to assess these interindividual differences. In two studies, we developed and validated the structure of the Intentional Stance Task (IST), a task aimed at evaluating to what extent people adopt the intentional stance towards robot actions. The IST probes participants' stance by asking them to judge the plausibility of a mentalistic versus a mechanistic description of the behaviour of a robot depicted in a scenario composed of three photographs. Results showed a reliable psychometric structure of the IST. This paper therefore concludes with the proposal of using the IST as a proxy for assessing the degree of adoption of the intentional stance towards robots.
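As a hint at the kind of psychometric check involved in validating such a task, the sketch below computes Cronbach's alpha over a hypothetical matrix of IST item scores; the authors' full validation (e.g., the factor structure) goes beyond this.

```python
# Sketch: Cronbach's alpha over IST item scores, one standard internal-
# consistency check. The item-matrix layout is an assumption; this is not the
# authors' validation procedure.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: array of shape (n_participants, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)
```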
