Results 1 - 11 of 11
1.
Sci Robot ; 9(91): eadj3665, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38924424

ABSTRACT

Sense of joint agency (SoJA) is the sense of control experienced by humans when acting with others to bring about changes in the shared environment. SoJA is proposed to arise from the sensorimotor predictive processes underlying action control and monitoring. Because SoJA is a ubiquitous phenomenon occurring when we perform actions with other humans, it is of great interest and importance to understand whether-and under what conditions-SoJA occurs in collaborative tasks with humanoid robots. In this study, using behavioral measures and neural responses measured by electroencephalography (EEG), we aimed to evaluate whether SoJA occurs in joint action with the humanoid robot iCub and whether its emergence is influenced by the perceived intentionality of the robot. Behavioral results show that participants experienced SoJA with the robot partner when it was presented as an intentional agent but not when it was presented as a mechanical artifact. EEG results show that the mechanism that influences the emergence of SoJA in the condition when the robot is presented as an intentional agent is the ability to form similarly accurate predictions about the sensory consequences of our own and others' actions, leading to similar modulatory activity over sensory processing. Together, our results shed light on the joint sensorimotor processing mechanisms underlying the emergence of SoJA in human-robot interaction and underscore the importance of attribution of intentionality to the robot in human-robot collaboration.


Subject(s)
Electroencephalography , Intention , Robotics , Humans , Robotics/instrumentation , Male , Female , Adult , Young Adult , Cooperative Behavior , Psychomotor Performance/physiology
2.
Behav Res Methods ; 2024 May 23.
Article in English | MEDLINE | ID: mdl-38782872

ABSTRACT

In the last decade, scientists investigating human social cognition have started bringing traditional laboratory paradigms more "into the wild" to examine how socio-cognitive mechanisms of the human brain work in real-life settings. As this implies transferring 2D observational paradigms to 3D interactive environments, there is a risk of compromising experimental control. In this context, we propose a methodological approach which uses humanoid robots as proxies of social interaction partners and embeds them in experimental protocols that adapt classical paradigms of cognitive psychology to interactive scenarios. This allows for a relatively high degree of "naturalness" of interaction and excellent experimental control at the same time. Here, we present two case studies where our methods and tools were applied and replicated across two different laboratories, namely the Italian Institute of Technology in Genova (Italy) and the Agency for Science, Technology and Research in Singapore. In the first case study, we present a replication of an interactive version of a gaze-cueing paradigm reported in Kompatsiari et al. (J Exp Psychol Gen 151(1):121-136, 2022). The second case study presents a replication of a "shared experience" paradigm reported in Marchesi et al. (Technol Mind Behav 3(3):11, 2022). As both studies replicate results across labs and different cultures, we argue that our methods allow for reliable and replicable setups, even though the protocols are complex and involve social interaction. We conclude that our approach can be of benefit to the research field of social cognition and grant higher replicability, for example, in cross-cultural comparisons of social cognition mechanisms.

3.
Psychophysiology ; : e14587, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600626

ABSTRACT

Cognitive processes deal with contradictory demands in social contexts. On the one hand, social interactions imply a demand for cooperation, which requires processing social signals, and on the other, demands for selective attention require ignoring irrelevant signals, to avoid overload. We created a task with a humanoid robot displaying irrelevant social signals, imposing conflicting demands on selective attention. Participants interacted with the robot as a team (high social demand; n = 23) or a passive co-actor (low social demand; n = 19). We observed that theta oscillations indexed conflict processing of social signals. Subsequently, alpha oscillations were sensitive to the conflicting social signals and the mode of interaction. These findings suggest that brains have distinct mechanisms for dealing with the complexity of social interaction and that these mechanisms are activated differently depending on the mode of the interaction. Thus, how we process environmental stimuli depends on the beliefs held regarding our social context.

4.
J Cogn Neurosci ; 35(10): 1670-1680, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37432740

ABSTRACT

Communicative gaze (e.g., mutual or averted) has been shown to affect attentional orienting. However, no study to date has clearly separated the neural basis of the pure social component that modulates attentional orienting in response to communicative gaze from other processes that might be a combination of attentional and social effects. We used TMS to isolate the purely social effects of communicative gaze on attentional orienting. Participants completed a gaze-cueing task with a humanoid robot who engaged either in mutual or in averted gaze before shifting its gaze. Before the task, participants received either sham stimulation (baseline), stimulation of right TPJ (rTPJ), or dorsomedial prefrontal cortex (dmPFC). Results showed, as expected, that communicative gaze affected attentional orienting in baseline condition. This effect was not evident for rTPJ stimulation. Interestingly, stimulation to rTPJ also canceled out attentional orienting altogether. On the other hand, dmPFC stimulation eliminated the socially driven difference in attention orienting between the two gaze conditions while maintaining the basic general attentional orienting effect. Thus, our results allowed for separation of the pure social effect of communicative gaze on attentional orienting from other processes that are a combination of social and generic attentional components.


Subject(s)
Attention , Prefrontal Cortex , Humans , Reaction Time/physiology , Attention/physiology , Communication , Cues , Fixation, Ocular
5.
Autism Res ; 16(5): 997-1008, 2023 05.
Article in English | MEDLINE | ID: mdl-36847354

ABSTRACT

The concept of scaffolding refers to the support that the environment provides in the acquisition and consolidation of new abilities. Technological advancements allow for support in the acquisition of cognitive capabilities, such as second language acquisition using simple smartphone applications. There is, however, one domain of cognition that has scarcely been addressed in the context of technologically assisted scaffolding: social cognition. We explored the possibility of supporting the acquisition of social competencies in a group of children with autism spectrum disorder engaged in a rehabilitation program (age = 5.8 ± 1.14, 10 females, 33 males) by designing two robot-assisted training protocols tailored to Theory of Mind competencies. One protocol was performed with a humanoid robot and the other (control) with a non-anthropomorphic robot. We analyzed changes in NEPSY-II scores before and after the training using mixed-effects models. Our results showed that activities with the humanoid robot significantly improved NEPSY-II scores on the ToM scale. We claim that the motor repertoire of humanoids makes them ideal platforms for artificial scaffolding of social skills in individuals with autism, as they can evoke social mechanisms similar to those elicited in human-human interaction, without exerting the same social pressure that another human might.


Subject(s)
Autism Spectrum Disorder , Robotics , Male , Child , Female , Humans , Autism Spectrum Disorder/psychology , Social Cognition , Robotics/methods , Interpersonal Relations , Cognition
6.
Sci Rep ; 12(1): 13845, 2022 08 16.
Article in English | MEDLINE | ID: mdl-35974080

ABSTRACT

Sense of Agency (SoA) is the feeling of being in control of one's actions and their outcomes. In a social context, people can experience a "vicarious" SoA over another human's actions; however, it is still controversial whether the same occurs in Human-Robot Interaction (HRI). The present study aimed at understanding whether humanoid robots may elicit vicarious SoA in humans, and whether the emergence of this phenomenon depends on the attribution of intentionality towards robots. We asked adult participants to perform an Intentional Binding (IB) task alone and with the humanoid iCub robot, reporting the time of occurrence of both self- and iCub-generated actions. Before the experiment, participants' degree of attribution of intentionality towards robots was assessed. Results showed that participants experienced vicarious SoA over iCub-generated actions. Moreover, intentionality attribution positively predicted the magnitude of vicarious SoA. In conclusion, our results highlight the importance of factors such as human-likeness and attribution of intentionality for the emergence of vicarious SoA towards robots.


Subject(s)
Robotics , Adult , Emotions , Humans , Social Perception
7.
Front Robot AI ; 9: 770165, 2022.
Article in English | MEDLINE | ID: mdl-35321344

ABSTRACT

Social robotics is an emerging field that is expected to grow rapidly in the near future. In fact, robots increasingly operate in close proximity to humans or even collaborate with them in joint tasks. In this context, how to endow a humanoid robot with the social behavioral skills typical of human-human interactions is still an open problem. Among the countless social cues needed to establish natural social attunement, this article reports our research toward the implementation of a mechanism for estimating gaze direction, focusing in particular on mutual gaze as a fundamental social cue in face-to-face interactions. We propose a learning-based framework to automatically detect eye contact events in online interactions with human partners. The proposed solution achieved high performance both in silico and in experimental scenarios. We expect this work to be the first step toward an attentive architecture able to support scenarios in which robots are perceived as social partners.

8.
Sci Robot ; 6(58): eabc5044, 2021 Sep 08.
Article in English | MEDLINE | ID: mdl-34516747

ABSTRACT

In most everyday life situations, the brain needs to engage not only in making decisions but also in anticipating and predicting the behavior of others. In such contexts, gaze can be highly informative about others' intentions, goals, and upcoming decisions. Here, we investigated whether a humanoid robot's gaze (mutual or averted) influences the way people strategically reason in a social decision-making context. Specifically, participants played a strategic game with the robot iCub while we measured their behavior and neural activity by means of electroencephalography (EEG). Participants were slower to respond when iCub established mutual gaze before their decision, relative to averted gaze. This was associated with a higher decision threshold in the drift diffusion model and accompanied by more synchronized EEG alpha activity. In addition, we found that participants reasoned about the robot's actions in both conditions. However, those who mostly experienced the averted gaze were more likely to adopt a self-oriented strategy, and their neural activity showed higher sensitivity to outcomes. Together, these findings suggest that robot gaze acts as a strong social signal for humans, modulating response times, decision threshold, neural synchronization, as well as choice strategies and sensitivity to outcomes. This has strong implications for all contexts involving human-robot interaction, from robotics to clinical applications.


Subject(s)
Brain/physiology , Decision Making , Fixation, Ocular , Neurons/physiology , Adult , Behavior , Diffusion , Electroencephalography/methods , Equipment Design , Evoked Potentials , Female , Game Theory , Humans , Male , Man-Machine Systems , Reaction Time , Robotics , Signal Processing, Computer-Assisted , User-Computer Interface , Young Adult
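The drift diffusion account in entry 8 (mutual gaze raising the decision threshold and thereby slowing responses) can be illustrated with a minimal simulation. This is an illustrative sketch only, not the authors' analysis code; the function name and all parameter values are assumptions:

```python
import numpy as np

def simulate_ddm(threshold, drift=1.0, noise=1.0, dt=0.005, n_trials=100, seed=0):
    """Mean first-passage time of a two-boundary drift diffusion process.

    Evidence x accumulates at rate `drift` with Gaussian noise scaled by
    sqrt(dt) until it crosses +threshold or -threshold; non-decision time
    is omitted for simplicity.
    """
    rng = np.random.default_rng(seed)
    rts = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
    return float(np.mean(rts))

# A wider boundary separation (as estimated under mutual gaze) produces
# slower simulated responses than a narrower one (averted gaze).
assert simulate_ddm(threshold=2.0) > simulate_ddm(threshold=1.0)
```

The assertion reflects the qualitative pattern reported in the abstract: raising only the threshold parameter lengthens mean response times without changing the evidence accumulation rate.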
9.
Front Robot AI ; 8: 653537, 2021.
Article in English | MEDLINE | ID: mdl-34222350

ABSTRACT

The presence of artificial agents in our everyday lives is continuously increasing. Hence, the question of how human social cognition mechanisms are activated in interactions with artificial agents, such as humanoid robots, is frequently being asked. One interesting question is whether humans perceive humanoid robots as mere artifacts (interpreting their behavior with reference to their function, thereby adopting the design stance) or as intentional agents (interpreting their behavior with reference to mental states, thereby adopting the intentional stance). Due to their humanlike appearance, humanoid robots might be capable of evoking the intentional stance. On the other hand, the knowledge that humanoid robots are only artifacts should call for adopting the design stance. Thus, observing a humanoid robot might evoke a cognitive conflict between the natural tendency of adopting the intentional stance and the knowledge about the actual nature of robots, which should elicit the design stance. In the present study, we investigated the cognitive conflict hypothesis by measuring participants' pupil dilation during the completion of the InStance Test. Prior to each pupillary recording, participants were instructed to observe the humanoid robot iCub behaving in two different ways (either machine-like or humanlike behavior). Results showed that pupil dilation and response time patterns were predictive of individual biases in the adoption of the intentional or design stance in the IST. These results may suggest individual differences in mental effort and cognitive flexibility in reading and interpreting the behavior of an artificial agent.

10.
Front Robot AI ; 8: 642796, 2021.
Article in English | MEDLINE | ID: mdl-34124174

ABSTRACT

Artificial agents are on their way to interacting with us daily. Thus, the design of embodied artificial agents that can easily cooperate with humans is crucial for their deployment in social scenarios. Endowing artificial agents with human-like behavior may boost individuals' engagement during the interaction. We tested this hypothesis in two screen-based experiments. In the first, we compared the attentional engagement displayed by participants while they observed the same set of behaviors performed by an avatar of a humanoid robot and by a human. In the second experiment, we assessed individuals' tendency to attribute anthropomorphic traits to the same agents displaying the same behaviors. The results of both experiments suggest that individuals need less effort to process and interpret an artificial agent's behavior when it closely resembles that of a human being. Our results support the idea that including subtle hints of human-likeness in artificial agents' behavior would ease communication between them and their human counterparts during interactive scenarios.

11.
Cognition ; 194: 104109, 2020 01.
Article in English | MEDLINE | ID: mdl-31675616

ABSTRACT

In the presence of others, sense of agency (SoA), i.e., the perceived relationship between our own actions and external events, is reduced. The present study aimed at investigating whether the phenomenon of reduced SoA is observed in human-robot interaction, similarly to human-human interaction. To this end, we tested SoA when people interacted with a robot (Experiment 1), with a passive, non-agentic air pump (Experiment 2), or with both a robot and a human being (Experiment 3). Participants were asked to rate the perceived control they felt over the outcome of their action while performing a diffusion-of-responsibility task. Results showed that the intentional agency attributed to the artificial entity differently affected performance and the perceived SoA over the outcome of the task. Experiment 1 showed that, when participants successfully performed an action, they rated SoA over the outcome as lower in trials in which the robot was also able to act (but did not), compared to when they performed the task alone. However, this did not occur in Experiment 2, where the artificial entity was an air pump, which had the same influence on the task as the robot but in a passive manner, and thus lacked intentional agency. Results of Experiment 3 showed that SoA was reduced similarly for the human and robot agents, thereby indicating that attribution of intentional agency plays a crucial role in the reduction of SoA. Together, our results suggest that interacting with robotic agents affects SoA similarly to interacting with other humans, but differently from interacting with non-agentic mechanical devices. This has important implications for applied social robotics, where a subjective decrease in SoA could have negative consequences, such as in robot-assisted care in hospitals.


Subject(s)
Intention , Motor Activity/physiology , Psychomotor Performance/physiology , Robotics , Social Interaction , Adult , Female , Humans , Male , Young Adult