Results 1 - 20 of 63
1.
Psychophysiology ; 61(8): e14587, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38600626

ABSTRACT

Cognitive processes deal with contradictory demands in social contexts. On the one hand, social interactions imply a demand for cooperation, which requires processing social signals, and on the other, demands for selective attention require ignoring irrelevant signals, to avoid overload. We created a task with a humanoid robot displaying irrelevant social signals, imposing conflicting demands on selective attention. Participants interacted with the robot as a team (high social demand; n = 23) or a passive co-actor (low social demand; n = 19). We observed that theta oscillations indexed conflict processing of social signals. Subsequently, alpha oscillations were sensitive to the conflicting social signals and the mode of interaction. These findings suggest that brains have distinct mechanisms for dealing with the complexity of social interaction and that these mechanisms are activated differently depending on the mode of the interaction. Thus, how we process environmental stimuli depends on the beliefs held regarding our social context.


Subject(s)
Attention, Psychological Conflict, Cooperative Behavior, Social Interaction, Humans, Attention/physiology, Male, Female, Young Adult, Adult, Theta Rhythm/physiology, Alpha Rhythm/physiology, Electroencephalography, Social Perception, Interpersonal Relations, Robotics
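Spectral markers such as the theta and alpha effects described above are typically quantified as band-limited power. Below is a minimal, hedged sketch of such a computation with SciPy; the epoch array, channel, sampling rate, and band limits are illustrative assumptions, not details taken from this study.

    # Band-power sketch: Welch PSD averaged within theta (4-7 Hz) and alpha (8-12 Hz).
    # Epoch data, sampling rate and band edges are placeholder assumptions.
    import numpy as np
    from scipy.signal import welch

    fs = 500                            # sampling rate in Hz (assumed)
    epoch = np.random.randn(2 * fs)     # placeholder for one 2-s single-channel EEG epoch

    def band_power(signal, fs, fmin, fmax):
        """Mean power spectral density within [fmin, fmax] Hz."""
        freqs, psd = welch(signal, fs=fs, nperseg=fs)
        mask = (freqs >= fmin) & (freqs <= fmax)
        return psd[mask].mean()

    theta_power = band_power(epoch, fs, 4, 7)
    alpha_power = band_power(epoch, fs, 8, 12)
    print(f"theta: {theta_power:.4f}, alpha: {alpha_power:.4f}")

In a real analysis these values would be computed per trial and condition and compared statistically, rather than on random placeholder data.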
2.
Behav Res Methods ; 56(7): 7543-7560, 2024 10.
Article in English | MEDLINE | ID: mdl-38782872

ABSTRACT

In the last decade, scientists investigating human social cognition have started bringing traditional laboratory paradigms more "into the wild" to examine how socio-cognitive mechanisms of the human brain work in real-life settings. As this implies transferring 2D observational paradigms to 3D interactive environments, there is a risk of compromising experimental control. In this context, we propose a methodological approach which uses humanoid robots as proxies of social interaction partners and embeds them in experimental protocols that adapt classical paradigms of cognitive psychology to interactive scenarios. This allows for a relatively high degree of "naturalness" of interaction and excellent experimental control at the same time. Here, we present two case studies where our methods and tools were applied and replicated across two different laboratories, namely the Italian Institute of Technology in Genova (Italy) and the Agency for Science, Technology and Research in Singapore. In the first case study, we present a replication of an interactive version of a gaze-cueing paradigm reported in Kompatsiari et al. (J Exp Psychol Gen 151(1):121-136, 2022). The second case study presents a replication of a "shared experience" paradigm reported in Marchesi et al. (Technol Mind Behav 3(3):11, 2022). As both studies replicate results across labs and different cultures, we argue that our methods allow for reliable and replicable setups, even though the protocols are complex and involve social interaction. We conclude that our approach can be of benefit to the research field of social cognition and grant higher replicability, for example, in cross-cultural comparisons of social cognition mechanisms.


Subject(s)
Social Cognition, Social Interaction, Humans, Robotics/methods, Male, Italy, Cues (Psychology), Female, Adult, Cognition/physiology, Interpersonal Relations
3.
J Cogn Neurosci ; 35(10): 1670-1680, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37432740

ABSTRACT

Communicative gaze (e.g., mutual or averted) has been shown to affect attentional orienting. However, no study to date has clearly separated the neural basis of the pure social component that modulates attentional orienting in response to communicative gaze from other processes that might be a combination of attentional and social effects. We used TMS to isolate the purely social effects of communicative gaze on attentional orienting. Participants completed a gaze-cueing task with a humanoid robot who engaged either in mutual or in averted gaze before shifting its gaze. Before the task, participants received either sham stimulation (baseline), stimulation of the right TPJ (rTPJ), or stimulation of the dorsomedial prefrontal cortex (dmPFC). Results showed, as expected, that communicative gaze affected attentional orienting in the baseline condition. This effect was not evident for rTPJ stimulation. Interestingly, stimulation of rTPJ also canceled out attentional orienting altogether. On the other hand, dmPFC stimulation eliminated the socially driven difference in attentional orienting between the two gaze conditions while maintaining the basic general attentional orienting effect. Thus, our results allowed for separation of the pure social effect of communicative gaze on attentional orienting from other processes that are a combination of social and generic attentional components.


Subject(s)
Attention, Prefrontal Cortex, Humans, Reaction Time/physiology, Attention/physiology, Communication, Cues (Psychology), Ocular Fixation
4.
J Cogn Neurosci ; 34(1): 108-126, 2021 12 06.
Article in English | MEDLINE | ID: mdl-34705044

ABSTRACT

Understanding others' nonverbal behavior is essential for social interaction, as it allows, among other things, inferring their mental states. Although gaze communication, a well-established nonverbal social behavior, has shown its importance in inferring others' mental states, not much is known about the effects of irrelevant gaze signals on cognitive conflict markers in collaborative settings. In the present study, participants completed a categorization task where they categorized objects based on their color while observing images of a robot. On each trial, participants observed the robot iCub grasping an object from a table and offering it to them to simulate a handover. Once the robot "moved" the object forward, participants were asked to categorize the object according to its color. Before participants were allowed to respond, the robot made a lateral head/gaze shift. The gaze shifts were either congruent or incongruent with the object's color. We expected that incongruent head cues would induce more errors (Study 1), would be associated with more curvature in eye-tracking trajectories (Study 2), and would induce larger amplitudes in electrophysiological markers of cognitive conflict (Study 3). Results of the three studies show more oculomotor interference as measured in error rates (Study 1), larger curvatures in eye-tracking trajectories (Study 2), and higher amplitudes of the N2 ERP component of the EEG signal as well as higher event-related spectral perturbation amplitudes (Study 3) for incongruent trials compared with congruent trials. Our findings reveal that behavioral, ocular, and electrophysiological markers can index the influence of irrelevant signals during goal-oriented tasks.


Subject(s)
Robotics, Cognition, Cues (Psychology), Electroencephalography, Humans, Reaction Time
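The N2 congruency effect reported in Study 3 above is commonly quantified as the difference in mean amplitude within an N2 time window (roughly 250-350 ms post-stimulus) between incongruent and congruent trials. A minimal sketch of that comparison, assuming epoched single-channel data are already available as NumPy arrays; trial counts, time window, and sampling rate are illustrative assumptions.

    # N2 congruency-effect sketch: mean amplitude in a 250-350 ms window per condition.
    # Epoch arrays, sampling rate and window limits are placeholder assumptions.
    import numpy as np

    fs = 500                                    # Hz (assumed)
    t = np.arange(-0.2, 0.8, 1 / fs)            # epoch time axis: -200 to 800 ms
    incongruent = np.random.randn(40, t.size)   # trials x samples (placeholder)
    congruent = np.random.randn(40, t.size)

    def mean_n2(epochs, t, tmin=0.25, tmax=0.35):
        """Average amplitude in the N2 window across trials and time points."""
        window = (t >= tmin) & (t <= tmax)
        return epochs[:, window].mean()

    n2_effect = mean_n2(incongruent, t) - mean_n2(congruent, t)
    print(f"N2 congruency effect (incongruent - congruent): {n2_effect:.3f} µV")

More negative values would indicate a larger conflict-related N2 for incongruent trials, mirroring the direction of the effect described in the abstract.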
5.
Psychol Res ; 85(2): 491-502, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32705336

ABSTRACT

Attentional orienting towards others' gaze direction or pointing has been well investigated in laboratory conditions. However, less is known about the operation of attentional mechanisms in online naturalistic social interaction scenarios. It is equally plausible that following social directional cues (gaze, pointing) occurs reflexively, and/or that it is influenced by top-down cognitive factors. In a mobile eye-tracking experiment, we show that under natural interaction conditions, overt attentional orienting is not necessarily reflexively triggered by pointing gestures or a combination of gaze shifts and pointing gestures. We found that participants conversing with an experimenter, who, during the interaction, would play out pointing gestures as well as directional gaze movements, continued to mostly focus their gaze on the face of the experimenter, demonstrating the significance of attending to the face of the interaction partner, in line with effective top-down control over reflexive orienting of attention in the direction of social cues.


Subject(s)
Attention/physiology, Cues (Psychology), Face, Gestures, Spatial Orientation/physiology, Adult, Female, Ocular Fixation/physiology, Humans, Male, Photic Stimulation/methods, Young Adult
6.
Psychol Res ; 83(1): 159-174, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30588545

ABSTRACT

Distracting sensory events can capture attention, interfering with the performance of the task at hand. We asked: is our attention captured by such events if we cause them ourselves? To examine this, we employed a visual search task with an additional salient singleton distractor, where the distractor was predictable either by the participant's own (motor) action or by an endogenous cue; accordingly, the task was designed to isolate the influence of motor and non-motor predictive processes. We found both types of prediction, cue- and action-based, to attenuate the interference of the distractor, which is at odds with the "attentional white bear" hypothesis, which states that prediction of distracting stimuli mandatorily directs attention towards them. Further, there was no difference between the two types of prediction. We suggest this pattern of results may be better explained by theories postulating general predictive mechanisms, such as the framework of predictive processing, as compared to accounts proposing a special role of action-effect prediction, such as theories based on optimal motor control. However, rather than permitting a definitive decision between competing theories, our study highlights a number of open questions, to be answered by these theories, with regard to how exogenous attention is influenced by predictions deriving from the environment versus our own actions.


Subject(s)
Attention/physiology, Cues (Psychology), Motor Activity/physiology, Reaction Time/physiology, Adolescent, Adult, Decision Making/physiology, Female, Humans, Male, Young Adult
7.
Sci Robot ; 9(91): eadj3665, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38924424

ABSTRACT

Sense of joint agency (SoJA) is the sense of control experienced by humans when acting with others to bring about changes in the shared environment. SoJA is proposed to arise from the sensorimotor predictive processes underlying action control and monitoring. Because SoJA is a ubiquitous phenomenon occurring when we perform actions with other humans, it is of great interest and importance to understand whether, and under what conditions, SoJA occurs in collaborative tasks with humanoid robots. In this study, using behavioral measures and neural responses measured by electroencephalography (EEG), we aimed to evaluate whether SoJA occurs in joint action with the humanoid robot iCub and whether its emergence is influenced by the perceived intentionality of the robot. Behavioral results show that participants experienced SoJA with the robot partner when it was presented as an intentional agent but not when it was presented as a mechanical artifact. EEG results show that the mechanism that influences the emergence of SoJA in the condition when the robot is presented as an intentional agent is the ability to form similarly accurate predictions about the sensory consequences of our own and others' actions, leading to similar modulatory activity over sensory processing. Together, our results shed light on the joint sensorimotor processing mechanisms underlying the emergence of SoJA in human-robot interaction and underscore the importance of attribution of intentionality to the robot in human-robot collaboration.


Subject(s)
Electroencephalography, Intention, Robotics, Humans, Robotics/instrumentation, Male, Female, Adult, Young Adult, Cooperative Behavior, Psychomotor Performance/physiology
8.
Cogn Sci ; 47(12): e13393, 2023 12.
Article in English | MEDLINE | ID: mdl-38133602

ABSTRACT

In our daily lives, we are continually involved in decision-making situations, many of which take place in the context of social interaction. Despite the ubiquity of such situations, there remains a gap in our understanding of how decision-making unfolds in social contexts, and how communicative signals, such as social cues and feedback, impact the choices we make. Interestingly, there is a new social context to which humans are increasingly exposed: social interaction not only with other humans but also with artificial agents, such as robots or avatars. Given these new technological developments, it is of great interest to address the question of whether, and in what way, social signals exhibited by non-human agents influence decision-making. The present study aimed to examine whether robot non-verbal communicative behavior has an effect on human decision-making. To this end, we implemented a two-alternative-choice task where participants were to guess which of two presented cups was covering a ball, an adaptation of the "Shell Game." A robot avatar acted as a game partner producing social cues and feedback. We manipulated the robot's cues (pointing toward one of the cups) before the participant's decision and the robot's feedback ("thumb up" or no feedback) after the decision. We found that participants were slower (compared to other conditions) when cues were mostly invalid and the robot reacted positively to wins. We argue that this was due to the incongruence of the signals (cue vs. feedback), and thus a violation of expectations. In sum, our findings show that incongruence in pre- and post-decision social signals from a robot significantly influences task performance, highlighting the importance of understanding expectations toward social robots for effective human-robot interactions.


Subject(s)
Robotics, Humans, Motivation, Communication, Cues (Psychology), Social Environment
9.
Sci Rep ; 13(1): 11689, 2023 07 19.
Article in English | MEDLINE | ID: mdl-37468517

ABSTRACT

Joint attention is a pivotal mechanism underlying the human ability to interact with one another. The fundamental nature of joint attention in the context of social cognition has led researchers to develop tasks that address this mechanism and operationalize it in a laboratory setting, in the form of a gaze-cueing paradigm. In the present study, we addressed the question of whether engaging in joint attention with a robot face is culture-specific. We adapted a classical gaze-cueing paradigm such that a robot avatar cued participants' gaze after either engaging them in eye contact or not. Our critical question of interest was whether the gaze cueing effect (GCE) is stable across different cultures, especially if cognitive resources to exert top-down control are reduced. To achieve the latter, we introduced a mathematical stress task orthogonally to the gaze-cueing protocol. Results showed a larger GCE in the Singapore sample, relative to the Italian sample, independent of gaze type (eye contact vs. no eye contact) or the amount of experienced stress, which translates to available cognitive resources. Moreover, after each block participants rated how engaged they felt with the robot avatar during the task: Italian participants rated the avatar as more engaging during the eye contact blocks than during the no eye contact blocks, whereas Singaporean participants showed no difference in engagement as a function of gaze type. We discuss the results in terms of cultural differences in robot-induced joint attention and engagement in eye contact, as well as the dissociation between implicit and explicit measures related to the processing of gaze.


Subject(s)
Interpersonal Relations, Robotics, Humans, Attention, Cues (Psychology), Emotions, Ocular Fixation
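The gaze cueing effect (GCE) discussed above is conventionally computed per participant as the mean reaction time on invalidly cued trials minus the mean reaction time on validly cued trials. A minimal pandas sketch of that computation; the column names and toy values are illustrative assumptions, not data from the study.

    # Gaze-cueing effect (GCE) sketch: RT(invalid) - RT(valid) per participant.
    # Column names and toy reaction times (ms) are placeholder assumptions.
    import pandas as pd

    trials = pd.DataFrame({
        "participant": [1, 1, 1, 1, 2, 2, 2, 2],
        "validity": ["valid", "invalid"] * 4,
        "rt": [420, 455, 430, 470, 400, 438, 395, 442],
    })

    mean_rt = trials.groupby(["participant", "validity"])["rt"].mean().unstack()
    mean_rt["gce"] = mean_rt["invalid"] - mean_rt["valid"]
    print(mean_rt)

A positive GCE indicates faster responses at gazed-at locations, i.e., attentional orienting in the direction of the cue.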
10.
Cortex ; 169: 249-258, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37956508

ABSTRACT

Previous work shows that in some instances artificial agents, such as robots, can elicit higher-order socio-cognitive mechanisms similar to those elicited by humans. This suggests that these socio-cognitive mechanisms, such as mentalizing processes, originally developed for interaction with other humans, might be flexibly (re-)used, or "hijacked", for approaching this new category of interaction partners (Wykowska, 2020). In this study, we set out to identify neural markers of such flexible reuse of socio-cognitive mechanisms. We focused on fronto-parietal theta synchronization, as it has been proposed to be a substrate of cognitive flexibility in general (Fries, 2005). We analyzed EEG data from two experiments (Bossi et al., 2020; Roselli et al., submitted), in which participants completed a test measuring their individual likelihood of adopting the intentional stance towards robots, the intentional stance test (IST). Our results show that participants with higher scores on the IST, indicating a higher likelihood of adopting the intentional stance towards a robot, had significantly higher theta synchronization values relative to participants with lower scores on the IST. These results suggest that long-range synchronization in the theta band might be a marker of socio-cognitive processes that can be flexibly applied towards non-human agents, such as robots.


Subject(s)
Cognition, Theta Rhythm, Humans, Electroencephalography
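Long-range theta synchronization between frontal and parietal sites, as analyzed above, is often estimated with a phase-locking value (PLV): band-pass both channels in the theta range, extract instantaneous phase with the Hilbert transform, and average the phase differences across time. A minimal SciPy sketch; the two signals, sampling rate, filter order, and band edges are illustrative assumptions.

    # Fronto-parietal theta phase-locking value (PLV) sketch.
    # Signals, sampling rate, filter order and band edges are assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 500                               # Hz (assumed)
    frontal = np.random.randn(10 * fs)     # placeholder frontal channel (10 s)
    parietal = np.random.randn(10 * fs)    # placeholder parietal channel

    def theta_phase(signal, fs, fmin=4.0, fmax=7.0):
        """Instantaneous phase of the theta-band-filtered signal."""
        b, a = butter(4, [fmin, fmax], btype="bandpass", fs=fs)
        return np.angle(hilbert(filtfilt(b, a, signal)))

    phase_diff = theta_phase(frontal, fs) - theta_phase(parietal, fs)
    plv = np.abs(np.mean(np.exp(1j * phase_diff)))
    print(f"theta PLV: {plv:.3f}")         # 0 = no synchronization, 1 = perfect

Per-participant PLVs computed this way could then be compared between high- and low-IST groups, in the spirit of the analysis described above.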
11.
Sci Rep ; 13(1): 10113, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37344497

ABSTRACT

Sense of Agency (SoA) is the feeling of control over one's actions and their outcomes. A well-established implicit measure of SoA is the temporal interval estimation paradigm, in which participants estimate the time interval between a voluntary action and its sensory consequence. In the present study, we aimed to investigate whether the valence of the action outcome modulated implicit SoA. The valence was manipulated through the interaction partner's (i) positive/negative facial expression or (ii) type of gaze (gaze contact or averted gaze). The interaction partner was the humanoid robot iCub. In Experiment 1, participants estimated the time interval between the onset of their action (head movement towards the robot) and the robot's facial expression (happy vs. sad face). Experiment 2 was identical, but the outcome of participants' action was the type of robot's gaze (gaze contact vs. averted). In Experiment 3, we assessed, in a within-subject design, the combined effect of the robot's type of facial expression and type of gaze. Results showed that, while the robot's facial expression did not affect participants' SoA (Experiment 1), the type of gaze affected SoA in both Experiment 2 and Experiment 3. Overall, our findings showed that the robot's gaze is a more potent factor than facial expression in modulating participants' implicit SoA.


Subject(s)
Communication, Emotions, Facial Expression, Ocular Fixation, Psychological Theory, Robotics, Adolescent, Adult, Female, Humans, Male, Middle Aged, Young Adult, Emotions/physiology, Robotics/methods, Time Perception, Happiness, Sadness
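In the temporal interval estimation paradigm used above, implicit SoA is typically indexed by how strongly the estimated action-outcome interval is compressed relative to the actual interval: shorter estimates indicate stronger agency. A minimal sketch of that comparison; condition labels, intervals, and estimates are illustrative assumptions.

    # Interval-estimation sketch: estimation error per condition as an implicit SoA index.
    # Condition names and toy values (ms) are assumptions, not data from the study.
    import pandas as pd

    trials = pd.DataFrame({
        "condition": ["gaze_contact"] * 3 + ["averted_gaze"] * 3,
        "actual_ms": [400, 600, 800, 400, 600, 800],
        "estimated_ms": [330, 510, 690, 395, 600, 790],
    })

    # More negative errors (underestimation) are read as stronger implicit SoA.
    trials["error_ms"] = trials["estimated_ms"] - trials["actual_ms"]
    print(trials.groupby("condition")["error_ms"].mean())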
12.
Autism Res ; 16(5): 997-1008, 2023 05.
Article in English | MEDLINE | ID: mdl-36847354

ABSTRACT

The concept of scaffolding refers to the support that the environment provides in the acquisition and consolidation of new abilities. Technological advancements allow for support in the acquisition of cognitive capabilities, such as second language acquisition using simple smartphone applications. There is, however, one domain of cognition that has been scarcely addressed in the context of technologically assisted scaffolding: social cognition. We explored the possibility of supporting the acquisition of social competencies in a group of children with autism spectrum disorder engaged in a rehabilitation program (age = 5.8 ± 1.14, 10 females, 33 males) by designing two robot-assisted training protocols tailored to Theory of Mind competencies. One protocol was performed with a humanoid robot and the other (control) with a non-anthropomorphic robot. We analyzed changes in NEPSY-II scores before and after the training using mixed-effects models. Our results showed that activities with the humanoid significantly improved NEPSY-II scores on the ToM scale. We claim that the motor repertoire of humanoids makes them ideal platforms for artificial scaffolding of social skills in individuals with autism, as they can evoke social mechanisms similar to those elicited in human-human interaction, without exerting the same social pressure that another human might.


Subject(s)
Autism Spectrum Disorder, Robotics, Male, Child, Female, Humans, Autism Spectrum Disorder/psychology, Social Cognition, Robotics/methods, Interpersonal Relations, Cognition
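The pre/post comparison of NEPSY-II scores described above relies on mixed-effects models; a common way to set this up in Python is a random-intercept model with fixed effects for session (pre vs. post) and robot type. A minimal statsmodels sketch with a hypothetical long-format table; the column names and toy data are assumptions, and a real analysis would include all children and relevant covariates.

    # Mixed-effects sketch: ToM score ~ session * robot type, random intercept per child.
    # Data frame layout, column names and values are illustrative assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "child": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
        "session": ["pre", "post"] * 6,
        "robot": ["humanoid"] * 6 + ["non_anthropomorphic"] * 6,
        "tom_score": [8, 12, 7, 11, 9, 13, 6, 7, 8, 8, 7, 9],
    })

    model = smf.mixedlm("tom_score ~ session * robot", data=df, groups=df["child"])
    result = model.fit()
    print(result.summary())

The session-by-robot interaction term is the quantity of interest here: it asks whether the pre-to-post change differs between the humanoid and non-anthropomorphic protocols.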
13.
J Cogn ; 5(1): 2, 2022.
Article in English | MEDLINE | ID: mdl-36072111

ABSTRACT

Robots are a new category of social agents that, thanks to their embodiment, can be used to train and support cognitive skills such as cognitive control. Several studies showed that cognitive control mechanisms are sensitive to affective states induced by humor, mood, and symbolic feedback such as monetary rewards. In the present study, we investigated whether the social gaze of a humanoid robot can affect cognitive control mechanisms. To this end, in two experiments, we evaluated both the conflict resolution and trial-by-trial adaptations during an auditory Simon task, as a function of the type of feedback participants received in the previous trial from the iCub robot, namely, mutual or avoiding gaze behaviour. Across three experiments, we compared the effect of mutual, avoiding (Exp1 and Exp2), and neutral (Exp3) gaze feedback between screen-based (Exp1) and physically embodied setups (Exp2 and Exp3). Results showed that iCub's social gaze feedback modulated conflict resolution, but not conflict adaptations. Specifically, the Simon effect was increased following mutual gaze feedback from iCub. Moreover, the modulatory effect was observed for the embodied setup in which the robot could engage or avoid eye contact in real-time (Exp2) but not for the screen-based setting (Exp1). Our findings showed for the first time that social feedback in Human-Robot Interaction, such as social gaze, can be used to modulate cognitive control. The results highlight the advantage of using robots to evaluate and train complex cognitive skills in both healthy and clinical populations.
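The Simon effect analyzed above is the reaction-time cost of spatial incongruence, computed as mean RT on incongruent trials minus mean RT on congruent trials, here split by the type of gaze feedback received on the previous trial. A minimal pandas sketch; column names and toy values are illustrative assumptions.

    # Simon-effect sketch: RT(incongruent) - RT(congruent), split by previous gaze feedback.
    # Column names and toy reaction times (ms) are placeholder assumptions.
    import pandas as pd

    trials = pd.DataFrame({
        "prev_feedback": ["mutual"] * 4 + ["avoiding"] * 4,
        "congruency": ["congruent", "incongruent"] * 4,
        "rt": [480, 530, 470, 525, 475, 505, 468, 500],
    })

    rt = trials.groupby(["prev_feedback", "congruency"])["rt"].mean().unstack()
    rt["simon_effect"] = rt["incongruent"] - rt["congruent"]
    print(rt)

A larger Simon effect after mutual-gaze feedback than after avoiding-gaze feedback would correspond to the modulation of conflict resolution described above.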

14.
J Exp Psychol Gen ; 151(1): 121-136, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34323536

ABSTRACT

Eye contact constitutes a strong communicative signal in human interactions and has been shown to modulate various cognitive processes and states. However, little is known about its impact on gaze-mediated attentional orienting in the context of its interplay with strategic top-down control. Here, we aimed at investigating how the social component of eye contact interacts with the top-down strategic control. To this end, we designed a gaze cuing paradigm with the iCub humanoid robot, in which iCub either established eye contact with the participants before averting its gaze or avoided their eyes. Across four experiments, we manipulated gaze cue validity to either elicit strategic top-down inhibitory activity (25% validity) or to allow for relaxing the control mechanisms (50% validity). Also, we manipulated the stimulus-onset-asynchrony (SOA) to examine the dynamics of the top-down modulatory effects. Our results showed that eye contact influenced the gaze cuing effect when the strategic control was not required, by prolonging the prioritized processing of the gazed-at locations. Thus, the effect was observed only when the measurement was taken after a sufficient amount of time (1,000 ms SOA). However, when inhibitory control was necessary (25% validity), the social component was not potent enough to exert influence over the gaze cuing effect independently. Overall, we propose that strategic top-down control is the primary driving force over the gaze cuing effect and that the social aspect plays a modulatory effect by prolonging prioritized processing of gazed-at locations. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Attention, Ocular Fixation, Cues (Psychology), Eye, Humans, Nonverbal Communication
15.
Q J Exp Psychol (Hove) ; 75(4): 616-632, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34472397

ABSTRACT

Sense of Agency (SoA) is the feeling of control over one's actions and their consequences. In social contexts, people experience a "vicarious" SoA over other humans' actions; however, the phenomenon disappears when the other agent is a computer. This study aimed to investigate the factors that determine when humans experience vicarious SoA in Human-Robot Interaction (HRI). To this end, in two experiments, we disentangled two potential contributing factors: (1) the possibility of representing the robot's actions and (2) the adoption of Intentional Stance towards robots. Participants performed an Intentional Binding (IB) task reporting the time of occurrence for self- or robot-generated actions or sensory outcomes. To assess the role of action representation, the robot either performed a physical keypress (Experiment 1) or "acted" by sending a command via Bluetooth (Experiment 2). Before the experiment, attribution of intentionality to the robot was assessed. Results showed that when participants judged the occurrence of the action, vicarious SoA was predicted by the degree of attributed intentionality, but only when the robot's action was physical. Conversely, digital actions elicited the reversed effect of vicarious IB, suggesting that disembodied actions of robots are perceived as non-intentional. When participants judged the occurrence of the sensory outcome, vicarious SoA emerged only when the causing action was physical. Notably, intentionality attribution predicted vicarious SoA for sensory outcomes independently of the nature of the causing event, physical or digital. In conclusion, both intentionality attribution and action representation play a crucial role for vicarious SoA in HRI.


Subject(s)
Robotics, Emotions, Humans, Intention, Psychomotor Performance, Social Perception
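The Intentional Binding (IB) measure used above quantifies the shift of the reported time of an event (action or outcome) in the operant condition relative to a baseline condition in which the event occurs alone; vicarious IB applies the same computation to events generated by the partner. A minimal sketch of the shift computation; condition labels and judgment-error values are illustrative assumptions.

    # Intentional-binding sketch: mean judgment-error shift, operant minus baseline.
    # Agent labels and toy judgment errors (ms) are assumptions, not study data.
    import numpy as np

    # Judgment error = reported time - actual time, in ms (placeholder values).
    baseline_action = np.array([-5, 10, 0, 5])          # action judged alone
    operant_action_self = np.array([40, 55, 35, 50])    # own action followed by an outcome
    operant_action_robot = np.array([20, 30, 15, 25])   # robot action followed by an outcome

    def binding(operant, baseline):
        """Positive values = perceived shift of the action toward its outcome."""
        return operant.mean() - baseline.mean()

    print("self binding:  ", binding(operant_action_self, baseline_action))
    print("robot binding: ", binding(operant_action_robot, baseline_action))

A positive shift for robot-generated actions would correspond to the vicarious SoA effects discussed above, and its magnitude could be regressed on intentionality-attribution scores.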
16.
Acta Psychol (Amst) ; 228: 103660, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35779453

ABSTRACT

When we read fiction, we encounter characters that interact in the story; we encode that information and comprehend the stories accordingly. Prior studies suggest that this comprehension process is facilitated by taking the perspective of characters during reading. Thus, two questions of interest are whether people take the perspective of characters that are not perceived as capable of experiencing perspectives (e.g., robots), and whether current models of language comprehension can explain these differences between human and nonhuman protagonists (or lack thereof) during reading. The study aims to (1) compare the situation model (i.e., a model that factors in a protagonist's perspective) and the RI-VAL model (which relies more on comparisons of newly acquired information with information stored in long-term memory) and (2) investigate whether the accessibility of information differs based on adopting the intentional stance towards a robot. To address these aims, we designed a preregistered experiment in which participants read stories about one of three protagonists (an intentional robot, a mechanistic robot, and a human) and answered questions about objects that were either occluded or not occluded from the protagonist's view. Based on the situation model, we expected faster responses to items that were not occluded compared to those that were occluded (i.e., the occlusion effect). However, based on the RI-VAL model, we expected that overall differences between the protagonists would arise due to inconsistency with general world knowledge. The preregistered analysis showed no differences between the protagonists and no occlusion effect. However, a post hoc analysis showed that the occlusion effect emerged only for the intentional robot, not for the human or the mechanistic robot. Results also showed that, depending on the age of the readers, either the situation model or the RI-VAL model explains the results: older participants "simulated" the situation they read about (situation model), while younger adults compared new information with information stored in long-term memory (RI-VAL model). This suggests that comparing incoming information to information in long-term memory is cognitively more costly; therefore, older adults used the less cognitively demanding strategy of simulation.


Subject(s)
Reading, Robotics, Aged, Comprehension/physiology, Humans
17.
Front Robot AI ; 9: 863319, 2022.
Article in English | MEDLINE | ID: mdl-36093211

ABSTRACT

Anthropomorphism describes the tendency to ascribe human characteristics to nonhuman agents. Due to the increased interest in social robotics, anthropomorphism has become a core concept of human-robot interaction (HRI) studies. However, the wide use of this concept has resulted in inconsistent definitions. In the present study, we propose an integrative framework of anthropomorphism (IFA) encompassing three levels: cultural, individual general tendencies, and direct attributions of human-like characteristics to robots. We also acknowledge the Western bias of the state-of-the-art view of anthropomorphism and develop a cross-cultural approach. In two studies, participants from various cultures completed tasks and questionnaires assessing their animism beliefs and their individual tendencies to endow robots with mental properties and spirit and to consider them as more or less human. We also evaluated their attributions of mental anthropomorphic characteristics to robots (i.e., cognition, emotion, intention). Our results demonstrate, in both experiments, that a three-level model (as hypothesized in the IFA) reliably explains the collected data. We found an overall influence of animism (cultural level) on the two lower levels, and an influence of the individual tendencies to mentalize, spiritualize and humanize (individual level) on the attribution of cognition, emotion and intention. In addition, in Experiment 2, the analyses show a more anthropocentric view of the mind for Western than for East-Asian participants: Western perception of robots depends more on humanization, while East-Asian perception depends more on mentalization. We further discuss these results in relation to the anthropomorphism literature and argue for the use of an integrative cross-cultural model in HRI research.

18.
Sci Rep ; 12(1): 14924, 2022 Sep 02.
Article in English | MEDLINE | ID: mdl-36056165

ABSTRACT

How individuals interpret robots' actions is a timely question given the expected increase in robots' presence in human social environments over the decades to come. When facing robots, people might have a tendency to explain the robots' actions in mentalistic terms, granting them intentions. However, how default or controllable this process is remains under debate. In four experiments, we asked participants to choose between mentalistic (intentional) and mechanistic (non-intentional) descriptions of a robot's depicted actions in various scenarios. Our results show the primacy of mentalistic descriptions, which are processed faster than mechanistic ones (experiment 1). This effect was even stronger under high vs. low cognitive load when people had to decide between the two alternatives (experiment 2). Interestingly, while there was no effect of cognitive load at later stages of processing, arguing for controllability (experiment 3), imposing cognitive load on participants at an early stage of observation resulted in faster attribution of mentalistic properties to the robot (experiment 4). We discuss these results in the context of the idea that social cognition is a default system.


Subject(s)
Mentalization, Robotics, Cognition, Humans, Social Environment, Social Perception
19.
Sci Rep ; 12(1): 13845, 2022 08 16.
Article in English | MEDLINE | ID: mdl-35974080

ABSTRACT

Sense of Agency (SoA) is the feeling of being in control of one's actions and their outcomes. In a social context, people can experience a "vicarious" SoA over another human's actions; however, it is still controversial whether the same occurs in Human-Robot Interaction (HRI). The present study aimed at understanding whether humanoid robots may elicit vicarious SoA in humans, and whether the emergence of this phenomenon depends on the attribution of intentionality towards robots. We asked adult participants to perform an Intentional Binding (IB) task alone and with the humanoid iCub robot, reporting the time of occurrence of both self- and iCub-generated actions. Before the experiment, participants' degree of attribution of intentionality towards robots was assessed. Results showed that participants experienced vicarious SoA over iCub-generated actions. Moreover, intentionality attribution positively predicted the magnitude of vicarious SoA. In conclusion, our results highlight the importance of factors such as human-likeness and attribution of intentionality for the emergence of vicarious SoA towards robots.


Subject(s)
Robotics, Adult, Emotions, Humans, Social Perception
20.
Front Neuroergon ; 3: 838136, 2022.
Article in English | MEDLINE | ID: mdl-38235447

ABSTRACT

As technological advances progress, we find ourselves in situations where we need to collaborate with artificial agents (e.g., robots, autonomous machines and virtual agents). For example, autonomous machines will be part of search and rescue missions, space exploration and decision aids during monitoring tasks (e.g., baggage-screening at the airport). Efficient communication in these scenarios is crucial for fluent interaction. While studies have examined the positive and engaging effect of social signals (i.e., gaze communication) on human-robot interaction, little is known about the effects of conflicting robot signals on the human actor's cognitive load. Moreover, it is unclear from a social neuroergonomics perspective how different brain regions synchronize or communicate with one another to deal with the cognitive load induced by conflicting signals in social situations with robots. The present study asked whether neural oscillations that correlate with conflict processing are observed between brain regions when participants view conflicting robot signals. Participants classified different objects based on their color after a robot (i.e., iCub), presented on a screen, simulated handing over the object to them. The robot proceeded to cue participants (with a head shift) to the correct or incorrect target location. Since prior work has shown that unexpected cues can interfere with oculomotor planning and induce conflict, we expected that conflicting robot social signals would interfere with the execution of actions. Indeed, we found that conflicting social signals elicited neural correlates of cognitive conflict as measured by mid-brain theta oscillations. More importantly, we found higher coherence values between mid-frontal electrode locations and posterior occipital electrode locations in the theta-frequency band for incongruent vs. congruent cues, which suggests that theta-band synchronization between these two regions allows for communication between cognitive control systems and gaze-related attentional mechanisms. We also found correlations between coherence values and behavioral performance (reaction times), which were moderated by the congruency of the robot signal. In sum, the influence of irrelevant social signals during goal-oriented tasks can be indexed by behavioral, neural oscillation, and brain connectivity patterns. These data provide insights into a new measure of cognitive load, which can also be used to predict human interaction with autonomous machines.
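The connectivity measure reported above, theta-band coherence between mid-frontal and occipital electrodes, can be sketched with SciPy by averaging the magnitude-squared coherence within 4-7 Hz and then correlating the resulting per-participant values with reaction times. The channel data, sampling rate, and toy RT values below are illustrative assumptions.

    # Theta-band coherence sketch plus correlation with reaction times.
    # Channel data, sampling rate and RTs are placeholder assumptions.
    import numpy as np
    from scipy.signal import coherence
    from scipy.stats import pearsonr

    fs = 500                                    # Hz (assumed)
    rng = np.random.default_rng(0)

    def theta_coherence(x, y, fs, fmin=4.0, fmax=7.0):
        """Mean magnitude-squared coherence between x and y within the theta band."""
        freqs, cxy = coherence(x, y, fs=fs, nperseg=fs)
        mask = (freqs >= fmin) & (freqs <= fmax)
        return cxy[mask].mean()

    # One coherence value and one mean RT per participant (placeholder data).
    coh_values, rts = [], []
    for _ in range(20):
        midfrontal = rng.standard_normal(10 * fs)
        occipital = rng.standard_normal(10 * fs)
        coh_values.append(theta_coherence(midfrontal, occipital, fs))
        rts.append(rng.normal(500, 50))         # ms

    r, p = pearsonr(coh_values, rts)
    print(f"r = {r:.2f}, p = {p:.3f}")

In an analysis like the one described above, such coherence values would be computed separately for congruent and incongruent trials before being related to behavioral performance.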
