Results 1 - 20 of 44
1.
Proc Natl Acad Sci U S A ; 121(26): e2402282121, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38885383

ABSTRACT

Goal-directed actions are characterized by two main features: the content (i.e., the action goal) and the form, called vitality forms (VF) (i.e., how actions are executed). It is well established that both the action content and the capacity to understand the content of another's action are mediated by a network formed by a set of parietal and frontal brain areas. In contrast, the neural bases of action forms (e.g., gentle or rude actions) have not been characterized. However, there are now studies showing that the observation and execution of actions endowed with VF activate, in addition to the parieto-frontal network, the dorso-central insula (DCI). In the present study, we used dynamic causal modeling (DCM) to establish the direction of information flow during observation and execution of actions endowed with gentle and rude VF in the human brain. Based on previous fMRI studies, the selected nodes for the DCM comprised the posterior superior temporal sulcus (pSTS), the inferior parietal lobule (IPL), the premotor cortex (PM), and the DCI. Bayesian model comparison showed that, during action observation, two streams arose from pSTS: one toward IPL, concerning the action goal, and one toward DCI, concerning the action vitality forms. During action execution, two streams arose from PM: one toward IPL, concerning the action goal, and one toward DCI, concerning the action vitality forms. This last finding opens an interesting question concerning the possibility of eliciting VF in two distinct ways: cognitively (from PM to DCI) and affectively (from DCI to PM).


Subject(s)
Brain Mapping; Goals; Magnetic Resonance Imaging; Humans; Male; Female; Adult; Nerve Net/physiology; Bayes Theorem; Brain/physiology; Brain/diagnostic imaging; Parietal Lobe/physiology; Models, Neurological; Young Adult
2.
Cereb Cortex ; 33(7): 4164-4172, 2023 03 21.
Article in English | MEDLINE | ID: mdl-36089830

ABSTRACT

Like cold actions (i.e., actions devoid of emotional content), emotions are also expressed with different vitality forms. For example, when an individual experiences a positive emotion, such as laughing as an expression of happiness, this emotion can be conveyed to others by different intensities of facial expressions and body postures. In the present study, we investigated whether the observation of emotions expressed with different vitality forms activates the same neural structures as those involved in the processing of cold action vitality forms. To this purpose, we carried out a functional magnetic resonance imaging study in which participants were tested in two conditions: emotional and non-emotional laughing, both conveying different vitality forms. There are three main results. First, the observation of emotional and non-emotional laughing conveying different vitality forms activates the insula. Second, the observation of emotional laughing activates a series of subcortical structures known to be related to emotions. Furthermore, a region-of-interest analysis carried out in these structures reveals a significant modulation of the blood-oxygen-level-dependent (BOLD) signal during the processing of different vitality forms exclusively in the right amygdala, right anterior thalamus/hypothalamus, and periaqueductal gray. Third, in a subsequent electromyography study, we found a correlation between zygomatic muscle activity and the BOLD signal in the right amygdala only.


Subject(s)
Emotions; Laughter; Humans; Emotions/physiology; Laughter/physiology; Amygdala/physiology; Magnetic Resonance Imaging/methods; Brain Mapping/methods
3.
J Exp Child Psychol ; 238: 105774, 2024 02.
Article in English | MEDLINE | ID: mdl-37703720

ABSTRACT

Cross-sectioning is a shape-understanding task in which participants must infer and interpret the spatial features of three-dimensional (3D) solids by depicting their internal two-dimensional (2D) arrangement. An increasing body of research provides evidence of the crucial role of sensorimotor experience in acquiring these complex geometrical concepts. Here, we focused on how cross-sectioning ability emerges in young children and on the influence of multisensory visuo-haptic experience on geometrical learning, through two experiments. In Experiment 1, we compared a 3D printed version of the Santa Barbara Solids Test (SBST) with its classical paper version; in Experiment 2, we contrasted children's performance in the SBST before and after a visual or visuo-haptic experience. In Experiment 1, we did not identify an advantage of visualizing 3D shapes over the classical 2D paper test. In contrast, in Experiment 2, we found that children who experienced a combination of visual and tactile information during the exploration phase improved their performance in the SBST compared with children who were limited to visual exploration. Our study demonstrates how practicing novel multisensory strategies improves children's understanding of complex geometrical concepts. This outcome highlights the importance of introducing multisensory experience in educational training and the need to develop new technologies that could improve learning abilities in children.


Subject(s)
Touch Perception; Visual Perception; Child; Humans; Child, Preschool; Haptic Technology; Touch; Learning
4.
J Neurophysiol ; 113(6): 1885-95, 2015 Mar 15.
Article in English | MEDLINE | ID: mdl-25505105

ABSTRACT

Humans perform vertical and horizontal arm motions with different temporal patterns. The specific velocity profiles are chosen by the central nervous system, which integrates the gravitational force field to minimize energy expenditure. However, what happens when a visuomotor rotation is applied, so that a motion performed in the horizontal plane is perceived as vertical? We investigated the dynamics of the adaptation of the spatial and temporal properties of a pointing motion during prolonged exposure to a 90° visuomotor rotation, in which a horizontal movement was associated with vertical visual feedback. We found that participants immediately adapted the spatial parameters of motion to the conflicting visual scene in order to keep their arm trajectory straight. In contrast, the initially symmetric velocity profiles specific to a horizontal motion were progressively modified during the conflict exposure, becoming more asymmetric and similar to those appropriate for a vertical motion. Importantly, this visual effect, which increased with repetitions, was not followed by a consistent aftereffect when the conflicting visual feedback was absent (catch and washout trials). In a control experiment, we demonstrated that an intrinsic representation of the temporal structure of perceived vertical motions could provide the error signal allowing for this progressive adaptation of motion timing. These findings suggest that gravity strongly constrains motor learning and the reweighting process between visual and proprioceptive sensory inputs, leading to the selection of a motor plan that is suboptimal in terms of energy expenditure.
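The abstract summarizes the temporal change as a shift from symmetric to asymmetric velocity profiles. As a minimal illustration, and assuming asymmetry is summarized by the relative time to peak velocity (a common index, not specified in the abstract), a short sketch:

```python
import numpy as np

def relative_time_to_peak(velocity, dt):
    """Relative time to peak velocity: ~0.5 for a symmetric profile,
    < 0.5 when the peak occurs early (vertical-like timing)."""
    t_peak = np.argmax(velocity) * dt
    duration = (len(velocity) - 1) * dt
    return t_peak / duration

# Illustrative profiles: a symmetric (horizontal-like) and an early-peaked one
t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
symmetric = np.sin(np.pi * t)                  # peak at mid-movement
early_peak = np.sin(np.pi * np.sqrt(t))        # peak shifted earlier
print(relative_time_to_peak(symmetric, dt))    # ~0.5
print(relative_time_to_peak(early_peak, dt))   # < 0.5
```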


Subject(s)
Adaptation, Physiological; Gravitation; Motion Perception; Psychomotor Performance; Rotation; Adult; Arm/physiology; Feedback, Physiological; Female; Humans; Male
5.
Exp Brain Res ; 232(12): 3965-76, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25183158

ABSTRACT

Perception is a complex process in which prior knowledge exerts a fundamental influence over what we see. The use of priors lies at the basis of the well-known phenomenon of central tendency: judgments of almost all quantities (such as length, duration, and number) tend to gravitate toward their mean magnitude. Although such context dependency is universal in adult perceptual judgments, how it develops with age remains unknown. We asked children from 7 to 14 years of age and adults to reproduce the lengths of stimuli drawn from different distributions and evaluated whether judgments were influenced by stimulus context. All participants reproduced the presented length differently depending on the context: the same stimulus was reproduced as shorter when stimuli were on average short, and as longer when stimuli were on average long. Interestingly, the relative importance given to the current sensory signal and to priors was almost constant during childhood. This strategy, which in adults is optimal in Bayesian terms, is apparently successful in holding sensory noise at bay even during development. Hence, the influence of previous knowledge on perception is present already in young children, suggesting that context dependency is established early in the developing brain.
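The reliability-weighted combination of prior and sensory evidence that underlies this Bayesian account can be written compactly. The sketch below is illustrative only, with made-up noise values rather than the study's estimates:

```python
def bayesian_reproduction(stimulus, prior_mean, sigma_sensory, sigma_prior):
    """Reliability-weighted average of the noisy sensory estimate and the
    prior built from the stimulus distribution (both assumed Gaussian)."""
    w_sensory = sigma_prior**2 / (sigma_sensory**2 + sigma_prior**2)
    return w_sensory * stimulus + (1 - w_sensory) * prior_mean

# The same 10 cm stimulus is reproduced as shorter in a "short" context
# and as longer in a "long" context (illustrative noise values).
print(bayesian_reproduction(10.0, prior_mean=7.0, sigma_sensory=1.0, sigma_prior=2.0))   # 9.4, pulled toward 7
print(bayesian_reproduction(10.0, prior_mean=13.0, sigma_sensory=1.0, sigma_prior=2.0))  # 10.6, pulled toward 13
```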


Subject(s)
Judgment/physiology; Space Perception/physiology; Visual Perception/physiology; Adolescent; Adult; Bayes Theorem; Child; Female; Humans; Male; Photic Stimulation
6.
Front Comput Neurosci ; 18: 1349408, 2024.
Article in English | MEDLINE | ID: mdl-38585280

ABSTRACT

The trend in industrial/service robotics is to develop robots that can cooperate with people, interacting with them in an autonomous, safe and purposive way. These are the fundamental elements characterizing the fourth and fifth industrial revolutions (4IR, 5IR): the crucial innovation is the adoption of intelligent technologies that can allow the development of cyber-physical systems similar, if not superior, to humans. The common wisdom is that intelligence might be provided by AI (Artificial Intelligence), a claim that is supported more by media coverage and commercial interests than by solid scientific evidence. AI is currently conceived in a quite broad sense, encompassing LLMs and many other things, without any unifying principle, justified mainly by its success in various areas. The current view of AI robotics mostly follows a purely disembodied approach that is consistent with the old-fashioned, Cartesian mind-body dualism, reflected in the software-hardware distinction inherent to the von Neumann computing architecture. The working hypothesis of this position paper is that the road to the next generation of autonomous robotic agents with cognitive capabilities requires a fully brain-inspired, embodied cognitive approach that avoids the trap of mind-body dualism and aims at the full integration of Bodyware and Cogniware. We name this approach Artificial Cognition (ACo) and ground it in Cognitive Neuroscience. It is specifically focused on proactive knowledge acquisition based on bidirectional human-robot interaction: the practical advantage is to enhance generalization and explainability. Moreover, we believe that a brain-inspired network of interactions is necessary for allowing humans to cooperate with artificial cognitive agents, building a growing level of personal trust and reciprocal accountability: this is clearly missing, although actively sought, in current AI. The ACo approach is a work in progress that can take advantage of a number of research threads, some of which antedate the early attempts to define AI concepts and methods. In the rest of the paper we consider some of the building blocks that need to be revisited in a unitary framework: the principles of developmental robotics, the methods of action representation with prospection capabilities, and the crucial role of social interaction.

7.
Brain ; 135(Pt 11): 3371-9, 2012 Nov.
Article in English | MEDLINE | ID: mdl-23169922

ABSTRACT

This study investigated how Parkinson's disease alters haptic perception and the underlying mechanisms of somatosensory and sensorimotor integration. Changes in haptic sensitivity and acuity (the abilities to detect and to discriminate between haptic stimuli) due to Parkinson's disease were systematically quantified and contrasted to the performance of healthy older and young adults. Using a robotic force environment, virtual contours of various curvatures were presented. Participants explored these contours with their hands and indicated verbally whether they could detect or discriminate between two contours. To understand what aspects of sensory or sensorimotor integration are altered by ageing and disease, we manipulated the sensorimotor aspect of the task: the robot either guided the hand along the contour or the participant actively moved the hand. Active exploration relies on multimodal sensory and sensorimotor integration, while passive guidance only requires sensory integration of proprioceptive and tactile information. The main findings of the study are as follows: first, a decline in haptic precision can already be observed in adults before the age of 70 years. Parkinson's disease may lead to an additional decrease in haptic sensitivity well beyond the levels typically seen in middle-aged and older adults. Second, the haptic deficit in Parkinson's disease is general in nature. It becomes manifest as a decrease in sensitivity and acuity (i.e. a smaller perceivable range and a diminished ability to discriminate between two perceivable haptic stimuli). Third, thresholds during both active and passive exploration are elevated, but not significantly different from each other. That is, active exploration did not enhance the haptic deficit when compared to passive hand motion. This implies that Parkinson's disease affects early stages of somatosensory integration that ultimately have an impact on processes of sensorimotor integration. Our results suggest that the known motor problems in Parkinson's disease that are generally characterized as a failure of sensorimotor integration may, in fact, have a sensory origin.
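The detection and discrimination thresholds reported here are psychophysical quantities. A minimal sketch of how such a threshold can be estimated by fitting a cumulative-Gaussian psychometric function to forced-choice responses; the exact fitting procedure used in the study is not given in the abstract, so the data and details below are assumptions:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: probability of judging the comparison as more curved."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical curvature differences (1/m) and proportions of "more curved" responses
x = np.array([-0.8, -0.4, -0.2, 0.0, 0.2, 0.4, 0.8])
p = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 0.95])

(mu, sigma), _ = curve_fit(psychometric, x, p, p0=[0.0, 0.3])
jnd = sigma * norm.ppf(0.75)   # 75%-correct discrimination threshold
print(f"PSE = {mu:.3f}, JND = {jnd:.3f}")
```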


Subject(s)
Aging/physiology; Form Perception/physiology; Parkinson Disease/physiopathology; Touch Perception/physiology; Adult; Aged; Case-Control Studies; Female; Humans; Male; Middle Aged; Psychomotor Performance; Robotics/methods; Sensory Thresholds/physiology
8.
J Neurophysiol ; 107(2): 544-50, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22031771

ABSTRACT

Humans routinely use both of their hands to gather information about the shape and texture of objects. Yet, the mechanisms by which the brain combines haptic information from the two hands to achieve a unified percept are unclear. This study systematically measured the haptic precision of humans exploring a virtual curved object contour with one or both hands to understand whether the brain integrates haptic information from the two hemispheres. Bayesian perception theory predicts that redundant information from both hands should improve haptic estimates; thus, exploring an object with two hands should yield haptic precision superior to unimanual exploration. A bimanual robotic manipulandum passively moved the hands of 20 blindfolded, right-handed adult participants along virtual curved contours. Subjects indicated which of two stimuli of different curvature was more "curved" (forced choice). Contours were explored uni- or bimanually at two orientations (toward or away from the body midline). The respective psychophysical discrimination thresholds were computed. First, subjects showed a tendency for one hand to be more sensitive than the other, with most subjects exhibiting a left-hand bias. Second, bimanual thresholds were mostly within the range of the corresponding unimanual thresholds and were not predicted by a maximum-likelihood estimation (MLE) model. Third, bimanual curvature perception tended to be biased toward the motorically dominant hand, not toward the haptically more sensitive left hand. Two-handed exploration did not necessarily improve haptic sensitivity. We found no evidence that haptic information from both hands is integrated using an MLE mechanism. Rather, the results are indicative of a process of "sensory selection", where information from the dominant right hand is used, although the left, nondominant hand may yield more precise haptic estimates.
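The MLE benchmark referred to here predicts that the bimanual threshold should fall below either unimanual threshold when the two hands' estimates are optimally integrated. A worked sketch of that prediction, with illustrative threshold values:

```python
import numpy as np

def mle_bimanual_threshold(sigma_left, sigma_right):
    """Maximum-likelihood (optimal-integration) prediction:
    1/sigma_bi^2 = 1/sigma_L^2 + 1/sigma_R^2, so sigma_bi <= min(sigma_L, sigma_R)."""
    return np.sqrt((sigma_left**2 * sigma_right**2) /
                   (sigma_left**2 + sigma_right**2))

# Illustrative unimanual curvature-discrimination thresholds (arbitrary units)
sigma_left, sigma_right = 0.8, 1.2
print(mle_bimanual_threshold(sigma_left, sigma_right))  # ~0.67, below both unimanual values
```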


Subject(s)
Brain/physiology; Functional Laterality/physiology; Hand/physiology; Psychomotor Performance/physiology; Touch Perception/physiology; Adult; Analysis of Variance; Discrimination, Psychological/physiology; Female; Form Perception/physiology; Humans; Male; Predictive Value of Tests; Psychometrics; Young Adult
9.
J Neurophysiol ; 107(12): 3433-45, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22442569

ABSTRACT

When submitted to a visuomotor rotation, subjects show rapid adaptation of visually guided arm reaching movements, indicated by a progressive reduction in reaching errors. In this study, we wanted to take a step forward by investigating to what extent this adaptation also implies changes in the motor plan. Up to now, classical visuomotor rotation paradigms have been performed in the horizontal plane, where the reaching motor plan in general requires the same kinematics (i.e., straight path and symmetric velocity profile). To overcome this limitation, we considered vertical and horizontal movement directions requiring specific velocity profiles. This way, a change in the motor plan due to the visuomotor conflict would be measurable as a modification in the velocity profile of the reaching movement. Ten subjects performed horizontal and vertical reaching movements while observing a rotated visual feedback of their motion. We found that adaptation to a visuomotor rotation produces a significant change in the motor plan, i.e., changes to the symmetry of velocity profiles. This suggests that the central nervous system takes visual information into account to plan a future motion, even if this causes the adoption of motor plans that are nonoptimal in terms of energy consumption. However, the influence of vision on arm movement planning is not fixed, but rather changes as a function of the visual orientation of the movement. Indeed, a clear influence on motion planning can be observed only when the movement is visually presented as oriented along the vertical direction. Thus vision contributes differently to the planning of arm pointing movements depending on motion orientation in space.


Subject(s)
Arm/physiology; Movement/physiology; Psychomotor Performance/physiology; Adaptation, Physiological; Adult; Biomechanical Phenomena/physiology; Female; Gravitation; Humans; Male; Orientation/physiology; Rotation; Vision, Ocular/physiology; Young Adult
10.
Exp Brain Res ; 223(1): 149-57, 2012 Nov.
Article in English | MEDLINE | ID: mdl-23064882

ABSTRACT

Neural processes of sensory-motor and motor-sensory integration link perception and action, forming the basis for human interaction with the environment. Haptic perception, the ability to extract object features through action, is based on these processes. To study the development of motor-sensory integration, children judged the curvature of virtual objects after exploring them actively or while guided passively by a robot. Haptic acuity reached adult levels only at early adolescence. Unlike in adults, haptic precision in children was consistently lower during active exploration than during passive motion. Thus, the exploratory movements themselves constitute a form of noise for the developing haptic system that younger brains cannot compensate for until mid-adolescence. Computationally, this is consistent with a noisy efference copy mechanism producing imprecise predicted sensory feedback, which compromises haptic precision in children, while the mature mechanism helps the adult brain account for the effect of self-generated motion on perception.


Subject(s)
Form Perception/physiology; Functional Laterality/physiology; Visual Perception/physiology; Adolescent; Adult; Aging/physiology; Aging/psychology; Algorithms; Child; Exploratory Behavior/physiology; Feedback; Female; Humans; Male; Psychomotor Performance/physiology; Robotics; User-Computer Interface
11.
Neural Netw ; 150: 364-376, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35358886

ABSTRACT

In a competitive game scenario, a set of agents has to learn decisions that maximize their own goals and minimize their adversaries' goals at the same time. Besides dealing with the increased dynamics of the scenario due to the opponents' actions, they usually have to understand how to overcome the opponents' strategies. Most common solutions, usually based on continual learning or centralized multi-agent experiences, however, do not allow the development of personalized strategies to face individual opponents. In this paper, we propose a novel model composed of three neural layers that learn a representation of a competitive game, learn how to map the strategy of specific opponents, and learn how to disrupt them. The entire model is trained online, using a composite loss based on contrastive optimization, to learn competitive and multiplayer games. We evaluate our model on a Pokémon duel scenario and the four-player competitive Chef's Hat card game. Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times. We also present a discussion on the impact of our model, in particular on how well it deals with specific strategy learning for each of the two scenarios.
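The abstract mentions a composite loss based on contrastive optimization but does not spell it out. As an illustration of the general idea only, the sketch below implements a generic InfoNCE-style contrastive term over hypothetical opponent-strategy embeddings; the architecture and tensors are assumptions, not the paper's model:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, temperature=0.1):
    """InfoNCE-style loss: pull each anchor embedding toward the embedding of
    the same opponent (positive) and push it away from the other opponents
    in the batch, which act as negatives."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))         # matching pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

# Hypothetical 16-dim strategy embeddings for a batch of 8 opponent episodes
anchor, positive = torch.randn(8, 16), torch.randn(8, 16)
print(contrastive_loss(anchor, positive).item())
```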


Subject(s)
Competitive Behavior; Reinforcement, Psychology; Learning
12.
IEEE Trans Haptics; PP, 2022 Dec 19.
Article in English | MEDLINE | ID: mdl-37015607

ABSTRACT

We investigate the recognition of the affective states of a person performing an action with an object by processing the object-sensed data. We focus on sequences of basic actions, such as grasping and rotating, which are constituents of daily-life interactions. iCube, a 5 cm cube, was used to collect tactile and kinematic data consisting of tactile maps (without information on the pressure applied to the surface) and rotations. We conduct two studies: classification of (i) emotions and (ii) vitality forms. In both, the participants perform a semi-structured task composed of basic actions. For emotion recognition, 237 trials by 11 participants associated with anger, sadness, excitement, and gratitude were used to train models using 10 hand-crafted features. The classifier accuracy reaches up to 82.7%. Interestingly, the same classifier, when trained exclusively on the tactile data, performs on par with its counterpart trained on all 10 features. For the second study, 1135 trials by 10 participants were used to classify two vitality forms. The best-performing model differentiated gentle actions from rude ones with an accuracy of 84.85%. The results also confirm that people touch objects differently when performing these basic actions with different affective states and attitudes.
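The abstract does not list the 10 hand-crafted features or the classifier family, so the sketch below only illustrates the kind of pipeline described (cross-validated classification of four affective states from a 237 × 10 feature matrix), using synthetic data and an assumed random-forest classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical data: 237 trials x 10 hand-crafted tactile/kinematic features,
# labelled with one of four affective states (anger, sadness, excitement, gratitude)
X = rng.normal(size=(237, 10))
y = rng.integers(0, 4, size=237)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean 10-fold accuracy: {scores.mean():.3f}")
```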

13.
Front Hum Neurosci ; 16: 988644, 2022.
Article in English | MEDLINE | ID: mdl-36466622

ABSTRACT

Visual perception of space and time has been shown to rely on context dependency, an inferential process by which the average magnitude of a series of previously experienced stimuli acts as a prior during perception. This article aims to investigate the presence and evolution of this phenomenon in early aging. Two groups of participants belonging to two different age ranges (young adults: average age 28.8 years; older adults: average age 62.8 years) took part in the study, performing a discrimination and a reproduction task in both a spatial and a temporal condition. In particular, they were asked to evaluate lengths in the spatial domain and interval durations in the temporal one. Early aging was found to be associated with a general decline in perceptual acuity, which was particularly evident in the temporal condition. The context dependency phenomenon was also preserved during aging, maintaining levels similar to those exhibited by the younger group in both space and time perception. However, the older group showed greater variability in context dependency among participants, perhaps due to different strategies used to cope with higher uncertainty in the perceptual process.

14.
IEEE Trans Haptics ; 15(2): 339-350, 2022.
Article in English | MEDLINE | ID: mdl-35344495

ABSTRACT

Haptic exploration strategies have traditionally been studied by focusing on hand movements and neglecting how objects are moved in space. However, in daily-life situations touch and movement cannot be disentangled. Furthermore, the relation between object manipulation, performance in haptic tasks, and spatial skill is still little understood. In this study, we used iCube, a sensorized cube recording its orientation in space as well as the location of the points of contact on its faces. Participants had to explore the cube faces, where little pins were positioned in varying numbers, and count the number of pins on the faces with either an even or an odd number of pins. At the end of this task, they also completed a standard visual mental rotation test (MRT). Results showed that higher MRT scores were associated with better performance in the task with iCube, both in terms of accuracy and exploration speed, and exploration strategies associated with better performance were identified. High performers tended to rotate the cube so that the explored face kept the same spatial orientation (i.e., they preferentially explored the upward face and rotated iCube to explore the next face in the same orientation). They also explored the same face twice less often and were faster and more systematic in moving from one face to the next. These findings indicate that iCube could be used to infer subjects' spatial skill in a more natural and unobtrusive fashion than standard MRTs.


Subject(s)
Haptic Technology; Touch Perception; Hand; Humans; Space Perception; Touch
15.
Front Robot AI ; 9: 733954, 2022.
Article in English | MEDLINE | ID: mdl-35783020

ABSTRACT

Partners in everyday collaborative tasks have to build a shared understanding of their environment by aligning their perceptions and establishing a common ground. This is one of the aims of shared perception: revealing characteristics of one's individual perception to others with whom we share the same environment. In this regard, social cognitive processes, such as joint attention and perspective-taking, contribute to forming a shared perception. From a Human-Robot Interaction (HRI) perspective, robots would benefit from the ability to establish shared perception with humans and a common understanding of the environment with their partners. In this work, we wanted to assess whether a robot that considers the differences in perception between itself and its partner could be more effective in its helping role, and to what extent this improves task completion and the interaction experience. For this purpose, we designed a mathematical model for collaborative shared perception that aims to maximise the collaborators' knowledge of the environment when there are asymmetries in perception. Moreover, we instantiated and tested our model in a real HRI scenario. The experiment consisted of a cooperative game in which participants had to build towers of Lego bricks, while the robot took the role of a suggester. In particular, we conducted experiments using two different robot behaviours. In one condition, based on shared perception, the robot gave suggestions by considering the partner's point of view and using its inference about their common ground to select the most informative hint. In the other condition, the robot simply indicated the brick that would have yielded a higher score from its individual perspective. The adoption of shared perception in the selection of suggestions led to better performance in all instances of the game where the visual information was not a priori common to both agents. However, the subjective evaluation of the robot's behaviour did not change between conditions.
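The abstract describes selecting "the most informative hint" given an inferred common ground, without detailing the model. A minimal sketch under the assumption that informativeness is scored as the reduction in the partner's uncertainty (entropy) over candidate bricks, with hypothetical probabilities:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete belief."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def most_informative_hint(partner_belief, posteriors_given_hint):
    """Pick the hint that most reduces the partner's uncertainty about the
    environment (largest entropy reduction)."""
    gains = [entropy(partner_belief) - entropy(post)
             for post in posteriors_given_hint]
    return int(np.argmax(gains)), gains

# Hypothetical belief over 4 candidate bricks and two possible hints
belief = [0.25, 0.25, 0.25, 0.25]
posteriors = [[0.7, 0.1, 0.1, 0.1],    # hint A: strongly disambiguating
              [0.4, 0.4, 0.1, 0.1]]    # hint B: weaker
best, gains = most_informative_hint(belief, posteriors)
print(best, gains)
```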

16.
Front Hum Neurosci ; 16: 941593, 2022.
Article in English | MEDLINE | ID: mdl-36158621

ABSTRACT

Haptic object recognition is usually an efficient process, although slower and less accurate than its visual counterpart. The early loss of vision imposes a greater reliance on haptic perception for recognition compared with sighted individuals. Therefore, we might expect congenitally blind persons to recognize objects through touch more quickly and accurately than late blind or sighted people. However, the literature has provided mixed results. Furthermore, most studies on haptic object recognition have focused on performance, devoting little attention to the exploration procedures that led to that performance. In this study, we used iCube, an instrumented cube recording its orientation in space as well as the location of the points of contact on its faces. Three groups of congenitally blind, late blind, and age- and gender-matched blindfolded sighted participants were asked to explore the cube faces, where little pins were positioned in varying numbers. Participants were required to explore the cube twice, reporting whether the cube was the same or differed in pin arrangement. Results showed that recognition accuracy was not modulated by the level of visual ability. However, congenitally blind participants touched more cells simultaneously while exploring the faces, and changed the pattern of touched cells from one recording sample to the next more, than late blind and sighted participants. Furthermore, the number of simultaneously touched cells correlated negatively with exploration duration. These findings indicate that early blindness shapes the haptic exploration of objects that can be held in the hands.

17.
PLoS One ; 17(8): e0273643, 2022.
Article in English | MEDLINE | ID: mdl-36040911

ABSTRACT

Human perception and behavior are affected by the situational context, in particular during social interactions. A recent study demonstrated that humans perceive visual stimuli differently depending on whether they perform a task by themselves or together with a robot. Specifically, it was found that the central tendency effect is stronger in social than in non-social task settings. The particular nature of such behavioral changes induced by social interaction, and their underlying cognitive processes in the human brain, are, however, still not well understood. In this paper, we address this question by training an artificial neural network inspired by predictive coding theory on the above behavioral data set. Using this computational model, we investigate whether the change in behavior caused by the situational context in the human experiment could be explained by continuous modifications of a parameter expressing how strongly sensory and prior information affect perception. We demonstrate that it is possible to replicate the human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals, indicating that social and non-social task settings might in fact exist on a continuum. At the same time, an analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions. Our results emphasize the importance of computational replications of behavioral data for generating hypotheses on the underlying cognitive mechanisms of shared perception and may provide inspiration for follow-up studies in the field of neuroscience.
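The key manipulation, modifying "the precision of prior and sensory signals", reduces under Gaussian assumptions to a precision-weighted average. The sketch below illustrates that idea only, not the paper's network, and uses made-up precision values; the "individual"/"social" labels are illustrative mappings:

```python
import numpy as np

def precision_weighted_estimate(stimuli, prior_mean, pi_sensory, pi_prior):
    """Posterior estimate under Gaussian assumptions: the higher the prior
    precision pi_prior relative to the sensory precision pi_sensory, the
    stronger the pull toward the prior mean (central tendency)."""
    stimuli = np.asarray(stimuli, dtype=float)
    return (pi_sensory * stimuli + pi_prior * prior_mean) / (pi_sensory + pi_prior)

stimuli = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
prior = stimuli.mean()
print(precision_weighted_estimate(stimuli, prior, pi_sensory=4.0, pi_prior=1.0))  # weak regression ("individual")
print(precision_weighted_estimate(stimuli, prior, pi_sensory=4.0, pi_prior=4.0))  # strong regression ("social")
```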


Subject(s)
Brain; Follow-Up Studies; Humans
18.
PLoS One ; 17(7): e0270787, 2022.
Article in English | MEDLINE | ID: mdl-35881625

ABSTRACT

Across three experiments (N = 302), we explored whether people cognitively process humanoid robots as human-like or object-like. In doing so, we relied on the inversion paradigm, an experimental procedure extensively used in cognitive research to investigate the processing of social (vs. non-social) stimuli. Overall, mixed-model analyses revealed that full bodies of humanoid robots were subject to the inversion effect (body-inversion effect) and thus underwent configural processing similar to that activated for human beings. This pattern of findings emerged regardless of the similarity of the considered humanoid robots to human beings; that is, it occurred for bodies of humanoid robots with medium (Experiment 1) as well as high and low (Experiment 2) levels of human likeness. Instead, Experiment 3 revealed that only faces of humanoid robots with high (vs. low) levels of human likeness were subject to the inversion effect and thus cognitively anthropomorphized. Theoretical and practical implications of these findings for robotic and psychological research are discussed.


Subject(s)
Robotics; Cognition; Humans
19.
R Soc Open Sci ; 8(8): 202124, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34457324

ABSTRACT

Human decisions are often influenced by others' opinions. This process is regulated by social norms: for instance, we tend to reciprocate the consideration received from others, independently of their reliability as information sources. Nonetheless, no study to date has investigated whether and how reciprocity modulates social influence in child-adult interaction. We tested 6-, 8- and 10-year-old children in a novel joint perceptual task. A child and an adult experimenter made perceptual estimates and then took turns making a final decision, choosing between their own and their partner's response. We manipulated the final choices of the adult partner, who in one condition often chose the child's estimates, whereas in the other condition tended to confirm her own response. Results reveal that 10-year-old children reciprocated the consideration received from the partner, increasing their conformity to the adult's judgements when the partner had shown high consideration towards them. At the same time, 10-year-old children employed more elaborate decision criteria than younger children in choosing when to trust the adult partner, and did not show egocentric biases in their final decisions. Our results shed light on the development of the cognitive and normative mechanisms modulating reciprocal social influence in child-adult interaction.

20.
iScience ; 24(12): 103424, 2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34877490

ABSTRACT

Humans are constantly influenced by others' behavior and opinions. Importantly, social influence among humans is shaped by reciprocity: we are more likely to follow the advice of someone who has been taking our opinions into consideration. In the current work, we investigate whether reciprocal social influence can emerge while interacting with a social humanoid robot. In a joint task, a human participant and a humanoid robot made perceptual estimates and could then overtly modify them after observing the partner's judgment. Results show that endowing the robot with the ability to express and modulate its own level of susceptibility to the human's judgments represented a double-edged sword. On the one hand, participants lost confidence in the robot's competence when the robot followed their advice; on the other hand, participants were unwilling to disclose their lack of confidence to the susceptible robot, suggesting the emergence of reciprocal mechanisms of social influence supporting human-robot collaboration.
