Results 1 - 20 of 29
1.
Hum Brain Mapp ; 44(6): 2307-2322, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36661194

ABSTRACT

Facial expression and body posture recognition have protracted developmental trajectories. Interactions between face and body perception, such as the influence of body posture on facial expression perception, also change with development. While the brain regions underpinning face and body processing are well-defined, little is known about how white-matter tracts linking these regions relate to perceptual development. Here, we obtained complementary diffusion magnetic resonance imaging (MRI) measures (fractional anisotropy [FA], spherical mean [Sµ]), and a quantitative MRI myelin-proxy measure (R1), within white-matter tracts of face- and body-selective networks in children and adolescents and related these to perceptual development. In tracts linking occipital and fusiform face areas, facial expression perception was predicted by age-related maturation, as measured by Sµ and R1, as well as age-independent individual differences in microstructure, captured by FA and R1. Tract microstructure measures linking the posterior superior temporal sulcus body region with the anterior temporal lobe (ATL) were related to the influence of body on facial expression perception, supporting the ATL as a site of face and body network convergence. Overall, our results highlight the age-dependent and age-independent constraints that white-matter microstructure poses on perceptual abilities during development and the importance of complementary microstructural measures in linking brain structure and behaviour.


Subject(s)
White Matter , Child , Adolescent , Humans , White Matter/diagnostic imaging , Facial Expression , Diffusion Tensor Imaging/methods , Brain/pathology , Diffusion Magnetic Resonance Imaging , Perception , Anisotropy
2.
Proc Natl Acad Sci U S A ; 115(29): 7515-7520, 2018 07 17.
Article in English | MEDLINE | ID: mdl-29967149

ABSTRACT

A hallmark of human social behavior is the effortless ability to relate one's own actions to those of an interaction partner, e.g., when stretching out one's arms to catch a tripping child. What are the behavioral properties of the neural substrates that support this indispensable human skill? Here we examined the processes underlying the ability to relate actions to each other, namely the recognition of spatiotemporal contingencies between actions (e.g., a "giving" that is followed by a "taking"). We used a behavioral adaptation paradigm to examine the response properties of perceptual mechanisms at a behavioral level. In contrast to the common view that action-sensitive units are primarily selective for one action (i.e., the primary action, e.g., "throwing"), we demonstrate that these processes also exhibit sensitivity to a matching contingent action (e.g., "catching"). Control experiments demonstrate that this sensitivity cannot be explained by lower-level visual features or amodal semantic adaptation. Moreover, we show that action recognition processes are sensitive to contingent but not to noncontingent actions, demonstrating their selectivity. Our findings reveal a selective coding mechanism for action contingencies in action-sensitive processes and demonstrate how the representations of individual actions in social interactions can be linked into a unified representation.


Subject(s)
Adaptation, Psychological , Social Behavior , Female , Humans , Male
3.
Psychol Sci ; 29(8): 1257-1269, 2018 08.
Article in English | MEDLINE | ID: mdl-29874156

ABSTRACT

Motor-based theories of facial expression recognition propose that the visual perception of a facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression from that of the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with those of a nonsensory condition in which participants imagined an emotional situation. These results are well accounted for by the idea that facial expression recognition is not always mediated by motor processes but can also rely on visual information alone.


Subject(s)
Facial Expression , Facial Recognition/physiology , Social Perception , Visual Perception , Emotions , Humans , Psychomotor Performance
4.
J Vis ; 16(3): 33, 2016.
Article in English | MEDLINE | ID: mdl-26913625

ABSTRACT

Recognizing whether somebody's gestures mean a greeting or a threat is crucial for social interactions. In real life, action recognition occurs over the entire visual field. In contrast, much of the previous research on action recognition has primarily focused on central vision. Here our goal is to examine what can be perceived about an action outside of foveal vision. Specifically, we probed the valence as well as first-level and second-level recognition of social actions (handshake, hugging, waving, punching, slapping, and kicking) at 0° (fovea/fixation), 15°, 30°, 45°, and 60° of eccentricity with dynamic (Experiment 1) and dynamic and static (Experiment 2) actions. To assess peripheral vision under conditions of good ecological validity, these actions were carried out by a life-size human stick figure on a large screen. In both experiments, recognition performance was surprisingly high (more than 66% correct) up to 30° of eccentricity for all recognition tasks and followed a nonlinear decline with increasing eccentricities.


Subject(s)
Pattern Recognition, Visual/physiology , Visual Fields/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Middle Aged , Reaction Time , Young Adult
5.
Exp Brain Res ; 233(5): 1471-9, 2015 May.
Article in English | MEDLINE | ID: mdl-25678309

ABSTRACT

Accurate knowledge about the size and shape of the body derived from somatosensation is important for locating one's own body in space. The internal representation of these body metrics (the body model) has been assessed by contrasting the distortions of participants' body estimates across two types of tasks (localization task vs. template matching task). Here, we examined to what extent this contrast is linked to the human body. We compared participants' shape estimates of their own hand and of non-corporeal objects (rake, post-it pad, CD-box) between a localization task and a template matching task. While most items were perceived accurately in the visual template matching task, they appeared distorted in the localization task. All items' distortions were characterized by greater underestimation of length than of width. This pattern of distortion was maintained across orientations for the rake item only, suggesting that the biases measured on the rake were bound to an item-centric reference frame. This was previously assumed to be the case only for the hand. Although similar results can be found for non-corporeal items and the hand, the hand appears significantly more distorted than the other items in the localization task. Therefore, we conclude that the magnitude of the distortions measured in the localization task is specific to the hand. Our results are in line with the idea that the localization task for the hand measures contributions of both an implicit body model that is not utilized in landmark localization with objects and other factors that are common to objects and the hand.


Subject(s)
Body Image , Orientation , Pattern Recognition, Visual/physiology , Perceptual Distortion/physiology , Space Perception/physiology , Adult , Analysis of Variance , Female , Functional Laterality , Hand , Humans , Male , Proprioception , Young Adult
6.
Behav Brain Sci ; 37(2): 197-8, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24775153

ABSTRACT

Cook et al. suggest that motor-visual neurons originate from associative learning. This suggestion has interesting implications for the processing of socially relevant visual information in social interactions. Here, we discuss two aspects of the associative learning account that seem to have particular relevance for the visual recognition of social information in social interactions - namely, context-specific and contingency-based learning.


Subject(s)
Biological Evolution , Brain/physiology , Learning/physiology , Mirror Neurons/physiology , Social Perception , Animals , Humans
7.
Atten Percept Psychophys ; 86(2): 536-558, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37188862

ABSTRACT

We evaluate the actions of other individuals based upon a variety of movements that reveal critical information to guide decision making and behavioural responses. These signals convey a range of information about the actor, including their goals, intentions and internal mental states. Although progress has been made in identifying the cortical regions involved in action processing, the organising principles underlying our representation of actions still remain unclear. In this paper we investigated the conceptual space that underlies action perception by assessing which qualities are fundamental to the perception of human actions. We recorded 240 different actions using motion capture and used these data to animate a volumetric avatar that performed the different actions. A total of 230 participants then viewed these actions and rated the extent to which each action demonstrated 23 different action characteristics (e.g., avoiding-approaching, pulling-pushing, weak-powerful). We analysed these data using Exploratory Factor Analysis to examine the latent factors underlying visual action perception. The best fitting model was a four-dimensional model with oblique rotation. We named the factors: friendly-unfriendly, formidable-feeble, planned-unplanned, and abduction-adduction. The first two factors of friendliness and formidableness each explained approximately 22% of the variance, compared to planned and abduction, which each explained approximately 7-8% of the variance; as such, we interpret this representation of action space as having 2 + 2 dimensions. A closer examination of the first two factors suggests a similarity to the principal factors underlying our evaluation of facial traits and emotions, whilst the last two factors of planning and abduction appear unique to actions.


Subject(s)
Intention , Visual Perception , Humans , Visual Perception/physiology , Emotions , Movement/physiology
8.
J Vis ; 13(1): 23, 2013 Jan 18.
Article in English | MEDLINE | ID: mdl-23335323

ABSTRACT

Probing emotional facial expression recognition with the adaptation paradigm is one way to investigate the processes underlying emotional face recognition. Previous research suggests that these processes are tuned to dynamic facial information (facial movement). Here we examined the tuning of processes involved in the recognition of emotional facial expressions to different sources of facial movement information. Specifically, we investigated the effect of the availability of rigid head movement and intrinsic facial movements (e.g., movement of facial features) on the size of the emotional facial expression adaptation effect. Using a three-dimensional (3D) morphable model that allowed us to manipulate the availability of each of the two factors (intrinsic facial movement, head movement) individually, we examined emotional facial expression adaptation with happy and disgusted faces. Our results show that intrinsic facial movement is necessary for the emergence of an emotional facial expression adaptation effect with dynamic adaptors. The presence of rigid head motion modulates the emotional facial expression adaptation effect only in the presence of intrinsic facial motion. In a second experiment, we show that these adaptation effects are difficult to explain merely by the perceived intensity and clarity (uniqueness) of the adaptor expressions. Together, these results suggest that the processes encoding facial expressions are differently tuned to different sources of facial movements.


Subject(s)
Cues , Emotions/physiology , Face/physiology , Facial Expression , Motion Perception/physiology , Pattern Recognition, Visual/physiology , Recognition, Psychology/physiology , Adult , Female , Humans , Male , Mental Processes/physiology
9.
Exp Brain Res ; 214(2): 273-84, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21863262

ABSTRACT

Social context modulates action kinematics. Less is known about whether social context also affects the use of task-relevant visual information. We tested this hypothesis by examining whether the instruction to play table tennis competitively or cooperatively affected the kind of visual cues necessary for successful table tennis performance. In two experiments, participants played table tennis in a dark room with only the ball, net, and table visible. Visual information about both players' actions was manipulated by means of self-glowing markers. We recorded the number of successful passes for each player individually. The results showed that participants' performance increased when their own body was rendered visible in both the cooperative and the competitive condition. However, social context modulated the importance of different sources of visual information about the other player. In the cooperative condition, seeing the other player's racket had the largest effect on performance increase, whereas in the competitive condition, seeing the other player's body resulted in the largest performance increase. These results suggest that social context selectively modulates the use of visual information about others' actions in social interactions.


Subject(s)
Competitive Behavior/physiology , Motion Perception/physiology , Photic Stimulation/methods , Psychomotor Performance/physiology , Social Behavior , Adult , Female , Humans , Male , Visual Perception/physiology , Young Adult
10.
Acta Psychol (Amst) ; 210: 103168, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32919093

ABSTRACT

The goal of new adaptive technologies is to allow humans to interact with technical devices, such as robots, in natural ways akin to human interaction. Essential for achieving this goal is an understanding of the factors that support natural interaction. Here, we examined whether human motor control is linked to the visual appearance of the interaction partner. Motor control theories consider kinematic-related information, but not visual appearance, as important for the control of motor movements (Flash & Hogan, 1985; Harris & Wolpert, 1998; Viviani & Terzuolo, 1982). We investigated the sensitivity of motor control to visual appearance during the execution of a social interaction, i.e., a high-five. In a novel mixed reality setup, participants executed a high-five with a three-dimensional, life-size human- or robot-looking avatar. Our results demonstrate that movement trajectories and adjustments to perturbations depended on the visual appearance of the avatar, despite both avatars carrying out identical movements. Moreover, two well-known motor theories (minimum jerk, two-thirds power law) better predict robot than human interaction trajectories. The dependence of motor control on the human likeness of the interaction partner suggests that different motor control principles might be at work in object- and human-directed interactions.


Subject(s)
Movement , Social Interaction , Biomechanical Phenomena , Humans
11.
Can J Exp Psychol ; 62(3): 150-5, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18778143

ABSTRACT

Nakayama and Silverman (1986) proposed that, when searching for a target defined by a conjunction of color and stereoscopic depth, observers partition 3D space into separate depth planes and then rapidly search each such plane in turn, thereby turning a conjunctive search into a "feature" search. In their study, they found, consistent with their hypothesis, shallow search slopes when searching depth planes separated by large binocular disparities. Here, the authors investigated whether the search slope depends upon the extent of the stereoscopically induced separation between the planes to be searched (i.e., upon the magnitude of the binocular disparity). The obtained slopes show that (1) rapid search occurs only with disparities greater than 6 min of arc, a value that vastly exceeds the stereo threshold, and that (2) the steepness of the slope increases markedly at lower disparities. The ability to implement the search mode envisaged by Nakayama and Silverman is thus clearly limited to large disparities; lower disparity values mandate less efficient search strategies, as under such conditions items from one depth plane may be more likely to "intrude" upon the other.


Subject(s)
Attention , Color Perception , Depth Perception , Vision, Binocular , Humans , Reaction Time , Students , Visual Perception
12.
Br J Psychol ; 109(3): 427-430, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29748966

ABSTRACT

One major challenge of social interaction research is to achieve high experimental control over social interactions to allow for rigorous scientific reasoning. Virtual reality (VR) promises this level of control. Pan and Hamilton guide us with a detailed review on existing and future possibilities and challenges of using VR for social interaction research. Here, we extend the discussion to methodological and practical implications when using VR.


Subject(s)
Behavioral Research/methods , Interpersonal Relations , Psychology/methods , Virtual Reality , Female , Humans , Male
13.
J Exp Psychol Hum Percept Perform ; 43(7): 1444-1453, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28368168

ABSTRACT

Recent studies have shown the presence of distortions in proprioceptive hand localization tasks. Those results were interpreted as reflecting specific perceptual distortions bound to a body model. In particular, it was suggested that hand distortions could be related to distortions of somatotopic cortical maps. In this study, we show that the hand distortions measured in localization tasks might be partly driven by general false beliefs about hand landmark locations (conceptual biases). Specifically, we demonstrate that hand and object distortions are present in similar magnitude when correcting for the conceptual bias of the knuckles (Experiment 1) or when asking participants to directly locate spatially well-represented landmarks (i.e., without conceptual biases) on their hand (Experiment 2). Altogether, our results suggest that localization task distortions are not specific to the body and that similar perceptual distortions could underlie localization performance measured on objects and hands.


Subject(s)
Body Image , Hand , Perceptual Distortion/physiology , Space Perception/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Young Adult
14.
Vision Res ; 135: 10-15, 2017 06.
Article in English | MEDLINE | ID: mdl-28435124

ABSTRACT

Recognizing the actions of others across the whole visual field is required for social interactions. In a previous study, we showed that recognition is very good even when life-size avatars facing the observer carried out actions (e.g., waving) and were presented very far from the fovea (Fademrecht, Bülthoff, & de la Rosa, 2016). Here, we explored whether this remarkable performance was due to the life-size avatars facing the observer, which - according to some social cognitive theories (e.g., Schilbach et al., 2013) - could potentially activate different social perceptual processes than profile-facing avatars. Participants therefore viewed a life-size stick-figure avatar that carried out motion-captured social actions (greeting actions: handshake, hugging, waving; attacking actions: slapping, punching, and kicking) in frontal and profile view. Participants' task was to identify the actions as 'greeting' or 'attack' or to assess the emotional valence of the actions. While recognition accuracy for frontal and profile views did not differ, reaction times were in general significantly faster for profile views (i.e., the moving avatar was seen profile on) than for frontal views (i.e., the action was directed toward the observer). Our results suggest that the remarkably good action recognition performance in the visual periphery was not due to a more socially engaging front-facing view. Although action recognition seems to depend on viewpoint, it remains remarkably accurate even far into the visual periphery.


Subject(s)
Retina/physiology , Visual Fields/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Pattern Recognition, Visual/physiology , Photic Stimulation , Young Adult
15.
Cognition ; 166: 201-206, 2017 09.
Article in English | MEDLINE | ID: mdl-28582683

ABSTRACT

Recognizing who is carrying out an action is essential for successful human interaction. The cognitive mechanisms underlying this ability are little understood and have been the subject of discussion in embodied approaches to action recognition. Here we examine one possibility, namely that visual action recognition processes are at least partly sensitive to the actor's identity. We investigated the dependency between identity information and action-related processes by testing the sensitivity of neural action recognition processes to clothing and facial identity information with a behavioral adaptation paradigm. Our results show that action adaptation effects are in fact modulated by both clothing information and the actor's facial identity. This finding demonstrates that the neural processes underlying action recognition are sensitive to identity information (including facial identity) and are thereby not exclusively tuned to actions. We suggest that such response properties help humans know who carried out an action.


Subject(s)
Attention/physiology , Cognition/physiology , Recognition, Psychology/physiology , Social Perception , Humans , Photic Stimulation
16.
PLoS One ; 12(4): e0176115, 2017.
Article in English | MEDLINE | ID: mdl-28426729

ABSTRACT

Cultural differences in spatial perception have been little investigated, which gives rise to the impression that spatial cognitive processes might be universal. Contrary to this idea, we demonstrate cultural differences in the spatial volume perception of computer-generated rooms between Germans and South Koreans. We used a psychophysical task in which participants had to judge whether a rectangular room was larger or smaller than a square room of reference. We systematically varied the room rectangularity (depth-to-width aspect ratio) and the viewpoint (middle of the short wall vs. long wall) from which the room was viewed. South Koreans were significantly less biased by room rectangularity and viewpoint than their German counterparts. These results are in line with previous notions of general cognitive processing strategies being more context dependent in East Asian societies than in Western ones. We point to the necessity of considering culturally specific cognitive processing strategies in visual spatial cognition research.


Subject(s)
Cultural Characteristics , Size Perception , Adult , Female , Germany , Humans , Male , Psychophysics , Republic of Korea , Young Adult
17.
Iperception ; 8(6): 2041669517743521, 2017.
Article in English | MEDLINE | ID: mdl-29308177

ABSTRACT

So far, action recognition has mainly been examined with small point-light human stimuli presented alone within a narrow central area of the observer's visual field. Yet, we need to recognize the actions of life-size humans viewed alone or surrounded by bystanders, whether they are seen in central or peripheral vision. Here, we examined the mechanisms in central vision and the far periphery (40° eccentricity) involved in recognizing the actions of a life-size actor (target) and their sensitivity to the presence of a crowd surrounding the target. In Experiment 1, we used an action adaptation paradigm to probe whether static or idly moving crowds might interfere with the recognition of a target's action (hug or clap). We found that this type of crowd, whose movements were dissimilar to the target action, hardly affected action recognition in central and peripheral vision. In Experiment 2, we examined whether crowd actions that were more similar to the target actions affected action recognition. Indeed, the presence of such a crowd diminished adaptation aftereffects in central vision as well as in the periphery. We then replicated Experiment 2 using a recognition task instead of an adaptation paradigm. With this task, we found evidence of decreased action recognition accuracy, but the decrease was significant in peripheral vision only. Our results suggest that the presence of a crowd carrying out actions similar to that of the target affects its recognition. We outline how these results can be understood in terms of high-level crowding effects that operate on action-sensitive perceptual channels.

18.
Sci Rep ; 6: 23829, 2016 Mar 31.
Article in English | MEDLINE | ID: mdl-27029781

ABSTRACT

A long-standing debate revolves around the question of whether visual action recognition relies primarily on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet, the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically - akin to the simultaneous execution and observation of actions in social interactions - adaptation effects were modulated only by visual but not by motor adaptation. Action recognition therefore relies primarily on vision-based mechanisms in situations that require simultaneous action observation and execution, such as social interactions. These results suggest caution when attributing social behaviour in social interactions to motor-based information.


Subject(s)
Adaptation, Physiological , Motion Perception/physiology , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Adult , Female , Humans , Interpersonal Relations , Male , Motor Cortex/physiology , Photic Stimulation
19.
Front Hum Neurosci ; 10: 56, 2016.
Article in English | MEDLINE | ID: mdl-26941633

ABSTRACT

The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree the visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target visual processes specific to action and direction discrimination. In separate conditions, participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action, but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms.

20.
Front Hum Neurosci ; 10: 78, 2016.
Article in English | MEDLINE | ID: mdl-26973496

ABSTRACT

Mirror neurons (MNs) are considered to be the neural mechanism supporting action understanding. MNs have been identified in monkey area F5. Identifying MNs in the human homolog of monkey area F5, Brodmann Area 44/45 (BA 44/45), has proven methodologically difficult. Cross-modal functional MRI (fMRI) adaptation studies supporting the existence of MNs restricted their analysis to a priori candidate regions, whereas studies that failed to find evidence used non-object-directed actions (NDAs). We tackled these limitations by using object-directed actions (ODAs) differing only in terms of their object-directedness, in combination with a cross-modal adaptation paradigm and a whole-brain analysis. Additionally, we tested voxels' blood oxygenation level-dependent (BOLD) response patterns for several properties previously reported as typical MN response properties. Our results revealed 52 voxels in the left inferior frontal gyrus (IFG; particularly BA 44/45) that respond to both motor and visual stimulation and exhibit cross-modal adaptation between the execution and observation of the same action. These results demonstrate that part of the human IFG, specifically BA 44/45, has BOLD response characteristics very similar to those of monkey area F5.
