Results 1 - 20 of 36
1.
Eur J Neurosci ; 60(1): 3557-3571, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38706370

ABSTRACT

Extensive research has shown that observers are able to efficiently extract summary information from groups of people. However, little is known about the cues that determine whether multiple people are represented as a social group or as independent individuals. Initial research on this topic has primarily focused on the role of static cues. Here, we instead investigate the role of dynamic cues. In two experiments with male and female human participants, we use EEG frequency tagging to investigate the influence of two fundamental Gestalt principles - synchrony and common fate - on the grouping of biological movements. In Experiment 1, we find that brain responses coupled to four point-light figures walking together are enhanced when they move in sync vs. out of sync, but only when they are presented upright. In contrast, we find no effect of movement direction (i.e., common fate). In Experiment 2, we rule out the possibility that synchrony took precedence over common fate in Experiment 1, by replicating the null effect of movement direction while keeping synchrony constant. These results suggest that synchrony plays an important role in the processing of biological group movements. In contrast, the role of common fate is less clear and will require further research.


Subject(s)
Electroencephalography , Motion Perception , Humans , Male , Female , Adult , Electroencephalography/methods , Motion Perception/physiology , Young Adult , Cues , Movement/physiology , Brain/physiology , Photic Stimulation/methods
2.
Trends Cogn Sci ; 28(5): 390-391, 2024 May.
Article in English | MEDLINE | ID: mdl-38632008
3.
Psychol Sci ; 35(6): 681-693, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38683657

ABSTRACT

As a powerful social signal, a body, face, or gaze facing toward oneself holds an individual's attention. We asked whether, going beyond an egocentric stance, facingness between others has a similar effect and why. In a preferential-looking time paradigm, human adults showed a spontaneous preference to look at two bodies facing toward (vs. away from) each other (Experiment 1a, N = 24). Moreover, facing dyads were rated higher on social semantic dimensions, showing that facingness adds social value to stimuli (Experiment 1b, N = 138). The same visual preference was found in juvenile macaque monkeys (Experiment 2, N = 21). Finally, on the human development timescale, this preference emerged by 5 years of age, although young infants by 7 months of age already discriminate visual scenes on the basis of body positioning (Experiment 3, N = 120). We discuss how the preference for facing dyads (shared by human adults, young children, and macaques) can signal a new milestone in social cognition development, supporting processing and learning from third-party social interactions.


Subject(s)
Visual Perception , Humans , Animals , Male , Female , Adult , Infant , Visual Perception/physiology , Young Adult , Social Perception , Attention/physiology , Child, Preschool , Social Cognition , Space Perception/physiology , Social Interaction
4.
iScience ; 27(2): 108793, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38299110

ABSTRACT

Sensitivity to face-to-face stimuli configurations, which likely indicates interaction, seems to appear early in infants' development, and recently a preference for face-to-face (vs. other spatial configurations) has been shown to occur in macaque monkeys. It is unknown, however, whether such a preference is acquired through experience or as an evolutionary-given biological predisposition. Here, we exploited a precocial social animal, the domestic chick, as a model system to address this question. Visually naive chicks were tested for their spontaneous preferences for face-to-face vs. back-to-back hen dyads of point-light displays depicting biological motion. We found that female chicks have a spontaneous preference for the facing interactive configuration. Males showed no preference, as expected due to the well-known low social motivation of males in this highly polygynous species. These findings support the idea of an innate and sex-dependent predisposition toward social and interacting stimuli in a vertebrate brain such as that of chicks.

5.
Curr Biol ; 34(2): 343-351.e5, 2024 01 22.
Article in English | MEDLINE | ID: mdl-38181794

ABSTRACT

Navigating our complex social world requires processing the interactions we observe. Recent psychophysical and neuroimaging studies provide parallel evidence that the human visual system may be attuned to efficiently perceive dyadic interactions. This work implies, but has not yet demonstrated, that activity in body-selective cortical regions causally supports efficient visual perception of interactions. We adopt a multi-method approach to close this important gap. First, using a large fMRI dataset (n = 92), we found that the left hemisphere extrastriate body area (EBA) responds more to face-to-face than non-facing dyads. Second, we replicated a behavioral marker of visual sensitivity to interactions: categorization of facing dyads is more impaired by inversion than non-facing dyads. Third, in a pre-registered experiment, we used fMRI-guided transcranial magnetic stimulation to show that online stimulation of the left EBA, but not a nearby control region, abolishes this selective inversion effect. Activity in left EBA, thus, causally supports the efficient perception of social interactions.


Subject(s)
Visual Cortex , Humans , Visual Cortex/physiology , Social Interaction , Photic Stimulation , Transcranial Magnetic Stimulation , Visual Perception/physiology , Magnetic Resonance Imaging , Brain Mapping
6.
J Neurosci ; 44(5)2024 01 31.
Article in English | MEDLINE | ID: mdl-38124013

ABSTRACT

Understanding social interaction requires processing social agents and their relationships. The latest results show that much of this process is visually solved: visual areas can represent multiple people encoding emergent information about their interaction that is not explained by the response to the individuals alone. A neural signature of this process is an increased response in visual areas, to face-to-face (seemingly interacting) people, relative to people presented as unrelated (back-to-back). This effect highlighted a network of visual areas for representing relational information. How is this network organized? Using functional MRI, we measured the brain activity of healthy female and male humans (N = 42), in response to images of two faces or two (head-blurred) bodies, facing toward or away from each other. Taking the facing > non-facing effect as a signature of relation perception, we found that relations between faces and between bodies were coded in distinct areas, mirroring the categorical representation of faces and bodies in the visual cortex. Additional analyses suggest the existence of a third network encoding relations between (nonsocial) objects. Finally, a separate occipitotemporal network showed the generalization of relational information across body, face, and nonsocial object dyads (multivariate pattern classification analysis), revealing shared properties of relations across categories. In sum, beyond single entities, the visual cortex encodes the relations that bind multiple entities into relationships; it does so in a category-selective fashion, thus respecting a general organizing principle of representation in high-level vision. Visual areas encoding visual relational information can reveal the processing of emergent properties of social (and nonsocial) interaction, which trigger inferential processes.


Subject(s)
Visual Cortex , Humans , Male , Female , Photic Stimulation/methods , Visual Cortex/physiology , Magnetic Resonance Imaging/methods , Human Body , Pattern Recognition, Visual/physiology , Brain Mapping/methods , Visual Perception/physiology
7.
Cortex ; 165: 129-140, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37279640

ABSTRACT

People are often seen among other people, relating to and interacting with one another. Recent studies suggest that socially relevant spatial relations between bodies, such as the face-to-face positioning, or facingness, change the visual representation of those bodies, relative to when the same items appear unrelated (e.g., back-to-back) or in isolation. The current study addresses the hypothesis that face-to-face bodies give rise to a new whole, an integrated representation of individual bodies in a new perceptual unit. Using frequency-tagging EEG, we targeted, as a measure of integration, an EEG correlate of the non-linear combination of the neural responses to each of two individual bodies presented either face-to-face as if interacting, or back-to-back. During EEG recording, participants (N = 32) viewed two bodies, either face-to-face or back-to-back, flickering at two different frequencies (F1 and F2), yielding two distinctive responses in the EEG signal. Spectral analysis examined the responses at the intermodulation frequencies (nF1±mF2), signaling integration of individual responses. An anterior intermodulation response was observed for face-to-face bodies, but not for back-to-back bodies, nor for face-to-face chairs and machines. These results show that interacting bodies are integrated into a representation that is more than the sum of its parts. This effect, specific to body dyads, may mark an early step in the transformation towards an integrated representation of a social event, from the visual representation of individual participants in that event.


Subject(s)
Electroencephalography , Human Body , Humans , Electroencephalography/methods , Pattern Recognition, Visual/physiology , Photic Stimulation
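The frequency-tagging study above (entry 7) looks for responses at intermodulation frequencies (nF1 ± mF2) as a signature of non-linear integration of the two bodies' signals. As an illustration only, the candidate intermodulation frequencies for a given pair of tagging frequencies can be enumerated as below; the abstract does not report the actual F1 and F2, so the 2.5 and 3.0 Hz values here are hypothetical.

```python
from itertools import product

def intermodulation_freqs(f1, f2, max_order=3):
    """All positive n*f1 + m*f2 and |n*f1 - m*f2| combinations
    (n, m >= 1, n + m <= max_order): frequencies that appear in the
    spectrum only if the two tagged inputs interact non-linearly."""
    freqs = set()
    for n, m in product(range(1, max_order), repeat=2):
        if n + m > max_order:
            continue  # keep only low-order combinations
        freqs.add(round(n * f1 + m * f2, 6))
        diff = abs(n * f1 - m * f2)
        if diff > 0:
            freqs.add(round(diff, 6))
    return sorted(freqs)

# hypothetical tagging frequencies for the two bodies in a dyad
print(intermodulation_freqs(2.5, 3.0, max_order=3))
# -> [0.5, 2.0, 3.5, 5.5, 8.0, 8.5]
```

In practice these candidate frequencies are checked against the stimulation harmonics (nF1 or mF2 alone), which index each body's individual response rather than their integration.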
8.
Neuropsychologia ; 177: 108395, 2022 12 15.
Article in English | MEDLINE | ID: mdl-36272677

ABSTRACT

Detecting biological motion is essential for adaptive social behavior. Previous research has revealed the brain processes underlying this ability. However, brain activity during biological motion perception captures a multitude of processes. As a result, it is often unclear which processes reflect movement processing and which processes reflect secondary processes that build on movement processing. To address this issue, we developed a new approach to measure brain responses directly coupled to observed movements. Specifically, we showed 30 male and female adults a point-light walker moving at a pace of 2.4 Hz and used EEG frequency tagging to measure the brain response coupled to that pace ('movement tagging'). The results revealed a reliable response at the walking frequency that was reduced by two manipulations known to disrupt biological motion perception: phase scrambling and inversion. Interestingly, we also identified a brain response at half the walking frequency (i.e., 1.2 Hz), corresponding to the rate at which the individual dots completed a cycle. In contrast to the 2.4 Hz response, the response at 1.2 Hz was increased for scrambled (vs. unscrambled) walkers. These results show that frequency tagging can be used to capture the visual processing of biological movements and can dissociate between global (2.4 Hz) and local (1.2 Hz) processes involved in biological motion perception, at different frequencies of the brain signal.


Subject(s)
Motion Perception , Movement , Adult , Male , Humans , Female , Movement/physiology , Brain/physiology , Motion Perception/physiology , Cognition/physiology , Electroencephalography
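The "movement tagging" response described in entry 8 (a brain response coupled to the 2.4 Hz walking pace) is typically quantified as the spectral amplitude at the tagged frequency relative to neighboring frequency bins. The following is a minimal sketch with synthetic data; the sampling rate, recording length, noise level, and the SNR definition are assumptions for illustration, not values or methods taken from the paper.

```python
import numpy as np

def snr_at_frequency(signal, sfreq, target_freq, n_neighbors=10):
    """Amplitude at target_freq divided by the mean amplitude of
    n_neighbors frequency bins on each side (target bin excluded)."""
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    idx = int(np.argmin(np.abs(freqs - target_freq)))
    lo = max(idx - n_neighbors, 0)
    hi = min(idx + n_neighbors + 1, len(amps))
    neighbors = np.concatenate([amps[lo:idx], amps[idx + 1:hi]])
    return amps[idx] / neighbors.mean()

# synthetic "EEG": a 2.4 Hz oscillation buried in noise
sfreq = 250.0                        # sampling rate in Hz (hypothetical)
t = np.arange(0, 60, 1.0 / sfreq)    # 60 s of data -> 1/60 Hz resolution
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 2.4 * t) + 2.0 * rng.standard_normal(t.size)

print(snr_at_frequency(eeg, sfreq, 2.4))  # well above 1 at the tagged rate
print(snr_at_frequency(eeg, sfreq, 3.1))  # near 1 at an untagged frequency
```

The same computation at 1.2 Hz would probe the "local" response the authors report at half the walking frequency, provided the recording is long enough for that bin to be resolved.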
9.
Neuroimage ; 260: 119506, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35878724

ABSTRACT

Research on face perception has revealed highly specialized visual mechanisms such as configural processing, and provided markers of interindividual differences (including disease risks and alterations) in visuo-perceptual abilities that traffic in social cognition. Is face perception unique in degree or kind of mechanisms, and in its relevance for social cognition? Combining functional MRI and behavioral methods, we address the processing of an uncharted class of socially relevant stimuli: minimal social scenes involving configurations of two bodies spatially close and face-to-face as if interacting (hereafter, facing dyads). We report category-specific activity for facing (vs. non-facing) dyads in visual cortex. That activity shows face-like signatures of configural processing, i.e., a stronger response to facing (vs. non-facing) dyads and greater susceptibility to stimulus inversion for facing (vs. non-facing) dyads, and is predicted by performance-based measures of configural processing in visual perception of body dyads. Moreover, we observe that individual performance in body-dyad perception is reliable, stable over time, and correlated with individual social sensitivity, coarsely captured by the Autism-Spectrum Quotient. Further analyses clarify the relationship between single-body and body-dyad perception. We propose that facing dyads are processed through highly specialized mechanisms and brain areas, analogously to other biologically and socially relevant stimuli such as faces. Like face perception, facing-dyad perception can reveal basic (visual) processes that lay the foundations for understanding others, their relationships and interactions.


Subject(s)
Facial Recognition , Visual Cortex , Brain/physiology , Humans , Pattern Recognition, Visual/physiology , Social Perception , Visual Perception/physiology
10.
Proc Natl Acad Sci U S A ; 119(8)2022 02 22.
Article in English | MEDLINE | ID: mdl-35169072

ABSTRACT

Humans make sense of the world by organizing things into categories. When and how does this process begin? We investigated whether real-world object categories that spontaneously emerge in the first months of life match categorical representations of objects in the human visual cortex. Using eye tracking, we measured the differential looking time of 4-, 10-, and 19-mo-olds as they looked at pairs of pictures belonging to eight animate or inanimate categories (human/nonhuman, faces/bodies, real-world size big/small, natural/artificial). Taking infants' looking times as a measure of similarity, for each age group, we defined a representational space where each object was defined in relation to others of the same or of a different category. This space was compared with hypothesis-based and functional MRI-based models of visual object categorization in the adults' visual cortex. Analyses across different age groups showed that, as infants grow older, their looking behavior matches neural representations in ever-larger portions of the adult visual cortex, suggesting progressive recruitment and integration of more and more feature spaces distributed over the visual cortex. Moreover, the results characterize infants' visual categorization as an incremental process with two milestones. Between 4 and 10 mo, visual exploration guided by saliency gives way to an organization according to the animate-inanimate distinction. Between 10 and 19 mo, a category spurt leads toward a mature organization. We propose that these changes underlie the coupling between seeing and thinking in the developing mind.


Subject(s)
Cognition/physiology , Pattern Recognition, Visual/physiology , Adult , Age Factors , Brain Mapping/methods , Eye-Tracking Technology , Female , Fixation, Ocular/physiology , Humans , Infant , Magnetic Resonance Imaging/methods , Male , Photic Stimulation , Thinking/physiology , Vision, Ocular/physiology , Visual Cortex/physiology , Visual Perception/physiology
11.
Infancy ; 27(2): 210-231, 2022 03.
Article in English | MEDLINE | ID: mdl-35064958

ABSTRACT

Social life is inherently relational, entailing the ability to recognize and monitor social entities and the relationships between them. Very young infants privilege socially relevant entities in the visual world, such as faces and bodies. Here, we show that six-month-old infants also discriminate between configurations of multiple human bodies, based on the internal visuo-spatial relations between bodies, which could cue-or not-social interaction. We measured the differential looking times for two images, each featuring two identical bodies, but in different spatial relations. Infants discriminated between face-to-face and back-to-back body dyads (Experiment 1), and treated face-to-face dyads with higher efficiency (i.e., processing speed), relative to back-to-back dyads (Experiment 2). Looking times for dyads in an asymmetrical relation (i.e., one body facing another without reciprocation) were comparable to looking times for face-to-face dyads, and differed from looking times to back-to-back dyads, suggesting general discrimination between the presence versus absence of relation (Experiment 3). Infants' discrimination of images based on relative positioning of items did not generalize to body-object pairs (Experiment 4). Early sensitivity to the relative positioning of bodies in a scene may be a building block of social cognition, preparing the discovery of the keel and backbone of social life: relations.


Subject(s)
Social Cognition , Visual Perception , Cognition , Humans , Infant
12.
J Cogn Neurosci ; 33(7): 1343-1353, 2021 06 01.
Article in English | MEDLINE | ID: mdl-34496405

ABSTRACT

To navigate the social world, humans must represent social entities and the relationships between those entities, starting with spatial relationships. Recent research suggests that two bodies are processed with particularly high efficiency in visual perception, when they are in a spatial positioning that cues interaction, that is, close and face-to-face. Socially relevant spatial relations such as facingness may facilitate visual perception by triggering grouping of bodies into a new integrated percept, which would make the stimuli more visible and easier to process. We used EEG and a frequency-tagging paradigm to measure a neural correlate of grouping (or visual binding), while female and male participants saw images of two bodies face-to-face or back-to-back. The two bodies in a dyad flickered at frequency F1 and F2, respectively, and appeared together at a third frequency Fd (dyad frequency). This stimulation should elicit a periodic neural response for each body at F1 and F2, and a third response at Fd, which would be larger for face-to-face (vs. back-to-back) bodies, if those stimuli yield additional integrative processing. Results showed that responses at F1 and F2 were higher for upright than for inverted bodies, demonstrating that our paradigm could capture neural activity associated with viewing bodies. Crucially, the response to dyads at Fd was larger for face-to-face (vs. back-to-back) dyads, suggesting integration mediated by grouping. We propose that spatial relations that recur in social interaction (i.e., facingness) promote binding of multiple bodies into a new representation. This mechanism can explain how the visual system contributes to integrating and transforming the representation of disconnected body shapes into structured representations of social events.


Subject(s)
Human Body , Visual Perception , Cues , Female , Humans , Male , Orientation, Spatial , Pattern Recognition, Visual , Photic Stimulation , Social Interaction
13.
Cereb Cortex ; 31(5): 2670-2685, 2021 03 31.
Article in English | MEDLINE | ID: mdl-33401307

ABSTRACT

Representing multiple agents and their mutual relations is a prerequisite to understand social events such as interactions. Using functional magnetic resonance imaging on human adults, we show that visual areas dedicated to body form and body motion perception contribute to processing social events, by holding the representation of multiple moving bodies and encoding the spatial relations between them. In particular, seeing animations of human bodies facing and moving toward (vs. away from) each other increased neural activity in the body-selective cortex [extrastriate body area (EBA)] and posterior superior temporal sulcus (pSTS) for biological motion perception. In those areas, representation of body postures and movements, as well as of the overall scene, was more accurate for facing body (vs. nonfacing body) stimuli. Effective connectivity analysis with dynamic causal modeling revealed increased coupling between EBA and pSTS during perception of facing body stimuli. The perceptual enhancement of multiple-body scenes featuring cues of interaction (i.e., face-to-face positioning, spatial proximity, and approaching signals) was supported by the participants' better performance in a recognition task with facing body versus nonfacing body stimuli. Thus, visuospatial cues of interaction in multiple-person scenarios affect the perceptual representation of body and body motion and, by promoting functional integration, streamline the process from body perception to action representation.


Subject(s)
Form Perception/physiology , Motion Perception/physiology , Social Cognition , Social Perception , Spatial Processing/physiology , Temporal Lobe/diagnostic imaging , Visual Cortex/diagnostic imaging , Adult , Cues , Female , Functional Neuroimaging , Human Body , Humans , Magnetic Resonance Imaging , Male , Temporal Lobe/physiology , Visual Cortex/physiology , Young Adult
14.
Cortex ; 132: 473-478, 2020 11.
Article in English | MEDLINE | ID: mdl-32698947

ABSTRACT

Human vision serves the social function of detecting and discriminating with high efficiency conspecifics and other animals. The social world is made of social entities as much as the relations between those entities. Recent work demonstrates that vision encodes visuo-spatial relations between bodies with the same efficiency and high specialization of face/body perception. Specifically, perception of face-to-face (vs. non-facing) bodies evokes effects compatible with the most robust markers of face-specificity such as the behavioral inversion effect and increased activity in selective visual areas. Another set of results suggests that face-to-face bodies are processed as a grouped unit, analogously to facial features in a face. The facing dyad in the visual cortex may be the earliest rudimentary representation of social interaction.


Subject(s)
Facial Recognition , Visual Cortex , Animals , Humans , Pattern Recognition, Visual , Social Perception , Visual Perception
15.
J Neurosci ; 40(4): 852-863, 2020 01 22.
Article in English | MEDLINE | ID: mdl-31801812

ABSTRACT

Human social nature has shaped visual perception. A signature of the relationship between vision and sociality is a particular visual sensitivity to social entities such as faces and bodies. We asked whether human vision also exhibits a special sensitivity to spatial relations that reliably correlate with social relations. In general, interacting people are more often situated face-to-face than back-to-back. Using functional MRI and behavioral measures in female and male human participants, we show that visual sensitivity to social stimuli extends to images including two bodies facing toward (vs away from) each other. In particular, the inferior lateral occipital cortex, which is involved in visual-object perception, is organized such that the inferior portion encodes the number of bodies (one vs two) and the superior portion is selectively sensitive to the spatial relation between bodies (facing vs nonfacing). Moreover, functionally localized, body-selective visual cortex responded to facing bodies more strongly than identical, but nonfacing, bodies. In this area, multivariate pattern analysis revealed an accurate representation of body dyads with sharpening of the representation of single-body postures in facing dyads, which demonstrates an effect of visual context on the perceptual analysis of a body. Finally, the cost of body inversion (upside-down rotation) on body recognition, a behavioral signature of a specialized mechanism for body perception, was larger for facing versus nonfacing dyads. Thus, spatial relations between multiple bodies are encoded in regions for body perception and affect the way in which bodies are processed.

SIGNIFICANCE STATEMENT Human social nature has shaped visual perception. Here, we show that human vision is not only attuned to socially relevant entities, such as bodies, but also to socially relevant spatial relations between those entities. 
Body-selective regions of visual cortex respond more strongly to multiple bodies that appear to be interacting (i.e., face-to-face), relative to unrelated bodies, and more accurately represent single body postures in interacting scenarios. Moreover, recognition of facing bodies is particularly susceptible to perturbation by upside-down rotation, indicative of a particular visual sensitivity to the canonical appearance of facing bodies. This encoding of relations between multiple bodies in areas for body-shape recognition suggests that the visual context in which a body is encountered deeply affects its perceptual analysis.


Subject(s)
Recognition, Psychology/physiology , Social Perception , Visual Cortex/diagnostic imaging , Visual Perception/physiology , Adult , Female , Functional Neuroimaging , Humans , Magnetic Resonance Imaging , Male , Photic Stimulation , Young Adult
16.
Psychol Sci ; 30(10): 1483-1496, 2019 10.
Article in English | MEDLINE | ID: mdl-31532709

ABSTRACT

Humans can effectively search visual scenes by spatial location, visual feature, or whole object. Here, we showed that visual search can also benefit from fast appraisal of relations between individuals in human groups. Healthy adults searched for a facing (seemingly interacting) body dyad among nonfacing dyads or a nonfacing dyad among facing dyads. We varied the task parameters to emphasize processing of targets or distractors. Facing-dyad targets were more likely to recruit attention than nonfacing-dyad targets (Experiments 1, 2, and 4). Facing-dyad distractors were checked and rejected more efficiently than nonfacing-dyad distractors (Experiment 3). Moreover, search for an individual body was more difficult when it was embedded in a facing dyad than in a nonfacing dyad (Experiment 5). We propose that fast grouping of interacting bodies in one attentional unit is the mechanism that accounts for efficient processing of dyads within human groups and for the inefficient access to individual parts within a dyad.


Subject(s)
Attention , Human Body , Pattern Recognition, Visual/physiology , Social Perception , Adult , Female , Humans , Male , Reaction Time , Young Adult
17.
J Neurosci ; 39(30): 5966-5974, 2019 07 24.
Article in English | MEDLINE | ID: mdl-31126999

ABSTRACT

The middle temporal gyrus (MTG) has been shown to be recruited during the processing of words, but also during the observation of actions. Here we investigated how information related to words and gestures is organized along the MTG. To this aim, we measured the BOLD response in the MTG to video clips of gestures and spoken words in 17 healthy human adults (male and female). Gestures consisted of videos of an actress performing object-use pantomimes (iconic representations of object-directed actions; e.g., playing guitar), emblems (conventional gestures, e.g., thumb up), and meaningless gestures. Word stimuli (verbs, nouns) consisted of video clips of the same actress pronouncing words. We found a stronger response to meaningful compared with meaningless gestures along the whole left and large portions of the right MTG. Importantly, we observed a gradient, with posterior regions responding more strongly to gestures (pantomimes and emblems) than words and anterior regions showing a stronger response to words than gestures. In an intermediate region in the left hemisphere, the response was significantly higher to words and emblems (i.e., items with a greater arbitrariness of the sign-to-meaning mapping) than to pantomimes. These results show that the large-scale organization of information in the MTG is driven by the input modality and may also reflect the arbitrariness of the relationship between sign and meaning.

SIGNIFICANCE STATEMENT Here we investigated the organizing principle of information in the middle temporal gyrus, taking into consideration the input modality and the arbitrariness of the relationship between a sign and its meaning. We compared the middle temporal gyrus response during the processing of pantomimes, emblems, and spoken words. We found that posterior regions responded more strongly to pantomimes and emblems than to words, whereas anterior regions responded more strongly to words than to pantomimes and emblems. 
In an intermediate region, only in the left hemisphere, words and emblems evoked a stronger response than pantomimes. Our results identify two organizing principles of neural representation: the modality of communication (gestural or verbal) and the (arbitrariness of the) relationship between sign and meanings.


Subject(s)
Gestures , Language , Speech/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Photic Stimulation/methods , Random Allocation , Young Adult
18.
J Exp Psychol Hum Percept Perform ; 45(7): 877-888, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30998069

ABSTRACT

Detection and recognition of social interactions unfolding in the surroundings is as vital as detection and recognition of faces, bodies, and animate entities in general. We have demonstrated that the visual system is particularly sensitive to a configuration with two bodies facing each other as if interacting. In four experiments using backward masking on healthy adults, we investigated the properties of this dyadic visual representation. We measured the inversion effect (IE), the cost on recognition of seeing bodies upside-down as opposed to upright, as an index of visual sensitivity: the greater the visual sensitivity, the greater the IE. The IE was increased for facing (vs. nonfacing) dyads, whether the head/face direction was visible or not, which implies that visual sensitivity concerns two bodies, not just two faces/heads. Moreover, the difference in IE for facing versus nonfacing dyads disappeared when one body was replaced by another object. This implies selective sensitivity to a body facing another body, as opposed to a body facing anything. Finally, the IE was reduced when reciprocity was eliminated (one body faced another, but the latter faced away). Thus, the visual system is selectively sensitive to dyadic configurations that approximate a prototypical social exchange, with two bodies spatially close and mutually accessible to one another. These findings reveal visual configural representations encompassing multiple objects, which could provide fast and automatic parsing of complex relationships beyond individual faces or bodies.


Subject(s)
Interpersonal Relations , Visual Perception , Facial Recognition , Female , Humans , Male , Photic Stimulation , Recognition, Psychology , Young Adult
19.
Behav Res Methods ; 51(6): 2817-2826, 2019 12.
Article in English | MEDLINE | ID: mdl-30542913

ABSTRACT

Recent years have witnessed a growing interest in behavioral and neuroimaging studies on the processing of symbolic communicative gestures, such as pantomimes and emblems, but well-controlled stimuli have been scarce. This study describes a dataset of more than 200 video clips of an actress performing pantomimes (gestures that mimic object-directed/object-use actions; e.g., playing guitar), emblems (conventional gestures; e.g., thumbs up), and meaningless gestures. Gestures were divided into four lists. For each of these four lists, 50 Italian and 50 American raters judged the meaningfulness of the gestures and provided names and descriptions for them. The results of these rating and norming measures are reported separately for the Italian and American raters, offering the first normed set of meaningful and meaningless gestures for experimental studies. The stimuli are available for download via the Figshare database.


Subject(s)
Comprehension , Emblems and Insignia , Gestures , Female , Humans
20.
Sci Rep ; 7(1): 14040, 2017 10 25.
Article in English | MEDLINE | ID: mdl-29070901

ABSTRACT

How do humans recognize humans among other creatures? Recent studies suggest that a preference for conspecifics may emerge already in perceptual processing, in regions such as the right posterior superior temporal sulcus (pSTS), implicated in visual perception of biological motion. In the current functional MRI study, participants viewed point-light displays of human and nonhuman creatures moving in their typical bipedal (man and chicken) or quadrupedal mode (crawling-baby and cat). Stronger activity for man and chicken versus baby and cat was found in the right pSTS responsive to biological motion. The novel effect of pedalism suggests that, if the right pSTS contributes to recognizing conspecifics, it does so by detecting perceptual features (e.g., bipedal motion) that reliably correlate with their appearance. A searchlight multivariate pattern analysis could decode humans and nonhumans across pedalism in the left pSTS and bilateral posterior cingulate cortex. This result implies a categorical human-nonhuman distinction, independent of within-category physical/perceptual variation. Thus, recognizing conspecifics involves visual classification based on perceptual features that most frequently co-occur with humans, such as bipedalism, and retrieval of information that determines category membership above and beyond visual appearance. The current findings show that these processes are at work in separate brain networks.


Subject(s)
Gait , Visual Perception , Adult , Animals , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Motion Perception , Photic Stimulation