Results 1 - 20 of 113
1.
Cortex ; 169: 35-49, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37852041

ABSTRACT

Humans rely heavily on facial expressions for social communication, both to convey their own thoughts and emotions and to understand them in others. One prominent but controversial view is that humans learn to recognize the significance of facial expressions by mimicking the expressions of others. This view predicts that an inability to make facial expressions (e.g., facial paralysis) would result in reduced perceptual sensitivity to others' facial expressions. To test this hypothesis, we developed a diverse battery of sensitive emotion recognition tasks to characterize expression perception in individuals with Moebius Syndrome (MBS), a congenital neurological disorder that causes facial palsy. Using computer-based detection tasks, we systematically assessed expression perception thresholds for static and dynamic face and body expressions. We found that while MBS individuals were able to perform challenging perceptual control tasks and body expression tasks, they were less efficient at extracting emotion from facial expressions than matched controls. Exploratory analyses of fMRI data from a small group of MBS participants suggested reduced engagement of the amygdala during expression processing relative to matched controls. Collectively, these results suggest a role for facial mimicry, and the consequent facial feedback and motor experience, in the perception of others' facial expressions.


Subject(s)
Facial Paralysis; Facial Recognition; Mobius Syndrome; Humans; Facial Expression; Emotions; Mobius Syndrome/complications; Facial Paralysis/etiology; Facial Paralysis/psychology; Perception; Social Perception
2.
bioRxiv ; 2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37886588

ABSTRACT

Functional magnetic resonance imaging (fMRI) studies have identified a network of face-selective regions distributed across the human brain. In the present study, we analyzed data from a large, gender-balanced group of participants to investigate how reliably these face-selective regions could be identified across both cerebral hemispheres. Participants (N = 52) were scanned with fMRI while viewing short videos of faces, bodies, and objects. Results revealed that five face-selective regions (the fusiform face area (FFA), posterior superior temporal sulcus (pSTS), anterior superior temporal sulcus (aSTS), inferior frontal gyrus (IFG), and the amygdala) were all larger in the right than in the left hemisphere. The occipital face area (OFA) was likewise larger in the right hemisphere, but the difference between the hemispheres was not significant. The neural response to moving faces was also greater in face-selective regions in the right than in the left hemisphere. An additional analysis revealed that the pSTS and IFG were significantly larger in the right hemisphere compared with other face-selective regions. This pattern of results demonstrates that moving faces are preferentially processed in the right hemisphere and that the pSTS and IFG appear to be the strongest drivers of this laterality. An analysis of gender revealed that face-selective regions were typically larger in females (N = 26) than in males (N = 26), but this difference was not statistically significant.
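The hemispheric comparisons summarized above reduce to paired tests of a region's size measured in each participant's two hemispheres. As a rough illustration only (not the authors' analysis code; the ROI name and voxel counts are made up), such a comparison could be run as follows:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant ROI sizes (voxel counts) for one region,
# e.g. the FFA, measured separately in each hemisphere (N = 52).
rng = np.random.default_rng(0)
ffa_right = rng.normal(loc=420, scale=80, size=52)   # placeholder values
ffa_left = rng.normal(loc=360, scale=80, size=52)    # placeholder values

# Paired t-test: is the region reliably larger in the right hemisphere?
t_stat, p_val = stats.ttest_rel(ffa_right, ffa_left)
print(f"right vs. left FFA size: t(51) = {t_stat:.2f}, p = {p_val:.3f}")

# The same comparison could be repeated for each face-selective region
# (OFA, pSTS, aSTS, IFG, amygdala) and for the female vs. male subgroups.
```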

3.
Neuroimage ; 273: 120067, 2023 06.
Article in English | MEDLINE | ID: mdl-36997134

ABSTRACT

Both the primate visual system and artificial deep neural network (DNN) models show an extraordinary ability to classify facial expression and identity simultaneously. However, the neural computations underlying the two systems are unclear. Here, we developed a multi-task DNN model that optimally classified both monkey facial expressions and identities. By comparing fMRI-based neural representations in the macaque visual cortex with the best-performing DNN model, we found that both systems (1) share initial stages for processing low-level face features, which segregate into separate branches at later stages for processing facial expression and identity, respectively, and (2) gain more specificity for either facial expression or identity as one progresses along each branch toward higher stages. Correspondence analysis between the DNN and monkey visual areas revealed that the amygdala and anterior fundus face patch (AF) matched well with later layers of the DNN's facial expression branch, while the anterior medial face patch (AM) matched well with later layers of the DNN's facial identity branch. Our results highlight the anatomical and functional similarities between the macaque visual system and the DNN model, suggesting a common mechanism underlying the two systems.
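The branching architecture described here, shared early layers that later split into expression- and identity-specific streams, can be sketched compactly. The PyTorch-style skeleton below is only an illustration of that multi-task design under assumed layer sizes and class counts; it is not the authors' model.

```python
import torch
import torch.nn as nn

class MultiTaskFaceNet(nn.Module):
    """Toy sketch of a shared-trunk, two-branch network: a common stem for
    low-level face features, then separate branches for expression and identity."""
    def __init__(self, n_expressions=4, n_identities=16):  # placeholder class counts
        super().__init__()
        # Shared early stages (analogous to early face-feature processing)
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Later stages segregate into task-specific branches
        self.expression_branch = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_expressions),
        )
        self.identity_branch = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_identities),
        )

    def forward(self, x):
        shared = self.trunk(x)
        return self.expression_branch(shared), self.identity_branch(shared)

# Training would minimize the sum of the two task losses, e.g.
# loss = ce(expr_logits, expr_labels) + ce(id_logits, id_labels).
```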


Subject(s)
Facial Expression; Macaca; Animals; Neural Networks, Computer; Primates; Magnetic Resonance Imaging/methods; Pattern Recognition, Visual
4.
J Neurosci ; 43(4): 621-634, 2023 01 25.
Article in English | MEDLINE | ID: mdl-36639892

ABSTRACT

Humans can label and categorize objects in a visual scene with high accuracy and speed, a capacity well characterized in studies using static images. However, motion is another cue that the visual system could use to classify objects. To determine how motion-defined object category information is processed by the brain in the absence of luminance-defined form information, we created a novel stimulus set of "object kinematograms" to isolate motion-defined signals from other sources of visual information. Object kinematograms were generated by extracting motion information from videos of six object categories and applying the motion to limited-lifetime random-dot patterns. Using functional magnetic resonance imaging (fMRI; n = 15, 40% women), we investigated whether category information from the object kinematograms could be decoded within the occipitotemporal and parietal cortex and evaluated whether the information overlapped with category responses to static images from the original videos. We decoded object category for both stimulus formats in all higher-order regions of interest (ROIs). More posterior occipitotemporal and ventral regions showed higher accuracy in the static condition, while more anterior occipitotemporal and dorsal regions showed higher accuracy in the dynamic condition. Further, decoding across the two stimulus formats was possible in all regions. These results demonstrate that motion cues can elicit widespread and robust category responses on par with those elicited by static luminance cues, even in ventral regions of visual cortex that have traditionally been associated primarily with image-defined form processing.

SIGNIFICANCE STATEMENT: Much research on visual object recognition has focused on recognizing objects in static images. However, motion is a rich source of information that humans might also use to categorize objects. Here, we present the first study to compare neural representations of several animate and inanimate objects when category information is presented in two formats: static cues or isolated dynamic motion cues. Our study shows that, while higher-order brain regions differentially process object categories depending on format, they also contain robust, abstract category representations that generalize across format. These results expand our understanding of motion-derived animate and inanimate object category processing and provide useful tools for future research on object category processing driven by multiple sources of visual information.
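As a hedged sketch of the stimulus logic behind "object kinematograms", the snippet below animates limited-lifetime random dots with an externally supplied motion field, so that only the motion, never a static contour, carries object information. The function name, dot counts, and the toy flow field are assumptions for illustration, not the authors' stimulus code.

```python
import numpy as np

def limited_lifetime_dots(flow, n_dots=300, lifetime=5, n_frames=60, size=256, seed=0):
    """Rough sketch of a limited-lifetime random-dot kinematogram.

    flow(frame, xy) -> per-dot displacement; in the actual stimuli this would be
    the motion extracted from an object video (names and shapes here are assumptions).
    """
    rng = np.random.default_rng(seed)
    xy = rng.uniform(0, size, size=(n_dots, 2))        # dot positions
    age = rng.integers(0, lifetime, size=n_dots)       # staggered dot ages
    frames = []
    for t in range(n_frames):
        xy = xy + flow(t, xy)                          # move dots with the object's motion
        age += 1
        expired = age >= lifetime                      # dots die after `lifetime` frames...
        xy[expired] = rng.uniform(0, size, size=(expired.sum(), 2))  # ...and are reborn elsewhere
        age[expired] = 0
        frames.append(np.clip(xy, 0, size - 1).copy())
    return frames  # only the motion, not any static contour, carries the object's form

# Example: a rigid rightward drift as a stand-in flow field.
frames = limited_lifetime_dots(lambda t, xy: np.array([1.5, 0.0]), n_frames=30)
```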


Subject(s)
Pattern Recognition, Visual; Visual Cortex; Humans; Female; Male; Pattern Recognition, Visual/physiology; Visual Perception/physiology; Brain/physiology; Visual Cortex/physiology; Magnetic Resonance Imaging; Brain Mapping; Photic Stimulation
5.
Nat Commun ; 13(1): 6787, 2022 11 09.
Article in English | MEDLINE | ID: mdl-36351907

ABSTRACT

Although the presence of face patches in primate inferotemporal (IT) cortex is well established, the functional and causal relationships among these patches remain elusive. In two monkeys, muscimol was infused sequentially into each patch or pair of patches to assess their respective influence on the remaining IT face network and the amygdala, as determined using fMRI. The results revealed that anterior face patches required input from middle face patches for their responses to both faces and objects, while the face selectivity in middle face patches arose, in part, from top-down input from anterior face patches. Moreover, we uncovered a parallel fundal-lateral functional organization in the IT face network, supporting dual routes (dorsal-ventral) in face processing within IT cortex as well as between IT cortex and the amygdala. Our findings of the causal relationship among the face patches demonstrate that the IT face circuit is organized into multiple functional compartments.


Subject(s)
Brain Mapping; Magnetic Resonance Imaging; Animals; Photic Stimulation/methods; Macaca mulatta; Cerebral Cortex/physiology; Pattern Recognition, Visual/physiology
7.
Sci Adv ; 8(47): eadd6865, 2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36427322

ABSTRACT

Body language is a powerful tool that we use to communicate how we feel, but it is unclear whether other primates also communicate in this way. Here, we use functional magnetic resonance imaging to show that the body-selective patches in macaques are activated by affective body language. Unexpectedly, we found these regions to be tolerant of naturalistic variation in posture as well as species; the bodies of macaques, humans, and domestic cats all evoked a stronger response when they conveyed fear than when they conveyed no affect. Multivariate analyses confirmed that the neural representation of fear-related body expressions was species-invariant. Collectively, these findings demonstrate that, like humans, macaques have body-selective brain regions in the ventral visual pathway for processing affective body language. These data also indicate that representations of body stimuli in these regions are built on the basis of emergent properties, such as socio-affective meaning, and not just putative image properties.

8.
Nat Commun ; 13(1): 6302, 2022 10 22.
Article in English | MEDLINE | ID: mdl-36273204

ABSTRACT

Viewing faces that are perceived as emotionally expressive evokes enhanced neural responses in multiple brain regions, a phenomenon thought to depend critically on the amygdala. This emotion-related modulation is evident even in primary visual cortex (V1), providing a potential neural substrate by which emotionally salient stimuli can affect perception. How does emotional valence information, computed in the amygdala, reach V1? Here we use high-resolution functional MRI to investigate the layer profile and retinotopic distribution of neural activity specific to emotional facial expressions. Across three experiments, human participants viewed centrally presented face stimuli varying in emotional expression and performed a gender judgment task. We found that facial valence sensitivity was evident only in superficial cortical layers and was not restricted to the retinotopic location of the stimuli, consistent with diffuse feedback-like projections from the amygdala. Together, our results point to a feedback mechanism by which the amygdala directly modulates activity at the earliest stage of cortical visual processing.


Subject(s)
Facial Expression; Visual Cortex; Humans; Visual Cortex/physiology; Emotions/physiology; Amygdala/physiology; Visual Perception/physiology; Brain Mapping; Magnetic Resonance Imaging
9.
Cereb Cortex Commun ; 3(3): tgac036, 2022.
Article in English | MEDLINE | ID: mdl-36159205

ABSTRACT

Neuroimaging studies have identified multiple face-selective areas in the human brain. In the current study, we compared the functional response of the face area in the lateral prefrontal cortex with that of other face-selective areas. In Experiment 1, participants (n = 32) were scanned while viewing videos containing faces, bodies, scenes, objects, and scrambled objects. We identified a face-selective area in the right inferior frontal gyrus (rIFG). In Experiment 2, participants (n = 24) viewed the same videos or static images. Results showed that the rIFG, right posterior superior temporal sulcus (rpSTS), and right occipital face area (rOFA) exhibited a greater response to moving than to static faces. In Experiment 3, participants (n = 18) viewed face videos in the contralateral and ipsilateral visual fields. Results showed that the rIFG and rpSTS showed no visual field bias, while the rOFA and right fusiform face area (rFFA) showed a contralateral bias. These experiments suggest two conclusions. First, across all three experiments, the face area in the IFG was not identified as reliably as the face areas in occipitotemporal cortex. Second, the similarity of the response profiles in the IFG and pSTS suggests that these areas may perform similar cognitive functions, a conclusion consistent with prior neuroanatomical and functional connectivity evidence.

10.
Soc Cogn Affect Neurosci ; 17(11): 965-976, 2022 11 02.
Article in English | MEDLINE | ID: mdl-35445247

ABSTRACT

Face detection is a foundational social skill for primates. This vital function is thought to be supported by specialized neural mechanisms; however, although several face-selective regions have been identified in both humans and nonhuman primates, there is no consensus about which region(s) are involved in face detection. Here, we used naturally occurring errors of face detection (i.e. objects with illusory facial features referred to as examples of 'face pareidolia') to identify regions of the macaque brain implicated in face detection. Using whole-brain functional magnetic resonance imaging to test awake rhesus macaques, we discovered that a subset of face-selective patches in the inferior temporal cortex, on the lower lateral edge of the superior temporal sulcus, and the amygdala respond more to objects with illusory facial features than matched non-face objects. Multivariate analyses of the data revealed differences in the representation of illusory faces across the functionally defined regions of interest. These differences suggest that the cortical and subcortical face-selective regions contribute uniquely to the detection of facial features. We conclude that face detection is supported by a multiplexed system in the primate brain.


Subject(s)
Brain Mapping; Illusions; Animals; Humans; Pattern Recognition, Visual; Macaca mulatta; Magnetic Resonance Imaging/methods; Temporal Lobe
11.
Brain Struct Funct ; 227(4): 1423-1438, 2022 May.
Article in English | MEDLINE | ID: mdl-34792643

ABSTRACT

Faces and bodies are often treated as distinct categories that are processed separately by face- and body-selective brain regions in the primate visual system. These regions occupy distinct territories of visual cortex and are often thought to constitute independent functional networks. Yet faces and bodies are part of the same object, and their presence inevitably covaries in naturalistic settings. Here, we re-evaluate both the evidence supporting the independent processing of faces and bodies and the organizational principles that have been invoked to explain this distinction. We outline four hypotheses ranging from completely separate networks to a single network supporting the perception of whole people or animals. The current evidence, especially in humans, is compatible with all of these hypotheses, making it presently unclear how the representation of faces and bodies is organized in the cortex.


Subject(s)
Brain Mapping; Visual Cortex; Animals; Humans; Magnetic Resonance Imaging; Pattern Recognition, Visual; Photic Stimulation; Primates; Visual Perception
12.
J Vis ; 21(4): 3, 2021 04 01.
Article in English | MEDLINE | ID: mdl-33798259

ABSTRACT

The current experiment investigated the extent to which perceptual categorization of animacy (i.e., the ability to discriminate animate and inanimate objects) is facilitated by image-based features that distinguish the two object categories. We show that, with nominal training, naïve macaques could classify a trial-unique set of 1000 novel images with high accuracy. To test whether image-based features that naturally differ between animate and inanimate objects, such as curvilinear and rectilinear information, contribute to the monkeys' accuracy, we created synthetic images using an algorithm that distorted the global shape of the original animate/inanimate images while maintaining their intermediate features (Portilla & Simoncelli, 2000). Performance on the synthesized images was significantly above chance and was predicted by the amount of curvilinear information in the images. Our results demonstrate that, without training, macaques can use an intermediate image feature, curvilinearity, to facilitate their categorization of animate and inanimate objects.
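The final claim, that performance on the shape-distorted images is predicted by their curvilinear content, amounts to regressing per-image accuracy on a per-image curvilinearity measure and testing accuracy against chance. A generic sketch of that check, with entirely hypothetical data and variable names, might look like this:

```python
import numpy as np
from scipy import stats

# Hypothetical per-image data for the synthesized stimuli: proportion correct
# across monkeys, and a curvilinear-energy measure per image (both placeholders).
rng = np.random.default_rng(3)
n_images = 1000
curvilinear_energy = rng.uniform(0, 1, n_images)
accuracy = 0.5 + 0.2 * curvilinear_energy + rng.normal(0, 0.1, n_images)

# Is performance on the shape-distorted images predicted by curvilinear content?
slope, intercept, r, p, se = stats.linregress(curvilinear_energy, accuracy)
print(f"accuracy ~ curvilinearity: r = {r:.2f}, p = {p:.3g}")

# A one-sample test against chance (0.5) would establish the above-chance
# performance reported for the synthesized images.
t_stat, p_chance = stats.ttest_1samp(accuracy, 0.5)
```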


Subject(s)
Macaca; Animals
13.
Neuroimage ; 235: 117997, 2021 07 15.
Article in English | MEDLINE | ID: mdl-33789138

ABSTRACT

Functional neuroimaging research in the non-human primate (NHP) has been advancing at a remarkable rate. The increase in available data establishes a need for robust analysis pipelines designed for NHP neuroimaging and for accompanying template spaces to standardize the localization of neuroimaging results. Our group recently developed the NIMH Macaque Template (NMT), a high-resolution population-average anatomical template and associated neuroimaging resources, providing researchers with a standard space for macaque neuroimaging. Here, we release NMT v2, which includes both symmetric and asymmetric templates in stereotaxic orientation, with improvements in spatial contrast, processing efficiency, and segmentation. We also introduce the Cortical Hierarchy Atlas of the Rhesus Macaque (CHARM), a hierarchical parcellation of the macaque cerebral cortex with varying degrees of detail. These tools have been integrated into the neuroimaging analysis software AFNI to provide a comprehensive and robust pipeline for fMRI processing, visualization, and analysis of NHP data. AFNI's new @animal_warper program can be used to efficiently align anatomical scans to the NMT v2 space, and afni_proc.py integrates these results with full fMRI processing using macaque-specific parameters, from motion correction through regression modeling. Taken together, NMT v2 and AFNI represent an all-in-one package for macaque functional neuroimaging analysis, as demonstrated with available demos for both task and resting-state fMRI.
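As a hedged illustration of the workflow described above, the snippet below strings the two AFNI programs together from Python. The file names are placeholders and the options are abridged; the full, recommended macaque-specific invocations should be taken from the AFNI NHP demo scripts that accompany NMT v2.

```python
import subprocess

# Placeholder file names; "NMT_v2.nii.gz" stands in for the NMT v2 template volume.
anat, epi, template = "sub01_anat.nii.gz", "sub01_task_run1.nii.gz", "NMT_v2.nii.gz"

# 1) Nonlinearly align the subject's anatomical scan to the NMT v2 template.
#    (Options abridged; see the AFNI macaque demos for the recommended set.)
subprocess.run(
    ["@animal_warper", "-input", anat, "-base", template, "-outdir", "aw_results"],
    check=True,
)

# 2) Run a standard afni_proc.py pipeline on the functional data.
#    Macaque-specific choices (blur size, alignment costs, use of the
#    @animal_warper output) are again left to the demo scripts.
subprocess.run(
    [
        "afni_proc.py",
        "-subj_id", "sub01",
        "-dsets", epi,
        "-copy_anat", anat,
        "-blocks", "tshift", "align", "tlrc", "volreg", "blur", "mask", "scale", "regress",
        "-tlrc_base", template,
    ],
    check=True,
)
```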


Subject(s)
Atlases as Topic; Brain/diagnostic imaging; Brain/physiology; Functional Neuroimaging; Macaca mulatta/physiology; Magnetic Resonance Imaging; Animals; Female; Male
14.
Trends Cogn Sci ; 25(2): 100-110, 2021 02.
Article in English | MEDLINE | ID: mdl-33334693

ABSTRACT

Existing models propose that primate visual cortex is divided into two functionally distinct pathways. The ventral pathway computes the identity of an object; the dorsal pathway computes the location of an object and the actions related to that object. Although it remains influential, the two-pathway model requires revision. Both human and non-human primate studies reveal the existence of a third visual pathway on the lateral brain surface. This third pathway projects from early visual cortex, via motion-selective areas, into the superior temporal sulcus (STS). Evidence that the STS computes the actions of moving faces and bodies (e.g., expressions, eye gaze, audio-visual integration, intention, and mood) indicates that the third visual pathway is specialized for the dynamic aspects of social perception.


Subject(s)
Visual Cortex; Visual Pathways; Brain Mapping; Face; Magnetic Resonance Imaging; Photic Stimulation; Social Perception; Visual Perception
15.
Neuroimage ; 227: 117622, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33301944

ABSTRACT

The MNI CIVET pipeline for automated extraction of cortical surfaces and evaluation of cortical thickness from in vivo human MRI has been extended for processing macaque brains. Processing is performed with the NIMH Macaque Template (NMT) as the reference template, with anatomical parcellation of the surface following the D99 and CHARM atlases. The modifications needed to adapt CIVET to the macaque brain are detailed. Results have been obtained using CIVET-macaque to process the anatomical scans of the 31 macaques used to generate the NMT and of another 95 macaques from the PRIME-DE initiative. It is anticipated that open usage of CIVET-macaque will promote collaborative efforts in data collection, processing, sharing, and automated analysis, from which the non-human primate brain imaging field will advance.


Subject(s)
Brain Cortical Thickness; Cerebral Cortex/diagnostic imaging; Image Processing, Computer-Assisted/methods; Animals; Macaca mulatta; Magnetic Resonance Imaging; Software
16.
Proc Natl Acad Sci U S A ; 117(48): 30836-30847, 2020 12 01.
Article in English | MEDLINE | ID: mdl-33199608

ABSTRACT

Figure-ground modulation, i.e., the enhancement of neuronal responses evoked by the figure relative to the background, has three complementary components: edge modulation (boundary detection), center modulation (region filling), and background modulation (background suppression). However, the neuronal mechanisms mediating these three modulations, and how they depend on awareness, remain unclear. For each modulation, we compared both the cueing effect produced in a Posner paradigm and the fMRI blood oxygen level-dependent (BOLD) signal in primary visual cortex (V1) evoked by visible relative to invisible orientation-defined figures. We found that edge modulation was independent of awareness, whereas both center and background modulations were strongly modulated by awareness, with greater modulation in the visible than in the invisible condition. Effective-connectivity analysis further showed that the awareness-dependent region-filling and background-suppression processes in V1 were not derived through intracortical interactions within V1, but rather from feedback from the frontal eye field (FEF) and dorsolateral prefrontal cortex (DLPFC), respectively. These results point to a source of awareness-dependent figure-ground segregation in human prefrontal cortex.


Subject(s)
Awareness; Prefrontal Cortex/physiology; Visual Perception; Brain Mapping; Connectome; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Pattern Recognition, Visual; Photic Stimulation; Prefrontal Cortex/diagnostic imaging; Visual Cortex/diagnostic imaging; Visual Cortex/physiology
17.
Prog Neurobiol ; 195: 101880, 2020 12.
Article in English | MEDLINE | ID: mdl-32918972

ABSTRACT

In the 1970s, Charlie Gross was among the first to identify neurons in the macaque inferior temporal (IT) cortex that respond selectively to faces. This seminal finding has been followed by numerous studies quantifying the visual features that trigger a response from face cells, in order to answer the question: what do face cells want? However, the connection between face-selective activity in IT cortex and visual perception remains only partially understood. Here we present fMRI results in the macaque showing that some face patches respond to illusory facial features in objects. We argue that, to fully understand the functional role of face cells, we need to develop approaches that test the extent to which their responses explain what we see.


Subject(s)
Brain Mapping; Facial Recognition/physiology; Illusions/physiology; Neurons/physiology; Prefrontal Cortex/physiology; Temporal Lobe/physiology; Animals; Behavior, Animal/physiology; Macaca mulatta; Magnetic Resonance Imaging; Prefrontal Cortex/diagnostic imaging; Temporal Lobe/diagnostic imaging
18.
Netw Neurosci ; 4(3): 746-760, 2020.
Article in English | MEDLINE | ID: mdl-32885124

ABSTRACT

Humans process faces by using a network of face-selective regions distributed across the brain. Neuropsychological patient studies demonstrate that focal damage to nodes in this network can impair face recognition, but such patients are rare. We approximated the effects of damage to the face network in neurologically normal human participants by using theta-burst transcranial magnetic stimulation (TBS). Multi-echo functional magnetic resonance imaging (fMRI) resting-state data were collected before and after TBS delivery over the face-selective right posterior superior temporal sulcus (rpSTS) or over a control site in the right motor cortex. Results showed that TBS delivered over the rpSTS reduced resting-state connectivity across the extended face-processing network. This connectivity reduction was observed not only between the rpSTS and other face-selective areas, but also between non-stimulated face-selective areas across the ventral, medial, and lateral brain surfaces (e.g., between the right amygdala and the bilateral fusiform face areas and occipital face areas). TBS delivered over the motor cortex did not produce significant changes in resting-state connectivity across the face-processing network. These results demonstrate that, even without task-induced fMRI signal changes, disrupting a single node in a brain network can decrease the functional connectivity between nodes of that network that have not been directly stimulated.
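The connectivity analysis described above boils down to comparing ROI-by-ROI correlation matrices computed from the pre- and post-TBS resting-state runs. The sketch below illustrates that comparison with placeholder time series and ROI names; it is not the study's pipeline.

```python
import numpy as np

def connectivity_matrix(ts):
    """ROI-by-ROI functional connectivity: Pearson correlations between ROI
    time series. `ts` has shape (n_timepoints, n_rois)."""
    return np.corrcoef(ts, rowvar=False)

# Hypothetical mean time series for a few face-network ROIs (timepoints x ROIs).
rois = ["rpSTS", "rFFA", "lFFA", "rOFA", "lOFA", "rAmygdala"]
rng = np.random.default_rng(1)
ts_pre = rng.standard_normal((300, len(rois)))    # pre-TBS resting-state run (placeholder)
ts_post = rng.standard_normal((300, len(rois)))   # post-TBS resting-state run (placeholder)

fc_pre = connectivity_matrix(ts_pre)
fc_post = connectivity_matrix(ts_post)

# Zero the self-correlations, then Fisher z-transform before contrasting sessions.
np.fill_diagonal(fc_pre, 0.0)
np.fill_diagonal(fc_post, 0.0)
delta = np.arctanh(fc_post) - np.arctanh(fc_pre)

# A post-TBS reduction in coupling between two non-stimulated nodes would show
# up as a negative entry, e.g. between the right FFA and the right amygdala.
i, j = rois.index("rFFA"), rois.index("rAmygdala")
print(f"change in rFFA-rAmygdala coupling (z units): {delta[i, j]:+.3f}")
```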

19.
J Neurosci ; 40(42): 8119-8131, 2020 10 14.
Article in English | MEDLINE | ID: mdl-32928886

ABSTRACT

When we move the features of our face, or turn our head, we communicate changes in our internal state to the people around us. How this information is encoded and used by an observer's brain is poorly understood. We investigated this issue using a functional MRI adaptation paradigm in awake male macaques. Among face-selective patches of the superior temporal sulcus (STS), we found a double dissociation between areas processing facial expression and those processing head orientation. The face-selective patches in the STS fundus were most sensitive to facial expression, as was the amygdala, whereas those on the lower, lateral edge of the sulcus were most sensitive to head orientation. The results of this study reveal a new dimension of functional organization, with face-selective patches segregating within the STS. The findings thus force a rethinking of the role of the face-processing system in representing subject-directed actions and supporting social cognition.

SIGNIFICANCE STATEMENT: When we are interacting with another person, we make inferences about their emotional state based on visual signals. For example, when a person's facial expression changes, we are given information about their feelings. While primates are thought to have specialized cortical mechanisms for analyzing the identity of faces, less is known about how these mechanisms unpack transient signals, like expression, that can change from one moment to the next. Here, using an fMRI adaptation paradigm, we demonstrate that while the identity of a face is held constant, there are separate mechanisms in the macaque brain for processing transient changes in the face's expression and orientation. These findings shed new light on the function of the face-processing system during social exchanges.


Subject(s)
Facial Expression; Motion Perception/physiology; Orientation; Social Perception; Amygdala/diagnostic imaging; Amygdala/physiology; Animals; Cognition; Head; Image Processing, Computer-Assisted; Macaca mulatta; Magnetic Resonance Imaging; Male; Temporal Lobe/diagnostic imaging; Temporal Lobe/physiology
20.
Neuroimage ; 222: 117295, 2020 11 15.
Article in English | MEDLINE | ID: mdl-32835823

ABSTRACT

Curvature is one of many visual features shown to be important for visual perception. We recently showed that curvilinear features provide sufficient information for categorizing animate vs. inanimate objects, while rectilinear features do not (Zachariou et al., 2018). Results from our fMRI study in rhesus monkeys (Yue et al., 2014) have shed light on some of the neural substrates underlying curvature processing by revealing a network of visual cortical patches with a curvature response preference. However, it is unknown whether a similar network exists in human visual cortex. Thus, the current study was designed to investigate cortical areas with a preference for curvature in the human brain using fMRI at 7T. Consistent with our monkey fMRI results, we found a network of curvature-preferring cortical patches, some of which overlapped well-known face-selective areas. Moreover, principal component analysis (PCA) using all visually responsive voxels indicated that curvilinear features of visual stimuli were associated with specific retinotopic regions in visual cortex. Regions associated with positive curvilinear PC values encompassed the central visual field representation of early visual areas and the lateral surface of temporal cortex, while those associated with negative curvilinear PC values encompassed the peripheral visual field representation of early visual areas and the medial surface of temporal cortex. Thus, we found that broad areas of curvature preference, which encompassed face-selective areas, were bound by central visual field representations. Our results support the hypothesis that curvilinearity preference interacts with central-peripheral processing biases as primary features underlying the organization of temporal cortex topography in the adult human brain.
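As a rough illustration of the PCA logic described above (not the study's code; the response matrix and curvilinearity scores are placeholders), one could identify a "curvilinear" component and split voxels by the sign of their scores on it as follows:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical inputs: responses of visually responsive voxels to a stimulus set,
# plus a curvilinearity score per stimulus (e.g. curved-filter energy).
rng = np.random.default_rng(2)
n_voxels, n_stimuli = 5000, 40
responses = rng.standard_normal((n_voxels, n_stimuli))   # voxels x stimuli (placeholder)
curvilinearity = rng.standard_normal(n_stimuli)          # per-stimulus score (placeholder)

# PCA over the stimulus dimension: each component is a response profile across
# stimuli, and each voxel gets a positive or negative score on that component.
pca = PCA(n_components=5)
voxel_scores = pca.fit_transform(responses)              # (n_voxels, 5)
stimulus_profiles = pca.components_                      # (5, n_stimuli)

# Identify the component whose stimulus profile tracks curvilinear content.
corrs = [np.corrcoef(p, curvilinearity)[0, 1] for p in stimulus_profiles]
curv_pc = int(np.argmax(np.abs(corrs)))
print(f"PC{curv_pc + 1} correlates with curvilinearity: r = {corrs[curv_pc]:+.2f}")

# Voxels with positive vs. negative scores on this component could then be mapped
# back to cortex to ask whether they fall in central vs. peripheral field regions.
pos_voxels = voxel_scores[:, curv_pc] > 0
```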


Subject(s)
Face/physiology; Pattern Recognition, Visual/physiology; Visual Cortex/physiology; Visual Pathways/physiology; Adult; Female; Humans; Magnetic Resonance Imaging/methods; Photic Stimulation/methods; Temporal Lobe/physiology; Young Adult