Results 1 - 20 of 108
1.
J Neurosci ; 43(4): 621-634, 2023 01 25.
Article in English | MEDLINE | ID: mdl-36639892

ABSTRACT

Humans can label and categorize objects in a visual scene with high accuracy and speed, a capacity well characterized with studies using static images. However, motion is another cue that could be used by the visual system to classify objects. To determine how motion-defined object category information is processed by the brain in the absence of luminance-defined form information, we created a novel stimulus set of "object kinematograms" to isolate motion-defined signals from other sources of visual information. Object kinematograms were generated by extracting motion information from videos of 6 object categories and applying the motion to limited-lifetime random dot patterns. Using functional magnetic resonance imaging (fMRI) (n = 15, 40% women), we investigated whether category information from the object kinematograms could be decoded within the occipitotemporal and parietal cortex and evaluated whether the information overlapped with category responses to static images from the original videos. We decoded object category for both stimulus formats in all higher-order regions of interest (ROIs). More posterior occipitotemporal and ventral regions showed higher accuracy in the static condition, while more anterior occipitotemporal and dorsal regions showed higher accuracy in the dynamic condition. Further, decoding across the two stimulus formats was possible in all regions. These results demonstrate that motion cues can elicit widespread and robust category responses on par with those elicited by static luminance cues, even in ventral regions of visual cortex that have traditionally been associated with primarily image-defined form processing. SIGNIFICANCE STATEMENT: Much research on visual object recognition has focused on recognizing objects in static images. However, motion is a rich source of information that humans might also use to categorize objects.
Here, we present the first study to compare neural representations of several animate and inanimate objects when category information is presented in two formats: static cues or isolated dynamic motion cues. Our study shows that, while higher-order brain regions differentially process object categories depending on format, they also contain robust, abstract category representations that generalize across format. These results expand our previous understanding of motion-derived animate and inanimate object category processing and provide useful tools for future research on object category processing driven by multiple sources of visual information.
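The cross-format decoding described above can be sketched with simulated data: train a classifier on patterns from one stimulus format and test it on the other. Everything below (pattern sizes, noise levels, the `roi_patterns` helper) is an illustrative toy, not the study's actual analysis:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_cat, n_trials, n_vox = 6, 20, 50
proto = rng.normal(size=(n_cat, n_vox))   # category structure shared across formats

def roi_patterns():
    """Simulated single-trial ROI patterns: noisy copies of the prototypes."""
    X = np.repeat(proto, n_trials, axis=0)
    return X + rng.normal(size=X.shape)

y = np.repeat(np.arange(n_cat), n_trials)
X_static, X_dynamic = roi_patterns(), roi_patterns()   # two stimulus formats

clf = LinearSVC().fit(X_static, y)        # train on one format...
acc = clf.score(X_dynamic, y)             # ...decode the other
print(f"cross-format accuracy: {acc:.2f} (chance = {1/6:.2f})")
```

Above-chance accuracy in this transfer direction is the signature of a format-general category representation.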


Subject(s)
Visual Pattern Recognition, Visual Cortex, Humans, Female, Male, Visual Pattern Recognition/physiology, Visual Perception/physiology, Brain/physiology, Visual Cortex/physiology, Magnetic Resonance Imaging, Brain Mapping, Photic Stimulation
2.
Neuroimage ; 273: 120067, 2023 06.
Article in English | MEDLINE | ID: mdl-36997134

ABSTRACT

Both the primate visual system and artificial deep neural network (DNN) models show an extraordinary ability to simultaneously classify facial expression and identity. However, the neural computations underlying the two systems are unclear. Here, we developed a multi-task DNN model that optimally classified both monkey facial expressions and identities. By comparing the fMRI neural representations of the macaque visual cortex with the best-performing DNN model, we found that both systems: (1) share initial stages for processing low-level face features which segregate into separate branches at later stages for processing facial expression and identity respectively, and (2) gain more specificity for the processing of either facial expression or identity as one progresses along each branch towards higher stages. Correspondence analysis between the DNN and monkey visual areas revealed that the amygdala and anterior fundus face patch (AF) matched well with later layers of the DNN's facial expression branch, while the anterior medial face patch (AM) matched well with later layers of the DNN's facial identity branch. Our results highlight the anatomical and functional similarities between macaque visual system and DNN model, suggesting a common mechanism between the two systems.
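The branching architecture the comparison relies on, a shared trunk for low-level face features that splits into expression and identity heads, can be sketched as a forward pass in NumPy. Layer sizes and random weights below are arbitrary placeholders, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_shared, n_expr, n_id = 256, 64, 4, 8   # arbitrary illustrative sizes

# shared trunk (early stages common to both tasks)
W_shared = 0.1 * rng.normal(size=(n_pix, n_shared))
# branches that segregate at later stages
W_expr = 0.1 * rng.normal(size=(n_shared, n_expr))   # facial-expression head
W_id   = 0.1 * rng.normal(size=(n_shared, n_id))     # facial-identity head

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    h = np.maximum(0.0, x @ W_shared)     # shared low-level face features (ReLU)
    return softmax(h @ W_expr), softmax(h @ W_id)

x = rng.normal(size=(1, n_pix))           # one flattened "face image"
p_expr, p_id = forward(x)
print(p_expr.shape, p_id.shape)           # (1, 4) (1, 8)
```

Each branch produces its own class distribution from the same shared features, which is what allows layer-by-layer correspondence analysis against separate face patches.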


Subject(s)
Facial Expression, Macaca, Animals, Neural Networks (Computer), Primates, Magnetic Resonance Imaging/methods, Visual Pattern Recognition
3.
Proc Natl Acad Sci U S A ; 117(48): 30836-30847, 2020 12 01.
Article in English | MEDLINE | ID: mdl-33199608

ABSTRACT

Figure-ground modulation, i.e., the enhancement of neuronal responses evoked by the figure relative to the background, has three complementary components: edge modulation (boundary detection), center modulation (region filling), and background modulation (background suppression). However, the neuronal mechanisms mediating these three modulations and how they depend on awareness remain unclear. For each modulation, we compared both the cueing effect produced in a Posner paradigm and fMRI blood oxygen-level dependent (BOLD) signal in primary visual cortex (V1) evoked by visible relative to invisible orientation-defined figures. We found that edge modulation was independent of awareness, whereas both center and background modulations were strongly modulated by awareness, with greater modulations in the visible than the invisible condition. Effective-connectivity analysis further showed that the awareness-dependent region-filling and background-suppression processes in V1 were not derived through intracortical interactions within V1, but rather by feedback from the frontal eye field (FEF) and dorsolateral prefrontal cortex (DLPFC), respectively. These results indicate a source for an awareness-dependent figure-ground segregation in human prefrontal cortex.


Subject(s)
Awareness, Prefrontal Cortex/physiology, Visual Perception, Brain Mapping, Connectome, Female, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Visual Pattern Recognition, Photic Stimulation, Prefrontal Cortex/diagnostic imaging, Visual Cortex/diagnostic imaging, Visual Cortex/physiology
4.
J Neurosci ; 40(42): 8119-8131, 2020 10 14.
Article in English | MEDLINE | ID: mdl-32928886

ABSTRACT

When we move the features of our face, or turn our head, we communicate changes in our internal state to the people around us. How this information is encoded and used by an observer's brain is poorly understood. We investigated this issue using a functional MRI adaptation paradigm in awake male macaques. Among face-selective patches of the superior temporal sulcus (STS), we found a double dissociation of areas processing facial expression and those processing head orientation. The face-selective patches in the STS fundus were most sensitive to facial expression, as was the amygdala, whereas those on the lower, lateral edge of the sulcus were most sensitive to head orientation. The results of this study reveal a new dimension of functional organization, with face-selective patches segregating within the STS. The findings thus force a rethinking of the role of the face-processing system in representing subject-directed actions and supporting social cognition. SIGNIFICANCE STATEMENT: When we are interacting with another person, we make inferences about their emotional state based on visual signals. For example, when a person's facial expression changes, we are given information about their feelings. While primates are thought to have specialized cortical mechanisms for analyzing the identity of faces, less is known about how these mechanisms unpack transient signals, like expression, that can change from one moment to the next. Here, using an fMRI adaptation paradigm, we demonstrate that while the identity of a face is held constant, there are separate mechanisms in the macaque brain for processing transient changes in the face's expression and orientation. These findings shed new light on the function of the face-processing system during social exchanges.


Subject(s)
Facial Expression, Motion Perception/physiology, Orientation, Social Perception, Amygdala/diagnostic imaging, Amygdala/physiology, Animals, Cognition, Head, Computer-Assisted Image Processing, Macaca mulatta, Magnetic Resonance Imaging, Male, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology
5.
Neuroimage ; 235: 117997, 2021 07 15.
Article in English | MEDLINE | ID: mdl-33789138

ABSTRACT

Functional neuroimaging research in the non-human primate (NHP) has been advancing at a remarkable rate. The increase in available data establishes a need for robust analysis pipelines designed for NHP neuroimaging and accompanying template spaces to standardize the localization of neuroimaging results. Our group recently developed the NIMH Macaque Template (NMT), a high-resolution population average anatomical template and associated neuroimaging resources, providing researchers with a standard space for macaque neuroimaging. Here, we release NMT v2, which includes both symmetric and asymmetric templates in stereotaxic orientation, with improvements in spatial contrast, processing efficiency, and segmentation. We also introduce the Cortical Hierarchy Atlas of the Rhesus Macaque (CHARM), a hierarchical parcellation of the macaque cerebral cortex with varying degrees of detail. These tools have been integrated into the neuroimaging analysis software AFNI to provide a comprehensive and robust pipeline for fMRI processing, visualization and analysis of NHP data. AFNI's new @animal_warper program can be used to efficiently align anatomical scans to the NMT v2 space, and afni_proc.py integrates these results with full fMRI processing using macaque-specific parameters: from motion correction through regression modeling. Taken together, the NMT v2 and AFNI represent an all-in-one package for macaque functional neuroimaging analysis, as demonstrated with available demos for both task and resting state fMRI.


Subject(s)
Atlases as Topic, Brain/diagnostic imaging, Brain/physiology, Functional Neuroimaging, Macaca mulatta/physiology, Magnetic Resonance Imaging, Animals, Female, Male
6.
PLoS Biol ; 16(6): e2005399, 2018 06.
Article in English | MEDLINE | ID: mdl-29939981

ABSTRACT

Feature-based attention has a spatially global effect, i.e., responses to stimuli that share features with an attended stimulus are enhanced not only at the attended location but throughout the visual field. However, how feature-based attention modulates cortical neural responses at unattended locations remains unclear. Here we used functional magnetic resonance imaging (fMRI) to examine this issue as human participants performed motion- (Experiment 1) and color- (Experiment 2) based attention tasks. Results indicated that, in both experiments, the respective visual processing areas (middle temporal area [MT+] for motion and V4 for color) as well as early visual, parietal, and prefrontal areas all showed the classic feature-based attention effect, with neural responses to the unattended stimulus significantly elevated when it shared the same feature with the attended stimulus. Effective connectivity analysis using dynamic causal modeling (DCM) showed that this spatially global effect in the respective visual processing areas (MT+ for motion and V4 for color), intraparietal sulcus (IPS), frontal eye field (FEF), medial frontal gyrus (mFG), and primary visual cortex (V1) was derived by feedback from the inferior frontal junction (IFJ). Complementary effective connectivity analysis using Granger causality modeling (GCM) confirmed that, in both experiments, the node with the highest outflow and netflow degree was IFJ, which was thus considered to be the source of the network. These results indicate a source for the spatially global effect of feature-based attention in the human prefrontal cortex.
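The Granger causality idea behind the complementary GCM analysis reduces to asking whether past values of one signal improve prediction of another beyond the other's own past. A minimal bivariate sketch with simulated time series (two lags, no significance testing, all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(0)
T, lags = 500, 2

# simulate two signals where x drives y at a one-sample delay
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.normal()

def rss(target, predictors):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    resid = target - predictors @ beta
    return resid @ resid

def granger_stat(src, dst):
    """Relative RSS reduction when src's lags are added to dst's own lags."""
    d = dst[lags:]
    own = np.column_stack([dst[lags - k:T - k] for k in range(1, lags + 1)])
    full = np.column_stack([own] + [src[lags - k:T - k] for k in range(1, lags + 1)])
    return (rss(d, own) - rss(d, full)) / rss(d, full)

print(granger_stat(x, y) > granger_stat(y, x))  # True: stronger x-to-y influence
```

Computing such statistics for every directed pair of nodes and summing outflow per node is, in spirit, how a "source" region like IFJ is identified.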


Subject(s)
Prefrontal Cortex/physiology, Adult, Attention/physiology, Brain Mapping, Color Perception/physiology, Connectome, Female, Humans, Magnetic Resonance Imaging, Male, Neurological Models, Psychological Models, Motion Perception/physiology, Photic Stimulation, Visual Cortex/physiology, Visual Fields/physiology, Young Adult
7.
Cereb Cortex ; 30(2): 778-785, 2020 03 21.
Article in English | MEDLINE | ID: mdl-31264693

ABSTRACT

Neuroimaging studies show that ventral face-selective regions, including the fusiform face area (FFA) and occipital face area (OFA), preferentially respond to faces presented in the contralateral visual field (VF). In the current study we measured the VF response of the face-selective posterior superior temporal sulcus (pSTS). Across 3 functional magnetic resonance imaging experiments, participants viewed face videos presented in different parts of the VF. Consistent with prior results, we observed a contralateral VF bias in bilateral FFA, right OFA (rOFA), and bilateral human motion-selective area MT+. Intriguingly, this contralateral VF bias was absent in the bilateral pSTS. We then delivered transcranial magnetic stimulation (TMS) over right pSTS (rpSTS) and rOFA, while participants matched facial expressions in both hemifields. TMS delivered over the rpSTS disrupted performance in both hemifields, but TMS delivered over the rOFA disrupted performance in the contralateral hemifield only. These converging results demonstrate that the contralateral bias for faces observed in ventral face-selective areas is absent in the pSTS. This difference in VF response is consistent with face processing models proposing 2 functionally distinct pathways. It further suggests that these models should account for differences in interhemispheric connections between the face-selective areas across these 2 pathways.


Subject(s)
Facial Recognition/physiology, Temporal Lobe/physiology, Brain Mapping, Facial Expression, Female, Humans, Magnetic Resonance Imaging, Male, Transcranial Magnetic Stimulation, Visual Fields
8.
Proc Natl Acad Sci U S A ; 115(31): 8043-8048, 2018 07 31.
Article in English | MEDLINE | ID: mdl-30012600

ABSTRACT

In free-viewing experiments, primates orient preferentially toward faces and face-like stimuli. To investigate the neural basis of this behavior, we measured the spontaneous viewing preferences of monkeys with selective bilateral amygdala lesions. The results revealed that when faces and nonface objects were presented simultaneously, monkeys with amygdala lesions had no viewing preference for either conspecific faces or illusory facial features in everyday objects. Instead of directing eye movements toward socially relevant features in natural images, we found that, after amygdala loss, monkeys are biased toward features with increased low-level salience. We conclude that the amygdala has a role in our earliest specialized response to faces, a behavior thought to be a precursor for efficient social communication and essential for the development of face-selective cortex.


Subject(s)
Amygdala/physiology, Visual Pattern Recognition, Visual Perception, Animals, Eye Movements, Face, Female, Macaca mulatta, Male
9.
J Vis ; 21(4): 3, 2021 04 01.
Article in English | MEDLINE | ID: mdl-33798259

ABSTRACT

The current experiment investigated the extent to which perceptual categorization of animacy (i.e., the ability to discriminate animate and inanimate objects) is facilitated by image-based features that distinguish the two object categories. We show that, with nominal training, naïve macaques could classify a trial-unique set of 1000 novel images with high accuracy. To test whether image-based features that naturally differ between animate and inanimate objects, such as curvilinear and rectilinear information, contribute to the monkeys' accuracy, we created synthetic images using an algorithm that distorted the global shape of the original animate/inanimate images while maintaining their intermediate features (Portilla & Simoncelli, 2000). Performance on the synthesized images was significantly above chance and was predicted by the amount of curvilinear information in the images. Our results demonstrate that, without training, macaques can use an intermediate image feature, curvilinearity, to facilitate their categorization of animate and inanimate objects.
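The core claim, that a single intermediate feature can support animate/inanimate categorization, can be illustrated by classifying on a scalar curvilinearity score alone. The per-image scores below are hypothetical simulated numbers, not measurements from the stimulus set:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_img = 400

# hypothetical curvilinear-energy scores: animate images tend to carry more
# curvilinear information than inanimate ones (distributions are made up)
animate = rng.normal(1.0, 0.6, n_img // 2)
inanimate = rng.normal(0.0, 0.6, n_img // 2)
X = np.concatenate([animate, inanimate]).reshape(-1, 1)
y = np.repeat([1, 0], n_img // 2)          # 1 = animate, 0 = inanimate

clf = LogisticRegression().fit(X, y)       # classify from the one feature
print(f"accuracy from curvilinearity alone: {clf.score(X, y):.2f}")
```

Above-chance accuracy from a single image statistic is the kind of result that makes curvilinearity a plausible cue for the monkeys' untrained categorization.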


Subject(s)
Macaca, Animals
10.
Neuroimage ; 222: 117295, 2020 11 15.
Article in English | MEDLINE | ID: mdl-32835823

ABSTRACT

Curvature is one of many visual features shown to be important for visual perception. We recently showed that curvilinear features provide sufficient information for categorizing animate vs. inanimate objects, while rectilinear features do not (Zachariou et al., 2018). Results from our fMRI study in rhesus monkeys (Yue et al., 2014) have shed light on some of the neural substrates underlying curvature processing by revealing a network of visual cortical patches with a curvature response preference. However, it is unknown whether a similar network exists in human visual cortex. Thus, the current study was designed to investigate cortical areas with a preference for curvature in the human brain using fMRI at 7T. Consistent with our monkey fMRI results, we found a network of curvature-preferring cortical patches, some of which overlapped well-known face-selective areas. Moreover, principal component analysis (PCA) using all visually-responsive voxels indicated that curvilinear features of visual stimuli were associated with specific retinotopic regions in visual cortex. Regions associated with positive curvilinear PC values encompassed the central visual field representation of early visual areas and the lateral surface of temporal cortex, while those associated with negative curvilinear PC values encompassed the peripheral visual field representation of early visual areas and the medial surface of temporal cortex. Thus, we found that broad areas of curvature preference, which encompassed face-selective areas, were bound by central visual field representations. Our results support the hypothesis that curvilinearity preference interacts with central-peripheral processing biases as primary features underlying the organization of temporal cortex topography in the adult human brain.
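The voxel-wise PCA logic can be shown with toy data: if voxel responses are organized along a curvilinearity axis, the first principal component across visually responsive voxels recovers that axis, with positive- and negative-loading voxels corresponding to the two patch types. All quantities below are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_stim, n_vox = 40, 200

curv = np.linspace(-1.0, 1.0, n_stim)      # hypothetical curvilinear content per stimulus
pref = rng.normal(size=n_vox)              # per-voxel curvature preference (sign = patch type)
resp = np.outer(curv, pref) + 0.3 * rng.normal(size=(n_stim, n_vox))

pca = PCA(n_components=2).fit(resp.T)      # voxels as samples, stimuli as features
pc1 = pca.components_[0]                   # PC1 profile across stimuli
r = abs(np.corrcoef(pc1, curv)[0, 1])
print(f"|corr(PC1, curvilinear content)| = {r:.2f}")
```

In this toy setup PC1 tracks the stimuli's curvilinear content almost perfectly, and each voxel's PC1 score plays the role of the positive or negative "curvilinear PC value" mapped back onto cortex.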


Subject(s)
Face/physiology, Visual Pattern Recognition/physiology, Visual Cortex/physiology, Visual Pathways/physiology, Adult, Female, Humans, Magnetic Resonance Imaging/methods, Photic Stimulation/methods, Temporal Lobe/physiology, Young Adult
11.
Neuroimage ; 218: 116878, 2020 09.
Article in English | MEDLINE | ID: mdl-32360168

ABSTRACT

Facial motion plays a fundamental role in the recognition of facial expressions in primates, but the neural substrates underlying this special type of biological motion are not well understood. Here, we used fMRI to investigate the extent to which the specialization for facial motion is represented in the visual system and compared the neural mechanisms for the processing of non-rigid facial motion in macaque monkeys and humans. We defined the areas specialized for facial motion as those significantly more activated when subjects perceived the motion caused by dynamic faces (dynamic faces > static faces) than when they perceived the motion caused by dynamic non-face objects (dynamic objects > static objects). We found that, in monkeys, significant activations evoked by facial motion were in the fundus of anterior superior temporal sulcus (STS), which overlapped the anterior fundus face patch. In humans, facial motion activated three separate foci in the right STS: posterior, middle, and anterior STS, with the anterior STS location showing the most selectivity for facial motion compared with other facial motion areas. In both monkeys and humans, facial motion shows a gradient preference as one progresses anteriorly along the STS. Taken together, our results indicate that monkeys and humans share similar neural substrates within the anterior temporal lobe specialized for the processing of non-rigid facial motion.


Subject(s)
Facial Expression, Facial Recognition/physiology, Temporal Lobe/physiology, Adult, Animals, Emotions/physiology, Female, Humans, Computer-Assisted Image Processing/methods, Macaca, Magnetic Resonance Imaging/methods, Male, Motion (Physics)
12.
PLoS Biol ; 14(11): e1002578, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27870851

ABSTRACT

The normalization model of attention proposes that attention can affect performance by response- or contrast-gain changes, depending on the size of the stimulus and attention field. Here, we manipulated the attention field by emotional valence, negative faces versus positive faces, while holding stimulus size constant in a spatial cueing task. We observed changes in the cueing effect consonant with changes in response gain for negative faces and contrast gain for positive faces. Neuroimaging experiments confirmed that subjects' attention fields were narrowed for negative faces and broadened for positive faces. Importantly, across subjects, the self-reported emotional strength of negative faces and positive faces correlated, respectively, both with response- and contrast-gain changes and with primary visual cortex (V1) narrowed and broadened attention fields. Effective connectivity analysis showed that the emotional valence-dependent attention field was closely associated with feedback from the dorsolateral prefrontal cortex (DLPFC) to V1. These findings indicate a crucial involvement of DLPFC in the normalization processes of emotional attention.
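In the normalization framework, the two regimes can be illustrated with a Naka-Rushton contrast-response function: a response-gain change scales the whole curve (largest effect at high contrast), while a contrast-gain change shifts the semi-saturation contrast c50 leftward (largest effect at intermediate contrast). The parameter values below are arbitrary, chosen only to make the two signatures visible:

```python
import numpy as np

def naka_rushton(c, rmax=1.0, c50=0.2, n=2.0):
    """Contrast-response function used in normalization models of attention."""
    return rmax * c**n / (c**n + c50**n)

c = np.array([0.05, 0.1, 0.2, 0.5, 1.0])   # stimulus contrasts

baseline = naka_rushton(c)
resp_gain = naka_rushton(c, rmax=1.5)      # narrowed attention field (negative faces)
ctr_gain = naka_rushton(c, c50=0.1)        # broadened attention field (positive faces)

# response gain dominates at high contrast; contrast gain at low-to-mid contrast
print((resp_gain - baseline)[-1] > (ctr_gain - baseline)[-1])  # True
print((ctr_gain - baseline)[1] > (resp_gain - baseline)[1])    # True
```

Fitting which of these two distortions best accounts for measured cueing effects is how response-gain versus contrast-gain changes are distinguished behaviorally.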


Subject(s)
Attention/physiology, Emotions/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Prefrontal Cortex/diagnostic imaging, Prefrontal Cortex/physiology, Psychophysics
13.
J Vis ; 19(7): 16, 2019 07 01.
Article in English | MEDLINE | ID: mdl-31355865

ABSTRACT

Humans have a remarkable ability to predict the actions of others. To address what information enables this prediction and how the information is modulated by social context, we used videos collected during an interactive reaching game. Two participants (an "initiator" and a "responder") sat on either side of a plexiglass screen on which two targets were affixed. The initiator was directed to tap one of the two targets, and the responder had to either beat the initiator to the target (competition) or arrive at the same time (cooperation). In a psychophysics experiment, new observers predicted the direction of the initiators' reach from brief clips, which were clipped relative to when the initiator began reaching. A machine learning classifier performed the same task. Both humans and the classifier were able to determine the direction of movement before the finger lift-off in both social conditions. Further, using an information mapping technique, the relevant information was found to be distributed throughout the body of the initiator in both social conditions. Our results indicate that we reveal our intentions during cooperation, in which communicating the future course of actions is beneficial, and also during competition despite the social motivation to reveal less information.
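The machine-learning side of this design can be sketched as a classifier that predicts reach direction from pose features extracted before finger lift-off. The feature construction below is simulated (directional signal spread over several "joints"), standing in for whatever body-pose features a real pipeline would extract from the clips:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_clips, n_feat = 200, 30

y = rng.integers(0, 2, n_clips)                 # reach direction: 0 = left, 1 = right
X = rng.normal(size=(n_clips, n_feat))          # hypothetical per-clip pose features
X[:, :10] += 0.8 * (2 * y[:, None] - 1)         # signal distributed over several "joints"

acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"pre-lift-off decoding accuracy: {acc:.2f}")
```

Cross-validated accuracy above 0.5 before movement onset is the analogue of the paper's finding that both humans and the classifier could read the initiator's intention early.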


Subject(s)
Competitive Behavior/physiology, Cooperative Behavior, Intention, Adult, Female, Humans, Male, Movement/physiology, Psychomotor Performance/physiology, Psychophysics, Video Recording, Young Adult
14.
J Neurosci ; 37(5): 1156-1161, 2017 02 01.
Article in English | MEDLINE | ID: mdl-28011742

ABSTRACT

Nonhuman primate neuroanatomical studies have identified a cortical pathway from the superior temporal sulcus (STS) projecting into dorsal subregions of the amygdala, but whether this same pathway exists in humans is unknown. Here, we addressed this question by combining theta burst transcranial magnetic stimulation (TBS) with fMRI to test the prediction that the STS and amygdala are functionally connected during face perception. Human participants (N = 17) were scanned, over two sessions, while viewing 3 s video clips of moving faces, bodies, and objects. During these sessions, TBS was delivered over the face-selective right posterior STS (rpSTS) or over the vertex control site. A region-of-interest analysis revealed results consistent with our hypothesis. Namely, TBS delivered over the rpSTS reduced the neural response to faces (but not to bodies or objects) in the rpSTS, right anterior STS (raSTS), and right amygdala, compared with TBS delivered over the vertex. By contrast, TBS delivered over the rpSTS did not significantly reduce the neural response to faces in the right fusiform face area or right occipital face area. This pattern of results is consistent with the existence of a cortico-amygdala pathway in humans for processing face information projecting from the rpSTS, via the raSTS, into the amygdala. This conclusion is consistent with nonhuman primate neuroanatomy and with existing face perception models. SIGNIFICANCE STATEMENT: Neuroimaging studies have identified multiple face-selective regions in the brain, but the functional connections between these regions are unknown. In the present study, participants were scanned with fMRI while viewing movie clips of faces, bodies, and objects before and after transient disruption of the face-selective right posterior superior temporal sulcus (rpSTS). 
Results showed that TBS disruption reduced the neural response to faces, but not to bodies or objects, in the rpSTS, right anterior STS (raSTS), and right amygdala. These results are consistent with the existence of a cortico-amygdala pathway in humans for processing face information projecting from the rpSTS, via the raSTS, into the amygdala. This conclusion is consistent with nonhuman primate neuroanatomy and with existing face perception models.


Subject(s)
Amygdala/anatomy & histology, Temporal Lobe/anatomy & histology, Adult, Brain Mapping, Face, Female, Functional Laterality/physiology, Humans, Magnetic Resonance Imaging, Male, Neural Pathways/anatomy & histology, Visual Pattern Recognition/physiology, Photic Stimulation, Transcranial Magnetic Stimulation, Visual Perception/physiology
15.
J Cogn Neurosci ; 30(10): 1499-1516, 2018 10.
Article in English | MEDLINE | ID: mdl-29877768

ABSTRACT

The fusiform and occipital face areas (FFA and OFA) are functionally defined brain regions in human ventral occipitotemporal cortex associated with face perception. There is an ongoing debate, however, whether these regions are face-specific or whether they also facilitate the perception of nonface object categories. Here, we present evidence that, under certain conditions, bilateral FFA and OFA respond to a nonface category equivalently to faces. In two fMRI sessions, participants performed same-different judgments on two object categories (faces and chairs). In one session, participants differentiated between distinct exemplars of each category, and in the other session, participants differentiated between exemplars that differed only in the shape or spatial configuration of their features (featural/configural differences). During the latter session, the within-category similarity was comparable for both object categories. When differentiating between distinct exemplars of each category, bilateral FFA and OFA responded more strongly to faces than to chairs. In contrast, during featural/configural difference judgments, bilateral FFA and OFA responded equivalently to both object categories. Importantly, during featural/configural difference judgments, the magnitude of activity within FFA and OFA evoked by the chair task predicted the participants' behavioral performance. In contrast, when participants differentiated between distinct chair exemplars, activity within these face regions did not predict the behavioral performance of the chair task. We conclude that, when the within-category similarity of a face and a nonface category is comparable and when the same cognitive strategies used to process a face are applied to a nonface category, the FFA and OFA respond equivalently to that nonface category and faces.


Subject(s)
Facial Recognition/physiology, Occipital Lobe/diagnostic imaging, Occipital Lobe/physiology, Photic Stimulation/methods, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology, Adult, Female, Humans, Magnetic Resonance Imaging/methods, Male, Reaction Time/physiology, Young Adult
16.
Cereb Cortex ; 27(8): 4124-4138, 2017 08 01.
Article in English | MEDLINE | ID: mdl-27522076

ABSTRACT

Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces.


Subject(s)
Brain/physiology, Facial Recognition/physiology, Visual Pathways/physiology, Adult, Brain/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Photic Stimulation, Reaction Time, Transcranial Magnetic Stimulation, Visual Pathways/diagnostic imaging, Young Adult
17.
Cereb Cortex ; 27(2): 1524-1531, 2017 02 01.
Article in English | MEDLINE | ID: mdl-26759479

ABSTRACT

In humans and monkeys, face perception activates a distributed cortical network that includes extrastriate, limbic, and prefrontal regions. Within face-responsive regions, emotional faces evoke stronger responses than neutral faces ("valence effect"). We used fMRI and Dynamic Causal Modeling (DCM) to test the hypothesis that emotional faces differentially alter the functional coupling among face-responsive regions. Three monkeys viewed conspecific faces with neutral, threatening, fearful, and appeasing expressions. Using Bayesian model selection, various models of neural interactions between the posterior (TEO) and anterior (TE) portions of inferior temporal (IT) cortex, the amygdala, the orbitofrontal (OFC), and ventrolateral prefrontal cortex (VLPFC) were tested. The valence effect was mediated by feedback connections from the amygdala to TE and TEO, and feedback connections from VLPFC to the amygdala and TE. Emotional faces were associated with differential effective connectivity: Fearful faces evoked stronger modulations in the connections from the amygdala to TE and TEO; threatening faces evoked weaker modulations in the connections from the amygdala and VLPFC to TE; and appeasing faces evoked weaker modulations in the connection from VLPFC to the amygdala. Our results suggest dynamic alterations in neural coupling during the perception of behaviorally relevant facial expressions that are vital for social communication.


Subject(s)
Amygdala/physiology , Emotions/physiology , Facial Expression , Neural Pathways/physiology , Temporal Lobe/physiology , Animals , Bayes Theorem , Brain Mapping , Evoked Potentials , Macaca , Magnetic Resonance Imaging/methods , Male
18.
Cereb Cortex ; 27(5): 2739-2757, 2017 05 01.
Article in English | MEDLINE | ID: mdl-27166166

ABSTRACT

We have an incomplete picture of how the brain links object representations to reward value, and how this information is stored and later retrieved. The orbitofrontal cortex (OFC), medial frontal cortex (MFC), and ventrolateral prefrontal cortex (VLPFC), together with the amygdala, are thought to play key roles in these processes. There is an apparent discrepancy, however, regarding frontal areas thought to encode value in macaque monkeys versus humans. To address this issue, we used fMRI in macaque monkeys to localize brain areas encoding recently learned image values. Each week, monkeys learned to associate images of novel objects with a high or low probability of water reward. Areas responding to the value of recently learned reward-predictive images included MFC area 10 m/32, VLPFC area 12, and inferior temporal visual cortex (IT). The amygdala and OFC, each thought to be involved in value encoding, showed little such effect. Instead, these 2 areas primarily responded to visual stimulation and reward receipt, respectively. Strong image value encoding in monkey MFC compared with OFC is surprising, but agrees with results from human imaging studies. Our findings demonstrate the importance of VLPFC, MFC, and IT in representing the values of recently learned visual images.
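The core value contrast described above — stronger responses to recently learned high-value than low-value images within an ROI — amounts to a paired comparison across sessions. A minimal sketch with a paired t-test; the signal-change values below are simulated, not data from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sessions = 12

# Simulated mean % signal change in a hypothetical ROI for images
# associated with high vs. low probability of water reward
high_value = rng.normal(0.8, 0.2, n_sessions)
low_value = rng.normal(0.3, 0.2, n_sessions)

# Paired test: does the ROI respond more to high-value images?
t, p = stats.ttest_rel(high_value, low_value)
```

A region like MFC area 10m/32 or VLPFC area 12 would show a reliably positive `t` for this contrast, whereas, per the abstract, OFC and the amygdala would not.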


Subject(s)
Association Learning/physiology , Frontal Lobe/physiology , Pattern Recognition, Visual/physiology , Reward , Visual Pathways/physiology , Amygdala/diagnostic imaging , Amygdala/physiology , Animals , Brain Mapping , Choice Behavior/physiology , Frontal Lobe/diagnostic imaging , Image Processing, Computer-Assisted , Macaca mulatta , Magnetic Resonance Imaging , Male , Mental Recall/physiology , Oxygen/blood , Photic Stimulation , Time Factors , Visual Pathways/diagnostic imaging
19.
Proc Natl Acad Sci U S A ; 112(24): E3123-30, 2015 Jun 16.
Article in English | MEDLINE | ID: mdl-26015576

ABSTRACT

Increasing evidence has shown that oxytocin (OT), a mammalian hormone, modifies the way social stimuli are perceived and the way they affect behavior. Thus, OT may serve as a treatment for psychiatric disorders, many of which are characterized by dysfunctional social behavior. To explore the neural mechanisms mediating the effects of OT in macaque monkeys, we investigated whether OT would modulate functional magnetic resonance imaging (fMRI) responses in face-responsive regions (faces vs. blank screen) evoked by the perception of various facial expressions (neutral, fearful, aggressive, and appeasing). In the placebo condition, we found significantly increased activation for emotional (mainly fearful and appeasing) faces compared with neutral faces across the face-responsive regions. OT selectively, and differentially, altered fMRI responses to emotional expressions, significantly reducing responses to both fearful and aggressive faces in face-responsive regions while leaving responses to appeasing as well as neutral faces unchanged. We also found that OT administration selectively reduced functional coupling between the amygdala and areas in the occipital and inferior temporal cortex during the viewing of fearful and aggressive faces, but not during the viewing of neutral or appeasing faces. Taken together, our results indicate homologies between monkeys and humans in the neural circuits mediating the effects of OT. Thus, the monkey may be an ideal animal model to explore the development of OT-based pharmacological strategies for treating patients with dysfunctional social behavior.


Subject(s)
Facial Expression , Macaca mulatta/physiology , Macaca mulatta/psychology , Oxytocin/physiology , Administration, Intranasal , Amygdala/drug effects , Amygdala/physiology , Animals , Behavior, Animal/drug effects , Behavior, Animal/physiology , Cerebral Cortex/drug effects , Cerebral Cortex/physiology , Emotions/drug effects , Emotions/physiology , Functional Neuroimaging , Magnetic Resonance Imaging , Male , Oxytocin/administration & dosage , Social Behavior , Social Perception
20.
J Vis ; 18(12): 3, 2018 11 01.
Article in English | MEDLINE | ID: mdl-30458511

ABSTRACT

Animate and inanimate objects differ in their intermediate visual features. For instance, animate objects tend to be more curvilinear compared to inanimate objects (e.g., Levin, Takarae, Miner, & Keil, 2001). Recently, it has been demonstrated that these differences in the intermediate visual features of animate and inanimate objects are sufficient for categorization: Human participants viewing synthesized images of animate and inanimate objects that differ largely in the amount of these visual features classify objects as animate/inanimate significantly above chance (Long, Störmer, & Alvarez, 2017). A remaining question, however, is whether the observed categorization is a consequence of top-down cognitive strategies (e.g., rectangular shapes are less likely to be animals) or a consequence of bottom-up processing of their intermediate visual features, per se, in the absence of top-down cognitive strategies. To address this issue, we repeated the classification experiment of Long et al. (2017) but, unlike Long et al. (2017), matched the synthesized images, on average, in the amount of image-based and perceived curvilinear and rectilinear information. Additionally, in our synthesized images, global shape information was not preserved, and the images appeared as texture patterns. These changes prevented participants from using top-down cognitive strategies to perform the task. During the experiment, participants were presented with these synthesized, texture-like animate and inanimate images and, on each trial, were required to classify them as either animate or inanimate with no feedback given. Participants were told that these synthesized images depicted abstract art patterns. We found that participants still classified the synthesized stimuli significantly above chance even though they were unaware of their classification performance. For both object categories, participants depended more on the curvilinear and less on the rectilinear, image-based information present in the stimuli for classification. Surprisingly, the stimuli most consistently classified as animate were the most dangerous animals in our sample of images. We conclude that bottom-up processing of intermediate features present in the visual input is sufficient for animate/inanimate object categorization and that these features may convey information associated with the affective content of the visual stimuli.
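The "significantly above chance" claim for a two-alternative animate/inanimate task is typically established with a one-sided binomial test against chance performance of 0.5. A minimal sketch; the trial counts below are illustrative, not the study's data:

```python
from scipy import stats

def above_chance_p(n_correct, n_trials, chance=0.5):
    """One-sided binomial test: probability of observing at least
    n_correct successes if the true accuracy were at chance."""
    result = stats.binomtest(n_correct, n_trials, p=chance,
                             alternative='greater')
    return result.pvalue

# Hypothetical participant: 130 correct out of 200 classification trials
p = above_chance_p(130, 200)
```

An accuracy of 65% over 200 trials yields a very small p-value, so even a modest sensitivity to the intermediate features would register as above-chance classification at the group level.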


Subject(s)
Classification/methods , Concept Formation/physiology , Pattern Recognition, Visual/physiology , Adult , Attention/physiology , Female , Humans , Male , Young Adult