Results 1 - 20 of 28
1.
Proc Natl Acad Sci U S A ; 114(51): 13435-13440, 2017 12 19.
Article in English | MEDLINE | ID: mdl-29203678

ABSTRACT

Incoming sensory input is condensed by our perceptual system to optimally represent and store information. In the temporal domain, this process has been described in terms of temporal windows (TWs) of integration/segregation, in which the phase of ongoing neural oscillations determines whether two stimuli are integrated into a single percept or segregated into separate events. However, TWs can vary substantially, raising the question of whether different TWs map onto unique oscillations or, rather, reflect a single, general fluctuation in cortical excitability (e.g., in the alpha band). We used multivariate decoding of electroencephalography (EEG) data to investigate perception of stimuli that either repeated in the same location (two-flash fusion) or moved in space (apparent motion). By manipulating the interstimulus interval (ISI), we created bistable stimuli that caused subjects to perceive either integration (fusion/apparent motion) or segregation (two unrelated flashes). Training a classifier searchlight on the whole channels/frequencies/times space, we found that the perceptual outcome (integration vs. segregation) could be reliably decoded from the phase of prestimulus oscillations in right parieto-occipital channels. The highest decoding accuracy for the two-flash fusion task (ISI = 40 ms) was evident in the phase of alpha oscillations (8-10 Hz), while the highest decoding accuracy for the apparent motion task (ISI = 120 ms) was evident in the phase of theta oscillations (6-7 Hz). These results reveal a precise relationship between specific TW durations and specific oscillations. Such oscillations at different frequencies may provide a hierarchical framework for the temporal organization of perception.
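The phase-decoding approach can be illustrated with a minimal, self-contained sketch: band-pass a prestimulus signal, take its analytic phase, and feed the cosine and sine of that phase to a cross-validated classifier. All data below are synthetic, and the channel, band edges, and window are illustrative placeholders rather than the study's actual pipeline.

```python
# Sketch: decode perceptual outcome (integration vs. segregation) from the
# phase of prestimulus oscillations in one frequency band at one channel.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
sfreq = 250                                   # sampling rate (Hz)
n_trials, n_times = 200, 250                  # 1 s prestimulus window per trial
labels = rng.integers(0, 2, n_trials)         # 0 = segregation, 1 = integration

# Synthetic single-channel EEG: a 9 Hz oscillation whose phase carries the label
t = np.arange(n_times) / sfreq
phase_offset = np.where(labels == 1, 0.0, np.pi)
eeg = np.sin(2 * np.pi * 9 * t + phase_offset[:, None]) + rng.normal(0, 1.5, (n_trials, n_times))

# Band-pass 8-10 Hz and take the analytic phase shortly before stimulus onset
b, a = butter(4, [8, 10], btype="bandpass", fs=sfreq)
filtered = filtfilt(b, a, eeg, axis=1)
phase = np.angle(hilbert(filtered, axis=1))[:, -25]   # 100 ms before onset

# Phase is circular, so give the classifier its cosine and sine components
X = np.column_stack([np.cos(phase), np.sin(phase)])
acc = cross_val_score(LogisticRegression(), X, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```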


Subjects
Alpha Rhythm, Theta Rhythm, Visual Perception, Brain/physiology, Female, Humans, Male, Reaction Time, Young Adult
2.
Cereb Cortex ; 27(8): 4277-4291, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28591837

ABSTRACT

Humans prioritize different semantic qualities of a complex stimulus depending on their behavioral goals. These semantic features are encoded in distributed neural populations, yet it is unclear how attention might operate across these distributed representations. To address this, we presented participants with naturalistic video clips of animals behaving in their natural environments while the participants attended to either behavior or taxonomy. We used models of representational geometry to investigate how attentional allocation affects the distributed neural representation of animal behavior and taxonomy. Attending to animal behavior transiently increased the discriminability of distributed population codes for observed actions in anterior intraparietal, pericentral, and ventral temporal cortices. Attending to animal taxonomy while viewing the same stimuli increased the discriminability of distributed animal category representations in ventral temporal cortex. For both tasks, attention selectively enhanced the discriminability of response patterns along behaviorally relevant dimensions. These findings suggest that behavioral goals alter how the brain extracts semantic features from the visual world. Attention effectively disentangles population responses for downstream read-out by sculpting representational geometry in late-stage perceptual areas.
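As a rough illustration of the representational-geometry logic, the sketch below builds synthetic voxel patterns for a few categories under two attention conditions and compares the mean pairwise dissimilarity of the category patterns; the signal levels and condition labels are hypothetical, not taken from the study.

```python
# Sketch: does attending the relevant dimension make category patterns more
# discriminable (larger representational dissimilarities)?
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
n_voxels, n_categories = 100, 5

# Shared response component plus a category-specific component whose strength
# depends on whether the relevant dimension is attended.
shared = rng.normal(0, 1, n_voxels)
specific = rng.normal(0, 1, (n_categories, n_voxels))

def category_patterns(signal_strength):
    return shared + signal_strength * specific + rng.normal(0, 0.5, (n_categories, n_voxels))

attended = category_patterns(signal_strength=1.0)     # attending the relevant dimension
unattended = category_patterns(signal_strength=0.2)   # attending another dimension

# Correlation distance between category patterns; a larger mean distance means
# a more "disentangled" (discriminable) category geometry.
print(f"mean dissimilarity, attended:   {pdist(attended, 'correlation').mean():.2f}")
print(f"mean dissimilarity, unattended: {pdist(unattended, 'correlation').mean():.2f}")
```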


Subjects
Attention/physiology, Brain/physiology, Motion Perception/physiology, Semantics, Adult, Brain/diagnostic imaging, Brain Mapping/methods, Female, Humans, Magnetic Resonance Imaging, Male, Models, Statistical, Neural Pathways/diagnostic imaging, Neural Pathways/physiology, Neuropsychological Tests, Pattern Recognition, Visual/physiology
3.
J Neurosci ; 36(41): 10522-10528, 2016 10 12.
Article in English | MEDLINE | ID: mdl-27733605

ABSTRACT

The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magnetoencephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set. Participants detected categorical targets (cars or people) in natural scenes. The presence of these categories within a scene was decoded from MEG sensor patterns by training linear classifiers on differentiating cars and people in isolation and testing these classifiers on scenes containing one of the two categories. The presence of a specific category in a scene could be reliably decoded from MEG response patterns as early as 160 ms, despite substantial scene clutter and variation in the visual appearance of each category. Strikingly, we found that these early categorical representations fully depend on the match between visual input and top-down attentional set: only objects that matched the current attentional set were processed to the category level within the first 200 ms after scene onset. A sensor-space searchlight analysis revealed that this early attention bias was localized to lateral occipitotemporal cortex, reflecting top-down modulation of visual processing. These results show that attention quickly resolves competition between objects in cluttered natural scenes, allowing for the rapid neural representation of goal-relevant objects. SIGNIFICANCE STATEMENT: Efficient attentional selection is crucial in many everyday situations. For example, when driving a car, we need to quickly detect obstacles, such as pedestrians crossing the street, while ignoring irrelevant objects. How can humans efficiently perform such tasks, given the multitude of objects contained in real-world scenes? Here we used multivariate decoding of magnetoencephalography data to characterize the neural underpinnings of attentional selection in natural scenes with high temporal precision. We show that brain activity quickly tracks the presence of objects in scenes, but crucially only for those objects that were immediately relevant for the participant. These results provide evidence for fast and efficient attentional selection that mediates the rapid detection of goal-relevant objects in real-world environments.
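The cross-decoding scheme (train on isolated objects, test on cluttered scenes) can be sketched with synthetic sensor patterns as below; in a real analysis this would be repeated at every time point to obtain a decoding time course. Names and signal levels are illustrative assumptions.

```python
# Sketch: train a linear classifier on patterns for isolated cars vs. people,
# then test it on patterns evoked by scenes containing one of the two categories.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_sensors, n_train, n_test = 50, 120, 80

# One sensor-space "template" per category, shared between the isolated-object
# and scene conditions so that category information can generalize across them.
templates = rng.normal(0, 1, (2, n_sensors))

def simulate(n_trials, signal):
    labels = rng.integers(0, 2, n_trials)
    data = signal * templates[labels] + rng.normal(0, 1, (n_trials, n_sensors))
    return data, labels

X_isolated, y_isolated = simulate(n_train, signal=1.0)   # cars/people in isolation
X_scenes, y_scenes = simulate(n_test, signal=0.5)        # same categories in clutter

clf = LinearSVC().fit(X_isolated, y_isolated)
print(f"cross-decoding accuracy on scenes: {clf.score(X_scenes, y_scenes):.2f}")
```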


Subjects
Attention/physiology, Visual Perception/physiology, Adult, Female, Humans, Magnetoencephalography, Male, Occipital Lobe/physiology, Photic Stimulation, Temporal Lobe/physiology, Visual Cortex/physiology, Young Adult
4.
J Neurosci ; 36(19): 5373-84, 2016 05 11.
Article in English | MEDLINE | ID: mdl-27170133

ABSTRACT

Common or folk knowledge about animals is dominated by three dimensions: (1) level of cognitive complexity or "animacy;" (2) dangerousness or "predacity;" and (3) size. We investigated the neural basis of the perceived dangerousness or aggressiveness of animals, which we refer to more generally as "perception of threat." Using functional magnetic resonance imaging (fMRI), we analyzed neural activity evoked by viewing images of animal categories that spanned the dissociable semantic dimensions of threat and taxonomic class. The results reveal a distributed network for perception of threat extending along the right superior temporal sulcus. We compared neural representational spaces with target representational spaces based on behavioral judgments and a computational model of early vision and found a processing pathway in which perceived threat emerges as a dominant dimension: whereas visual features predominate in early visual cortex and taxonomy in lateral occipital and ventral temporal cortices, these dimensions fall away progressively from posterior to anterior temporal cortices, leaving threat as the dominant explanatory variable. Our results suggest that the perception of threat in the human brain is associated with neural structures that underlie perception and cognition of social actions and intentions, suggesting a broader role for these regions than has been thought previously, one that includes the perception of potential threat from agents independent of their biological class. SIGNIFICANCE STATEMENT: For centuries, philosophers have wondered how the human mind organizes the world into meaningful categories and concepts. Today this question is at the core of cognitive science, but our focus has shifted to understanding how knowledge manifests in dynamic activity of neural systems in the human brain. This study advances the young field of empirical neuroepistemology by characterizing the neural systems engaged by an important dimension in our cognitive representation of the animal kingdom ontological subdomain: how the brain represents the perceived threat, dangerousness, or "predacity" of animals. Our findings reveal how activity for domain-specific knowledge of animals overlaps the social perception networks of the brain, suggesting domain-general mechanisms underlying the representation of conspecifics and other animals.
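The model-comparison step (relating a neural representational space to behavioral and image-based target spaces) can be sketched with representational dissimilarity matrices (RDMs) and rank correlation. The data below are synthetic, the two candidate models are stand-ins for the behavioral threat judgments and the early-vision model, and the neural patterns are built with a stronger threat-related component so the comparison has something to find.

```python
# Sketch: correlate a neural RDM with two candidate model RDMs.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_stimuli, n_voxels = 20, 200

threat_ratings = rng.uniform(0, 1, n_stimuli)          # behavioral judgments (stand-in)
visual_features = rng.normal(0, 1, (n_stimuli, 10))    # early-vision model output (stand-in)

patterns = (3.0 * np.outer(threat_ratings, rng.normal(0, 1, n_voxels))
            + visual_features @ rng.normal(0, 0.1, (10, n_voxels))
            + rng.normal(0, 0.5, (n_stimuli, n_voxels)))

neural_rdm = pdist(patterns)                  # pairwise distances between patterns
threat_rdm = pdist(threat_ratings[:, None])   # model RDM from judgments
visual_rdm = pdist(visual_features)           # model RDM from image features

for name, model_rdm in [("threat judgments", threat_rdm), ("early vision", visual_rdm)]:
    rho, _ = spearmanr(neural_rdm, model_rdm)
    print(f"neural RDM vs. {name}: Spearman rho = {rho:.2f}")
```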


Subjects
Brain/physiology, Connectome, Predatory Behavior/classification, Visual Perception, Adult, Amphibians/physiology, Animals, Arthropods/physiology, Brain/cytology, Cognition, Female, Humans, Magnetic Resonance Imaging, Male, Neurons/physiology, Reptiles/physiology
5.
J Neurosci ; 35(49): 16034-45, 2015 Dec 09.
Article in English | MEDLINE | ID: mdl-26658857

ABSTRACT

Understanding other people's actions is a fundamental prerequisite for social interactions. Whether action understanding relies on simulating the actions of others in the observers' motor system or on the access to conceptual knowledge stored in nonmotor areas is strongly debated. It has been argued previously that areas that play a crucial role in action understanding should (1) distinguish between different actions, (2) generalize across the ways in which actions are performed (Dinstein et al., 2008; Oosterhof et al., 2013; Caramazza et al., 2014), and (3) have access to action information around the time of action recognition (Hauk et al., 2008). Whereas previous studies focused on the first two criteria, little is known about the dynamics underlying action understanding. We examined which human brain regions are able to distinguish between pointing and grasping, regardless of reach direction (left or right) and effector (left or right hand), using multivariate pattern analysis of magnetoencephalography data. We show that the lateral occipitotemporal cortex (LOTC) has the earliest access to abstract action representations, which coincides with the time point from which there was enough information to allow discriminating between the two actions. By contrast, precentral regions, though recruited early, have access to such abstract representations substantially later. Our results demonstrate that in contrast to the LOTC, the early recruitment of precentral regions does not contain the detailed information that is required to recognize an action. We discuss previous theoretical claims of motor theories and how they are incompatible with our data. SIGNIFICANCE STATEMENT: It is debated whether our ability to understand other people's actions relies on the simulation of actions in the observers' motor system, or is based on access to conceptual knowledge stored in nonmotor areas. Here, using magnetoencephalography in combination with machine learning, we examined where in the brain and at which point in time it is possible to distinguish between pointing and grasping actions regardless of the way in which they are performed (effector, reach direction). We show that, in contrast to the predictions of motor theories of action understanding, the lateral occipitotemporal cortex has access to abstract action representations substantially earlier than precentral regions.
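A minimal sketch of time-resolved decoding that generalizes across conditions: at each time point, train a classifier on trials from one effector and test on the other, then look for the time at which action information first becomes decodable. The data, onset, and threshold below are synthetic placeholders, not the study's latencies.

```python
# Sketch: per-time-point cross-effector decoding of action type.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_trials, n_sensors, n_times = 160, 40, 60
action = rng.integers(0, 2, n_trials)         # 0 = pointing, 1 = grasping
effector = rng.integers(0, 2, n_trials)       # 0 = left hand, 1 = right hand

# Action information appears in the synthetic signal only from sample 25 onward
template = rng.normal(0, 1, n_sensors)
data = rng.normal(0, 1, (n_trials, n_sensors, n_times))
data[:, :, 25:] += 0.8 * np.outer(np.where(action == 1, 1.0, -1.0), template)[:, :, None]

train, test = effector == 0, effector == 1
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(data[train, :, t], action[train])
    accuracy[t] = clf.score(data[test, :, t], action[test])

onset = int(np.argmax(accuracy > 0.6))        # crude estimate of when decoding emerges
print(f"cross-effector decoding first exceeds 60% at sample {onset}")
```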


Subjects
Concept Formation/physiology, Functional Laterality/physiology, Magnetoencephalography, Occipital Lobe/physiology, Psychomotor Performance/physiology, Temporal Lobe/physiology, Adult, Brain Mapping, Female, Hand Strength, Humans, Male, Multivariate Analysis, Photic Stimulation, Time Factors, Young Adult
6.
Neuroimage ; 136: 197-207, 2016 Aug 01.
Article in English | MEDLINE | ID: mdl-27173760

ABSTRACT

To be able to interact with our environment, we need to transform incoming sensory information into goal-directed motor outputs. Whereas our ability to plan an appropriate movement based on sensory information appears effortless and simple, the underlying brain dynamics are still largely unknown. Here we used magnetoencephalography (MEG) to investigate this issue by recording brain activity during the planning of non-visually guided reaching and grasping actions, performed with either the left or right hand. Adopting a combination of univariate and multivariate analyses, we revealed specific patterns of beta power modulations underlying varying levels of neural representations during movement planning. (1) Effector-specific modulations were evident as a decrease in power in the beta band. Within both hemispheres, this decrease was stronger while planning a movement with the contralateral hand. (2) The comparison of planned grasping and reaching led to a relative increase in power in the beta band. These power changes were localized within temporal, premotor and posterior parietal cortices. Action-related modulations overlapped with effector-related beta power changes within widespread frontal and parietal regions, suggesting the possible integration of these two types of neural representations. (3) Multivariate analyses of action-specific power changes revealed that part of this broadband beta modulation also contributed to the encoding of an effector-independent neural representation of a planned action within fronto-parietal and temporal regions. Our results suggest that beta band power modulations play a central role in movement planning, within both the dorsal and ventral stream, by coding and integrating different levels of neural representations, ranging from the simple representation of the to-be-moved effector up to an abstract, effector-independent representation of the upcoming action.
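The effector-related beta effect can be sketched as a simple power comparison: band-pass the planning-period signal, take the Hilbert envelope, and compare beta power for contralateral versus ipsilateral planning. A single synthetic sensor is used; band edges and effect size are illustrative assumptions.

```python
# Sketch: beta-band (15-25 Hz) power during planning, contralateral vs. ipsilateral hand.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(5)
sfreq, n_trials, n_times = 200, 100, 400      # 2 s planning window
hand = rng.integers(0, 2, n_trials)           # 0 = ipsilateral, 1 = contralateral

t = np.arange(n_times) / sfreq
beta = np.sin(2 * np.pi * 20 * t)
# Contralateral planning -> stronger beta desynchronization (weaker beta amplitude)
amplitude = np.where(hand == 1, 0.5, 1.0)[:, None]
signal = amplitude * beta + rng.normal(0, 0.5, (n_trials, n_times))

b, a = butter(4, [15, 25], btype="bandpass", fs=sfreq)
envelope = np.abs(hilbert(filtfilt(b, a, signal, axis=1), axis=1))
power = (envelope ** 2).mean(axis=1)

print(f"mean beta power, ipsilateral hand:   {power[hand == 0].mean():.2f}")
print(f"mean beta power, contralateral hand: {power[hand == 1].mean():.2f}")
```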


Subjects
Anticipation, Psychological/physiology, Attention/physiology, Beta Rhythm/physiology, Cerebral Cortex/physiology, Movement/physiology, Psychomotor Performance/physiology, Brain Mapping, Female, Goals, Hand/physiology, Humans, Magnetoencephalography, Male, Nerve Net/physiology, Young Adult
7.
J Cogn Neurosci ; 27(4): 665-78, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25269114

ABSTRACT

Major theories for explaining the organization of semantic memory in the human brain are premised on the often-observed dichotomous dissociation between living and nonliving objects. Evidence from neuroimaging has been interpreted to suggest that this distinction is reflected in the functional topography of the ventral vision pathway as lateral-to-medial activation gradients. Recently, we observed that similar activation gradients also reflect differences among living stimuli consistent with the semantic dimension of graded animacy. Here, we address whether the salient dichotomous distinction between living and nonliving objects is actually reflected in observable measured brain activity or whether previous observations of a dichotomous dissociation were the illusory result of stimulus sampling biases. Using fMRI, we measured neural responses while participants viewed 10 animal species with high to low animacy and two inanimate categories. Representational similarity analysis of the activity in ventral vision cortex revealed a main axis of variation with high-animacy species maximally different from artifacts and with the least animate species closest to artifacts. Although the associated functional topography mirrored activation gradients observed for animate-inanimate contrasts, we found no evidence for a dichotomous dissociation. We conclude that a central organizing principle of human object vision corresponds to the graded psychological property of animacy with no clear distinction between living and nonliving stimuli. The lack of evidence for a dichotomous dissociation in the measured brain activity challenges theories based on this premise.
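The dichotomy-versus-continuum question can be framed as a comparison of two model RDMs against a neural RDM: one model encodes only the living/nonliving split, the other the graded animacy values. The sketch below uses synthetic patterns deliberately built around a graded structure; the species count and animacy values are placeholders.

```python
# Sketch: is the neural RDM better explained by a graded or a dichotomous model?
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_voxels = 150
# 10 species ordered from high to low animacy, plus 2 inanimate artifacts
animacy = np.array([1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0, 0.0])
living = (animacy > 0).astype(float)

# Synthetic patterns vary smoothly with animacy (graded, not dichotomous)
patterns = 2.0 * np.outer(animacy, rng.normal(0, 1, n_voxels)) \
    + rng.normal(0, 0.5, (animacy.size, n_voxels))

neural_rdm = pdist(patterns)
graded_rdm = pdist(animacy[:, None])          # distances along the animacy continuum
dichotomous_rdm = pdist(living[:, None])      # living vs. nonliving only

for name, model_rdm in [("graded animacy", graded_rdm), ("living/nonliving", dichotomous_rdm)]:
    rho, _ = spearmanr(neural_rdm, model_rdm)
    print(f"neural RDM vs. {name} model: Spearman rho = {rho:.2f}")
```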


Subjects
Brain Mapping, Optical Illusions/physiology, Pattern Recognition, Visual/physiology, Semantics, Visual Cortex/physiology, Visual Pathways/physiology, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Oxygen/blood, Photic Stimulation, Principal Component Analysis, Reaction Time/physiology, Visual Cortex/blood supply, Visual Pathways/blood supply
8.
Neuroimage ; 88: 69-78, 2014 03.
Article in English | MEDLINE | ID: mdl-24246486

ABSTRACT

Studies investigating the role of oscillatory activity in sensory perception are primarily conducted in the visual domain, while the contribution of oscillatory activity to auditory perception is heavily understudied. The objective of the present study was to investigate macroscopic (EEG) oscillatory brain response patterns that contribute to an auditory (Zwicker tone, ZT) illusion. Three different analysis approaches were chosen: 1) a parametric variation of the ZT illusion intensity via three different notch widths of the ZT-inducing noise; 2) contrasts of high-versus-low-intensity ZT illusion trials, excluding physical stimuli differences; 3) a representational similarity analysis to relate source activity patterns to loudness ratings. Depending on the analysis approach, levels of alpha to beta activity (10-20Hz) reflected illusion intensity, mainly defined by reduced power levels co-occurring with stronger percepts. Consistent across all analysis approaches, source level analysis implicated auditory cortices as main generators, providing evidence that the activity level in the alpha and beta range - at least in part - contributes to the strength of the illusory auditory percept. This study corroborates the notion that alpha to beta activity in the auditory cortex is linked to functionally similar states, as has been proposed for visual, somatosensory and motor regions. Furthermore, our study provides certain theoretical implications for pathological auditory conscious perception (tinnitus).


Subjects
Alpha Rhythm/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Beta Rhythm/physiology, Illusions/physiology, Adult, Female, Humans, Male, Young Adult
9.
Behav Brain Sci ; 37(2): 213-5, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24775171

ABSTRACT

Cook et al. overstate the evidence supporting their associative account of mirror neurons in humans: most studies do not address a key property, namely action-specificity that generalizes across the visual and motor domains. Multivariate pattern analysis (MVPA) of neuroimaging data can address this concern, and we illustrate how MVPA can be used to test key predictions of their account.


Subjects
Biological Evolution, Brain/physiology, Learning/physiology, Mirror Neurons/physiology, Social Perception, Animals, Humans
10.
J Neurosci ; 31(29): 10701-11, 2011 Jul 20.
Article in English | MEDLINE | ID: mdl-21775613

ABSTRACT

Motivation improves the efficiency of intentional behavior, but how this performance modulation is instantiated in the human brain remains unclear. We used a reward-cued antisaccade paradigm to investigate how motivational goals (the expectation of a reward for good performance) modulate patterns of neural activation and functional connectivity to improve preparation for antisaccade performance. Behaviorally, subjects performed better (faster and more accurate antisaccades) when they knew they would be rewarded for good performance. Reward anticipation was associated with increased activation in the ventral and dorsal striatum and in cortical oculomotor regions. Functional connectivity between the caudate nucleus and cortical oculomotor control structures predicted individual differences in the behavioral benefit of reward anticipation. We conclude that although both dorsal and ventral striatal circuitry are involved in the anticipation of reward, only the dorsal striatum and its connected cortical network are involved in the direct modulation of oculomotor behavior by motivational incentive.


Subjects
Basal Ganglia/physiology, Caudate Nucleus/physiology, Eye Movements/physiology, Motivation/physiology, Neural Pathways/physiology, Analysis of Variance, Attention/physiology, Basal Ganglia/blood supply, Brain Mapping, Caudate Nucleus/blood supply, Cues, Female, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Male, Neural Pathways/blood supply, Oxygen/blood, Photic Stimulation/methods, Reaction Time, Reward, Serial Learning/physiology, Time Factors, Young Adult
11.
J Cogn Neurosci ; 24(4): 975-89, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22264198

ABSTRACT

The discovery of mirror neurons (neurons that code specific actions both when executed and observed) in area F5 of the macaque provides a potential neural mechanism underlying action understanding. To date, neuroimaging evidence for similar coding of specific actions across the visual and motor modalities in human ventral premotor cortex (PMv), the putative homologue of macaque F5, is limited to the case of actions observed from a first-person perspective. However, it is the third-person perspective that figures centrally in our understanding of the actions and intentions of others. To address this gap in the literature, we scanned participants with fMRI while they viewed two actions from either a first- or third-person perspective during some trials and executed the same actions during other trials. Using multivoxel pattern analysis, we found action-specific cross-modal visual-motor representations in PMv for the first-person but not for the third-person perspective. Additional analyses showed no evidence for spatial or attentional differences across the two perspective conditions. In contrast, more posterior areas in the parietal and occipitotemporal cortex did show cross-modal coding regardless of perspective. These findings point to a stronger role for these latter regions, relative to PMv, in supporting the understanding of others' actions with reference to one's own actions.


Subjects
Attention/physiology, Brain Mapping, Cerebral Cortex/physiology, Imitative Behavior/physiology, Visual Perception/physiology, Adult, Analysis of Variance, Cerebral Cortex/blood supply, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Motion Perception, Oxygen/blood, Pattern Recognition, Visual, Photic Stimulation, Psychomotor Performance, Young Adult
12.
Neuroimage ; 63(1): 262-71, 2012 Oct 15.
Article in English | MEDLINE | ID: mdl-22766163

ABSTRACT

An important human capacity is the ability to imagine performing an action, and its consequences, without actually executing it. Here we seek neural representations of specific manual actions that are common across visuo-motor performance and imagery. Participants were scanned with fMRI while they performed and observed themselves performing two different manual actions during some trials, and imagined performing and observing themselves performing the same actions during other trials. We used multi-variate pattern analysis to identify areas where representations of specific actions generalize across imagined and performed actions. The left anterior parietal cortex showed this property. In this region, we also found that activity patterns for imagined actions generalize better to performed actions than vice versa, and we provide simulation results that can explain this asymmetry. The present results are the first demonstration of action-specific representations that are similar irrespective of whether actions are actively performed or covertly imagined. Further, they demonstrate concretely how the apparent cross-modal visuo-motor coding of actions identified in studies of a human "mirror neuron system" could, at least partially, reflect imagery.
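The asymmetry test amounts to running the cross-modal classifier in both directions. Below is a minimal sketch with synthetic patterns in which imagined actions are simply a weaker copy of executed ones; that is one possible account of such an asymmetry, offered here only for illustration and not necessarily the mechanism supported by the paper's own simulations.

```python
# Sketch: train on imagery, test on execution, and vice versa.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_voxels, n_trials = 80, 100
templates = rng.normal(0, 1, (2, n_voxels))   # one voxel pattern per action

def simulate(signal):
    labels = rng.integers(0, 2, n_trials)
    data = signal * templates[labels] + rng.normal(0, 1, (n_trials, n_voxels))
    return data, labels

X_perf, y_perf = simulate(signal=1.0)         # performed actions
X_imag, y_imag = simulate(signal=0.5)         # imagined actions (weaker signal)

acc_imag_to_perf = LogisticRegression().fit(X_imag, y_imag).score(X_perf, y_perf)
acc_perf_to_imag = LogisticRegression().fit(X_perf, y_perf).score(X_imag, y_imag)
print(f"train on imagery, test on performance:  {acc_imag_to_perf:.2f}")
print(f"train on performance, test on imagery:  {acc_perf_to_imag:.2f}")
```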


Subjects
Brain Mapping/methods, Cerebral Cortex/physiology, Imagination/physiology, Magnetic Resonance Imaging/methods, Movement/physiology, Pattern Recognition, Automated/methods, Visual Perception/physiology, Female, Humans, Image Interpretation, Computer-Assisted/methods, Male, Multivariate Analysis, Reproducibility of Results, Sensitivity and Specificity
13.
J Neurophysiol ; 107(2): 628-39, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22013235

ABSTRACT

How is working memory for different visual categories supported in the brain? Do the same principles of cortical specialization that govern the initial processing and encoding of visual stimuli also apply to their short-term maintenance? We investigated these questions with a delayed discrimination paradigm for faces, bodies, flowers, and scenes and applied both univariate and multivariate analyses to functional magnetic resonance imaging (fMRI) data. Activity during encoding followed the well-known specialization in posterior areas. During the delay interval, activity shifted to frontal and parietal regions but was not specialized for category. Conversely, activity in visual areas returned to baseline during that interval but showed some evidence of category specialization on multivariate pattern analysis (MVPA). We conclude that principles of cortical activation differ between encoding and maintenance of visual material. Whereas perceptual processes rely on specialized regions in occipitotemporal cortex, maintenance involves the activation of a frontoparietal network that seems to require little specialization at the category level. We also confirm previous findings that MVPA can extract information from fMRI signals in the absence of suprathreshold activation and that such signals from visual areas can reflect the material stored in memory.


Subjects
Brain Mapping, Brain/physiology, Memory, Short-Term/physiology, Pattern Recognition, Visual/physiology, Adult, Brain/blood supply, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Multivariate Analysis, Neuropsychological Tests, Oxygen/blood, Photic Stimulation, Reaction Time, Serial Learning/physiology, Time Factors, Young Adult
14.
J Cogn Neurosci ; 23(10): 2766-81, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21254805

ABSTRACT

In two fMRI experiments (n = 44) using tasks with different demands (approach-avoidance versus one-back recognition decisions), we measured the responses to the social value of faces. The face stimuli were produced by a parametric model of face evaluation that reduces multiple social evaluations to two orthogonal dimensions of valence and power [Oosterhof, N. N., & Todorov, A. The functional basis of face evaluation. Proceedings of the National Academy of Sciences, U.S.A., 105, 11087-11092, 2008]. Independent of the task, the response within regions of the occipital, fusiform, and lateral prefrontal cortices was sensitive to the valence dimension, with larger responses to low-valence faces. Additionally, there were extensive quadratic responses in the fusiform gyri and dorsal amygdala, with larger responses to faces at the extremes of the face valence continuum than to faces in the middle. In all these regions, participants' avoidance decisions correlated with brain responses, with faces more likely to be avoided evoking stronger responses. The findings suggest that both explicit and implicit face evaluation engage multiple brain regions involved in attention, affect, and decision making.
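The quadratic-response analysis can be sketched as a comparison of linear and linear-plus-quadratic fits of a regional response against the face-valence continuum; the response values below are synthetic.

```python
# Sketch: does adding a quadratic term improve the fit of response vs. valence?
import numpy as np

rng = np.random.default_rng(8)
valence = np.linspace(-1, 1, 60)                       # face valence continuum
response = 0.8 * valence**2 - 0.3 * valence + rng.normal(0, 0.2, valence.size)

for degree in (1, 2):
    coefs = np.polyfit(valence, response, degree)
    predicted = np.polyval(coefs, valence)
    ss_res = np.sum((response - predicted) ** 2)
    ss_tot = np.sum((response - response.mean()) ** 2)
    print(f"degree-{degree} fit: R^2 = {1 - ss_res / ss_tot:.2f}")
```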


Subjects
Brain Mapping, Brain/physiology, Facial Expression, Pattern Recognition, Visual/physiology, Social Values, Adolescent, Analysis of Variance, Brain/blood supply, Decision Making/physiology, Face, Female, Functional Laterality/physiology, Humans, Image Processing, Computer-Assisted, Judgment, Magnetic Resonance Imaging, Male, Oxygen/blood, Photic Stimulation, Reaction Time/physiology, Young Adult
15.
Neuroimage ; 56(2): 593-600, 2011 May 15.
Article in English | MEDLINE | ID: mdl-20621701

ABSTRACT

For functional magnetic resonance imaging (fMRI), multi-voxel pattern analysis (MVPA) has been shown to be a sensitive method to detect areas that encode certain stimulus dimensions. By moving a searchlight through the volume of the brain, one can continuously map the information content about the experimental conditions of interest to the brain. Traditionally, the searchlight is defined as a volume sphere that does not take into account the anatomy of the cortical surface. Here we present a method that uses a cortical surface reconstruction to guide voxel selection for information mapping. This approach differs in two important aspects from a volume-based searchlight definition. First, it uses only voxels that are classified as grey matter based on an anatomical scan. Second, it uses a surface-based geodesic distance metric to define neighbourhoods of voxels, and does not select voxels across a sulcus. We study here the influence of these two factors onto classification accuracy and onto the spatial specificity of the resulting information map. In our example data set, participants pressed one of four fingers while undergoing fMRI. We used MVPA to identify regions in which local fMRI patterns can successfully discriminate which finger was moved. We show that surface-based information mapping is a more sensitive measure of local information content, and provides better spatial selectivity. This makes surface-based information mapping a useful technique for a data-driven analysis of information representation in the cerebral cortex.
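The key difference from a volume searchlight is how neighbourhoods are defined. The sketch below builds a geodesic neighbourhood on a toy grid mesh, using Dijkstra shortest paths along mesh edges as a stand-in for geodesic distance; a real analysis would use a cortical surface reconstruction and restrict selection to grey-matter voxels.

```python
# Sketch: select all vertices within a geodesic radius of a centre vertex,
# measured along the mesh rather than through the volume.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

# Toy "surface": a 10 x 10 grid of vertices with 4-neighbour connectivity
nx, ny = 10, 10
coords = np.array([(x, y, 0.0) for x in range(nx) for y in range(ny)], dtype=float)
edges = []
for x in range(nx):
    for y in range(ny):
        i = x * ny + y
        if x + 1 < nx:
            edges.append((i, (x + 1) * ny + y))
        if y + 1 < ny:
            edges.append((i, x * ny + y + 1))
edges = np.array(edges)
weights = np.linalg.norm(coords[edges[:, 0]] - coords[edges[:, 1]], axis=1)

n_vertices = len(coords)
graph = coo_matrix((weights, (edges[:, 0], edges[:, 1])), shape=(n_vertices, n_vertices))

# Geodesic neighbourhood: vertices within the radius along the mesh, so the
# neighbourhood cannot jump across a sulcus the way a volume sphere can.
centre, radius = 45, 3.0
dist = dijkstra(graph, directed=False, indices=[centre])[0]
neighbourhood = np.flatnonzero(dist <= radius)
print(f"{neighbourhood.size} vertices within geodesic radius {radius} of vertex {centre}")
```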


Subjects
Brain Mapping/methods, Brain/anatomy & histology, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging, Female, Humans, Male, Young Adult
16.
Proc Natl Acad Sci U S A ; 105(32): 11087-92, 2008 Aug 12.
Article in English | MEDLINE | ID: mdl-18685089

ABSTRACT

People automatically evaluate faces on multiple trait dimensions, and these evaluations predict important social outcomes, ranging from electoral success to sentencing decisions. Based on behavioral studies and computer modeling, we develop a 2D model of face evaluation. First, using a principal components analysis of trait judgments of emotionally neutral faces, we identify two orthogonal dimensions, valence and dominance, that are sufficient to describe face evaluation and show that these dimensions can be approximated by judgments of trustworthiness and dominance. Second, using a data-driven statistical model for face representation, we build and validate models for representing face trustworthiness and face dominance. Third, using these models, we show that, whereas valence evaluation is more sensitive to features resembling expressions signaling whether the person should be avoided or approached, dominance evaluation is more sensitive to features signaling physical strength/weakness. Fourth, we show that important social judgments, such as threat, can be reproduced as a function of the two orthogonal dimensions of valence and dominance. The findings suggest that face evaluation involves an overgeneralization of adaptive mechanisms for inferring harmful intentions and the ability to cause harm and can account for rapid, yet not necessarily accurate, judgments from faces.
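The first step of the model (reducing many trait judgments to two orthogonal dimensions) can be sketched as a principal components analysis of a faces-by-traits judgment matrix. The trait names, loadings, and sample size below are invented for illustration only.

```python
# Sketch: PCA of synthetic trait judgments with a built-in two-factor structure.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
n_faces = 300
traits = ["trustworthy", "attractive", "caring", "dominant", "aggressive", "confident"]

# Two latent factors (valence-like and dominance-like) generate the judgments
valence = rng.normal(0, 1, n_faces)
dominance = rng.normal(0, 1, n_faces)
loadings = np.array([[1.0, 0.0], [0.8, 0.1], [0.9, -0.1],   # valence-driven traits
                     [0.1, 1.0], [-0.1, 0.9], [0.3, 0.7]])  # dominance-driven traits
judgments = np.column_stack([valence, dominance]) @ loadings.T \
    + rng.normal(0, 0.3, (n_faces, len(traits)))

pca = PCA(n_components=2).fit(judgments)
print("variance explained by first two components:",
      np.round(pca.explained_variance_ratio_, 2))
for name, component in zip(["PC1", "PC2"], pca.components_):
    top = traits[int(np.argmax(np.abs(component)))]
    print(f"{name} loads most strongly on '{top}'")
```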


Subjects
Computer Simulation, Emotions, Face, Image Processing, Computer-Assisted/methods, Humans
17.
J Neurophysiol ; 104(2): 1077-89, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20538772

ABSTRACT

Many lines of evidence point to a tight linkage between the perceptual and motoric representations of actions. Numerous demonstrations show how the visual perception of an action engages compatible activity in the observer's motor system. This is seen for both intransitive actions (e.g., in the case of unconscious postural imitation) and transitive actions (e.g., grasping an object). Although the discovery of "mirror neurons" in macaques has inspired explanations of these processes in human action behaviors, the evidence for areas in the human brain that similarly form a crossmodal visual/motor representation of actions remains incomplete. To address this, in the present study, participants performed and observed hand actions while being scanned with functional MRI. We took a data-driven approach by applying whole-brain information mapping using a multivoxel pattern analysis (MVPA) classifier, performed on reconstructed representations of the cortical surface. The aim was to identify regions in which local voxelwise patterns of activity can distinguish among different actions, across the visual and motor domains. Experiment 1 tested intransitive, meaningless hand movements, whereas experiment 2 tested object-directed actions (all right-handed). Our analyses of both experiments revealed crossmodal action regions in the lateral occipitotemporal cortex (bilaterally) and in the left postcentral gyrus/anterior parietal cortex. Furthermore, in experiment 2 we identified a gradient of bias in the patterns of information in the left hemisphere postcentral/parietal region. The postcentral gyrus carried more information about the effectors used to carry out the action (fingers vs. whole hand), whereas anterior parietal regions carried more information about the goal of the action (lift vs. punch). Taken together, these results provide evidence for common neural coding in these areas of the visual and motor aspects of actions, and demonstrate further how MVPA can contribute to our understanding of the nature of distributed neural representations.


Subjects
Brain Mapping, Occipital Lobe/physiology, Parietal Lobe/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Adult, Discriminant Analysis, Female, Functional Laterality, Hand/physiology, Humans, Image Processing, Computer-Assisted/methods, Linear Models, Magnetic Resonance Imaging/methods, Male, Occipital Lobe/blood supply, Oxygen/blood, Parietal Lobe/blood supply, Photic Stimulation/methods, Psychomotor Performance, Temporal Lobe/blood supply, Young Adult
18.
Elife ; 9, 2020 02 28.
Article in English | MEDLINE | ID: mdl-32108572

ABSTRACT

Is vision necessary for the development of the categorical organization of the Ventral Occipito-Temporal Cortex (VOTC)? We used fMRI to characterize VOTC responses to eight categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. We observed that VOTC reliably encodes sound categories in sighted and blind people, using a representational structure and connectivity partially similar to those found in vision. Sound categories were, however, more reliably encoded in the blind than in the sighted group, using a representational format closer to the one found in vision. Crucially, VOTC in the blind represents the categorical membership of sounds rather than their acoustic features. Our results suggest that sounds trigger categorical responses in the VOTC of congenitally blind and sighted people that partially match the topography and functional profile of the visual response, despite qualitative nuances in the categorical organization of VOTC between modalities and groups.


The world is full of rich and dynamic visual information. To avoid information overload, the human brain groups inputs into categories such as faces, houses, or tools. A part of the brain called the ventral occipito-temporal cortex (VOTC) helps categorize visual information. Specific parts of the VOTC prefer different types of visual input; for example, one part may tend to respond more to faces, whilst another may prefer houses. However, it is not clear how the VOTC characterizes information. One idea is that similarities between certain types of visual information may drive how information is organized in the VOTC. For example, looking at faces requires using central vision, while looking at houses requires using peripheral vision. Furthermore, all faces have a roundish shape while houses tend to have a more rectangular shape. Another possibility, however, is that the categorization of different inputs cannot be explained just by vision and is also driven by higher-level aspects of each category. For instance, how humans use or interact with something may also influence how an input is categorized. If categories are established depending (at least partially) on these higher-level aspects, rather than purely through visual likeness, it is likely that the VOTC would respond similarly to both sounds and images representing these categories. Now, Mattioni et al. have tested how individuals with and without sight respond to eight different categories of information to find out whether or not categorization is driven purely by visual likeness. Each category was presented to participants using sounds while measuring their brain activity. In addition, a group of participants who could see were also presented with the categories visually. Mattioni et al. then compared what happened in the VOTC of the three groups (sighted people presented with sounds, blind people presented with sounds, and sighted people presented with images) in response to each category. The experiment revealed that the VOTC organizes both auditory and visual information in a similar way. However, there were more similarities between the way blind people categorized auditory information and how sighted people categorized visual information than between how sighted people categorized each type of input. Mattioni et al. also found that the region of the VOTC that responds to inanimate objects massively overlapped across the three groups, whereas the part of the VOTC that responds to living things was more variable. These findings suggest that the way that the VOTC organizes information is, at least partly, independent of vision. The experiments also provide some information about how the brain reorganizes in people who are born blind. Further studies may reveal how differences in the VOTC of people with and without sight affect regions typically associated with auditory categorization, and potentially explain how the brain reorganizes in people who become blind later in life.


Subjects
Auditory Perception, Blindness/physiopathology, Occipital Lobe/physiopathology, Temporal Lobe/physiopathology, Acoustic Stimulation, Case-Control Studies, Humans
19.
Trends Cogn Sci ; 12(12): 455-60, 2008 Dec.
Article in English | MEDLINE | ID: mdl-18951830

ABSTRACT

People reliably and automatically make personality inferences from facial appearance despite little evidence for their accuracy. Although such inferences are highly inter-correlated, research has traditionally focused on studying specific traits such as trustworthiness. We advocate an alternative, data-driven approach to identify and model the structure of face evaluation. Initial findings indicate that specific trait inferences can be represented within a 2D space defined by valence/trustworthiness and power/dominance evaluation of faces. Inferences along these dimensions are based on similarity to expressions signaling approach or avoidance behavior and features signaling physical strength, respectively, indicating that trait inferences from faces originate in functionally adaptive mechanisms. We conclude with a discussion of the potential role of the amygdala in face evaluation.


Subjects
Character, Facial Expression, Interpersonal Relations, Judgment, Visual Perception, Amygdala/physiology, Computer Simulation, Emotions/physiology, Extraversion, Psychological, Generalization, Stimulus/physiology, Humans, Judgment/physiology, Perceptual Distortion/physiology, Personal Construct Theory, Social Dominance, Trust/psychology, Visual Perception/physiology
20.
Emotion ; 9(1): 128-33, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19186926

ABSTRACT

Using a dynamic stimuli paradigm, in which faces expressed either happiness or anger, the authors tested the hypothesis that perceptions of trustworthiness are related to these expressions. Although the same emotional intensity was added to both trustworthy and untrustworthy faces, trustworthy faces who expressed happiness were perceived as happier than untrustworthy faces, and untrustworthy faces who expressed anger were perceived as angrier than trustworthy faces. The authors also manipulated changes in face trustworthiness simultaneously with the change in expression. Whereas transitions in face trustworthiness in the direction of the expressed emotion (e.g., high-to-low trustworthiness and anger) increased the perceived intensity of the emotion, transitions in the opposite direction decreased this intensity. For example, changes from high to low trustworthiness increased the intensity of perceived anger but decreased the intensity of perceived happiness. These findings support the hypothesis that changes along the trustworthiness dimension correspond to subtle changes resembling expressions signaling whether the person displaying the emotion should be avoided or approached.


Subjects
Affect, Cooperative Behavior, Expressed Emotion, Face, Facial Expression, Perception, Trust, Female, Humans, Male, Sex Factors