Results 1 - 20 of 31
1.
Elife ; 13: 2024 May 21.
Article in English | MEDLINE | ID: mdl-38770736

ABSTRACT

Pavlovian fear conditioning has been extensively used to study the behavioral and neural basis of defensive systems. In a typical procedure, a cue is paired with foot shock, and subsequent cue presentation elicits freezing, a behavior theoretically linked to predator detection. Studies have since shown that a fear conditioned cue can elicit locomotion, a behavior that - in addition to jumping and rearing - is theoretically linked to imminent or occurring predation. A criticism of studies observing fear conditioned cue-elicited locomotion is that responding is non-associative. We gave rats Pavlovian fear discrimination over a baseline of reward seeking. TTL-triggered cameras captured 5 behavior frames/s around cue presentation. Experiment 1 examined the emergence of danger-specific behaviors over fear acquisition. Experiment 2 examined the expression of danger-specific behaviors in fear extinction. In total, we scored 112,000 frames for nine discrete behavior categories. Temporal ethograms show that during acquisition, a fear conditioned cue suppresses reward seeking and elicits freezing, but also elicits locomotion, jumping, and rearing - all of which are maximal when foot shock is imminent. During extinction, a fear conditioned cue most prominently suppresses reward seeking, and elicits locomotion that is timed to shock delivery. The independent expression of these behaviors in both experiments reveals that a fear conditioned cue orchestrates a temporally organized suite of behaviors.

2.
J Neurosci ; 43(23): 4291-4303, 2023 06 07.
Article in English | MEDLINE | ID: mdl-37142430

ABSTRACT

According to a classical view of face perception (Bruce and Young, 1986; Haxby et al., 2000), face identity and facial expression recognition are performed by separate neural substrates (ventral and lateral temporal face-selective regions, respectively). However, recent studies challenge this view, showing that expression valence can also be decoded from ventral regions (Skerry and Saxe, 2014; Li et al., 2019), and identity from lateral regions (Anzellotti and Caramazza, 2017). These findings could be reconciled with the classical view if regions specialized for one task (either identity or expression) contain a small amount of information for the other task (enabling above-chance decoding). In this case, we would expect representations in lateral regions to be more similar to representations in deep convolutional neural networks (DCNNs) trained to recognize facial expression than to representations in DCNNs trained to recognize face identity (the converse should hold for ventral regions). We tested this hypothesis by analyzing neural responses to faces varying in identity and expression. Representational dissimilarity matrices (RDMs) computed from human intracranial recordings (n = 11 adults; 7 females) were compared with RDMs from DCNNs trained to label either identity or expression. We found that RDMs from DCNNs trained to recognize identity correlated with intracranial recordings more strongly in all regions tested, even in regions classically hypothesized to be specialized for expression. These results deviate from the classical view, suggesting that face-selective ventral and lateral regions contribute to the representation of both identity and expression.

SIGNIFICANCE STATEMENT
Previous work proposed that separate brain regions are specialized for the recognition of face identity and facial expression. However, identity and expression recognition mechanisms might share common brain regions instead. We tested these alternatives using deep neural networks and intracranial recordings from face-selective brain regions. Deep neural networks trained to recognize identity and networks trained to recognize expression learned representations that correlate with neural recordings. Identity-trained representations correlated with intracranial recordings more strongly in all regions tested, including regions hypothesized to be expression-specialized under the classical hypothesis. These findings support the view that identity and expression recognition rely on common brain regions. This discovery may require reevaluation of the roles that the ventral and lateral neural pathways play in processing socially relevant stimuli.
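The RDM comparison at the core of this analysis can be sketched in a few lines. This is a minimal illustration on synthetic data, not the paper's pipeline: the array shapes, the noise model, and the use of plain Pearson correlation between RDM upper triangles are all assumptions made for the example.

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between response patterns for each pair of stimuli.
    responses: (n_stimuli, n_features) array."""
    return 1.0 - np.corrcoef(responses)

def rdm_similarity(rdm_a, rdm_b):
    """Compare two RDMs by correlating their upper triangles
    (the diagonal is zero by construction and is excluded)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(0)
neural = rng.normal(size=(12, 50))                # e.g. 12 stimuli x 50 electrodes
layer = neural + 0.1 * rng.normal(size=(12, 50))  # a DCNN layer with similar geometry
print(rdm_similarity(rdm(neural), rdm(layer)))    # high when geometries match
```

In the paper's logic, whichever network (identity- or expression-trained) yields the higher RDM similarity in a region indicates which task's representational geometry that region most resembles.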


Subjects
Electrocorticography; Facial Recognition; Adult; Female; Humans; Brain; Neural Networks, Computer; Facial Recognition/physiology; Temporal Lobe/physiology; Brain Mapping; Magnetic Resonance Imaging/methods
4.
J Neurosci ; 43(15): 2756-2766, 2023 04 12.
Article in English | MEDLINE | ID: mdl-36894316

ABSTRACT

Category selectivity is a fundamental principle of organization of perceptual brain regions. Human occipitotemporal cortex is subdivided into areas that respond preferentially to faces, bodies, artifacts, and scenes. However, observers need to combine information about objects from different categories to form a coherent understanding of the world. How is this multicategory information encoded in the brain? Studying the multivariate interactions between brain regions of male and female human subjects with fMRI and artificial neural networks, we found that the angular gyrus shows joint statistical dependence with multiple category-selective regions. Adjacent regions show effects for the combination of scenes and each other category, suggesting that scenes provide a context in which to combine information about the world. Additional analyses revealed a cortical map of areas that encode information across different subsets of categories, indicating that multicategory information is not encoded in a single centralized location, but in multiple distinct brain regions.

SIGNIFICANCE STATEMENT
Many cognitive tasks require combining information about entities from different categories. However, visual information about different categorical objects is processed by separate, specialized brain regions. How is the joint representation from multiple category-selective regions implemented in the brain? Using fMRI movie data and state-of-the-art multivariate statistical dependence measures based on artificial neural networks, we identified the angular gyrus as encoding responses across face-, body-, artifact-, and scene-selective regions. Further, we showed a cortical map of areas that encode information across different subsets of categories. These findings suggest that multicategory information is not encoded in a single centralized location, but at multiple cortical sites which might contribute to distinct cognitive functions, offering insights into integration across a variety of domains.


Subjects
Occipital Lobe; Visual Cortex; Humans; Male; Female; Occipital Lobe/physiology; Visual Cortex/physiology; Temporal Lobe/physiology; Magnetic Resonance Imaging; Parietal Lobe/diagnostic imaging; Brain/diagnostic imaging; Brain Mapping; Photic Stimulation; Pattern Recognition, Visual/physiology
5.
Brain Sci ; 13(2), 2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36831839

ABSTRACT

Recent neuroimaging evidence challenges the classical view that face identity and facial expression are processed by segregated neural pathways, showing that information about identity and expression are encoded within common brain regions. This article tests the hypothesis that integrated representations of identity and expression arise spontaneously within deep neural networks. A subset of the CelebA dataset is used to train a deep convolutional neural network (DCNN) to label face identity (chance = 0.06%, accuracy = 26.5%), and the FER2013 dataset is used to train a DCNN to label facial expression (chance = 14.2%, accuracy = 63.5%). The identity-trained and expression-trained networks each successfully transfer to labeling both face identity and facial expression on the Karolinska Directed Emotional Faces dataset. This study demonstrates that DCNNs trained to recognize face identity and DCNNs trained to recognize facial expression spontaneously develop representations of facial expression and face identity, respectively. Furthermore, a congruence coefficient analysis reveals that features distinguishing between identities and features distinguishing between expressions become increasingly orthogonal from layer to layer, suggesting that deep neural networks disentangle representational subspaces corresponding to different sources.
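The congruence coefficient used in the final analysis is essentially an uncentered cosine similarity between feature directions; values near zero indicate orthogonal representational subspaces. A minimal sketch follows (the vectors here are illustrative, not features extracted from the trained networks):

```python
import numpy as np

def congruence(u, v):
    """Tucker's congruence coefficient between two feature directions:
    like Pearson correlation, but without mean-centering."""
    return float(u @ v / np.sqrt((u @ u) * (v @ v)))

# Orthogonal feature directions give congruence 0; identical ones give 1.
identity_feature = np.array([1.0, 0.0, 2.0])
expression_feature = np.array([0.0, 3.0, 0.0])
print(congruence(identity_feature, expression_feature))  # 0.0
print(congruence(identity_feature, identity_feature))    # 1.0
```

In the study's terms, congruence between identity-discriminating and expression-discriminating features decreasing across layers is what indicates the two subspaces becoming disentangled.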

7.
Front Neuroinform ; 16: 835772, 2022.
Article in English | MEDLINE | ID: mdl-35811995

ABSTRACT

Cognitive tasks engage multiple brain regions. Studying how these regions interact is key to understanding the neural bases of cognition. Standard approaches to modeling the interactions between brain regions rely on univariate statistical dependence. However, newly developed methods can capture multivariate dependence. Multivariate pattern dependence (MVPD) is a powerful and flexible approach that trains and tests multivariate models of the interactions between brain regions using independent data. In this article, we introduce PyMVPD: an open source toolbox for multivariate pattern dependence. The toolbox includes linear regression models and artificial neural network models of the interactions between regions. It is designed to be easily customizable. We demonstrate example applications of PyMVPD using well-studied seed regions such as the fusiform face area (FFA) and the parahippocampal place area (PPA). Next, we compare the performance of different model architectures. Overall, artificial neural networks outperform linear regression. Importantly, the best performing architecture is region-dependent: MVPD subdivides cortex into distinct, contiguous regions whose interaction with FFA and PPA is best captured by different models.
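The core MVPD idea — fit a multivariate mapping from one region's patterns to another's on training runs, then evaluate it on held-out runs — can be sketched with a plain least-squares model. This is a simplified illustration on synthetic data, not PyMVPD's actual API; the region sizes, run split, and scoring are assumptions for the example.

```python
import numpy as np

def mvpd_linear(seed_train, target_train, seed_test, target_test):
    """Fit a linear mapping from seed-region patterns to target-region
    patterns on training data; return variance explained on test data."""
    X = np.column_stack([seed_train, np.ones(len(seed_train))])  # add intercept
    W, *_ = np.linalg.lstsq(X, target_train, rcond=None)
    Xt = np.column_stack([seed_test, np.ones(len(seed_test))])
    pred = Xt @ W
    ss_res = ((target_test - pred) ** 2).sum()
    ss_tot = ((target_test - target_test.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
seed = rng.normal(size=(200, 10))  # 200 timepoints x 10 seed-region voxels
target = seed @ rng.normal(size=(10, 5)) + 0.1 * rng.normal(size=(200, 5))
r2 = mvpd_linear(seed[:100], target[:100], seed[100:], target[100:])
print(round(r2, 3))  # high when the interaction is well captured by a linear map
```

PyMVPD additionally provides artificial neural network mappings, which the abstract reports outperform this kind of linear baseline in most regions.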

8.
Science ; 376(6597): 1070-1074, 2022 06 03.
Article in English | MEDLINE | ID: mdl-35653486

ABSTRACT

Autism spectrum disorder (ASD) is highly heterogeneous. Identifying systematic individual differences in neuroanatomy could inform diagnosis and personalized interventions. The challenge is that these differences are entangled with variation due to other causes: individual differences unrelated to ASD and measurement artifacts. We used contrastive deep learning to disentangle ASD-specific neuroanatomical variation from variation shared with typical control participants. ASD-specific variation correlated with individual differences in symptoms. The structure of this ASD-specific variation also addresses a long-standing debate about the nature of ASD: at least in terms of neuroanatomy, individuals do not cluster into distinct subtypes; instead, they are organized along continuous dimensions that affect distinct sets of regions.


Subjects
Autism Spectrum Disorder; Brain; Deep Learning; Autism Spectrum Disorder/pathology; Brain/abnormalities; Functional Neuroimaging; Humans; Neuroanatomy
9.
Neuroinformatics ; 20(3): 599-611, 2022 07.
Article in English | MEDLINE | ID: mdl-34519963

ABSTRACT

Recent analysis methods can capture nonlinear interactions between brain regions. However, noise sources might induce spurious nonlinear relationships between the responses in different regions. Previous research has demonstrated that traditional denoising techniques effectively remove noise-induced linear relationships between brain areas, but it is unknown whether these techniques can also remove spurious nonlinear relationships. To address this question, we analyzed fMRI responses while participants watched the film Forrest Gump. We tested whether nonlinear Multivariate Pattern Dependence Networks (MVPN) outperform linear MVPN in non-denoised data, and whether this difference is reduced after CompCor denoising. Whereas nonlinear MVPN outperformed linear MVPN in the non-denoised data, denoising removed these nonlinear interactions. We replicated our results using different neural network architectures as the bases of MVPN, different activation functions (ReLU and sigmoid), different dimensionality reduction techniques for CompCor (PCA and ICA), and multiple datasets, demonstrating that CompCor's ability to remove nonlinear interactions is robust across these analysis choices and across different groups of participants. Finally, we asked whether the information contributing to the removal of nonlinear interactions is localized to specific anatomical regions of no interest or to specific principal components. We denoised the data 8 separate times, regressing out: 5 principal components extracted from combined white matter (WM) and cerebrospinal fluid (CSF); each of those 5 components separately; 5 components extracted from WM only; and 5 components extracted from CSF only. In all cases, denoising was sufficient to remove the observed nonlinear interactions.
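The denoising step tested here, CompCor-style regression of noise-ROI principal components, reduces to a PCA followed by a nuisance regression. A minimal sketch on synthetic data follows; the latent-confound simulation and all dimensions are assumptions made for illustration, not the study's preprocessing pipeline.

```python
import numpy as np

def compcor_denoise(data, noise_rois, n_components=5):
    """Extract the top principal components of timeseries from noise
    regions (e.g. WM/CSF voxels) and regress them out of the data."""
    centered = noise_rois - noise_rois.mean(axis=0)
    U, _, _ = np.linalg.svd(centered, full_matrices=False)
    confounds = U[:, :n_components]                # component timecourses
    X = np.column_stack([confounds, np.ones(len(data))])
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X @ beta                         # residuals = denoised data

rng = np.random.default_rng(2)
physio = rng.normal(size=(150, 3))  # latent noise sources (e.g. physiological)
noise_rois = physio @ rng.normal(size=(3, 30)) + 0.1 * rng.normal(size=(150, 30))
data = rng.normal(size=(150, 8)) + physio @ rng.normal(size=(3, 8))
cleaned = compcor_denoise(data, noise_rois)
```

Because the same latent sources drive both the noise ROIs and the data of interest, regressing out the noise-ROI components removes the shared structure between regions — the mechanism by which, per the abstract, denoising also eliminates spurious nonlinear interactions.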


Subjects
Artifacts; Image Processing, Computer-Assisted; Algorithms; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer
11.
Netw Neurosci ; 6(4): 1296-1315, 2022.
Article in English | MEDLINE | ID: mdl-38800459

ABSTRACT

Here, we propose a novel technique to investigate nonlinear interactions between brain regions that captures both the strength and type of the functional relationship. Inspired by the field of functional analysis, we propose that the relationship between activity in separate brain areas can be viewed as a point in function space, identified by coordinates along an infinite set of basis functions. Using Hermite polynomials as bases, we estimate a subset of these values that serve as "functional coordinates," characterizing the interaction between BOLD activity across brain areas. We provide a proof of the convergence of the estimates in the limit, and we validate the method with simulations in which the ground truth is known, additionally showing that functional coordinates detect statistical dependence even when correlations ("functional connectivity") approach zero. We then use functional coordinates to examine neural interactions with a chosen seed region: the fusiform face area (FFA). Using k-means clustering across each voxel's functional coordinates, we illustrate that adding nonlinear basis functions allows for the discrimination of interregional interactions that are otherwise grouped together when using only linear dependence. Finally, we show that regions in V5 and medial occipital and temporal lobes exhibit significant nonlinear interactions with the FFA.


In this paper, we introduce a new method to investigate not only whether a set of brain areas interact, but also how the activity in those regions is related. To do this, we model the functional relationships between activity in distinct brain areas as points in a function space that can be described by "functional coordinates" along multiple basis functions. First, we demonstrate the efficacy of this novel method on simulated data; next, we apply it to real neural data, reporting evidence of nonlinear interactions. Functional coordinates can serve as a tool in future studies to further our understanding of the complex interactions across the brain.
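The estimation step can be sketched as a least-squares projection onto a truncated Hermite basis. This toy example (the degree, sample size, and quadratic ground truth are assumptions for illustration) also reproduces the abstract's point that functional coordinates detect statistical dependence even when linear correlation approaches zero:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def functional_coordinates(x, y, degree=4):
    """Least-squares coefficients of y = f(x) along probabilists'
    Hermite polynomials He_0..He_degree: the 'functional coordinates'."""
    basis = hermevander(x, degree)  # (n_timepoints, degree + 1) design matrix
    coords, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return coords

rng = np.random.default_rng(3)
x = rng.normal(size=2000)                    # seed-region activity
y = x**2 - 1 + 0.05 * rng.normal(size=2000)  # He_2(x): purely nonlinear coupling
coords = functional_coordinates(x, y)
# The quadratic coordinate dominates although corr(x, y) is near zero.
print(np.round(coords, 2))
```

Clustering voxels by such coordinate vectors, as the paper does, separates interactions by the *type* of functional relationship, not just its strength.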

12.
Emotion ; 21(1): 96-107, 2021 Feb.
Article in English | MEDLINE | ID: mdl-31580092

ABSTRACT

Observers attribute emotions to others relying on multiple cues, including facial expressions and information about the situation. Recent research has used Bayesian models to study how these cues are integrated. Existing studies have used a variety of tasks to probe emotion inferences, but limited attention has been devoted to the possibility that different decision processes might be involved depending on the task. If this is the case, understanding emotion representations might require understanding the decision processes through which they give rise to judgments. This article 1) shows that the different tasks that have been used in the literature yield very different results, 2) proposes an account of the decision processes involved that explains the differences, and 3) tests novel predictions of this account. The results offer new insights into how emotions are represented, and more broadly demonstrate the importance of taking decision processes into account in Bayesian models of cognition.


Subjects
Emotions/physiology; Facial Expression; Bayes Theorem; Female; Humans; Male; Pilot Projects
13.
Cereb Cortex ; 31(2): 884-898, 2021 01 05.
Article in English | MEDLINE | ID: mdl-32959050

ABSTRACT

Recent work in psychology and neuroscience has revealed differences in impression updating across social distance and group membership. Observers tend to maintain prior impressions of close (vs. distant) and ingroup (vs. outgroup) others in light of new information, and this belief maintenance is at times accompanied by increased activity in Theory of Mind regions. It remains an open question whether differences in the strength of prior beliefs, in a context absent social motivation, contribute to neural differences during belief updating. We devised a functional magnetic resonance imaging study to isolate the impact of experimentally induced prior beliefs on mentalizing activity. Participants learned about targets who performed 2 or 4 same-valenced behaviors (leading to the formation of weak or strong priors), before performing 2 counter-valenced behaviors. We found a greater change in activity in dorsomedial prefrontal cortex (DMPFC) and right temporo-parietal junction following the violation of strong versus weak priors, and a greater change in activity in DMPFC and left temporo-parietal junction following the violation of positive versus negative priors. These results indicate that differences in neural responses to unexpected behaviors from close versus distant others, and ingroup versus outgroup members, may be driven in part by differences in the strength of prior beliefs.


Subjects
Culture; Theory of Mind/physiology; Adolescent; Adult; Anticipation, Psychological; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Motivation; Parietal Lobe/diagnostic imaging; Parietal Lobe/physiology; Prefrontal Cortex/diagnostic imaging; Prefrontal Cortex/physiology; Psychomotor Performance/physiology; Social Environment; Temporal Lobe/diagnostic imaging; Temporal Lobe/physiology; Young Adult
14.
Annu Rev Psychol ; 71: 613-634, 2020 01 04.
Article in English | MEDLINE | ID: mdl-31553673

ABSTRACT

How do we learn what we know about others? Answering this question requires understanding the perceptual mechanisms with which we recognize individuals and their actions, and the processes by which the resulting perceptual representations lead to inferences about people's mental states and traits. This review discusses recent behavioral, neural, and computational studies that have contributed to this broad research program, encompassing both social perception and social cognition.


Subjects
Learning; Social Perception; Theory of Mind; Humans
15.
PLoS One ; 14(9): e0222914, 2019.
Article in English | MEDLINE | ID: mdl-31550276

ABSTRACT

Noise is a major challenge for the analysis of fMRI data in general and for connectivity analyses in particular. As researchers develop increasingly sophisticated tools to model statistical dependence between the fMRI signal in different brain regions, there is a risk that these models may increasingly capture artifactual relationships between regions that are the result of noise. Thus, choosing optimal denoising methods is a crucial step to maximize the accuracy and reproducibility of connectivity models. Most comparisons between denoising methods require knowledge of the ground truth, that is, of what the 'real signal' is. For this reason, they are usually based on simulated fMRI data. However, simulated data may not match the statistical properties of real data, limiting the generalizability of the conclusions. In this article, we propose an approach to evaluate denoising methods using real (non-simulated) fMRI data. First, we introduce an intersubject version of multivariate pattern dependence (iMVPD) that computes the statistical dependence between a brain region in one participant and another brain region in a different participant. iMVPD has the following advantages: 1) it is multivariate, 2) it trains and tests models on independent partitions of the real fMRI data, and 3) it generates predictions that are both between subjects and between regions. Since whole-brain sources of noise are more strongly correlated within subject than between subjects, we can use the difference between standard MVPD and iMVPD as a 'discrepancy metric' to evaluate denoising techniques (where more effective techniques should yield smaller differences). As predicted, the difference is greatest in the absence of denoising methods. Furthermore, a combination of global signal removal and CompCor optimizes denoising (among the set of denoising options tested).


Subjects
Brain Mapping/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Adult; Algorithms; Artifacts; Brain/diagnostic imaging; Brain/physiology; Datasets as Topic; Female; Humans; Male; Models, Statistical; Reproducibility of Results; Young Adult
16.
Cortex ; 103: 24-43, 2018 06.
Article in English | MEDLINE | ID: mdl-29554540

ABSTRACT

Individuals with Autism Spectrum Disorders (ASD) report difficulties extracting meaningful information from dynamic and complex social cues, like facial expressions. The nature and mechanisms of these difficulties remain unclear. Here we tested whether that difficulty can be traced to the pattern of activity in "social brain" regions, when viewing dynamic facial expressions. In two studies, adult participants (male and female) watched brief videos of a range of positive and negative facial expressions, while undergoing functional magnetic resonance imaging (Study 1: ASD n = 16, control n = 21; Study 2: ASD n = 22, control n = 30). Patterns of hemodynamic activity differentiated among facial emotional expressions in left and right superior temporal sulcus, fusiform gyrus, and parts of medial prefrontal cortex. In both control participants and high-functioning individuals with ASD, we observed (i) similar responses to emotional valence that generalized across facial expressions and animated social events; (ii) similar flexibility of responses to emotional valence, when manipulating the task-relevance of perceived emotions; and (iii) similar responses to a range of emotions within valence. Altogether, the data indicate that there was little or no group difference in cortical responses to isolated dynamic emotional facial expressions, as measured with fMRI. Difficulties with real-world social communication and social interaction in ASD may instead reflect differences in initiating and maintaining contingent interactions, or in integrating social information over time or context.


Subjects
Autism Spectrum Disorder/physiopathology; Cerebral Cortex/physiopathology; Emotions/physiology; Facial Expression; Generalization, Stimulus/physiology; Social Perception; Visual Perception/physiology; Adult; Autism Spectrum Disorder/diagnostic imaging; Autism Spectrum Disorder/psychology; Brain Mapping; Cerebral Cortex/diagnostic imaging; Female; Humans; Magnetic Resonance Imaging; Male; Photic Stimulation; Young Adult
17.
Trends Cogn Sci ; 22(3): 258-269, 2018 03.
Article in English | MEDLINE | ID: mdl-29305206

ABSTRACT

For over two decades, interactions between brain regions have been measured in humans by asking how the univariate responses in different regions co-vary ('Functional Connectivity'). Thousands of Functional Connectivity studies have been published investigating the healthy brain and how it is affected by neural disorders. The advent of multivariate fMRI analyses showed that patterns of responses within regions encode information that is lost by averaging. Despite this, connectivity methods predominantly continue to focus on univariate responses. In this review, we discuss the recent emergence of multivariate and nonlinear methods for studying interactions between brain regions. These new developments bring sensitivity to fluctuations in multivariate information, and offer the possibility to ask not only whether brain regions interact, but how they do so.


Subjects
Brain/physiology; Connectome/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Multivariate Analysis; Nerve Net/physiology; Brain/diagnostic imaging; Humans; Nerve Net/diagnostic imaging
18.
PLoS Comput Biol ; 13(11): e1005799, 2017 Nov.
Article in English | MEDLINE | ID: mdl-29155809

ABSTRACT

When we perform a cognitive task, multiple brain regions are engaged. Understanding how these regions interact is a fundamental step to uncover the neural bases of behavior. Most research on the interactions between brain regions has focused on the univariate responses in the regions. However, fine-grained patterns of response encode important information, as shown by multivariate pattern analysis. In the present article, we introduce and apply multivariate pattern dependence (MVPD): a technique to study the statistical dependence between brain regions in humans in terms of the multivariate relations between their patterns of responses. MVPD characterizes the responses in each brain region as trajectories in region-specific multidimensional spaces, and models the multivariate relationship between these trajectories. We applied MVPD to the posterior superior temporal sulcus (pSTS) and to the fusiform face area (FFA), using a searchlight approach to reveal interactions between these seed regions and the rest of the brain. Across two different experiments, MVPD identified significant statistical dependence not detected by standard functional connectivity. Additionally, MVPD outperformed univariate connectivity in its ability to explain independent variance in the responses of individual voxels. Finally, MVPD uncovered different connectivity profiles associated with different representational subspaces of FFA: the first principal component of FFA shows differential connectivity with occipital and parietal regions implicated in the processing of low-level properties of faces, while the second and third components show differential connectivity with anterior temporal regions implicated in the processing of invariant representations of face identity.


Subjects
Pattern Recognition, Physiological; Pattern Recognition, Visual; Adolescent; Adult; Brain/physiology; Female; Humans; Male; Multivariate Analysis; Young Adult
20.
Cortex ; 89: 85-97, 2017 04.
Article in English | MEDLINE | ID: mdl-28242496

ABSTRACT

Recognizing the identity of a person is fundamental to guide social interactions. We can recognize the identity of a person by looking at their face, but also by listening to their voice. An important question concerns how visual and auditory information come together, enabling us to recognize identity independently of the modality of the stimulus. This study reports converging evidence across univariate contrasts and multivariate classification showing that the posterior superior temporal sulcus (pSTS), previously known to encode polymodal visual and auditory representations, encodes information about person identity with invariance within and across modality. In particular, pSTS shows selectivity for faces, selectivity for voices, classification of face identity across image transformations within the visual modality, and classification of person identity across modality.


Subjects
Auditory Perception/physiology; Brain/diagnostic imaging; Facial Recognition/physiology; Recognition, Psychology/physiology; Visual Perception/physiology; Adult; Brain/physiology; Brain Mapping/methods; Face; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Voice; Young Adult