1.
Nat Commun ; 15(1): 5531, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982092

ABSTRACT

In everyday life, people need to respond appropriately to many types of emotional stimuli. Here, we investigate whether human occipital-temporal cortex (OTC) shows co-representation of the semantic category and affective content of visual stimuli. We also explore whether OTC transformation of semantic and affective features extracts information of value for guiding behavior. Participants viewed 1620 emotional natural images while functional magnetic resonance imaging data were acquired. Using voxel-wise modeling we show widespread tuning to semantic and affective image features across OTC. The top three principal components underlying OTC voxel-wise responses to image features encoded stimulus animacy, stimulus arousal and interactions of animacy with stimulus valence and arousal. At low to moderate dimensionality, OTC tuning patterns predicted behavioral responses linked to each image better than regressors directly based on image features. This is consistent with OTC representing stimulus semantic category and affective content in a manner suited to guiding behavior.


Subject(s)
Emotions, Magnetic Resonance Imaging, Occipital Lobe, Semantics, Temporal Lobe, Humans, Female, Male, Magnetic Resonance Imaging/methods, Temporal Lobe/physiology, Temporal Lobe/diagnostic imaging, Adult, Occipital Lobe/physiology, Occipital Lobe/diagnostic imaging, Young Adult, Emotions/physiology, Brain Mapping, Photic Stimulation, Affect/physiology, Arousal/physiology
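
A minimal sketch of the voxel-wise modeling and principal-component analysis described above, using placeholder arrays in place of the real image features and OTC responses (nothing below is the authors' code):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.decomposition import PCA

# Placeholder data: semantic + affective features for each of the 1620
# images, and BOLD responses for a set of OTC voxels.
rng = np.random.default_rng(0)
X = rng.standard_normal((1620, 50))    # images x stimulus features
Y = rng.standard_normal((1620, 2000))  # images x voxels

# Voxel-wise modeling: one regularized linear model per voxel.
model = RidgeCV(alphas=np.logspace(0, 4, 10)).fit(X, Y)
W = model.coef_  # voxels x features; each row is one voxel's tuning profile

# Principal components of the voxel-wise tuning profiles; the paper reports
# that the top three encode animacy, arousal, and their interactions.
pca = PCA(n_components=3).fit(W)
print(pca.explained_variance_ratio_)
```
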
2.
Neurobiol Lang (Camb) ; 5(1): 80-106, 2024.
Article in English | MEDLINE | ID: mdl-38645624

ABSTRACT

Language neuroscience currently relies on two major experimental paradigms: controlled experiments using carefully hand-designed stimuli, and natural stimulus experiments. These approaches have complementary advantages which allow them to address distinct aspects of the neurobiology of language, but each approach also comes with drawbacks. Here we discuss a third paradigm-in silico experimentation using deep learning-based encoding models-that has been enabled by recent advances in cognitive computational neuroscience. This paradigm promises to combine the interpretability of controlled experiments with the generalizability and broad scope of natural stimulus experiments. We show four examples of simulating language neuroscience experiments in silico and then discuss both the advantages and caveats of this approach.
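
As a toy illustration of the in silico paradigm, the sketch below uses a pre-trained encoding model to contrast predicted brain responses to two hand-designed stimulus sets without collecting new fMRI data. The feature extractor and weight matrix are stand-ins for a real deep-network encoding model:

```python
import numpy as np

def extract_features(sentences):
    # Placeholder for a deep language model's feature extractor.
    rng = np.random.default_rng(abs(hash(tuple(sentences))) % 2**32)
    return rng.standard_normal((len(sentences), 50))

# Placeholder for encoding-model weights fitted on natural-stimulus data.
W = np.random.default_rng(1).standard_normal((50, 2000))

# In silico experiment: a controlled contrast run entirely in the model.
concrete = ["The chef diced the onions.", "She hammered the nail in."]
abstract = ["Justice is a fragile idea.", "Truth resists simple stories."]
pred_concrete = extract_features(concrete) @ W
pred_abstract = extract_features(abstract) @ W
effect = pred_concrete.mean(0) - pred_abstract.mean(0)  # per-voxel contrast
```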

4.
Sci Data ; 10(1): 555, 2023 08 23.
Article in English | MEDLINE | ID: mdl-37612332

ABSTRACT

Speech comprehension is a complex process that draws on humans' abilities to extract lexical information, parse syntax, and form semantic understanding. These sub-processes have traditionally been studied using separate neuroimaging experiments that attempt to isolate specific effects of interest. More recently it has become possible to study all stages of language comprehension in a single neuroimaging experiment using narrative natural language stimuli. The resulting data are richly varied at every level, enabling analyses that can probe everything from spectral representations to high-level representations of semantic meaning. We provide a dataset containing BOLD fMRI responses recorded while 8 participants each listened to 27 complete, natural, narrative stories (~6 hours). This dataset includes pre-processed and raw MRIs, as well as hand-constructed 3D cortical surfaces for each participant. To address the challenges of analyzing naturalistic data, this dataset is accompanied by a Python library containing basic code for creating voxelwise encoding models. Altogether, this dataset provides a large and novel resource for understanding speech and language processing in the human brain.


Subject(s)
Auditory Perception, Magnetic Resonance Imaging, Humans, Language, Neuroimaging, Semantics
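
A minimal sketch of the voxelwise encoding analysis this dataset supports, written with generic scikit-learn and SciPy tools rather than the dataset's accompanying library; the arrays are placeholders for real stimulus features and BOLD responses:

```python
import numpy as np
from scipy.stats import zscore
from sklearn.linear_model import Ridge

# Placeholders: stimulus features (e.g., word embeddings convolved with a
# hemodynamic response) and BOLD data, split into training and test stories.
rng = np.random.default_rng(0)
X_train, Y_train = rng.standard_normal((3000, 300)), rng.standard_normal((3000, 5000))
X_test, Y_test = rng.standard_normal((300, 300)), rng.standard_normal((300, 5000))

model = Ridge(alpha=100.0).fit(zscore(X_train), zscore(Y_train))
pred = model.predict(zscore(X_test))

# Voxelwise performance: correlation between predicted and held-out responses.
r = np.array([np.corrcoef(pred[:, v], Y_test[:, v])[0, 1]
              for v in range(Y_test.shape[1])])
```
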
5.
Nat Commun ; 14(1): 4309, 2023 07 18.
Article in English | MEDLINE | ID: mdl-37463907

ABSTRACT

Speech processing requires extracting meaning from acoustic patterns using a set of intermediate representations based on a dynamic segmentation of the speech stream. Using whole-brain mapping obtained with fMRI, we investigate the locus of cortical phonemic processing not only for single phonemes but also for short phoneme combinations (diphones and triphones). We find that phonemic processing areas are much larger than previously described: they include not only the classical areas in the dorsal superior temporal gyrus but also a larger region in the lateral temporal cortex where diphone features are best represented. These identified phonemic regions overlap with the lexical retrieval region, but we show that short word retrieval is not sufficient to explain the observed responses to diphones. Behavioral studies have shown that phonemic processing and lexical retrieval are intertwined. Here, we have also identified candidate regions within the cortical speech network where this joint processing occurs.


Subject(s)
Speech Perception, Speech, Humans, Speech/physiology, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology, Brain/physiology, Speech Perception/physiology, Brain Mapping, Magnetic Resonance Imaging, Cerebral Cortex/diagnostic imaging
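
The diphone and triphone regressors described here can be built from a time-aligned phoneme transcript by counting phoneme n-grams per fMRI volume; a sketch, with a hypothetical transcript format:

```python
from collections import Counter

# Hypothetical phonemes heard during one fMRI volume (TR), e.g., "the cat".
phonemes_in_tr = ["DH", "AH", "K", "AE", "T"]

def ngram_counts(phonemes, n):
    """Count phoneme n-grams: n=1 phonemes, n=2 diphones, n=3 triphones."""
    return Counter(tuple(phonemes[i:i + n]) for i in range(len(phonemes) - n + 1))

features = {n: ngram_counts(phonemes_in_tr, n) for n in (1, 2, 3)}
# Each Counter becomes one row of a (TRs x n-gram vocabulary) design matrix.
```
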
6.
Nat Neurosci ; 26(5): 858-866, 2023 05.
Article in English | MEDLINE | ID: mdl-37127759

ABSTRACT

A brain-computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain-computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain-computer interfaces.


Subject(s)
Brain-Computer Interfaces, Speech Perception, Semantics, Brain, Language, Brain Mapping/methods, Magnetic Resonance Imaging/methods
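
A heavily simplified sketch of the general decoding strategy the abstract describes: a language model proposes candidate word sequences, and an encoding model scores each candidate by how well its predicted brain response matches the recording. Every function below is a placeholder, not the paper's implementation:

```python
import numpy as np

def lm_propose(prefix):
    # Placeholder: a language model would propose likely continuations.
    return [prefix + [w] for w in ("the", "dog", "ran", "home")]

def encode(words):
    # Placeholder: an encoding model would predict BOLD from the word sequence.
    rng = np.random.default_rng(abs(hash(tuple(words))) % 2**32)
    return rng.standard_normal(5000)

def score(candidate, bold):
    # Likelihood proxy: similarity of predicted and recorded responses.
    return float(np.corrcoef(encode(candidate), bold)[0, 1])

def beam_search(bold, steps=10, beam=4):
    beams = [[]]
    for _ in range(steps):
        pool = [c for b in beams for c in lm_propose(b)]
        beams = sorted(pool, key=lambda c: score(c, bold), reverse=True)[:beam]
    return beams[0]  # best-matching word sequence
```
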
7.
Adv Neural Inf Process Syst ; 36: 29654-29666, 2023 Dec.
Article in English | MEDLINE | ID: mdl-39015152

ABSTRACT

Encoding models have been used to assess how the human brain represents concepts in language and vision. While language and vision rely on similar concept representations, current encoding models are typically trained and tested on brain responses to each modality in isolation. Recent advances in multimodal pretraining have produced transformers that can extract aligned representations of concepts in language and vision. In this work, we used representations from multimodal transformers to train encoding models that can transfer across fMRI responses to stories and movies. We found that encoding models trained on brain responses to one modality can successfully predict brain responses to the other modality, particularly in cortical regions that represent conceptual meaning. Further analysis of these encoding models revealed shared semantic dimensions that underlie concept representations in language and vision. Comparing encoding models trained using representations from multimodal and unimodal transformers, we found that multimodal transformers learn more aligned representations of concepts in language and vision. Our results demonstrate how multimodal transformers can provide insights into the brain's capacity for multimodal processing.
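
A sketch of the cross-modal transfer test, assuming aligned features from a multimodal transformer (e.g., CLIP-style embeddings); all arrays are placeholders:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Aligned multimodal features for story words and movie frames, plus BOLD
# responses to each modality from the same subject (placeholder shapes).
X_story, Y_story = rng.standard_normal((3000, 512)), rng.standard_normal((3000, 5000))
X_movie, Y_movie = rng.standard_normal((2000, 512)), rng.standard_normal((2000, 5000))

# Train the encoding model on one modality, test it on the other.
model = Ridge(alpha=100.0).fit(X_story, Y_story)
pred = model.predict(X_movie)
r = np.array([np.corrcoef(pred[:, v], Y_movie[:, v])[0, 1]
              for v in range(Y_movie.shape[1])])
# High r in a voxel suggests a modality-invariant conceptual representation.
```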

8.
Adv Neural Inf Process Syst ; 36: 21895-21907, 2023.
Article in English | MEDLINE | ID: mdl-39035676

ABSTRACT

Representations from transformer-based unidirectional language models are known to be effective at predicting brain responses to natural language. However, most studies comparing language models to brains have used GPT-2 or similarly sized language models. Here we tested whether larger open-source models, such as those from the OPT and LLaMA families, are better at predicting brain responses recorded using fMRI. Mirroring scaling results from other contexts, we found that brain prediction performance scales logarithmically with model size from 125M- to 30B-parameter models, with ~15% higher encoding performance, measured as correlation with a held-out test set, across 3 subjects. Similar logarithmic behavior was observed when scaling the size of the fMRI training set. We also characterized scaling for acoustic encoding models that use HuBERT, WavLM, and Whisper, and we found comparable improvements with model size. A noise ceiling analysis of these large, high-performance encoding models showed that performance is nearing the theoretical maximum for brain areas such as the precuneus and higher auditory cortex. These results suggest that increasing scale in both models and data will yield highly effective models of language processing in the brain, enabling better scientific understanding as well as applications such as decoding.
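
The reported logarithmic scaling can be summarized with a one-line fit of encoding performance against log model size; the numbers below are illustrative, not the paper's:

```python
import numpy as np

params = np.array([125e6, 1.3e9, 6.7e9, 13e9, 30e9])  # model sizes
perf = np.array([0.210, 0.225, 0.235, 0.240, 0.243])  # illustrative mean r

# Logarithmic scaling law: performance is linear in log10(parameter count).
slope, intercept = np.polyfit(np.log10(params), perf, deg=1)
print(f"encoding r ~= {intercept:.3f} + {slope:.3f} * log10(params)")
```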

9.
J Neurosci ; 41(50): 10341-10355, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34732520

ABSTRACT

There is a growing body of research demonstrating that the cerebellum is involved in language understanding. Early theories assumed that the cerebellum is involved in low-level language processing. However, those theories are at odds with recent work demonstrating cerebellar activation during cognitive tasks. Using natural language stimuli and an encoding model framework, we performed an fMRI experiment in which 3 men and 2 women passively listened to 5 h of natural language stimuli, which allowed us to analyze language processing in the cerebellum with higher precision than previous work. We used these data to fit voxelwise encoding models with five different feature spaces that span the hierarchy of language processing from acoustic input to high-level conceptual processing. Examining the prediction performance of these models on separate BOLD data shows that cerebellar responses to language are almost entirely explained by high-level conceptual language features rather than low-level acoustic or phonemic features. Additionally, we found that the cerebellum has a higher proportion of voxels that represent social semantic categories, which include "social" and "people" words, and lower representations of all other semantic categories, including "mental," "concrete," and "place" words, than cortex. This suggests that the cerebellum represents language at a conceptual level, with a preference for social information.

SIGNIFICANCE STATEMENT: Recent work has demonstrated that, beyond its typical role in motor planning, the cerebellum is implicated in a wide variety of tasks, including language. However, little is known about the language representations in the cerebellum, or how those representations compare to cortex. Using voxelwise encoding models and natural language fMRI data, we demonstrate here that language representations are significantly different in the cerebellum compared with cortex. Cerebellar language representations are almost entirely semantic, and the cerebellum contains an overrepresentation of social semantic information compared with cortex. These results suggest that the cerebellum is involved not in language processing per se, but in cognitive processing more generally.


Subject(s)
Cerebellum/physiology, Language, Models, Neurologic, Semantics, Speech Perception/physiology, Female, Humans, Magnetic Resonance Imaging, Male
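
A sketch of the underlying model-comparison logic: fit one encoding model per feature space and ask which space best predicts each voxel. Feature spaces and data are placeholders (the study used five spaces; three are shown):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
Y_train, Y_test = rng.standard_normal((3000, 4000)), rng.standard_normal((500, 4000))
spaces = {"spectral": 64, "phonemic": 40, "semantic": 300}  # name -> feature dim

perf = {}
for name, dim in spaces.items():
    Xtr, Xte = rng.standard_normal((3000, dim)), rng.standard_normal((500, dim))
    pred = Ridge(alpha=100.0).fit(Xtr, Y_train).predict(Xte)
    perf[name] = np.array([np.corrcoef(pred[:, v], Y_test[:, v])[0, 1]
                           for v in range(Y_test.shape[1])])

# Winning feature space per voxel (row order follows insertion order above).
best = np.argmax(np.vstack(list(perf.values())), axis=0)
```
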
10.
Nat Neurosci ; 24(11): 1628-1636, 2021 11.
Article in English | MEDLINE | ID: mdl-34711960

ABSTRACT

Semantic information in the human brain is organized into multiple networks, but the fine-grained relationships between them are poorly understood. In this study, we compared semantic maps obtained from two functional magnetic resonance imaging experiments in the same participants: one that used silent movies as stimuli and another that used narrative stories. Movies evoked activity from a network of modality-specific, semantically selective areas in visual cortex. Stories evoked activity from another network of semantically selective areas immediately anterior to visual cortex. Remarkably, the pattern of semantic selectivity in these two distinct networks corresponded along the boundary of visual cortex: for visual categories represented posterior to the boundary, the same categories were represented linguistically on the anterior side. These results suggest that these two networks are smoothly joined to form one contiguous map.


Subject(s)
Linguistics/methods, Pattern Recognition, Visual/physiology, Semantics, Visual Cortex/diagnostic imaging, Visual Cortex/physiology, Adult, Female, Humans, Magnetic Resonance Imaging/methods, Male, Photic Stimulation/methods, Young Adult
11.
Cereb Cortex ; 31(11): 4986-5005, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34115102

ABSTRACT

Humans are remarkably adept at listening to a desired speaker in a crowded environment while filtering out nontarget speakers in the background. Attention is key to solving this difficult cocktail-party task, yet a detailed characterization of attentional effects on speech representations is lacking. It remains unclear which levels of speech features attention modulates, and how strongly, in each brain area during the cocktail-party task. To address these questions, we recorded whole-brain blood-oxygen-level-dependent (BOLD) responses while subjects either passively listened to single-speaker stories, or selectively attended to a male or a female speaker in temporally overlaid stories in separate experiments. Spectral, articulatory, and semantic models of the natural stories were constructed. Intrinsic selectivity profiles were identified via voxelwise models fit to passive listening responses. Attentional modulations were then quantified based on model predictions for attended and unattended stories in the cocktail-party task. We find that attention causes broad modulations at multiple levels of speech representations while growing stronger toward later stages of processing, and that unattended speech is represented up to the semantic level in parabelt auditory cortex. These results provide insight into the attentional mechanisms that underlie the ability to selectively listen to a desired speaker in noisy multispeaker environments.


Subject(s)
Auditory Cortex, Speech Perception, Acoustic Stimulation/methods, Attention/physiology, Auditory Cortex/physiology, Auditory Perception, Female, Humans, Male, Speech/physiology, Speech Perception/physiology
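
A sketch of how attentional modulation can be quantified from encoding-model predictions for the attended and unattended stories; the arrays and the modulation index are illustrative, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
bold = rng.standard_normal((600, 5000))             # cocktail-party responses
pred_attended = rng.standard_normal((600, 5000))    # model prediction, attended story
pred_unattended = rng.standard_normal((600, 5000))  # model prediction, unattended story

def voxel_corr(a, b):
    # Column-wise Pearson correlation between two (time x voxel) arrays.
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

r_att = voxel_corr(pred_attended, bold)
r_unatt = voxel_corr(pred_unattended, bold)
# Positive index: the voxel tracks the attended story more than the unattended one.
modulation = (r_att - r_unatt) / (np.abs(r_att) + np.abs(r_unatt) + 1e-9)
```
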
12.
Lang Cogn Neurosci ; 35(5): 573-582, 2020.
Article in English | MEDLINE | ID: mdl-32656294

ABSTRACT

Humans have a unique ability to produce and consume rich, complex, and varied language in order to communicate ideas to one another. Still, outside of natural reading, the most common methods for studying how our brains process speech or understand language use only isolated words or simple sentences. Recent studies have upset this status quo by employing complex natural stimuli and measuring how the brain responds to language as it is used. In this article we argue that natural stimuli offer many advantages over simplified, controlled stimuli for studying how language is processed by the brain. Furthermore, the downsides of using natural language stimuli can be mitigated using modern statistical and computational techniques.

13.
J Neurosci ; 39(39): 7722-7736, 2019 09 25.
Article in English | MEDLINE | ID: mdl-31427396

ABSTRACT

An integral part of human language is the capacity to extract meaning from spoken and written words, but the precise relationship between brain representations of information perceived by listening versus reading is unclear. Prior neuroimaging studies have shown that semantic information in spoken language is represented in multiple regions of the human cerebral cortex, while amodal semantic information appears to be represented in a few broad brain regions. However, previous studies were too insensitive to determine whether semantic representations were shared at a fine level of detail rather than merely at a coarse scale. We used fMRI to record brain activity in two separate experiments while participants listened to or read several hours of the same narrative stories, and then created voxelwise encoding models to characterize semantic selectivity in each voxel and in each individual participant. We find that semantic tuning during listening and reading is highly correlated in most semantically selective regions of cortex, and models estimated using one modality accurately predict voxel responses in the other modality. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.

SIGNIFICANCE STATEMENT: Humans can comprehend the meaning of words from both spoken and written language. It is therefore important to understand the relationship between the brain representations of spoken or written text. Here, we show that although the representation of semantic information in the human brain is quite complex, the semantic representations evoked by listening versus reading are almost identical. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.


Subject(s)
Auditory Perception/physiology, Cerebral Cortex/physiology, Comprehension/physiology, Models, Neurologic, Visual Perception/physiology, Acoustic Stimulation, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Photic Stimulation, Reading, Semantics
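
A sketch of the two analyses described: fit separate voxelwise models for listening and reading, then correlate each voxel's semantic tuning across modalities (all data are placeholders):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Semantic features and BOLD responses for the same stories, heard vs read.
X_listen, Y_listen = rng.standard_normal((3000, 300)), rng.standard_normal((3000, 5000))
X_read, Y_read = rng.standard_normal((3000, 300)), rng.standard_normal((3000, 5000))

w_listen = Ridge(alpha=100.0).fit(X_listen, Y_listen).coef_  # voxels x features
w_read = Ridge(alpha=100.0).fit(X_read, Y_read).coef_

# Per-voxel correlation of semantic tuning across the two modalities;
# values near 1 indicate modality-independent semantic selectivity.
tuning_r = np.array([np.corrcoef(w_listen[v], w_read[v])[0, 1]
                     for v in range(w_listen.shape[0])])
```
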
14.
Neuroimage ; 197: 482-492, 2019 08 15.
Article in English | MEDLINE | ID: mdl-31075394

ABSTRACT

Predictive models for neural or fMRI data are often fit using regression methods that employ priors on the model parameters. One widely used method is ridge regression, which employs a spherical multivariate normal prior that assumes equal and independent variance for all parameters. However, a spherical prior is not always optimal or appropriate. There are many cases where expert knowledge or hypotheses about the structure of the model parameters could be used to construct a better prior. In these cases, non-spherical multivariate normal priors can be employed using a generalized form of ridge regression known as Tikhonov regression. Yet Tikhonov regression is only rarely used in neuroscience. In this paper we discuss the theoretical basis for Tikhonov regression, demonstrate a computationally efficient method for its application, and show several examples of how Tikhonov regression can improve predictive models for fMRI data. We also show that many earlier studies have implicitly used Tikhonov regression by linearly transforming the regressors before performing ridge regression.


Subject(s)
Brain/physiology, Computer Simulation, Magnetic Resonance Imaging, Models, Neurologic, Neurosciences/methods, Algorithms, Humans
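
The paper's closing observation, that Tikhonov regression with a non-spherical prior covariance Σ = C Cᵀ is equivalent to ridge regression on the linearly transformed regressors XC, can be sketched as follows (the smoothness prior here is a hypothetical example, not the paper's code):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = rng.standard_normal((500, 100)), rng.standard_normal(500)

# Non-spherical prior: neighboring regressors are expected to have similar
# weights (a Gaussian smoothness prior). Sigma = C @ C.T must be PSD.
idx = np.arange(100)
Sigma = np.exp(-0.5 * (idx[:, None] - idx[None, :]) ** 2 / 5.0 ** 2)
C = np.linalg.cholesky(Sigma + 1e-8 * np.eye(100))

# Tikhonov regression = ridge on the transformed design matrix XC,
# then map the fitted weights back through C.
ridge = Ridge(alpha=1.0, fit_intercept=False).fit(X @ C, y)
b_tikhonov = C @ ridge.coef_
```
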
15.
J Cogn Neurosci ; 31(3): 327-338, 2019 03.
Article in English | MEDLINE | ID: mdl-29916793

ABSTRACT

Real-world environments are typically dynamic, complex, and multisensory in nature and require the support of top-down attention and memory mechanisms for us to be able to drive a car, make a shopping list, or pour a cup of coffee. Fundamental principles of perception and functional brain organization have been established by research utilizing well-controlled but simplified paradigms with basic stimuli. The last 30 years have ushered in a revolution in computational power, brain mapping, and signal processing techniques. Drawing on those theoretical and methodological advances, research has increasingly departed from traditional, rigorous, and well-understood paradigms to directly investigate cognitive functions and their underlying brain mechanisms in real-world environments. These investigations typically address the role of one or, more recently, multiple attributes of real-world environments. Fundamental assumptions about perception, attention, or brain functional organization have been challenged by studies adapting the traditional paradigms to emulate, for example, the multisensory nature or varying relevance of stimulation, or dynamically changing task demands. Here, we present the state of the field within the emerging heterogeneous domain of real-world neuroscience. The aim of this Special Focus is to bring together a variety of the emerging "real-world neuroscientific" approaches, which differ in their principal aims, assumptions, and even definitions of "real-world neuroscience" research. We showcase the commonalities and distinctive features of the different approaches. To do so, four early-career researchers and the speakers of the Cognitive Neuroscience Society 2017 Meeting symposium under the same title answer questions pertaining to the added value of such approaches in bringing us closer to accurate models of functional brain organization and cognitive functions.


Subject(s)
Brain/physiology, Cognition/physiology, Environment, Neurosciences, Attention/physiology, Humans
16.
J Neurosci ; 37(27): 6539-6557, 2017 07 05.
Article in English | MEDLINE | ID: mdl-28588065

ABSTRACT

Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory features in STG, mixtures of articulatory and semantic features in STS, and semantic features in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech.

SIGNIFICANCE STATEMENT: To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to natural speech. Both cerebral hemispheres were actively involved in speech processing in large and equal amounts. Also, the transformation from spectral features to semantic elements occurs early in the cortical speech-processing stream. Our experimental and analytical approaches are important alternatives and complements to standard approaches that use segmented speech and block designs, which report more laterality in speech processing and assign semantic processing to higher levels of cortex than reported here.


Subject(s)
Cerebral Cortex/physiology, Models, Neurologic, Nerve Net/physiology, Speech Perception/physiology, Adult, Computer Simulation, Female, Humans, Male, Neural Pathways/physiology
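
The variance partitioning analysis for two feature spaces reduces to fitting each space alone and both together, then combining the R² values set-theoretically; a sketch with placeholder data (the paper's three-space case adds more intersection terms):

```python
import numpy as np
from sklearn.linear_model import Ridge

def r2(Xtr, Xte, ytr, yte):
    """R-squared of a ridge model on held-out data."""
    pred = Ridge(alpha=100.0).fit(Xtr, ytr).predict(Xte)
    return 1 - ((yte - pred) ** 2).sum() / ((yte - yte.mean()) ** 2).sum()

rng = np.random.default_rng(0)
A_tr, A_te = rng.standard_normal((3000, 64)), rng.standard_normal((500, 64))    # spectral
B_tr, B_te = rng.standard_normal((3000, 300)), rng.standard_normal((500, 300))  # semantic
y_tr, y_te = rng.standard_normal(3000), rng.standard_normal(500)                # one voxel

r2_a = r2(A_tr, A_te, y_tr, y_te)
r2_b = r2(B_tr, B_te, y_tr, y_te)
r2_ab = r2(np.hstack([A_tr, B_tr]), np.hstack([A_te, B_te]), y_tr, y_te)

unique_a = r2_ab - r2_b       # variance uniquely explained by spectral features
unique_b = r2_ab - r2_a       # variance uniquely explained by semantic features
shared = r2_a + r2_b - r2_ab  # variance explained equally well by either
```
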
17.
J Vis ; 17(1): 11, 2017 01 01.
Article in English | MEDLINE | ID: mdl-28114479

ABSTRACT

During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movements should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movements should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye-movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.


Subject(s)
Brain/physiology, Eye Movements/physiology, Fixation, Ocular/physiology, Visual Perception/physiology, Adult, Female, Humans, Magnetic Resonance Imaging, Male
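
The similarity-ratio logic can be sketched directly: correlate voxel time courses across repeated fixation runs, correlate them across fixation and free viewing, and take the ratio (placeholder data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Voxel time courses for two fixation runs of the movie and one free-viewing run.
fix1, fix2, free = (rng.standard_normal((900, 5000)) for _ in range(3))

def voxel_corr(a, b):
    # Column-wise Pearson correlation between two (time x voxel) arrays.
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

within = voxel_corr(fix1, fix2)  # repeatability under fixation
across = voxel_corr(fix1, free)  # similarity across viewing conditions

# Ratio near 1: responses unchanged by eye movements (invariant voxel);
# ratio well below 1: the voxel is sensitive to eye movements.
invariance = across / within
```
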
18.
J Neurosci ; 36(40): 10257-10273, 2016 10 05.
Article in English | MEDLINE | ID: mdl-27707964

ABSTRACT

Functional MRI studies suggest that at least three brain regions in human visual cortex represent large-scale information in natural scenes: the parahippocampal place area (PPA), the retrosplenial complex (RSC), and the occipital place area (OPA; often called the transverse occipital sulcus). Tuning of voxels within each region is often assumed to be functionally homogeneous. To test this assumption, we recorded blood oxygenation level-dependent responses during passive viewing of complex natural movies. We then used a voxelwise modeling framework to estimate voxelwise category tuning profiles within each scene-selective region. In all three regions, cluster analysis of the voxelwise tuning profiles reveals two functional subdomains that differ primarily in their responses to animals, man-made objects, social communication, and movement. Thus, the conventional functional definitions of the PPA, RSC, and OPA appear to be too coarse. One attractive hypothesis is that this consistent functional subdivision of scene-selective regions reflects an underlying anatomical organization into two separate processing streams, one selectively biased toward static stimuli and one biased toward dynamic stimuli.

SIGNIFICANCE STATEMENT: Visual scene perception is a critical ability to survive in the real world. It is therefore reasonable to assume that the human brain contains neural circuitry selective for visual scenes. Here we show that responses in three scene-selective areas, identified in previous studies, carry information about many object and action categories encountered in daily life. We identify two subregions in each area: one that is selective for categories of man-made objects, and another that is selective for vehicles and locomotion-related action categories that appear in dynamic scenes. This consistent functional subdivision may reflect an anatomical organization into two processing streams, one biased toward static stimuli and one biased toward dynamic stimuli.


Subject(s)
Cerebral Cortex/physiology, Occipital Lobe/physiology, Parahippocampal Gyrus/physiology, Adult, Brain Mapping, Communication, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Models, Neurologic, Movement, Oxygen/blood, Photic Stimulation, Young Adult
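
A sketch of the cluster analysis step: voxelwise category tuning profiles within one region are clustered to recover functional subdomains (data are placeholders, not the study's):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical category tuning profiles for voxels in one scene-selective
# region (voxels x object/action categories), from a voxelwise model.
tuning = rng.standard_normal((800, 1700))

# Cluster the tuning profiles; k=2 corresponds to the two functional
# subdomains (static- vs dynamic-biased) reported above.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tuning)
```
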
19.
Front Syst Neurosci ; 10: 81, 2016.
Article in English | MEDLINE | ID: mdl-27781035

ABSTRACT

One crucial test for any quantitative model of the brain is to show that the model can be used to accurately decode information from evoked brain activity. Several recent neuroimaging studies have decoded the structure or semantic content of static visual images from human brain activity. Here we present a decoding algorithm that makes it possible to decode detailed information about the object and action categories present in natural movies from human brain activity signals measured by functional MRI. Decoding is accomplished using a hierarchical logistic regression (HLR) model that is based on labels that were manually assigned from the WordNet semantic taxonomy. This model makes it possible to simultaneously decode information about both specific and general categories, while respecting the relationships between them. Our results show that we can decode the presence of many object and action categories from averaged blood-oxygen level-dependent (BOLD) responses with a high degree of accuracy (area under the ROC curve > 0.9). Furthermore, we used this framework to test whether semantic relationships defined in the WordNet taxonomy are represented the same way in the human brain. This analysis showed that hierarchical relationships between general categories and atypical examples, such as organism and plant, did not seem to be reflected in representations measured by BOLD fMRI.
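
As a stand-in for the paper's hierarchical logistic regression, the sketch below decodes a single category with plain logistic regression and scores it by area under the ROC curve (placeholder data; the real HLR additionally ties each category's classifier to its WordNet parents so specific and general categories stay consistent):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Averaged BOLD responses (time points x voxels) and binary labels marking
# whether one WordNet category (e.g., "vehicle") is present in the movie.
bold_tr, bold_te = rng.standard_normal((2000, 5000)), rng.standard_normal((500, 5000))
labels_tr = rng.integers(0, 2, 2000)
labels_te = rng.integers(0, 2, 500)

clf = LogisticRegression(max_iter=1000).fit(bold_tr, labels_tr)
auc = roc_auc_score(labels_te, clf.predict_proba(bold_te)[:, 1])
print(f"decoding AUC: {auc:.2f}")  # the paper reports AUC > 0.9 for many categories
```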

20.
Nature ; 532(7600): 453-8, 2016 Apr 28.
Article in English | MEDLINE | ID: mdl-27121839

ABSTRACT

The meaning of language is represented in regions of the cerebral cortex collectively known as the 'semantic system'. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modelling of functional MRI (fMRI) data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals. We then use a novel generative model to create a detailed semantic atlas. Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area. This study demonstrates that data-driven methods, commonplace in studies of human neuroanatomy and functional connectivity, provide a powerful and efficient means for mapping functional representations in the brain.


Subject(s)
Brain Mapping, Cerebral Cortex/anatomy & histology, Cerebral Cortex/physiology, Semantics, Speech, Adult, Auditory Perception, Female, Humans, Magnetic Resonance Imaging, Male, Narration, Principal Component Analysis, Reproducibility of Results