Results 1 - 20 of 146
1.
Annu Rev Neurosci ; 47(1): 277-301, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38669478

ABSTRACT

It has long been argued that only humans could produce and understand language. But now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the new purchase LMs are providing on the question of how language is implemented in the brain. We discuss why, a priori, LMs might be expected to share similarities with the human language system. We then summarize evidence that LMs represent linguistic information similarly enough to humans to enable relatively accurate brain encoding and decoding during language processing. Finally, we examine which LM properties (their architecture, task performance, or training) are critical for capturing human neural responses to language and review studies using LMs as in silico model organisms for testing hypotheses about language. These ongoing investigations bring us closer to understanding the representations and processes that underlie our ability to comprehend sentences and express thoughts in language.


Subjects
Brain; Language; Humans; Brain/physiology; Animals; Artificial Intelligence; Models, Neurological
2.
Proc Natl Acad Sci U S A ; 120(32): e2220642120, 2023 08 08.
Article in English | MEDLINE | ID: mdl-37523537

ABSTRACT

Human face recognition is highly accurate and exhibits a number of distinctive and well-documented behavioral "signatures" such as the use of a characteristic representational space, the disproportionate performance cost when stimuli are presented upside down, and the drop in accuracy for faces from races the participant is less familiar with. These and other phenomena have long been taken as evidence that face recognition is "special". But why does human face perception exhibit these properties in the first place? Here, we use deep convolutional neural networks (CNNs) to test the hypothesis that all of these signatures of human face perception result from optimization for the task of face recognition. Indeed, as predicted by this hypothesis, these phenomena are all found in CNNs trained on face recognition, but not in CNNs trained on object recognition, even when additionally trained to detect faces while matching the amount of face experience. To test whether these signatures are in principle specific to faces, we optimized a CNN on car discrimination and tested it on upright and inverted car images. As we found for face perception, the car-trained network showed a drop in performance for inverted vs. upright cars. Similarly, CNNs trained on inverted faces produced an inverted face inversion effect. These findings show that the behavioral signatures of human face perception are well explained as the result of optimization for the task of face recognition, and that the nature of the computations underlying this task may not be so special after all.


Subjects
Facial Recognition; Humans; Face; Visual Perception; Orientation, Spatial; Automobiles; Pattern Recognition, Visual
3.
Proc Natl Acad Sci U S A ; 118(45)2021 11 09.
Article in English | MEDLINE | ID: mdl-34737231

ABSTRACT

The neuroscience of perception has recently been revolutionized with an integrative modeling approach in which computation, brain function, and behavior are linked across many datasets and many computational models. By revealing trends across models, this approach yields novel insights into cognitive and neural mechanisms in the target domain. We here present a systematic study taking this approach to higher-level cognition: human language processing, our species' signature cognitive skill. We find that the most powerful "transformer" models predict nearly 100% of explainable variance in neural responses to sentences and generalize across different datasets and imaging modalities (functional MRI and electrocorticography). Models' neural fits ("brain score") and fits to behavioral responses are both strongly correlated with model accuracy on the next-word prediction task (but not other language tasks). Model architecture appears to substantially contribute to neural fit. These results provide computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the human brain.


Subjects
Brain/physiology; Language; Models, Neurological; Neural Networks, Computer; Humans
4.
Dev Sci ; 26(5): e13387, 2023 09.
Article in English | MEDLINE | ID: mdl-36951215

ABSTRACT

Prior studies have observed selective neural responses in the adult human auditory cortex to music and speech that cannot be explained by the differing lower-level acoustic properties of these stimuli. Does infant cortex exhibit similarly selective responses to music and speech shortly after birth? To answer this question, we attempted to collect functional magnetic resonance imaging (fMRI) data from 45 sleeping infants (2.0 to 11.9 weeks old) while they listened to monophonic instrumental lullabies and infant-directed speech produced by a mother. To match acoustic variation between music and speech sounds we (1) recorded music from instruments that had a similar spectral range as female infant-directed speech, (2) used a novel excitation-matching algorithm to match the cochleagrams of music and speech stimuli, and (3) synthesized "model-matched" stimuli that were matched in spectrotemporal modulation statistics to (yet perceptually distinct from) music or speech. Of the 36 infants from whom we collected usable data, 19 had significant activations to sounds overall compared to scanner noise. In these infants, we observed a set of voxels in non-primary auditory cortex (NPAC), but not in Heschl's Gyrus, that responded significantly more to music than to each of the other three stimulus types (but not significantly more strongly than to the background scanner noise). In contrast, our planned analyses did not reveal voxels in NPAC that responded more to speech than to model-matched speech, although other unplanned analyses did. These preliminary findings suggest that music selectivity arises within the first month of life. A video abstract of this article can be viewed at https://youtu.be/c8IGFvzxudk.

RESEARCH HIGHLIGHTS: Responses to music, speech, and control sounds matched for the spectrotemporal modulation statistics of each sound were measured from 2- to 11-week-old sleeping infants using fMRI. Auditory cortex was significantly activated by these stimuli in 19 out of 36 sleeping infants. Selective responses to music compared to the three other stimulus classes were found in non-primary auditory cortex but not in nearby Heschl's Gyrus. Selective responses to speech were not observed in planned analyses but were observed in unplanned, exploratory analyses.


Subjects
Auditory Cortex; Music; Speech Perception; Adult; Humans; Infant; Female; Acoustic Stimulation; Auditory Perception/physiology; Auditory Cortex/physiology; Noise; Magnetic Resonance Imaging; Speech Perception/physiology
5.
Proc Natl Acad Sci U S A ; 117(37): 23011-23020, 2020 09 15.
Article in English | MEDLINE | ID: mdl-32839334

ABSTRACT

The fusiform face area responds selectively to faces and is causally involved in face perception. How does face-selectivity in the fusiform arise in development, and why does it develop so systematically in the same location across individuals? Preferential cortical responses to faces develop early in infancy, yet evidence is conflicting on the central question of whether visual experience with faces is necessary. Here, we revisit this question by scanning congenitally blind individuals with fMRI while they haptically explored 3D-printed faces and other stimuli. We found robust face-selective responses in the lateral fusiform gyrus of individual blind participants during haptic exploration of stimuli, indicating that neither visual experience with faces nor fovea-biased inputs is necessary for face-selectivity to arise in the lateral fusiform gyrus. Our results instead suggest a role for long-range connectivity in specifying the location of face-selectivity in the human brain.


Subjects
Face/physiology; Facial Recognition/physiology; Temporal Lobe/physiology; Visual Perception/physiology; Adult; Brain Mapping/methods; Female; Humans; Magnetic Resonance Imaging/methods; Male; Pattern Recognition, Visual/physiology; Photic Stimulation/methods; Recognition, Psychology/physiology
6.
Hum Brain Mapp ; 43(9): 2782-2800, 2022 06 15.
Article in English | MEDLINE | ID: mdl-35274789

ABSTRACT

Scanning young children while they watch short, engaging, commercially-produced movies has emerged as a promising approach for increasing data retention and quality. Movie stimuli also evoke a richer variety of cognitive processes than traditional experiments, allowing the study of multiple aspects of brain development simultaneously. However, because these stimuli are uncontrolled, it is unclear how effectively distinct profiles of brain activity can be distinguished from the resulting data. Here we develop an approach for identifying multiple distinct subject-specific Regions of Interest (ssROIs) using fMRI data collected during movie-viewing. We focused on the test case of higher-level visual regions selective for faces, scenes, and objects. Adults (N = 13) were scanned while viewing a 5.6-min child-friendly movie, as well as a traditional localizer experiment with blocks of faces, scenes, and objects. We found that just 2.7 min of movie data could identify subject-specific face, scene, and object regions. While successful, movie-defined ssROIs still showed weaker domain selectivity than traditional ssROIs. Having validated our approach in adults, we then used the same methods on movie data collected from 3- to 12-year-old children (N = 122). Movie response timecourses in 3-year-old children's face, scene, and object regions were already significantly and specifically predicted by timecourses from the corresponding regions in adults. We also found evidence of continued developmental change, particularly in the face-selective posterior superior temporal sulcus. Taken together, our results reveal both early maturity and functional change in face, scene, and object regions, and more broadly highlight the promise of short, child-friendly movies for developmental cognitive neuroscience.


Subjects
Brain Mapping; Motion Pictures; Retention, Psychology; Adult; Brain Mapping/methods; Child; Child, Preschool; Humans; Magnetic Resonance Imaging/methods; Pattern Recognition, Visual/physiology; Photic Stimulation/methods; Temporal Lobe/diagnostic imaging; Temporal Lobe/physiology
7.
J Neurophysiol ; 125(6): 2237-2263, 2021 06 01.
Article in English | MEDLINE | ID: mdl-33596723

ABSTRACT

Recent work has shown that human auditory cortex contains neural populations anterior and posterior to primary auditory cortex that respond selectively to music. However, it is unknown how this selectivity for music arises. To test whether musical training is necessary, we measured fMRI responses to 192 natural sounds in 10 people with almost no musical training. When voxel responses were decomposed into underlying components, this group exhibited a music-selective component that was very similar in response profile and anatomical distribution to that previously seen in individuals with moderate musical training. We also found that musical genres that were less familiar to our participants (e.g., Balinese gamelan) produced strong responses within the music component, as did drum clips with rhythm but little melody, suggesting that these neural populations are broadly responsive to music as a whole. Our findings demonstrate that the signature properties of neural music selectivity do not require musical training to develop, showing that the music-selective neural populations are a fundamental and widespread property of the human brain.

NEW & NOTEWORTHY: We show that music-selective neural populations are clearly present in people without musical training, demonstrating that they are a fundamental and widespread property of the human brain. Additionally, we show music-selective neural populations respond strongly to music from unfamiliar genres as well as music with rhythm but little pitch information, suggesting that they are broadly responsive to music as a whole.


Subjects
Auditory Cortex/physiology; Auditory Perception/physiology; Brain Mapping; Music; Practice, Psychological; Adult; Brain Mapping/methods; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult
8.
Magn Reson Med ; 86(3): 1773-1785, 2021 09.
Article in English | MEDLINE | ID: mdl-33829546

ABSTRACT

PURPOSE: Functional magnetic resonance imaging (fMRI) during infancy poses challenges due to practical, methodological, and analytical considerations. The aim of this study was to implement a hardware-related approach to increase subject compliance for fMRI involving awake infants. To accomplish this, we designed, constructed, and evaluated an adaptive 32-channel array coil.

METHODS: To allow imaging with a close-fitting head array coil for infants aged 1-18 months, an adjustable head coil concept was developed. The coil setup facilitates a half-seated scanning position to improve the infant's overall scan compliance. Earmuff compartments are integrated directly into the coil housing to enable the usage of sound protection without losing a snug fit of the coil around the infant's head. The constructed array coil was evaluated from phantom data using bench-level metrics, signal-to-noise ratio (SNR) performances, and accelerated imaging capabilities for both in-plane and simultaneous multislice (SMS) reconstruction methodologies. Furthermore, preliminary fMRI data were acquired to evaluate the in vivo coil performance.

RESULTS: Phantom data showed a 2.7-fold SNR increase on average when compared with a commercially available 32-channel head coil. At the center and periphery regions of the infant head phantom, the SNR gains were measured to be 1.25-fold and 3-fold, respectively. The infant coil further showed favorable encoding capabilities for undersampled k-space reconstruction methods and SMS techniques.

CONCLUSIONS: An infant-friendly head coil array was developed to improve sensitivity, spatial resolution, accelerated encoding, motion insensitivity, and subject tolerance in pediatric MRI. The adaptive 32-channel array coil is well-suited for fMRI acquisitions in awake infants.


Subjects
Magnetic Resonance Imaging; Wakefulness; Child; Humans; Infant; Neuroimaging; Phantoms, Imaging; Signal-To-Noise Ratio
9.
Neuroimage ; 215: 116844, 2020 07 15.
Article in English | MEDLINE | ID: mdl-32302763

ABSTRACT

The ability to perceive others' social interactions, here defined as the directed contingent actions between two or more people, is a fundamental part of human experience that develops early in infancy and is shared with other primates. However, the neural computations underlying this ability remain largely unknown. Is social interaction recognition a rapid feedforward process or a slower post-perceptual inference? Here we used magnetoencephalography (MEG) decoding to address this question. Subjects in the MEG viewed snapshots of visually matched real-world scenes containing a pair of people who were either engaged in a social interaction or acting independently. The presence versus absence of a social interaction could be read out from subjects' MEG data spontaneously, even while subjects performed an orthogonal task. This readout generalized across different people and scenes, revealing abstract representations of social interactions in the human brain. These representations, however, did not come online until quite late, at 300 ms after image onset, well after feedforward visual processes. In a second experiment, we found that social interaction readout still occurred at this same late latency even when subjects performed an explicit task detecting social interactions. We further showed that MEG responses distinguished between different types of social interactions (mutual gaze vs. joint attention) even later, around 500 ms after image onset. Taken together, these results suggest that the human brain spontaneously extracts information about others' social interactions, but does so slowly, likely relying on iterative top-down computations.


Subjects
Brain/physiology; Magnetoencephalography/methods; Reaction Time/physiology; Social Interaction; Social Perception/psychology; Visual Perception/physiology; Adolescent; Adult; Female; Humans; Male; Middle Aged; Photic Stimulation/methods; Young Adult
10.
Neuroimage ; 221: 117191, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32711066

ABSTRACT

Facial and vocal cues provide critical social information about other humans, including their emotional and attentional states and the content of their speech. Recent work has shown that the face-responsive region of posterior superior temporal sulcus ("fSTS") also responds strongly to vocal sounds. Here, we investigate the functional role of this region and the broader STS by measuring responses to a range of face movements, vocal sounds, and hand movements using fMRI. We find that the fSTS responds broadly to different types of audio and visual face action, including both richly social communicative actions, as well as minimally social noncommunicative actions, ruling out hypotheses of specialization for processing speech signals, or communicative signals more generally. Strikingly, however, responses to hand movements were very low, whether communicative or not, indicating a specific role in the analysis of face actions (facial and vocal), not a general role in the perception of any human action. Furthermore, spatial patterns of response in this region were able to decode communicative from noncommunicative face actions, both within and across modality (facial/vocal cues), indicating sensitivity to an abstract social dimension. These functional properties of the fSTS contrast with a region of middle STS that has a selective, largely unimodal auditory response to speech sounds over both communicative and noncommunicative vocal nonspeech sounds, and nonvocal sounds. Region of interest analyses were corroborated by a data-driven independent component analysis, identifying face-voice and auditory speech responses as dominant sources of voxelwise variance across the STS. These results suggest that the STS contains separate processing streams for the audiovisual analysis of face actions and auditory speech processing.


Subjects
Auditory Perception/physiology; Brain Mapping; Cues; Facial Expression; Facial Recognition/physiology; Nonverbal Communication/physiology; Social Perception; Temporal Lobe/physiology; Adolescent; Adult; Gestures; Hand/physiology; Humans; Magnetic Resonance Imaging; Speech Perception/physiology; Young Adult