Results 1 - 7 of 7
1.
PLoS Biol ; 21(12): e3002366, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38091351

ABSTRACT

Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: Middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.
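The core analysis this abstract describes, predicting fMRI responses from a model stage's activations, is typically done with regularized linear regression evaluated on held-out stimuli. The sketch below is a minimal illustration of that idea on synthetic data; the dimensions, ridge penalty, and data are all assumptions, not the paper's actual pipeline.

```python
# Minimal encoding-model sketch: ridge regression from model-stage
# activations to simulated voxel responses, scored on held-out stimuli.
# All data here are synthetic stand-ins.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)
n_stim, n_feat, n_vox = 120, 50, 10

activations = rng.standard_normal((n_stim, n_feat))   # model-stage outputs per stimulus
weights_true = rng.standard_normal((n_feat, n_vox))
responses = activations @ weights_true + 0.5 * rng.standard_normal((n_stim, n_vox))

train, test = slice(0, 100), slice(100, None)
X, Y = activations[train], responses[train]
lam = 1.0                                             # ridge penalty (assumed)
W = solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)    # closed-form ridge solution

pred = activations[test] @ W
# per-voxel Pearson r between predicted and measured held-out responses
r = [np.corrcoef(pred[:, v], responses[test][:, v])[0, 1] for v in range(n_vox)]
print(f"median held-out r = {np.median(r):.2f}")
```

In this framing, "model-brain correspondence" across stages amounts to repeating the fit with activations from each stage and asking which stage best predicts each region.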


Subject(s)
Auditory Cortex , Neural Networks, Computer , Brain , Hearing , Auditory Perception/physiology , Noise , Auditory Cortex/physiology
2.
Nat Neurosci ; 26(11): 2017-2034, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37845543

ABSTRACT

Deep neural network models of sensory systems are often proposed to learn representational transformations with invariances like those in the brain. To reveal these invariances, we generated 'model metamers', stimuli whose activations within a model stage are matched to those of a natural stimulus. Metamers for state-of-the-art supervised and unsupervised neural network models of vision and audition were often completely unrecognizable to humans when generated from late model stages, suggesting differences between model and human invariances. Targeted model changes improved human recognizability of model metamers but did not eliminate the overall human-model discrepancy. The human recognizability of a model's metamers was well predicted by their recognizability by other models, suggesting that models contain idiosyncratic invariances in addition to those required by the task. Metamer recognizability dissociated from both traditional brain-based benchmarks and adversarial vulnerability, revealing a distinct failure mode of existing sensory models and providing a complementary benchmark for model assessment.
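The metamer idea above can be illustrated in a few lines: optimize a new input until its activations at a chosen model stage match those of a reference stimulus. The toy one-layer linear "stage" below is an illustrative assumption (the paper uses deep vision and audio networks); because the stage discards information, the matched input can differ substantially from the reference, which is exactly the metamer phenomenon.

```python
# Toy metamer generation: gradient descent on an input so that its
# activations at a (here, linear) model stage match a reference stimulus.
# The one-layer stage and random data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
dim_in, dim_stage = 64, 16
W = rng.standard_normal((dim_stage, dim_in)) / np.sqrt(dim_in)  # fixed "stage"

stage = lambda x: W @ x                  # activations at the matched stage
reference = rng.standard_normal(dim_in)
target = stage(reference)

metamer = rng.standard_normal(dim_in)    # start from noise
lr = 0.1
for _ in range(500):
    err = stage(metamer) - target
    metamer -= lr * (W.T @ err)          # gradient of 0.5 * ||err||^2

print("activation mismatch:", np.linalg.norm(stage(metamer) - target))
print("input difference:   ", np.linalg.norm(metamer - reference))
```

Because the stage maps 64 dimensions to 16, the activations match almost exactly while the inputs stay far apart: the discarded 48 dimensions are the stage's invariances, and a human judging the two inputs would be judging whether those invariances are shared.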


Subject(s)
Learning , Neural Networks, Computer , Humans , Brain
3.
Trends Cogn Sci ; 27(8): 699-701, 2023 08.
Article in English | MEDLINE | ID: mdl-37357063

ABSTRACT

Johnston and Fusi recently investigated the emergence of disentangled representations when a neural network was trained to perform multiple simultaneous tasks. Such experiments explore the benefits of flexible representations and add to a growing field of research investigating the representational geometry of artificial and biological neural networks.


Subject(s)
Neural Networks, Computer , Humans , Mathematics
5.
Curr Biol ; 32(7): 1470-1484.e12, 2022 04 11.
Article in English | MEDLINE | ID: mdl-35196507

ABSTRACT

How is music represented in the brain? While neuroimaging has revealed some spatial segregation between responses to music versus other sounds, little is known about the neural code for music itself. To address this question, we developed a method to infer canonical response components of human auditory cortex using intracranial responses to natural sounds, and further used the superior coverage of fMRI to map their spatial distribution. The inferred components replicated many prior findings, including distinct neural selectivity for speech and music, but also revealed a novel component that responded nearly exclusively to music with singing. Song selectivity was not explainable by standard acoustic features, was located near speech- and music-selective responses, and was also evident in individual electrodes. These results suggest that representations of music are fractionated into subpopulations selective for different types of music, one of which is specialized for the analysis of song.
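Inferring shared response components from a stimulus-by-recording-site response matrix is, at heart, a matrix factorization problem. The sketch below uses plain non-negative matrix factorization on synthetic data as a stand-in; the paper's actual component-inference method and data differ, so treat this purely as an illustration of the decomposition idea.

```python
# Toy component inference: factor a (stimulus x site) response matrix into
# a few shared components via multiplicative-update NMF. The method choice
# and synthetic data are illustrative assumptions, not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(2)
n_stim, n_sites, n_comp = 60, 30, 3

R_true = rng.random((n_stim, n_comp))    # component response profiles
W_true = rng.random((n_comp, n_sites))   # component weight at each site
data = R_true @ W_true                   # observed responses (noise-free toy)

R = rng.random((n_stim, n_comp)) + 0.1   # random nonnegative init
W = rng.random((n_comp, n_sites)) + 0.1
for _ in range(500):                     # Lee-Seung multiplicative updates
    W *= (R.T @ data) / (R.T @ R @ W + 1e-9)
    R *= (data @ W.T) / (R @ W @ W.T + 1e-9)

recon_err = np.linalg.norm(data - R @ W) / np.linalg.norm(data)
print(f"relative reconstruction error: {recon_err:.3f}")
```

Each recovered column of `R` is a candidate canonical response profile (e.g., a speech-, music-, or song-selective component), and the rows of `W` say how strongly each recording site expresses it, which is what the fMRI mapping step then localizes spatially.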


Subject(s)
Auditory Cortex , Music , Speech Perception , Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping/methods , Humans , Speech/physiology , Speech Perception/physiology
6.
Neuroimage ; 197: 565-574, 2019 08 15.
Article in English | MEDLINE | ID: mdl-31077844

ABSTRACT

Many studies have investigated the development of face-, scene-, and body-selective regions in the ventral visual pathway. This work has primarily focused on comparing the size and univariate selectivity of these neural regions in children versus adults. In contrast, very few studies have investigated the developmental trajectory of more distributed activation patterns within and across neural regions. Here, we scanned both children (ages 5-7) and adults to test the hypothesis that distributed representational patterns arise before category selectivity (for faces, bodies, or scenes) in the ventral pathway. Consistent with this hypothesis, we found mature representational patterns in several ventral pathway regions (e.g., FFA, PPA, etc.), even in children who showed no hint of univariate selectivity. These results suggest that representational patterns emerge first in each region, perhaps forming a scaffold upon which univariate category selectivity can subsequently develop. More generally, our findings demonstrate an important dissociation between category selectivity and distributed response patterns, and raise questions about the relative roles of each in development and adult cognition.
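The dissociation this abstract reports, mature distributed patterns without univariate selectivity, can be made concrete with a small simulation: a child region whose multi-voxel pattern already correlates with the adult pattern even though its mean response difference between categories is near zero. The data and the pattern-correlation measure below are illustrative assumptions, not the study's exact pipeline.

```python
# Illustrative dissociation: adult-like multi-voxel patterns can exist
# without univariate category selectivity. All data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_vox = 200
pattern = rng.standard_normal(n_vox)     # shared spatial pattern across ages

adult_face = pattern + 1.0               # pattern plus a strong mean boost
adult_scene = -pattern
child_face = pattern + 0.3 * rng.standard_normal(n_vox)    # pattern, no boost
child_scene = -pattern + 0.3 * rng.standard_normal(n_vox)

# univariate selectivity: mean face-vs-scene response difference (~0 in child)
univariate_child = child_face.mean() - child_scene.mean()
# distributed measure: correlation of child and adult face-scene pattern
pattern_corr = np.corrcoef(child_face - child_scene,
                           adult_face - adult_scene)[0, 1]
print(f"child univariate selectivity:    {univariate_child:.2f}")
print(f"child-adult pattern correlation: {pattern_corr:.2f}")
```

A univariate contrast would call this child region non-selective, while the pattern analysis reveals an adult-like representation, mirroring the scaffold interpretation in the abstract.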


Subject(s)
Child Development/physiology , Pattern Recognition, Visual/physiology , Visual Pathways , Adult , Child , Child, Preschool , Female , Humans , Magnetic Resonance Imaging , Male , Visual Pathways/growth & development , Visual Pathways/physiology
7.
Nat Neurosci ; 19(9): 1250-5, 2016 09.
Article in English | MEDLINE | ID: mdl-27500407

ABSTRACT

What determines the cortical location at which a given functionally specific region will arise in development? We tested the hypothesis that functionally specific regions develop in their characteristic locations because of pre-existing differences in the extrinsic connectivity of that region to the rest of the brain. We exploited the visual word form area (VWFA) as a test case, scanning children with diffusion and functional imaging at age 5, before they learned to read, and at age 8, after they learned to read. We found the VWFA developed functionally in this interval and that its location in a particular child at age 8 could be predicted from that child's connectivity fingerprints (but not functional responses) at age 5. These results suggest that early connectivity instructs the functional development of the VWFA, possibly reflecting a general mechanism of cortical development.
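The prediction step described above, using each cortical location's early connectivity fingerprint to predict its later functional fate, is logically a per-vertex classification problem. The sketch below uses a minimal logistic regression on synthetic fingerprints as a stand-in; the feature construction and classifier are assumptions, not the study's diffusion-imaging pipeline.

```python
# Hypothetical fingerprint-prediction sketch: classify which vertices will
# become category-selective later from their earlier connectivity profile.
# Synthetic data and a minimal logistic regression are stand-ins.
import numpy as np

rng = np.random.default_rng(4)
n_vert, n_targets = 400, 12
fingerprint = rng.standard_normal((n_vert, n_targets))   # "age-5" connectivity
w_true = rng.standard_normal(n_targets)
p = 1 / (1 + np.exp(-(fingerprint @ w_true)))
labels = (rng.random(n_vert) < p).astype(float)          # "age-8" membership

train, test = slice(0, 300), slice(300, None)
w = np.zeros(n_targets)
for _ in range(1000):                                    # logistic regression via GD
    pred = 1 / (1 + np.exp(-(fingerprint[train] @ w)))
    w -= 0.1 * fingerprint[train].T @ (pred - labels[train]) / 300

pred_test = 1 / (1 + np.exp(-(fingerprint[test] @ w))) > 0.5
acc = (pred_test == labels[test].astype(bool)).mean()
print(f"held-out accuracy: {acc:.2f}")
```

The study's key control maps onto this framing directly: the classifier succeeds from connectivity features measured before reading, whereas the same exercise with age-5 functional responses as features fails.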


Subject(s)
Functional Laterality/physiology , Neural Pathways/physiology , Reading , Visual Perception , Brain Mapping , Child , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Male , Task Performance and Analysis , Temporal Lobe/physiology