1.
PLoS Biol ; 21(12): e3002366, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38091351

ABSTRACT

Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: Middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.
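
As an illustration of the kind of model-brain prediction analysis summarized above (a minimal sketch, not the authors' exact pipeline; array shapes and variable names are assumed), voxel responses to a set of sounds can be predicted from the unit activations of a single model stage with cross-validated ridge regression, and the resulting accuracy compared across stages and cortical regions:

    # Sketch of cross-validated model-to-brain prediction (illustrative only).
    # `activations`: (n_sounds, n_units) from one model stage;
    # `voxels`: (n_sounds, n_voxels) fMRI responses to the same sounds.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import KFold

    def stage_prediction_score(activations, voxels, n_splits=5, seed=0):
        """Median cross-validated Pearson r across voxels for one model stage."""
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
        r_sum = np.zeros(voxels.shape[1])
        n_folds = np.zeros(voxels.shape[1])
        for train, test in kf.split(activations):
            reg = RidgeCV(alphas=np.logspace(-3, 5, 9)).fit(activations[train], voxels[train])
            pred = reg.predict(activations[test])
            for v in range(voxels.shape[1]):
                r = np.corrcoef(pred[:, v], voxels[test, v])[0, 1]
                if np.isfinite(r):
                    r_sum[v] += r
                    n_folds[v] += 1
        return np.median(r_sum / np.maximum(n_folds, 1))

Comparing this score across model stages and regions of interest yields the stage-to-region correspondence referred to above (middle stages versus primary auditory cortex, deep stages versus non-primary cortex).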


Subjects
Auditory Cortex, Neural Networks (Computer), Brain, Hearing, Auditory Perception/physiology, Noise, Auditory Cortex/physiology
2.
bioRxiv ; 2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36711687

ABSTRACT

Human cortical responses to natural sounds, measured with fMRI, can be approximated as the weighted sum of a small number of canonical response patterns (components), each having interpretable functional and anatomical properties. Here, we asked whether this organization is preserved in cases where only one temporal lobe is available due to early brain damage by investigating a unique family: one sibling born without a left temporal lobe, another without a right temporal lobe, and a third anatomically neurotypical. We analyzed fMRI responses to diverse natural sounds within the intact hemispheres of these individuals and compared them to 12 neurotypical participants. All siblings manifested the neurotypical auditory responses in their intact hemispheres. These results suggest that the development of the auditory cortex in each hemisphere does not depend on the existence of the other hemisphere, highlighting the redundancy and equipotentiality of the bilateral auditory system.
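
The component decomposition referred to above can be illustrated in simplified form (the published work uses its own decomposition algorithm; non-negative matrix factorization is only a stand-in here, and the variable names are assumed):

    # Approximate a (n_sounds, n_voxels) response matrix as a weighted sum of a
    # few canonical components (simplified illustration; responses assumed non-negative).
    import numpy as np
    from sklearn.decomposition import NMF

    def decompose(responses, n_components=6, seed=0):
        nmf = NMF(n_components=n_components, init="nndsvda", random_state=seed, max_iter=1000)
        profiles = nmf.fit_transform(responses)  # (n_sounds, n_components): each component's response to each sound
        weights = nmf.components_                # (n_components, n_voxels): each voxel's loading on each component
        return profiles, weights, profiles @ weights  # the product approximates the original responses

Under this view, asking whether the organization is preserved in a single intact hemisphere amounts to asking whether the same component response profiles, with similar anatomical weight maps, emerge from that hemisphere's data.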

4.
Curr Biol ; 32(7): 1470-1484.e12, 2022 04 11.
Article in English | MEDLINE | ID: mdl-35196507

ABSTRACT

How is music represented in the brain? While neuroimaging has revealed some spatial segregation between responses to music versus other sounds, little is known about the neural code for music itself. To address this question, we developed a method to infer canonical response components of human auditory cortex using intracranial responses to natural sounds, and further used the superior coverage of fMRI to map their spatial distribution. The inferred components replicated many prior findings, including distinct neural selectivity for speech and music, but also revealed a novel component that responded nearly exclusively to music with singing. Song selectivity was not explainable by standard acoustic features, was located near speech- and music-selective responses, and was also evident in individual electrodes. These results suggest that representations of music are fractionated into subpopulations selective for different types of music, one of which is specialized for the analysis of song.
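
A minimal sketch of the fMRI mapping step described above (variable names assumed): given component response profiles inferred from the intracranial data, each voxel's component weights can be estimated by least squares, producing whole-brain weight maps for each component, including the song-selective one.

    # Map intracranially derived components onto fMRI voxels (illustrative sketch).
    import numpy as np

    def map_components_to_voxels(component_profiles, voxel_responses):
        """component_profiles: (n_sounds, n_components); voxel_responses: (n_sounds, n_voxels).
        Returns (n_components, n_voxels) voxel weight maps."""
        weights, *_ = np.linalg.lstsq(component_profiles, voxel_responses, rcond=None)
        return weights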


Subjects
Auditory Cortex, Music, Speech Perception, Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping/methods, Humans, Speech/physiology, Speech Perception/physiology
5.
J Neurophysiol ; 125(6): 2237-2263, 2021 06 01.
Article in English | MEDLINE | ID: mdl-33596723

ABSTRACT

Recent work has shown that human auditory cortex contains neural populations anterior and posterior to primary auditory cortex that respond selectively to music. However, it is unknown how this selectivity for music arises. To test whether musical training is necessary, we measured fMRI responses to 192 natural sounds in 10 people with almost no musical training. When voxel responses were decomposed into underlying components, this group exhibited a music-selective component that was very similar in response profile and anatomical distribution to that previously seen in individuals with moderate musical training. We also found that musical genres that were less familiar to our participants (e.g., Balinese gamelan) produced strong responses within the music component, as did drum clips with rhythm but little melody, suggesting that these neural populations are broadly responsive to music as a whole. Our findings demonstrate that the signature properties of neural music selectivity do not require musical training to develop, showing that the music-selective neural populations are a fundamental and widespread property of the human brain. NEW & NOTEWORTHY: We show that music-selective neural populations are clearly present in people without musical training, demonstrating that they are a fundamental and widespread property of the human brain. Additionally, we show that music-selective neural populations respond strongly to music from unfamiliar genres as well as music with rhythm but little pitch information, suggesting that they are broadly responsive to music as a whole.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping, Music, Practice (Psychology), Adult, Brain Mapping/methods, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
6.
Nat Commun ; 9(1): 2122, 2018 05 29.
Article in English | MEDLINE | ID: mdl-29844313

ABSTRACT

The "cocktail party problem" requires us to discern individual sound sources from mixtures of sources. The brain must use knowledge of natural sound regularities for this purpose. One much-discussed regularity is the tendency for frequencies to be harmonically related (integer multiples of a fundamental frequency). To test the role of harmonicity in real-world sound segregation, we developed speech analysis/synthesis tools to perturb the carrier frequencies of speech, disrupting harmonic frequency relations while maintaining the spectrotemporal envelope that determines phonemic content. We find that violations of harmonicity cause individual frequencies of speech to segregate from each other, impair the intelligibility of concurrent utterances despite leaving intelligibility of single utterances intact, and cause listeners to lose track of target talkers. However, additional segregation deficits result from replacing harmonic frequencies with noise (simulating whispering), suggesting additional grouping cues enabled by voiced speech excitation. Our results demonstrate acoustic grouping cues in real-world sound segregation.


Subjects
Sound Localization/physiology, Sound Spectrography/methods, Speech Acoustics, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Cues, Humans, Noise
7.
J Acoust Soc Am ; 140(1): 8, 2016 07.
Article in English | MEDLINE | ID: mdl-27475128

ABSTRACT

When talkers speak in masking sounds, their speech undergoes a variety of acoustic and phonetic changes. These changes are known collectively as the Lombard effect. Most behavioural and neuroimaging research in this area has concentrated on the effect of energetic maskers such as white noise on Lombard speech. Previous fMRI studies have argued that neural responses to speaking in noise are driven by the quality of auditory feedback, that is, the audibility of the speaker's voice over the masker. However, we also frequently produce speech in the presence of informational maskers such as another talker. Here, speakers read sentences over a range of maskers varying in their informational and energetic content: speech, rotated speech, speech-modulated noise, and white noise. Subjects also spoke in quiet and listened to the maskers without speaking. When subjects spoke in masking sounds, their vocal intensity increased in line with the energetic content of the masker. However, the opposite pattern was found neurally. In the superior temporal gyrus, activation was most strongly associated with increases in informational, rather than energetic, masking. This suggests that the neural activations associated with speaking in noise are more complex than a simple feedback response.


Subjects
Perceptual Masking/physiology, Speech Production Measurement, Speech/physiology, Diffusion Magnetic Resonance Imaging, Humans, Noise, Phonetics
9.
Cereb Cortex ; 25(11): 4638-50, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26092220

ABSTRACT

Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual-motor interactions for processing heard and internally generated auditory information.
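
The representational-similarity logic described above can be sketched as follows (the specificity index and variable names are assumptions for illustration, not the paper's exact definitions):

    # Does pattern distinctness in a region (e.g., SMA) track imagery vividness across participants?
    import numpy as np
    from scipy.stats import pearsonr

    def representational_specificity(patterns):
        """patterns: (n_sound_types, n_voxels). Mean correlation distance between sound-type patterns."""
        r = np.corrcoef(patterns)                     # pairwise pattern similarity
        off_diag = r[~np.eye(len(r), dtype=bool)]
        return 1.0 - off_diag.mean()                  # higher = more distinct representations

    def brain_behaviour_link(patterns_per_subject, vividness):
        spec = [representational_specificity(p) for p in patterns_per_subject]
        return pearsonr(spec, vividness)              # r and p for the specificity-vividness association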


Subjects
Auditory Perception/physiology, Cerebral Cortex/anatomy & histology, Cerebral Cortex/physiology, Imagination/physiology, Individuality, Noise, Acoustic Stimulation, Adult, Aged, Aged (80 and over), Brain Mapping, Female, Humans, Image Processing (Computer-Assisted), Magnetic Resonance Imaging, Male, Middle Aged, Neural Pathways/blood supply, Neural Pathways/physiology, Oxygen/blood, Regression Analysis, Young Adult
10.
J Acoust Soc Am ; 137(1): 378-87, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25618067

ABSTRACT

There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational masking. This experiment extends existing research by using multiple maskers that vary in their informational content and similarity to speech, in order to examine differences in perception of masked speech between trained musicians (n = 25) and non-musicians (n = 25). Although musicians outperformed non-musicians on a measure of frequency discrimination, they showed no advantage in perceiving masked speech. Further analysis revealed that non-verbal IQ, rather than musicianship, significantly predicted speech reception thresholds in noise. The results strongly suggest that the contribution of general cognitive abilities needs to be taken into account in any investigations of individual variability in perceiving speech in noise.
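
The regression logic described above can be sketched as follows (column names are assumptions for illustration): test whether musicianship still predicts speech reception thresholds once non-verbal IQ is included as a covariate.

    # Compare musicianship and non-verbal IQ as predictors of speech reception thresholds (SRTs).
    import pandas as pd
    import statsmodels.formula.api as smf

    def compare_predictors(df: pd.DataFrame):
        """df columns: 'srt' (dB SNR), 'musician' (0/1), 'nonverbal_iq'."""
        musician_only = smf.ols("srt ~ musician", data=df).fit()
        with_iq = smf.ols("srt ~ musician + nonverbal_iq", data=df).fit()
        return musician_only, with_iq  # inspect .summary() of each fitted model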


Subjects
Music, Perceptual Masking/physiology, Speech Intelligibility/physiology, Speech Perception/physiology, Adult, Attention, Auditory Threshold/physiology, Female, Humans, Intelligence, Male, Noise, Occupations, Pitch Discrimination/physiology, Psychoacoustics, Psychomotor Performance, Signal-to-Noise Ratio, Stroop Test, Surveys and Questionnaires, Time Perception/physiology, Wechsler Scales, Young Adult