Results 1 - 20 of 69
1.
J Acoust Soc Am ; 153(1): 286, 2023 01.
Article in English | MEDLINE | ID: mdl-36732241

ABSTRACT

Speech recognition in noisy environments can be challenging and requires listeners to accurately segregate a target speaker from irrelevant background noise. Stochastic figure-ground (SFG) tasks, in which temporally coherent inharmonic pure tones must be identified against a background, have been used to probe the non-linguistic auditory stream segregation processes important for speech-in-noise processing. However, little is known about the relationship between performance on SFG tasks and speech-in-noise tasks, or about the individual differences that may modulate such relationships. In this study, 37 younger normal-hearing adults performed an SFG task with target figure chords consisting of four, six, eight, or ten temporally coherent tones amongst a background of randomly varying tones. Stimuli were designed to be spectrally and temporally flat. An increased number of temporally coherent tones resulted in higher accuracy and faster reaction times (RTs). For ten target tones, faster RTs were associated with better scores on the Quick Speech-in-Noise task. Individual differences in working memory capacity and self-reported musicianship further modulated these relationships. Overall, the results demonstrate that the SFG task can serve as an assessment of auditory stream segregation accuracy and RT that is sensitive to individual differences in cognitive and auditory abilities, even among younger normal-hearing adults.
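The stimulus design described in this abstract — a tone cloud in which only a repeated, temporally coherent set of frequencies distinguishes the "figure" from the random background — can be sketched in a few lines. This is an illustrative toy (parameter values and the frequency pool are assumptions, not taken from the study):

```python
import numpy as np

def make_sfg_stimulus(n_chords=20, n_bg_tones=10, n_fig_tones=8,
                      freq_pool=None, rng=None):
    """Toy stochastic figure-ground (SFG) tone cloud.

    Each chord contains random background frequencies; the 'figure'
    is a fixed set of frequencies repeated (i.e., temporally coherent)
    across all chords. Returns per-chord frequency arrays plus the
    figure frequencies themselves.
    """
    rng = np.random.default_rng(rng)
    if freq_pool is None:
        # log-spaced frequency pool -- a common choice, assumed here
        freq_pool = np.geomspace(200.0, 7200.0, 129)
    figure = rng.choice(freq_pool, size=n_fig_tones, replace=False)
    chords = []
    for _ in range(n_chords):
        bg = rng.choice(freq_pool, size=n_bg_tones, replace=False)
        chords.append(np.concatenate([figure, bg]))
    return chords, figure

chords, figure = make_sfg_stimulus(rng=0)
# the figure frequencies recur in every chord; the background does not
assert all(np.isin(figure, chord).all() for chord in chords)
```

Varying `n_fig_tones` (four to ten in the study) is what makes the figure easier or harder to segregate.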


Subjects
Memory, Short-Term; Speech Perception; Adult; Humans; Speech; Individuality; Audiometry, Pure-Tone
2.
J Neurosci ; 41(35): 7449-7460, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34341154

ABSTRACT

During music listening, humans routinely acquire the regularities of the acoustic sequences and use them to anticipate and interpret the ongoing melody. Specifically, in line with this predictive framework, it is thought that brain responses during such listening reflect a comparison between the bottom-up sensory responses and top-down prediction signals generated by an internal model that embodies the music exposure and expectations of the listener. To attain a clear view of these predictive responses, previous work has eliminated the sensory inputs by inserting artificial silences (or sound omissions) that leave behind only the corresponding predictions of the thwarted expectations. Here, we demonstrate a new alternative approach in which we decode the predictive electroencephalography (EEG) responses to the silent intervals that are naturally interspersed within the music. We did this as participants (experiment 1, 20 participants, 10 female; experiment 2, 21 participants, 6 female) listened to or imagined Bach piano melodies. Prediction signals were quantified and assessed via a computational model of the melodic structure of the music and were shown to exhibit the same response characteristics when measured during listening or imagining. These include an inverted polarity for both silence and imagined responses relative to listening, as well as response magnitude modulations that precisely reflect the expectations of notes and silences in both listening and imagery conditions. These findings therefore provide a unifying view that links results from many previous paradigms, including omission reactions and the expectation modulation of sensory responses, all in the context of naturalistic music listening.

SIGNIFICANCE STATEMENT: Music perception depends on our ability to learn and detect melodic structures. It has been suggested that our brain does so by actively predicting upcoming music notes, a process inducing instantaneous neural responses as the music confronts these expectations. Here, we studied this prediction process using EEG recorded while participants listened to and imagined Bach melodies. Specifically, we examined neural signals during the ubiquitous musical pauses (or silent intervals) in a music stream and analyzed them in contrast to the imagery responses. We find that imagined predictive responses are routinely co-opted during ongoing music listening. These conclusions are revealed by a new paradigm using listening and imagery of naturalistic melodies.


Subjects
Auditory Perception/physiology; Brain Mapping; Cerebral Cortex/physiology; Imagination/physiology; Motivation/physiology; Music/psychology; Acoustic Stimulation; Adult; Electroencephalography; Evoked Potentials/physiology; Evoked Potentials, Auditory/physiology; Female; Humans; Learning/physiology; Male; Markov Chains; Occupations; Young Adult
3.
J Neurosci ; 41(35): 7435-7448, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34341155

ABSTRACT

Musical imagery is the voluntary internal hearing of music in the mind without the need for physical action or external stimulation. Numerous studies have already revealed brain areas activated during imagery. However, it remains unclear to what extent imagined music responses preserve the detailed temporal dynamics of the acoustic stimulus envelope and, crucially, whether melodic expectations play any role in modulating responses to imagined music, as they prominently do during listening. These modulations are important as they reflect aspects of the human musical experience, such as its acquisition, engagement, and enjoyment. This study explored the nature of these modulations in imagined music based on EEG recordings from 21 professional musicians (6 female and 15 male). Regression analyses demonstrated that imagined neural signals can be predicted accurately, as in the listening task, and are sufficiently robust to allow accurate identification of the imagined musical piece from the EEG. In doing so, our results indicate that imagery and listening tasks elicited an overlapping but distinct topography of neural responses to sound acoustics, in line with the previous fMRI literature. Melodic expectation, however, evoked very similar frontal spatial activation in both conditions, suggesting that both are supported by the same underlying mechanisms. Finally, neural responses induced by imagery exhibited a specific transformation relative to the listening condition, consisting primarily of a relative delay and a polarity inversion of the response. This transformation demonstrates the top-down predictive nature of the expectation mechanisms arising during both listening and imagery.

SIGNIFICANCE STATEMENT: It is well known that the human brain is activated during musical imagery: the act of voluntarily hearing music in our mind without external stimulation. It is unclear, however, what the temporal dynamics of this activation are and what musical features are precisely encoded in the neural signals. This study uses an experimental paradigm with high temporal precision to record and analyze cortical activity during musical imagery. It reveals that neural signals encode music acoustics and melodic expectations during both listening and imagery. Crucially, it also finds that a simple mapping based on a time shift and a polarity inversion robustly describes the relationship between listening and imagery signals.


Subjects
Auditory Cortex/physiology; Brain Mapping; Frontal Lobe/physiology; Imagination/physiology; Motivation/physiology; Music/psychology; Acoustic Stimulation; Adult; Electroencephalography; Electromyography; Evoked Potentials/physiology; Evoked Potentials, Auditory/physiology; Female; Humans; Male; Markov Chains; Occupations; Symbolism; Young Adult
4.
Neuroimage ; 227: 117586, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33346131

ABSTRACT

Acquiring a new language requires individuals to simultaneously and gradually learn linguistic attributes on multiple levels. Here, we investigated how this learning process changes the neural encoding of natural speech by assessing the encoding of the linguistic feature hierarchy in second-language listeners. Electroencephalography (EEG) signals were recorded from native Mandarin speakers with varied English proficiency and from native English speakers while they listened to audio-stories in English. We measured the temporal response functions (TRFs) for acoustic, phonemic, phonotactic, and semantic features in individual participants and found a main effect of proficiency on linguistic encoding. This effect of second-language proficiency was particularly prominent on the neural encoding of phonemes, showing stronger encoding of "new" phonemic contrasts (i.e., English contrasts that do not exist in Mandarin) with increasing proficiency. Overall, we found that the nonnative listeners with higher proficiency levels had a linguistic feature representation more similar to that of native listeners, which enabled the accurate decoding of language proficiency. This result advances our understanding of the cortical processing of linguistic information in second-language learners and provides an objective measure of language proficiency.
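The temporal response function (TRF) analysis mentioned in this abstract is, at its core, a regularized linear regression from time-lagged stimulus features onto the neural signal. A minimal single-feature, single-channel sketch — not the authors' pipeline; the lag count, regularization strength, and toy data are all assumptions:

```python
import numpy as np

def estimate_trf(stimulus, eeg, lags=32, lam=1.0):
    """Ridge-regression TRF: regress a lagged copy of one stimulus
    feature onto one neural channel. Both inputs are 1-D arrays of
    equal length; returns one weight per lag."""
    n = len(stimulus)
    X = np.zeros((n, lags))
    for k in range(lags):
        # column k holds the stimulus delayed by k samples
        X[k:, k] = stimulus[:n - k]
    return np.linalg.solve(X.T @ X + lam * np.eye(lags), X.T @ eeg)

# toy check: a response generated by a known kernel should be recovered
rng = np.random.default_rng(0)
stim = rng.standard_normal(5000)
true_kernel = np.exp(-np.arange(32) / 5.0)
eeg = np.convolve(stim, true_kernel)[:5000] + 0.01 * rng.standard_normal(5000)
w = estimate_trf(stim, eeg)
```

In the study, separate TRFs were fit for acoustic, phonemic, phonotactic, and semantic features; this sketch shows only the shared regression machinery.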


Subjects
Brain/physiology; Comprehension/physiology; Multilingualism; Speech Perception/physiology; Adolescent; Adult; Electroencephalography; Female; Humans; Language; Male; Middle Aged; Phonetics; Young Adult
5.
Proc Natl Acad Sci U S A ; 115(17): E3869-E3878, 2018 04 24.
Article in English | MEDLINE | ID: mdl-29632213

ABSTRACT

Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.
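Functional influence "in the sense of Granger," as invoked above, asks whether the past of one node improves prediction of another beyond the target's own past. The paper's estimator is adaptive, sparse, and point-process based; the toy below is only the static linear core of the idea, using ordinary least squares on simulated data:

```python
import numpy as np

def ar_residual_var(target, predictors, order=2):
    """Residual variance of a linear model predicting `target` from
    the past `order` samples of each series in `predictors`."""
    n = len(target)
    X = np.column_stack(
        [p[order - k: n - k] for p in predictors for k in range(1, order + 1)])
    y = target[order:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((y - X @ beta) ** 2)

def granger_ratio(x, y, order=2):
    """Restricted/full residual-variance ratio: values well above 1
    suggest a y -> x influence in the sense of Granger."""
    full = ar_residual_var(x, [x, y], order)
    restricted = ar_residual_var(x, [x], order)
    return restricted / full

# toy two-node network: y drives x with a one-sample delay
rng = np.random.default_rng(1)
y = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()
```

Here `granger_ratio(x, y)` is large while `granger_ratio(y, x)` stays near 1, recovering the simulated directionality; the paper's contribution is doing this dynamically, sparsely, and with statistical inference on spiking data.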


Subjects
Auditory Cortex/physiology; Calcium Signaling/physiology; Calcium/metabolism; Models, Neurological; Nerve Net/physiology; Animals; Auditory Cortex/diagnostic imaging; Mice; Nerve Net/diagnostic imaging
6.
J Neurosci ; 39(44): 8664-8678, 2019 10 30.
Article in English | MEDLINE | ID: mdl-31519821

ABSTRACT

Natural sounds such as vocalizations often have covarying acoustic attributes, resulting in redundancy in neural coding. The efficient coding hypothesis proposes that sensory systems are able to detect such covariation and adapt to reduce redundancy, leading to more efficient neural coding. Recent psychoacoustic studies have shown the auditory system can rapidly adapt to efficiently encode two covarying dimensions as a single dimension, following passive exposure to sounds in which temporal and spectral attributes covaried in a correlated fashion. However, these studies observed a cost to this adaptation, which was a loss of sensitivity to the orthogonal dimension. Here we explore the neural basis of this psychophysical phenomenon by recording single-unit responses from the primary auditory cortex in awake ferrets exposed passively to stimuli with two correlated attributes, similar in stimulus design to the psychoacoustic experiments in humans. We found: (1) the signal-to-noise ratio of spike-rate coding of cortical responses driven by sounds with correlated attributes remained unchanged along the exposure dimension, but was reduced along the orthogonal dimension; (2) performance of a decoder trained with spike data to discriminate stimuli along the orthogonal dimension was equally reduced; (3) correlations between neurons tuned to the two covarying attributes decreased after exposure; and (4) these exposure effects still occurred if sounds were correlated along two acoustic dimensions, but varied randomly along a third dimension. 
These neurophysiological results are consistent with the efficient coding hypothesis and may help deepen our understanding of how the auditory system encodes and represents acoustic regularities and covariance.

SIGNIFICANCE STATEMENT: The efficient coding (EC) hypothesis (Attneave, 1954; Barlow, 1961) proposes that the neural code in sensory systems efficiently encodes natural stimuli by minimizing the number of spikes needed to transmit a sensory signal. Results of recent psychoacoustic studies in humans are consistent with the EC hypothesis in that, following passive exposure to stimuli with correlated attributes, the auditory system rapidly adapts so as to more efficiently encode the two covarying dimensions as a single dimension. In the current neurophysiological experiments, using a stimulus design and experimental paradigm similar to those of the psychoacoustic studies of Stilp et al. (2010) and Stilp and Kluender (2011, 2012, 2016), we recorded responses from single neurons in the auditory cortex of the awake ferret, showing adaptive efficient neural coding of two correlated acoustic attributes.


Subjects
Adaptation, Physiological; Auditory Cortex/physiology; Auditory Perception/physiology; Neurons/physiology; Acoustic Stimulation; Action Potentials; Animals; Female; Ferrets; Models, Neurological; Psychoacoustics
7.
J Neurosci ; 38(46): 9955-9966, 2018 11 14.
Article in English | MEDLINE | ID: mdl-30266740

ABSTRACT

Responses of auditory cortical neurons encode sound features of incoming acoustic stimuli and are also shaped by stimulus context and history. Previous studies of mammalian auditory cortex have reported a variable time course for such contextual effects, ranging from milliseconds to minutes. However, in secondary auditory forebrain areas of songbirds, long-term stimulus-specific neuronal habituation to acoustic stimuli can persist for much longer periods of time, ranging from hours to days. Such long-term habituation in the songbird is a form of long-term auditory memory that requires gene expression. Although such long-term habituation has been demonstrated in the avian auditory forebrain, this phenomenon has not previously been described in the mammalian auditory system. Using a version of the avian habituation paradigm, we explored whether such long-term effects of stimulus history also occur in the auditory cortex of a mammalian auditory generalist, the ferret. Following repetitive presentation of novel complex sounds, we observed significant response habituation in secondary auditory cortex, but not in primary auditory cortex. This long-term habituation appeared to be independent for each novel stimulus and often lasted for at least 20 min. These effects could not be explained by simple neuronal fatigue in the auditory pathway, because time-reversed sounds induced undiminished responses similar to those elicited by completely novel sounds. A parallel set of pupillometric response measurements in the ferret revealed long-term habituation effects similar to the observed long-term neural habituation, supporting the hypothesis that habituation to passively presented stimuli is correlated with implicit learning and long-term recognition of familiar sounds.

SIGNIFICANCE STATEMENT: Long-term habituation in higher areas of the songbird auditory forebrain is associated with gene expression and is correlated with recognition memory. Similar long-term auditory habituation in mammals has not been previously described. We studied such habituation in single neurons in the auditory cortex of awake ferrets that were passively listening to repeated presentations of various complex sounds. Responses exhibited long-lasting habituation (at least 20 min) in secondary, but not primary, auditory cortex. Habituation ceased when stimuli were played backward, despite their spectral content being identical to that of the original sounds. This long-term neural habituation correlated with similar habituation of ferret pupillary responses to repeated presentations of the same stimuli, suggesting that stimulus habituation is retained as a long-term behavioral memory.


Subjects
Acoustic Stimulation/methods; Auditory Cortex/physiology; Auditory Perception/physiology; Habituation, Psychophysiologic/physiology; Memory/physiology; Animals; Auditory Pathways/physiology; Female; Ferrets
8.
Cereb Cortex ; 28(3): 868-879, 2018 03 01.
Article in English | MEDLINE | ID: mdl-28069762

ABSTRACT

Sensory environments change over a wide dynamic range and sensory processing can change rapidly to facilitate stable perception. While rapid changes may occur throughout the sensory processing pathway, cortical changes are believed to profoundly influence perception. Prior stimulation studies showed that orbitofrontal cortex (OFC) can modify receptive fields and sensory coding in A1, but the engagement of OFC during listening and the pathways mediating OFC influences on A1 are unknown. We show in mice that OFC neurons respond to sounds consistent with a role of OFC in audition. We then show in vitro that OFC axons are present in A1 and excite pyramidal and GABAergic cells in all layers of A1 via glutamatergic synapses. Optogenetic stimulation of OFC terminals in A1 in vivo evokes short-latency neural activity in A1 and pairing activation of OFC projections in A1 with sounds alters sound-evoked A1 responses. Together, our results identify a direct connection from OFC to A1 that can excite A1 neurons at the earliest stage of cortical processing, and thereby sculpt A1 receptive fields. These results are consistent with a role for OFC in adjusting to changing behavioral relevance of sensory inputs and modulating A1 receptive fields to enhance sound processing.


Subjects
Auditory Cortex/cytology; Nerve Net/physiology; Neurons/physiology; Prefrontal Cortex/cytology; Sound; Acoustic Stimulation; Action Potentials/physiology; Animals; Auditory Perception; Axons/physiology; Channelrhodopsins/genetics; Channelrhodopsins/metabolism; Evoked Potentials/physiology; Excitatory Postsynaptic Potentials; Female; Glutamate Decarboxylase/genetics; Glutamate Decarboxylase/metabolism; Luminescent Proteins/genetics; Luminescent Proteins/metabolism; Male; Mice; Mice, Inbred C57BL; Reaction Time/physiology
9.
J Acoust Soc Am ; 144(4): 2424, 2018 10.
Article in English | MEDLINE | ID: mdl-30404514

ABSTRACT

The brain decomposes mixtures of sounds, such as competing talkers, into perceptual streams that can be attended to individually. Attention can enhance the cortical representation of streams, but it is unknown what acoustic features the enhancement reflects, or where in the auditory pathways attentional enhancement is first observed. Here, behavioral measures of streaming were combined with simultaneous low- and high-frequency envelope-following responses (EFR) that are thought to originate primarily from cortical and subcortical regions, respectively. Repeating triplets of harmonic complex tones were presented with alternating fundamental frequencies. The tones were filtered to contain either low-numbered spectrally resolved harmonics, or only high-numbered unresolved harmonics. The behavioral results confirmed that segregation can be based on either tonotopic or pitch cues. The EFR results revealed no effects of streaming or attention on subcortical responses. Cortical responses revealed attentional enhancement under conditions of streaming, but only when tonotopic cues were available, not when streaming was based only on pitch cues. The results suggest that the attentional modulation of phase-locked responses is dominated by tonotopically tuned cortical neurons that are insensitive to pitch or periodicity cues.


Subjects
Auditory Cortex/physiology; Pitch Perception; Attention; Female; Humans; Loudness Perception; Male; Sound; Young Adult
10.
PLoS Comput Biol ; 12(7): e1005019, 2016 07.
Article in English | MEDLINE | ID: mdl-27398600

ABSTRACT

Sound waveforms convey information largely via amplitude modulations (AM). A large body of experimental evidence supports a modulation (bandpass) filterbank model of AM processing. Details of this model have varied over time, partly reflecting different experimental conditions and diverse datasets from distinct task strategies, contributing uncertainty to the bandwidth measurements and leaving important issues unresolved. Here we adopt a solely data-driven measurement approach in which we first demonstrate how different models can be subsumed within a common 'cascade' framework, and then characterize the cascade via system identification analysis using a single stimulus/task specification, and hence stable task rules largely unconstrained by any model or parameters. Observers were required to detect a brief change in level superimposed onto random level changes that served as AM noise; the relationship between trial-by-trial noisy fluctuations and the corresponding human responses enables targeted identification of distinct cascade elements. The resulting measurements paint a complex, dynamic picture in which human perception of auditory modulations appears adaptive in nature, evolving from an initial lowpass mode to bandpass modes (with broad tuning, Q∼1) following repeated stimulus exposure.
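The trial-by-trial identification logic described above — relating random level fluctuations to the observer's responses — is essentially psychophysical reverse correlation. A toy simulation, with a hypothetical Gaussian-shaped observer template standing in for the cascade element being estimated (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins = 4000, 20

# random level perturbations per trial (the 'AM noise')
noise = rng.standard_normal((n_trials, n_bins))

# simulated observer: reports 'change heard' when the noise projected
# onto an internal weighting exceeds a criterion (plus internal noise)
template = np.exp(-0.5 * ((np.arange(n_bins) - 10) / 2.0) ** 2)
decision = noise @ template + 0.5 * rng.standard_normal(n_trials)
resp = decision > 0

# classification image: mean noise on 'yes' minus 'no' trials,
# which recovers the shape of the internal template
cimg = noise[resp].mean(axis=0) - noise[~resp].mean(axis=0)
```

With enough trials, `cimg` converges to a scaled copy of `template`; applying the same logic across exposure blocks is how a lowpass-to-bandpass transition could be tracked.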


Subjects
Auditory Pathways/physiology; Auditory Perception/physiology; Task Performance and Analysis; Adult; Computational Biology; Humans; Noise; Young Adult
11.
Proc Natl Acad Sci U S A ; 111(18): 6792-7, 2014 May 06.
Article in English | MEDLINE | ID: mdl-24753585

ABSTRACT

Humans and animals can reliably perceive behaviorally relevant sounds in noisy and reverberant environments, yet the neural mechanisms behind this phenomenon are largely unknown. To understand how neural circuits represent degraded auditory stimuli with additive and reverberant distortions, we compared single-neuron responses in ferret primary auditory cortex to speech and vocalizations in four conditions: clean, additive white and pink (1/f) noise, and reverberation. Despite substantial distortion, responses of neurons to the vocalization signal remained stable, maintaining the same statistical distribution in all conditions. Stimulus spectrograms reconstructed from population responses to the distorted stimuli resembled more the original clean than the distorted signals. To explore mechanisms contributing to this robustness, we simulated neural responses using several spectrotemporal receptive field models that incorporated either a static nonlinearity or subtractive synaptic depression and multiplicative gain normalization. The static model failed to suppress the distortions. A dynamic model incorporating feed-forward synaptic depression could account for the reduction of additive noise, but only the combined model with feedback gain normalization was able to predict the effects across both additive and reverberant conditions. Thus, both mechanisms can contribute to the abilities of humans and animals to extract relevant sounds in diverse noisy environments.
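Of the two dynamic mechanisms named above, multiplicative gain normalization is the simpler to sketch: divide the feed-forward drive by a slowly tracked estimate of its own recent magnitude. The toy below is not the authors' model — the exponential-average time constant and semi-saturation floor are arbitrary — but it illustrates the scale invariance that helps discount stationary distortions:

```python
import numpy as np

def gain_normalize(drive, tau=50.0, sigma=0.01):
    """Divide the input by an exponential moving average of its own
    magnitude (time constant `tau` samples, semi-saturation `sigma`)."""
    avg = sigma
    out = np.empty_like(drive)
    for t, d in enumerate(drive):
        avg += (abs(d) - avg) / tau   # slow running estimate of level
        out[t] = d / (sigma + avg)    # divisive normalization
    return out

x = 1.0 + 0.2 * np.sin(np.linspace(0.0, 20.0, 1000))
out1 = gain_normalize(x)
out2 = gain_normalize(3.0 * x)   # same signal at three times the level
```

After the running average settles, `out1` and `out2` nearly coincide: the normalized response depends on the signal's shape, not its absolute level, which is one route to noise- and reverberation-robust coding.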


Subjects
Auditory Cortex/physiology; Speech Perception/physiology; Acoustic Stimulation; Animals; Female; Ferrets/physiology; Humans; Models, Neurological; Neurons/physiology; Noise; Nonlinear Dynamics; Vocalization, Animal
12.
J Neurosci ; 35(18): 7256-63, 2015 May 06.
Article in English | MEDLINE | ID: mdl-25948273

ABSTRACT

The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene.


Subjects
Acoustic Stimulation/methods; Auditory Pathways/physiology; Auditory Perception/physiology; Brain Mapping/methods; Psychoacoustics; Adult; Electroencephalography/methods; Female; Humans; Magnetic Resonance Imaging/methods; Male; Time Factors; Young Adult
13.
Neuroimage ; 124(Pt A): 906-917, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26436490

ABSTRACT

The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy.
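The decoding problem posed above — inferring which of two speech streams a listener attends from neural data and the two envelopes — can be caricatured without the paper's state-space/EM machinery by simply comparing windowed correlations. This crude stand-in (window length and simulated data are assumptions) conveys what the MAP decoder estimates far more gracefully, with confidence intervals:

```python
import numpy as np

def decode_attention(neural, env1, env2, win=500):
    """Assign each window to speaker 1 or 2 according to whose
    envelope correlates better with the neural response."""
    states = []
    for start in range(0, len(neural) - win + 1, win):
        s = slice(start, start + win)
        r1 = np.corrcoef(neural[s], env1[s])[0, 1]
        r2 = np.corrcoef(neural[s], env2[s])[0, 1]
        states.append(1 if r1 >= r2 else 2)
    return np.array(states)

rng = np.random.default_rng(0)
env1 = rng.standard_normal(4000)
env2 = rng.standard_normal(4000)
# simulated listener attends speaker 1 first, then switches to speaker 2
neural = np.where(np.arange(4000) < 2000, env1, env2) \
         + 0.5 * rng.standard_normal(4000)
decoded = decode_attention(neural, env1, env2)
```

The state-space formulation improves on this by smoothing the attentional state over time and attaching statistical confidence to each estimate, rather than making independent hard decisions per window.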


Subjects
Attention/physiology; Loudness Perception/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Algorithms; Auditory Perception/physiology; Environment; Female; Humans; Magnetoencephalography; Male; Models, Neurological; Young Adult
14.
J Neurophysiol ; 115(5): 2389-98, 2016 06 01.
Article in English | MEDLINE | ID: mdl-26912594

ABSTRACT

Neural encoding of sensory stimuli is typically studied by averaging neural signals across repetitions of the same stimulus. However, recent work has suggested that the variance of neural activity across repeated trials can also depend on sensory inputs. Here we characterize how intertrial variance of the local field potential (LFP) in primary auditory cortex of awake ferrets is affected by continuous natural sound stimuli. We find that natural sounds often suppress the intertrial variance of low-frequency LFP (<16 Hz). However, the amount of the variance reduction is not significantly correlated with the amplitude of the mean response at the same recording site. Moreover, the variance changes occur with longer latency than the mean response. Although the dynamics of the mean response and intertrial variance differ, spectro-temporal receptive field analysis reveals that changes in LFP variance have frequency tuning similar to multiunit activity at the same recording site, suggesting a local origin for changes in LFP variance. In summary, the spectral tuning of LFP intertrial variance and the absence of a correlation with the amplitude of the mean evoked LFP suggest substantial heterogeneity in the interaction between spontaneous and stimulus-driven activity across local neural populations in auditory cortex.
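The two quantities contrasted in this abstract are simple to compute from repeated trials: the trial-averaged (mean) response and the across-trial variance at each time point. A toy example in which the stimulus quenches ongoing variability partway through the trial (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_time = 50, 300
t = np.arange(n_time)

evoked = np.sin(2 * np.pi * t / 100.0)        # the mean (evoked) response
# ongoing variability is suppressed in the second half of the trial
sd = np.where(t < 150, 1.0, 0.3)
trials = evoked + sd * rng.standard_normal((n_trials, n_time))

mean_resp = trials.mean(axis=0)               # classic trial average
itv = trials.var(axis=0, ddof=1)              # intertrial variance
```

In this construction the evoked response and the variance reduction are independent by design, echoing the paper's observation that the two measures need not covary across recording sites.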


Subjects
Auditory Cortex/physiology; Auditory Perception; Evoked Potentials, Auditory; Animals; Auditory Cortex/cytology; Ferrets; Neurons/physiology; Reaction Time; Sound
15.
Cereb Cortex ; 25(7): 1697-706, 2015 Jul.
Article in English | MEDLINE | ID: mdl-24429136

ABSTRACT

How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus-reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus, it would be extremely useful for research in many populations if stimulus-reconstruction were effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain-computer interfaces.
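Stimulus-reconstruction, as used above, is in its simplest form a backward ridge regression from multichannel EEG onto the attended speech envelope; attention is then read out by correlating the reconstruction with each candidate envelope. A minimal zero-lag sketch (real decoders use a range of time lags and cross-validation; the forward model and noise level here are toy assumptions):

```python
import numpy as np

def train_decoder(eeg, env, lam=1e-2):
    """Backward model: ridge regression mapping EEG channels
    (time x channels) back onto a speech envelope (time,)."""
    n_ch = eeg.shape[1]
    return np.linalg.solve(eeg.T @ eeg + lam * np.eye(n_ch), eeg.T @ env)

rng = np.random.default_rng(0)
attended = rng.standard_normal(3000)
ignored = rng.standard_normal(3000)

# toy forward model: the attended envelope projects onto 16 channels
mix = rng.standard_normal((16, 1))
eeg = attended[:, None] @ mix.T + 0.5 * rng.standard_normal((3000, 16))

w = train_decoder(eeg[:2000], attended[:2000])   # train on first part
rec = eeg[2000:] @ w                             # reconstruct held-out part
r_att = np.corrcoef(rec, attended[2000:])[0, 1]
r_ign = np.corrcoef(rec, ignored[2000:])[0, 1]
```

Because only the attended stream is represented in this toy EEG, the reconstruction correlates with the attended envelope and not the ignored one — the same comparison that yields the single-trial attention decision in the study.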


Subjects
Attention/physiology; Brain/physiology; Electroencephalography/methods; Signal Processing, Computer-Assisted; Speech Perception/physiology; Acoustic Stimulation; Adult; Female; Humans; Male; Neuropsychological Tests; Time Factors
16.
J Acoust Soc Am ; 140(6): 4046, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28040019

ABSTRACT

In order to explore the representation of sound features in auditory long-term memory, two groups of ferrets were trained on Go vs Nogo, 3-zone classification tasks. The sound stimuli differed primarily along the spectral and temporal dimensions. In Group 1, two ferrets were trained to (i) classify tones based on their frequency (Tone-task), and subsequently learned to (ii) classify white noise based on its amplitude modulation rate (AM-task). In Group 2, two ferrets were trained to classify tones based on correlated combinations of their frequency and AM rate (AM-Tone task). Both groups of ferrets learned their tasks and were able to generalize performance along the trained spectral (tone frequency) or temporal (AM rate) dimensions. Insights into stimulus representations in memory were gained when the animals were tested with a diverse set of untrained probes that mixed features from the two dimensions. Animals exhibited a complex pattern of responses to the probes reflecting primarily the probes' spectral similarity with the training stimuli, and secondarily the temporal features of the stimuli. These diverse behavioral decisions could be well accounted for by a nearest-neighbor classifier model that relied on a multiscale spectrotemporal cortical representation of the training and probe sounds.
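The nearest-neighbor classifier invoked above can be sketched in a toy two-dimensional feature space, with log tone frequency and log AM rate standing in for the paper's multiscale spectrotemporal cortical representation (all coordinate values below are hypothetical, not the study's stimuli):

```python
import numpy as np

def nearest_neighbor_class(probe, templates, labels):
    """1-NN classification: assign the probe to the class of the
    closest training exemplar in feature space."""
    d = np.linalg.norm(templates - probe, axis=1)
    return labels[int(np.argmin(d))]

# toy training exemplars: (log frequency, log AM rate)
templates = np.array([[np.log(500.0), np.log(4.0)],     # class 'low'
                      [np.log(2000.0), np.log(16.0)],   # class 'mid'
                      [np.log(8000.0), np.log(64.0)]])  # class 'high'
labels = np.array(['low', 'mid', 'high'])

# a probe whose frequency is near the 'mid' exemplar but whose
# AM rate is near 'high' -- a mixed-feature probe
probe = np.array([np.log(1800.0), np.log(50.0)])
```

With these (hypothetical) coordinates the spectral match dominates the decision, echoing the behavioral pattern reported above; in the study, the distance was computed in a multiscale spectrotemporal cortical representation rather than this two-number summary.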


Subjects
Memory, Long-Term , Acoustic Stimulation , Animals , Auditory Cortex , Auditory Perception , Learning
17.
J Neurosci ; 34(12): 4396-408, 2014 Mar 19.
Article in English | MEDLINE | ID: mdl-24647959

ABSTRACT

Complex natural and environmental sounds, such as speech and music, convey information along both spectral and temporal dimensions. The cortical representation of such stimuli rapidly adapts when animals become actively engaged in discriminating them. In this study, we examine the nature of these changes using simplified spectrotemporal stimuli (upward vs downward shifting tone sequences) with domestic ferrets (Mustela putorius). Cortical processing rapidly adapted to enhance the contrast between the two discriminated stimulus categories, by changing spectrotemporal receptive field properties to encode both the spectral and temporal structure of the tone sequences. Furthermore, the valence of the changes was closely linked to the task reward structure: stimuli associated with negative reward became enhanced relative to those associated with positive reward. These task- and stimulus-related spectrotemporal receptive field changes occurred only in trained animals during, and immediately following, behavior. This plasticity was independently confirmed by parallel changes in a directionality function measured from the responses to the transition of tone sequences during task performance. The results demonstrate that induced patterns of rapid plasticity closely reflect the spectrotemporal structure of the task stimuli, thus extending the functional relevance of rapid task-related plasticity to the perception and learning of natural sounds such as speech and animal vocalizations.


Subjects
Adaptation, Psychological/physiology , Auditory Cortex/physiology , Auditory Perception/physiology , Neuronal Plasticity/physiology , Acoustic Stimulation/methods , Animals , Discrimination Learning/physiology , Ferrets , Reaction Time/physiology
18.
J Neurosci ; 34(46): 15135-8, 2014 Nov 12.
Article in English | MEDLINE | ID: mdl-25392481

ABSTRACT

The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well.


Subjects
Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Animals , Hearing/physiology , Humans
19.
PLoS Biol ; 10(1): e1001251, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22303281

ABSTRACT

How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
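The linear reconstruction model described above treats each time-frequency bin of the auditory spectrogram as a weighted sum of the neural population response. A minimal sketch on simulated data follows; electrode counts, noise levels, and the hidden forward model are illustrative assumptions, and the nonlinear modulation-energy stage the abstract mentions is omitted.

```python
# Sketch of linear spectrogram reconstruction from simulated population activity.
import numpy as np

rng = np.random.default_rng(1)
n_t, n_elec, n_freq = 2000, 32, 16     # time bins, electrodes, spectrogram bands

spec = np.abs(rng.standard_normal((n_t, n_freq)))    # "true" auditory spectrogram
W_fwd = rng.standard_normal((n_freq, n_elec))        # hidden forward encoding
neural = spec @ W_fwd + 0.3 * rng.standard_normal((n_t, n_elec))

# Fit the linear decoding weights on the first half, reconstruct the second.
half = n_t // 2
G, *_ = np.linalg.lstsq(neural[:half], spec[:half], rcond=None)
recon = neural[half:] @ G

# Reconstruction accuracy: mean correlation across frequency bands,
# analogous to the accuracy measure reported for the slow fluctuations.
r = np.mean([np.corrcoef(recon[:, f], spec[half:, f])[0, 1]
             for f in range(n_freq)])
print(f"mean band correlation: {r:.2f}")
```

Word identification from brain activity, as in the abstract, would then amount to comparing such reconstructed spectrograms against candidate stimulus templates.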


Subjects
Auditory Cortex/physiology , Brain Mapping , Speech Acoustics , Algorithms , Computer Simulation , Electrodes, Implanted , Electroencephalography , Female , Humans , Linear Models , Male , Models, Biological
20.
Proc Natl Acad Sci U S A ; 109(6): 2144-9, 2012 Feb 07.
Article in English | MEDLINE | ID: mdl-22308415

ABSTRACT

As sensory stimuli and behavioral demands change, the attentive brain quickly identifies task-relevant stimuli and associates them with appropriate motor responses. The effects of attention on sensory processing vary across task paradigms, suggesting that the brain may use multiple strategies and mechanisms to highlight attended stimuli and link them to motor action. To better understand factors that contribute to these variable effects, we studied sensory representations in primary auditory cortex (A1) during two instrumental tasks that shared the same auditory discrimination but required different behavioral responses, either approach or avoidance. In the approach task, ferrets were rewarded for licking a spout when they heard a target tone amid a sequence of reference noise sounds. In the avoidance task, they were punished unless they inhibited licking to the target. To explore how these changes in task reward structure influenced attention-driven rapid plasticity in A1, we measured changes in sensory neural responses during behavior. Responses to the target changed selectively during both tasks but did so with opposite sign. Despite the differences in sign, both effects were consistent with a general neural coding strategy that maximizes discriminability between sound classes. The dependence of the direction of plasticity on task suggests that representations in A1 change not only to sharpen representations of task-relevant stimuli but also to amplify responses to stimuli that signal aversive outcomes and lead to behavioral inhibition. Thus, top-down control of sensory processing can be shaped by task reward structure in addition to the required sensory discrimination.


Subjects
Auditory Cortex/physiology , Neuronal Plasticity/physiology , Reward , Task Performance and Analysis , Acoustic Stimulation , Animals , Avoidance Learning/physiology , Behavior, Animal/physiology , Ferrets , Time Factors