Results 1 - 20 of 24
1.
Proc Natl Acad Sci U S A ; 120(49): e2309166120, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38032934

ABSTRACT

Neural speech tracking has advanced our understanding of how our brains rapidly map an acoustic speech signal onto linguistic representations and ultimately meaning. It remains unclear, however, how speech intelligibility is related to the corresponding neural responses. Many studies addressing this question vary the level of intelligibility by manipulating the acoustic waveform, but this makes it difficult to cleanly disentangle the effects of intelligibility from underlying acoustical confounds. Here, using magnetoencephalography recordings, we study neural measures of speech intelligibility by manipulating intelligibility while keeping the acoustics strictly unchanged. Acoustically identical degraded speech stimuli (three-band noise-vocoded, ~20 s duration) are presented twice, but the second presentation is preceded by the original (nondegraded) version of the speech. This intermediate priming, which generates a "pop-out" percept, substantially improves the intelligibility of the second degraded speech passage. We investigate how intelligibility and acoustical structure affect acoustic and linguistic neural representations using multivariate temporal response functions (mTRFs). As expected, behavioral results confirm that perceived speech clarity is improved by priming. mTRF analysis reveals that auditory (speech envelope and envelope onset) neural representations are not affected by priming but only by the acoustics of the stimuli (bottom-up driven). Critically, our findings suggest that segmentation of sounds into words emerges with better speech intelligibility, and most strongly at the later (~400 ms latency) word processing stage, in prefrontal cortex, in line with engagement of top-down mechanisms associated with priming. Taken together, our results show that word representations may provide some objective measures of speech comprehension.
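The mTRF analysis referenced above is, at its core, a regularized lagged regression from stimulus features (e.g., the speech envelope) to the neural response. A minimal sketch with simulated data follows; the sampling rate, lag range, ridge parameter, and the shape of the "true" response function are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100                       # sampling rate in Hz (assumed)
n = 2000                       # ~20 s of data, matching the stimulus duration
lags = np.arange(0, 40)        # 0-390 ms of response lags (assumed range)

# Simulated speech envelope and a known TRF peaking near 100 ms
env = rng.standard_normal(n)
true_trf = np.exp(-0.5 * ((lags - 10) / 3.0) ** 2)

# MEG-like response: envelope convolved with the TRF, plus noise
resp = np.convolve(env, true_trf)[:n] + 0.5 * rng.standard_normal(n)

# Lagged design matrix: column j holds the envelope delayed by lags[j]
X = np.zeros((n, lags.size))
for j, L in enumerate(lags):
    X[L:, j] = env[: n - L]

# Ridge-regularized TRF estimate (boosting is another common estimator)
lam = 1.0
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ resp)

peak_lag_ms = lags[np.argmax(trf_hat)] * 1000 / fs
```

With enough data relative to the noise, the estimated filter recovers the simulated response function, including its peak latency.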


Subjects
Speech Intelligibility, Speech Perception, Speech Intelligibility/physiology, Acoustic Stimulation/methods, Speech/physiology, Noise, Acoustics, Magnetoencephalography/methods, Speech Perception/physiology
2.
Transl Psychiatry ; 13(1): 13, 2023 01 19.
Article in English | MEDLINE | ID: mdl-36653335

ABSTRACT

Aberrant gamma frequency neural oscillations in schizophrenia have been well demonstrated using auditory steady-state responses (ASSR). However, the neural circuits underlying 40 Hz ASSR deficits in schizophrenia remain poorly understood. Sixty-six patients with schizophrenia spectrum disorders and 85 age- and gender-matched healthy controls completed one electroencephalography session measuring 40 Hz ASSR and one imaging session for resting-state functional connectivity (rsFC) assessments. The associations between the normalized power of 40 Hz ASSR and rsFC were assessed via linear regression and mediation models. We found that rsFC among auditory, precentral, postcentral, and prefrontal cortices was positively associated with 40 Hz ASSR in patients and controls separately and in the combined sample. The mediation analysis further confirmed that the deficit of gamma band ASSR in schizophrenia was nearly fully mediated by three rsFC circuits: right superior temporal gyrus to left medial prefrontal cortex (MPFC), left MPFC to left postcentral gyrus (PoG), and left precentral gyrus to right PoG. Gamma-band ASSR deficits in schizophrenia may be associated with deficient circuitry-level connectivity to support gamma frequency synchronization. Correcting gamma-band deficits in schizophrenia may therefore require interventions that normalize these aberrant networks.
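The regression-plus-mediation logic above can be sketched with the standard product-of-coefficients approach on simulated data. Everything below is a toy illustration: the sample size, effect sizes, and the assumption of full mediation are invented, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150                                       # subjects (hypothetical)
group = rng.integers(0, 2, n).astype(float)   # 0 = control, 1 = patient

# Simulate full mediation: group lowers rsFC, and rsFC drives ASSR power
rsfc = -0.8 * group + 0.3 * rng.standard_normal(n)
assr = 1.0 * rsfc + 0.3 * rng.standard_normal(n)

def ols(predictors, y):
    # Ordinary least squares with an intercept column prepended
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

c  = ols([group], assr)[1]          # total effect of group on ASSR
a  = ols([group], rsfc)[1]          # path a: group -> rsFC
b  = ols([group, rsfc], assr)[2]    # path b: rsFC -> ASSR, controlling group
cp = ols([group, rsfc], assr)[1]    # direct effect after adding the mediator

indirect = a * b                    # mediated (indirect) effect
prop_mediated = indirect / c        # ~1 under full mediation
```

Under full mediation the indirect effect accounts for essentially all of the total effect, and the direct effect shrinks toward zero; in practice the indirect effect's significance would be assessed by bootstrapping.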


Subjects
Auditory Cortex, Connectome, Schizophrenia, Humans, Auditory Evoked Potentials/physiology, Acoustic Stimulation/methods, Electroencephalography/methods
3.
PLoS Biol ; 18(10): e3000883, 2020 10.
Article in English | MEDLINE | ID: mdl-33091003

ABSTRACT

Humans are remarkably skilled at listening to one speaker out of an acoustic mixture of several speech sources. Two speakers are easily segregated, even without binaural cues, but the neural mechanisms underlying this ability are not well understood. One possibility is that early cortical processing performs a spectrotemporal decomposition of the acoustic mixture, allowing the attended speech to be reconstructed via optimally weighted recombinations that discount spectrotemporal regions where sources heavily overlap. Using human magnetoencephalography (MEG) responses to a 2-talker mixture, we show evidence for an alternative possibility, in which early, active segregation occurs even for strongly spectrotemporally overlapping regions. Early (approximately 70-millisecond) responses to nonoverlapping spectrotemporal features are seen for both talkers. When competing talkers' spectrotemporal features mask each other, the individual representations persist, but they occur with an approximately 20-millisecond delay. This suggests that the auditory cortex recovers acoustic features that are masked in the mixture, even if they occurred in the ignored speech. The existence of such noise-robust cortical representations, of features present in attended as well as ignored speech, suggests an active cortical stream segregation process, which could explain a range of behavioral effects of ignored background speech.
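The ~20 ms extra delay for masked features could in principle be quantified by lag-resolved correlation between a stimulus feature and the neural response. The following is a toy sketch under strong simplifying assumptions (a white-noise stand-in for the spectrotemporal feature, additive-noise stand-ins for the responses), not the study's actual estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000                            # Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
feature = rng.standard_normal(t.size)

# Response to a masked feature: same shape, delayed ~20 ms, plus noise
delay = int(0.020 * fs)
resp_clean  = feature + 0.5 * rng.standard_normal(t.size)
resp_masked = np.roll(feature, delay) + 0.5 * rng.standard_normal(t.size)

def best_lag(x, y, max_lag=100):
    # Lag (in samples) at which y best correlates with a delayed copy of x
    lags = np.arange(0, max_lag)
    r = [np.corrcoef(x[: x.size - L], y[L:])[0, 1] for L in lags]
    return lags[int(np.argmax(r))]

lag_clean_ms  = best_lag(feature, resp_clean) * 1000 / fs
lag_masked_ms = best_lag(feature, resp_masked) * 1000 / fs
```

The recovered lags separate the "nonoverlapping" (near-zero extra delay) and "masked" (~20 ms extra delay) cases.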


Subjects
Auditory Cortex/physiology, Speech/physiology, Acoustic Stimulation, Acoustics, Adult, Attention/physiology, Female, Humans, Magnetoencephalography, Male, Middle Aged, Biological Models, Time Factors, Young Adult
4.
Sci Rep ; 7(1): 17536, 2017 12 13.
Article in English | MEDLINE | ID: mdl-29235479

ABSTRACT

In the phenomenon of perceptual filling-in, missing sensory information can be reconstructed via interpolation or extrapolation from adjacent contextual cues by what is necessarily an endogenous, not yet well understood, neural process. In this investigation, sound stimuli were chosen to allow observation of fixed cortical oscillations driven by contextual (but missing) sensory input, thus entirely reflecting endogenous neural activity. The stimulus employed was a 5 Hz frequency-modulated tone, with brief masker probes (noise bursts) occasionally added. For half the probes, the rhythmic frequency modulation was moreover removed. Listeners reported whether the tone masked by each probe was perceived as being rhythmic or not. Time-frequency analysis of neural responses obtained by magnetoencephalography (MEG) shows that for maskers without the underlying acoustic rhythm, trials where rhythm was nonetheless perceived show higher evoked sustained rhythmic power than trials for which no rhythm was reported. The results support a model in which perceptual filling-in is aided by differential co-modulations of cortical activity at rates directly relevant to human speech communication. We propose that the presence of rhythmically-modulated neural dynamics predicts the subjective experience of a rhythmically modulated sound in real time, even when the perceptual experience is not supported by corresponding sensory data.
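Evoked (phase-locked) rhythmic power, the key measure contrasted across "rhythm perceived" and "no rhythm" trials above, can be sketched by averaging trials before taking the spectrum, which cancels activity that is not phase-locked to the stimulus. All signal parameters below (sampling rate, amplitudes, trial count) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, dur, f_rhythm = 200, 2.0, 5.0     # Hz, s; 5 Hz matches the FM rate used
t = np.arange(0, dur, 1 / fs)
n_trials = 50

def trial_set(evoked_amp):
    # Each trial: a phase-consistent (evoked) 5 Hz component plus noise
    return (evoked_amp * np.sin(2 * np.pi * f_rhythm * t)
            + rng.standard_normal((n_trials, t.size)))

perceived = trial_set(evoked_amp=0.3)   # trials where rhythm was reported
missed    = trial_set(evoked_amp=0.0)   # trials where it was not

def evoked_power(trials):
    # Average first (keeps only phase-locked activity), then take
    # spectral power at the rhythm frequency
    avg = trials.mean(axis=0)
    spec = np.abs(np.fft.rfft(avg)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - f_rhythm))] ** 2

pow_perceived = evoked_power(perceived)
pow_missed = evoked_power(missed)
```

Trial averaging suppresses the non-phase-locked noise by roughly the square root of the trial count, so even a weak evoked 5 Hz component stands out clearly.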


Subjects
Auditory Perception/physiology, Cerebral Cortex/physiology, Perceptual Masking/physiology, Periodicity, Acoustic Stimulation, Adult, Female, Humans, Magnetoencephalography, Male, Physiological Pattern Recognition/physiology, Computer-Assisted Signal Processing
5.
J Neurophysiol ; 116(5): 2346-2355, 2016 11 01.
Article in English | MEDLINE | ID: mdl-27535374

ABSTRACT

Humans have a remarkable ability to track and understand speech in unfavorable conditions, such as in background noise, but speech understanding in noise does deteriorate with age. Results from several studies have shown that in younger adults, low-frequency auditory cortical activity reliably synchronizes to the speech envelope, even when the background noise is considerably louder than the speech signal. However, cortical speech processing may be limited by age-related decreases in the precision of neural synchronization in the midbrain. To better understand the neural mechanisms contributing to impaired speech perception in older adults, we investigated how aging affects midbrain and cortical encoding of speech when presented in quiet and in the presence of a single competing talker. Our results suggest that central auditory temporal processing deficits in older adults manifest in both the midbrain and the cortex. Specifically, midbrain frequency-following responses to a speech syllable are more degraded in noise in older adults than in younger adults. This suggests a failure of the midbrain auditory mechanisms needed to compensate for the presence of a competing talker. Similarly, in cortical responses, older adults show larger reductions than younger adults in their ability to encode the speech envelope when a competing talker is added. Interestingly, older adults showed an exaggerated cortical representation of speech in both quiet and noise conditions, suggesting a possible imbalance between inhibitory and excitatory processes, or diminished network connectivity, that may impair their ability to encode speech efficiently.


Subjects
Aging/physiology, Auditory Cortex/physiology, Midbrain/physiology, Noise, Speech Perception/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Aged, Electroencephalography/trends, Female, Humans, Magnetoencephalography/trends, Male, Middle Aged, Noise/adverse effects, Speech/physiology, Young Adult
6.
Neuroimage ; 124(Pt A): 906-917, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26436490

ABSTRACT

The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy.
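The state-space MAP/EM decoder itself is involved, but the baseline it improves upon, windowed correlation of the neural response with each speaker's envelope, is easy to sketch. All signals below are simulated stand-ins, and the window length is chosen only to mimic the seconds-scale resolution discussed above:

```python
import numpy as np

rng = np.random.default_rng(4)
fs, dur = 64, 60                         # Hz, s (assumed)
n = fs * dur
env_a = np.abs(rng.standard_normal(n))   # speaker A envelope (stand-in)
env_b = np.abs(rng.standard_normal(n))   # speaker B envelope (stand-in)

# Simulated MEG response: tracks the attended speaker (A) more strongly
meg = 1.0 * env_a + 0.3 * env_b + 1.0 * rng.standard_normal(n)

# Windowed correlation decoder: in each window, attribute attention to
# whichever speaker's envelope correlates better with the response
win = fs * 5                             # 5 s windows
states = []
for i in range(0, n - win + 1, win):
    seg = slice(i, i + win)
    ra = np.corrcoef(meg[seg], env_a[seg])[0, 1]
    rb = np.corrcoef(meg[seg], env_b[seg])[0, 1]
    states.append('A' if ra > rb else 'B')

accuracy = states.count('A') / len(states)   # ground truth is 'A' throughout
```

A state-space model improves on this hard per-window decision by smoothing the attentional state across windows and attaching statistical confidence intervals to it.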


Subjects
Attention/physiology, Loudness Perception/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Algorithms, Auditory Perception/physiology, Environment, Female, Humans, Magnetoencephalography, Male, Neurological Models, Young Adult
7.
Int J Psychophysiol ; 95(2): 184-90, 2015 Feb.
Article in English | MEDLINE | ID: mdl-24841996

ABSTRACT

Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects.


Subjects
Attention/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Magnetocardiography, Speech, Acoustic Stimulation, Brain Mapping, Humans
8.
PLoS One ; 9(12): e114427, 2014.
Article in English | MEDLINE | ID: mdl-25490720

ABSTRACT

Humans routinely segregate a complex acoustic scene into different auditory streams, through the extraction of bottom-up perceptual cues and the use of top-down selective attention. To determine the neural mechanisms underlying this process, neural responses obtained through magnetoencephalography (MEG) were correlated with behavioral performance in the context of an informational masking paradigm. In half the trials, subjects were asked to detect frequency deviants in a target stream, consisting of a rhythmic tone sequence, embedded in a separate masker stream composed of a random cloud of tones. In the other half of the trials, subjects were exposed to identical stimuli but asked to perform a different task: to detect tone-length changes in the random cloud of tones. In order to verify that the normalized neural response to the target sequence served as an indicator of streaming, we correlated neural responses with behavioral performance under a variety of stimulus parameters (target tone rate, target tone frequency, and the "protection zone", that is, the spectral area with no tones around the target frequency) and attentional states (changing task objective while maintaining the same stimuli). In all conditions that facilitated target/masker streaming behaviorally, MEG normalized neural responses also changed in a manner consistent with the behavior. Thus, attending to the target stream caused a significant increase in power and phase coherence of the responses in recording channels, correlated with an increase in the behavioral performance of the listeners. Normalized neural target responses also increased as the protection zone widened and as the frequency of the target tones increased. Finally, when the target sequence rate increased, the buildup of the normalized neural responses was significantly faster, mirroring the accelerated buildup of the streaming percepts. Our data thus support close links between the perceptual and neural consequences of auditory stream segregation.
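The phase-coherence measure invoked above can be sketched as inter-trial phase coherence: the length of the mean unit phase vector at the target rate across trials. The target rate, signal amplitude, and trial count below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
fs, dur, f_target = 250, 2.0, 4.0     # target tone rate of 4 Hz (assumed)
t = np.arange(0, dur, 1 / fs)
n_trials = 40

def phase_coherence(trials, f):
    # FFT phase at f for each trial; coherence = length of the mean
    # unit phase vector (1 = perfectly phase-locked, ~0 = random phases)
    k = int(round(f * dur))           # FFT bin index of the target rate
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.exp(1j * phases).mean())

# Attended trials carry a phase-locked component at the target rate;
# ignored trials are noise only
attended = (0.5 * np.sin(2 * np.pi * f_target * t)
            + rng.standard_normal((n_trials, t.size)))
ignored = rng.standard_normal((n_trials, t.size))

pc_att = phase_coherence(attended, f_target)
pc_ign = phase_coherence(ignored, f_target)
```

Because phase coherence discards amplitude, it complements the power measure: it indexes the consistency of the response timing relative to the target rhythm.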


Subjects
Brain/physiology, Perceptual Masking/physiology, Acoustic Stimulation, Adult, Auditory Cortex/physiology, Brain Mapping, Female, Humans, Magnetoencephalography, Male, Psychoacoustics, Time Factors, Young Adult
9.
J Neurosci ; 33(13): 5728-35, 2013 Mar 27.
Article in English | MEDLINE | ID: mdl-23536086

ABSTRACT

Speech recognition is remarkably robust to the listening background, even when the energy of background sounds strongly overlaps with that of speech. How the brain transforms the corrupted acoustic signal into a reliable neural representation suitable for speech recognition, however, remains elusive. Here, we hypothesize that this transformation is performed at the level of auditory cortex through adaptive neural encoding, and we test the hypothesis by recording, using MEG, the neural responses of human subjects listening to a narrated story. Spectrally matched stationary noise, which has maximal acoustic overlap with the speech, is mixed in at various intensity levels. Despite the severe acoustic interference caused by this noise, it is here demonstrated that low-frequency auditory cortical activity is reliably synchronized to the slow temporal modulations of speech, even when the noise is twice as strong as the speech. Such a reliable neural representation is maintained by intensity contrast gain control and by adaptive processing of temporal modulations at different time scales, corresponding to the neural δ and θ bands. Critically, the precision of this neural synchronization predicts how well a listener can recognize speech in noise, indicating that the precision of the auditory cortical representation limits the performance of speech recognition in noise. Together, these results suggest that, in a complex listening environment, auditory cortex can selectively encode a speech stream in a background insensitive manner, and this stable neural representation of speech provides a plausible basis for background-invariant recognition of speech.
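Two stimulus operations from this design, generating spectrally matched stationary noise and mixing it in at a prescribed SNR, can be sketched directly. A white-noise stand-in replaces the actual narrated-speech recording:

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 16000
speech = rng.standard_normal(fs * 2)      # stand-in for a speech waveform

# Spectrally matched noise: keep the magnitude spectrum, randomize phases
spec = np.fft.rfft(speech)
phases = rng.uniform(0, 2 * np.pi, spec.size)
noise = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=speech.size)

def mix_at_snr(speech, noise, snr_db):
    # Scale the noise so that 10*log10(P_speech / P_noise) equals snr_db
    p_s, p_n = np.mean(speech ** 2), np.mean(noise ** 2)
    noise = noise * np.sqrt(p_s / (p_n * 10 ** (snr_db / 10)))
    return speech + noise, noise

# "Noise twice as strong as the speech" (in power) is about -3 dB SNR
mixture, scaled = mix_at_snr(speech, noise, snr_db=-3.0)
achieved = 10 * np.log10(np.mean(speech ** 2) / np.mean(scaled ** 2))
```

Phase randomization preserves the long-term magnitude spectrum, which is what makes the masker "spectrally matched" and maximizes its acoustic overlap with the speech.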


Subjects
Psychological Adaptation/physiology, Auditory Cortex/cytology, Auditory Perception/physiology, Psychological Discrimination/physiology, Neurons/physiology, Speech/physiology, Acoustic Stimulation, Adult, Analysis of Variance, Auditory Cortex/physiology, Brain Waves/physiology, Computer Simulation, Cortical Synchronization, Female, Humans, Magnetoencephalography, Male, Neurological Models, Noise, Psychoacoustics, Reaction Time, Spectrum Analysis, Statistics as Topic, Young Adult
10.
J Acoust Soc Am ; 133(1): EL7-12, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23298020

ABSTRACT

Modern psychophysical models of auditory modulation processing suggest that concurrent auditory features with syllabic (~5 Hz) and phonemic rates (~20 Hz) are processed by different modulation filterbank elements, whereas features at similar modulation rates are processed together by a single element. The neurophysiology of concurrent modulation processing at speech-relevant rates is here investigated using magnetoencephalography. Results demonstrate expected neural responses to stimulus modulation frequencies; nonlinear interaction frequencies are also present, but, critically, only for nearby rates, analogous to "beating" in a cochlear filter. This provides direct physiological evidence for modulation filterbanks, allowing separate processing of concurrent syllabic and phonemic modulations.
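Why would a nonlinearity produce interaction frequencies at all? A static rectifying compression applied to the sum of two nearby modulation rates creates a component at their difference ("beat") rate that the input entirely lacks. The specific nonlinearity below is an assumption for illustration, not the study's model:

```python
import numpy as np

fs, dur = 200.0, 10.0
t = np.arange(0, dur, 1 / fs)
f1, f2 = 19.0, 23.0            # two nearby phonemic-range rates (assumed)

# Sum of two modulations, then a static rectifying compression
x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
y = np.abs(x) ** 0.5           # assumed nonlinearity (rectify + compress)

freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp(sig, f):
    # Spectral amplitude of sig at frequency f
    spec = np.abs(np.fft.rfft(sig)) / t.size
    return spec[np.argmin(np.abs(freqs - f))]

beat_in  = amp(x, f2 - f1)     # input has no energy at the 4 Hz beat rate
beat_out = amp(y, f2 - f1)     # the nonlinearity creates the interaction term
```

In the filterbank interpretation above, such interaction products are observed only when both rates fall within a single modulation filter, i.e., only for nearby rates.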


Subjects
Auditory Cortex/physiology, Phonetics, Speech Perception, Acoustic Stimulation, Auditory Evoked Potentials, Female, Humans, Magnetoencephalography, Male, Nonlinear Dynamics, Psychoacoustics, Speech Acoustics, Speech Intelligibility, Time Factors, Young Adult
11.
Proc Natl Acad Sci U S A ; 109(29): 11854-9, 2012 Jul 17.
Article in English | MEDLINE | ID: mdl-22753470

ABSTRACT

A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Here, we address this question by recording from subjects selectively listening to one of two competing speakers, either of different or the same sex, using magnetoencephalography. Individual neural representations are seen for the speech of the two speakers, with each being selectively phase locked to the rhythm of the corresponding speech stream and from which can be exclusively reconstructed the temporal envelope of that speech stream. The neural representation of the attended speech dominates responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensity of the attended and background speakers is separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker but not to the intensity of the background speaker, suggesting an object-level intensity gain control. In summary, these results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation.


Subjects
Attention, Auditory Cortex/physiology, Auditory Perception/physiology, Psychological Discrimination/physiology, Hearing/physiology, Acoustic Stimulation, Adult, Female, Humans, Magnetoencephalography, Male, Theoretical Models, Sex Factors
12.
J Neurophysiol ; 107(8): 2033-41, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21975451

ABSTRACT

Slow acoustic modulations below 20 Hz, of varying bandwidths, are dominant components of speech and many other natural sounds. The dynamic neural representations of these modulations are difficult to study through noninvasive neural-recording methods, however, because of the omnipresent background of slow neural oscillations throughout the brain. We recorded the auditory steady-state responses (aSSR) to slow amplitude modulations (AM) from 14 human subjects using magnetoencephalography. The responses to five AM rates (1.5, 3.5, 7.5, 15.5, and 31.5 Hz) and four types of carrier (pure tone and 1/3-, 2-, and 5-octave pink noise) were investigated. The phase-locked aSSR was detected reliably in all conditions. The response power generally decreases with increasing modulation rate, and the response latency is between 100 and 150 ms for all but the highest rates. Response properties depend only weakly on the bandwidth. Analysis of the complex-valued aSSR magnetic fields in the Fourier domain reveals several neural sources with different response phases. These neural sources of the aSSR, when approximated by a single equivalent current dipole (ECD), are distinct from and medial to the ECD location of the N1m response. These results demonstrate that the globally synchronized activity in the human auditory cortex is phase locked to slow temporal modulations below 30 Hz, and the neural sensitivity decreases with an increasing AM rate, with relative insensitivity to bandwidth.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Auditory Evoked Potentials/physiology, Magnetoencephalography/methods, Female, Humans, Male, Reaction Time/physiology, Time Factors
13.
Brain Topogr ; 24(2): 134-48, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21380858

ABSTRACT

Most ecologically natural sensory inputs are not limited to a single modality. While it is possible to use real ecological materials as experimental stimuli to investigate the neural basis of multi-sensory experience, parametric control of such tokens is limited. By using artificial bimodal stimuli composed of approximations to ecological signals, we aim to observe the interactions between putatively relevant stimulus attributes. Here we use MEG as an electrophysiological tool and employ as a measure the steady-state response (SSR), an experimental paradigm typically applied to unimodal signals. In this experiment we quantify the responses to a bimodal audio-visual signal with different degrees of temporal (phase) congruity, focusing on stimulus properties critical to audiovisual speech. An amplitude modulated auditory signal ('pseudo-speech') is paired with a radius-modulated ellipse ('pseudo-mouth'), with the envelope of low-frequency modulations occurring in phase or at offset phase values across modalities. We observe (i) that it is possible to elicit an SSR to bimodal signals; (ii) that bimodal signals exhibit greater response power than unimodal signals; and (iii) that the SSR power at specific harmonics and sensors differentially reflects the congruity between signal components. Importantly, we argue that effects found at the modulation frequency and second harmonic reflect differential aspects of neural coding of multisensory signals. The experimental paradigm facilitates a quantitative characterization of properties of multi-sensory speech and other bimodal computations.


Subjects
Auditory Perception/physiology, Auditory Evoked Potentials/physiology, Visual Evoked Potentials/physiology, Magnetoencephalography/methods, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Brain Mapping/methods, Female, Humans, Male, Photic Stimulation/methods, Young Adult
14.
Neuropsychologia ; 48(11): 3262-71, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20633569

ABSTRACT

Studies in all sensory modalities have demonstrated amplification of early brain responses to attended signals, but less is known about the processes by which listeners selectively ignore stimuli. Here we use MEG and a new paradigm to dissociate the effects of selectively attending and of actively ignoring in time. Two different tasks were performed successively on the same acoustic stimuli: triplets of tones (A, B, C) with noise bursts interspersed between the triplets. In the COMPARE task, subjects were instructed to respond when tones A and C were of the same frequency. In the PASSIVE task, they were instructed to respond as fast as possible to noise bursts. COMPARE requires attending to A and C and actively ignoring tone B, whereas PASSIVE involves neither attending to nor ignoring the tones. The data were analyzed separately for frontal and auditory-cortical channels to independently address attentional effects on low-level sensory versus putative control processing. We observe attend/ignore effects as early as 100 ms post-stimulus onset in auditory cortex. These appear to be generated by modulation of exogenous (stimulus-driven) sensory evoked activity. Specifically related to ignoring, we demonstrate that active-ignoring-induced input inhibition involves early selection. We identified a sequence of early (<200 ms post-onset) auditory cortical effects, comprising onset-response attenuation and the emergence of an inhibitory response, and provide new, direct evidence that listeners actively ignoring a sound can reduce their stimulus-related activity in auditory cortex by 100 ms after onset when this is required to execute specific behavioral objectives.


Subjects
Attention/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Sensory Gating/physiology, Acoustic Stimulation, Brain/physiology, Statistical Data Interpretation, Electroencephalography, Auditory Evoked Potentials/physiology, Female, Humans, Male, Pitch Discrimination/physiology, Psychomotor Performance/physiology, Young Adult
15.
J Neurophysiol ; 102(5): 2731-43, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19692508

ABSTRACT

Natural sounds such as speech contain multiple levels and multiple types of temporal modulations. Because of nonlinearities of the auditory system, however, the neural response to multiple, simultaneous temporal modulations cannot be predicted from the neural responses to single modulations. Here we show the cortical neural representation of an auditory stimulus simultaneously frequency modulated (FM) at a high rate, f(FM) approximately 40 Hz, and amplitude modulated (AM) at a slow rate, f(AM) <15 Hz. Magnetoencephalography recordings show that the fast FM and slow AM stimulus features evoke two separate but not independent auditory steady-state responses (aSSR) at f(FM) and f(AM), respectively. The power, rather than the phase locking, of both aSSRs decreases with increasing stimulus f(AM). The aSSR at f(FM) is itself simultaneously amplitude modulated and phase modulated with fundamental frequency f(AM), showing that the slow stimulus AM is not only encoded in the neural response at f(AM) but also encoded in the instantaneous amplitude and phase of the neural response at f(FM). Both the amplitude modulation and the phase modulation of the aSSR at f(FM) are most salient for low stimulus f(AM) but remain observable at the highest tested f(AM) (13.8 Hz). The instantaneous amplitude of the aSSR at f(FM) is successfully predicted by a model containing temporal integration on two time scales, approximately 25 and approximately 200 ms, followed by a static compression nonlinearity.
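The proposed model, temporal integration on ~25 ms and ~200 ms time scales followed by static compression, can be sketched with first-order leaky integrators. The compression exponent and modulation depth below are assumed; the point of the sketch is only that the predicted response modulation falls off as f(AM) increases, matching the reported trend:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 5, 1 / fs)

def leaky_integrate(x, tau, fs):
    # First-order low-pass (leaky integrator) with time constant tau
    a = np.exp(-1 / (tau * fs))
    y = np.empty_like(x)
    acc = x[0]
    for i, v in enumerate(x):
        acc = a * acc + (1 - a) * v
        y[i] = acc
    return y

def predicted_depth(f_am):
    env = 1 + 0.9 * np.sin(2 * np.pi * f_am * t)       # stimulus AM envelope
    out = leaky_integrate(env, 0.025, fs)              # ~25 ms integration
    out = leaky_integrate(out, 0.200, fs)              # ~200 ms integration
    out = out ** 0.3                                   # static compression
    out = out[t > 1.0]                                 # drop onset transient
    return (out.max() - out.min()) / (out.max() + out.min())

# Modulation depth of the predicted response at increasing AM rates
depths = [predicted_depth(f) for f in (1.5, 3.5, 7.5, 13.8)]
```

Each integrator acts as a low-pass filter on the envelope, so the cascade attenuates faster AM rates more strongly, and the compression rescales but preserves that ordering.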


Subjects
Auditory Cortex/cytology, Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping, Neurons/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Computer Simulation, Female, Humans, Magnetoencephalography/methods, Male, Neurological Models, Nonlinear Dynamics, Psychoacoustics, Reaction Time/physiology, Sound, Young Adult
16.
PLoS Biol ; 7(6): e1000129, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19529760

ABSTRACT

The mechanism by which a complex auditory scene is parsed into coherent objects depends on poorly understood interactions between task-driven and stimulus-driven attentional processes. We illuminate these interactions in a simultaneous behavioral-neurophysiological study in which we manipulate participants' attention to different features of an auditory scene (with a regular target embedded in an irregular background). Our experimental results reveal that attention to the target, rather than to the background, correlates with a sustained (steady-state) increase in the measured neural target representation over the entire stimulus sequence, beyond auditory attention's well-known transient effects on onset responses. This enhancement, in both power and phase coherence, occurs exclusively at the frequency of the target rhythm, and is only revealed when contrasting two attentional states that direct participants' focus to different features of the acoustic stimulus. The enhancement originates in auditory cortex and covaries with both behavioral task and the bottom-up saliency of the target. Furthermore, the target's perceptual detectability improves over time, correlating strongly, within participants, with the target representation's neural buildup. These results have substantial implications for models of foreground/background organization, supporting a role of neuronal temporal synchrony in mediating auditory object formation.


Subjects
Attention/physiology, Auditory Perception/physiology, Acoustic Stimulation, Adult, Behavior, Female, Humans, Magnetoencephalography, Male, Middle Aged, Nervous System, Time Factors
17.
Brain Res ; 1213: 78-90, 2008 Jun 05.
Article in English | MEDLINE | ID: mdl-18455707

ABSTRACT

Auditory objects are detected if they differ acoustically from the ongoing background. In simple cases, the appearance or disappearance of an object involves a transition in power, or frequency content, of the ongoing sound. However, it is more realistic that the background and object possess substantial non-stationary statistics, and the task is then to detect a transition in the pattern of ongoing statistics. How does the system detect and process such transitions? We use magnetoencephalography (MEG) to measure early auditory cortical responses to transitions between constant tones, regularly alternating, and randomly alternating tone-pip sequences. Such transitions embody key characteristics of natural auditory temporal edges. Our data demonstrate that the temporal dynamics and response polarity of the neural temporal-edge-detection processes depend in specific ways on the generalized nature of the edge (the context preceding and following the transition) and suggest that distinct neural substrates in core and non-core auditory cortex are recruited depending on the kind of computation (discovery of a violation of regularity, vs. the detection of a new regularity) required to extract the edge from the ongoing fluctuating input entering a listener's ears.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Signal Detection, Psychological/physiology, Acoustic Stimulation/methods, Adult, Analysis of Variance, Brain Mapping, Female, Functional Laterality, Humans, Magnetoencephalography, Male, Reaction Time/physiology
18.
J Neurophysiol ; 98(6): 3473-85, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17898148

ABSTRACT

Complex natural sounds (e.g., animal vocalizations or speech) can be characterized by specific spectrotemporal patterns the components of which change in both frequency (FM) and amplitude (AM). The neural coding of AM and FM has been widely studied in humans and animals but typically with either pure AM or pure FM stimuli. The neural mechanisms employed to perceptually unify AM and FM acoustic features remain unclear. Using stimuli with simultaneous sinusoidal AM (at rate f(AM) = 37 Hz) and FM (with varying rates f(FM)), we use magnetoencephalography (MEG) to investigate the elicited auditory steady-state response (aSSR) at relevant frequencies (f(AM), f(FM), f(AM) + f(FM)). Previous work demonstrated that for sounds with slower FM dynamics (f(FM) < 5 Hz), the phase of the aSSR at f(AM) tracked the FM; in other words, AM and FM features were co-tracked and co-represented by "phase modulation" encoding. This study explores the neural coding mechanism for stimuli with faster FM dynamics (≤ 30 Hz), demonstrating that at faster rates (f(FM) > 5 Hz), there is a transition from pure phase modulation encoding to a single-upper-sideband (SSB) response (at frequency f(AM) + f(FM)) pattern. We propose that this unexpected SSB response can be explained by the additional involvement of subsidiary AM encoding responses simultaneously to, and in quadrature with, the ongoing phase modulation. These results, using MEG to reveal a possible neural encoding of specific acoustic properties, demonstrate more generally that physiological tests of encoding hypotheses can be performed noninvasively on human subjects, complementing invasive, single-unit recordings in animals.
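A combined AM+FM stimulus of the kind the abstract describes can be sketched like this. The AM rate of 37 Hz comes from the abstract; the carrier frequency, FM rate, FM depth, and duration are assumptions for illustration only.

```python
import numpy as np

FS = 8_000           # sampling rate in Hz (assumed)
DUR = 2.0            # duration in seconds (assumed)
F_CARRIER = 1_000.0  # carrier frequency in Hz (assumed)
F_AM = 37.0          # sinusoidal AM rate, as given in the abstract
F_FM = 8.0           # one "fast" FM rate, > 5 Hz (assumed)
FM_DEPTH = 100.0     # peak frequency excursion in Hz (assumed)

t = np.arange(int(DUR * FS)) / FS

# Sinusoidal amplitude envelope at f_AM (100% modulation depth).
envelope = 0.5 * (1.0 + np.cos(2 * np.pi * F_AM * t))

# Sinusoidal FM: instantaneous frequency F_CARRIER + FM_DEPTH*cos(2*pi*F_FM*t),
# obtained by integrating the frequency trajectory into the phase.
phase = 2 * np.pi * (F_CARRIER * t
                     + (FM_DEPTH / (2 * np.pi * F_FM)) * np.sin(2 * np.pi * F_FM * t))

stim = envelope * np.cos(phase)
```

The paper's finding concerns the *neural* response spectrum: at fast f(FM), the aSSR develops a component at f(AM) + f(FM) (the single upper sideband), which is not simply a spectral component of the stimulus itself.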


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Pitch Perception/physiology, Acoustic Stimulation, Algorithms, Computer Simulation, Data Interpretation, Statistical, Humans, Magnetoencephalography, Models, Neurological
19.
J Neurophysiol ; 98(1): 224-31, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17493921

ABSTRACT

We use magnetoencephalography to study human auditory cortical processing of changes in interaural correlation (IAC). We studied transitions from correlated (identical signals at the 2 ears) to uncorrelated (different signals at the 2 ears) or vice versa for two types of wide-band noise stimuli: CHANGE signals contained a single IAC change (or none) and ALT signals alternated between correlated and uncorrelated at a constant rate. The relevant transitions, from correlated to uncorrelated or vice versa, are physically identical in both stimulus conditions, but auditory cortical response patterns differed substantially. CHANGE stimuli exhibited a response asymmetry in their temporal dynamics and magnetic field morphology according to the direction of change. Distinct field patterns indicate the involvement of separate neural substrates for processing, and distinct latencies are suggestive of different temporal integration windows. In contrast, the temporal dynamics of responses to change in the ALT stimuli did not differ substantially according to the direction of change. Notably, the uncorrelated-to-correlated transition in the ALT stimuli showed a first deflection approximately 90 ms earlier than for the same transition in the CHANGE stimuli and with an opposite magnetic field distribution. This finding suggests that as early as 50 ms after the onset of an IAC transition, a given physical change is processed differentially depending on stimulus context. Consequently, even early cortical activation cannot be interpreted independently of the specific long-term stimulus context used in the experiment.
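The two stimulus families (CHANGE and ALT) can be sketched as follows. This is a simplified illustration, not the authors' code: segment duration, sampling rate, and number of alternations are arbitrary assumptions.

```python
import numpy as np

FS = 16_000            # sampling rate in Hz (assumed)
SEG = int(0.5 * FS)    # 500 ms segments (assumed)
rng = np.random.default_rng(1)

def correlated(n):
    """Identical wide-band noise at both ears (IAC = 1)."""
    x = rng.standard_normal(n)
    return np.stack([x, x])          # shape: (2 ears, n samples)

def uncorrelated(n):
    """Independent noise at each ear (IAC ~ 0)."""
    return np.stack([rng.standard_normal(n), rng.standard_normal(n)])

# CHANGE-type stimulus: a single correlated-to-uncorrelated transition.
change = np.concatenate([correlated(SEG), uncorrelated(SEG)], axis=1)

# ALT-type stimulus: alternation between the two states at a constant rate.
alt = np.concatenate([correlated(SEG) if i % 2 == 0 else uncorrelated(SEG)
                      for i in range(4)], axis=1)

def iac(seg):
    """Empirical interaural correlation of a two-channel segment."""
    return np.corrcoef(seg[0], seg[1])[0, 1]
```

The key point of the design is that the local transition (the sample-level change from correlated to uncorrelated noise, or vice versa) is physically identical in both families; only the long-term context differs.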


Subjects
Auditory Cortex/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Functional Laterality/physiology, Acoustic Stimulation/methods, Adult, Brain Mapping, Electromagnetic Fields, Female, Humans, Male
20.
J Neurosci ; 27(19): 5207-14, 2007 May 09.
Article in English | MEDLINE | ID: mdl-17494707

ABSTRACT

Auditory environments vary as a result of the appearance and disappearance of acoustic sources, as well as fluctuations characteristic of the sources themselves. The appearance of an object is often manifest as a transition in the pattern of ongoing fluctuation, rather than an onset or offset of acoustic power. How does the system detect and process such transitions? Based on magnetoencephalography data, we show that the temporal dynamics and response morphology of the neural temporal-edge detection processes depend in precise ways on the nature of the change. We measure auditory cortical responses to transitions between "disorder," modeled as a sequence of random frequency tone pips, and "order," modeled as a constant tone. Such transitions embody key characteristics of natural auditory edges. Early cortical responses (from approximately 50 ms post-transition) reveal that order-disorder transitions, and vice versa, are processed by different neural mechanisms. Their dynamics suggest that the auditory cortex optimally adjusts to stimulus statistics, even when this is not required for overt behavior. Furthermore, this response profile bears a striking similarity to that measured from another order-disorder transition, between interaurally correlated and uncorrelated noise, a radically different stimulus. This parallelism suggests the existence of a general mechanism that operates early in the processing stream on the abstract statistics of the auditory input, and is putatively related to the processes of constructing a new representation or detecting a deviation from a previously acquired model of the auditory scene. Together, the data reveal information about the mechanisms with which the brain samples, represents, and detects changes in the environment.


Subjects
Auditory Cortex/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Acoustic Stimulation, Adult, Female, Humans, Magnetoencephalography, Male, Nerve Net/physiology, Reaction Time/physiology, Time Factors, Time Perception/physiology