1.
J Neurosci; 35(18): 7256-63, 2015 May 06.
Article in English | MEDLINE | ID: mdl-25948273

ABSTRACT

The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene.
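
The "linear regression methods" mentioned above are not detailed in the abstract; what follows is a minimal sketch of one standard instantiation, a lagged linear (temporal response function) model fit with ridge regression. The simulated data, lag range, and regularization value are illustrative assumptions, not the authors' pipeline.

    import numpy as np

    def lagged_design(stimulus, lags):
        """Design matrix whose columns are time-lagged copies of the stimulus feature."""
        n = len(stimulus)
        X = np.zeros((n, len(lags)))
        for j, lag in enumerate(lags):
            X[lag:, j] = stimulus[: n - lag]  # stimulus value `lag` samples in the past
        return X

    def fit_trf(stimulus, eeg, lags, ridge=1.0):
        """Ridge-regress EEG onto lagged stimulus features; returns one weight per lag."""
        X = lagged_design(stimulus, lags)
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ eeg)

    # Example: 60 s at 128 Hz, non-negative lags spanning roughly 0-300 ms post-stimulus.
    fs = 128
    lags = np.arange(0, int(0.3 * fs))
    coherence = np.random.randn(60 * fs)   # stand-in for the stochastic coherence signal
    eeg = np.convolve(coherence, np.hanning(20), mode="same") + np.random.randn(60 * fs)
    trf = fit_trf(coherence, eeg, lags)    # peaks in `trf` index response latencies

The lags at which the fitted weights peak would correspond to response latencies such as the ~115-185 ms window reported above.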


Subject(s)
Acoustic Stimulation/methods; Auditory Pathways/physiology; Auditory Perception/physiology; Brain Mapping/methods; Psychoacoustics; Adult; Electroencephalography/methods; Female; Humans; Magnetic Resonance Imaging/methods; Male; Time Factors; Young Adult
2.
Cereb Cortex; 25(7): 1697-706, 2015 Jul.
Article in English | MEDLINE | ID: mdl-24429136

ABSTRACT

How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus, it would be extremely useful for research in many populations if stimulus reconstruction were effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain-computer interfaces.
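
Stimulus reconstruction as described here amounts to a backward model: a linear decoder maps multichannel EEG back to the speech envelope, and attention is read out from which speaker's envelope the reconstruction correlates with more strongly. The sketch below assumes ridge regression with illustrative lag and parameter choices; it is not the authors' exact pipeline.

    import numpy as np

    def lagged_eeg(eeg, lags):
        """Stack lagged copies of every channel; eeg is (n_samples, n_channels)."""
        n, c = eeg.shape
        X = np.zeros((n, c * len(lags)))
        for j, lag in enumerate(lags):
            # EEG `lag` samples AFTER the stimulus, since cortical tracking lags speech
            X[: n - lag, j * c:(j + 1) * c] = eeg[lag:]
        return X

    def train_decoder(eeg, envelope, lags, ridge=100.0):
        """Fit a linear decoder from lagged EEG to the attended speech envelope."""
        X = lagged_eeg(eeg, lags)
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ envelope)

    def decode_attention(eeg, env_a, env_b, decoder, lags):
        """Reconstruct the envelope from EEG; the better-correlated speaker wins."""
        recon = lagged_eeg(eeg, lags) @ decoder
        r_a = np.corrcoef(recon, env_a)[0, 1]
        r_b = np.corrcoef(recon, env_b)[0, 1]
        return ("A" if r_a > r_b else "B"), (r_a, r_b)

Applied per trial to ≈60 s segments, the sign of the correlation difference gives the single-trial attention decision reported above.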


Subject(s)
Attention/physiology; Brain/physiology; Electroencephalography/methods; Signal Processing, Computer-Assisted; Speech Perception/physiology; Acoustic Stimulation; Adult; Female; Humans; Male; Neuropsychological Tests; Time Factors
3.
Article in English | MEDLINE | ID: mdl-24110309

ABSTRACT

Traditionally, the use of electroencephalography (EEG) to study the neural processing of natural stimuli in humans has been hampered by the need to repeatedly present discrete stimuli. Progress has been made recently by the realization that cortical population activity tracks the amplitude envelope of speech stimuli. This has led to studies using linear regression methods that allow the presentation of continuous speech. One such method, known as stimulus reconstruction, has so far been utilized only with multi-electrode cortical surface recordings and magnetoencephalography (MEG). Here, in two studies, we show that such an approach is also possible with EEG, despite the poorer signal-to-noise ratio of the data. In the first study, we show that it is possible to decode attention in a naturalistic cocktail party scenario on a single-trial (≈60 s) basis. In the second, we show that the cortical representation of the auditory speech envelope is more robust when accompanied by visual speech. The sensitivity of this inexpensive, widely accessible technology for the online monitoring of natural stimuli has implications for the design of future studies of the cocktail party problem and for the implementation of EEG-based brain-computer interfaces.
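
All three studies above depend on the amplitude envelope of speech as the regression target. One common way to compute it is the Hilbert magnitude, low-pass filtered and resampled to the EEG rate; the cutoff and rates in this sketch are illustrative assumptions, not the papers' exact preprocessing.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert, resample

    def speech_envelope(audio, fs_audio, fs_eeg=128, cutoff=8.0):
        """Broadband amplitude envelope: |Hilbert|, low-passed, resampled to the EEG rate."""
        env = np.abs(hilbert(audio))                # instantaneous amplitude
        b, a = butter(4, cutoff / (fs_audio / 2))   # 4th-order low-pass at `cutoff` Hz
        env = filtfilt(b, a, env)                   # zero-phase filtering
        n_out = int(len(audio) * fs_eeg / fs_audio)
        return resample(env, n_out)                 # align with the EEG sampling rate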


Subject(s)
Attention/physiology; Electroencephalography/methods; Speech/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Behavior; Female; Humans; Male