Results 1 - 20 of 59
1.
J Neurosci ; 44(14)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38350998

ABSTRACT

Human listeners possess an innate capacity to discern patterns within rapidly unfolding sensory input. Core questions guiding ongoing research focus on the mechanisms through which these representations are acquired and whether the brain prioritizes or suppresses predictable sensory signals. Previous work, using fast auditory sequences (tone-pips presented at a rate of 20 Hz), revealed sustained response effects that appear to track the dynamic predictability of the sequence. Here, we extend the investigation to slower sequences (4 Hz), permitting the isolation of responses to individual tones. Stimuli were 50 ms tone-pips, ordered into random (RND) and regular (REG; a repeating pattern of 10 frequencies) sequences. Two timing profiles were created: in "fast" sequences, tone-pips were presented in direct succession (20 Hz); in "slow" sequences, tone-pips were separated by a 200 ms silent gap (4 Hz). Naive participants (N = 22; both sexes) passively listened to these sequences while brain responses were recorded using magnetoencephalography (MEG). Sustained brain responses were of greater magnitude for REG than for RND patterns. This difference emerged from the third tone after the onset of pattern repetition, even in the slower sequences with their extended pattern durations (2,500 ms), underscoring the remarkable implicit sensitivity of the auditory brain to acoustic regularities. Importantly, brain responses evoked by single tones exhibited the opposite pattern: stronger responses to tones in RND than REG sequences. The demonstration of simultaneous but opposing sustained and evoked response effects reveals concurrent processes that shape the representation of unfolding auditory patterns.
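For concreteness, the sketch below is a minimal Python illustration of the stimulus design described above: 50 ms tone pips arranged into REG (repeating 10-frequency cycle) or RND (random order) sequences, presented contiguously (20 Hz, "fast") or with 200 ms gaps (4 Hz, "slow"). The frequency pool, sampling rate and ramp duration are assumptions, not the authors' values.

```python
# Hypothetical illustration of the REG/RND tone-pip design; not the authors' code.
import numpy as np

FS = 44100            # sampling rate (Hz); assumed
PIP_DUR = 0.05        # 50 ms tone pips (as in the abstract)
GAP_SLOW = 0.2        # 200 ms silent gap -> 4 Hz "slow" presentation

def tone_pip(freq, fs=FS, dur=PIP_DUR, ramp=0.005):
    """One tone pip with raised-cosine onset/offset ramps."""
    t = np.arange(int(dur * fs)) / fs
    pip = np.sin(2 * np.pi * freq * t)
    n = int(ramp * fs)
    env = np.ones_like(pip)
    env[:n] = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))
    env[-n:] = env[:n][::-1]
    return pip * env

def make_sequence(n_pips=60, regular=True, slow=False, rng=None):
    """REG: a repeating cycle of 10 frequencies; RND: frequencies drawn at random."""
    rng = rng or np.random.default_rng()
    pool = np.geomspace(222, 2000, 20)            # assumed frequency pool
    if regular:
        cycle = rng.choice(pool, 10, replace=False)
        freqs = np.tile(cycle, n_pips // 10 + 1)[:n_pips]
    else:
        freqs = rng.choice(pool, n_pips, replace=True)
    gap = np.zeros(int(GAP_SLOW * FS) if slow else 0)
    audio = np.concatenate([np.concatenate([tone_pip(f), gap]) for f in freqs])
    return audio, freqs

audio, order = make_sequence(regular=True, slow=True)   # one 4 Hz REG sequence
```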


Subjects
Auditory Cortex; Auditory Perception; Male; Female; Humans; Acoustic Stimulation/methods; Auditory Perception/physiology; Evoked Potentials, Auditory/physiology; Brain/physiology; Magnetoencephalography; Auditory Cortex/physiology
2.
J Neurosci ; 44(11)2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38331581

ABSTRACT

Microsaccades are small, involuntary eye movements that occur during fixation. Their role is debated, with recent hypotheses proposing a contribution to automatic scene sampling. Microsaccadic inhibition (MSI) refers to the abrupt suppression of microsaccades, typically evoked within 0.1 s after new stimulus onset. The functional significance and neural underpinnings of MSI are subjects of ongoing research. It has been suggested that MSI is a component of the brain's attentional re-orienting network, which facilitates the allocation of attention to new environmental occurrences by reducing disruptions or shifts in gaze that could interfere with processing. The extent to which MSI is reflexive or influenced by top-down mechanisms remains debated. We developed a task that examines the impact of auditory top-down attention on MSI, allowing us to disentangle ocular dynamics from visual sensory processing. Participants (N = 24 and 27; both sexes) listened to two simultaneous streams of tones and were instructed to attend to one stream while detecting specific task "targets." We quantified MSI in response to occasional task-irrelevant events presented in both the attended and unattended streams (frequency steps in Experiment 1, omissions in Experiment 2). The results show that the initial stages of MSI are not affected by auditory attention. However, later stages (∼0.25 s post-event onset), affecting the extent and duration of the inhibition, are enhanced for sounds in the attended stream compared to the unattended stream. These findings provide converging evidence for the reflexive nature of early MSI stages and robustly demonstrate the involvement of auditory attention in modulating the later stages.


Subjects
Eye Movements; Visual Perception; Male; Female; Humans; Visual Perception/physiology; Sensation; Sound; Auditory Perception/physiology
3.
J Neurosci ; 43(26): 4856-4866, 2023 06 28.
Article in English | MEDLINE | ID: mdl-37127361

ABSTRACT

Listening in noisy environments requires effort (the active engagement of attention and other cognitive abilities) as well as increased arousal. The ability to separately quantify the contribution of these components is key to understanding the dynamics of effort and how it may change across listening situations and in certain populations. We concurrently measured two types of ocular data in young participants (both sexes): pupil dilation (PD; thought to index arousal aspects of effort) and microsaccades (MS; hypothesized to reflect automatic visual exploratory sampling), while they performed a speech-in-noise task under high- (HL) and low- (LL) listening load conditions. Sentences were manipulated so that the behaviorally relevant information (keywords) appeared at the end (Experiment 1) or beginning (Experiment 2) of the sentence, resulting in different temporal demands on focused attention. In line with previous reports, PD effects were associated with increased dilation under load. We observed a sustained difference between HL and LL conditions, consistent with increased phasic and tonic arousal. Importantly, we show that MS rate was also modulated by listening load, manifested as a reduced MS rate in HL relative to LL. Critically, in contrast to the sustained difference seen for PD, MS effects were localized in time, specifically during periods when demands on auditory attention were greatest. These results demonstrate that auditory selective attention interfaces with the mechanisms controlling MS generation, establishing MS as an informative measure, complementary to PD, with which to quantify the temporal dynamics of auditory attentional processing under effortful listening conditions. SIGNIFICANCE STATEMENT: Listening effort, reflecting the "cognitive bandwidth" deployed to effectively process sound in adverse environments, contributes critically to listening success. Understanding listening effort and the processes involved in its allocation is a major challenge in auditory neuroscience. Here, we demonstrate that microsaccade rate can be used to index a specific subcomponent of listening effort, the allocation of instantaneous auditory attention, which is distinct from the modulation of arousal indexed by pupil dilation (currently the dominant measure of listening effort). These results reveal the push-pull process through which auditory attention interfaces with the (visual) attention network that controls microsaccades, establishing microsaccades as a powerful tool for measuring auditory attention and its deficits.


Subjects
Pupil; Speech Perception; Male; Female; Humans; Auditory Perception; Noise; Arousal
4.
Eur J Neurosci ; 58(8): 3859-3878, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37691137

ABSTRACT

Listeners often operate in complex acoustic environments, consisting of many concurrent sounds. Accurately encoding and maintaining such auditory objects in short-term memory is crucial for communication and scene analysis. Yet, the neural underpinnings of successful auditory short-term memory (ASTM) performance are currently not well understood. To elucidate this issue, we presented a novel, challenging auditory delayed match-to-sample task while recording MEG. Human participants listened to 'scenes' comprising three concurrent tone pip streams. The task was to indicate, after a delay, whether a probe stream was present in the just-heard scene. We present three key findings: First, behavioural performance revealed faster responses in correct versus incorrect trials as well as in 'probe present' versus 'probe absent' trials, consistent with ASTM search. Second, successful compared with unsuccessful ASTM performance was associated with a significant enhancement of event-related fields and oscillatory activity in the theta, alpha and beta frequency ranges. This extends previous findings of an overall increase of persistent activity during short-term memory performance. Third, using distributed source modelling, we found these effects to be confined mostly to sensory areas during encoding, presumably related to ASTM contents per se. Parietal and frontal sources then became relevant during the maintenance stage, indicating that effective STM operation also relies on ongoing inhibitory processes suppressing task-irrelevant information. In summary, our results deliver a detailed account of the neural patterns that differentiate successful from unsuccessful ASTM performance in the context of a complex, multi-object auditory scene.


Subjects
Auditory Cortex; Memory, Short-Term; Humans; Memory, Short-Term/physiology; Auditory Perception/physiology; Acoustic Stimulation; Sound; Auditory Cortex/physiology
5.
J Neurosci ; 2021 Jun 01.
Article in English | MEDLINE | ID: mdl-34083259

ABSTRACT

The brain is highly sensitive to auditory regularities and exploits the predictable order of sounds in many situations, from parsing complex auditory scenes to the acquisition of language. To understand the impact of stimulus predictability on perception, it is important to determine how the detection of predictable structure influences processing and attention. Here we use pupillometry to gain insight into the effect of sensory regularity on arousal. Pupillometry is a commonly used measure of salience and processing effort, with more perceptually salient or perceptually demanding stimuli consistently associated with larger pupil diameters. In two experiments we tracked human listeners' pupil dynamics while they listened to sequences of 50 ms tone pips of different frequencies. The order of the tone pips was either random or followed a regularity that was deterministic (fully predictable; Experiment 1, n = 18, 11 female) or probabilistic (Experiment 2, n = 20, 17 female). The sequences were rapid, preventing conscious tracking of sequence structure and thus allowing us to focus on the automatic extraction of different types of regularities. We hypothesized that if regularity facilitates processing by reducing processing demands, a smaller pupil diameter would be seen in response to regular relative to random patterns. Conversely, if regularity is associated with heightened arousal and attention (i.e., engages processing resources), the opposite pattern would be expected. In both experiments we observed a smaller sustained (tonic) pupil diameter for regular compared with random sequences, consistent with the former hypothesis and confirming that predictability facilitates sequence processing. SIGNIFICANCE STATEMENT: The brain is highly sensitive to auditory regularities. To appreciate the impact that the presence of predictability has on perception, we need to better understand how a predictable structure influences processing and attention. We recorded listeners' pupil responses to sequences of tones that followed either a predictable or unpredictable pattern, as the pupil can be used to implicitly tap into these different cognitive processes. We found that the pupil showed a smaller sustained diameter to predictable sequences, indicating that predictability eased processing rather than boosted attention. The findings suggest that the pupil response can be used to study the automatic extraction of regularities, and that the effects are most consistent with predictability helping the listener to efficiently process upcoming sounds.

6.
PLoS Comput Biol ; 17(5): e1008995, 2021 May.
Article in English | MEDLINE | ID: mdl-34038404

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pcbi.1008304.].

7.
PLoS Comput Biol ; 16(11): e1008304, 2020 11.
Article in English | MEDLINE | ID: mdl-33147209

ABSTRACT

Statistical learning and probabilistic prediction are fundamental processes in auditory cognition. A prominent computational model of these processes is Prediction by Partial Matching (PPM), a variable-order Markov model that learns by internalizing n-grams from training sequences. However, PPM has limitations as a cognitive model: in particular, it has a perfect memory that weights all historic observations equally, which is inconsistent with the memory capacity constraints and recency effects observed in human cognition. We address these limitations with PPM-Decay, a new variant of PPM that introduces a customizable memory decay kernel. In three studies (one with artificially generated sequences, one with chord sequences from Western music, and one with new behavioral data from an auditory pattern detection experiment), we show how this decay kernel improves the model's predictive performance for sequences whose underlying statistics change over time, and enables the model to capture effects of memory constraints on auditory pattern detection. The resulting model is available in our new open-source R package, ppm (https://github.com/pmcharrison/ppm).
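As a concrete (and deliberately simplified) illustration of the modelling idea, the Python sketch below implements a variable-order n-gram predictor whose counts decay geometrically with each new symbol, so that recent observations dominate prediction. It is not the published PPM-Decay model: the escape mechanism, the exact decay kernel and the ppm R package implementation all differ in detail.

```python
# Hypothetical sketch of n-gram prediction with memory decay; not the ppm package.
from collections import defaultdict

class DecayingNGramModel:
    """Variable-order n-gram predictor whose counts decay with each new symbol."""

    def __init__(self, order=3, decay=0.95, alphabet=()):
        self.order = order
        self.decay = decay                        # per-symbol geometric decay
        self.alphabet = set(alphabet)
        self.counts = defaultdict(lambda: defaultdict(float))

    def predict(self, history):
        """Distribution over the alphabet, backing off from the longest
        matching context to shorter ones (simplified PPM-style fallback)."""
        for k in range(min(self.order, len(history)), -1, -1):
            ctx = tuple(history[-k:]) if k else ()
            table = self.counts.get(ctx)
            if table:
                total = sum(table.values()) + len(self.alphabet)   # add-one smoothing
                return {s: (table.get(s, 0.0) + 1.0) / total for s in self.alphabet}
        n = max(len(self.alphabet), 1)
        return {s: 1.0 / n for s in self.alphabet}

    def update(self, history, symbol):
        self.alphabet.add(symbol)
        for table in self.counts.values():        # decay all stored counts
            for s in table:
                table[s] *= self.decay
        for k in range(min(self.order, len(history)) + 1):
            ctx = tuple(history[-k:]) if k else ()
            self.counts[ctx][symbol] += 1.0

model = DecayingNGramModel(order=2, alphabet="ABC")
seq = list("ABCABCABCABC")
for i, s in enumerate(seq):
    p = model.predict(seq[:i])                    # prediction made before seeing s
    model.update(seq[:i], s)
print(p)                                          # 'C' dominates after context "AB"
```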


Subjects
Auditory Perception; Computer Simulation; Memory; Algorithms; Humans; Music
8.
Behav Res Methods ; 53(4): 1551-1562, 2021 08.
Article in English | MEDLINE | ID: mdl-33300103

ABSTRACT

Online experimental platforms can be used as an alternative to, or a complement to, lab-based research. However, when conducting auditory experiments via online methods, the researcher has limited control over the participants' listening environment. We offer a new method to probe one aspect of that environment: headphone use. Headphones not only provide better control of sound presentation but can also "shield" the listener from background noise. Here we present a rapid (< 3 min) headphone screening test based on Huggins Pitch (HP), a perceptual phenomenon that can only be detected when stimuli are presented dichotically. We validate this test using a cohort of "Trusted" online participants who completed the test using both headphones and loudspeakers. The same participants were also used to test an existing headphone test (AP test; Woods et al., 2017, Attention, Perception, & Psychophysics). We demonstrate that, compared to the AP test, the HP test has a higher selectivity for headphone users, rendering it a compelling alternative to existing methods. Overall, the new HP test correctly detects 80% of headphone users and has a false-positive rate of 20%. Moreover, we demonstrate that combining the HP test with an additional test (either the AP test or an alternative based on a beat test, BT) can lower the false-positive rate to ~7%. This should be useful in situations where headphone use is particularly critical (e.g., dichotic or spatial manipulations). Code for implementing the new tests is publicly available in JavaScript and through Gorilla (gorilla.sc).
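To illustrate the dichotic principle behind the HP test, the Python sketch below generates a Huggins Pitch stimulus under assumed parameters (this is not the published JavaScript/Gorilla test code): identical noise is delivered to both ears except for a 180° interaural phase shift in a narrow band around the target frequency. Over headphones the binaural system converts the phase disparity into a faint pitch; over loudspeakers the two channels mix acoustically and the cue is lost, which is what makes the test selective for headphone users.

```python
# Hypothetical Huggins Pitch generator; parameters are assumptions.
import numpy as np

def huggins_pitch(f0=600.0, bw=0.06, dur=1.0, fs=44100, rng=None):
    """Stereo noise with a 180-degree interaural phase shift in a narrow band
    (relative width bw) around f0, yielding a faint dichotic pitch at f0."""
    rng = rng or np.random.default_rng()
    n = int(dur * fs)
    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = np.abs(freqs - f0) < (bw * f0 / 2)     # narrow band around f0
    spec[band] *= -1                              # invert the phase for one ear only
    left, right = noise, np.fft.irfft(spec, n)
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))        # normalise to +/- 1

stim = huggins_pitch()   # ~600 Hz pitch audible over headphones, not loudspeakers
```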


Subjects
Auditory Perception; Noise; Acoustic Stimulation; Humans; Psychophysics; Sound
9.
J Neurosci ; 39(9): 1699-1708, 2019 02 27.
Article in English | MEDLINE | ID: mdl-30541915

ABSTRACT

Figure-ground segregation is fundamental to listening in complex acoustic environments. An ongoing debate pertains to whether segregation requires attention or is "automatic" and preattentive. In this magnetoencephalography study, we tested a prediction derived from load theory of attention (e.g., Lavie, 1995) that segregation requires attention but can benefit from the automatic allocation of any "leftover" capacity under low load. Complex auditory scenes were modeled with stochastic figure-ground stimuli (Teki et al., 2013), which occasionally contained repeated frequency component "figures." Naive human participants (both sexes) passively listened to these signals while performing a visual attention task of either low or high load. While clear figure-related neural responses were observed under conditions of low load, high visual load substantially reduced the neural response to the figure in auditory cortex (planum temporale, Heschl's gyrus). We conclude that fundamental figure-ground segregation in hearing is not automatic but draws on resources that are shared across vision and audition. SIGNIFICANCE STATEMENT: This work resolves a long-standing question of whether figure-ground segregation, a fundamental process of auditory scene analysis, requires attention or is underpinned by automatic, encapsulated computations. Task-irrelevant sounds were presented during performance of a visual search task. We revealed a clear magnetoencephalography neural signature of figure-ground segregation in conditions of low visual load, which was substantially reduced in conditions of high visual load. This demonstrates that, although attention does not need to be actively allocated to sound for auditory segregation to occur, segregation depends on shared computational resources across vision and hearing. The findings further highlight that visual load can impair the computational capacity of the auditory system, even when it does not simply dampen auditory responses as a whole.


Subjects
Auditory Cortex/physiology; Auditory Perception; Visual Perception; Adult; Attention; Female; Humans; Magnetoencephalography; Male
10.
J Neurosci ; 39(39): 7703-7714, 2019 09 25.
Article in English | MEDLINE | ID: mdl-31391262

ABSTRACT

Despite the prevalent use of alerting sounds in alarms and human-machine interface systems and the long-hypothesized role of the auditory system as the brain's "early warning system," we have only a rudimentary understanding of what determines auditory salience (the automatic attraction of attention by sound) and which brain mechanisms underlie this process. A major roadblock has been the lack of a robust, objective means of quantifying sound-driven attentional capture. Here we demonstrate that: (1) a reliable salience scale can be obtained from crowd-sourcing (N = 911), (2) acoustic roughness appears to be a driving feature behind this scaling, consistent with previous reports implicating roughness in the perceptual distinctiveness of sounds, and (3) crowd-sourced auditory salience correlates with objective autonomic measures. Specifically, we show that a salience ranking obtained from online raters correlated robustly with the superior colliculus-mediated ocular freezing response, microsaccadic inhibition (MSI), measured in naive, passively listening human participants (of either sex). More salient sounds evoked earlier and larger MSI, consistent with a faster orienting response. These results are consistent with the hypothesis that MSI reflects a general reorienting response that is evoked by potentially behaviorally important events regardless of their modality. SIGNIFICANCE STATEMENT: Microsaccades are small, rapid, fixational eye movements that are measurable with sensitive eye-tracking equipment. We reveal a novel, robust link between microsaccade dynamics and the subjective salience of brief sounds (salience rankings obtained from a large number of participants in an online experiment): Within 300 ms of sound onset, the eyes of naive, passively listening participants demonstrate different microsaccade patterns as a function of the sound's crowd-sourced salience. These results position the superior colliculus (hypothesized to underlie microsaccade generation) as an important brain area to investigate in the context of a putative multimodal salience hub. They also demonstrate an objective means for quantifying auditory salience.


Subjects
Attention/physiology; Auditory Perception/physiology; Saccades/physiology; Superior Colliculi/physiology; Acoustic Stimulation; Adolescent; Adult; Crowdsourcing; Female; Humans; Male; Young Adult
11.
Neuroimage ; 217: 116661, 2020 08 15.
Article in English | MEDLINE | ID: mdl-32081785

ABSTRACT

Using fMRI and multivariate pattern analysis, we determined whether spectral and temporal acoustic features are represented by independent or integrated multivoxel codes in human cortex. Listeners heard band-pass noise varying in frequency (spectral) and amplitude-modulation (AM) rate (temporal) features. In the superior temporal plane, changes in multivoxel activity due to frequency were largely invariant with respect to AM rate (and vice versa), consistent with an independent representation. In contrast, in posterior parietal cortex, multivoxel representation was exclusively integrated and tuned to specific conjunctions of frequency and AM features (albeit weakly). Direct between-region comparisons show that whereas independent coding of frequency weakened with increasing levels of the hierarchy, such a progression for AM and integrated coding was less fine-grained and only evident in the higher hierarchical levels from non-core to parietal cortex (with AM coding weakening and integrated coding strengthening). Our findings support the notion that primary auditory cortex can represent spectral and temporal acoustic features in an independent fashion and suggest a role for parietal cortex in feature integration and the structuring of sensory input.


Subjects
Auditory Cortex/diagnostic imaging; Auditory Cortex/physiology; Auditory Perception/physiology; Acoustic Stimulation; Adolescent; Adult; Algorithms; Brain Mapping; Cluster Analysis; Female; Functional Laterality/physiology; Humans; Magnetic Resonance Imaging; Male; Multivariate Analysis; Noise; Parietal Lobe/diagnostic imaging; Parietal Lobe/physiology; Young Adult
12.
J Acoust Soc Am ; 147(6): 3814, 2020 06.
Article in English | MEDLINE | ID: mdl-32611180

ABSTRACT

A study by Tóth, Kocsis, Háden, Szerafin, Shinn-Cunningham, and Winkler [Neuroimage 141, 108-119 (2016)] reported that spatial cues (such as interaural time differences, ITDs) that differentiate the perceived sound source directions of a target tone sequence (figure) from simultaneous distracting tones (background) did not improve the ability of participants to detect the target sequence. The present study investigates more systematically whether spatially separating a complex auditory "figure" from the background auditory stream may enhance the detection of a target in a cluttered auditory scene. Results of the present experiment suggest that the previous negative results arose because of the specific experimental conditions tested. Here the authors find that ITDs provide a clear benefit for detecting a target tone sequence amid a mixture of other simultaneous tone bursts.


Subjects
Cues; Sound Localization; Acoustic Stimulation; Auditory Perception; Humans; Sound
13.
Proc Natl Acad Sci U S A ; 113(5): E616-25, 2016 Feb 02.
Article in English | MEDLINE | ID: mdl-26787854

ABSTRACT

We use behavioral methods, magnetoencephalography, and functional MRI to investigate how human listeners discover temporal patterns and statistical regularities in complex sound sequences. Sensitivity to patterns is fundamental to sensory processing, in particular in the auditory system, because most auditory signals only have meaning as successions over time. Previous evidence suggests that the brain is tuned to the statistics of sensory stimulation. However, the process through which this arises has been elusive. We demonstrate that listeners are remarkably sensitive to the emergence of complex patterns within rapidly evolving sound sequences, performing on par with an ideal observer model. Brain responses reveal online processes of evidence accumulation: dynamic changes in tonic activity precisely correlate with the expected precision, or predictability, of ongoing auditory input, both in terms of deterministic (first-order) structure and the entropy of random sequences. Source analysis demonstrates an interaction between primary auditory cortex, hippocampus, and inferior frontal gyrus in the process of discovering the regularity within the ongoing sound sequence. The results are consistent with precision-based predictive coding accounts of perceptual inference and provide compelling neurophysiological evidence of the brain's capacity to encode high-order temporal structure in sensory signals.
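As a crude, purely illustrative proxy for the kind of quantity such an observer might track (the study's ideal observer operates on the full sequence statistics, not on this windowed zeroth-order estimate), the sketch below computes the Shannon entropy of the tone distribution in a trailing window; it is lower for a small repeating cycle than for random draws from a larger alphabet.

```python
# Hypothetical illustration only; not the ideal observer model used in the study.
import math
import random
from collections import Counter

def running_entropy(sequence, window=20):
    """Shannon entropy (bits) of the tone distribution in a trailing window."""
    out = []
    for i in range(1, len(sequence) + 1):
        counts = Counter(sequence[max(0, i - window):i])
        total = sum(counts.values())
        out.append(-sum(c / total * math.log2(c / total) for c in counts.values()))
    return out

random.seed(0)
rnd = [random.randrange(10) for _ in range(50)]   # random draws from 10 tones
reg = [3, 7, 1, 9, 4] * 10                        # repeating 5-tone cycle
print(running_entropy(rnd)[-1], running_entropy(reg)[-1])   # higher vs lower entropy
```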


Subjects
Acoustics; Brain/physiology; Humans; Magnetic Resonance Imaging; Magnetoencephalography; Psychophysics
14.
J Neurosci ; 37(28): 6751-6760, 2017 07 12.
Article in English | MEDLINE | ID: mdl-28607165

ABSTRACT

Stimulus predictability can lead to substantial modulations of brain activity, such as shifts in sustained magnetic field amplitude, measured with magnetoencephalography (MEG). Here, we provide a mechanistic explanation of these effects using MEG data acquired from healthy human volunteers (N = 13, 7 female). In a source-level analysis of induced responses, we established the effects of orthogonal predictability manipulations of rapid tone-pip sequences (namely, sequence regularity and alphabet size) along the auditory processing stream. In auditory cortex, regular sequences with smaller alphabets induced greater gamma activity. Furthermore, sequence regularity shifted induced activity in frontal regions toward higher frequencies. To model these effects in terms of the underlying neurophysiology, we used dynamic causal modeling for cross-spectral density and estimated slow fluctuations in neural (postsynaptic) gain. Using the model-based parameters, we accurately explain the sensor-level sustained field amplitude, demonstrating that slow changes in synaptic efficacy, combined with sustained sensory input, can result in profound and sustained effects on neural responses to predictable sensory streams. SIGNIFICANCE STATEMENT: Brain activity can be strongly modulated by the predictability of stimuli it is currently processing. An example of such a modulation is a shift in sustained magnetic field amplitude, measured with magnetoencephalography. Here, we provide a mechanistic explanation of these effects. First, we establish the oscillatory neural correlates of independent predictability manipulations in hierarchically distinct areas of the auditory processing stream. Next, we use a biophysically realistic computational model to explain these effects in terms of the underlying neurophysiology. Finally, using the model-based parameters describing neural gain modulation, we can explain the previously unexplained effects observed at the sensor level. This demonstrates that slow modulations of synaptic gain can result in profound and sustained effects on neural activity.


Subjects
Anticipation, Psychological/physiology; Auditory Cortex/physiology; Auditory Pathways/physiology; Auditory Perception/physiology; Long-Term Potentiation/physiology; Synaptic Transmission/physiology; Acoustic Stimulation; Adult; Attention/physiology; Female; Humans; Male
15.
Cereb Cortex ; 26(9): 3669-80, 2016 09.
Article in English | MEDLINE | ID: mdl-27325682

ABSTRACT

To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence (the coincidence of sound elements in and across time) is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals ("stochastic figure-ground": SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from one chord to another. Occasional tone repetitions across chords are perceived as "figures" popping out of a stochastic "ground." Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing about 150 ms after figure onset, that reflect the emergence of the "figure" from the randomly varying "ground." Neural sources underlying this bottom-up driven figure-ground segregation were localized to the planum temporale and the intraparietal sulcus, demonstrating that the latter area, outside the "classic" auditory system, is also involved in the early stages of auditory scene analysis.
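The Python sketch below gives a minimal, hypothetical rendering of an SFG-style stimulus in the spirit described above; the chord duration, frequency pool and figure parameters are assumptions rather than the authors' values.

```python
# Hypothetical stochastic figure-ground (SFG) generator; parameters are assumed.
import numpy as np

FS = 44100
CHORD_DUR = 0.05                                  # assumed 50 ms chords

def chord(freqs, fs=FS, dur=CHORD_DUR):
    t = np.arange(int(dur * fs)) / fs
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

def sfg(n_chords=40, n_ground=10, n_figure=4, fig_onset=20, fig_len=10, rng=None):
    """Random chords; during the figure interval a fixed set of components
    repeats across chords and is heard 'popping out' of the ground."""
    rng = rng or np.random.default_rng()
    pool = np.geomspace(179, 7246, 129)           # assumed log-spaced frequency pool
    figure = rng.choice(pool, n_figure, replace=False)
    out = []
    for i in range(n_chords):
        comps = rng.choice(pool, n_ground, replace=False)
        if fig_onset <= i < fig_onset + fig_len:  # chords containing the figure
            comps = np.concatenate([comps, figure])
        out.append(chord(comps))
    return np.concatenate(out)

audio = sfg()   # the repeated components form a "figure" from chord 20 onward
```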


Subjects
Auditory Cortex/physiology; Magnetoencephalography/methods; Nerve Net/physiology; Pattern Recognition, Physiological/physiology; Perceptual Masking/physiology; Pitch Perception/physiology; Adult; Brain Mapping/methods; Evoked Potentials, Auditory/physiology; Female; Humans; Male; Neuronal Plasticity/physiology; Spatio-Temporal Analysis
16.
J Neurosci ; 35(49): 16046-54, 2015 Dec 09.
Article in English | MEDLINE | ID: mdl-26658858

ABSTRACT

Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying "inattentional deafness": the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in the superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 "awareness" response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT: The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic "push-pull" pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing.


Subjects
Attention; Auditory Perception/physiology; Auditory Perceptual Disorders/physiopathology; Brain Mapping; Evoked Potentials, Auditory/physiology; Visual Perception/physiology; Acoustic Stimulation; Adolescent; Adult; Electroencephalography; Evoked Potentials, Visual/physiology; Female; Humans; Magnetoencephalography; Male; Psychophysics; Reaction Time; Time Factors; Young Adult
17.
Neuroimage ; 126: 164-72, 2016 Feb 01.
Article in English | MEDLINE | ID: mdl-26631816

ABSTRACT

Two key questions concerning change detection in crowded acoustic environments are the extent to which cortical processing is specialized for different forms of acoustic change and when, in the time course of cortical processing, neural activity becomes predictive of behavioral outcomes. Here, we address these issues by using magnetoencephalography (MEG) to probe the cortical dynamics of change detection in ongoing acoustic scenes containing as many as ten concurrent sources. Each source was formed of a sequence of tone pips with a unique carrier frequency and temporal modulation pattern, designed to mimic the spectrotemporal structure of natural sounds. Our results show that listeners are more accurate and quicker to detect the appearance (than the disappearance) of an auditory source in the ongoing scene. Underpinning this behavioral asymmetry are change-evoked responses differing not only in magnitude and latency, but also in their spatial patterns. We find that even the earliest (~50 ms) cortical response to change is predictive of behavioral outcomes (detection times), consistent with the hypothesized role of local neural transients in supporting change detection.


Subjects
Auditory Perception/physiology; Evoked Potentials, Auditory/physiology; Magnetoencephalography/methods; Signal Detection, Psychological/physiology; Adult; Female; Humans; Male; Reaction Time/physiology; Time Factors; Young Adult
18.
Neuroimage ; 110: 194-204, 2015 Apr 15.
Article in English | MEDLINE | ID: mdl-25659464

ABSTRACT

To probe sensitivity to the time structure of ongoing sound sequences, we measured MEG responses, in human listeners, to the offset of long tone-pip sequences containing various forms of temporal regularity. If listeners learn sequence temporal properties and form expectancies about the arrival time of an upcoming tone, sequence offset should be detectable as soon as an expected tone fails to arrive. Therefore, latencies of offset responses are indicative of the extent to which the temporal pattern has been acquired. In Exp1, sequences were isochronous, with the tone inter-onset interval (IOI) set to 75, 125 or 225 ms. Exp2 comprised non-isochronous, temporally regular sequences constructed from the same IOIs. Exp3 used the same sequences as Exp2, but listeners were required to monitor them for occasional frequency deviants. Analysis of the latency of offset responses revealed that the temporal structure of (even rather simple) regular sequences is not learnt precisely when the sequences are ignored. Pattern coding, supported by a network of temporal, parietal and frontal sources, improved considerably when the signals were made behaviourally pertinent. Thus, contrary to what might be expected in the context of an 'early warning system' framework, learning of temporal structure is not automatic but is affected by the signal's behavioural relevance.


Subjects
Auditory Cortex/physiology; Auditory Perception/physiology; Magnetoencephalography/methods; Acoustic Stimulation; Adult; Attention/physiology; Brain Mapping; Evoked Potentials, Auditory; Female; Functional Laterality/physiology; Humans; Image Processing, Computer-Assisted; Male; Time Perception/physiology; Young Adult
19.
J Cogn Neurosci ; 26(3): 514-28, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24047385

ABSTRACT

Our ability to detect prominent changes in complex acoustic scenes depends not only on the ear's sensitivity but also on the capacity of the brain to process competing incoming information. Here, employing a combination of psychophysics and magnetoencephalography (MEG), we investigate listeners' sensitivity in situations when two features belonging to the same auditory object change in close succession. The auditory object under investigation is a sequence of tone pips characterized by a regularly repeating frequency pattern. Signals consisted of an initial, regularly alternating sequence of three short (60 msec) pure tone pips (in the form ABCABC…) followed by a long pure tone with a frequency that is either expected based on the ongoing regular pattern ("LONG-expected") or constitutes a pattern violation ("LONG-unexpected"). The change in LONG-expected is manifest as a change in duration (when the long pure tone exceeds the established duration of a tone pip), whereas the change in LONG-unexpected is manifest as a change in both the frequency pattern and the duration. Our results reveal a form of "change deafness": although changes in both the frequency pattern and the expected duration appear to be processed effectively by the auditory system (cortical signatures of both changes are evident in the MEG data), listeners often fail to detect changes in the frequency pattern when that change is closely followed by a change in duration. By systematically manipulating the properties of the changing features and measuring behavioral and MEG responses, we demonstrate that feature changes within the same auditory object, which occur close together in time, appear to compete for perceptual resources.


Subjects
Auditory Perception/physiology; Brain/physiology; Perceptual Masking/physiology; Acoustic Stimulation; Adult; Auditory Cortex/physiology; Discrimination, Psychological/physiology; Female; Humans; Magnetoencephalography; Male; Music; Pattern Recognition, Physiological/physiology; Professional Competence; Psychoacoustics; Time Factors; Uncertainty
20.
J Neurophysiol ; 112(12): 3053-65, 2014 Dec 15.
Article in English | MEDLINE | ID: mdl-25231619

ABSTRACT

In animal models, single-neuron response properties such as stimulus-specific adaptation have been described as possible precursors to mismatch negativity, a human brain response to stimulus change. In the present study, we attempted to bridge the gap between human and animal studies by characterising responses to changes in the frequency of repeated tone series in the anaesthetised guinea pig using small-animal magnetoencephalography (MEG). We showed that 1) auditory evoked fields (AEFs) qualitatively similar to those observed in human MEG studies can be detected noninvasively in rodents using small-animal MEG; 2) guinea pig AEF amplitudes reduce rapidly with tone repetition, and this AEF reduction is largely complete by the second tone in a repeated series; and 3) differences between responses to the first (deviant) and later (standard) tones after a frequency transition resemble those previously observed in awake humans using a similar stimulus paradigm.


Subjects
Auditory Perception/physiology; Evoked Potentials, Auditory; Magnetoencephalography; Neural Inhibition; Acoustic Stimulation; Animals; Guinea Pigs; Humans; Male