Results 1 - 20 of 153
1.
J Neurosci ; 44(14)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38350998

ABSTRACT

Human listeners possess an innate capacity to discern patterns within rapidly unfolding sensory input. Core questions guiding ongoing research focus on the mechanisms through which these representations are acquired and whether the brain prioritizes or suppresses predictable sensory signals. Previous work, using fast auditory sequences (tone-pips presented at a rate of 20 Hz), revealed sustained response effects that appear to track the dynamic predictability of the sequence. Here, we extend the investigation to slower sequences (4 Hz), permitting the isolation of responses to individual tones. Stimuli were 50 ms tone-pips, ordered into random (RND) and regular (REG; a repeating pattern of 10 frequencies) sequences. Two timing profiles were created: in "fast" sequences, tone-pips were presented in direct succession (20 Hz); in "slow" sequences, tone-pips were separated by a 200 ms silent gap (4 Hz). Naive participants (N = 22; both sexes) passively listened to these sequences, while brain responses were recorded using magnetoencephalography (MEG). Results unveiled a heightened magnitude of sustained brain responses in REG when compared to RND patterns. This manifested from three tones after the onset of the pattern repetition, even in the context of slower sequences characterized by extended pattern durations (2,500 ms). This observation underscores the remarkable implicit sensitivity of the auditory brain to acoustic regularities. Importantly, brain responses evoked by single tones exhibited the opposite pattern: stronger responses to tones in RND than REG sequences. The demonstration of simultaneous but opposing sustained and evoked response effects reveals concurrent processes that shape the representation of unfolding auditory patterns.
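The stimulus construction described above (50 ms tone-pips, a repeating 10-frequency cycle vs. a random draw, with or without 200 ms silent gaps) can be sketched as follows. This is a minimal illustration, not the authors' stimulus code; the function names and the frequency pool are hypothetical.

```python
import numpy as np

def make_sequence(freq_pool, n_tones, regular=False, cycle_len=10, seed=0):
    """Return the tone-pip frequency order for one sequence.

    regular=False -> RND: frequencies drawn at random from the pool.
    regular=True  -> REG: a random cycle of `cycle_len` frequencies repeats.
    """
    rng = np.random.default_rng(seed)
    if regular:
        cycle = rng.choice(freq_pool, size=cycle_len, replace=False)
        reps = int(np.ceil(n_tones / cycle_len))
        return np.tile(cycle, reps)[:n_tones]
    return rng.choice(freq_pool, size=n_tones)

def onset_times(n_tones, pip_dur=0.05, gap=0.0):
    """Tone-pip onsets: gap=0.0 gives the 'fast' 20 Hz profile,
    gap=0.2 the 'slow' 4 Hz profile (50 ms pip + 200 ms silence)."""
    return np.arange(n_tones) * (pip_dur + gap)
```

With `gap=0.2`, one 10-tone REG cycle spans 2.5 s, matching the extended pattern duration quoted in the abstract.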


Subjects
Auditory Cortex, Auditory Perception, Male, Female, Humans, Acoustic Stimulation/methods, Auditory Perception/physiology, Auditory Evoked Potentials/physiology, Brain/physiology, Magnetoencephalography, Auditory Cortex/physiology
2.
Cereb Cortex ; 33(10): 6257-6272, 2023 05 09.
Article in English | MEDLINE | ID: mdl-36562994

ABSTRACT

Auditory Scene Analysis (ASA) refers to the grouping of acoustic signals into auditory objects. Previously, we have shown that perceived musicality of auditory sequences varies with high-level organizational features. Here, we explore the neural mechanisms mediating ASA and auditory object perception. Participants performed musicality judgments on randomly generated pure-tone sequences and manipulated versions of each sequence containing low-level changes (amplitude; timbre). Low-level manipulations affected auditory object perception as evidenced by changes in musicality ratings. fMRI was used to measure neural activation to sequences rated most and least musical, and the altered versions of each sequence. Next, we generated two partially overlapping networks: (i) a music processing network (music localizer) and (ii) an ASA network (base sequences vs. ASA manipulated sequences). Using Representational Similarity Analysis, we correlated the functional profiles of each ROI to a model generated from behavioral musicality ratings as well as models corresponding to low-level feature processing and music perception. Within overlapping regions, areas near primary auditory cortex correlated with low-level ASA models, whereas right IPS was correlated with musicality ratings. Shared neural mechanisms that correlate with behavior and underlie both ASA and music perception suggest that low-level features of auditory stimuli play a role in auditory object perception.
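The core Representational Similarity Analysis step, correlating the pairwise dissimilarity structure of an ROI's activation patterns with a model RDM built from behavioural ratings, can be sketched as below. This is a generic RSA illustration under assumed toy data, not the study's pipeline.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between
    the activation patterns (rows) of each pair of conditions."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(roi_patterns, model_rdm):
    """Correlate the upper triangles of the ROI RDM and a model RDM."""
    d = rdm(roi_patterns)
    iu = np.triu_indices_from(d, k=1)
    return np.corrcoef(d[iu], model_rdm[iu])[0, 1]
```

A high score means the ROI's representational geometry mirrors the model's; comparing scores across candidate models (low-level ASA vs. musicality) is what localizes each computation.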


Subjects
Auditory Cortex, Music, Humans, Auditory Perception/physiology, Magnetic Resonance Imaging, Brain Mapping, Neuroimaging, Acoustic Stimulation, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology
3.
Eur J Neurosci ; 58(8): 3859-3878, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37691137

ABSTRACT

Listeners often operate in complex acoustic environments, consisting of many concurrent sounds. Accurately encoding and maintaining such auditory objects in short-term memory is crucial for communication and scene analysis. Yet, the neural underpinnings of successful auditory short-term memory (ASTM) performance are currently not well understood. To elucidate this issue, we presented a novel, challenging auditory delayed match-to-sample task while recording MEG. Human participants listened to 'scenes' comprising three concurrent tone pip streams. The task was to indicate, after a delay, whether a probe stream was present in the just-heard scene. We present three key findings: First, behavioural performance revealed faster responses in correct versus incorrect trials as well as in 'probe present' versus 'probe absent' trials, consistent with ASTM search. Second, successful compared with unsuccessful ASTM performance was associated with a significant enhancement of event-related fields and oscillatory activity in the theta, alpha and beta frequency ranges. This extends previous findings of an overall increase of persistent activity during short-term memory performance. Third, using distributed source modelling, we found these effects to be confined mostly to sensory areas during encoding, presumably related to ASTM contents per se. Parietal and frontal sources then became relevant during the maintenance stage, indicating that effective STM operation also relies on ongoing inhibitory processes suppressing task-irrelevant information. In summary, our results deliver a detailed account of the neural patterns that differentiate successful from unsuccessful ASTM performance in the context of a complex, multi-object auditory scene.


Subjects
Auditory Cortex, Short-Term Memory, Humans, Short-Term Memory/physiology, Auditory Perception/physiology, Acoustic Stimulation, Sound, Auditory Cortex/physiology
4.
Article in English | MEDLINE | ID: mdl-36310303

ABSTRACT

Albert Feng was a pioneer in the field of auditory neuroethology who used frogs to investigate the neural basis of spectral and temporal processing and directional hearing. Among his many contributions was connecting neural mechanisms for sound pattern recognition and localization to the problems of auditory masking that frogs encounter when communicating in noisy, real-world environments. Feng's neurophysiological studies of auditory processing foreshadowed and inspired subsequent behavioral investigations of auditory masking in frogs. For frogs, vocal communication frequently occurs in breeding choruses, where males form dense aggregations and produce loud species-specific advertisement calls to attract potential mates and repel competitive rivals. In this review, we aim to highlight how Feng's research advanced our understanding of how frogs cope with noise. We structure our narrative around three themes woven throughout Feng's research-spectral, temporal, and directional processing-to illustrate how frogs can mitigate problems of auditory masking by exploiting frequency separation between signals and noise, temporal fluctuations in noise amplitude, and spatial separation between signals and noise. We conclude by proposing future research that would build on Feng's considerable legacy to advance our understanding of hearing and sound communication in frogs and other vertebrates.


Subjects
Noise, Animal Vocalization, Male, Animals, Animal Vocalization/physiology, Hearing/physiology, Auditory Perception/physiology, Sound, Anura/physiology, Perceptual Masking
5.
Behav Res Methods ; 2023 Nov 13.
Article in English | MEDLINE | ID: mdl-37957432

ABSTRACT

Auditory scene analysis (ASA) is the process through which the auditory system makes sense of complex acoustic environments by organising sound mixtures into meaningful events and streams. Although music psychology has acknowledged the fundamental role of ASA in shaping music perception, no efficient test to quantify listeners' ASA abilities in realistic musical scenarios has yet been published. This study presents a new tool for testing ASA abilities in the context of music, suitable for both normal-hearing (NH) and hearing-impaired (HI) individuals: the adaptive Musical Scene Analysis (MSA) test. The test uses a simple 'yes-no' task paradigm to determine whether the sound from a single target instrument is heard in a mixture of popular music. During the online calibration phase, 525 NH and 131 HI listeners were recruited. The level ratio between the target instrument and the mixture, choice of target instrument, and number of instruments in the mixture were found to be important factors affecting item difficulty, whereas the influence of the stereo width (induced by inter-aural level differences) only had a minor effect. Based on a Bayesian logistic mixed-effects model, an adaptive version of the MSA test was developed. In a subsequent validation experiment with 74 listeners (20 HI), MSA scores showed acceptable test-retest reliability and moderate correlations with other music-related tests, pure-tone-average audiograms, age, musical sophistication, and working memory capacities. The MSA test is a user-friendly and efficient open-source tool for evaluating musical ASA abilities and is suitable for profiling the effects of hearing impairment on music perception.
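The item-difficulty factors named above (target-to-mixture level ratio, number of instruments) naturally enter a logistic model of the probability of a correct "yes" response, the kind of model the adaptive MSA test is built on. The sketch below is illustrative only: the coefficient values and function name are hypothetical, not the fitted Bayesian mixed-effects estimates.

```python
import math

def p_detect(level_ratio_db, n_instruments, b0=0.5, b_level=0.3, b_n=-0.4):
    """Logistic model of the probability of detecting the target
    instrument in the mixture. Higher level ratio -> easier item;
    more instruments -> harder item. Coefficients are illustrative."""
    x = b0 + b_level * level_ratio_db + b_n * (n_instruments - 1)
    return 1.0 / (1.0 + math.exp(-x))
```

An adaptive procedure can then pick, at each trial, the item whose predicted probability is closest to a target level (e.g., 0.75), concentrating measurement near the listener's threshold.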

6.
Brain Cogn ; 163: 105914, 2022 11.
Article in English | MEDLINE | ID: mdl-36155348

ABSTRACT

The perception of concurrent sound sources depends on processes (i.e., auditory scene analysis) that fuse and segregate acoustic features according to harmonic relations, temporal coherence, and binaural cues (encompassing dichotic pitch, location differences, and simulated echoes). The object-related negativity (ORN) and P400 are electrophysiological indices of concurrent sound perception. Here, we review the different paradigms used to study concurrent sound perception and the brain responses obtained from these paradigms. Recommendations regarding the design and recording parameters of the ORN and P400 are made, and their clinical applications in assessing central auditory processing ability in different populations are discussed.


Subjects
Auditory Perception, Auditory Evoked Potentials, Acoustic Stimulation, Auditory Perception/physiology, Brain Mapping, Cues (Psychology), Auditory Evoked Potentials/physiology, Hearing, Humans, Pitch Perception/physiology
7.
Cereb Cortex ; 31(3): 1582-1596, 2021 02 05.
Article in English | MEDLINE | ID: mdl-33136138

ABSTRACT

In our everyday lives, we are often required to follow a conversation when background noise is present ("speech-in-noise" [SPIN] perception). SPIN perception varies widely, and people who are worse at SPIN perception are also worse at fundamental auditory grouping, as assessed by figure-ground tasks. Here, we examined the cortical processes that link difficulties with SPIN perception to difficulties with figure-ground perception using functional magnetic resonance imaging. We found strong evidence that the earliest stages of the auditory cortical hierarchy (left core and belt areas) are similarly disinhibited when SPIN and figure-ground tasks are more difficult (i.e., at target-to-masker ratios corresponding to 60% rather than 90% performance), consistent with increased cortical gain at lower levels of the auditory hierarchy. Overall, our results reveal a common neural substrate for these basic (figure-ground) and naturally relevant (SPIN) tasks, which provides a common computational basis for the link between SPIN perception and fundamental auditory grouping.


Subjects
Auditory Cortex/physiology, Perceptual Masking/physiology, Speech Perception/physiology, Adult, Attention/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Noise
8.
Neuroimage ; 228: 117681, 2021 03.
Article in English | MEDLINE | ID: mdl-33359346

ABSTRACT

Sequences of repeating tones can be masked by other tones of different frequency. When such tone sequences are nevertheless perceived, each tone of the sequence evokes a prominent neural response in the auditory cortex. When the targets are detected based on their isochrony, participants know that they are listening to the target once they have detected it. To explore whether this neural activity is more closely related to the detection task or to perceptual awareness, this magnetoencephalography (MEG) study used targets that could only be identified with cues provided after or before the masked target. In experiment 1, multiple mono-tone streams with jittered inter-stimulus intervals were used, and the tone frequency of the target was indicated by a cue. Results showed no differential auditory cortex activity between hit and miss trials with post-stimulus cues. A late negative response for hit trials was observed only for pre-stimulus cues, suggesting a task-related component. Since experiment 1 provided no evidence linking a difference response to tone awareness, experiment 2 was designed to probe whether detection of tone streams was linked to a difference response in auditory cortex. Random-tone sequences were presented in the presence of a multi-tone masker, and each sequence was repeated without the masker thereafter. In experiment 2, targets presented in the masker evoked a prominent difference wave for hit compared with miss trials. These results suggest that perceptual awareness of tone streams is linked to neural activity in auditory cortex.


Subjects
Attention/physiology, Auditory Cortex/physiology, Awareness/physiology, Perceptual Masking/physiology, Timbre Perception/physiology, Adult, Auditory Evoked Potentials/physiology, Female, Humans, Magnetoencephalography/methods, Male, Young Adult
9.
J Exp Biol ; 224(16)2021 08 15.
Article in English | MEDLINE | ID: mdl-34387665

ABSTRACT

Echolocating toothed whales face the problem that high sound speeds in water mean that echoes from closely spaced targets will arrive at time delays within their reported auditory integration time of some 264 µs. Here, we test the hypothesis that echolocating harbour porpoises cannot resolve and discriminate targets within a clutter interference zone given by their integration time. To do this, we trained two harbour porpoises (Phocoena phocoena) to actively approach and choose between two spherical targets at four varying inter-target distances (13.5, 27, 56 and 108 cm) in a two-alternative forced-choice task. The free-swimming, blindfolded porpoises were tagged with a sound and movement tag (DTAG4) to record their echoic scene and acoustic outputs. The known ranges between targets and the porpoise, combined with the sound levels received on target-mounted hydrophones revealed how the porpoises controlled their acoustic gaze. When targets were close together, the discrimination task was more difficult because of smaller echo time delays and lower echo level ratios between the targets. Under these conditions, buzzes were longer and started from farther away, source levels were reduced at short ranges, and the porpoises clicked faster, scanned across the targets more, and delayed making their discrimination decision until closer to the target. We conclude that harbour porpoises can resolve and discriminate closely spaced targets, suggesting a clutter rejection zone much shorter than their auditory integration time, and that such clutter rejection is greatly aided by spatial filtering with their directional biosonar beam.


Subjects
Echolocation, Phocoena, Porpoises, Acoustics, Animals, Sound, Swimming
10.
Brain ; 143(9): 2689-2695, 2020 09 01.
Article in English | MEDLINE | ID: mdl-32875326

ABSTRACT

Although posterior cortical atrophy is often regarded as the canonical 'visual dementia', auditory symptoms may also be salient in this disorder. Patients often report particular difficulty hearing in busy environments; however, the core cognitive process of parsing the auditory environment ('auditory scene analysis') has been poorly characterized. In this cross-sectional study, we used customized perceptual tasks to assess two generic cognitive operations underpinning auditory scene analysis, sound source segregation and sound event grouping, in a cohort of 21 patients with posterior cortical atrophy, referenced to 15 healthy age-matched individuals and 21 patients with typical Alzheimer's disease. After adjusting for peripheral hearing function and performance on control tasks assessing perceptual and executive response demands, patients with posterior cortical atrophy performed significantly worse on both auditory scene analysis tasks relative to healthy controls and patients with typical Alzheimer's disease (all P < 0.05). Our findings provide further evidence of central auditory dysfunction in posterior cortical atrophy, with implications for our pathophysiological understanding of Alzheimer syndromes as well as clinical diagnosis and management.


Subjects
Auditory Perception/physiology, Cerebral Cortex/pathology, Hearing Loss/diagnosis, Psychomotor Performance/physiology, Acoustic Stimulation/methods, Aged, Alzheimer Disease/diagnosis, Alzheimer Disease/physiopathology, Atrophy, Cerebral Cortex/physiopathology, Cohort Studies, Cross-Sectional Studies, Female, Hearing Loss/physiopathology, Humans, Male, Middle Aged
11.
Proc Natl Acad Sci U S A ; 115(14): E3313-E3322, 2018 04 03.
Article in English | MEDLINE | ID: mdl-29563229

ABSTRACT

The cocktail party problem requires listeners to infer individual sound sources from mixtures of sound. The problem can be solved only by leveraging regularities in natural sound sources, but little is known about how such regularities are internalized. We explored whether listeners learn source "schemas", the abstract structure shared by different occurrences of the same type of sound source, and use them to infer sources from mixtures. We measured the ability of listeners to segregate mixtures of time-varying sources. In each experiment a subset of trials contained schema-based sources generated from a common template by transformations (transposition and time dilation) that introduced acoustic variation but preserved abstract structure. Across several tasks and classes of sound sources, schema-based sources consistently aided source separation, in some cases producing rapid improvements in performance over the first few exposures to a schema. Learning persisted across blocks that did not contain the learned schema, and listeners were able to learn and use multiple schemas simultaneously. No learning was evident when schemas were presented in the task-irrelevant (i.e., distractor) source. However, learning from task-relevant stimuli showed signs of being implicit, in that listeners were no more likely to report that sources recurred in experiments containing schema-based sources than in control experiments containing no schema-based sources. The results implicate a mechanism for rapidly internalizing abstract sound structure, facilitating accurate perceptual organization of sound sources that recur in the environment.


Subjects
Attention/physiology, Auditory Perception/physiology, Learning/physiology, Noise, Sound Localization/physiology, Acoustic Stimulation, Cues (Psychology), Humans
12.
J Neurosci ; 39(39): 7703-7714, 2019 09 25.
Article in English | MEDLINE | ID: mdl-31391262

ABSTRACT

Despite the prevalent use of alerting sounds in alarms and human-machine interface systems and the long-hypothesized role of the auditory system as the brain's "early warning system," we have only a rudimentary understanding of what determines auditory salience, the automatic attraction of attention by sound, and which brain mechanisms underlie this process. A major roadblock has been the lack of a robust, objective means of quantifying sound-driven attentional capture. Here we demonstrate that: (1) a reliable salience scale can be obtained from crowd-sourcing (N = 911), (2) acoustic roughness appears to be a driving feature behind this scaling, consistent with previous reports implicating roughness in the perceptual distinctiveness of sounds, and (3) crowd-sourced auditory salience correlates with objective autonomic measures. Specifically, we show that a salience ranking obtained from online raters correlated robustly with the superior colliculus-mediated ocular freezing response, microsaccadic inhibition (MSI), measured in naive, passively listening human participants (of either sex). More salient sounds evoked earlier and larger MSI, consistent with a faster orienting response. These results are consistent with the hypothesis that MSI reflects a general reorienting response that is evoked by potentially behaviorally important events regardless of their modality. SIGNIFICANCE STATEMENT Microsaccades are small, rapid, fixational eye movements that are measurable with sensitive eye-tracking equipment. We reveal a novel, robust link between microsaccade dynamics and the subjective salience of brief sounds (salience rankings obtained from a large number of participants in an online experiment): Within 300 ms of sound onset, the eyes of naive, passively listening participants demonstrate different microsaccade patterns as a function of the sound's crowd-sourced salience. These results position the superior colliculus (hypothesized to underlie microsaccade generation) as an important brain area to investigate in the context of a putative multimodal salience hub. They also demonstrate an objective means for quantifying auditory salience.
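The reported link between crowd-sourced salience rankings and MSI is a correlation between two ordinal measures, for which a rank correlation is the natural statistic. Below is a minimal Spearman correlation via rank transformation (no tie correction), on hypothetical data; it is a generic sketch, not the study's analysis code.

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson r of the rank-transformed
    data. Ignores tie correction, which is fine for illustration."""
    rx = np.argsort(np.argsort(x))  # ranks of x
    ry = np.argsort(np.argsort(y))  # ranks of y
    return np.corrcoef(rx, ry)[0, 1]
```

Because only ranks enter, the statistic captures any monotone relation between subjective salience and MSI magnitude, without assuming linearity.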


Subjects
Attention/physiology, Auditory Perception/physiology, Saccades/physiology, Superior Colliculi/physiology, Acoustic Stimulation, Adolescent, Adult, Crowdsourcing, Female, Humans, Male, Young Adult
13.
J Neurosci ; 39(9): 1699-1708, 2019 02 27.
Article in English | MEDLINE | ID: mdl-30541915

ABSTRACT

Figure-ground segregation is fundamental to listening in complex acoustic environments. An ongoing debate pertains to whether segregation requires attention or is "automatic" and preattentive. In this magnetoencephalography study, we tested a prediction derived from load theory of attention (e.g., Lavie, 1995) that segregation requires attention but can benefit from the automatic allocation of any "leftover" capacity under low load. Complex auditory scenes were modeled with stochastic figure-ground stimuli (Teki et al., 2013), which occasionally contained repeated frequency component "figures." Naive human participants (both sexes) passively listened to these signals while performing a visual attention task of either low or high load. While clear figure-related neural responses were observed under conditions of low load, high visual load substantially reduced the neural response to the figure in auditory cortex (planum temporale, Heschl's gyrus). We conclude that fundamental figure-ground segregation in hearing is not automatic but draws on resources that are shared across vision and audition. SIGNIFICANCE STATEMENT This work resolves a long-standing question of whether figure-ground segregation, a fundamental process of auditory scene analysis, requires attention or is underpinned by automatic, encapsulated computations. Task-irrelevant sounds were presented during performance of a visual search task. We revealed a clear magnetoencephalography neural signature of figure-ground segregation in conditions of low visual load, which was substantially reduced in conditions of high visual load. This demonstrates that, although attention does not need to be actively allocated to sound for auditory segregation to occur, segregation depends on shared computational resources across vision and hearing. The findings further highlight that visual load can impair the computational capacity of the auditory system, even when it does not simply dampen auditory responses as a whole.


Subjects
Auditory Cortex/physiology, Auditory Perception, Visual Perception, Adult, Attention, Female, Humans, Magnetoencephalography, Male
14.
Eur J Neurosci ; 51(2): 641-650, 2020 01.
Article in English | MEDLINE | ID: mdl-31430411

ABSTRACT

In a complex auditory scene, speech comprehension involves several stages: for example, segregating the target from the background, recognizing syllables, and integrating syllables into linguistic units (e.g., words). Although speech segregation is robust, as shown by invariant neural tracking of the target speech envelope, whether neural tracking of linguistic units is also robust, and how this robustness is achieved, remain unknown. To investigate these questions, we used electroencephalography to concurrently record neural responses tracking a rhythmic speech stream at its syllabic and word rates. Human participants listened to the target speech under a speech or noise distractor at varying signal-to-noise ratios. Neural tracking at the word rate was not as robust as neural tracking at the syllabic rate. Robust neural tracking of the target's words was observed only under the speech distractor, not under the noise distractor. Moreover, this robust word tracking correlated with successful suppression of distractor tracking. Critically, both word tracking and distractor suppression correlated with behavioural comprehension accuracy. In sum, our results suggest that robust neural tracking of higher-level linguistic units relies not only on target tracking but also on distractor suppression.


Subjects
Speech Perception, Comprehension, Electroencephalography, Humans, Linguistics, Speech
15.
Eur J Neurosci ; 51(5): 1353-1363, 2020 03.
Article in English | MEDLINE | ID: mdl-29855099

ABSTRACT

Human listeners robustly decode speech information from a talker of interest that is embedded in a mixture of spatially distributed interferers. A relevant question is which time-frequency segments of the speech are predominantly used by a listener to solve such a complex Auditory Scene Analysis task. A recent psychoacoustic study investigated the relevance of low signal-to-noise ratio (SNR) components of a target signal on speech intelligibility in a spatial multitalker situation. For this, a three-talker stimulus was manipulated in the spectro-temporal domain such that target speech time-frequency units below a variable SNR threshold (SNRcrit) were discarded while keeping the interferers unchanged. The psychoacoustic data indicate that only target components at and above a local SNR of about 0 dB contribute to intelligibility. This study applies an auditory scene analysis "glimpsing" model to the same manipulated stimuli. Model data are found to be similar to the human data, supporting the notion of "glimpsing", that is, that salient speech-related information is predominantly used by the auditory system to decode speech embedded in a mixture of sounds, at least for the tested conditions of three overlapping speech signals. This implies that perceptually relevant auditory information is sparse and may be processed with low computational effort, which is relevant for neurophysiological research of scene analysis and novelty processing in the auditory system.
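The stimulus manipulation described above, discarding target time-frequency units whose local SNR falls below a criterion while leaving the interferers untouched, can be sketched directly on spectrogram magnitudes. This is a minimal illustration under assumed inputs (magnitude arrays of matching shape), not the study's processing chain.

```python
import numpy as np

def apply_snr_criterion(target_tf, masker_tf, snr_crit_db=0.0, eps=1e-12):
    """Zero out target time-frequency units whose local SNR
    (target power over masker power, in dB) falls below snr_crit_db.
    Returns the manipulated target and the boolean keep mask."""
    local_snr_db = 10.0 * np.log10((target_tf ** 2 + eps) /
                                   (masker_tf ** 2 + eps))
    keep = local_snr_db >= snr_crit_db
    return np.where(keep, target_tf, 0.0), keep
```

With `snr_crit_db=0.0` this retains exactly the "glimpses" where the target locally dominates the mixture, the components the psychoacoustic data identify as carrying intelligibility.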


Subjects
Speech Perception, Acoustic Stimulation, Auditory Threshold, Humans, Perceptual Masking, Psychoacoustics, Signal-to-Noise Ratio, Sound, Speech Intelligibility
16.
Eur J Neurosci ; 51(5): 1151-1160, 2020 03.
Article in English | MEDLINE | ID: mdl-29250827

ABSTRACT

Predictive coding is arguably the currently dominant theoretical framework for the study of perception. It has been employed to explain important auditory perceptual phenomena, and it has inspired theoretical, experimental and computational modelling efforts aimed at describing how the auditory system parses the complex sound input into meaningful units (auditory scene analysis). These efforts have uncovered some vital questions, addressing which could help to further specify predictive coding and clarify some of its basic assumptions. The goal of the current review is to motivate these questions and show how unresolved issues in explaining some auditory phenomena lead to general questions of the theoretical framework. We focus on experimental and computational modelling issues related to sequential grouping in auditory scene analysis (auditory pattern detection and bistable perception), as we believe that this is the research topic where predictive coding has the highest potential for advancing our understanding. In addition to specific questions, our analysis led us to identify three more general questions that require further clarification: (1) What exactly is meant by prediction in predictive coding? (2) What governs which generative models make the predictions? and (3) What (if it exists) is the correlate of perceptual experience within the predictive coding framework?


Subjects
Auditory Cortex, Auditory Perception, Acoustic Stimulation, Sound
17.
Cereb Cortex ; 29(4): 1561-1571, 2019 04 01.
Article in English | MEDLINE | ID: mdl-29788144

ABSTRACT

Segregating concurrent sound streams is a computationally challenging task that requires integrating bottom-up acoustic cues (e.g. pitch) and top-down prior knowledge about sound streams. In a multi-talker environment, the brain can segregate different speakers in about 100 ms in auditory cortex. Here, we used magnetoencephalographic (MEG) recordings to investigate the temporal and spatial signature of how the brain utilizes prior knowledge to segregate 2 speech streams from the same speaker, which can hardly be separated based on bottom-up acoustic cues. In the primed condition, participants knew the target speech stream in advance, while in the unprimed condition no such prior knowledge was available. Neural encoding of each speech stream was characterized by the MEG responses tracking the speech envelope. We demonstrate that an effect in bilateral superior temporal gyrus and superior temporal sulcus is much stronger in the primed condition than in the unprimed condition. Priming effects are observed at about 100 ms latency and last more than 600 ms. Interestingly, prior knowledge about the target stream facilitates speech segregation mainly by suppressing the neural tracking of the non-target speech stream. In sum, prior knowledge leads to reliable speech segregation in auditory cortex, even in the absence of reliable bottom-up speech segregation cues.


Subjects
Auditory Cortex/physiology, Cues (Psychology), Speech Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Attention, Female, Humans, Magnetoencephalography, Male, Speech Acoustics, Young Adult
18.
Conscious Cogn ; 85: 103027, 2020 10.
Article in English | MEDLINE | ID: mdl-33059197

ABSTRACT

A classical experiment of auditory stream segregation is revisited, reconceptualising perceptual ambiguity in terms of affordances and musical engagement. Specifically, three experiments are reported that investigate how listeners' perception of auditory sequences change dynamically depending on emotional context. The experiments show that listeners adapt their attention to higher or lower pitched streams (Experiments 1 and 2) and the degree of auditory stream integration or segregation (Experiment 3) in accordance with the presented emotional context. Participants with and without formal musical training show this influence, although to differing degrees (Experiment 2). Contributing evidence to the literature on interactions between emotion and cognition, these experiments demonstrate how emotion is an intrinsic part of music perception and not merely a product of the listening experience.


Subjects
Auditory Perception, Music, Acoustic Stimulation, Cognition, Emotions, Humans
19.
J Neurosci ; 38(21): 4977-4984, 2018 05 23.
Article in English | MEDLINE | ID: mdl-29712782

ABSTRACT

The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene. SIGNIFICANCE STATEMENT Often, when we think of auditory spatial information, we think of where sounds are coming from, that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent.
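The decoding logic, training a linear classifier on voxel activity patterns and asking whether it separates two conditions, can be sketched with a least-squares linear read-out. Note this is a simple stand-in for the linear SVM the study actually used (both learn a linear decision boundary, but the training objective differs), run here on synthetic "activity patterns".

```python
import numpy as np

def train_linear(X, y):
    """Least-squares linear read-out: fit weights w (plus bias)
    minimizing ||[X, 1] @ w - y||^2 for labels y in {-1, +1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    """Classify each pattern by the sign of the linear read-out."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)
```

Above-chance accuracy on held-out patterns is then the evidence that the tested feature (here, spatial separation between sounds) is encoded in the region's activity.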


Subjects
Auditory Cortex/physiology, Sound Localization/physiology, Space Perception/physiology, Acoustic Stimulation, Adult, Auditory Cortex/diagnostic imaging, Auditory Perception/physiology, Brain Mapping, Computer Simulation, Female, Humans, Magnetic Resonance Imaging, Male, Support Vector Machine
20.
Annu Rev Psychol ; 69: 27-50, 2018 01 04.
Article in English | MEDLINE | ID: mdl-29035691

ABSTRACT

Auditory perception is our main gateway to communication with others via speech and music, and it also plays an important role in alerting and orienting us to new events. This review provides an overview of selected topics pertaining to the perception and neural coding of sound, starting with the first stage of filtering in the cochlea and its profound impact on perception. The next topic, pitch, has been debated for millennia, but recent technical and theoretical developments continue to provide us with new insights. Cochlear filtering and pitch both play key roles in our ability to parse the auditory scene, enabling us to attend to one auditory object or stream while ignoring others. An improved understanding of the basic mechanisms of auditory perception will aid us in the quest to tackle the increasingly important problem of hearing loss in our aging population.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Hearing Loss/physiopathology, Pitch Perception/physiology, Acoustic Stimulation, Auditory Cortex/physiopathology, Humans, Sound, Speech Perception/physiology