Results 1-20 of 59
1.
Nat Commun ; 15(1): 3692, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38693186

ABSTRACT

Over the last decades, cognitive neuroscience has identified a distributed set of brain regions that are critical for attention. Strong anatomical overlap with brain regions critical for oculomotor processes suggests a joint network for attention and eye movements. However, the role of this shared network in complex, naturalistic environments remains understudied. Here, we investigated eye movements in relation to (un)attended sentences of natural speech. Combining simultaneously recorded eye tracking and magnetoencephalographic data with temporal response functions, we show that gaze tracks attended speech, a phenomenon we termed ocular speech tracking. Ocular speech tracking even differentiates a target from a distractor in a multi-speaker context and is further related to intelligibility. Moreover, we provide evidence for its contribution to neural differences in speech processing, emphasizing the necessity to consider oculomotor activity in future research and in the interpretation of neural differences in auditory cognition.
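The temporal response function (TRF) analysis mentioned above can be illustrated with a short sketch. This is a generic ridge-regression TRF estimator relating a stimulus feature (e.g. the speech envelope) to a response time series; the lag range, regularisation value, and function name are illustrative assumptions, not the parameters of this study.

```python
import numpy as np

def fit_trf(stim, resp, fs, tmin=-0.1, tmax=0.4, lam=1e2):
    """Estimate a temporal response function by ridge regression:
    model resp as a weighted sum of time-lagged copies of stim.
    Returns (lag times in seconds, kernel weights)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    n = len(stim)
    # Build the lagged design matrix: column j holds stim shifted by lags[j].
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:n - lag]
        else:
            X[:n + lag, j] = stim[-lag:]
    # Ridge solution: w = (X'X + lam*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ resp)
    return lags / fs, w
```

With a white-noise stimulus and a response that is simply the stimulus delayed by 50 ms, the recovered kernel peaks at the 0.05 s lag, which is the sanity check typically run on such estimators.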


Subjects
Attention, Eye Movements, Magnetoencephalography, Speech Perception, Speech, Humans, Attention/physiology, Eye Movements/physiology, Male, Female, Adult, Young Adult, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Brain/physiology, Eye Tracking Technology
2.
J Neurosci ; 44(11)2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38331581

ABSTRACT

Microsaccades are small, involuntary eye movements that occur during fixation. Their role is debated with recent hypotheses proposing a contribution to automatic scene sampling. Microsaccadic inhibition (MSI) refers to the abrupt suppression of microsaccades, typically evoked within 0.1 s after new stimulus onset. The functional significance and neural underpinnings of MSI are subjects of ongoing research. It has been suggested that MSI is a component of the brain's attentional re-orienting network which facilitates the allocation of attention to new environmental occurrences by reducing disruptions or shifts in gaze that could interfere with processing. The extent to which MSI is reflexive or influenced by top-down mechanisms remains debated. We developed a task that examines the impact of auditory top-down attention on MSI, allowing us to disentangle ocular dynamics from visual sensory processing. Participants (N = 24 and 27; both sexes) listened to two simultaneous streams of tones and were instructed to attend to one stream while detecting specific task "targets." We quantified MSI in response to occasional task-irrelevant events presented in both the attended and unattended streams (frequency steps in Experiment 1, omissions in Experiment 2). The results show that initial stages of MSI are not affected by auditory attention. However, later stages (∼0.25 s postevent onset), affecting the extent and duration of the inhibition, are enhanced for sounds in the attended stream compared to the unattended stream. These findings provide converging evidence for the reflexive nature of early MSI stages and robustly demonstrate the involvement of auditory attention in modulating the later stages.


Subjects
Eye Movements, Visual Perception, Male, Female, Humans, Visual Perception/physiology, Sensation, Sound, Auditory Perception/physiology
3.
J Neurosci ; 44(14)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38350998

ABSTRACT

Human listeners possess an innate capacity to discern patterns within rapidly unfolding sensory input. Core questions, guiding ongoing research, focus on the mechanisms through which these representations are acquired and whether the brain prioritizes or suppresses predictable sensory signals. Previous work, using fast auditory sequences (tone-pips presented at a rate of 20 Hz), revealed sustained response effects that appear to track the dynamic predictability of the sequence. Here, we extend the investigation to slower sequences (4 Hz), permitting the isolation of responses to individual tones. Stimuli were 50 ms tone-pips, ordered into random (RND) and regular (REG; a repeating pattern of 10 frequencies) sequences. Two timing profiles were created: in "fast" sequences, tone-pips were presented in direct succession (20 Hz); in "slow" sequences, tone-pips were separated by a 200 ms silent gap (4 Hz). Naive participants (N = 22; both sexes) passively listened to these sequences, while brain responses were recorded using magnetoencephalography (MEG). Results unveiled a heightened magnitude of sustained brain responses in REG when compared to RND patterns. This manifested from three tones after the onset of the pattern repetition, even in the context of slower sequences characterized by extended pattern durations (2,500 ms). This observation underscores the remarkable implicit sensitivity of the auditory brain to acoustic regularities. Importantly, brain responses evoked by single tones exhibited the opposite pattern: stronger responses to tones in RND than REG sequences. The demonstration of simultaneous but opposing sustained and evoked response effects reveals concurrent processes that shape the representation of unfolding auditory patterns.


Subjects
Auditory Cortex, Auditory Perception, Male, Female, Humans, Acoustic Stimulation/methods, Auditory Perception/physiology, Auditory Evoked Potentials/physiology, Brain/physiology, Magnetoencephalography, Auditory Cortex/physiology
4.
Cognition ; 244: 105696, 2024 03.
Article in English | MEDLINE | ID: mdl-38160651

ABSTRACT

From auditory perception to general cognition, the ability to play a musical instrument has been associated with skills both related and unrelated to music. However, it is unclear if these effects are bound to the specific characteristics of musical instrument training, as little attention has been paid to other populations such as audio engineers and designers, whose auditory expertise may match or surpass that of musicians in specific auditory tasks or in more naturalistic acoustic scenarios. We explored this possibility by comparing students of audio engineering (n = 20) to matched conservatory-trained instrumentalists (n = 24) and to naive controls (n = 20) on measures of auditory discrimination, auditory scene analysis, and speech-in-noise perception. We found that audio engineers and performing musicians had generally lower psychophysical thresholds than controls, with pitch perception showing the largest effect size. Compared to controls, audio engineers could better memorise and recall auditory scenes composed of non-musical sounds, whereas instrumental musicians performed best in a sustained selective attention task with two competing streams of tones. Finally, in a diotic speech-in-babble task, musicians showed lower signal-to-noise-ratio thresholds than both controls and engineers; however, a follow-up online study did not replicate this musician advantage. We also observed differences in personality that might account for group-based self-selection biases. Overall, we showed that investigating a wider range of forms of auditory expertise can help us corroborate (or challenge) the specificity of the advantages previously associated with musical instrument training.


Subjects
Music, Speech Perception, Humans, Auditory Perception, Pitch Perception, Cognition, Acoustic Stimulation
5.
Curr Res Neurobiol ; 5: 100115, 2023.
Article in English | MEDLINE | ID: mdl-38020808

ABSTRACT

Any listening task, from sound recognition to sound-based communication, rests on auditory memory, which is known to decline in healthy ageing. However, how this decline maps onto multiple components and stages of auditory memory remains poorly characterised. In an online unsupervised longitudinal study, we tested ageing effects on implicit auditory memory for rapid tone patterns. The test required participants (younger adults aged 20-30 and older adults aged 60-70) to quickly respond to rapid regularly repeating patterns emerging from random sequences. Patterns were novel in most trials (REGn), but unbeknownst to the participants, a few distinct patterns reoccurred identically throughout the sessions (REGr). After correcting for processing speed, the response times (RT) to REGn should reflect the information held in echoic and short-term memory before detecting the pattern; long-term memory formation and retention should be reflected by the RT advantage (RTA) to REGr vs REGn, which is expected to grow with exposure. Older participants were slower than younger adults in detecting REGn and exhibited a smaller RTA to REGr. Computational simulations using a model of auditory sequence memory indicated that these effects reflect age-related limitations both in early and long-term memory stages. In contrast to ageing-related accelerated forgetting of verbal material, here older adults maintained stable memory traces for REGr patterns up to 6 months after the first exposure. The results demonstrate that ageing is associated with reduced short-term memory and long-term memory formation for tone patterns, but not with forgetting, even over surprisingly long timescales.

6.
Trends Hear ; 27: 23312165231190688, 2023.
Article in English | MEDLINE | ID: mdl-37828868

ABSTRACT

A growing literature is demonstrating a link between working memory (WM) and speech-in-noise (SiN) perception. However, the nature of this correlation, and which components of WM might underlie it, are being debated. We investigated how SiN reception links with auditory sensory memory (aSM), the low-level processes that support the short-term maintenance of temporally unfolding sounds. A large sample of older (N = 199, 60-79 yo) and younger (N = 149, 20-35 yo) participants was recruited online and performed a coordinate response measure-based speech-in-babble task that taps listeners' ability to track a speech target in background noise. We used two tasks to investigate implicit and explicit aSM. Both were based on tone patterns overlapping in processing time scales with speech (presentation rate of tones 20 Hz; of patterns 2 Hz). We hypothesised that a link between SiN and aSM may be particularly apparent in older listeners due to age-related reduction in both SiN reception and aSM. We confirmed impaired SiN reception in the older cohort and demonstrated reduced aSM performance in those listeners. However, SiN and aSM did not share variability. Across the two age groups, SiN performance was predicted by a binaural processing test and age. The results suggest that previously observed links between WM and SiN may relate to the executive components and other cognitive demands of the tasks used. This finding helps to constrain the search for the perceptual and cognitive factors that explain individual variability in SiN performance.


Subjects
Speech Perception, Speech, Humans, Aged, Speech/physiology, Hearing/physiology, Noise/adverse effects, Speech Perception/physiology, Short-Term Memory/physiology
7.
Eur J Neurosci ; 58(8): 3859-3878, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37691137

ABSTRACT

Listeners often operate in complex acoustic environments, consisting of many concurrent sounds. Accurately encoding and maintaining such auditory objects in short-term memory is crucial for communication and scene analysis. Yet, the neural underpinnings of successful auditory short-term memory (ASTM) performance are currently not well understood. To elucidate this issue, we presented a novel, challenging auditory delayed match-to-sample task while recording MEG. Human participants listened to 'scenes' comprising three concurrent tone pip streams. The task was to indicate, after a delay, whether a probe stream was present in the just-heard scene. We present three key findings: First, behavioural performance revealed faster responses in correct versus incorrect trials as well as in 'probe present' versus 'probe absent' trials, consistent with ASTM search. Second, successful compared with unsuccessful ASTM performance was associated with a significant enhancement of event-related fields and oscillatory activity in the theta, alpha and beta frequency ranges. This extends previous findings of an overall increase of persistent activity during short-term memory performance. Third, using distributed source modelling, we found these effects to be confined mostly to sensory areas during encoding, presumably related to ASTM contents per se. Parietal and frontal sources then became relevant during the maintenance stage, indicating that effective STM operation also relies on ongoing inhibitory processes suppressing task-irrelevant information. In summary, our results deliver a detailed account of the neural patterns that differentiate successful from unsuccessful ASTM performance in the context of a complex, multi-object auditory scene.


Subjects
Auditory Cortex, Short-Term Memory, Humans, Short-Term Memory/physiology, Auditory Perception/physiology, Acoustic Stimulation, Sound, Auditory Cortex/physiology
8.
J Neurosci ; 43(26): 4856-4866, 2023 06 28.
Article in English | MEDLINE | ID: mdl-37127361

ABSTRACT

Listening in noisy environments requires effort (the active engagement of attention and other cognitive abilities) as well as increased arousal. The ability to separately quantify the contribution of these components is key to understanding the dynamics of effort and how it may change across listening situations and in certain populations. We concurrently measured two types of ocular data in young participants (both sexes): pupil dilation (PD; thought to index arousal aspects of effort) and microsaccades (MS; hypothesized to reflect automatic visual exploratory sampling), while they performed a speech-in-noise task under high- (HL) and low- (LL) listening load conditions. Sentences were manipulated so that the behaviorally relevant information (keywords) appeared at the end (Experiment 1) or beginning (Experiment 2) of the sentence, resulting in different temporal demands on focused attention. In line with previous reports, PD effects were associated with increased dilation under load. We observed a sustained difference between HL and LL conditions, consistent with increased phasic and tonic arousal. Importantly, we show that MS rate was also modulated by listening load. This was manifested as a reduced MS rate in HL relative to LL. Critically, in contrast to the sustained difference seen for PD, MS effects were localized in time, specifically during periods when demands on auditory attention were greatest. These results demonstrate that auditory selective attention interfaces with the mechanisms controlling MS generation, establishing MS as an informative measure, complementary to PD, with which to quantify the temporal dynamics of auditory attentional processing under effortful listening conditions.

SIGNIFICANCE STATEMENT Listening effort, reflecting the "cognitive bandwidth" deployed to effectively process sound in adverse environments, contributes critically to listening success. Understanding listening effort and the processes involved in its allocation is a major challenge in auditory neuroscience. Here, we demonstrate that microsaccade rate can be used to index a specific subcomponent of listening effort, the allocation of instantaneous auditory attention, that is distinct from the modulation of arousal indexed by pupil dilation (currently the dominant measure of listening effort). These results reveal the push-pull process through which auditory attention interfaces with the (visual) attention network that controls microsaccades, establishing microsaccades as a powerful tool for measuring auditory attention and its deficits.


Subjects
Pupil, Speech Perception, Male, Female, Humans, Auditory Perception, Noise, Arousal
9.
Trends Hear ; 25: 23312165211025941, 2021.
Article in English | MEDLINE | ID: mdl-34170748

ABSTRACT

Online recruitment platforms are increasingly used for experimental research. Crowdsourcing is associated with numerous benefits but also notable constraints, including lack of control over participants' environment and engagement. In the context of auditory experiments, these limitations may be particularly detrimental to threshold-based tasks that require effortful listening. Here, we ask whether incorporating a performance-based monetary bonus improves speech reception performance of online participants. In two experiments, participants performed an adaptive matrix-type speech-in-noise task (where listeners select two key words out of closed sets). In Experiment 1, our results revealed worse performance in online (N = 49) compared with in-lab (N = 81) groups. Specifically, relative to the in-lab cohort, significantly fewer participants in the online group achieved very low thresholds. In Experiment 2 (N = 200), we show that a monetary reward improved listeners' thresholds to levels similar to those observed in the lab setting. Overall, the results suggest that providing a small performance-based bonus increases participants' task engagement, facilitating a more accurate estimation of auditory ability under challenging listening conditions.
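The abstract describes an adaptive speech-in-noise procedure that converges on a listener's threshold. The exact adaptive rule is not given above, so the following is only a generic 1-down/1-up staircase sketch of how such a track behaves: SNR drops after a correct response and rises after an incorrect one, converging on the 50%-correct point.

```python
def staircase(responses, start_snr=0.0, step=2.0):
    """Generic 1-down/1-up adaptive track (illustrative only; not the
    rule used in the study). `responses` is a sequence of booleans,
    one per trial; returns the SNR trajectory in dB."""
    snr, track = start_snr, [start_snr]
    for correct in responses:
        snr += -step if correct else step  # harder after correct, easier after error
        track.append(snr)
    return track
```

In practice the threshold estimate is usually the mean SNR over the final reversals of such a track, but the trajectory itself already shows the convergence behaviour.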


Subjects
Speech Perception, Auditory Perception, Auditory Threshold, Humans, Noise, Reward
10.
J Neurosci ; 2021 Jun 01.
Article in English | MEDLINE | ID: mdl-34083259

ABSTRACT

The brain is highly sensitive to auditory regularities and exploits the predictable order of sounds in many situations, from parsing complex auditory scenes, to the acquisition of language. To understand the impact of stimulus predictability on perception, it is important to determine how the detection of predictable structure influences processing and attention. Here we use pupillometry to gain insight into the effect of sensory regularity on arousal. Pupillometry is a commonly used measure of salience and processing effort, with more perceptually salient or perceptually demanding stimuli consistently associated with larger pupil diameters. In two experiments we tracked human listeners' pupil dynamics while they listened to sequences of 50 ms tone pips of different frequencies. The order of the tone pips was either random, contained deterministic (fully predictable) regularities (Experiment 1, n = 18, 11 female) or had a probabilistic regularity structure (Experiment 2, n = 20, 17 female). The sequences were rapid, preventing conscious tracking of sequence structure, thus allowing us to focus on the automatic extraction of different types of regularities. We hypothesized that if regularity facilitates processing by reducing processing demands, a smaller pupil diameter would be seen in response to regular relative to random patterns. Conversely, if regularity is associated with heightened arousal and attention (i.e. engages processing resources) the opposite pattern would be expected. In both experiments we observed a smaller sustained (tonic) pupil diameter for regular compared with random sequences, consistent with the former hypothesis and confirming that predictability facilitates sequence processing.

SIGNIFICANCE STATEMENT The brain is highly sensitive to auditory regularities. To appreciate the impact that the presence of predictability has on perception, we need to better understand how a predictable structure influences processing and attention. We recorded listeners' pupil responses to sequences of tones that followed either a predictable or unpredictable pattern, as the pupil can be used to implicitly tap into these different cognitive processes. We found that the pupil showed a smaller sustained diameter to predictable sequences, indicating that predictability eased processing rather than boosted attention. The findings suggest that the pupil response can be used to study the automatic extraction of regularities, and that the effects are most consistent with predictability helping the listener to efficiently process upcoming sounds.

11.
PLoS Comput Biol ; 17(5): e1008995, 2021 May.
Article in English | MEDLINE | ID: mdl-34038404

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pcbi.1008304.].

13.
Hear Res ; 400: 108111, 2021 02.
Article in English | MEDLINE | ID: mdl-33333425

ABSTRACT

The notion that sensitivity to the statistical structure of the environment is pivotal to perception has recently garnered considerable attention. Here we investigated this issue in the context of hearing. Building on previous work (Sohoglu and Chait, 2016a, eLife), stimuli were artificial 'soundscapes' populated by multiple (up to 14) simultaneous streams ('auditory objects') composed of tone-pip sequences, each with a distinct frequency and pattern of amplitude modulation. Sequences were either temporally regular or random. We show that listeners' ability to detect the abrupt appearance or disappearance of a stream is facilitated when scene streams were characterized by a temporally regular fluctuation pattern. The regularity of the changing stream as well as that of the background (non-changing) streams contribute independently to this effect. Remarkably, listeners benefit from regularity even when they are not consciously aware of it. These findings establish that perception of complex acoustic scenes relies on the availability of detailed representations of the regularities automatically extracted from multiple concurrent streams.


Subjects
Auditory Perception, Acoustic Stimulation, Attention, Hearing
14.
Behav Res Methods ; 53(4): 1551-1562, 2021 08.
Article in English | MEDLINE | ID: mdl-33300103

ABSTRACT

Online experimental platforms can be used as an alternative to, or complement, lab-based research. However, when conducting auditory experiments via online methods, the researcher has limited control over the participants' listening environment. We offer a new method to probe one aspect of that environment: headphone use. Headphones not only provide better control of sound presentation but can also "shield" the listener from background noise. Here we present a rapid (< 3 min) headphone screening test based on Huggins Pitch (HP), a perceptual phenomenon that can only be detected when stimuli are presented dichotically. We validate this test using a cohort of "Trusted" online participants who completed the test using both headphones and loudspeakers. The same participants were also used to test an existing headphone test (AP test; Woods et al., 2017, Attention, Perception, & Psychophysics). We demonstrate that compared to the AP test, the HP test has a higher selectivity for headphone users, rendering it a compelling alternative to existing methods. Overall, the new HP test correctly detects 80% of headphone users and has a false-positive rate of 20%. Moreover, we demonstrate that combining the HP test with an additional test (either the AP test or an alternative based on a beat test, BT) can lower the false-positive rate to ~7%. This should be useful in situations where headphone use is particularly critical (e.g., dichotic or spatial manipulations). Code for implementing the new tests is publicly available in JavaScript and through Gorilla (gorilla.sc).
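The dichotic principle behind a Huggins Pitch stimulus can be sketched in a few lines: both ears receive the same white noise, except for a narrow band whose phase is inverted in one ear. Over headphones a faint pitch at the band's centre frequency emerges; over a loudspeaker the two channels sum to ordinary noise. The published test is implemented in JavaScript/Gorilla; this Python sketch, with illustrative parameter choices (centre frequency, bandwidth), only demonstrates the signal construction.

```python
import numpy as np

def huggins_pitch(fs=44100, dur=1.0, f0=600.0, bw_ratio=0.06, seed=0):
    """Generate a stereo Huggins Pitch stimulus: identical noise in both
    channels except for a 180-degree phase shift in a narrow band around
    f0 in the right channel. Returns an (n_samples, 2) array in [-1, 1]."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = np.abs(freqs - f0) < f0 * bw_ratio / 2
    spec_r = spec.copy()
    spec_r[band] *= -1                      # invert phase only in the band
    left = noise
    right = np.fft.irfft(spec_r, n)
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))  # normalise peak amplitude
```

Because the interaural phase difference is the only cue, a listener on loudspeakers (where the channels mix acoustically) cannot hear the pitch, which is exactly what makes the phenomenon usable as a headphone check.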


Subjects
Auditory Perception, Noise, Acoustic Stimulation, Humans, Psychophysics, Sound
15.
Hear Res ; 399: 108074, 2021 01.
Article in English | MEDLINE | ID: mdl-33041093

ABSTRACT

The auditory system plays a critical role in supporting our ability to detect abrupt changes in our surroundings. Here we study how this capacity is affected in the course of healthy ageing. Artificial acoustic 'scenes', populated by multiple concurrent streams of pure tones ('sources'), were used to capture the challenges of listening in complex acoustic environments. Two scene conditions were included: REG scenes consisted of sources characterized by a regular temporal structure; matched RAND scenes contained sources which were temporally random. Changes, manifested as the abrupt disappearance of one of the sources, were introduced to a subset of the trials, and participants ('young' group N = 41, age 20-38 years; 'older' group N = 41, age 60-82 years) were instructed to monitor the scenes for these events. Previous work demonstrated that young listeners exhibit better change detection performance in REG scenes, reflecting sensitivity to temporal structure. Here we sought to determine: (1) whether 'baseline' change detection ability (i.e. in RAND scenes) is affected by age; (2) whether ageing affects listeners' sensitivity to temporal regularity; (3) how change detection capacity relates to listeners' hearing and cognitive profile (a battery of tests that capture hearing and cognitive abilities hypothesized to be affected by ageing). The results demonstrated that healthy ageing is associated with reduced sensitivity to abrupt scene changes in RAND scenes, but that performance does not correlate with age or standard audiological measures such as pure tone audiometry or speech-in-noise performance. Remarkably, older listeners' change detection performance improved substantially (up to the level exhibited by young listeners) in REG relative to RAND scenes. This suggests that the ability to extract and track the regularity associated with scene sources, even in crowded acoustic environments, is relatively preserved in older listeners.


Subjects
Healthy Aging, Acoustic Stimulation, Acoustics, Adult, Aged, Aged 80 and over, Auditory Perception, Hearing, Humans, Middle Aged, Young Adult
16.
PLoS Comput Biol ; 16(11): e1008304, 2020 11.
Article in English | MEDLINE | ID: mdl-33147209

ABSTRACT

Statistical learning and probabilistic prediction are fundamental processes in auditory cognition. A prominent computational model of these processes is Prediction by Partial Matching (PPM), a variable-order Markov model that learns by internalizing n-grams from training sequences. However, PPM has limitations as a cognitive model: in particular, it has a perfect memory that weights all historic observations equally, which is inconsistent with memory capacity constraints and recency effects observed in human cognition. We address these limitations with PPM-Decay, a new variant of PPM that introduces a customizable memory decay kernel. In three studies (one with artificially generated sequences, one with chord sequences from Western music, and one with new behavioral data from an auditory pattern detection experiment), we show how this decay kernel improves the model's predictive performance for sequences whose underlying statistics change over time, and enables the model to capture effects of memory constraints on auditory pattern detection. The resulting model is available in our new open-source R package, ppm (https://github.com/pmcharrison/ppm).
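The core idea, n-gram counts weighted by recency rather than kept forever, can be sketched in a few lines. This toy Python class assumes a single exponential decay kernel and invented names; it is not the published PPM-Decay model (which the authors released as the R package `ppm` and which uses a richer multi-stage kernel, escape probabilities, and variable-order blending).

```python
from collections import defaultdict

class DecayingNGramModel:
    """Toy sketch of a PPM-style predictor whose n-gram counts decay
    exponentially with time, so recent observations dominate."""

    def __init__(self, order=2, half_life=10.0):
        self.order = order
        self.half_life = half_life       # decay half-life, in events
        self.counts = defaultdict(dict)  # context -> {symbol: (weight, last_t)}
        self.t = 0

    def _decayed(self, weight, last_t):
        # Exponential decay: weight halves every `half_life` events.
        return weight * 0.5 ** ((self.t - last_t) / self.half_life)

    def predict(self, context, symbol):
        """P(symbol | context) under decayed counts, add-one smoothed."""
        ctx = tuple(context[-self.order:])
        table = self.counts.get(ctx, {})
        decayed = {s: self._decayed(w, lt) for s, (w, lt) in table.items()}
        total = sum(decayed.values())
        vocab = max(len(decayed), 1)
        return (decayed.get(symbol, 0.0) + 1.0) / (total + vocab)

    def update(self, context, symbol):
        self.t += 1
        ctx = tuple(context[-self.order:])
        w, lt = self.counts[ctx].get(symbol, (0.0, self.t))
        self.counts[ctx][symbol] = (self._decayed(w, lt) + 1.0, self.t)
```

After training on a repeating sequence, the model assigns a higher probability to the observed continuation than to an unseen one, and because old counts decay, it adapts when the sequence statistics change, which is the property the decay kernel is introduced to capture.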


Subjects
Auditory Perception, Computer Simulation, Memory, Algorithms, Humans, Music
17.
J Acoust Soc Am ; 147(6): 3814, 2020 06.
Article in English | MEDLINE | ID: mdl-32611180

ABSTRACT

A study by Tóth, Kocsis, Háden, Szerafin, Shinn-Cunningham, and Winkler [Neuroimage 141, 108-119 (2016)] reported that spatial cues (such as interaural time differences, ITDs) that differentiate the perceived sound source directions of a target tone sequence (figure) from simultaneous distracting tones (background) did not improve the ability of participants to detect the target sequence. The present study aims to investigate more systematically whether spatially separating a complex auditory "figure" from the background auditory stream may enhance the detection of a target in a cluttered auditory scene. Results of the presented experiment suggest that the previous negative results arose because of the specific experimental conditions tested. Here the authors find that ITDs provide a clear benefit for detecting a target tone sequence amid a mixture of other simultaneous tone bursts.


Subjects
Cues (Psychology), Sound Localization, Acoustic Stimulation, Auditory Perception, Humans, Sound
18.
Elife ; 9, 2020 May 18.
Article in English | MEDLINE | ID: mdl-32420868

ABSTRACT

Memory, on multiple timescales, is critical to our ability to discover the structure of our surroundings, and efficiently interact with the environment. We combined behavioural manipulation and modelling to investigate the dynamics of memory formation for rarely reoccurring acoustic patterns. In a series of experiments, participants detected the emergence of regularly repeating patterns within rapid tone-pip sequences. Unbeknownst to them, a few patterns reoccurred every ~3 min. All sequences consisted of the same 20 frequencies and were distinguishable only by the order of tone-pips. Despite this, reoccurring patterns were associated with a rapidly growing detection-time advantage over novel patterns. This effect was implicit, robust to interference, and persisted for 7 weeks. The results implicate an interplay between short (a few seconds) and long-term (over many minutes) integration in memory formation and demonstrate the remarkable sensitivity of the human auditory system to sporadically reoccurring structure within the acoustic environment.


Patterns of sound, such as the noise of footsteps approaching or a person speaking, often provide valuable information. To recognize these patterns, our memory must hold each part of the sound sequence long enough to perceive how they fit together. This ability is necessary in many situations: from discriminating between random noises in the woods to understanding language and appreciating music. Memory traces left by each sound are crucial for discovering new patterns and recognizing patterns we have previously encountered. However, it remained unclear whether sounds that reoccur sporadically can stick in our memory, and under what conditions this happens. To answer this question, Bianco et al. conducted a series of experiments where human volunteers listened to rapid sequences of 20 random tones interspersed with repeated patterns. Participants were asked to press a button as soon as they detected a repeating pattern. Most of the patterns were new, but some reoccurred every three minutes or so, unbeknownst to the listener. Bianco et al. found that participants became progressively faster at recognizing a repeated pattern each time it reoccurred, gradually forming an enduring memory which lasted at least seven weeks after the initial training. The volunteers did not recognize these retained patterns in other tests, suggesting they were unaware of these memories. This suggests that as well as remembering meaningful sounds, like the melody of a song, people can also unknowingly memorize the complex pattern of arbitrary sounds, including ones they rarely encounter. These findings provide new insights into how humans discover and recognize sound patterns, which could help treat diseases associated with impaired memory and hearing. More studies are needed to understand what exactly happens in the brain as these memories of sound patterns are created, and whether this also happens for other senses and in other species.


Subjects
Acoustic Stimulation/methods, Auditory Perception/physiology, Long-Term Memory/physiology, Adult, Female, Humans, Male, Memory and Learning Tests, Reaction Time/physiology, Young Adult
19.
Neuroimage ; 217: 116661, 2020 08 15.
Article in English | MEDLINE | ID: mdl-32081785

ABSTRACT

Using fMRI and multivariate pattern analysis, we determined whether spectral and temporal acoustic features are represented by independent or integrated multivoxel codes in human cortex. Listeners heard band-pass noise varying in frequency (spectral) and amplitude-modulation (AM) rate (temporal) features. In the superior temporal plane, changes in multivoxel activity due to frequency were largely invariant with respect to AM rate (and vice versa), consistent with an independent representation. In contrast, in posterior parietal cortex, multivoxel representation was exclusively integrated and tuned to specific conjunctions of frequency and AM features (albeit weakly). Direct between-region comparisons show that whereas independent coding of frequency weakened with increasing levels of the hierarchy, such a progression for AM and integrated coding was less fine-grained and only evident in the higher hierarchical levels from non-core to parietal cortex (with AM coding weakening and integrated coding strengthening). Our findings support the notion that primary auditory cortex can represent spectral and temporal acoustic features in an independent fashion and suggest a role for parietal cortex in feature integration and the structuring of sensory input.


Subjects
Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Auditory Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Algorithms, Brain Mapping, Cluster Analysis, Female, Functional Laterality/physiology, Humans, Magnetic Resonance Imaging, Male, Multivariate Analysis, Noise, Parietal Lobe/diagnostic imaging, Parietal Lobe/physiology, Young Adult
20.
Sci Rep ; 9(1): 15570, 2019 10 30.
Article in English | MEDLINE | ID: mdl-31666553

ABSTRACT

Human listeners exhibit marked sensitivity to familiar music, perhaps most readily revealed by popular "name that tune" games, in which listeners often succeed in recognizing a familiar song based on extremely brief presentation. In this work, we used electroencephalography (EEG) and pupillometry to reveal the temporal signatures of the brain processes that allow differentiation between a familiar, well-liked piece of music and an unfamiliar one. In contrast to previous work, which has quantified gradual changes in pupil diameter (the so-called "pupil dilation response"), here we focus on the occurrence of pupil dilation events. This approach is substantially more sensitive in the temporal domain and allowed us to tap early activity within the putative salience network. Participants (N = 10) passively listened to snippets (750 ms) of a familiar, personally relevant song and an acoustically matched, unfamiliar song, presented in random order. A group of control participants (N = 12), who were unfamiliar with all of the songs, was also tested. We reveal a rapid differentiation between snippets from familiar and unfamiliar songs: pupil responses showed a greater dilation rate to familiar music from 100-300 ms post-stimulus-onset, consistent with a faster activation of the autonomic salience network. Brain responses measured with EEG showed a later differentiation between familiar and unfamiliar music from 350 ms post onset. Remarkably, the cluster pattern identified in the EEG response is very similar to that commonly found in classic old/new memory retrieval paradigms, suggesting that the recognition of brief, randomly presented music snippets draws on similar processes.


Subjects
Brain/physiology, Electroencephalography, Music/psychology, Pupil/physiology, Recognition (Psychology), Auditory Perception, Female, Humans, Male, Young Adult