Results 1 - 17 of 17
1.
J Neurosci ; 44(14), 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38350998

ABSTRACT

Human listeners possess an innate capacity to discern patterns within rapidly unfolding sensory input. Core questions guiding ongoing research focus on the mechanisms through which these representations are acquired and whether the brain prioritizes or suppresses predictable sensory signals. Previous work, using fast auditory sequences (tone-pips presented at a rate of 20 Hz), revealed sustained response effects that appear to track the dynamic predictability of the sequence. Here, we extend the investigation to slower sequences (4 Hz), permitting the isolation of responses to individual tones. Stimuli were 50 ms tone-pips, ordered into random (RND) and regular (REG; a repeating pattern of 10 frequencies) sequences. Two timing profiles were created: in "fast" sequences, tone-pips were presented in direct succession (20 Hz); in "slow" sequences, tone-pips were separated by a 200 ms silent gap (4 Hz). Naive participants (N = 22; both sexes) passively listened to these sequences while brain responses were recorded using magnetoencephalography (MEG). Results unveiled a heightened magnitude of sustained brain responses in REG when compared to RND patterns. This manifested from three tones after the onset of the pattern repetition, even in the context of slower sequences characterized by extended pattern durations (2,500 ms). This observation underscores the remarkable implicit sensitivity of the auditory brain to acoustic regularities. Importantly, brain responses evoked by single tones exhibited the opposite pattern: stronger responses to tones in RND than in REG sequences. The demonstration of simultaneous but opposing sustained and evoked response effects reveals concurrent processes that shape the representation of unfolding auditory patterns.
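The stimulus design above can be sketched in a few lines of Python. This is an illustration only: the frequency pool, sequence lengths, and function names are assumptions for the sketch, not the study's actual stimulus code.

```python
import random

def make_sequence(pool, kind, n_tones=50, cycle_len=10, rng=None):
    """Return a list of tone-pip frequencies: RND draws independently
    from the pool; REG cycles a fixed random ordering of cycle_len
    frequencies (the repeating pattern)."""
    rng = rng or random.Random()
    if kind == "RND":
        return [rng.choice(pool) for _ in range(n_tones)]
    cycle = rng.sample(pool, cycle_len)  # one repeating 10-frequency pattern
    return [cycle[i % cycle_len] for i in range(n_tones)]

def onsets_ms(n_tones, tone_ms=50, gap_ms=0):
    """Tone onset times in ms: gap_ms=0 gives the 'fast' 20 Hz profile,
    gap_ms=200 the 'slow' 4 Hz profile."""
    return [i * (tone_ms + gap_ms) for i in range(n_tones)]
```

With `gap_ms=0`, each 50 ms tone-pip starts 50 ms after the previous one (20 Hz); with `gap_ms=200`, onsets fall every 250 ms (4 Hz).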


Subject(s)
Auditory Cortex , Auditory Perception , Male , Female , Humans , Acoustic Stimulation/methods , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Brain/physiology , Magnetoencephalography , Auditory Cortex/physiology
2.
Curr Biol ; 34(2): 444-450.e5, 2024 01 22.
Article in English | MEDLINE | ID: mdl-38176416

ABSTRACT

The appreciation of music is a universal trait of humankind.1,2,3 Evidence supporting this notion includes the ubiquity of music across cultures4,5,6,7 and the natural predisposition toward music that humans display early in development.8,9,10 Are we musical animals because of species-specific predispositions? This question cannot be answered by relying on cross-cultural or developmental studies alone, as these cannot rule out enculturation.11 Instead, it calls for cross-species experiments testing whether homologous neural mechanisms underlying music perception are present in non-human primates. We present music to two rhesus monkeys, reared without musical exposure, while recording electroencephalography (EEG) and pupillometry. Monkeys exhibit higher engagement and neural encoding of expectations based on the previously seeded musical context when passively listening to real music as opposed to shuffled controls. We then compare human and monkey neural responses to the same stimuli and find a species-dependent contribution of two fundamental musical features (pitch and timing12) in generating expectations: while timing- and pitch-based expectations13 are similarly weighted in humans, monkeys rely on timing rather than pitch. Together, these results shed light on the phylogeny of music perception. They highlight monkeys' capacity for processing temporal structures beyond plain acoustic processing, and they identify a species-dependent contribution of time- and pitch-related features to the neural encoding of musical expectations.


Subject(s)
Music , Animals , Pitch Perception/physiology , Motivation , Electroencephalography/methods , Primates , Acoustic Stimulation , Auditory Perception/physiology
4.
Curr Res Neurobiol ; 5: 100115, 2023.
Article in English | MEDLINE | ID: mdl-38020808

ABSTRACT

Any listening task, from sound recognition to sound-based communication, rests on auditory memory, which is known to decline in healthy ageing. However, how this decline maps onto multiple components and stages of auditory memory remains poorly characterised. In an online unsupervised longitudinal study, we tested ageing effects on implicit auditory memory for rapid tone patterns. The test required participants (younger adults aged 20-30 and older adults aged 60-70) to quickly respond to rapid regularly repeating patterns emerging from random sequences. Patterns were novel in most trials (REGn), but unbeknownst to the participants, a few distinct patterns reoccurred identically throughout the sessions (REGr). After correcting for processing speed, the response times (RT) to REGn should reflect the information held in echoic and short-term memory before detecting the pattern; long-term memory formation and retention should be reflected by the RT advantage (RTA) to REGr vs REGn, which is expected to grow with exposure. Older participants were slower than younger adults in detecting REGn and exhibited a smaller RTA to REGr. Computational simulations using a model of auditory sequence memory indicated that these effects reflect age-related limitations both in early and long-term memory stages. In contrast to ageing-related accelerated forgetting of verbal material, here older adults maintained stable memory traces for REGr patterns up to 6 months after the first exposure. The results demonstrate that ageing is associated with reduced short-term memory and long-term memory formation for tone patterns, but not with forgetting, even over surprisingly long timescales.
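The two behavioural indices described here, speed-corrected response times and the RT advantage (RTA), can be expressed compactly. A minimal sketch, assuming simple per-participant z-scoring as the speed correction; the study's actual correction and function names may differ.

```python
def zscore(rts):
    """Normalise one participant's RTs to correct for overall
    processing speed differences between individuals."""
    m = sum(rts) / len(rts)
    sd = (sum((x - m) ** 2 for x in rts) / len(rts)) ** 0.5
    return [(x - m) / sd for x in rts]

def rt_advantage(rt_regn, rt_regr):
    """RT advantage: mean RT to novel patterns (REGn) minus mean RT to
    reoccurring patterns (REGr); positive values indicate a memory benefit."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_regn) - mean(rt_regr)
```

Tracking `rt_advantage` across sessions would then index long-term memory formation, as a growing RTA reflects increasingly fast recognition of the reoccurring patterns.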

5.
Front Psychol ; 14: 1175682, 2023.
Article in English | MEDLINE | ID: mdl-38034280

ABSTRACT

Predictability plays an important role in the experience of musical pleasure. By leveraging expectations, music induces pleasure through tension and surprise. However, musical predictions draw on both prior knowledge and immediate context. Similarly, musical pleasure, which has been shown to depend on predictability, may also vary relative to the individual and context. Although research has demonstrated that both long-term knowledge and stimulus features shape expectations, it is unclear how perceptions of a melody are influenced by comparisons to other music pieces heard in the same context. To examine the effects of context, we compared how listeners' judgments of two distinct sets of stimuli differed when they were presented alone or in combination. Stimuli were excerpts from a repertoire of Western music and a set of experimenter-created melodies. Separate groups of participants rated liking and predictability for each set of stimuli alone and in combination. We found that when heard together, the Repertoire stimuli were more liked and rated as less predictable than if they were heard alone, with the opposite pattern being observed for the Experimental stimuli. This effect was driven by a change in ratings between the Alone and Combined conditions for each stimulus set. These findings demonstrate a context-based shift of predictability ratings and derived pleasure, suggesting that judgments stem not only from the physical properties of the stimulus, but also vary relative to other options available in the immediate context.

6.
Trends Hear ; 27: 23312165231190688, 2023.
Article in English | MEDLINE | ID: mdl-37828868

ABSTRACT

A growing literature is demonstrating a link between working memory (WM) and speech-in-noise (SiN) perception. However, the nature of this correlation, and which components of WM might underlie it, are being debated. We investigated how SiN reception links with auditory sensory memory (aSM) - the low-level processes that support the short-term maintenance of temporally unfolding sounds. A large sample of old (N = 199, 60-79 yo) and young (N = 149, 20-35 yo) participants was recruited online and performed a coordinate response measure-based speech-in-babble task that taps listeners' ability to track a speech target in background noise. We used two tasks to investigate implicit and explicit aSM. Both were based on tone patterns overlapping in processing time scales with speech (presentation rate of tones 20 Hz; of patterns 2 Hz). We hypothesised that a link between SiN and aSM may be particularly apparent in older listeners due to age-related reduction in both SiN reception and aSM. We confirmed impaired SiN reception in the older cohort and demonstrated reduced aSM performance in those listeners. However, SiN and aSM did not share variability. Across the two age groups, SiN performance was predicted by a binaural processing test and age. The results suggest that previously observed links between WM and SiN may relate to the executive components and other cognitive demands of the tasks used. This finding helps to constrain the search for the perceptual and cognitive factors that explain individual variability in SiN performance.


Subject(s)
Speech Perception , Speech , Humans , Aged , Speech/physiology , Hearing/physiology , Noise/adverse effects , Speech Perception/physiology , Memory, Short-Term/physiology
7.
Cereb Cortex ; 32(18): 3878-3895, 2022 09 04.
Article in English | MEDLINE | ID: mdl-34965579

ABSTRACT

Complex sequential behaviors, such as speaking or playing music, entail flexible rule-based chaining of single acts. However, it remains unclear how the brain translates abstract structural rules into movements. We combined music production with multimodal neuroimaging to dissociate high-level structural and low-level motor planning. Pianists played novel musical chord sequences on a muted MR-compatible piano by imitating a model hand on screen. Chord sequences were manipulated in terms of musical harmony and context length to assess structural planning, and in terms of fingers used for playing to assess motor planning. A model of probabilistic sequence processing confirmed temporally extended dependencies between chords, as opposed to local dependencies between movements. Violations of structural plans activated the left inferior frontal and middle temporal gyrus, and the fractional anisotropy of the ventral pathway connecting these two regions positively predicted behavioral measures of structural planning. A bilateral frontoparietal network was instead activated by violations of motor plans. Both structural and motor networks converged in lateral prefrontal cortex, with anterior regions contributing to musical structure building, and posterior areas to movement planning. These results establish a promising approach to study sequence production at different levels of action representation.


Subject(s)
Music , Brain , Hand , Movement , Prefrontal Cortex/diagnostic imaging
8.
Trends Hear ; 25: 23312165211025941, 2021.
Article in English | MEDLINE | ID: mdl-34170748

ABSTRACT

Online recruitment platforms are increasingly used for experimental research. Crowdsourcing is associated with numerous benefits but also notable constraints, including lack of control over participants' environment and engagement. In the context of auditory experiments, these limitations may be particularly detrimental to threshold-based tasks that require effortful listening. Here, we ask whether incorporating a performance-based monetary bonus improves speech reception performance of online participants. In two experiments, participants performed an adaptive matrix-type speech-in-noise task (where listeners select two key words out of closed sets). In Experiment 1, our results revealed worse performance in online (N = 49) compared with in-lab (N = 81) groups. Specifically, relative to the in-lab cohort, significantly fewer participants in the online group achieved very low thresholds. In Experiment 2 (N = 200), we show that a monetary reward improved listeners' thresholds to levels similar to those observed in the lab setting. Overall, the results suggest that providing a small performance-based bonus increases participants' task engagement, facilitating a more accurate estimation of auditory ability under challenging listening conditions.


Subject(s)
Speech Perception , Auditory Perception , Auditory Threshold , Humans , Noise , Reward
9.
PLoS Comput Biol ; 17(5): e1008995, 2021 May.
Article in English | MEDLINE | ID: mdl-34038404

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pcbi.1008304.].

10.
Behav Res Methods ; 53(4): 1551-1562, 2021 08.
Article in English | MEDLINE | ID: mdl-33300103

ABSTRACT

Online experimental platforms can be used as an alternative to, or complement, lab-based research. However, when conducting auditory experiments via online methods, the researcher has limited control over the participants' listening environment. We offer a new method to probe one aspect of that environment, headphone use. Headphones not only provide better control of sound presentation but can also "shield" the listener from background noise. Here we present a rapid (< 3 min) headphone screening test based on Huggins Pitch (HP), a perceptual phenomenon that can only be detected when stimuli are presented dichotically. We validate this test using a cohort of "Trusted" online participants who completed the test using both headphones and loudspeakers. The same participants were also used to test an existing headphone test (AP test; Woods et al., 2017, Attention, Perception, & Psychophysics). We demonstrate that compared to the AP test, the HP test has a higher selectivity for headphone users, rendering it a compelling alternative to existing methods. Overall, the new HP test correctly detects 80% of headphone users and has a false-positive rate of 20%. Moreover, we demonstrate that combining the HP test with an additional test-either the AP test or an alternative based on a beat test (BT)-can lower the false-positive rate to ~ 7%. This should be useful in situations where headphone use is particularly critical (e.g., dichotic or spatial manipulations). Code for implementing the new tests is publicly available in JavaScript and through Gorilla (gorilla.sc).
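A Huggins Pitch stimulus can be synthesised by inverting the interaural phase of a noise within a narrow frequency band. The sketch below uses an FFT-based sign flip; the centre frequency, bandwidth, and parameter names are illustrative assumptions, not necessarily the published test's settings (the authors' actual implementation is the JavaScript code they link).

```python
import numpy as np

def huggins_stimulus(fs=44100, dur=1.0, f0=600.0, bw_frac=0.16, seed=0):
    """Left ear: white noise. Right ear: the same noise with its phase
    inverted within a narrow band around f0. Only dichotic (headphone)
    presentation yields the faint pitch percept at f0."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    left = rng.standard_normal(n)
    spec = np.fft.rfft(left)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    band = (freqs > f0 * (1 - bw_frac / 2)) & (freqs < f0 * (1 + bw_frac / 2))
    spec[band] *= -1            # 180-degree phase shift inside the band
    right = np.fft.irfft(spec, n)
    return left, right
```

Over headphones the two channels differ only inside the band around `f0`, where binaural interaction produces the pitch; over loudspeakers the channels mix acoustically and the cue vanishes, which is what makes the phenomenon diagnostic of headphone use.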


Subject(s)
Auditory Perception , Noise , Acoustic Stimulation , Humans , Psychophysics , Sound
11.
PLoS Comput Biol ; 16(11): e1008304, 2020 11.
Article in English | MEDLINE | ID: mdl-33147209

ABSTRACT

Statistical learning and probabilistic prediction are fundamental processes in auditory cognition. A prominent computational model of these processes is Prediction by Partial Matching (PPM), a variable-order Markov model that learns by internalizing n-grams from training sequences. However, PPM has limitations as a cognitive model: in particular, it has a perfect memory that weights all historic observations equally, which is inconsistent with memory capacity constraints and recency effects observed in human cognition. We address these limitations with PPM-Decay, a new variant of PPM that introduces a customizable memory decay kernel. In three studies (one with artificially generated sequences, one with chord sequences from Western music, and one with new behavioral data from an auditory pattern detection experiment), we show how this decay kernel improves the model's predictive performance for sequences whose underlying statistics change over time, and enables the model to capture effects of memory constraints on auditory pattern detection. The resulting model is available in our new open-source R package, ppm (https://github.com/pmcharrison/ppm).
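The core idea of a memory decay kernel can be illustrated with a deliberately simplified model. The toy class below is a first-order (bigram) predictor whose counts decay exponentially; real PPM blends multiple orders with an escape mechanism, and PPM-Decay's kernel is more elaborate, so treat this purely as a sketch of how decay biases prediction toward recent statistics.

```python
from collections import defaultdict

class DecayingBigram:
    """Toy stand-in for PPM-Decay: a bigram model whose counts decay
    exponentially with each new observation, so recent context
    dominates prediction (a simple recency effect)."""
    def __init__(self, decay=0.9, alphabet=None):
        self.decay = decay
        self.alphabet = set(alphabet or [])
        self.counts = defaultdict(lambda: defaultdict(float))

    def update(self, prev, sym):
        self.alphabet.add(sym)
        for ctx in self.counts.values():   # age every stored count
            for k in ctx:
                ctx[k] *= self.decay
        self.counts[prev][sym] += 1.0

    def prob(self, prev, sym, smoothing=1.0):
        """Add-one-smoothed conditional probability P(sym | prev)."""
        ctx = self.counts[prev]
        total = sum(ctx.values())
        v = max(len(self.alphabet), 1)
        return (ctx.get(sym, 0.0) + smoothing) / (total + smoothing * v)
```

Setting `decay=1.0` recovers the "perfect memory" behaviour the abstract criticises; values below 1 progressively forget older statistics.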


Subject(s)
Auditory Perception , Computer Simulation , Memory , Algorithms , Humans , Music
12.
Brain Struct Funct ; 225(7): 1997-2015, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32591927

ABSTRACT

The ability to generate complex hierarchical structures is a crucial component of human cognition which can be expressed in the musical domain in the form of hierarchical melodic relations. The neural underpinnings of this ability have been investigated by comparing the perception of well-formed melodies with unexpected sequences of tones. However, these contrasts do not target specifically the representation of rules generating hierarchical structure. Here, we present a novel paradigm in which identical melodic sequences are generated in four steps, according to three different rules: the Recursive rule, generating new hierarchical levels at each step; the Iterative rule, adding tones within a fixed hierarchical level without generating new levels; and a control rule that simply repeats the third step. Using fMRI, we compared brain activity across these rules while participants imagined the fourth step after listening to the third (generation phase), and while they listened to a fourth step (test sound phase) that was either well-formed or a violation. We found that, in comparison with Repetition and Iteration, imagining the fourth step using the Recursive rule activated the superior temporal gyrus (STG). During the test sound phase, we found fronto-temporo-parietal activity and hippocampal de-activation when processing violations, but no differences between rules. STG activation during the generation phase suggests that generating new hierarchical levels from previous steps might rely on retrieving appropriate melodic hierarchy schemas. Previous findings highlighting the role of hippocampus and inferior frontal gyrus may reflect processing of unexpected melodic sequences, rather than hierarchy generation per se.


Subject(s)
Auditory Perception/physiology , Brain/diagnostic imaging , Music , Adult , Brain/physiology , Brain Mapping , Cognition/physiology , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
13.
Elife ; 9, 2020 05 18.
Article in English | MEDLINE | ID: mdl-32420868

ABSTRACT

Memory, on multiple timescales, is critical to our ability to discover the structure of our surroundings, and efficiently interact with the environment. We combined behavioural manipulation and modelling to investigate the dynamics of memory formation for rarely reoccurring acoustic patterns. In a series of experiments, participants detected the emergence of regularly repeating patterns within rapid tone-pip sequences. Unbeknownst to them, a few patterns reoccurred every ~3 min. All sequences consisted of the same 20 frequencies and were distinguishable only by the order of tone-pips. Despite this, reoccurring patterns were associated with a rapidly growing detection-time advantage over novel patterns. This effect was implicit, robust to interference, and persisted for 7 weeks. The results implicate an interplay between short-term (a few seconds) and long-term (over many minutes) integration in memory formation and demonstrate the remarkable sensitivity of the human auditory system to sporadically reoccurring structure within the acoustic environment.


Patterns of sound, such as the noise of footsteps approaching or a person speaking, often provide valuable information. To recognize these patterns, our memory must hold each part of the sound sequence long enough to perceive how they fit together. This ability is necessary in many situations: from discriminating between random noises in the woods to understanding language and appreciating music. Memory traces left by each sound are crucial for discovering new patterns and recognizing patterns we have previously encountered. However, it remained unclear whether sounds that reoccur sporadically can stick in our memory, and under what conditions this happens. To answer this question, Bianco et al. conducted a series of experiments in which human volunteers listened to rapid sequences of 20 random tones interspersed with repeated patterns. Participants were asked to press a button as soon as they detected a repeating pattern. Most of the patterns were new, but some reoccurred every three minutes or so, unbeknownst to the listener. Bianco et al. found that participants became progressively faster at recognizing a repeated pattern each time it reoccurred, gradually forming an enduring memory that lasted at least seven weeks after the initial training. The volunteers did not recognize these retained patterns in other tests, suggesting they were unaware of these memories. This suggests that, as well as remembering meaningful sounds, like the melody of a song, people can also unknowingly memorize the complex pattern of arbitrary sounds, including ones they rarely encounter. These findings provide new insights into how humans discover and recognize sound patterns, which could help treat diseases associated with impaired memory and hearing. More studies are needed to understand what exactly happens in the brain as these memories of sound patterns are created, and whether this also happens for other senses and in other species.


Subject(s)
Acoustic Stimulation/methods , Auditory Perception/physiology , Memory, Long-Term/physiology , Adult , Female , Humans , Male , Memory and Learning Tests , Reaction Time/physiology , Young Adult
14.
Elife ; 9, 2020 03 03.
Article in English | MEDLINE | ID: mdl-32122465

ABSTRACT

Human engagement in music rests on underlying elements such as the listeners' cultural background and interest in music. These factors modulate how listeners anticipate musical events, a process inducing instantaneous neural responses as the music confronts these expectations. Measuring such neural correlates would represent a direct window into high-level brain processing. Here we recorded cortical signals as participants listened to Bach melodies. We assessed the relative contributions of acoustic versus melodic components of the music to the neural signal. Melodic features included information on pitch progressions and their tempo, which were extracted from a predictive model of musical structure based on Markov chains. We related the music to brain activity with temporal response functions, demonstrating, for the first time, distinct cortical encoding of pitch and note-onset expectations during naturalistic music listening. This encoding was most pronounced at response latencies up to 350 ms, and in both planum temporale and Heschl's gyrus.
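A temporal response function of this kind is commonly estimated by ridge regression from time-lagged stimulus features to the neural signal. The sketch below, with made-up lag counts and regularisation strength, recovers a known kernel from simulated data; it illustrates the general technique, not the study's analysis pipeline.

```python
import numpy as np

def trf_ridge(stim, resp, n_lags=10, lam=1.0):
    """Estimate a temporal response function: ridge regression from a
    matrix of time-lagged copies of the stimulus onto the response."""
    n = len(resp)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[: n - lag]      # stimulus delayed by `lag` samples
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ resp)
    return w

# Usage sketch: simulate a response as stimulus convolved with a known
# kernel plus noise, then recover the kernel as the estimated TRF.
rng = np.random.default_rng(0)
s = rng.standard_normal(2000)
kernel = np.array([0.0, 1.0, 0.5, 0.2])
r = np.convolve(s, kernel)[:2000] + 0.01 * rng.standard_normal(2000)
w = trf_ridge(s, r, n_lags=6, lam=0.1)
```

In the study's setting, `stim` would be a melodic feature such as note-onset surprisal and `resp` the recorded cortical signal, with lags spanning roughly the 0-350 ms latency window reported.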


Subject(s)
Auditory Perception/physiology , Music , Temporal Lobe/physiology , Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory/physiology , Humans , Reaction Time
15.
Brain Cogn ; 138: 103621, 2020 02.
Article in English | MEDLINE | ID: mdl-31862512

ABSTRACT

Humans automatically detect events that, in deviating from their expectations, may signal prediction failure and a need to reorient behaviour. The pupil dilation response (PDR) to violations has been associated with subcortical signals of arousal and prediction resetting. However, it is unclear how the context in which a deviant occurs affects the size of the PDR. Using ecological musical stimuli that we characterised using a computational model, we showed that the PDR to pitch deviants is sensitive to contextual uncertainty (quantified as entropy), whereby the PDR was greater in low-entropy than in high-entropy contexts. The PDR was also positively correlated with the unexpectedness of notes. No effects of music expertise were found, suggesting a ceiling effect due to enculturation. These results show that the same sudden environmental change can lead to differing arousal levels depending on contextual factors, providing evidence for a sensitivity of the PDR to long-term context.
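The two quantities used here, the unexpectedness of a note and the uncertainty of its context, have standard information-theoretic definitions. A minimal sketch; in the study the note-probability distribution would come from the computational model of musical structure, whereas here it is just a dictionary supplied by hand.

```python
import math

def surprisal(p):
    """Unexpectedness of an event that occurred with probability p (bits)."""
    return -math.log2(p)

def entropy(dist):
    """Contextual uncertainty: Shannon entropy (bits) of a
    next-note probability distribution, e.g. {"C": 0.5, "G": 0.5}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)
```

A low-entropy context (one note nearly certain) makes any deviant highly surprising, which is consistent with the larger PDR reported for deviants in predictable contexts.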


Subject(s)
Anticipation, Psychological/physiology , Music , Pitch Perception/physiology , Pupil/physiology , Adult , Female , Humans , Male , Middle Aged , Young Adult
16.
Hum Brain Mapp ; 40(9): 2623-2638, 2019 06 15.
Article in English | MEDLINE | ID: mdl-30834624

ABSTRACT

Generation of hierarchical structures, such as the embedding of subordinate elements into larger structures, is a core feature of human cognition. Processing of hierarchies is thought to rely on lateral prefrontal cortex (PFC). However, the neural underpinnings supporting active generation of new hierarchical levels remain poorly understood. Here, we created a new motor paradigm to isolate this active generative process by means of fMRI. Participants planned and executed identical movement sequences by using different rules: a Recursive hierarchical embedding rule, generating new hierarchical levels; an Iterative rule linearly adding items to existing hierarchical levels, without generating new levels; and a Repetition condition tapping into short term memory, without a transformation rule. We found that planning involving generation of new hierarchical levels (Recursive condition vs. both Iterative and Repetition) activated a bilateral motor imagery network, including cortical and subcortical structures. No evidence was found for lateral PFC involvement in the generation of new hierarchical levels. Activity in basal ganglia persisted through execution of the motor sequences in the contrast Recursive versus Iteration, but also Repetition versus Iteration, suggesting a role of these structures in motor short term memory. These results showed that the motor network is involved in the generation of new hierarchical levels during motor sequence planning, while lateral PFC activity was neither robust nor specific. We hypothesize that lateral PFC might be important to parse hierarchical sequences in a multi-domain fashion but not to generate new hierarchical levels.


Subject(s)
Imagination/physiology , Memory, Short-Term/physiology , Motor Activity/physiology , Nerve Net/physiology , Prefrontal Cortex/physiology , Psychomotor Performance/physiology , Serial Learning/physiology , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Nerve Net/diagnostic imaging , Prefrontal Cortex/diagnostic imaging , Young Adult
17.
J Cogn Neurosci ; 28(1): 41-54, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26351994

ABSTRACT

Complex human behavior is hierarchically organized. Whether or not syntax plays a role in this organization is currently under debate. The present ERP study uses piano performance to isolate syntactic operations in action planning and to demonstrate their priority over nonsyntactic levels of movement selection. Expert pianists were asked to execute chord progressions on a mute keyboard by copying the posture of a performing model hand shown in sequences of photos. We manipulated the final chord of each sequence in terms of Syntax (congruent/incongruent keys) and Manner (conventional/unconventional fingering), as well as the strength of its predictability by varying the length of the Context (five-chord/two-chord progressions). The production of syntactically incongruent compared to congruent chords showed a response delay that was larger in the long compared to the short context. This behavioral effect was accompanied by a centroparietal negativity in the long but not in the short context, suggesting that a syntax-based motor plan was prepared ahead. Conversely, the execution of the unconventional manner was not delayed as a function of Context and elicited an opposite electrophysiological pattern (a posterior positivity). The current data support the hypothesis that motor plans operate at the level of musical syntax and are incrementally translated to lower levels of movement selection.


Subject(s)
Auditory Perception/physiology , Brain/physiology , Evoked Potentials/physiology , Movement , Music , Acoustic Stimulation , Adult , Analysis of Variance , Electroencephalography , Female , Fourier Analysis , Humans , Male , Motor Skills/physiology , Photic Stimulation , Reaction Time , Young Adult