Results 1 - 20 of 40
1.
Ann N Y Acad Sci ; 1533(1): 169-180, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38319962

ABSTRACT

Perceptual pleasure and its concomitant hedonic value play an essential role in everyday life, motivating behavior and thus influencing how individuals choose to spend their time and resources. However, how pleasure arises from perception of sensory information remains relatively poorly understood. In particular, research has neglected the question of how perceptual representations mediate the relationships between stimulus properties and liking (e.g., stimulus symmetry can only affect liking if it is perceived). The present research addresses this gap for the first time, analyzing perceptual and liking ratings of 96 nonmusicians (power of 0.99) and finding that perceptual representations mediate effects of feature-based and information-based stimulus properties on liking for a novel set of melodies varying in balance, contour, symmetry, or complexity. Moreover, variability due to individual differences and stimuli accounts for most of the variance in liking. These results have broad implications for psychological research on sensory valuation, advocating a more explicit account of random variability and the mediating role of perceptual representations of stimulus properties.
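As a concrete illustration of the mediation logic described above, the sketch below runs a product-of-coefficients mediation in Python (not the authors' analysis code). It assumes a long-format DataFrame with hypothetical columns 'symmetry' (designed stimulus property), 'perceived_symmetry' (perceptual rating), and 'liking'.

```python
# Minimal mediation sketch: stimulus property -> perceptual representation -> liking.
# Column names are illustrative placeholders, not taken from the published dataset.
import pandas as pd
import statsmodels.formula.api as smf

def mediation_effect(df: pd.DataFrame) -> dict:
    # Path a: stimulus property predicts the perceptual rating
    a = smf.ols('perceived_symmetry ~ symmetry', data=df).fit().params['symmetry']
    # Paths b and c': perceptual rating predicts liking, controlling for the property
    full = smf.ols('liking ~ perceived_symmetry + symmetry', data=df).fit()
    b, c_prime = full.params['perceived_symmetry'], full.params['symmetry']
    # Total effect c: stimulus property predicts liking on its own
    c = smf.ols('liking ~ symmetry', data=df).fit().params['symmetry']
    return {'indirect_ab': a * b, 'direct_c_prime': c_prime, 'total_c': c}
```

A substantial indirect (a*b) path alongside a reduced direct path is the pattern consistent with perceptual representations mediating the property-liking link.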


Subject(s)
Music; Humans; Music/psychology; Emotions; Pleasure
2.
Philos Trans R Soc Lond B Biol Sci ; 379(1895): 20220420, 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38104601

ABSTRACT

Expectation is crucial for our enjoyment of music, yet the underlying generative mechanisms remain unclear. While sensory models derive predictions based on local acoustic information in the auditory signal, cognitive models assume abstract knowledge of music structure acquired over the long term. To evaluate these two contrasting mechanisms, we compared simulations from four computational models of musical expectancy against subjective expectancy and pleasantness ratings of over 1000 chords sampled from 739 US Billboard pop songs. Bayesian model comparison revealed that listeners' expectancy and pleasantness ratings were predicted by the independent, non-overlapping contributions of cognitive and sensory expectations. Furthermore, cognitive expectations explained over twice the variance in listeners' perceived surprise compared to sensory expectations, suggesting a larger relative importance of long-term representations of music structure over short-term sensory-acoustic information in musical expectancy. Our results thus emphasize the distinct, albeit complementary, roles of cognitive and sensory expectations in shaping musical pleasure, and suggest that this expectancy-driven mechanism depends on musical information represented at different levels of abstraction along the neural hierarchy. This article is part of the theme issue 'Art, aesthetics and predictive processing: theoretical and empirical perspectives'.
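The comparison of sensory and cognitive accounts can be sketched as a simple model-comparison exercise. The snippet below is an illustration only (it approximates Bayesian model comparison with BIC rather than reproducing the paper's workflow), and the column names 'rating', 'sensory_ic', and 'cognitive_ic' are assumptions.

```python
# Compare regressions of listeners' ratings on sensory vs. cognitive model outputs.
import statsmodels.formula.api as smf

def compare_expectation_models(df):
    models = {
        'sensory_only':   smf.ols('rating ~ sensory_ic', data=df).fit(),
        'cognitive_only': smf.ols('rating ~ cognitive_ic', data=df).fit(),
        'both':           smf.ols('rating ~ sensory_ic + cognitive_ic', data=df).fit(),
    }
    # Lower BIC = better trade-off between fit and complexity; BIC differences
    # give an approximate Bayes factor between candidate models.
    return {name: m.bic for name, m in models.items()}
```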


Subject(s)
Music; Pleasure; Auditory Perception; Music/psychology; Motivation; Bayes Theorem; Cognition; Acoustic Stimulation/methods
3.
Front Neurosci ; 17: 1209398, 2023.
Article in English | MEDLINE | ID: mdl-37928727

ABSTRACT

Enjoying music consistently engages key structures of the neural auditory and reward systems, such as the right superior temporal gyrus (R STG) and ventral striatum (VS). Expectations seem to play a central role in this effect, as preferences reliably vary according to listeners' uncertainty about the musical future and surprise about the musical past. Accordingly, VS activity reflects the pleasure of musical surprise, and exhibits stronger correlations with R STG activity as pleasure grows. Yet the reward value of musical surprise - and thus the reason for these surprises engaging the reward system - remains an open question. Recent models of predictive neural processing and learning suggest that forming, testing, and updating hypotheses about one's environment may be intrinsically rewarding, and that the constantly evolving structure of musical patterns could provide ample opportunity for this procedure. Consistent with these accounts, our group previously found that listeners tend to prefer melodic excerpts taken from real music when they either validate their uncertain melodic predictions (i.e., are high in uncertainty and low in surprise) or challenge their highly confident ones (i.e., are low in uncertainty and high in surprise). An independent research group (Cheung et al., 2019) replicated these results with musical chord sequences, and identified their fMRI correlates in the STG, amygdala, and hippocampus but not the VS, raising new questions about the neural mechanisms of musical pleasure that the present study seeks to address. Here, we assessed concurrent liking ratings and hemodynamic fMRI signals as 24 participants listened to 50 naturalistic, real-world musical excerpts that varied across wide spectra of computationally modeled uncertainty and surprise. As in previous studies, liking ratings exhibited an interaction between uncertainty and surprise, with the strongest preferences for high uncertainty/low surprise and low uncertainty/high surprise. fMRI results also replicated previous findings, with music liking effects in the R STG and VS. Furthermore, we identified interactions between uncertainty and surprise on the one hand, and liking and surprise on the other, in VS activity. Altogether, these results provide important support for the hypothesized role of the VS in deriving pleasure from learning about musical structure.
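A minimal sketch of the behavioural analysis described above, assuming trial-level data with hypothetical columns 'liking', 'uncertainty', 'surprise', and 'subject': a mixed-effects model with random intercepts per listener tests the uncertainty-by-surprise interaction.

```python
import statsmodels.formula.api as smf

def liking_interaction(df):
    # Random intercept per participant; fixed effects for uncertainty, surprise,
    # and their interaction, which carries the key prediction.
    model = smf.mixedlm('liking ~ uncertainty * surprise', data=df, groups=df['subject'])
    result = model.fit()
    # A negative interaction coefficient is consistent with the reported pattern:
    # liking peaks for low-uncertainty/high-surprise and high-uncertainty/low-surprise music.
    return result.summary()
```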

4.
Curr Res Neurobiol ; 5: 100115, 2023.
Article in English | MEDLINE | ID: mdl-38020808

ABSTRACT

Any listening task, from sound recognition to sound-based communication, rests on auditory memory, which is known to decline in healthy ageing. However, how this decline maps onto multiple components and stages of auditory memory remains poorly characterised. In an online unsupervised longitudinal study, we tested ageing effects on implicit auditory memory for rapid tone patterns. The test required participants (younger adults aged 20-30 and older adults aged 60-70) to quickly respond to rapid, regularly repeating patterns emerging from random sequences. Patterns were novel in most trials (REGn), but unbeknownst to the participants, a few distinct patterns reoccurred identically throughout the sessions (REGr). After correcting for processing speed, the response times (RT) to REGn should reflect the information held in echoic and short-term memory before detecting the pattern; long-term memory formation and retention should be reflected by the RT advantage (RTA) to REGr vs. REGn, which is expected to grow with exposure. Older participants were slower than younger adults in detecting REGn and exhibited a smaller RTA to REGr. Computational simulations using a model of auditory sequence memory indicated that these effects reflect age-related limitations in both early and long-term memory stages. In contrast to ageing-related accelerated forgetting of verbal material, here older adults maintained stable memory traces for REGr patterns up to 6 months after the first exposure. The results demonstrate that ageing is associated with reduced short-term memory and long-term memory formation for tone patterns, but not with forgetting, even over surprisingly long timescales.
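The behavioural indices described above can be sketched as follows; this is an illustration under assumed column names ('participant', 'condition' with values 'REGn'/'REGr', 'rt', 'baseline_rt'), not the authors' analysis code.

```python
import pandas as pd

def rt_advantage(df: pd.DataFrame) -> pd.DataFrame:
    # Correct each trial's RT for individual processing speed (e.g., RT in a
    # simple speeded baseline task), then average within participant and condition.
    df = df.assign(rt_corrected=df['rt'] - df['baseline_rt'])
    means = (df.groupby(['participant', 'condition'])['rt_corrected']
               .mean().unstack('condition'))
    # RTA: how much faster reoccurring patterns are detected than novel ones.
    means['RTA'] = means['REGn'] - means['REGr']
    return means
```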

5.
J Exp Psychol Learn Mem Cogn ; 48(2): 284-303, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35420873

ABSTRACT

Processing of emotional meaning is crucial in many areas of psychology, including language and music processing. This issue takes on particular significance in bilinguals because it has been suggested that bilinguals process affective words differently in their first (L1) and second, later acquired languages (L2). We undertook a series of five experiments examining affective priming between emotionally valenced language and emotionally valenced music. Adult English monolinguals and two groups of proficient adult late bilinguals (German-English and Italian-English) with recent L2 exposure were examined. Priming effects were investigated using music to prime word targets and words to prime music targets. For both groups of bilinguals, music showed equivalent affective priming of L1 and L2 words, suggesting no difference in deliberate processing of affective meaning. Conversely, when words primed music, L2 words lacked the affective priming strength of L1 words for both late bilingual groups. Among various language background factors, only greater length of residence in the L2 context was positively related to the affective priming strength of L2 words. These results show strong activation of emotional meaning in the L1 of late bilinguals but reduced activation in the L2, where level of activation depends on the duration of everyday exposure to the L2. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
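For readers unfamiliar with how affective priming strength is quantified, the sketch below computes a standard congruence effect on response times; the trial-level column names ('participant', 'language', 'congruence', 'rt') are placeholders, not the study's variables.

```python
import pandas as pd

def priming_effect(df: pd.DataFrame) -> pd.Series:
    means = (df.groupby(['participant', 'language', 'congruence'])['rt']
               .mean().unstack('congruence'))
    # Larger incongruent-minus-congruent RT differences indicate stronger
    # activation of affective meaning by the primes, computed per language (L1 vs. L2).
    return (means['incongruent'] - means['congruent']).groupby('language').mean()
```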


Subject(s)
Multilingualism; Adult; Emotions; Humans; Language
6.
J Exp Psychol Gen ; 151(3): 555-577, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34582231

ABSTRACT

Statistical learning plays an important role in acquiring the structure of cultural communication signals such as speech and music, which are both perceived and reproduced. However, statistical learning is typically investigated through passive exposure to structured signals, followed by offline explicit recognition tasks assessing the degree of learning. Such experimental approaches fail to capture statistical learning as it takes place and require post hoc conscious reflection on what is thought to be an implicit process of knowledge acquisition. To better understand the process of statistical learning in active contexts while addressing these shortcomings, we introduce a novel, processing-based measure of statistical learning based on the position of errors in sequence reproduction. Across five experiments, we employed this new technique to assess statistical learning using artificial pure-tone or environmental-sound languages with controlled statistical properties in passive exposure, active reproduction, and explicit recognition tasks. The new error position measure provided a robust, online indicator of statistical learning during reproduction, with little carryover from prior statistical learning via passive exposure and no correlation with recognition-based estimates of statistical learning. Error position effects extended consistently across auditory domains, including sequences of pure tones and environmental sounds. Whereas recall performance showed significant variability across experiments, and little evidence of being improved by statistical learning, the error position effect was highly consistent for all participant groups, including musicians and nonmusicians. We discuss the implications of these results for understanding psychological mechanisms underlying statistical learning and compare the evidence provided by different experimental measures. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
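The error position measure can be illustrated with a few lines of code. The sketch below (not the authors' scoring script) assumes each trial stores the target sequence and the participant's reproduction as lists of tokens; statistical learning should push the first reproduction error later into the sequence.

```python
def first_error_position(target, reproduction):
    # 1-indexed position of the first mismatching element; sequence length + 1 if
    # the reproduction matches the target throughout.
    for i, (t, r) in enumerate(zip(target, reproduction), start=1):
        if t != r:
            return i
    return min(len(target), len(reproduction)) + 1

# Example with arbitrary tokens: the first error falls at position 3.
print(first_error_position(['A', 'B', 'C', 'D'], ['A', 'B', 'X', 'D']))
```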


Subject(s)
Music; Speech Perception; Humans; Learning; Recognition, Psychology; Reproduction
7.
Iperception ; 12(4): 20416695211024680, 2021.
Article in English | MEDLINE | ID: mdl-34377428

ABSTRACT

Chills experienced in response to music listening have been linked to both happiness and sadness expressed by music. To investigate these conflicting effects of valence on chills, we conducted a computational analysis on a corpus of 988 tracks previously reported to elicit chills, by comparing them with a control set of tracks matched by artist, duration, and popularity. We analysed track-level audio features obtained with the Spotify Web API across the two sets of tracks, resulting in confirmatory findings that tracks which cause chills were sadder than matched tracks and exploratory findings that they were also slower, less intense, and more instrumental than matched tracks on average. We also found that the audio characteristics of chills tracks were related to the direction and magnitude of the difference in valence between the two sets of tracks. We discuss these results in light of the current literature on valence and chills in music, provide a new interpretation in terms of personality correlates of musical preference, and review the advantages and limitations of our computational approach.
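The corpus comparison can be sketched as a paired test over matched track pairs. The snippet assumes two row-aligned DataFrames of Spotify track-level audio features (the feature names mirror the Spotify Web API's audio-features fields); data collection and matching are omitted.

```python
import pandas as pd
from scipy.stats import ttest_rel

FEATURES = ['valence', 'tempo', 'energy', 'loudness', 'instrumentalness']

def compare_chills_vs_matched(chills: pd.DataFrame, matched: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for f in FEATURES:
        t, p = ttest_rel(chills[f], matched[f])  # paired test across matched track pairs
        rows.append({'feature': f,
                     'mean_diff': (chills[f] - matched[f]).mean(),
                     't': t, 'p': p})
    return pd.DataFrame(rows)
```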

8.
Psychol Sci ; 32(9): 1416-1425, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34409898

ABSTRACT

Anticipating the future is essential for efficient perception and action planning. Yet the role of anticipation in event segmentation is understudied because empirical research has focused on retrospective cues such as surprise. We address this concern in the context of perception of musical-phrase boundaries. A computational model of cognitive sequence processing was used to control the information-dynamic properties of tone sequences. In an implicit, self-paced listening task (N = 38), undergraduates dwelled longer on tones generating high entropy (i.e., high uncertainty) than on those generating low entropy (i.e., low uncertainty). Similarly, sequences that ended on tones generating high entropy were rated as sounding more complete (N = 31 undergraduates). These entropy effects were independent of both the surprise (i.e., information content) and phrase position of target tones in the original musical stimuli. Our results indicate that events generating high entropy prospectively contribute to segmentation processes in auditory sequence perception, independently of the properties of the subsequent event.
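The two information-dynamic quantities named above have standard definitions, illustrated here with a made-up predictive distribution over the next tone (the actual values in the study come from a computational model of melodic expectation).

```python
import numpy as np

def information_content(p_outcome: float) -> float:
    # Surprise of the tone that actually occurred.
    return float(-np.log2(p_outcome))

def entropy(dist: np.ndarray) -> float:
    # Uncertainty about the next tone before it is heard.
    dist = dist[dist > 0]
    return float(-(dist * np.log2(dist)).sum())

dist = np.array([0.5, 0.25, 0.125, 0.125])   # hypothetical next-tone distribution
print(information_content(dist[2]))           # 3.0 bits of surprise for an unlikely tone
print(entropy(dist))                          # 1.75 bits of uncertainty overall
```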


Subject(s)
Music; Auditory Perception; Cues; Humans; Retrospective Studies; Uncertainty
9.
PLoS Comput Biol ; 17(5): e1008995, 2021 May.
Article in English | MEDLINE | ID: mdl-34038404

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pcbi.1008304.].

10.
Brain Cogn ; 151: 105729, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33887654

ABSTRACT

Evaluative judgment, i.e., assessing to what degree a stimulus is liked or disliked, is a fundamental aspect of cognition, facilitating comparison and choosing among alternatives, deciding, and prioritizing actions. Neuroimaging studies have shown that evaluative judgment involves the projection of sensory information to the reward circuit. To investigate whether evaluative judgments are based on modality-specific or modality-general attributes, we compared the extent to which balance, contour, symmetry, and complexity affect liking responses in the auditory and visual modalities. We found no significant correlation for any of the four attributes across sensory modalities, except for contour. This suggests that evaluative judgments primarily rely on modality-specific sensory representations elaborated in the brain's sensory cortices and relayed to the reward circuit, rather than abstract modality-general representations. The individual traits art experience, openness to experience, and desire for aesthetics were associated with the extent to which design or compositional attributes influenced liking, but inconsistently across sensory modalities and attributes, also suggesting modality-specific influences.
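One way to operationalize the cross-modal comparison is to estimate, per participant, how strongly each attribute predicts liking in each modality and then correlate those estimates across modalities. The sketch below assumes hypothetical columns ('participant', 'modality', 'attribute', 'level', 'liking'); it is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def cross_modal_consistency(df: pd.DataFrame, attribute: str):
    sub = df[df['attribute'] == attribute]
    # Per participant and modality, the slope of liking on the attribute level.
    slopes = (sub.groupby(['participant', 'modality'])
                 .apply(lambda g: np.polyfit(g['level'], g['liking'], 1)[0])
                 .unstack('modality'))
    # A weak correlation suggests modality-specific, rather than shared, valuation.
    return pearsonr(slopes['auditory'], slopes['visual'])
```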


Subject(s)
Emotions; Judgment; Cognition; Esthetics; Humans
11.
PLoS Comput Biol ; 16(11): e1008304, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33147209

ABSTRACT

Statistical learning and probabilistic prediction are fundamental processes in auditory cognition. A prominent computational model of these processes is Prediction by Partial Matching (PPM), a variable-order Markov model that learns by internalizing n-grams from training sequences. However, PPM has limitations as a cognitive model: in particular, it has a perfect memory that weights all historic observations equally, which is inconsistent with memory capacity constraints and recency effects observed in human cognition. We address these limitations with PPM-Decay, a new variant of PPM that introduces a customizable memory decay kernel. In three studies (one with artificially generated sequences, one with chord sequences from Western music, and one with new behavioral data from an auditory pattern detection experiment), we show how this decay kernel improves the model's predictive performance for sequences whose underlying statistics change over time, and enables the model to capture effects of memory constraints on auditory pattern detection. The resulting model is available in our new open-source R package, ppm (https://github.com/pmcharrison/ppm).
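The decay idea can be conveyed with a toy bigram model in Python; the released implementation is the R package ppm linked above, and the class below is a conceptual sketch of the decay mechanism rather than its API.

```python
import math
from collections import defaultdict

class DecayingBigramModel:
    """Each observation's weight decays exponentially with the time since it occurred."""

    def __init__(self, half_life: float = 10.0):
        self.rate = math.log(2) / half_life   # decay rate derived from half-life (in events)
        self.obs = defaultdict(list)          # context -> list of (symbol, time) observations
        self.t = 0

    def observe(self, context, symbol):
        self.obs[context].append((symbol, self.t))
        self.t += 1

    def predict(self, context, alphabet):
        weights = {s: 1e-6 for s in alphabet}  # small prior mass avoids zero probabilities
        for symbol, when in self.obs[context]:
            weights[symbol] += math.exp(-self.rate * (self.t - when))
        total = sum(weights.values())
        return {s: w / total for s, w in weights.items()}
```

Because recent observations dominate the prediction, such a model can track sequences whose statistics drift over time, which is the behaviour the decay kernel is meant to capture.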


Subject(s)
Auditory Perception; Computer Simulation; Memory; Algorithms; Humans; Music
12.
J Cogn Neurosci ; 32(12): 2241-2259, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32762519

ABSTRACT

It is still a matter of debate whether visual aids improve learning of music. In a multisession study, we investigated the neural signatures of novel music sequence learning with or without aids (auditory-only: AO, audiovisual: AV). During three training sessions on three separate days, participants (nonmusicians) reproduced (note by note on a keyboard) melodic sequences generated by an artificial musical grammar. The AV group (n = 20) had each note color-coded on screen, whereas the AO group (n = 20) had no color indication. We evaluated learning of the statistical regularities of the novel music grammar before and after training by presenting melodies ending on correct or incorrect notes and by asking participants to judge the correctness and surprisal of the final note, while EEG was recorded. We found that participants successfully learned the new grammar. Although the AV group, as compared to the AO group, reproduced longer sequences during training, there was no significant difference in learning between groups. At the neural level, after training, the AO group showed a larger N100 response to low-probability compared with high-probability notes, suggesting an increased neural sensitivity to statistical properties of the grammar; this effect was not observed in the AV group. Our findings indicate that visual aids might improve sequence reproduction while not necessarily promoting better learning, indicating a potential dissociation between sequence reproduction and learning. We suggest that the difficulty induced by auditory-only input during music training might enhance cognitive engagement, thereby improving neural sensitivity to the underlying statistical properties of the learned material.


Subject(s)
Music; Acoustic Stimulation; Auditory Perception; Cues; Humans; Learning
13.
Elife ; 9, 2020 May 18.
Article in English | MEDLINE | ID: mdl-32420868

ABSTRACT

Memory, on multiple timescales, is critical to our ability to discover the structure of our surroundings, and efficiently interact with the environment. We combined behavioural manipulation and modelling to investigate the dynamics of memory formation for rarely reoccurring acoustic patterns. In a series of experiments, participants detected the emergence of regularly repeating patterns within rapid tone-pip sequences. Unbeknownst to them, a few patterns reoccurred every ~3 min. All sequences consisted of the same 20 frequencies and were distinguishable only by the order of tone-pips. Despite this, reoccurring patterns were associated with a rapidly growing detection-time advantage over novel patterns. This effect was implicit, robust to interference, and persisted for 7 weeks. The results implicate an interplay between short (a few seconds) and long-term (over many minutes) integration in memory formation and demonstrate the remarkable sensitivity of the human auditory system to sporadically reoccurring structure within the acoustic environment.


Patterns of sound, such as the noise of footsteps approaching or a person speaking, often provide valuable information. To recognize these patterns, our memory must hold each part of the sound sequence long enough to perceive how the parts fit together. This ability is necessary in many situations: from discriminating between random noises in the woods to understanding language and appreciating music. Memory traces left by each sound are crucial for discovering new patterns and recognizing patterns we have previously encountered. However, it remained unclear whether sounds that reoccur sporadically can stick in our memory, and under what conditions this happens. To answer this question, Bianco et al. conducted a series of experiments where human volunteers listened to rapid sequences of 20 random tones interspersed with repeated patterns. Participants were asked to press a button as soon as they detected a repeating pattern. Most of the patterns were new, but some reoccurred every three minutes or so, unbeknownst to the listener. Bianco et al. found that participants became progressively faster at recognizing a repeated pattern each time it reoccurred, gradually forming an enduring memory that lasted at least seven weeks after the initial training. The volunteers did not recognize these retained patterns in other tests, suggesting they were unaware of these memories. This suggests that as well as remembering meaningful sounds, like the melody of a song, people can also unknowingly memorize the complex pattern of arbitrary sounds, including ones they rarely encounter. These findings provide new insights into how humans discover and recognize sound patterns, which could help treat diseases associated with impaired memory and hearing. More studies are needed to understand what exactly happens in the brain as these memories of sound patterns are created, and whether this also happens for other senses and in other species.


Subject(s)
Acoustic Stimulation/methods; Auditory Perception/physiology; Memory, Long-Term/physiology; Adult; Female; Humans; Male; Memory and Learning Tests; Reaction Time/physiology; Young Adult
14.
Behav Res Methods ; 52(4): 1491-1509, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32052354

ABSTRACT

We present a novel set of 200 Western tonal musical stimuli (MUST) to be used in research on perception and appreciation of music. It consists of four subsets of 50 stimuli varying in balance, contour, symmetry, or complexity. All are 4 s long and designed to be musically appealing and experimentally controlled. We assessed them behaviorally and computationally. The behavioral assessment (Study 1) aimed to determine whether musically untrained participants could identify variations in each attribute. Forty-three participants rated the stimuli in each subset on the corresponding attribute. We found that inter-rater reliability was high and that the ratings mirrored the design features well. Participants' ratings also served to create an abridged set of 24 stimuli per subset. The computational assessment (Study 2) required the development of a specific battery of computational measures describing the structural properties of each stimulus. We distilled nonredundant composite measures for each attribute and examined whether they predicted participants' ratings. Our results show that the composite measures indeed predicted participants' ratings. Moreover, the composite complexity measure predicted complexity ratings as well as existing models of musical complexity. We conclude that the four subsets are suitable for use in studies that require presenting participants with short musical motifs varying in balance, contour, symmetry, or complexity, and that the stimuli and the computational measures are valuable resources for research in music psychology, empirical aesthetics, music information retrieval, and musicology. The MUST set and MATLAB toolbox codifying the computational measures are freely available at osf.io/bfxz7.
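A composite measure of the kind described in Study 2 can be sketched very simply: z-score the per-stimulus structural measures, average them, and correlate the composite with mean ratings. The published toolbox is MATLAB code at osf.io/bfxz7; the Python below is an illustration with placeholder inputs.

```python
import pandas as pd
from scipy.stats import pearsonr, zscore

def composite_vs_ratings(measures: pd.DataFrame, ratings: pd.Series):
    # 'measures': one row per stimulus, one column per structural measure;
    # 'ratings': mean attribute rating per stimulus, in the same row order.
    composite = measures.apply(zscore).mean(axis=1)  # simple unweighted composite
    r, p = pearsonr(composite, ratings)
    return composite, r, p
```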


Subject(s)
Auditory Perception; Music; Humans; Reproducibility of Results
15.
Neuroimage ; 206: 116311, 2020 Feb 01.
Article in English | MEDLINE | ID: mdl-31669411

ABSTRACT

Human creativity is intricately linked to acquired knowledge. However, to date, learning a new musical style and subsequent musical creativity have largely been studied in isolation. We introduced a novel experimental paradigm combining behavioural, electrophysiological, and computational methods to examine the neural correlates of unfamiliar music learning, and to investigate how neural and computational measures can predict human creativity. We investigated music learning by training non-musicians (N = 40) on an artificial music grammar. Participants' knowledge of the grammar was tested before and after three training sessions on separate days by assessing explicit recognition of the notes of the grammar, while additionally recording their EEG. After each training session, participants created their own musical compositions, which were later evaluated by human experts. A computational model of auditory expectation was used to quantify the statistical properties of both the grammar and the compositions. Results showed that participants successfully learned the new grammar. This was also reflected in the N100, P200, and P3a components, which were higher in response to incorrect than correct notes. The delta band (2.5-4.5 Hz) power in response to grammatical notes during first exposure to the grammar positively correlated with learning, suggesting a potential neural mechanism of encoding. On the other hand, better learning was associated with lower alpha and higher beta band power after training, potentially reflecting neural mechanisms of retrieval. Importantly, learning was a significant predictor of creativity, as judged by experts. There was also an inverted U-shaped relationship between percentage of correct intervals and creativity, as compositions with an intermediate proportion of correct intervals were associated with the highest creativity. Finally, the P200 in response to incorrect notes was predictive of creativity, suggesting a link between the neural correlates of learning and creativity. Overall, our findings shed light on the neural mechanisms of learning an unfamiliar music grammar, and offer novel contributions to the associations between learning measures and creative compositions based on learned materials.


Subject(s)
Auditory Perception/physiology; Brain Waves/physiology; Cerebral Cortex/physiology; Creativity; Evoked Potentials/physiology; Mental Recall/physiology; Music; Probability Learning; Adult; Event-Related Potentials, P300/physiology; Female; Humans; Judgment/physiology; Male; Young Adult
16.
Psychol Rev ; 127(2): 216-244, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31868392

ABSTRACT

Simultaneous consonance is a salient perceptual phenomenon corresponding to the perceived pleasantness of simultaneously sounding musical tones. Various competing theories of consonance have been proposed over the centuries, but recently a consensus has developed that simultaneous consonance is primarily driven by harmonicity perception. Here we question this view, substantiating our argument by critically reviewing historic consonance research from a broad variety of disciplines, reanalyzing consonance perception data from four previous behavioral studies representing more than 500 participants, and modeling three Western musical corpora representing more than 100,000 compositions. We conclude that simultaneous consonance is a composite phenomenon that derives in large part from three phenomena: interference, periodicity/harmonicity, and cultural familiarity. We formalize this conclusion with a computational model that predicts a musical chord's simultaneous consonance from these three features, and release this model in an open-source R package, incon, alongside 15 other computational models also evaluated in this paper. We hope that this package will facilitate further psychological and musicological research into simultaneous consonance. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
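The composite account lends itself to a very small worked example. The weights and input scores below are arbitrary illustrative values, and the function is not the API of the incon package mentioned above.

```python
def composite_consonance(interference: float, harmonicity: float,
                         familiarity: float, weights=(-1.0, 1.0, 1.0)) -> float:
    # Interference (e.g., beating between partials) lowers consonance;
    # periodicity/harmonicity and cultural familiarity raise it.
    w_int, w_harm, w_fam = weights
    return w_int * interference + w_harm * harmonicity + w_fam * familiarity

# A chord with little interference, high harmonicity and moderate familiarity
# receives a high composite score (0.2*-1 + 0.8 + 0.6 = 1.2 in arbitrary units).
print(composite_consonance(interference=0.2, harmonicity=0.8, familiarity=0.6))
```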


Subject(s)
Auditory Perception/physiology; Models, Psychological; Music; Pleasure/physiology; Recognition, Psychology/physiology; Culture; Humans; Time Factors
17.
Curr Biol ; 29(23): 4084-4092.e4, 2019 Dec 02.
Article in English | MEDLINE | ID: mdl-31708393

ABSTRACT

Listening to music often evokes intense emotions [1, 2]. Recent research suggests that musical pleasure comes from positive reward prediction errors, which arise when what is heard proves to be better than expected [3]. Central to this view is the engagement of the nucleus accumbens (a brain region that processes reward expectations) to pleasurable music and surprising musical events [4-8]. However, expectancy violations along multiple musical dimensions (e.g., harmony and melody) have failed to implicate the nucleus accumbens [9-11], and it is unknown how music reward value is assigned [12]. Whether changes in musical expectancy elicit pleasure has thus remained elusive [11]. Here, we demonstrate that pleasure varies nonlinearly as a function of the listener's uncertainty when anticipating a musical event, and the surprise it evokes when it deviates from expectations. Taking Western tonal harmony as a model of musical syntax, we used a machine-learning model [13] to mathematically quantify the uncertainty and surprise of 80,000 chords in US Billboard pop songs. Behaviorally, we found that chords elicited high pleasure ratings when they deviated substantially from what the listener had expected (low uncertainty, high surprise) or, conversely, when they conformed to expectations in an uninformative context (high uncertainty, low surprise). Neurally, we found using fMRI that activity in the amygdala, hippocampus, and auditory cortex reflected this interaction, while the nucleus accumbens only reflected uncertainty. These findings challenge current neurocognitive models of music-evoked pleasure and highlight the synergistic interplay between prospective and retrospective states of expectation in the musical experience. VIDEO ABSTRACT.
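The two-factor behavioural pattern can be illustrated with median splits on the model-derived quantities; the sketch below assumes a chord-level DataFrame with hypothetical columns 'uncertainty', 'surprise', and 'pleasantness'.

```python
import pandas as pd

def pleasure_by_expectancy(df: pd.DataFrame) -> pd.DataFrame:
    df = df.assign(
        unc_bin=pd.qcut(df['uncertainty'], 2, labels=['low', 'high']),
        sur_bin=pd.qcut(df['surprise'], 2, labels=['low', 'high']),
    )
    # The reported pattern: highest mean pleasantness in the low-uncertainty/high-surprise
    # and high-uncertainty/low-surprise cells of the resulting 2 x 2 table.
    return df.pivot_table(values='pleasantness', index='unc_bin', columns='sur_bin')
```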


Subject(s)
Auditory Perception/physiology; Music; Pleasure; Uncertainty; Adult; Amygdala/physiology; Auditory Cortex/physiology; Female; Hippocampus/physiology; Humans; Male; Nucleus Accumbens/physiology; Young Adult
18.
J Neurosci ; 39(47): 9397-9409, 2019 Nov 20.
Article in English | MEDLINE | ID: mdl-31636112

ABSTRACT

Music ranks among the greatest human pleasures. It consistently engages the reward system, and converging evidence implies it exploits predictions to do so. Both prediction confirmations and errors are essential for understanding one's environment, and music offers many of each as it manipulates interacting patterns across multiple timescales. Learning models suggest that a balance of these outcomes (i.e., intermediate complexity) optimizes the reduction of uncertainty to rewarding and pleasurable effect. Yet evidence of a similar pattern in music is mixed, hampered by arbitrary measures of complexity. In the present studies, we applied a well-validated information-theoretic model of auditory expectation to systematically measure two key aspects of musical complexity: predictability (operationalized as information content [IC]) and uncertainty (entropy). In Study 1, we evaluated how these properties affect musical preferences in 43 male and female participants; in Study 2, we replicated Study 1 in an independent sample of 27 people and assessed the contribution of veridical predictability by presenting the same stimuli seven times. Both studies revealed significant quadratic effects of IC and entropy on liking that outperformed linear effects, indicating reliable preferences for music of intermediate complexity. An interaction between IC and entropy further suggested preferences for more predictability during more uncertain contexts, which would facilitate uncertainty reduction. Repeating stimuli decreased liking ratings but did not disrupt the preference for intermediate complexity. Together, these findings support long-hypothesized optimal zones of predictability and uncertainty in musical pleasure with formal modeling, relating the pleasure of music listening to the intrinsic reward of learning. SIGNIFICANCE STATEMENT: Abstract pleasures, such as music, claim much of our time, energy, and money despite lacking any clear adaptive benefits like food or shelter. Yet as music manipulates patterns of melody, rhythm, and more, it proficiently exploits our expectations. Given the importance of anticipating and adapting to our ever-changing environments, making and evaluating uncertain predictions can have strong emotional effects. Accordingly, we present evidence that listeners consistently prefer music of intermediate predictive complexity, and that preferences shift toward expected musical outcomes in more uncertain contexts. These results are consistent with theories that emphasize the intrinsic reward of learning, both by updating inaccurate predictions and validating accurate ones, which is optimal in environments that present manageable predictive challenges (i.e., reducible uncertainty).
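The quadratic (inverted-U) test can be sketched as a comparison between linear and quadratic regressions of liking on the two complexity measures; column names ('liking', 'ic', 'entropy') are assumptions and the snippet is not the published analysis code.

```python
import statsmodels.formula.api as smf

def inverted_u_test(df):
    linear = smf.ols('liking ~ ic + entropy', data=df).fit()
    quadratic = smf.ols(
        'liking ~ ic + I(ic**2) + entropy + I(entropy**2) + ic:entropy', data=df).fit()
    # Negative quadratic coefficients, with the quadratic model fitting better
    # (lower AIC), correspond to a preference for intermediate complexity.
    return {'linear_aic': linear.aic,
            'quadratic_aic': quadratic.aic,
            'quadratic_params': quadratic.params}
```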


Subject(s)
Auditory Perception/physiology; Learning/physiology; Music/psychology; Pleasure/physiology; Reward; Uncertainty; Acoustic Stimulation/methods; Adolescent; Female; Forecasting; Humans; Male; Random Allocation; Young Adult
20.
Cortex ; 120: 181-200, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31323458

ABSTRACT

Theories of predictive processing propose that prediction error responses are modulated by the certainty of the predictive model, or precision. While there is some evidence for this phenomenon in the visual and, to a lesser extent, the auditory modality, little is known about whether it operates in the complex auditory contexts of daily life. Here, we examined how prediction error responses behave in a more complex and ecologically valid auditory context than those typically studied. We created musical tone sequences with different degrees of pitch uncertainty to manipulate the precision of participants' auditory expectations. Magnetoencephalography was used to measure the magnetic counterpart of the mismatch negativity (MMNm) as a neural marker of prediction error in a multi-feature paradigm. Pitch, slide, intensity and timbre deviants were included. We compared high-entropy stimuli, consisting of a set of non-repetitive melodies, with low-entropy stimuli consisting of a simple, repetitive pitch pattern. Pitch entropy was quantitatively assessed with an information-theoretic model of auditory expectation. We found a reduction in pitch and slide MMNm amplitudes in the high-entropy as compared to the low-entropy context. No significant differences were found for intensity and timbre MMNm amplitudes. Furthermore, in a separate behavioral experiment investigating the detection of pitch deviants, similar decreases were found for accuracy measures in response to more fine-grained increases in pitch entropy. Our results are consistent with a precision modulation of auditory prediction error in a musical context, and suggest that this effect is specific to features that depend on the manipulated dimension (pitch information, in this case).


Subject(s)
Music/psychology; Psychomotor Performance/physiology; Uncertainty; Acoustic Stimulation; Adolescent; Adult; Algorithms; Auditory Perception/physiology; Entropy; Evoked Potentials, Auditory; Female; Humans; Magnetoencephalography; Male; Pitch Perception/physiology; Young Adult