Results 1 - 17 of 17
1.
Nat Hum Behav ; 8(5): 846-877, 2024 May.
Article in English | MEDLINE | ID: mdl-38438653

ABSTRACT

Music is present in every known society but varies from place to place. What, if anything, is universal to music cognition? We measured a signature of mental representations of rhythm in 39 participant groups in 15 countries, spanning urban societies and Indigenous populations. Listeners reproduced random 'seed' rhythms; their reproductions were fed back as the stimulus (as in the game of 'telephone'), such that their biases (the prior) could be estimated from the distribution of reproductions. Every tested group showed a sparse prior with peaks at integer-ratio rhythms. However, the importance of different integer ratios varied across groups, often reflecting local musical practices. Our results suggest a common feature of music cognition: discrete rhythm 'categories' at small-integer ratios. These discrete representations plausibly stabilize musical systems in the face of cultural transmission but interact with culture-specific traditions to yield the diversity that is evident when mental representations are probed across many cultures.
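The iterated-reproduction ("telephone") logic described in this abstract can be sketched in a few lines. This is a minimal illustration with invented parameters, not the study's actual procedure: each simulated listener reproduces an interval with a bias toward the nearest rhythm category plus motor noise, and the reproduction becomes the next stimulus; after several generations the distribution of reproductions concentrates at the category peaks, which is how the prior is read off.

```python
import random

# Assumed integer-ratio peaks (e.g. 1:2, 1:1, 2:1 as fractions of a cycle).
CATEGORIES = [1/3, 1/2, 2/3]

def reproduce(interval, pull=0.5, noise=0.02):
    """One listener's reproduction: drift toward the nearest category, plus noise."""
    nearest = min(CATEGORIES, key=lambda c: abs(c - interval))
    biased = interval + pull * (nearest - interval)
    return min(max(biased + random.gauss(0, noise), 0.05), 0.95)

def transmission_chain(seed_interval, generations=10):
    """Feed each reproduction back as the next stimulus."""
    x = seed_interval
    for _ in range(generations):
        x = reproduce(x)
    return x

random.seed(0)
finals = [transmission_chain(random.uniform(0.1, 0.9)) for _ in range(200)]
near_peak = sum(any(abs(f - c) < 0.1 for c in CATEGORIES) for f in finals)
print(near_peak / len(finals))   # most chains end near a category peak
```

With a sparse prior like this, random seed rhythms converge on the category locations regardless of where they started; cross-group differences would show up as different category locations or weights.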


Subject(s)
Auditory Perception; Cross-Cultural Comparison; Music; Music/psychology; Humans; Male; Adult; Female; Auditory Perception/physiology; Young Adult; Cognition/physiology
2.
Nat Commun ; 15(1): 1482, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38369535

ABSTRACT

The phenomenon of musical consonance is an essential feature in diverse musical styles. The traditional belief, supported by centuries of Western music theory and psychological studies, is that consonance derives from simple (harmonic) frequency ratios between tones and is insensitive to timbre. Here we show through five large-scale behavioral studies, comprising 235,440 human judgments from US and South Korean populations, that harmonic consonance preferences can be reshaped by timbral manipulations, even to the point of inducing preferences for inharmonic intervals. We show how such effects may suggest perceptual origins for diverse scale systems ranging from the gamelan's slendro scale to the tuning of Western mean-tone and equal-tempered scales. Through computational modeling we show that these timbral manipulations dissociate competing psychoacoustic mechanisms underlying consonance, and we derive an updated computational model combining liking of harmonicity, disliking of fast beats (roughness), and liking of slow beats. Altogether, this work showcases how large-scale behavioral experiments can inform classical questions in auditory perception.
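The roughness mechanism invoked here can be illustrated with a toy calculation; this is not the authors' model, and the partial counts, the 30 Hz beating window, and the stretch factor are all assumptions for illustration. The point is only that the same interval ratio can beat or not beat depending on where the timbre's partials fall.

```python
def partials(f0, n=6, stretch=1.0):
    """Partial frequencies f0 * k**stretch, k = 1..n (stretch=1.0 is harmonic)."""
    return [f0 * k ** stretch for k in range(1, n + 1)]

def roughness(fs1, fs2, band=30.0):
    """Toy roughness: count partial pairs beating faster than ~3 Hz but
    within an assumed ~30 Hz interaction window."""
    return sum(1 for a in fs1 for b in fs2 if 3.0 < abs(a - b) < band)

f0 = 200.0
harmonic = roughness(partials(f0), partials(f0 * 1.5))    # pure 3:2 fifth, harmonic timbre
mistuned = roughness(partials(f0), partials(f0 * 1.48))   # mistuned fifth, harmonic timbre
stretched = roughness(partials(f0, stretch=1.05),         # pure 3:2 fifth, but with
                      partials(f0 * 1.5, stretch=1.05))   # stretched (inharmonic) partials
print(harmonic, mistuned, stretched)
```

With harmonic partials the exact 3:2 fifth produces no beating, a slightly mistuned fifth does, and stretching the partials makes even the exact 3:2 ratio beat, sketching why timbral manipulation can relocate consonance away from simple integer ratios.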


Subject(s)
Music; Humans; Psychoacoustics; Music/psychology; Auditory Perception; Emotions; Judgment; Acoustic Stimulation
3.
Philos Trans R Soc Lond B Biol Sci ; 379(1895): 20220420, 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38104601

ABSTRACT

Expectation is crucial for our enjoyment of music, yet the underlying generative mechanisms remain unclear. While sensory models derive predictions based on local acoustic information in the auditory signal, cognitive models assume abstract knowledge of music structure acquired over the long term. To evaluate these two contrasting mechanisms, we compared simulations from four computational models of musical expectancy against subjective expectancy and pleasantness ratings of over 1000 chords sampled from 739 US Billboard pop songs. Bayesian model comparison revealed that listeners' expectancy and pleasantness ratings were predicted by the independent, non-overlapping, contributions of cognitive and sensory expectations. Furthermore, cognitive expectations explained over twice the variance in listeners' perceived surprise compared to sensory expectations, suggesting a larger relative importance of long-term representations of music structure over short-term sensory-acoustic information in musical expectancy. Our results thus emphasize the distinct, albeit complementary, roles of cognitive and sensory expectations in shaping musical pleasure, and suggest that this expectancy-driven mechanism depends on musical information represented at different levels of abstraction along the neural hierarchy. This article is part of the theme issue 'Art, aesthetics and predictive processing: theoretical and empirical perspectives'.


Subject(s)
Music; Pleasure; Auditory Perception; Music/psychology; Motivation; Bayes Theorem; Cognition; Acoustic Stimulation/methods
4.
Curr Biol ; 33(8): 1472-1486.e12, 2023 04 24.
Article in English | MEDLINE | ID: mdl-36958332

ABSTRACT

Speech and song have been transmitted orally for countless human generations, changing over time under the influence of biological, cognitive, and cultural pressures. Cross-cultural regularities and diversities in human song are thought to emerge from this transmission process, but testing how underlying mechanisms contribute to musical structures remains a key challenge. Here, we introduce an automatic online pipeline that streamlines large-scale cultural transmission experiments using a sophisticated and naturalistic modality: singing. We quantify the evolution of 3,424 melodies orally transmitted across 1,797 participants in the United States and India. This approach produces a high-resolution characterization of how oral transmission shapes melody, revealing the emergence of structures that are consistent with widespread musical features observed cross-culturally (small pitch sets, small pitch intervals, and arch-shaped melodic contours). We show how the emergence of these structures is constrained by individual biases in our participants (vocal constraints, working memory, and cultural exposure), which determine the size, shape, and complexity of evolving melodies. However, their ultimate effect on population-level structures depends on social dynamics taking place during cultural transmission. When participants recursively imitate their own productions (individual transmission), musical structures evolve slowly and heterogeneously, reflecting idiosyncratic musical biases. When participants instead imitate others' productions (social transmission), melodies rapidly shift toward homogeneous structures, reflecting shared structural biases that may underpin cross-cultural variation. These results provide the first quantitative characterization of the rich collection of biases that oral transmission imposes on music evolution, giving us a new understanding of how human song structures emerge via cultural transmission.


Subject(s)
Music; Singing; Voice; Humans; Memory, Short-Term; Speech
5.
Behav Res Methods ; 54(5): 2271-2285, 2022 10.
Article in English | MEDLINE | ID: mdl-35149980

ABSTRACT

Sensorimotor synchronization (SMS), the rhythmic coordination of perception and action, is a fundamental human skill that supports many behaviors, including music and dance (Repp, 2005; Repp & Su, 2013). Traditionally, SMS experiments have been performed in the laboratory using finger tapping paradigms, and have required equipment with high temporal fidelity to capture the asynchronies between the time of the tap and the corresponding cue event. Thus, SMS is particularly challenging to study with online research, where variability in participants' hardware and software can introduce uncontrolled latency and jitter into recordings. Here we present REPP (Rhythm ExPeriment Platform), a novel technology for measuring SMS in online experiments that can work efficiently using the built-in microphone and speakers of standard laptop computers. In a series of calibration and behavioral experiments, we demonstrate that REPP achieves high temporal accuracy (latency and jitter within 2 ms on average), high test-retest reliability both in the laboratory (r = .87) and online (r = .80), and high concurrent validity (r = .94). We also show that REPP is fully automated and customizable, enabling researchers to monitor experiments in real time and to implement a wide variety of SMS paradigms. We discuss online methods for ensuring high recruiting efficiency and data quality, including pre-screening tests and automatic procedures for quality monitoring. REPP can therefore open new avenues for research on SMS that would be nearly impossible in the laboratory, reducing experimental costs while massively increasing the reach, scalability, and speed of data collection.
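The core SMS measurement the abstract describes can be sketched as follows: align each detected tap onset to its nearest metronome cue and summarize the signed asynchronies. The onset times below are hypothetical, and this is only the alignment arithmetic, not REPP's audio-signal processing.

```python
import statistics

def asynchronies(cues, taps):
    """For each tap, the signed offset (ms) from the nearest cue onset."""
    return [tap - min(cues, key=lambda c: abs(tap - c)) for tap in taps]

cues = [0, 500, 1000, 1500, 2000]    # metronome cue onsets (ms)
taps = [12, 494, 1011, 1498, 2007]   # detected tap onsets (ms)

offsets = asynchronies(cues, taps)
latency = statistics.mean(offsets)   # systematic lead/lag
jitter = statistics.stdev(offsets)   # trial-to-trial variability
print(offsets, round(latency, 1), round(jitter, 1))
```

The point of REPP's calibration is to keep the hardware's own contribution to `latency` and `jitter` within about 2 ms, so that numbers like these reflect the participant rather than the equipment.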


Subject(s)
Music; Psychomotor Performance; Humans; Reproducibility of Results
6.
Behav Brain Sci ; 44: e76, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588059

ABSTRACT

Savage et al. and Mehr et al. provide well-substantiated arguments that the evolution of musicality was shaped by adaptive functions of social bonding and credible signalling. However, they are too quick to dismiss byproduct explanations of music evolution, and to present their theories as complete unitary accounts of the phenomenon.


Subject(s)
Music; Biological Evolution; Humans
7.
PLoS Comput Biol ; 17(5): e1008995, 2021 May.
Article in English | MEDLINE | ID: mdl-34038404

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pcbi.1008304.].

8.
Psychol Res ; 85(3): 1201-1220, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32356009

ABSTRACT

The ability to silently hear music in the mind has been argued to be fundamental to musicality. Objective measurements of this subjective imagery experience are needed if this link between imagery ability and musicality is to be investigated. However, previous tests of musical imagery either rely on self-report, rely on melodic memory, or do not cater to a range of abilities. The Pitch Imagery Arrow Task (PIAT) was designed to address these shortcomings; however, it is impractically long. In this paper, we shorten the PIAT using adaptive testing and automatic item generation. We interrogate the cognitive processes underlying the PIAT through item response modelling. The result is an efficient online test of auditory mental imagery ability (adaptive Pitch Imagery Arrow Task: aPIAT) that takes 8 min to complete and is adaptive to each participant's ability, and so can be used to test participants with a range of musical backgrounds. Performance on the aPIAT showed positive moderate-to-strong correlations with measures of non-musical and musical working memory, self-reported musical training, and general musical sophistication. Ability on the task was best predicted by the ability to maintain and manipulate tones in mental imagery, as well as to resist perceptual biases that can lead to incorrect responses. As such, the aPIAT is an ideal tool with which to investigate the relationship between pitch imagery ability and musicality.


Subject(s)
Auditory Perception/physiology; Memory, Short-Term/physiology; Music/psychology; Adolescent; Adult; Female; Humans; Male; Middle Aged; United Kingdom; Young Adult
9.
PLoS Comput Biol ; 16(11): e1008304, 2020 11.
Article in English | MEDLINE | ID: mdl-33147209

ABSTRACT

Statistical learning and probabilistic prediction are fundamental processes in auditory cognition. A prominent computational model of these processes is Prediction by Partial Matching (PPM), a variable-order Markov model that learns by internalizing n-grams from training sequences. However, PPM has limitations as a cognitive model: in particular, it has a perfect memory that weights all historic observations equally, which is inconsistent with memory capacity constraints and recency effects observed in human cognition. We address these limitations with PPM-Decay, a new variant of PPM that introduces a customizable memory decay kernel. In three studies (one with artificially generated sequences, one with chord sequences from Western music, and one with new behavioral data from an auditory pattern detection experiment), we show how this decay kernel improves the model's predictive performance for sequences whose underlying statistics change over time, and enables the model to capture effects of memory constraints on auditory pattern detection. The resulting model is available in our new open-source R package, ppm (https://github.com/pmcharrison/ppm).
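The decay idea can be sketched with a toy bigram model; this is a simplified illustration of the principle, not the ppm package's implementation, and the half-life, smoothing constant, and two-symbol alphabet are invented. Each n-gram count is down-weighted exponentially with the time elapsed since its observation, so recent context dominates prediction when the sequence statistics change.

```python
import math

class DecayingBigram:
    def __init__(self, half_life=8.0, alphabet=("a", "b")):
        self.rate = math.log(2) / half_life   # decay rate from half-life
        self.alphabet = alphabet
        self.obs = []                         # (time, context, symbol)
        self.t = 0

    def observe(self, context, symbol):
        self.obs.append((self.t, context, symbol))
        self.t += 1

    def _weight(self, t_obs):
        """Exponential decay of a count observed at time t_obs."""
        return math.exp(-self.rate * (self.t - t_obs))

    def prob(self, context, symbol, k=0.5):
        """Decayed count of (context -> symbol), add-k smoothed."""
        num = k + sum(self._weight(t) for t, c, s in self.obs
                      if c == context and s == symbol)
        den = k * len(self.alphabet) + sum(self._weight(t) for t, c, _ in self.obs
                                           if c == context)
        return num / den

m = DecayingBigram()
seq = "ababababaaaaaa"                 # 'a'->'b' early on, 'a'->'a' recently
for ctx, sym in zip(seq, seq[1:]):
    m.observe(ctx, sym)
print(round(m.prob("a", "a"), 2), round(m.prob("a", "b"), 2))
```

A perfect-memory model would weight the early a→b transitions almost as heavily as the recent a→a ones; the decay kernel lets the recent regime win, which is the behavior the abstract reports for changing statistics.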


Subject(s)
Auditory Perception; Computer Simulation; Memory; Algorithms; Humans; Music
10.
J Cogn Neurosci ; 32(12): 2241-2259, 2020 12.
Article in English | MEDLINE | ID: mdl-32762519

ABSTRACT

It is still a matter of debate whether visual aids improve learning of music. In a multisession study, we investigated the neural signatures of novel music sequence learning with or without aids (auditory-only: AO, audiovisual: AV). During three training sessions on three separate days, participants (nonmusicians) reproduced (note by note on a keyboard) melodic sequences generated by an artificial musical grammar. The AV group (n = 20) had each note color-coded on screen, whereas the AO group (n = 20) had no color indication. We evaluated learning of the statistical regularities of the novel music grammar before and after training by presenting melodies ending on correct or incorrect notes and by asking participants to judge the correctness and surprisal of the final note, while EEG was recorded. We found that participants successfully learned the new grammar. Although the AV group, as compared to the AO group, reproduced longer sequences during training, there was no significant difference in learning between groups. At the neural level, after training, the AO group showed a larger N100 response to low-probability compared with high-probability notes, suggesting an increased neural sensitivity to statistical properties of the grammar; this effect was not observed in the AV group. Our findings indicate that visual aids might improve sequence reproduction while not necessarily promoting better learning, pointing to a potential dissociation between sequence reproduction and learning. We suggest that the difficulty induced by auditory-only input during music training might enhance cognitive engagement, thereby improving neural sensitivity to the underlying statistical properties of the learned material.


Subject(s)
Music; Acoustic Stimulation; Auditory Perception; Cues; Humans; Learning
11.
Psychol Rev ; 127(2): 216-244, 2020 03.
Article in English | MEDLINE | ID: mdl-31868392

ABSTRACT

Simultaneous consonance is a salient perceptual phenomenon corresponding to the perceived pleasantness of simultaneously sounding musical tones. Various competing theories of consonance have been proposed over the centuries, but recently a consensus has developed that simultaneous consonance is primarily driven by harmonicity perception. Here we question this view, substantiating our argument by critically reviewing historic consonance research from a broad variety of disciplines, reanalyzing consonance perception data from 4 previous behavioral studies representing more than 500 participants, and modeling three Western musical corpora representing more than 100,000 compositions. We conclude that simultaneous consonance is a composite phenomenon that derives in large part from three phenomena: interference, periodicity/harmonicity, and cultural familiarity. We formalize this conclusion with a computational model that predicts a musical chord's simultaneous consonance from these three features, and release this model in an open-source R package, incon, alongside 15 other computational models also evaluated in this paper. We hope that this package will facilitate further psychological and musicological research into simultaneous consonance. (PsycINFO Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Auditory Perception/physiology; Models, Psychological; Music; Pleasure/physiology; Recognition, Psychology/physiology; Culture; Humans; Time Factors
12.
Neuroimage ; 206: 116311, 2020 02 01.
Article in English | MEDLINE | ID: mdl-31669411

ABSTRACT

Human creativity is intricately linked to acquired knowledge. However, to date, learning a new musical style and subsequent musical creativity have largely been studied in isolation. We introduced a novel experimental paradigm combining behavioural, electrophysiological, and computational methods, to examine the neural correlates of unfamiliar music learning, and to investigate how neural and computational measures can predict human creativity. We investigated music learning by training non-musicians (N = 40) on an artificial music grammar. Participants' knowledge of the grammar was tested before and after three training sessions on separate days by assessing explicit recognition of the notes of the grammar, while additionally recording their EEG. After each training session, participants created their own musical compositions, which were later evaluated by human experts. A computational model of auditory expectation was used to quantify the statistical properties of both the grammar and the compositions. Results showed that participants successfully learned the new grammar. This was also reflected in the N100, P200, and P3a components, which were higher in response to incorrect than correct notes. The delta band (2.5-4.5 Hz) power in response to grammatical notes during first exposure to the grammar positively correlated with learning, suggesting a potential neural mechanism of encoding. On the other hand, better learning was associated with lower alpha and higher beta band power after training, potentially reflecting neural mechanisms of retrieval. Importantly, learning was a significant predictor of creativity, as judged by experts. There was also an inverted U-shaped relationship between the percentage of correct intervals and creativity, as compositions with an intermediate proportion of correct intervals were associated with the highest creativity. Finally, the P200 in response to incorrect notes was predictive of creativity, suggesting a link between the neural correlates of learning and creativity. Overall, our findings shed light on the neural mechanisms of learning an unfamiliar music grammar, and offer novel insights into the association between learning measures and creative compositions based on learned materials.


Subject(s)
Auditory Perception/physiology; Brain Waves/physiology; Cerebral Cortex/physiology; Creativity; Evoked Potentials/physiology; Mental Recall/physiology; Music; Probability Learning; Adult; Event-Related Potentials, P300/physiology; Female; Humans; Judgment/physiology; Male; Young Adult
13.
Curr Biol ; 29(23): 4084-4092.e4, 2019 12 02.
Article in English | MEDLINE | ID: mdl-31708393

ABSTRACT

Listening to music often evokes intense emotions [1, 2]. Recent research suggests that musical pleasure comes from positive reward prediction errors, which arise when what is heard proves to be better than expected [3]. Central to this view is the engagement of the nucleus accumbens (a brain region that processes reward expectations) by pleasurable music and surprising musical events [4-8]. However, expectancy violations along multiple musical dimensions (e.g., harmony and melody) have failed to implicate the nucleus accumbens [9-11], and it is unknown how music reward value is assigned [12]. Whether changes in musical expectancy elicit pleasure has thus remained elusive [11]. Here, we demonstrate that pleasure varies nonlinearly as a function of the listener's uncertainty when anticipating a musical event, and the surprise it evokes when it deviates from expectations. Taking Western tonal harmony as a model of musical syntax, we used a machine-learning model [13] to mathematically quantify the uncertainty and surprise of 80,000 chords in US Billboard pop songs. Behaviorally, we found that chords elicited high pleasure ratings when they deviated substantially from what the listener had expected (low uncertainty, high surprise) or, conversely, when they conformed to expectations in an uninformative context (high uncertainty, low surprise). Neurally, we found using fMRI that activity in the amygdala, hippocampus, and auditory cortex reflected this interaction, while the nucleus accumbens only reflected uncertainty. These findings challenge current neurocognitive models of music-evoked pleasure and highlight the synergistic interplay between prospective and retrospective states of expectation in the musical experience. VIDEO ABSTRACT.
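The two quantities this abstract distinguishes have standard information-theoretic readings that can be computed directly: "uncertainty" is the entropy of the predictive distribution before the chord sounds, and "surprise" is the information content (-log2 p) of the chord that actually arrives. The chord distribution below is invented for illustration; the study derived such distributions from a model trained on Billboard pop harmony.

```python
import math

def entropy(dist):
    """Uncertainty: entropy (bits) of a predictive distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def surprisal(dist, event):
    """Surprise: information content (bits) of the event that occurred."""
    return -math.log2(dist[event])

# Hypothetical predictive distribution over the next chord in a progression.
next_chord = {"I": 0.5, "IV": 0.25, "V": 0.2, "bVI": 0.05}

uncertainty = entropy(next_chord)
print(round(uncertainty, 2),
      round(surprisal(next_chord, "I"), 2),     # expected chord: low surprise
      round(surprisal(next_chord, "bVI"), 2))   # rare chord: high surprise
```

The abstract's key conditions map onto these numbers: a high-surprise chord under low entropy, or a low-surprise chord under high entropy, were the combinations rated most pleasurable.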


Subject(s)
Auditory Perception/physiology; Music; Pleasure; Uncertainty; Adult; Amygdala/physiology; Auditory Cortex/physiology; Female; Hippocampus/physiology; Humans; Male; Nucleus Accumbens/physiology; Young Adult
15.
Behav Res Methods ; 51(2): 663-675, 2019 04.
Article in English | MEDLINE | ID: mdl-30924106

ABSTRACT

An important aspect of the perceived quality of vocal music is the degree to which the vocalist sings in tune. Although most listeners seem sensitive to vocal mistuning, little is known about the development of this perceptual ability or how it differs between listeners. Motivated by a lack of suitable preexisting measures, we introduce in this article an adaptive and ecologically valid test of mistuning perception ability. The stimulus material consisted of short excerpts (6 to 12 s in length) from pop music performances (obtained from MedleyDB; Bittner et al., 2014) for which the vocal track was pitch-shifted relative to the instrumental tracks. In a first experiment, 333 listeners were tested on a two-alternative forced choice task that tested discrimination between a pitch-shifted and an unaltered version of the same audio clip. Explanatory item response modeling was then used to calibrate an adaptive version of the test. A subsequent validation experiment applied this adaptive test to 66 participants with a broad range of musical expertise, producing evidence of the test's reliability, convergent validity, and divergent validity. The test is ready to be deployed as an experimental tool and should make an important contribution to our understanding of the human ability to judge mistuning.
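The pitch-shift manipulation at the heart of this test is conventionally expressed in cents (hundredths of a semitone). The arithmetic is a minimal sketch with illustrative values, not the test's calibrated shift magnitudes:

```python
import math

def cents_to_ratio(cents):
    """Frequency ratio for a pitch shift of `cents` (100 cents = 1 semitone)."""
    return 2 ** (cents / 1200)

def ratio_to_cents(ratio):
    """Inverse mapping: size of a frequency ratio in cents."""
    return 1200 * math.log2(ratio)

shift = cents_to_ratio(25)     # e.g. sharpen the vocal track by 25 cents
f_vocal = 220.0                # an A3 sung note, in Hz
print(round(f_vocal * shift, 2), round(ratio_to_cents(shift), 2))
```

In a two-alternative forced choice trial, one clip would have its vocal resynthesized at `f_vocal * shift` against unaltered instrumentals, and the listener must pick the unaltered version; the adaptive procedure then varies the shift size with listener ability.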


Subject(s)
Auditory Perception/physiology; Music; Pitch Perception/physiology; Singing; Adolescent; Adult; Aged; Female; Humans; Male; Middle Aged; Reproducibility of Results; Young Adult
16.
Sci Rep ; 8(1): 12395, 2018 08 17.
Article in English | MEDLINE | ID: mdl-30120265

ABSTRACT

Beat perception is increasingly being recognised as a fundamental musical ability. A number of psychometric instruments have been developed to assess this ability, but these tests do not take advantage of modern psychometric techniques, and rarely receive systematic validation. The present research addresses this gap in the literature by developing and validating a new test, the Computerised Adaptive Beat Alignment Test (CA-BAT), a variant of the Beat Alignment Test (BAT) that leverages recent advances in psychometric theory, including item response theory, adaptive testing, and automatic item generation. The test is constructed and validated in four empirical studies. The results support the reliability and validity of the CA-BAT for laboratory testing, but suggest that the test is not well-suited to online testing, owing to its reliance on fine perceptual discrimination.


Subject(s)
Models, Theoretical; Psychometrics/methods; Adaptation, Psychological; Algorithms; Humans
17.
Sci Rep ; 7(1): 3618, 2017 06 15.
Article in English | MEDLINE | ID: mdl-28620165

ABSTRACT

Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test's viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.
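The adaptive-testing loop common to this test and the CA-BAT above can be sketched in a Rasch-style (1PL) form. This is an invented toy, not the published tests' calibrated models: pick the unused item whose difficulty is closest to the current ability estimate (roughly where item information peaks for a Rasch item), score the response, and nudge the estimate by the prediction error with a shrinking step size (a stand-in for proper maximum-likelihood or Bayesian updating).

```python
import math

def p_correct(theta, b):
    """Rasch (1PL) model: probability of a correct response
    given ability theta and item difficulty b."""
    return 1 / (1 + math.exp(-(theta - b)))

def next_item(theta, bank):
    """Select the remaining item closest in difficulty to the estimate."""
    return min(bank, key=lambda b: abs(b - theta))

def run_test(responder_theta, bank, n_items=10, lr=0.8):
    theta = 0.0                 # start from an uninformative estimate
    bank = list(bank)
    for _ in range(n_items):
        b = next_item(theta, bank)
        bank.remove(b)          # avoid item re-exposure
        # Deterministic stand-in for a real response process:
        correct = p_correct(responder_theta, b) > 0.5
        theta += lr * ((1.0 if correct else 0.0) - p_correct(theta, b))
        lr *= 0.85              # shrink the step as evidence accumulates
    return theta

bank = [i / 4 for i in range(-12, 13)]   # item difficulties from -3 to 3
est = run_test(responder_theta=1.5, bank=bank)
print(round(est, 2))
```

Even this crude loop homes in on the responder's ability region within a few items, which is why adaptive tests reach a given reliability with far fewer items than fixed-form tests; automatic item generation then keeps the bank large enough that item re-exposure is not a concern.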


Subject(s)
Discrimination, Psychological; Models, Psychological; Music; Psychometrics; Adult; Cognition; Female; Humans; Male; Middle Aged; Monte Carlo Method; Psychometrics/methods; Young Adult