Results 1 - 19 of 19
1.
J Acoust Soc Am ; 156(2): 1111-1122, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39145812

ABSTRACT

Previous psychological studies have shown that musical consonance is not only determined by the frequency ratios between tones, but also by the frequency spectra of those tones. However, these prior studies used artificial tones, specifically tones built from a small number of pure tones, which do not match the acoustic complexity of real musical instruments. The present study therefore investigates tones recorded from a real musical instrument, the Westerkerk Carillon, in a "dense rating" experiment in which participants (N = 113) rated musical intervals drawn from the continuous range 0-15 semitones. Results show that the traditional consonances of the major third and the minor sixth become dissonances in the carillon and that small intervals (in particular 0.5-2.5 semitones) become especially dissonant. Computational modelling shows that these effects are primarily caused by interference between partials (e.g., beating), but that preference for harmonicity is also necessary to produce an accurate overall account of participants' preferences. The results support musicians' writings about the carillon and contribute to ongoing debates about the psychological mechanisms underpinning consonance perception, in particular disputing the recent claim that interference is largely irrelevant to consonance perception.
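The interference mechanism invoked here operates through beating between nearby partials. As a rough illustration (not the authors' actual model), one can enumerate the beat rates produced by the partials of two tones; the 30 Hz audibility cutoff below is an assumed placeholder, not a psychoacoustic constant:

```python
def beat_rates(partials_a, partials_b, max_beat_hz=30.0):
    """Return beat frequencies (Hz) between every pair of partials
    from two tones, keeping only pairs close enough in frequency to
    produce audible beating (threshold is an illustrative assumption)."""
    rates = []
    for fa in partials_a:
        for fb in partials_b:
            diff = abs(fa - fb)
            if 0 < diff <= max_beat_hz:
                rates.append(diff)
    return sorted(rates)

# Two harmonic tones roughly a semitone apart: corresponding partials
# lie close together and beat slowly against each other.
tone1 = [220.0 * k for k in range(1, 4)]   # 220, 440, 660 Hz
tone2 = [233.1 * k for k in range(1, 4)]   # ~one semitone higher
print([round(r, 1) for r in beat_rates(tone1, tone2)])  # → [13.1, 26.2]
```

Real carillon bells have markedly inharmonic partials, which is exactly why such pairwise interference patterns differ from those of harmonic instruments.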


Subject(s)
Acoustic Stimulation, Music, Humans, Male, Female, Adult, Young Adult, Computer Simulation, Sound Spectrography, Adolescent, Pitch Perception, Time Factors, Acoustics, Middle Aged, Auditory Perception
2.
PLoS Comput Biol ; 17(5): e1008995, 2021 May.
Article in English | MEDLINE | ID: mdl-34038404

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pcbi.1008304.].

3.
Behav Res Methods ; 54(5): 2271-2285, 2022 10.
Article in English | MEDLINE | ID: mdl-35149980

ABSTRACT

Sensorimotor synchronization (SMS), the rhythmic coordination of perception and action, is a fundamental human skill that supports many behaviors, including music and dance (Repp, 2005; Repp & Su, 2013). Traditionally, SMS experiments have been performed in the laboratory using finger tapping paradigms, and have required equipment with high temporal fidelity to capture the asynchronies between the time of the tap and the corresponding cue event. Thus, SMS is particularly challenging to study with online research, where variability in participants' hardware and software can introduce uncontrolled latency and jitter into recordings. Here we present REPP (Rhythm ExPeriment Platform), a novel technology for measuring SMS in online experiments that can work efficiently using the built-in microphone and speakers of standard laptop computers. In a series of calibration and behavioral experiments, we demonstrate that REPP achieves high temporal accuracy (latency and jitter within 2 ms on average), high test-retest reliability both in the laboratory (r = .87) and online (r = .80), and high concurrent validity (r = .94). We also show that REPP is fully automated and customizable, enabling researchers to monitor experiments in real time and to implement a wide variety of SMS paradigms. We discuss online methods for ensuring high recruiting efficiency and data quality, including pre-screening tests and automatic procedures for quality monitoring. REPP can therefore open new avenues for research on SMS that would be nearly impossible in the laboratory, reducing experimental costs while massively increasing the reach, scalability, and speed of data collection.
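The core quantity REPP measures is the asynchrony between each tap and its cue. A minimal sketch of that final step, assuming tap and cue onsets have already been extracted from the recording (REPP's actual pipeline additionally handles onset detection and marker alignment), pairs each tap with its nearest cue and summarizes latency and jitter:

```python
import statistics

def asynchrony_stats(cue_times, tap_times):
    """Match each tap to its nearest cue onset and summarize the
    asynchronies: returns (mean latency, jitter) in the input units.
    Nearest-neighbour matching is a simplification of a real
    tap-to-cue alignment procedure."""
    asynchronies = [min((tap - cue for cue in cue_times), key=abs)
                    for tap in tap_times]
    return statistics.mean(asynchronies), statistics.pstdev(asynchronies)

cues = [0.0, 500.0, 1000.0, 1500.0]   # metronome onsets (ms)
taps = [12.0, 508.0, 1010.0, 1514.0]  # detected tap onsets (ms)
mean_lat, jitter = asynchrony_stats(cues, taps)
print(round(mean_lat, 1), round(jitter, 1))  # → 11.0 2.2
```

On this toy data the participant taps about 11 ms late on average, with roughly 2 ms of trial-to-trial jitter, i.e. within the tolerance the abstract reports.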


Subject(s)
Music, Psychomotor Performance, Humans, Reproducibility of Results
4.
PLoS Comput Biol ; 16(11): e1008304, 2020 11.
Article in English | MEDLINE | ID: mdl-33147209

ABSTRACT

Statistical learning and probabilistic prediction are fundamental processes in auditory cognition. A prominent computational model of these processes is Prediction by Partial Matching (PPM), a variable-order Markov model that learns by internalizing n-grams from training sequences. However, PPM has limitations as a cognitive model: in particular, it has a perfect memory that weights all historic observations equally, which is inconsistent with memory capacity constraints and recency effects observed in human cognition. We address these limitations with PPM-Decay, a new variant of PPM that introduces a customizable memory decay kernel. In three studies-one with artificially generated sequences, one with chord sequences from Western music, and one with new behavioral data from an auditory pattern detection experiment-we show how this decay kernel improves the model's predictive performance for sequences whose underlying statistics change over time, and enables the model to capture effects of memory constraints on auditory pattern detection. The resulting model is available in our new open-source R package, ppm (https://github.com/pmcharrison/ppm).
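The decay idea can be caricatured in a few lines: weight each observed n-gram by its recency before normalizing into a predictive distribution. The sketch below is a first-order (bigram) toy with an exponential half-life kernel, not the ppm package's actual variable-order algorithm or its fitted kernel:

```python
from collections import defaultdict

def decayed_bigram_prediction(sequence, context, alphabet, half_life=8.0):
    """Predict P(next symbol | context) from recency-weighted bigram
    counts: each observation is down-weighted by 0.5 ** (age / half_life),
    so older evidence counts less. Add-one smoothing keeps unseen
    symbols possible. Parameters are illustrative assumptions."""
    weights = defaultdict(float)
    n = len(sequence)
    for i in range(n - 1):
        if sequence[i] == context:
            age = (n - 1) - (i + 1)  # how long ago this bigram ended
            weights[sequence[i + 1]] += 0.5 ** (age / half_life)
    total = sum(weights.values()) + len(alphabet)
    return {s: (weights[s] + 1) / total for s in alphabet}

# The sequence statistics change midway ('A B' early, 'A C' recently);
# with decay, the recent regime dominates the prediction for context 'A'.
seq = list("ABABAB") + list("ACACAC")
probs = decayed_bigram_prediction(seq, "A", alphabet="ABC")
print(probs["C"] > probs["B"])  # → True
```

A perfect-memory model (half_life → infinity) would instead weight both regimes equally, which is exactly the behaviour the paper argues is psychologically implausible.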


Subject(s)
Auditory Perception, Computer Simulation, Memory, Algorithms, Humans, Music
5.
Psychol Res ; 85(3): 1201-1220, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32356009

ABSTRACT

The ability to silently hear music in the mind has been argued to be fundamental to musicality. Objective measurements of this subjective imagery experience are needed if this link between imagery ability and musicality is to be investigated. However, previous tests of musical imagery either rely on self-report, rely on melodic memory, or do not cater to a range of abilities. The Pitch Imagery Arrow Task (PIAT) was designed to address these shortcomings; however, it is impractically long. In this paper, we shorten the PIAT using adaptive testing and automatic item generation. We interrogate the cognitive processes underlying the PIAT through item response modelling. The result is an efficient online test of auditory mental imagery ability (adaptive Pitch Imagery Arrow Task: aPIAT) that takes 8 min to complete, is adaptive to each participant's individual ability, and so can be used to test participants with a range of musical backgrounds. Performance on the aPIAT showed positive moderate-to-strong correlations with measures of non-musical and musical working memory, self-reported musical training, and general musical sophistication. Ability on the task was best predicted by the ability to maintain and manipulate tones in mental imagery, as well as to resist perceptual biases that can lead to incorrect responses. As such, the aPIAT is an ideal tool with which to investigate the relationship between pitch imagery ability and musicality.


Subject(s)
Auditory Perception/physiology, Short-Term Memory/physiology, Music/psychology, Adolescent, Adult, Female, Humans, Male, Middle Aged, United Kingdom, Young Adult
6.
Behav Brain Sci ; 44: e76, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588059

ABSTRACT

Savage et al. and Mehr et al. provide well-substantiated arguments that the evolution of musicality was shaped by adaptive functions of social bonding and credible signalling. However, they are too quick to dismiss byproduct explanations of music evolution, and to present their theories as complete unitary accounts of the phenomenon.


Subject(s)
Music, Biological Evolution, Humans
7.
J Cogn Neurosci ; 32(12): 2241-2259, 2020 12.
Article in English | MEDLINE | ID: mdl-32762519

ABSTRACT

It is still a matter of debate whether visual aids improve learning of music. In a multisession study, we investigated the neural signatures of novel music sequence learning with or without aids (auditory-only: AO, audiovisual: AV). During three training sessions on three separate days, participants (nonmusicians) reproduced (note by note on a keyboard) melodic sequences generated by an artificial musical grammar. The AV group (n = 20) had each note color-coded on screen, whereas the AO group (n = 20) had no color indication. We evaluated learning of the statistical regularities of the novel music grammar before and after training by presenting melodies ending on correct or incorrect notes and by asking participants to judge the correctness and surprisal of the final note, while EEG was recorded. We found that participants successfully learned the new grammar. Although the AV group, as compared to the AO group, reproduced longer sequences during training, there was no significant difference in learning between groups. At the neural level, after training, the AO group showed a larger N100 response to low-probability compared with high-probability notes, suggesting an increased neural sensitivity to statistical properties of the grammar; this effect was not observed in the AV group. Our findings indicate that visual aids might improve sequence reproduction while not necessarily promoting better learning, indicating a potential dissociation between sequence reproduction and learning. We suggest that the difficulty induced by auditory-only input during music training might enhance cognitive engagement, thereby improving neural sensitivity to the underlying statistical properties of the learned material.


Subject(s)
Music, Acoustic Stimulation, Auditory Perception, Cues (Psychology), Humans, Learning
8.
Neuroimage ; 206: 116311, 2020 02 01.
Article in English | MEDLINE | ID: mdl-31669411

ABSTRACT

Human creativity is intricately linked to acquired knowledge. However, to date learning a new musical style and subsequent musical creativity have largely been studied in isolation. We introduced a novel experimental paradigm combining behavioural, electrophysiological, and computational methods, to examine the neural correlates of unfamiliar music learning, and to investigate how neural and computational measures can predict human creativity. We investigated music learning by training non-musicians (N = 40) on an artificial music grammar. Participants' knowledge of the grammar was tested before and after three training sessions on separate days by assessing explicit recognition of the notes of the grammar, while additionally recording their EEG. After each training session, participants created their own musical compositions, which were later evaluated by human experts. A computational model of auditory expectation was used to quantify the statistical properties of both the grammar and the compositions. Results showed that participants successfully learned the new grammar. This was also reflected in the N100, P200, and P3a components, which were higher in response to incorrect than correct notes. The delta band (2.5-4.5 Hz) power in response to grammatical notes during first exposure to the grammar positively correlated with learning, suggesting a potential neural mechanism of encoding. On the other hand, better learning was associated with lower alpha and higher beta band power after training, potentially reflecting neural mechanisms of retrieval. Importantly, learning was a significant predictor of creativity, as judged by experts. There was also an inverted U-shaped relationship between percentage of correct intervals and creativity, as compositions with an intermediate proportion of correct intervals were associated with the highest creativity. Finally, the P200 in response to incorrect notes was predictive of creativity, suggesting a link between the neural correlates of learning, and creativity. Overall, our findings shed light on the neural mechanisms of learning an unfamiliar music grammar, and offer novel contributions to the associations between learning measures and creative compositions based on learned materials.


Subject(s)
Auditory Perception/physiology, Brain Waves/physiology, Cerebral Cortex/physiology, Creativity, Evoked Potentials/physiology, Mental Recall/physiology, Music, Probability Learning, Adult, Event-Related Potentials (P300)/physiology, Female, Humans, Judgment/physiology, Male, Young Adult
9.
Behav Res Methods ; 51(2): 663-675, 2019 04.
Article in English | MEDLINE | ID: mdl-30924106

ABSTRACT

An important aspect of the perceived quality of vocal music is the degree to which the vocalist sings in tune. Although most listeners seem sensitive to vocal mistuning, little is known about the development of this perceptual ability or how it differs between listeners. Motivated by a lack of suitable preexisting measures, we introduce in this article an adaptive and ecologically valid test of mistuning perception ability. The stimulus material consisted of short excerpts (6 to 12 s in length) from pop music performances (obtained from MedleyDB; Bittner et al., 2014) for which the vocal track was pitch-shifted relative to the instrumental tracks. In a first experiment, 333 listeners were tested on a two-alternative forced choice task that tested discrimination between a pitch-shifted and an unaltered version of the same audio clip. Explanatory item response modeling was then used to calibrate an adaptive version of the test. A subsequent validation experiment applied this adaptive test to 66 participants with a broad range of musical expertise, producing evidence of the test's reliability, convergent validity, and divergent validity. The test is ready to be deployed as an experimental tool and should make an important contribution to our understanding of the human ability to judge mistuning.


Subject(s)
Auditory Perception/physiology, Music, Pitch Perception/physiology, Singing, Adolescent, Adult, Aged, Female, Humans, Male, Middle Aged, Reproducibility of Results, Young Adult
11.
Nat Commun ; 15(1): 1482, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38369535

ABSTRACT

The phenomenon of musical consonance is an essential feature in diverse musical styles. The traditional belief, supported by centuries of Western music theory and psychological studies, is that consonance derives from simple (harmonic) frequency ratios between tones and is insensitive to timbre. Here we show through five large-scale behavioral studies, comprising 235,440 human judgments from US and South Korean populations, that harmonic consonance preferences can be reshaped by timbral manipulations, even to the point of inducing preferences for inharmonic intervals. We show how such effects may suggest perceptual origins for diverse scale systems ranging from the gamelan's slendro scale to the tuning of Western mean-tone and equal-tempered scales. Through computational modeling we show that these timbral manipulations dissociate competing psychoacoustic mechanisms underlying consonance, and we derive an updated computational model combining liking of harmonicity, disliking of fast beats (roughness), and liking of slow beats. Altogether, this work showcases how large-scale behavioral experiments can inform classical questions in auditory perception.


Subject(s)
Music, Humans, Psychoacoustics, Music/psychology, Auditory Perception, Emotions, Judgment, Acoustic Stimulation
12.
Sci Rep ; 14(1): 19048, 2024 08 16.
Article in English | MEDLINE | ID: mdl-39152203

ABSTRACT

Aesthetic preference is intricately linked to learning and creativity. Previous studies have largely examined the perception of novelty in terms of pleasantness and the generation of novelty via creativity separately. The current study examines the connection between perception and generation of novelty in music; specifically, we investigated how pleasantness judgements and brain responses to musical notes of varying probability (estimated by a computational model of auditory expectation) are linked to learning and creativity. To facilitate learning de novo, 40 non-musicians were trained on an unfamiliar artificial music grammar. After learning, participants evaluated the pleasantness of the final notes of melodies, which varied in probability, while their EEG was recorded. They also composed their own musical pieces using the learned grammar which were subsequently assessed by experts. As expected, there was an inverted U-shaped relationship between liking and probability: participants were more likely to rate the notes with intermediate probabilities as pleasant. Further, intermediate probability notes elicited larger N100 and P200 at posterior and frontal sites, respectively, associated with prediction error processing. Crucially, individuals who produced less creative compositions preferred higher probability notes, whereas individuals who composed more creative pieces preferred notes with intermediate probability. Finally, evoked brain responses to note probability were relatively independent of learning and creativity, suggesting that these higher-level processes are not mediated by brain responses related to performance monitoring. Overall, our findings shed light on the relationship between perception and generation of novelty, offering new insights into aesthetic preference and its neural correlates.


Subject(s)
Auditory Perception, Creativity, Electroencephalography, Learning, Music, Humans, Music/psychology, Male, Female, Learning/physiology, Adult, Young Adult, Auditory Perception/physiology, Brain/physiology, Acoustic Stimulation
13.
Philos Trans R Soc Lond B Biol Sci ; 379(1895): 20220420, 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38104601

ABSTRACT

Expectation is crucial for our enjoyment of music, yet the underlying generative mechanisms remain unclear. While sensory models derive predictions based on local acoustic information in the auditory signal, cognitive models assume abstract knowledge of music structure acquired over the long term. To evaluate these two contrasting mechanisms, we compared simulations from four computational models of musical expectancy against subjective expectancy and pleasantness ratings of over 1000 chords sampled from 739 US Billboard pop songs. Bayesian model comparison revealed that listeners' expectancy and pleasantness ratings were predicted by the independent, non-overlapping, contributions of cognitive and sensory expectations. Furthermore, cognitive expectations explained over twice the variance in listeners' perceived surprise compared to sensory expectations, suggesting a larger relative importance of long-term representations of music structure over short-term sensory-acoustic information in musical expectancy. Our results thus emphasize the distinct, albeit complementary, roles of cognitive and sensory expectations in shaping musical pleasure, and suggest that this expectancy-driven mechanism depends on musical information represented at different levels of abstraction along the neural hierarchy. This article is part of the theme issue 'Art, aesthetics and predictive processing: theoretical and empirical perspectives'.


Subject(s)
Music, Pleasure, Auditory Perception, Music/psychology, Motivation, Bayes Theorem, Cognition, Acoustic Stimulation/methods
14.
Nat Hum Behav ; 8(5): 846-877, 2024 May.
Article in English | MEDLINE | ID: mdl-38438653

ABSTRACT

Music is present in every known society but varies from place to place. What, if anything, is universal to music cognition? We measured a signature of mental representations of rhythm in 39 participant groups in 15 countries, spanning urban societies and Indigenous populations. Listeners reproduced random 'seed' rhythms; their reproductions were fed back as the stimulus (as in the game of 'telephone'), such that their biases (the prior) could be estimated from the distribution of reproductions. Every tested group showed a sparse prior with peaks at integer-ratio rhythms. However, the importance of different integer ratios varied across groups, often reflecting local musical practices. Our results suggest a common feature of music cognition: discrete rhythm 'categories' at small-integer ratios. These discrete representations plausibly stabilize musical systems in the face of cultural transmission but interact with culture-specific traditions to yield the diversity that is evident when mental representations are probed across many cultures.
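The 'telephone' paradigm described above can be caricatured as an iterated map: each reproduction drifts toward the nearest peak of a categorical prior (here, small-integer duration ratios) plus motor noise. The pull and noise parameters below are illustrative assumptions, not values fitted to the study's data:

```python
import random

def reproduce(interval_ratio, categories, pull=0.4, noise=0.05, rng=None):
    """One transmission step: the reproduction moves part-way toward
    the nearest categorical prior peak, plus Gaussian motor noise.
    Both parameters are illustrative, not empirically fitted."""
    rng = rng or random
    target = min(categories, key=lambda c: abs(c - interval_ratio))
    drifted = interval_ratio + pull * (target - interval_ratio)
    return drifted + rng.gauss(0.0, noise)

rng = random.Random(1)
categories = [1.0, 1.5, 2.0, 3.0]  # e.g. 1:1, 3:2, 2:1, 3:1 duration ratios
ratio = 1.8                        # random 'seed' rhythm ratio
for _ in range(12):                # iterated transmission chain
    ratio = reproduce(ratio, categories, rng=rng)
print(round(ratio, 1))             # settles near an integer-ratio peak
```

Running many such chains from random seeds and histogramming the endpoints is, in essence, how the distribution of reproductions reveals the prior's peaks.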


Subject(s)
Auditory Perception, Cross-Cultural Comparison, Music, Music/psychology, Humans, Male, Adult, Female, Auditory Perception/physiology, Young Adult, Cognition/physiology
15.
Curr Biol ; 33(8): 1472-1486.e12, 2023 04 24.
Article in English | MEDLINE | ID: mdl-36958332

ABSTRACT

Speech and song have been transmitted orally for countless human generations, changing over time under the influence of biological, cognitive, and cultural pressures. Cross-cultural regularities and diversities in human song are thought to emerge from this transmission process, but testing how underlying mechanisms contribute to musical structures remains a key challenge. Here, we introduce an automatic online pipeline that streamlines large-scale cultural transmission experiments using a sophisticated and naturalistic modality: singing. We quantify the evolution of 3,424 melodies orally transmitted across 1,797 participants in the United States and India. This approach produces a high-resolution characterization of how oral transmission shapes melody, revealing the emergence of structures that are consistent with widespread musical features observed cross-culturally (small pitch sets, small pitch intervals, and arch-shaped melodic contours). We show how the emergence of these structures is constrained by individual biases in our participants-vocal constraints, working memory, and cultural exposure-which determine the size, shape, and complexity of evolving melodies. However, their ultimate effect on population-level structures depends on social dynamics taking place during cultural transmission. When participants recursively imitate their own productions (individual transmission), musical structures evolve slowly and heterogeneously, reflecting idiosyncratic musical biases. When participants instead imitate others' productions (social transmission), melodies rapidly shift toward homogeneous structures, reflecting shared structural biases that may underpin cross-cultural variation. These results provide the first quantitative characterization of the rich collection of biases that oral transmission imposes on music evolution, giving us a new understanding of how human song structures emerge via cultural transmission.


Subject(s)
Music, Singing, Voice, Humans, Short-Term Memory, Speech
16.
Psychol Rev ; 127(2): 216-244, 2020 03.
Article in English | MEDLINE | ID: mdl-31868392

ABSTRACT

Simultaneous consonance is a salient perceptual phenomenon corresponding to the perceived pleasantness of simultaneously sounding musical tones. Various competing theories of consonance have been proposed over the centuries, but recently a consensus has developed that simultaneous consonance is primarily driven by harmonicity perception. Here we question this view, substantiating our argument by critically reviewing historic consonance research from a broad variety of disciplines, reanalyzing consonance perception data from 4 previous behavioral studies representing more than 500 participants, and modeling three Western musical corpora representing more than 100,000 compositions. We conclude that simultaneous consonance is a composite phenomenon that derives in large part from three phenomena: interference, periodicity/harmonicity, and cultural familiarity. We formalize this conclusion with a computational model that predicts a musical chord's simultaneous consonance from these three features, and release this model in an open-source R package, incon, alongside 15 other computational models also evaluated in this paper. We hope that this package will facilitate further psychological and musicological research into simultaneous consonance. (PsycINFO Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Auditory Perception/physiology, Psychological Models, Music, Pleasure/physiology, Recognition (Psychology)/physiology, Culture, Humans, Time Factors
17.
Curr Biol ; 29(23): 4084-4092.e4, 2019 12 02.
Article in English | MEDLINE | ID: mdl-31708393

ABSTRACT

Listening to music often evokes intense emotions [1, 2]. Recent research suggests that musical pleasure comes from positive reward prediction errors, which arise when what is heard proves to be better than expected [3]. Central to this view is the engagement of the nucleus accumbens-a brain region that processes reward expectations-to pleasurable music and surprising musical events [4-8]. However, expectancy violations along multiple musical dimensions (e.g., harmony and melody) have failed to implicate the nucleus accumbens [9-11], and it is unknown how music reward value is assigned [12]. Whether changes in musical expectancy elicit pleasure has thus remained elusive [11]. Here, we demonstrate that pleasure varies nonlinearly as a function of the listener's uncertainty when anticipating a musical event, and the surprise it evokes when it deviates from expectations. Taking Western tonal harmony as a model of musical syntax, we used a machine-learning model [13] to mathematically quantify the uncertainty and surprise of 80,000 chords in US Billboard pop songs. Behaviorally, we found that chords elicited high pleasure ratings when they deviated substantially from what the listener had expected (low uncertainty, high surprise) or, conversely, when they conformed to expectations in an uninformative context (high uncertainty, low surprise). Neurally, we found using fMRI that activity in the amygdala, hippocampus, and auditory cortex reflected this interaction, while the nucleus accumbens only reflected uncertainty. These findings challenge current neurocognitive models of music-evoked pleasure and highlight the synergistic interplay between prospective and retrospective states of expectation in the musical experience. VIDEO ABSTRACT.
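The two quantities at the heart of this design have standard information-theoretic definitions: surprise is the negative log-probability of the event that actually occurred, and uncertainty is the entropy of the predictive distribution beforehand. A minimal sketch (the paper itself estimates the probabilities with a corpus-trained machine-learning model, not hand-set values like these):

```python
import math

def surprise_and_uncertainty(probs, outcome):
    """Surprise: -log2 P(outcome). Uncertainty: Shannon entropy (bits)
    of the predictive distribution before the event."""
    surprise = -math.log2(probs[outcome])
    uncertainty = -sum(p * math.log2(p) for p in probs.values() if p > 0)
    return surprise, uncertainty

# A fairly confident prediction (low uncertainty) violated by a
# low-probability chord yields high surprise -- one of the two
# conditions the paper associates with high pleasure ratings.
chord_probs = {"I": 0.7, "IV": 0.2, "vi": 0.1}  # hypothetical distribution
s, u = surprise_and_uncertainty(chord_probs, "vi")
print(round(s, 2), round(u, 2))  # → 3.32 1.16
```

The reported nonlinearity is in how these two quantities jointly predict pleasure, not in the definitions themselves.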


Subject(s)
Auditory Perception/physiology, Music, Pleasure, Uncertainty, Adult, Amygdala/physiology, Auditory Cortex/physiology, Female, Hippocampus/physiology, Humans, Male, Nucleus Accumbens/physiology, Young Adult
18.
Sci Rep ; 8(1): 12395, 2018 08 17.
Article in English | MEDLINE | ID: mdl-30120265

ABSTRACT

Beat perception is increasingly being recognised as a fundamental musical ability. A number of psychometric instruments have been developed to assess this ability, but these tests do not take advantage of modern psychometric techniques, and rarely receive systematic validation. The present research addresses this gap in the literature by developing and validating a new test, the Computerised Adaptive Beat Alignment Test (CA-BAT), a variant of the Beat Alignment Test (BAT) that leverages recent advances in psychometric theory, including item response theory, adaptive testing, and automatic item generation. The test is constructed and validated in four empirical studies. The results support the reliability and validity of the CA-BAT for laboratory testing, but suggest that the test is not well-suited to online testing, owing to its reliance on fine perceptual discrimination.
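For readers unfamiliar with adaptive testing, the simplest analogue is a staircase: difficulty rises after a correct response and falls after an incorrect one, converging on the level where performance is near chance-plus-half. The CA-BAT itself instead selects items under an item response theory model; this sketch only conveys the adaptive principle:

```python
def staircase(responses, start=0.0, step=0.5):
    """One-up one-down staircase: raise the difficulty level after a
    correct response, lower it after an incorrect one. Returns the
    difficulty trajectory. IRT-based adaptive tests such as the CA-BAT
    instead pick the most informative item for the current ability
    estimate; this is only the simplest adaptive analogue."""
    level = start
    history = []
    for correct in responses:
        level += step if correct else -step
        history.append(level)
    return history

print(staircase([True, True, False, True, False]))
# → [0.5, 1.0, 0.5, 1.0, 0.5]
```

The oscillation around a stable level is the signature of convergence: the participant's threshold lies where correct and incorrect responses roughly balance.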


Subject(s)
Theoretical Models, Psychometrics/methods, Psychological Adaptation, Algorithms, Humans
19.
Sci Rep ; 7(1): 3618, 2017 06 15.
Article in English | MEDLINE | ID: mdl-28620165

ABSTRACT

Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test's viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.


Subject(s)
Discrimination (Psychology), Psychological Models, Music, Psychometrics, Adult, Cognition, Female, Humans, Male, Middle Aged, Monte Carlo Method, Psychometrics/methods, Young Adult