Results 1 - 20 of 33
1.
Front Psychol ; 15: 1269820, 2024.
Article in English | MEDLINE | ID: mdl-38659690

ABSTRACT

More than a century ago, Darwin proposed a putative role for music in sexual attraction (i.e., sex appeal), a hypothesis that has recently gained traction in the field of music psychology. In his writings, Darwin particularly emphasized the charming aspects of music. Across a broad range of cultures, music has a profound impact on humans' feelings, thoughts and behavior. Human mate choice is determined by the interplay of several factors. A number of studies have shown that music and musicality (i.e., the ability to produce and enjoy music) exert a positive influence on the evaluation of potential sexual partners. Here, we critically review the latest empirical literature on how and why music and musicality affect sexual attraction by considering the role of music-induced emotion and arousal in listeners as well as other socio-biological mechanisms. Following a short overview of current theories about the origins of musicality, we present studies that examine the impact of music and musicality on sexual attraction in different social settings. We differentiate between emotion-based influences related to the subjective experience of music as sound and effects associated with perceived musical ability or creativity in a potential partner. By integrating studies using various behavioral methods, we link current research strands that investigate how music influences sexual attraction and suggest promising avenues for future research.

2.
J Voice ; 2023 Apr 18.
Article in English | MEDLINE | ID: mdl-37080891

ABSTRACT

OBJECTIVES/HYPOTHESIS: Vibrato is a core aesthetic element in singing. It varies considerably by both genre and era. Though studied extensively in Western classical singing over the years, there is a dearth of studies on vibrato in contemporary commercial music. In addressing this research gap, the objective of this study was to find and investigate common crossover song material from the opera, operetta, and Schlager singing styles from the historical early 20th to the contemporary 21st century epochs. STUDY DESIGN/METHODS: A total of 51 commercial recordings of two songs, "Es muss was Wunderbares sein" by Ralph Benatzky, and "Die ganze Welt ist himmelblau" by Robert Stolz, from "The White Horse Inn" ("Im weißen Rößl") were collected from opera, operetta, and Schlager singers. Each sample was annotated using Praat and analyzed in a custom Matlab- and Python-based algorithmic approach of singing voice separation and sine wave fitting novel to vibrato research. RESULTS: With respect to vibrato rate and extent, the three most notable findings were that (1) fo and vibrato were inherently connected; (2) Schlager, as a historical aesthetic category, has unique vibrato characteristics, with higher overall rate and lower overall extent; and (3) fo and vibrato extent varied over time based on the historical or contemporary recording year for each genre. CONCLUSIONS: Though these results should be interpreted with caution due to the limited sample size, conducting such acoustical analysis is relevant for voice pedagogy. This study sheds light on the complexity of vocal vibrato production physiology and acoustics while providing insight into various aesthetic choices when performing music of different genres and stylistic time periods. In the age of crossover singing training and commercially available recordings, this investigation reveals important distinctions regarding vocal vibrato across genres and eras that bear beneficial implications for singers and teachers of singing.
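
As a rough illustration of the sine-wave fitting step described above, the following sketch fits a sinusoid to an fo contour to estimate vibrato rate (Hz) and extent (cents). The function names, parameter bounds, and synthetic contour are assumptions for the example, not the authors' Matlab/Python implementation.

```python
# Minimal vibrato-fitting sketch (illustrative; not the study's code).
# Assumes an fo contour in cents for one sustained note, sampled at fs Hz.
import numpy as np
from scipy.optimize import curve_fit

def vibrato_model(t, extent_cents, rate_hz, phase, offset):
    """Sinusoidal modulation of fo (cents) around a mean value."""
    return offset + extent_cents * np.sin(2 * np.pi * rate_hz * t + phase)

def fit_vibrato(fo_cents, fs=100.0):
    """Return (rate_hz, extent_cents) from a least-squares sine fit."""
    t = np.arange(len(fo_cents)) / fs
    p0 = [50.0, 5.5, 0.0, float(np.mean(fo_cents))]          # plausible start values
    bounds = ([0, 3, -np.pi, -np.inf], [400, 9, np.pi, np.inf])
    popt, _ = curve_fit(vibrato_model, t, fo_cents, p0=p0, bounds=bounds)
    return popt[1], abs(popt[0])

# Synthetic check: 6 Hz vibrato with +/-60 cents extent, 1 s at 100 Hz.
t = np.arange(0, 1.0, 0.01)
fo = 6000 + 60 * np.sin(2 * np.pi * 6.0 * t) + np.random.normal(0, 3, t.size)
print(fit_vibrato(fo))   # roughly (6.0, 60.0)
```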

3.
Front Psychol ; 13: 862468, 2022.
Article in English | MEDLINE | ID: mdl-36726505

ABSTRACT

Sensorimotor synchronization is a longstanding paradigm in the analysis of isochronous beat tapping. Assessing the finger tapping of complex rhythmic patterns is far less explored and considerably more complex to analyze. Hence, whereas several instruments to assess tempo or beat tapping ability exist, there is at present a shortage of paradigms and tools for the assessment of the ability to tap to complex rhythmic patterns. To redress this limitation, we developed a standardized rhythm tapping test comprising test items of different complexity. The items were taken from the rhythm and tempo subtests of the Profile of Music Perception Skills (PROMS), and administered as tapping items to 40 participants (20 women). Overall, results showed satisfactory psychometric properties for internal consistency and test-retest reliability. Convergent, discriminant, and criterion validity correlations fell in line with expectations. Specifically, performance in rhythm tapping was correlated more strongly with performance in rhythm perception than in tempo perception, whereas performance in tempo tapping was more strongly correlated with performance in tempo than rhythm perception. Both tapping tasks were only marginally correlated with non-temporal perception tasks. In combination, the tapping tasks explained variance in external indicators of musical proficiency above and beyond the perceptual PROMS tasks. This tool allows for the assessment of complex rhythmic tapping skills in about 15 min, thus providing a useful addition to existing music aptitude batteries.
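
As a minimal sketch of how tapping performance of this kind can be scored, the snippet below computes mean absolute asynchrony between produced taps and target onsets and estimates test-retest reliability as a correlation across two sessions; the scoring rule and the data are illustrative assumptions, not the published test's implementation.

```python
# Illustrative scoring sketch (not the PROMS tapping code).
import numpy as np

def mean_abs_asynchrony(taps_ms, targets_ms):
    """Match each target onset to the nearest tap; average |asynchrony| in ms."""
    taps = np.asarray(taps_ms, dtype=float)
    return float(np.mean([np.min(np.abs(taps - t)) for t in targets_ms]))

# Hypothetical data: one rhythm item, three participants, two sessions.
targets = [0, 500, 750, 1500, 2000]
session1 = {"p1": [10, 495, 760, 1490, 2010],
            "p2": [40, 530, 700, 1550, 1960],
            "p3": [5, 505, 745, 1505, 1995]}
session2 = {"p1": [15, 510, 755, 1485, 2020],
            "p2": [35, 540, 710, 1540, 1970],
            "p3": [0, 500, 750, 1500, 2000]}

scores1 = [mean_abs_asynchrony(session1[p], targets) for p in sorted(session1)]
scores2 = [mean_abs_asynchrony(session2[p], targets) for p in sorted(session2)]
print(np.corrcoef(scores1, scores2)[0, 1])   # crude test-retest estimate
```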

4.
Behav Brain Sci ; 44: e72, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588057

ABSTRACT

Music's efficacy as a credible signal and/or as a tool for social bonding piggybacks on a diverse set of biological and cognitive processes, implying different proximate mechanisms. It is likely this multiplicity of mechanisms that explains why it is so difficult to account for music's putative biological role(s), as well as its possible origins, by proposing a single adaptive function.


Subject(s)
Music , Humans
5.
Brain Struct Funct ; 225(7): 1997-2015, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32591927

ABSTRACT

The ability to generate complex hierarchical structures is a crucial component of human cognition which can be expressed in the musical domain in the form of hierarchical melodic relations. The neural underpinnings of this ability have been investigated by comparing the perception of well-formed melodies with unexpected sequences of tones. However, these contrasts do not target specifically the representation of rules generating hierarchical structure. Here, we present a novel paradigm in which identical melodic sequences are generated in four steps, according to three different rules: The Recursive rule, generating new hierarchical levels at each step; The Iterative rule, adding tones within a fixed hierarchical level without generating new levels; and a control rule that simply repeats the third step. Using fMRI, we compared brain activity across these rules when participants are imagining the fourth step after listening to the third (generation phase), and when participants listened to a fourth step (test sound phase), either well-formed or a violation. We found that, in comparison with Repetition and Iteration, imagining the fourth step using the Recursive rule activated the superior temporal gyrus (STG). During the test sound phase, we found fronto-temporo-parietal activity and hippocampal de-activation when processing violations, but no differences between rules. STG activation during the generation phase suggests that generating new hierarchical levels from previous steps might rely on retrieving appropriate melodic hierarchy schemas. Previous findings highlighting the role of hippocampus and inferior frontal gyrus may reflect processing of unexpected melodic sequences, rather than hierarchy generation per se.
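
The contrast between the three generative rules can be made concrete with a toy sketch: a recursive step nests a transposed copy of every element (adding a hierarchical level), whereas an iterative step appends a tone within the existing level. The pitches, transposition interval, and embedding scheme below are illustrative assumptions, not the study's stimuli.

```python
# Toy contrast between recursive and iterative melodic generation (illustrative).
def transpose(x, interval):
    """Transpose a MIDI pitch or a nested list of pitches."""
    return [transpose(e, interval) for e in x] if isinstance(x, list) else x + interval

def recursive_step(seq, interval=4):
    """Replace each element by [element, transposed copy]: one level deeper."""
    return [[elem, transpose(elem, interval)] for elem in seq]

def iterative_step(seq, interval=4):
    """Append one transposed tone at the same (flat) level."""
    return seq + [transpose(seq[-1], interval)]

seq_rec, seq_it = [60], [60]
for _ in range(3):                      # three generation steps
    seq_rec = recursive_step(seq_rec)
    seq_it = iterative_step(seq_it)

print(seq_rec)   # nesting deepens at every step
print(seq_it)    # stays flat: [60, 64, 68, 72]
```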


Subject(s)
Auditory Perception/physiology , Brain/diagnostic imaging , Music , Adult , Brain/physiology , Brain Mapping , Cognition/physiology , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
6.
PLoS One ; 12(9): e0183531, 2017.
Article in English | MEDLINE | ID: mdl-28892486

ABSTRACT

Several theories about the origins of music have emphasized its biological and social functions, including in courtship. Music may act as a courtship display due to its capacity to vary in complexity and emotional content. Support for music's reproductive function comes from the recent finding that only women in the fertile phase of the reproductive cycle prefer composers of complex melodies to composers of simple ones as short-term sexual partners, which is also in line with the ovulatory shift hypothesis. However, the precise mechanisms by which music may influence sexual attraction are unknown, specifically how music may interact with visual attractiveness cues and affect perception and behaviour in both genders. Using a crossmodal priming paradigm, we examined whether listening to music influences ratings of facial attractiveness and dating desirability of opposite-sex faces. We also tested whether misattribution of arousal or pleasantness underlies these effects, and explored whether sex differences and menstrual cycle phase may be moderators. Our sample comprised 64 women in the fertile or infertile phase (no hormonal contraception use) and 32 men, carefully matched for mood, relationship status, and musical preferences. Musical primes (25 s) varied in arousal and pleasantness, and targets were photos of faces with neutral expressions (2 s). Group-wise analyses indicated that women, but not men, gave significantly higher ratings of facial attractiveness and dating desirability after having listened to music than in the silent control condition. High-arousing, complex music yielded the largest effects, suggesting that music may affect human courtship behaviour through induced arousal, which calls for further studies on the mechanisms by which music affects sexual attraction in real-life social contexts.


Subject(s)
Arousal , Courtship , Music/psychology , Adult , Affect , Analysis of Variance , Facial Expression , Female , Humans , Male , Young Adult
7.
Cognition ; 161: 31-45, 2017 04.
Article in English | MEDLINE | ID: mdl-28103526

ABSTRACT

The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somehow independent from melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes.


Subject(s)
Auditory Perception , Cognition , Fractals , Music/psychology , Acoustic Stimulation , Adult , Female , Humans , Learning , Male , Memory , Young Adult
8.
Acta Psychol (Amst) ; 166: 54-63, 2016 May.
Article in English | MEDLINE | ID: mdl-27058166

ABSTRACT

The context in which a stimulus is presented shapes the way it is processed. This effect has been studied extensively in the field of visual perception. Our understanding of how context affects the processing of auditory stimuli is, however, rather limited. Western music is primarily built on melodies (succession of pitches) typically accompanied by chords (harmonic context), which provides a natural template for the study of context effects in auditory processing. Here, we investigated whether pitch class equivalence judgments of tones are affected by the harmonic context within which the target tones are embedded. Nineteen musicians and 19 non-musicians completed a change detection task in which they were asked to determine whether two successively presented target tones, heard either in isolation or with a chordal accompaniment (same or different chords), belonged to the same pitch class. Both musicians and non-musicians were most accurate when the chords remained the same, less so in the absence of chordal accompaniment, and least when the chords differed between both target tones. Further analysis investigating possible mechanisms underpinning these effects of harmonic context on task performance revealed that both a change in gestalt (change in either chord or pitch class), as well as incongruency between change in target tone pitch class and change in chords, led to reduced accuracy and longer reaction times. Our results demonstrate that, similarly to visual processing, auditory processing is influenced by gestalt and congruency effects.
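
The equivalence judgment at the heart of the task reduces to octave equivalence: two tones share a pitch class when their MIDI note numbers are congruent modulo 12. The small sketch below formalizes that rule; the trial-scoring helper is an illustrative assumption, not the study's code.

```python
# Pitch-class equivalence, the judgment the change-detection task requires.
def same_pitch_class(midi_a: int, midi_b: int) -> bool:
    """True when the two tones differ by a whole number of octaves."""
    return (midi_a - midi_b) % 12 == 0

assert same_pitch_class(60, 72)        # C4 vs C5: same pitch class
assert not same_pitch_class(60, 61)    # C4 vs C#4: different

def trial_correct(response_same: bool, target_a: int, target_b: int) -> bool:
    """A response is correct iff it matches pitch-class identity of the targets,
    regardless of whether the accompanying chords stayed the same or changed."""
    return response_same == same_pitch_class(target_a, target_b)

print(trial_correct(True, 60, 72), trial_correct(True, 60, 67))
```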


Subject(s)
Acoustic Stimulation/methods , Gestalt Theory , Music , Pitch Perception , Reaction Time , Adult , Female , Humans , Judgment , Male , Task Performance and Analysis , Young Adult
9.
Ethology ; 122(4): 329-342, 2016 04.
Article in English | MEDLINE | ID: mdl-27065507

ABSTRACT

Determining whether a species' vocal communication system is graded or discrete requires definition of its vocal repertoire. In this context, research on domestic pig (Sus scrofa domesticus) vocalizations, for example, has led to significant advances in our understanding of communicative functions. Despite their close relation to domestic pigs, little is known about wild boar (Sus scrofa) vocalizations. The few existing studies, conducted in the 1970s, relied on visual inspections of spectrograms to quantify acoustic parameters and lacked statistical analysis. Here, we use objective signal processing techniques and advanced statistical approaches to classify 616 calls recorded from semi-free ranging animals. Based on four spectral and temporal acoustic parameters-quartile Q25, duration, spectral flux, and spectral flatness-extracted from a multivariate analysis, we refine and extend the conclusions drawn from previous work and present a statistically validated classification of the wild boar vocal repertoire into four call types: grunts, grunt-squeals, squeals, and trumpets. While the majority of calls could be sorted into these categories using objective criteria, we also found evidence supporting a graded interpretation of some wild boar vocalizations as acoustically continuous, with the extremes representing discrete call types. The use of objective criteria based on modern techniques and statistics in respect to acoustic continuity advances our understanding of vocal variation. Integrating our findings with recent studies on domestic pig vocal behavior and emotions, we emphasize the importance of grunt-squeals for acoustic approaches to animal welfare and underline the need of further research investigating the role of domestication on animal vocal communication.
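
Three of the four parameters named above can be computed directly from short-time magnitude spectra; the sketch below shows one plausible formulation (frame length, hop size, and the synthetic grunt-like signal are assumptions, not the paper's settings).

```python
# Sketch of Q25, spectral flux, and spectral flatness from framed spectra.
import numpy as np

def frame_spectra(x, fs, frame=1024, hop=512):
    frames = [x[i:i + frame] * np.hanning(frame)
              for i in range(0, len(x) - frame, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)), np.fft.rfftfreq(frame, 1 / fs)

def q25(mag, freqs):
    """Frequency below which 25% of spectral energy lies, averaged over frames."""
    energy = mag ** 2
    cum = np.cumsum(energy, axis=1) / np.sum(energy, axis=1, keepdims=True)
    return float(np.mean(freqs[np.argmax(cum >= 0.25, axis=1)]))

def spectral_flux(mag):
    """Mean frame-to-frame change of the L2-normalized magnitude spectrum."""
    norm = mag / (np.linalg.norm(mag, axis=1, keepdims=True) + 1e-12)
    return float(np.mean(np.sqrt(np.sum(np.diff(norm, axis=0) ** 2, axis=1))))

def spectral_flatness(mag):
    """Geometric over arithmetic mean of the power spectrum (0=tonal, 1=noise-like)."""
    power = mag ** 2 + 1e-12
    return float(np.mean(np.exp(np.mean(np.log(power), axis=1)) / np.mean(power, axis=1)))

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
grunt_like = np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.randn(t.size)
mag, freqs = frame_spectra(grunt_like, fs)
print(q25(mag, freqs), spectral_flux(mag), spectral_flatness(mag))
```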

10.
J Exp Psychol Hum Percept Perform ; 42(4): 594-609, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26594881

ABSTRACT

This research explored the relations between the predictability of musical structure, expressive timing in performance, and listeners' perceived musical tension. Studies analyzing the influence of expressive timing on listeners' affective responses have been constrained by the fact that, in most pieces, the notated durations limit performers' interpretive freedom. To circumvent this issue, we focused on the unmeasured prelude, a semi-improvisatory genre without notated durations. In Experiment 1, 12 professional harpsichordists recorded an unmeasured prelude on a harpsichord equipped with a MIDI console. Melodic expectation was assessed using a probabilistic model (IDyOM [Information Dynamics of Music]) whose expectations have been previously shown to match closely those of human listeners. Performance timing information was extracted from the MIDI data using a score-performance matching algorithm. Time-series analyses showed that, in a piece with unspecified note durations, the predictability of melodic structure measurably influenced tempo fluctuations in performance. In Experiment 2, another 10 harpsichordists, 20 nonharpsichordist musicians, and 20 nonmusicians listened to the recordings from Experiment 1 and rated the perceived tension continuously. Granger causality analyses were conducted to investigate predictive relations among melodic expectation, expressive timing, and perceived tension. Although melodic expectation, as modeled by IDyOM, modestly predicted perceived tension for all participant groups, neither of its components, information content or entropy, was Granger causal. In contrast, expressive timing was a strong predictor and was Granger causal. However, because melodic expectation was also predictive of expressive timing, our results outline a complete chain of influence from predictability of melodic structure via expressive performance timing to perceived musical tension.
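
One of the directional tests reported above can be sketched with statsmodels: does expressive timing help predict perceived tension beyond tension's own history? The simulated series, column ordering, and maximum lag below are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative Granger-causality test: timing -> tension.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 300
timing = rng.normal(size=n)                  # e.g., local tempo deviation per event
tension = np.zeros(n)
for t in range(2, n):                        # tension driven by lagged timing + noise
    tension[t] = 0.5 * tension[t - 1] + 0.4 * timing[t - 2] + rng.normal(scale=0.5)

# The function expects a 2-column array; the SECOND column is the candidate cause.
data = np.column_stack([tension, timing])
results = grangercausalitytests(data, maxlag=3, verbose=False)
for lag, res in results.items():
    fstat, pval = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {fstat:.2f}, p = {pval:.4f}")
```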


Subject(s)
Auditory Perception , Music/psychology , Time Perception , Adult , Female , Humans , Male , Middle Aged , Young Adult
11.
Front Hum Neurosci ; 9: 619, 2015.
Article in English | MEDLINE | ID: mdl-26617511

ABSTRACT

Pupillary responses are a well-known indicator of emotional arousal but have not yet been systematically investigated in response to music. Here, we measured pupillary dilations evoked by short musical excerpts normalized for intensity and selected for their stylistic uniformity. Thirty participants (15 females) provided subjective ratings of music-induced felt arousal, tension, pleasantness, and familiarity for 80 classical music excerpts. The pupillary responses evoked by these excerpts were measured in another thirty participants (15 females). We probed the role of listener-specific characteristics such as mood, stress reactivity, self-reported role of music in life, liking for the selected excerpts, as well as of subjective responses to music, in pupillary responses. Linear mixed model analyses showed that a greater role of music in life was associated with larger dilations, and that larger dilations were also predicted for excerpts rated as more arousing or tense. However, an interaction between arousal and liking for the excerpts suggested that pupillary responses were modulated less strongly by arousal when the excerpts were particularly liked. An analogous interaction was observed between tension and liking. Additionally, males exhibited larger dilations than females. Overall, these findings suggest a complex interplay between bottom-up and top-down influences on pupillary responses to music.
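
A linear mixed model of the kind described, with random intercepts per participant and an arousal-by-liking interaction among the fixed effects, can be written in a few lines with statsmodels; the column names and simulated data frame are assumptions, not the study's dataset.

```python
# Sketch of a random-intercept mixed model for pupil dilation (illustrative data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_excerpt = 30, 80
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_subj), n_excerpt),
    "arousal": np.tile(rng.normal(size=n_excerpt), n_subj),
    "liking": np.tile(rng.normal(size=n_excerpt), n_subj),
})
subj_offset = rng.normal(scale=0.3, size=n_subj)[df["participant"]]
df["pupil_dilation"] = (0.2 * df["arousal"] - 0.1 * df["arousal"] * df["liking"]
                        + subj_offset + rng.normal(scale=0.5, size=len(df)))

fit = smf.mixedlm("pupil_dilation ~ arousal * liking", df,
                  groups=df["participant"]).fit()
print(fit.summary())
```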

12.
Curr Biol ; 25(19): R819-20, 2015 Oct 05.
Article in English | MEDLINE | ID: mdl-26439331
13.
Neuropsychologia ; 78: 207-20, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26455803

ABSTRACT

Congenital amusia is a neurodevelopmental disorder characterized by impaired pitch processing. Although pitch simultaneities are among the fundamental building blocks of Western tonal music, affective responses to simultaneities such as isolated dyads varying in consonance/dissonance or chords varying in major/minor quality have rarely been studied in amusic individuals. Thirteen amusics and thirteen matched controls enculturated to Western tonal music provided pleasantness ratings of sine-tone dyads and complex-tone dyads in piano timbre as well as perceived happiness/sadness ratings of sine-tone triads and complex-tone triads in piano timbre. Acoustical analyses of roughness and harmonicity were conducted to determine whether similar acoustic information contributed to these evaluations in amusics and controls. Amusic individuals' pleasantness ratings indicated sensitivity to consonance and dissonance for complex-tone (piano timbre) dyads and, to a lesser degree, sine-tone dyads, whereas controls showed sensitivity when listening to both tone types. Furthermore, amusic individuals showed some sensitivity to the happiness-major association in the complex-tone condition, but not in the sine-tone condition. Controls rated major chords as happier than minor chords in both tone types. Linear regression analyses revealed that affective ratings of dyads and triads by amusic individuals were predicted by roughness but not harmonicity, whereas affective ratings by controls were predicted by both roughness and harmonicity. We discuss affective sensitivity in congenital amusia in view of theories of affective responses to isolated chords in Western listeners.
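
The regression step reported above, predicting affective ratings from roughness and harmonicity, amounts to an ordinary least-squares fit with two predictors; the data frame below is invented for illustration only.

```python
# Minimal OLS sketch: pleasantness ratings ~ roughness + harmonicity (toy data).
import pandas as pd
import statsmodels.formula.api as smf

dyads = pd.DataFrame({
    "pleasantness": [6.1, 5.8, 3.2, 2.9, 5.5, 3.8],     # hypothetical group means
    "roughness":    [0.05, 0.08, 0.40, 0.45, 0.10, 0.30],
    "harmonicity":  [0.90, 0.85, 0.35, 0.30, 0.80, 0.50],
})

fit = smf.ols("pleasantness ~ roughness + harmonicity", data=dyads).fit()
print(fit.params)       # which acoustic predictor carries the ratings?
print(fit.rsquared)
```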


Subject(s)
Auditory Perceptual Disorders/psychology , Emotions , Music , Pitch Perception , Acoustic Stimulation , Female , Humans , Male , Middle Aged , Neuropsychological Tests , Signal Detection, Psychological
15.
Philos Trans R Soc Lond B Biol Sci ; 370(1664): 20140092, 2015 Mar 19.
Article in English | MEDLINE | ID: mdl-25646515

ABSTRACT

Advances in molecular technologies make it possible to pinpoint genomic factors associated with complex human traits. For cognition and behaviour, identification of underlying genes provides new entry points for deciphering the key neurobiological pathways. In the past decade, the search for genetic correlates of musicality has gained traction. Reports have documented familial clustering for different extremes of ability, including amusia and absolute pitch (AP), with twin studies demonstrating high heritability for some music-related skills, such as pitch perception. Certain chromosomal regions have been linked to AP and musical aptitude, while individual candidate genes have been investigated in relation to aptitude and creativity. Most recently, researchers in this field started performing genome-wide association scans. Thus far, studies have been hampered by relatively small sample sizes and limitations in defining components of musicality, including an emphasis on skills that can only be assessed in trained musicians. With opportunities to administer standardized aptitude tests online, systematic large-scale assessment of musical abilities is now feasible, an important step towards high-powered genome-wide screens. Here, we offer a synthesis of existing literatures and outline concrete suggestions for the development of comprehensive operational tools for the analysis of musical phenotypes.


Subject(s)
Auditory Perceptual Disorders/genetics , Music , Genome-Wide Association Study , Humans , Research
16.
Proc Natl Acad Sci U S A ; 111(46): 16616-21, 2014 Nov 18.
Article in English | MEDLINE | ID: mdl-25368163

ABSTRACT

Many human musical scales, including the diatonic major scale prevalent in Western music, are built partially or entirely from intervals (ratios between adjacent frequencies) corresponding to small-integer proportions drawn from the harmonic series. Scientists have long debated the extent to which principles of scale generation in human music are biologically or culturally determined. Data from animal "song" may provide new insights into this discussion. Here, by examining pitch relationships using both a simple linear regression model and a Bayesian generative model, we show that most songs of the hermit thrush (Catharus guttatus) favor simple frequency ratios derived from the harmonic (or overtone) series. Furthermore, we show that this frequency selection results not from physical constraints governing peripheral production mechanisms but from active selection at a central level. These data provide the most rigorous empirical evidence to date of a bird song that makes use of the same mathematical principles that underlie Western and many non-Western musical scales, demonstrating surprising convergence between human and animal "song cultures." Although there is no evidence that the songs of most bird species follow the overtone series, our findings add to a small but growing body of research showing that a preference for small-integer frequency ratios is not unique to humans. These findings thus have important implications for current debates about the origins of human musical systems and may call for a reevaluation of existing theories of musical consonance based on specific human vocal characteristics.
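
The core measurement behind this finding can be sketched simply: for each interval between song pitches, find the nearest ratio of small integers and express the deviation in cents. The ratio limit and the example frequencies below are assumptions for illustration, not the paper's regression or Bayesian model.

```python
# How close are observed frequency ratios to small-integer ratios? (illustrative)
import numpy as np
from fractions import Fraction

def nearest_small_integer_ratio(ratio, max_den=10):
    """Closest p/q with denominator <= max_den, plus deviation from it in cents."""
    frac = Fraction(ratio).limit_denominator(max_den)
    return frac, 1200 * np.log2(ratio / float(frac))

# Hypothetical adjacent-note frequencies (Hz) from one song phrase.
freqs = [2200.0, 3290.0, 2205.0, 2755.0]
for f1, f2 in zip(freqs[:-1], freqs[1:]):
    ratio = max(f1, f2) / min(f1, f2)
    frac, cents_off = nearest_small_integer_ratio(ratio)
    print(f"{ratio:.3f} ~ {frac} ({cents_off:+.1f} cents)")
```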


Subject(s)
Music , Songbirds/physiology , Vocalization, Animal/physiology , Acoustics , Animals , Bayes Theorem , Humans , Least-Squares Analysis , Models, Theoretical , Pitch Perception , Pleasure
19.
PLoS One ; 9(7): e102103, 2014.
Article in English | MEDLINE | ID: mdl-24999623

ABSTRACT

Exposure to repetitive drumming combined with instructions for shamanic journeying has been associated with physiological and therapeutic effects, such as an increase in salivary immunoglobulin A. In order to assess whether the combination of repetitive drumming and shamanic instructions is specifically associated with these effects, we compared the effect of listening to either repetitive drumming or instrumental meditation music for 15 minutes on salivary cortisol concentration and on self-reported physiological and psychological states. For each musical style, two groups of participants were exposed to two conditions: instructions for shamanic journeying or relaxation instructions. A total of 39 participants (24 females) inexperienced in shamanic journeying completed the experiment. Salivary cortisol concentrations were measured before and after exposure to music. In addition, participants filled out a mood questionnaire before and after the experiment and completed a post experiment questionnaire on their experiences. A significant decrease in the concentration in salivary cortisol was observed across all musical styles and instructions, indicating that exposure to 15 minutes of either repetitive drumming or instrumental meditation music, while lying down, was sufficient to induce a decrease in cortisol levels. However, no differences were observed across conditions. Significant differences in reported emotional states and subjective experiences were observed between the groups. Notably, participants exposed to repetitive drumming combined with shamanic instructions reported experiencing heaviness, decreased heart rate, and dreamlike experiences significantly more often than participants exposed to repetitive drumming combined with relaxation instructions. Our findings suggest that the subjective effects specifically attributed to repetitive drumming and shamanic journeying may not be reflected in differential endocrine responses.
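
The two comparisons implied by this design, an overall pre/post decrease and a test for condition differences in the size of that decrease, can be sketched as follows; the cortisol values are invented for illustration.

```python
# Pre/post cortisol sketch: paired test overall, independent test on change scores.
import numpy as np
from scipy.stats import ttest_rel, ttest_ind

rng = np.random.default_rng(2)
pre = rng.normal(10.0, 2.0, size=39)            # nmol/L, hypothetical values
post = pre - rng.normal(1.5, 1.0, size=39)      # overall decrease after listening
condition = np.array([0] * 20 + [1] * 19)       # e.g., shamanic vs relaxation

print(ttest_rel(pre, post))                     # did cortisol drop overall?
change = pre - post
print(ttest_ind(change[condition == 0], change[condition == 1]))  # group difference?
```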


Subject(s)
Hydrocortisone/analysis , Meditation , Mind-Body Therapies , Music Therapy/methods , Saliva/chemistry , Stress, Psychological/therapy , Adult , Aged , Anxiety/therapy , Emotions/physiology , Female , Humans , Male , Middle Aged , Pilot Projects , Relaxation , Young Adult
20.
Front Psychol ; 5: 141, 2014.
Article in English | MEDLINE | ID: mdl-24605104

ABSTRACT

Can listeners recognize the individual characteristics of unfamiliar performers playing two different musical pieces on the harpsichord? Six professional harpsichordists, three prize-winners and three non prize-winners, made two recordings of two pieces from the Baroque period (a variation on a Partita by Frescobaldi and a rondo by François Couperin) on an instrument equipped with a MIDI console. Short (8 to 15 s) excerpts from these 24 recordings were subsequently used in a sorting task in which 20 musicians and 20 non-musicians, balanced for gender, listened to these excerpts and grouped together those that they thought had been played by the same performer. Twenty-six participants, including 17 musicians and nine non-musicians, performed significantly better than chance, demonstrating that the excerpts contained sufficient information to enable listeners to recognize the individual characteristics of the performers. The grouping accuracy of musicians was significantly higher than that observed for non-musicians. No significant difference in grouping accuracy was found between prize-winning performers and non-winners or between genders. However, the grouping accuracy was significantly higher for the rondo than for the variation, suggesting that the features of the two pieces differed in a way that affected the listeners' ability to sort them accurately. Furthermore, only musicians performed above chance level when matching variation excerpts with rondo excerpts, suggesting that accurately assigning recordings of different pieces to their performer may require musical training. Comparisons between the MIDI performance data and the results of the sorting task revealed that tempo and, to a lesser extent, note onset asynchrony were the most important predictors of the perceived distance between performers, and that listeners appeared to rely mostly on a holistic percept of the excerpts rather than on a comparison of note-by-note expressive patterns.
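
The link between MIDI-derived tempo and perceived distance between performers can be sketched as a correlation between two pairwise distance matrices; the values below are invented, and a proper analysis would use a permutation (Mantel-type) test rather than a plain correlation.

```python
# Pairwise |tempo difference| vs perceived distance between performers (toy data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_perf = 6
tempi = np.array([92.0, 96.5, 88.0, 101.0, 95.0, 90.5])   # mean bpm per performer
tempo_dist = np.abs(tempi[:, None] - tempi[None, :])

# Hypothetical perceived-distance matrix (e.g., 1 - co-grouping rate across listeners).
perceived = 0.02 * tempo_dist + rng.uniform(0, 0.2, (n_perf, n_perf))
perceived = (perceived + perceived.T) / 2
np.fill_diagonal(perceived, 0)

iu = np.triu_indices(n_perf, k=1)                          # unique pairs only
rho, p = spearmanr(tempo_dist[iu], perceived[iu])
print(f"rho = {rho:.2f}, p = {p:.3f}")
```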
