Results 1 - 20 of 33
1.
Behav Brain Sci ; 44: e72, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588057

ABSTRACT

Music's efficacy as a credible signal and/or as a tool for social bonding piggybacks on a diverse set of biological and cognitive processes, implying different proximate mechanisms. It is likely this multiplicity of mechanisms that explains why it is so difficult to account for music's putative biological role(s), as well as its possible origins, by proposing a single adaptive function.


Subjects
Music; Humans
2.
Proc Natl Acad Sci U S A ; 111(46): 16616-21, 2014 Nov 18.
Article in English | MEDLINE | ID: mdl-25368163

ABSTRACT

Many human musical scales, including the diatonic major scale prevalent in Western music, are built partially or entirely from intervals (ratios between adjacent frequencies) corresponding to small-integer proportions drawn from the harmonic series. Scientists have long debated the extent to which principles of scale generation in human music are biologically or culturally determined. Data from animal "song" may provide new insights into this discussion. Here, by examining pitch relationships using both a simple linear regression model and a Bayesian generative model, we show that most songs of the hermit thrush (Catharus guttatus) favor simple frequency ratios derived from the harmonic (or overtone) series. Furthermore, we show that this frequency selection results not from physical constraints governing peripheral production mechanisms but from active selection at a central level. These data provide the most rigorous empirical evidence to date of a bird song that makes use of the same mathematical principles that underlie Western and many non-Western musical scales, demonstrating surprising convergence between human and animal "song cultures." Although there is no evidence that the songs of most bird species follow the overtone series, our findings add to a small but growing body of research showing that a preference for small-integer frequency ratios is not unique to humans. These findings thus have important implications for current debates about the origins of human musical systems and may call for a reevaluation of existing theories of musical consonance based on specific human vocal characteristics.
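The notion of small-integer frequency ratios can be made concrete: given the ratio between two adjacent note frequencies, find the nearest ratio p/q from the harmonic series and the deviation in cents. A minimal stdlib-only sketch, illustrative only (the study's actual analyses are a linear regression and a Bayesian generative model):

```python
from math import log2

def nearest_simple_ratio(ratio, max_den=8):
    """Return (p, q, cents) for the small-integer ratio p/q
    (q <= max_den) closest to `ratio`, with the deviation
    expressed in cents (1200 * log2 of the quotient)."""
    best = None
    for q in range(1, max_den + 1):
        p = round(ratio * q)
        if p < 1:
            continue
        cents = 1200 * log2(ratio / (p / q))
        if best is None or abs(cents) < abs(best[2]):
            best = (p, q, cents)
    return best

# A perfect fifth (660 Hz over 440 Hz) matches 3:2 exactly:
p, q, cents = nearest_simple_ratio(660 / 440)
```

Applied to measured note pairs from a song, the `cents` value quantifies how closely each interval approximates the harmonic series.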


Subjects
Music; Songbirds/physiology; Vocalization, Animal/physiology; Acoustics; Animals; Bayes Theorem; Humans; Least-Squares Analysis; Models, Theoretical; Pitch Perception; Pleasure
3.
Front Psychol ; 15: 1269820, 2024.
Article in English | MEDLINE | ID: mdl-38659690

ABSTRACT

More than a century ago, Darwin proposed a putative role for music in sexual attraction (i.e., sex appeal), a hypothesis that has recently gained traction in the field of music psychology. In his writings, Darwin particularly emphasized the charming aspects of music. Across a broad range of cultures, music has a profound impact on humans' feelings, thoughts and behavior. Human mate choice is determined by the interplay of several factors. A number of studies have shown that music and musicality (i.e., the ability to produce and enjoy music) exert a positive influence on the evaluation of potential sexual partners. Here, we critically review the latest empirical literature on how and why music and musicality affect sexual attraction by considering the role of music-induced emotion and arousal in listeners as well as other socio-biological mechanisms. Following a short overview of current theories about the origins of musicality, we present studies that examine the impact of music and musicality on sexual attraction in different social settings. We differentiate between emotion-based influences related to the subjective experience of music as sound and effects associated with perceived musical ability or creativity in a potential partner. By integrating studies using various behavioral methods, we link current research strands that investigate how music influences sexual attraction and suggest promising avenues for future research.

4.
BMC Evol Biol ; 13: 134, 2013 Jul 01.
Article in English | MEDLINE | ID: mdl-23815403

ABSTRACT

BACKGROUND: Anuran vocalizations, especially their advertisement calls, are largely species-specific and can be used to identify taxonomic affiliations. Because anurans are not vocal learners, their vocalizations are generally assumed to have a strong genetic component. This suggests that the degree of similarity between advertisement calls may be related to large-scale phylogenetic relationships. To test this hypothesis, advertisement calls from 90 species belonging to four large clades (Bufo, Hylinae, Leptodactylus, and Rana) were analyzed. Phylogenetic distances were estimated based on the DNA sequences of the 12S mitochondrial ribosomal RNA gene, and, for a subset of 49 species, on the rhodopsin gene. Mean values for five acoustic parameters (coefficient of variation of root-mean-square amplitude, dominant frequency, spectral flux, spectral irregularity, and spectral flatness) were computed for each species. We then tested for phylogenetic signal on the body-size-corrected residuals of these five parameters, using three statistical tests (Moran's I, Mantel, and Blomberg's K) and three models of genetic distance (pairwise distances, Abouheif's proximities, and the variance-covariance matrix derived from the phylogenetic tree). RESULTS: A significant phylogenetic signal was detected for most acoustic parameters on the 12S dataset, across statistical tests and genetic distance models, both for the entire sample of 90 species and within clades in several cases. A further analysis on a subset of 49 species using genetic distances derived from rhodopsin and from 12S broadly confirmed the results obtained on the larger sample, indicating that the phylogenetic signals observed in these acoustic parameters can be detected using a variety of genetic distance models derived either from a variable mitochondrial sequence or from a conserved nuclear gene. 
CONCLUSIONS: We found a robust relationship, in a large number of species, between anuran phylogenetic relatedness and acoustic similarity in the advertisement calls in a taxon with no evidence for vocal learning, even after correcting for the effect of body size. This finding, covering a broad sample of species whose vocalizations are fairly diverse, indicates that the intense selection on certain call characteristics observed in many anurans does not eliminate all acoustic indicators of relatedness. Our approach could potentially be applied to other vocal taxa.
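Of the three statistics used, the Mantel test is the most self-contained to sketch: correlate the upper triangles of an acoustic and a genetic distance matrix, then assess significance by permuting taxon labels. A toy illustration under those assumptions, not the study's actual pipeline:

```python
import random

def _upper(matrix, order):
    """Upper-triangle entries of a symmetric matrix, rows/cols
    reindexed by `order` (a permutation of taxon indices)."""
    n = len(order)
    return [matrix[order[i]][order[j]]
            for i in range(n) for j in range(i + 1, n)]

def _pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def mantel(d_acoustic, d_genetic, n_perm=999, seed=42):
    """Mantel test between two symmetric distance matrices (nested
    lists). Returns (r, p); p comes from a label-permutation null."""
    rng = random.Random(seed)
    ident = list(range(len(d_acoustic)))
    r_obs = _pearson(_upper(d_acoustic, ident), _upper(d_genetic, ident))
    hits = 0
    for _ in range(n_perm):
        perm = ident[:]
        rng.shuffle(perm)
        if _pearson(_upper(d_acoustic, ident),
                    _upper(d_genetic, perm)) >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)
```

A significant positive r, as here, indicates that acoustically similar taxa also tend to be genetically close.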


Subjects
Anura/classification; Anura/physiology; Phylogeny; Vocalization, Animal; Amino Acid Sequence; Amphibian Proteins/genetics; Animals; Anura/genetics; Biological Evolution; DNA, Mitochondrial/genetics; Databases, Nucleic Acid; Molecular Sequence Data; Rhodopsin/genetics; Species Specificity
5.
J Acoust Soc Am ; 133(1): 547-59, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23297926

ABSTRACT

The vocalizations of anurans are innate in structure and may therefore contain indicators of phylogenetic history. Thus, advertisement calls of species which are more closely related phylogenetically are predicted to be more similar than those of distant species. This hypothesis was evaluated by comparing several widely used machine-learning algorithms. Recordings of advertisement calls from 142 species belonging to four genera were analyzed. A logistic regression model, using mean values for dominant frequency, coefficient of variation of root-mean square energy, and spectral flux, correctly classified advertisement calls with regard to genus with an accuracy above 70%. Similar accuracy rates were obtained using these parameters with a support vector machine model, a K-nearest neighbor algorithm, and a multivariate Gaussian distribution classifier, whereas a Gaussian mixture model performed slightly worse. In contrast, models based on mel-frequency cepstral coefficients did not fare as well. Comparable accuracy levels were obtained on out-of-sample recordings from 52 of the 142 original species. The results suggest that a combination of low-level acoustic attributes is sufficient to discriminate efficiently between the vocalizations of these four genera, thus supporting the initial premise and validating the use of high-throughput algorithms on animal vocalizations to evaluate phylogenetic hypotheses.
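The underlying idea, that a handful of low-level acoustic features separate genera, can be illustrated with a nearest-centroid classifier on synthetic feature vectors (dominant frequency, RMS-energy CV, spectral flux). This is a deliberately simplified stand-in for the logistic-regression and SVM models used in the study, with made-up feature values:

```python
import random
import statistics

def fit_centroids(X, y):
    """Mean feature vector per class label."""
    by_class = {}
    for features, label in zip(X, y):
        by_class.setdefault(label, []).append(features)
    return {label: [statistics.fmean(col) for col in zip(*rows)]
            for label, rows in by_class.items()}

def predict(centroids, features):
    """Label of the closest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(features, centroids[c])))

# Synthetic "calls": per-genus feature means plus Gaussian noise
rng = random.Random(7)
means = {"Bufo": (1.2, 0.30, 0.10), "Rana": (0.9, 0.55, 0.25),
         "Hylinae": (2.8, 0.40, 0.45)}
X, y = [], []
for genus, mu in means.items():
    for _ in range(30):
        X.append([m + rng.gauss(0, 0.05) for m in mu])
        y.append(genus)
centroids = fit_centroids(X[::2], y[::2])   # train on every other call
accuracy = statistics.fmean(predict(centroids, f) == label
                            for f, label in zip(X[1::2], y[1::2]))
```

With well-separated synthetic clusters the held-out accuracy is near 1.0; on real calls, overlap between genera is what pushes accuracy down toward the ~70% reported.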


Subjects
Algorithms; Anura/physiology; Models, Statistical; Signal Processing, Computer-Assisted; Vocalization, Animal; Animals; Anura/classification; Logistic Models; Male; Multivariate Analysis; Phylogeny; Reproducibility of Results; Sound Spectrography; Species Specificity; Support Vector Machine; Time Factors
6.
Sensors (Basel) ; 13(8): 9790-820, 2013 Jul 31.
Article in English | MEDLINE | ID: mdl-23912427

ABSTRACT

The possibility of achieving experimentally controlled, non-vocal acoustic production in non-human primates is a key step toward testing a number of hypotheses on primate behavior and cognition. However, no device or solution is currently available; the use of sensors in non-human animals is almost exclusively devoted to applications in the food industry and animal surveillance. Specifically, no device exists that simultaneously allows: (i) spontaneous production of sound or music by non-human animals via object manipulation; (ii) systematic recording of the data sensed from these movements; and (iii) remote-controlled alteration of the acoustic feedback properties of the object. We present two prototypes developed for use with chimpanzees (Pan troglodytes) which, while fulfilling the aforementioned requirements, allow sounds to be arbitrarily associated with physical object movements. The prototypes differ in sensing technology, cost, intended use, and construction requirements. One prototype uses four piezoelectric elements embedded between layers of Plexiglas and foam. Strain data are sent to a computer running Python through an Arduino board. A second prototype consists of a modified Wii Remote contained in a gum toy. Acceleration data are sent via Bluetooth to a computer running Max/MSP. We successfully pilot-tested the first device with a group of chimpanzees. We foresee using these devices for a range of cognitive experiments.
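One basic building block of such a device is turning the raw strain or acceleration stream into discrete "hit" events that can each trigger a sound. A hypothetical threshold-crossing detector with a refractory period (debounce) might look like the following; the actual sensor interface (Arduino serial, Bluetooth) is omitted:

```python
def detect_hits(samples, threshold, refractory=5):
    """Indices where the signal crosses `threshold` upward, ignoring
    re-crossings within `refractory` samples (simple debounce).
    Each returned index would trigger one sound event."""
    hits, last = [], -refractory
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i] and i - last >= refractory:
            hits.append(i)
            last = i
    return hits

# Two clean strikes and one bounce that should be suppressed:
signal = [0, 0, 6, 7, 2, 6, 0, 0, 0, 8, 1]
events = detect_hits(signal, threshold=4)
```

In a live setup, each detected index would be mapped to a sound whose identity the experimenter can reassign remotely, which is the core of requirement (iii).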


Subjects
Accelerometry/instrumentation; Actigraphy/instrumentation; Cognition/physiology; Monitoring, Ambulatory/instrumentation; Movement/physiology; Music; Pattern Recognition, Automated/methods; Accelerometry/veterinary; Acoustics/instrumentation; Actigraphy/veterinary; Animals; Elastic Modulus; Equipment Design; Equipment Failure Analysis; Micro-Electrical-Mechanical Systems/instrumentation; Pan troglodytes; Sound Spectrography/instrumentation; Sound Spectrography/veterinary; Transducers
7.
J Voice ; 2023 Apr 18.
Article in English | MEDLINE | ID: mdl-37080891

ABSTRACT

OBJECTIVES/HYPOTHESIS: Vibrato is a core aesthetic element in singing and varies considerably by both genre and era. Although it has been studied extensively in Western classical singing, there is a dearth of studies on vibrato in contemporary commercial music. To address this research gap, the objective of this study was to identify and investigate common crossover song material from the opera, operetta, and Schlager singing styles, spanning the early 20th century to the contemporary 21st century. STUDY DESIGN/METHODS: A total of 51 commercial recordings of two songs, "Es muss was Wunderbares sein" by Ralph Benatzky and "Die ganze Welt ist himmelblau" by Robert Stolz, both from "The White Horse Inn" ("Im weißen Rößl"), were collected from opera, operetta, and Schlager singers. Each sample was annotated using Praat and analyzed with a custom Matlab- and Python-based algorithmic approach, novel to vibrato research, combining singing-voice separation and sine-wave fitting. RESULTS: With respect to vibrato rate and extent, the three most notable findings were that (1) fo and vibrato were inherently connected; (2) Schlager, as a historical aesthetic category, has unique vibrato characteristics, with higher overall rate and lower overall extent; and (3) fo and vibrato extent varied over time based on the historical or contemporary recording year for each genre. CONCLUSIONS: Although these results should be interpreted with caution due to the limited sample size, such acoustical analysis is relevant for voice pedagogy. This study sheds light on the complexity of vocal vibrato production physiology and acoustics while providing insight into the aesthetic choices made when performing music of different genres and stylistic periods. In the age of crossover singing training and commercially available recordings, this investigation reveals distinctions in vocal vibrato across genres and eras that carry beneficial implications for singers and teachers of singing.
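The study's sine-fitting pipeline is not reproduced here, but the two parameters in question have simple operational definitions: vibrato rate from upward mean-crossings of the fo contour, and extent as half the peak-to-peak excursion in cents. A rough stdlib-only sketch under those assumptions:

```python
from math import log2, pi, sin
from statistics import fmean

def vibrato_rate_extent(fo, fs):
    """Estimate vibrato rate (Hz) from upward mean-crossings of an
    fo contour sampled at `fs` Hz, and extent as half the
    peak-to-peak excursion in cents."""
    mean_fo = fmean(fo)
    deviations = [f - mean_fo for f in fo]
    upward = sum(1 for a, b in zip(deviations, deviations[1:]) if a < 0 <= b)
    rate = upward * fs / len(fo)              # crossings per second
    extent_cents = 600 * log2(max(fo) / min(fo))  # half of peak-to-peak
    return rate, extent_cents

# Synthetic contour: 440 Hz carrier, 5.5 Hz vibrato, +/-50 cent depth
fs = 200
fo = [440 * 2 ** ((50 / 1200) * sin(2 * pi * 5.5 * t / fs))
      for t in range(2 * fs)]
rate, extent = vibrato_rate_extent(fo, fs)
```

On a real fo track, the contour would first be extracted (e.g., with Praat, as in the study) and segmented into sustained notes before applying such estimators.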

8.
Front Psychol ; 13: 862468, 2022.
Article in English | MEDLINE | ID: mdl-36726505

ABSTRACT

Sensorimotor synchronization is a longstanding paradigm in the analysis of isochronous beat tapping. Assessing the finger tapping of complex rhythmic patterns is far less explored and considerably more complex to analyze. Hence, whereas several instruments to assess tempo or beat tapping ability exist, there is at present a shortage of paradigms and tools for the assessment of the ability to tap to complex rhythmic patterns. To redress this limitation, we developed a standardized rhythm tapping test comprising test items of different complexity. The items were taken from the rhythm and tempo subtests of the Profile of Music Perception Skills (PROMS), and administered as tapping items to 40 participants (20 women). Overall, results showed satisfactory psychometric properties for internal consistency and test-retest reliability. Convergent, discriminant, and criterion validity correlations fell in line with expectations. Specifically, performance in rhythm tapping was correlated more strongly with performance in rhythm perception than in tempo perception, whereas performance in tempo tapping was more strongly correlated with performance in tempo than rhythm perception. Both tapping tasks were only marginally correlated with non-temporal perception tasks. In combination, the tapping tasks explained variance in external indicators of musical proficiency above and beyond the perceptual PROMS tasks. This tool allows for the assessment of complex rhythmic tapping skills in about 15 min, thus providing a useful addition to existing music aptitude batteries.
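Internal consistency of such a test battery is conventionally summarized with Cronbach's alpha: the ratio of shared to total score variance across items. A minimal sketch, illustrative rather than the study's analysis code:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of items, each a list of scores
    from the same participants in the same order."""
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    return k / (k - 1) * (1 - sum(variance(item) for item in item_scores)
                          / variance(totals))

# Three perfectly parallel items -> alpha of exactly 1.0
alpha = cronbach_alpha([[1, 2, 3, 4, 5]] * 3)
```

Values around 0.7-0.9 are the usual benchmark for "satisfactory" internal consistency in aptitude batteries of this kind.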

9.
Brain Struct Funct ; 225(7): 1997-2015, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32591927

ABSTRACT

The ability to generate complex hierarchical structures is a crucial component of human cognition that can be expressed in the musical domain in the form of hierarchical melodic relations. The neural underpinnings of this ability have been investigated by comparing the perception of well-formed melodies with unexpected sequences of tones. However, these contrasts do not specifically target the representation of rules generating hierarchical structure. Here, we present a novel paradigm in which identical melodic sequences are generated in four steps, according to three different rules: the Recursive rule, generating new hierarchical levels at each step; the Iterative rule, adding tones within a fixed hierarchical level without generating new levels; and a control rule (Repetition) that simply repeats the third step. Using fMRI, we compared brain activity across these rules when participants imagined the fourth step after listening to the third (generation phase), and when they listened to a fourth step (test sound phase) that was either well-formed or a violation. We found that, in comparison with Repetition and Iteration, imagining the fourth step under the Recursive rule activated the superior temporal gyrus (STG). During the test sound phase, we found fronto-temporo-parietal activity and hippocampal de-activation when processing violations, but no differences between rules. STG activation during the generation phase suggests that generating new hierarchical levels from previous steps might rely on retrieving appropriate melodic hierarchy schemas. Previous findings highlighting the role of the hippocampus and inferior frontal gyrus may reflect processing of unexpected melodic sequences, rather than hierarchy generation per se.


Subjects
Auditory Perception/physiology; Brain/diagnostic imaging; Music; Adult; Brain/physiology; Brain Mapping; Cognition/physiology; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult
10.
PLoS One ; 12(9): e0183531, 2017.
Article in English | MEDLINE | ID: mdl-28892486

ABSTRACT

Several theories about the origins of music have emphasized its biological and social functions, including in courtship. Music may act as a courtship display due to its capacity to vary in complexity and emotional content. Support for music's reproductive function comes from the recent finding that only women in the fertile phase of the reproductive cycle prefer composers of complex melodies to composers of simple ones as short-term sexual partners, which is also in line with the ovulatory shift hypothesis. However, the precise mechanisms by which music may influence sexual attraction are unknown, specifically how music may interact with visual attractiveness cues and affect perception and behaviour in both genders. Using a crossmodal priming paradigm, we examined whether listening to music influences ratings of facial attractiveness and dating desirability of opposite-sex faces. We also tested whether misattribution of arousal or pleasantness underlies these effects, and explored whether sex differences and menstrual cycle phase may be moderators. Our sample comprised 64 women in the fertile or infertile phase (no hormonal contraception use) and 32 men, carefully matched for mood, relationship status, and musical preferences. Musical primes (25 s) varied in arousal and pleasantness, and targets were photos of faces with neutral expressions (2 s). Group-wise analyses indicated that women, but not men, gave significantly higher ratings of facial attractiveness and dating desirability after having listened to music than in the silent control condition. High-arousing, complex music yielded the largest effects, suggesting that music may affect human courtship behaviour through induced arousal, which calls for further studies on the mechanisms by which music affects sexual attraction in real-life social contexts.


Subjects
Arousal; Courtship; Music/psychology; Adult; Affect; Analysis of Variance; Facial Expression; Female; Humans; Male; Young Adult
11.
Cognition ; 161: 31-45, 2017 04.
Article in English | MEDLINE | ID: mdl-28103526

ABSTRACT

The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somewhat independent of melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes.
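The recursive/iterative contrast can be made concrete with tone-frequency lists: a recursive step rewrites every element into a scaled copy of a seed motif (adding a hierarchical level), whereas an iterative step only appends tones at the current level. A toy illustration with an assumed motif, not the actual stimulus-generation code:

```python
PATTERN = (1.0, 1.5, 2.0)   # frequency ratios of the seed motif (assumed)

def recursive_step(freqs):
    """Each tone spawns a scaled copy of the motif: one new
    hierarchical level per step (length triples)."""
    return [f * r for f in freqs for r in PATTERN]

def iterative_step(freqs):
    """Append one more motif at the current level: length grows by
    a constant amount and no new level is created."""
    return freqs + [freqs[-1] * r for r in PATTERN[1:]]

seq_recursive = seq_iterative = [440.0]
for _ in range(3):
    seq_recursive = recursive_step(seq_recursive)
    seq_iterative = iterative_step(seq_iterative)
```

The divergent growth (exponential vs linear sequence length) is the structural signature separating the two processes.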


Subjects
Auditory Perception; Cognition; Fractals; Music/psychology; Acoustic Stimulation; Adult; Female; Humans; Learning; Male; Memory; Young Adult
12.
Acta Psychol (Amst) ; 166: 54-63, 2016 May.
Article in English | MEDLINE | ID: mdl-27058166

ABSTRACT

The context in which a stimulus is presented shapes the way it is processed. This effect has been studied extensively in the field of visual perception. Our understanding of how context affects the processing of auditory stimuli is, however, rather limited. Western music is primarily built on melodies (succession of pitches) typically accompanied by chords (harmonic context), which provides a natural template for the study of context effects in auditory processing. Here, we investigated whether pitch class equivalence judgments of tones are affected by the harmonic context within which the target tones are embedded. Nineteen musicians and 19 non-musicians completed a change detection task in which they were asked to determine whether two successively presented target tones, heard either in isolation or with a chordal accompaniment (same or different chords), belonged to the same pitch class. Both musicians and non-musicians were most accurate when the chords remained the same, less so in the absence of chordal accompaniment, and least when the chords differed between both target tones. Further analysis investigating possible mechanisms underpinning these effects of harmonic context on task performance revealed that both a change in gestalt (change in either chord or pitch class), as well as incongruency between change in target tone pitch class and change in chords, led to reduced accuracy and longer reaction times. Our results demonstrate that, similarly to visual processing, auditory processing is influenced by gestalt and congruency effects.
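The pitch-class equivalence at the heart of the task has a simple operational form: map each frequency to a MIDI-style pitch class so that octave-related tones compare equal. A sketch assuming equal temperament with A4 = 440 Hz:

```python
from math import log2

def pitch_class(freq_hz, a4=440.0):
    """Equal-tempered pitch class 0-11 (C = 0 ... B = 11); octave
    transpositions of a tone map to the same class."""
    midi = round(69 + 12 * log2(freq_hz / a4))
    return midi % 12

# A4 and A5 share a class; A4 and Bb4 do not:
same_class = pitch_class(440.0) == pitch_class(880.0)
diff_class = pitch_class(440.0) != pitch_class(466.16)
```

In the experiment, the "same pitch class" judgment corresponds to this equality test, with the chordal accompaniment modulating how reliably listeners compute it.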


Subjects
Acoustic Stimulation/methods; Gestalt Theory; Music; Pitch Perception; Reaction Time; Adult; Female; Humans; Judgment; Male; Task Performance and Analysis; Young Adult
13.
J Exp Psychol Hum Percept Perform ; 42(4): 594-609, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26594881

ABSTRACT

This research explored the relations between the predictability of musical structure, expressive timing in performance, and listeners' perceived musical tension. Studies analyzing the influence of expressive timing on listeners' affective responses have been constrained by the fact that, in most pieces, the notated durations limit performers' interpretive freedom. To circumvent this issue, we focused on the unmeasured prelude, a semi-improvisatory genre without notated durations. In Experiment 1, 12 professional harpsichordists recorded an unmeasured prelude on a harpsichord equipped with a MIDI console. Melodic expectation was assessed using a probabilistic model (IDyOM [Information Dynamics of Music]) whose expectations have been previously shown to match closely those of human listeners. Performance timing information was extracted from the MIDI data using a score-performance matching algorithm. Time-series analyses showed that, in a piece with unspecified note durations, the predictability of melodic structure measurably influenced tempo fluctuations in performance. In Experiment 2, another 10 harpsichordists, 20 nonharpsichordist musicians, and 20 nonmusicians listened to the recordings from Experiment 1 and rated the perceived tension continuously. Granger causality analyses were conducted to investigate predictive relations among melodic expectation, expressive timing, and perceived tension. Although melodic expectation, as modeled by IDyOM, modestly predicted perceived tension for all participant groups, neither of its components, information content or entropy, was Granger causal. In contrast, expressive timing was a strong predictor and was Granger causal. However, because melodic expectation was also predictive of expressive timing, our results outline a complete chain of influence from predictability of melodic structure via expressive performance timing to perceived musical tension.
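The logic of a Granger analysis can be sketched as nested lag regressions: series x "Granger-causes" y when adding x[t-1] to an autoregression of y reduces the residual sum of squares. A minimal no-intercept version on mean-centered series with a single lag, illustrative only (the study used full Granger causality tests):

```python
import random

def _centered(series):
    m = sum(series) / len(series)
    return [v - m for v in series]

def lagged_rss(y, x=None):
    """Residual sum of squares of y[t] ~ y[t-1] (+ x[t-1] if given),
    least squares without intercept on mean-centered series."""
    y = _centered(y)
    Y, Y1 = y[1:], y[:-1]
    if x is None:
        a = sum(p * q for p, q in zip(Y, Y1)) / sum(v * v for v in Y1)
        return sum((yt - a * y1) ** 2 for yt, y1 in zip(Y, Y1))
    X1 = _centered(x)[:-1]
    # Two-predictor normal equations solved by Cramer's rule
    s11 = sum(v * v for v in Y1)
    s22 = sum(v * v for v in X1)
    s12 = sum(p * q for p, q in zip(Y1, X1))
    t1 = sum(p * q for p, q in zip(Y, Y1))
    t2 = sum(p * q for p, q in zip(Y, X1))
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    return sum((yt - a * y1 - b * x1) ** 2
               for yt, y1, x1 in zip(Y, Y1, X1))

# Synthetic case where "timing" (x) drives "tension" (y) at lag 1:
rng = random.Random(3)
x = [rng.gauss(0, 1) for _ in range(300)]
y = [0.0] + [0.8 * x[t - 1] + 0.3 * rng.gauss(0, 1) for t in range(1, 300)]
improvement = 1 - lagged_rss(y, x) / lagged_rss(y)
```

A large RSS improvement in one direction but not the other is the asymmetry a Granger test formalizes (with an F statistic over multiple lags).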


Subjects
Auditory Perception; Music/psychology; Time Perception; Adult; Female; Humans; Male; Middle Aged; Young Adult
14.
Ethology ; 122(4): 329-342, 2016 04.
Article in English | MEDLINE | ID: mdl-27065507

ABSTRACT

Determining whether a species' vocal communication system is graded or discrete requires definition of its vocal repertoire. In this context, research on domestic pig (Sus scrofa domesticus) vocalizations, for example, has led to significant advances in our understanding of communicative functions. Despite their close relation to domestic pigs, little is known about wild boar (Sus scrofa) vocalizations. The few existing studies, conducted in the 1970s, relied on visual inspection of spectrograms to quantify acoustic parameters and lacked statistical analysis. Here, we use objective signal processing techniques and advanced statistical approaches to classify 616 calls recorded from semi-free-ranging animals. Based on four spectral and temporal acoustic parameters extracted from a multivariate analysis (quartile Q25, duration, spectral flux, and spectral flatness), we refine and extend the conclusions drawn from previous work and present a statistically validated classification of the wild boar vocal repertoire into four call types: grunts, grunt-squeals, squeals, and trumpets. While the majority of calls could be sorted into these categories using objective criteria, we also found evidence supporting a graded interpretation of some wild boar vocalizations as acoustically continuous, with the extremes representing discrete call types. The use of objective criteria based on modern techniques and statistics with respect to acoustic continuity advances our understanding of vocal variation. Integrating our findings with recent studies on domestic pig vocal behavior and emotions, we emphasize the importance of grunt-squeals for acoustic approaches to animal welfare and underline the need for further research investigating the role of domestication in animal vocal communication.
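Of the four parameters, spectral flatness is the simplest to define: the ratio of the geometric to the arithmetic mean of the power spectrum, close to 1 for noise-like sounds and near 0 for tonal ones. A sketch operating on a precomputed spectrum (the study's full feature-extraction chain is not shown):

```python
from math import exp, log

def spectral_flatness(power_spectrum):
    """Geometric mean / arithmetic mean of the power spectrum bins.
    Near 1 for noise-like signals, near 0 for tonal signals."""
    bins = [p for p in power_spectrum if p > 0]  # log() needs p > 0
    geometric = exp(sum(log(p) for p in bins) / len(bins))
    arithmetic = sum(bins) / len(bins)
    return geometric / arithmetic

flat = spectral_flatness([1.0] * 64)               # white-noise-like
tonal = spectral_flatness([100.0] + [0.01] * 63)   # single strong peak
```

In a call-classification pipeline, such per-frame values would be averaged over each call before feeding the statistical analysis.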

15.
Oncogene ; 22(17): 2633-42, 2003 May 01.
Article in English | MEDLINE | ID: mdl-12730677

ABSTRACT

Accumulating evidence suggests that angiotensin II type II (AT(2)) receptor subtype negatively regulates cell proliferation in pathophysiological conditions associated with tissue remodeling. However, the mechanisms through which AT(2) receptor achieves this effect remain poorly understood. In this study, we demonstrate that expression of AT(2) receptor inhibits the proliferation of rat fibroblasts in a ligand-independent manner. The antiproliferative action of AT(2) is dependent on the density of surface receptors. We show that AT(2) receptor expression negatively regulates G1 phase progression in both cycling cells and G0-arrested cells stimulated to re-enter the cell cycle, but has no detectable effect on apoptosis. The delay in cell-cycle progression of AT(2)-expressing cells is associated with downregulation of cyclin E expression, decreased assembly of cyclin E-Cdk2 complexes, and the resulting attenuation of Cdk2 activation. The induction of Cdk4 expression and activity is also markedly attenuated, which likely contributes to the inhibition of cyclin E expression. Ectopic expression of Cdk4 alleviates the proliferation defect of AT(2)-expressing cells. These findings suggest that the growth-inhibitory effects of the AT(2) receptor are attributable in part to its spontaneous inhibitory action on the cell cycle machinery.


Subjects
Cell Cycle/physiology; Cyclin-Dependent Kinases/genetics; Down-Regulation/physiology; Proto-Oncogene Proteins; Receptors, Angiotensin/genetics; Animals; Apoptosis/physiology; Cyclin E/metabolism; Cyclin-Dependent Kinase 4; Cyclin-Dependent Kinases/metabolism; Fibroblasts; Kinetics; Rats; Receptors, Angiotensin/agonists; Receptors, Angiotensin/metabolism
16.
Front Hum Neurosci ; 9: 619, 2015.
Article in English | MEDLINE | ID: mdl-26617511

ABSTRACT

Pupillary responses are a well-known indicator of emotional arousal but have not yet been systematically investigated in response to music. Here, we measured pupillary dilations evoked by short musical excerpts normalized for intensity and selected for their stylistic uniformity. Thirty participants (15 females) provided subjective ratings of music-induced felt arousal, tension, pleasantness, and familiarity for 80 classical music excerpts. The pupillary responses evoked by these excerpts were measured in another thirty participants (15 females). We probed the role of listener-specific characteristics such as mood, stress reactivity, self-reported role of music in life, liking for the selected excerpts, as well as of subjective responses to music, in pupillary responses. Linear mixed model analyses showed that a greater role of music in life was associated with larger dilations, and that larger dilations were also predicted for excerpts rated as more arousing or tense. However, an interaction between arousal and liking for the excerpts suggested that pupillary responses were modulated less strongly by arousal when the excerpts were particularly liked. An analogous interaction was observed between tension and liking. Additionally, males exhibited larger dilations than females. Overall, these findings suggest a complex interplay between bottom-up and top-down influences on pupillary responses to music.

17.
Neuropsychologia ; 78: 207-20, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26455803

ABSTRACT

Congenital amusia is a neurodevelopmental disorder characterized by impaired pitch processing. Although pitch simultaneities are among the fundamental building blocks of Western tonal music, affective responses to simultaneities such as isolated dyads varying in consonance/dissonance or chords varying in major/minor quality have rarely been studied in amusic individuals. Thirteen amusics and thirteen matched controls enculturated to Western tonal music provided pleasantness ratings of sine-tone dyads and complex-tone dyads in piano timbre as well as perceived happiness/sadness ratings of sine-tone triads and complex-tone triads in piano timbre. Acoustical analyses of roughness and harmonicity were conducted to determine whether similar acoustic information contributed to these evaluations in amusics and controls. Amusic individuals' pleasantness ratings indicated sensitivity to consonance and dissonance for complex-tone (piano timbre) dyads and, to a lesser degree, sine-tone dyads, whereas controls showed sensitivity when listening to both tone types. Furthermore, amusic individuals showed some sensitivity to the happiness-major association in the complex-tone condition, but not in the sine-tone condition. Controls rated major chords as happier than minor chords in both tone types. Linear regression analyses revealed that affective ratings of dyads and triads by amusic individuals were predicted by roughness but not harmonicity, whereas affective ratings by controls were predicted by both roughness and harmonicity. We discuss affective sensitivity in congenital amusia in view of theories of affective responses to isolated chords in Western listeners.


Subject(s)
Auditory Perceptual Disorders/psychology , Emotions , Music , Pitch Perception , Acoustic Stimulation , Female , Humans , Male , Middle Aged , Neuropsychological Tests , Signal Detection, Psychological
18.
Philos Trans R Soc Lond B Biol Sci ; 370(1664): 20140092, 2015 Mar 19.
Article in English | MEDLINE | ID: mdl-25646515

ABSTRACT

Advances in molecular technologies make it possible to pinpoint genomic factors associated with complex human traits. For cognition and behaviour, identification of underlying genes provides new entry points for deciphering the key neurobiological pathways. In the past decade, the search for genetic correlates of musicality has gained traction. Reports have documented familial clustering for different extremes of ability, including amusia and absolute pitch (AP), with twin studies demonstrating high heritability for some music-related skills, such as pitch perception. Certain chromosomal regions have been linked to AP and musical aptitude, while individual candidate genes have been investigated in relation to aptitude and creativity. Most recently, researchers in this field started performing genome-wide association scans. Thus far, studies have been hampered by relatively small sample sizes and limitations in defining components of musicality, including an emphasis on skills that can only be assessed in trained musicians. With opportunities to administer standardized aptitude tests online, systematic large-scale assessment of musical abilities is now feasible, an important step towards high-powered genome-wide screens. Here, we offer a synthesis of existing literatures and outline concrete suggestions for the development of comprehensive operational tools for the analysis of musical phenotypes.


Subject(s)
Auditory Perceptual Disorders/genetics , Music , Genome-Wide Association Study , Humans , Research
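The review above cites twin studies demonstrating high heritability for some music-related skills. One classic estimator behind such claims is Falconer's formula, h² = 2(r_MZ − r_DZ); a minimal sketch, with invented placeholder correlations:

```python
# Falconer's twin-study heritability estimate. The correlation values
# passed in below are hypothetical, not figures from the review.
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Heritability estimate: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# e.g. pitch-perception scores correlating 0.8 in identical twins
# and 0.5 in fraternal twins (hypothetical numbers) give h^2 of 0.6
print(falconer_h2(0.8, 0.5))
```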
19.
Front Psychol ; 5: 141, 2014.
Article in English | MEDLINE | ID: mdl-24605104

ABSTRACT

Can listeners recognize the individual characteristics of unfamiliar performers playing two different musical pieces on the harpsichord? Six professional harpsichordists, three prize-winners and three non-prize-winners, made two recordings of two pieces from the Baroque period (a variation on a Partita by Frescobaldi and a rondo by François Couperin) on an instrument equipped with a MIDI console. Short (8 to 15 s) excerpts from these 24 recordings were subsequently used in a sorting task in which 20 musicians and 20 non-musicians, balanced for gender, listened to these excerpts and grouped together those that they thought had been played by the same performer. Twenty-six participants, including 17 musicians and nine non-musicians, performed significantly better than chance, demonstrating that the excerpts contained sufficient information to enable listeners to recognize the individual characteristics of the performers. The grouping accuracy of musicians was significantly higher than that observed for non-musicians. No significant difference in grouping accuracy was found between prize-winning performers and non-winners or between genders. However, the grouping accuracy was significantly higher for the rondo than for the variation, suggesting that the features of the two pieces differed in a way that affected the listeners' ability to sort them accurately. Furthermore, only musicians performed above chance level when matching variation excerpts with rondo excerpts, suggesting that accurately assigning recordings of different pieces to their performer may require musical training. Comparisons between the MIDI performance data and the results of the sorting task revealed that tempo and, to a lesser extent, note onset asynchrony were the most important predictors of the perceived distance between performers, and that listeners appeared to rely mostly on a holistic percept of the excerpts rather than on a comparison of note-by-note expressive patterns.
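The abstract above relates MIDI-derived features (chiefly tempo) to the perceived distance between performers from the sorting task. A hedged sketch of one way to frame that comparison, correlating a tempo-distance matrix with a perceived-distance matrix over unique performer pairs (all numbers below are simulated placeholders, not the study's data):

```python
# Sketch: correlate pairwise tempo distance between performers with
# perceived distance from a sorting task. Synthetic, hypothetical data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_perf = 6
tempo = rng.uniform(60, 90, n_perf)               # mean tempo per performer (BPM)
tempo_dist = np.abs(tempo[:, None] - tempo[None, :])

# perceived distance: e.g. how rarely two performers' excerpts were
# grouped together; here simulated as tempo distance plus noise
perceived = tempo_dist + rng.normal(0, 2, (n_perf, n_perf))
perceived = (perceived + perceived.T) / 2         # symmetrize

iu = np.triu_indices(n_perf, k=1)                 # unique performer pairs
r, p = pearsonr(tempo_dist[iu], perceived[iu])
print(round(r, 2))
```

With real data, a permutation (Mantel-style) test would be the safer significance check, since matrix entries sharing a performer are not independent.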

20.
PLoS One ; 9(2): e89642, 2014.
Article in English | MEDLINE | ID: mdl-24586929

ABSTRACT

Musical skills and expertise vary greatly in Western societies. Individuals can differ in their repertoire of musical behaviours as well as in the level of skill they display for any single musical behaviour. The types of musical behaviours we refer to here are broad, ranging from performance on an instrument and listening expertise, to the ability to employ music in functional settings or to communicate about music. In this paper, we first describe the concept of 'musical sophistication', which can be used to describe the multi-faceted nature of musical expertise. Second, we develop a novel measurement instrument, the Goldsmiths Musical Sophistication Index (Gold-MSI), to assess self-reported musical skills and behaviours on multiple dimensions in the general population using a large Internet sample (n = 147,636). Third, we report results from several lab studies, demonstrating that the Gold-MSI possesses good psychometric properties, and that self-reported musical sophistication is associated with performance on two listening tasks. Finally, we identify occupation, occupational status, age, gender, and wealth as the main socio-demographic factors associated with musical sophistication. Results are discussed in terms of theoretical accounts of implicit and statistical music learning and with regard to social conditions of sophisticated musical engagement.


Subject(s)
Auditory Perception , Music , Adult , Aptitude Tests , Female , Humans , Male , Memory , Self Report
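The Gold-MSI abstract above reports "good psychometric properties" for a multi-item self-report scale. A minimal sketch of one standard check of that kind, Cronbach's alpha for internal consistency, on simulated item responses (the item count and data are assumptions, not the Gold-MSI's):

```python
# Cronbach's alpha for a respondents x items score matrix, applied to
# synthetic Likert-style items driven by one latent trait.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(3)
trait = rng.normal(0, 1, 500)                     # latent "musical sophistication"
items = trait[:, None] + rng.normal(0, 0.8, (500, 7))  # 7 correlated items
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

Values above roughly 0.7–0.8 are conventionally read as acceptable internal consistency for a subscale.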