Results 1 - 20 of 34
1.
Dev Sci ; 26(5): e13346, 2023 09.
Article in English | MEDLINE | ID: mdl-36419407

ABSTRACT

Music and language are two fundamental forms of human communication. Many studies examine the development of music- and language-specific knowledge, but few studies compare how listeners know they are listening to music or language. Although we readily differentiate these domains, how we distinguish music and language, and especially speech and song, is not obvious. In two studies, we asked how listeners categorize speech and song. Study 1 used online survey data to illustrate that 4- to 17-year-olds and adults have verbalizable distinctions for speech and song. At all ages, listeners described speech and song differences based on acoustic features, but compared with older children, 4- to 7-year-olds more often used volume to describe differences, suggesting that they are still learning to identify the features most useful for differentiating speech from song. Study 2 used a perceptual categorization task to demonstrate that 4- to 8-year-olds and adults readily categorize speech and song, but this ability improves with age, especially for identifying song. Despite generally rating song as more speech-like, 4- and 6-year-olds rated ambiguous speech-song stimuli as more song-like than 8-year-olds and adults. Four acoustic features predicted song ratings: F0 instability, utterance duration, harmonicity, and spectral flux. However, 4- and 6-year-olds' song ratings were better predicted by F0 instability than by harmonicity and utterance duration. These studies characterize how children develop conceptual and perceptual understandings of speech and song and suggest that children under age 8 are still learning which features are important for categorizing utterances as speech or song.

RESEARCH HIGHLIGHTS: Children and adults conceptually and perceptually categorize speech and song from age 4. Listeners use F0 instability, harmonicity, spectral flux, and utterance duration to determine whether vocal stimuli sound like song. Acoustic cue weighting changes with age, becoming adult-like at age 8 for perceptual categorization and at age 12 for conceptual differentiation. Young children are still learning to categorize speech and song, which leaves open the possibility that music- and language-specific skills are not so domain-specific.


Subject(s)
Music; Speech Perception; Voice; Adult; Child; Humans; Adolescent; Child, Preschool; Speech; Auditory Perception; Learning
2.
Behav Brain Sci ; 44: e74, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588027

ABSTRACT

Both target papers cite evidence from infancy and early childhood to support the notion of human musicality as a somewhat static suite of capacities; however, in our view they do not adequately acknowledge the critical role of developmental timing, the acquisition process, or the dynamics of social learning, especially during later periods of development such as middle childhood.


Subject(s)
Music; Biological Evolution; Child; Child, Preschool; Humans
3.
J Exp Child Psychol ; 159: 159-174, 2017 07.
Article in English | MEDLINE | ID: mdl-28288412

ABSTRACT

Movement to music is a universal human behavior, yet little is known about how observers perceive audiovisual synchrony in complex musical displays such as a person dancing to music, particularly during infancy and childhood. In the current study, we investigated how perception of musical audiovisual synchrony develops over the first year of life. We habituated infants to a video of a person dancing to music and subsequently presented videos in which the visual track was matched (synchronous) or mismatched (asynchronous) with the audio track. In a visual-only control condition, we presented the same visual stimuli with no sound. In Experiment 1, we found that older infants (8-12 months) exhibited a novelty preference for the mismatched movie when both auditory information and visual information were available and showed no preference when only visual information was available. By contrast, younger infants (5-8 months) in Experiment 2 did not discriminate matching stimuli from mismatching stimuli. This suggests that the ability to perceive musical audiovisual synchrony may develop during the second half of the first year of life.


Subject(s)
Auditory Perception; Dancing/psychology; Discrimination, Psychological; Motion Perception; Music/psychology; Child Psychology; Visual Perception; Age Factors; Attention; Female; Humans; Infant; Male; Time Perception
4.
Philos Trans R Soc Lond B Biol Sci ; 379(1908): 20230253, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39005036

ABSTRACT

Misophonic experiences are common in the general population, and they may shed light on everyday emotional reactions to multi-modal stimuli. We performed an online study of a non-clinical sample to understand the extent to which adults who have misophonic reactions are generally reactive to a range of audio-visual emotion-inducing stimuli. We also hypothesized that musicality might be predictive of one's emotional reactions to these stimuli because music is an activity that involves strong connections between sensory processing and meaningful emotional experiences. Participants completed self-report scales of misophonia and musicality. They also watched videos meant to induce misophonia, autonomous sensory meridian response (ASMR) and musical chills, and were asked to click a button whenever they had any emotional reaction to the video. They also rated the emotional valence and arousal of each video. Reactions to misophonia videos were predicted by reactions to ASMR and chills videos, which could indicate that the frequency with which individuals experience emotional responses varies similarly across both negative and positive emotional contexts. Musicality scores were not correlated with measures of misophonia. These findings could reflect a general phenotype of stronger emotional reactivity to meaningful sensory inputs. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'.


Subject(s)
Emotions; Music; Humans; Adult; Female; Male; Music/psychology; Young Adult; Middle Aged; Adolescent; Auditory Perception; Arousal/physiology
5.
Dev Psychol ; 59(5): 829-844, 2023 May.
Article in English | MEDLINE | ID: mdl-36951723

ABSTRACT

Sensitivity to auditory rhythmic structures in music and language is evident as early as infancy, but performance on beat perception tasks is often well below adult levels and improves gradually with age. While some research has suggested the ability to perceive musical beat develops early, even in infancy, it remains unclear whether adult-like perception of musical beat is present in children. The capacity to sustain an internal sense of the beat is critical for various rhythmic musical behaviors, yet very little is known about the development of this ability. In this study, 223 participants ranging in age from 4 to 23 years from the Las Vegas, Nevada, community completed a musical beat discrimination task, during which they first listened to a strongly metrical musical excerpt and then attempted to sustain their perception of the musical beat while listening to a repeated, beat-ambiguous rhythm for up to 14.4 s. They then indicated whether a drum probe matched or did not match the beat. Results suggested that the ability to identify the matching probe improved throughout middle childhood (8-9 years) and did not reach adult-like levels until adolescence (12-14 years). Furthermore, scores on the beat perception task were positively related to phonological processing, after accounting for age, short-term memory, and music and dance training. This study lends further support to the notion that children's capacity for beat perception is not fully developed until adolescence and suggests we should reconsider assumptions of musical beat mastery by infants and young children. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Music; Adult; Adolescent; Humans; Child; Child, Preschool; Young Adult; Auditory Perception; Linguistics; Language
6.
J Autism Dev Disord ; 2023 May 04.
Article in English | MEDLINE | ID: mdl-37140745

ABSTRACT

PURPOSE: Processing real-world sounds requires both acoustic and higher-order semantic information. We tested the theory that individuals with autism spectrum disorder (ASD) show enhanced processing of acoustic features and impaired processing of semantic information. METHODS: We used a change deafness task, which required detecting when speech and non-speech auditory objects were replaced, and a speech-in-noise task, in which spoken sentences had to be comprehended in the presence of background speech, to examine the extent to which 7- to 15-year-old children with ASD (n = 27) rely on acoustic and semantic information, compared with age-matched (n = 27) and IQ-matched (n = 27) groups of typically developing (TD) children. Within a larger group of 7- to 15-year-old TD children (n = 105), we correlated IQ, ASD symptoms, and the use of acoustic and semantic information. RESULTS: Children with ASD performed worse overall on the change deafness task than the age-matched TD controls, but they did not differ from IQ-matched controls. All groups used acoustic and semantic information similarly and displayed an attentional bias toward changes that involved the human voice. Similarly, for the speech-in-noise task, age-matched, but not IQ-matched, TD controls performed better overall than the ASD group. However, all groups used semantic context to a similar degree. Among TD children, neither IQ nor the presence of ASD symptoms predicted the use of acoustic or semantic information. CONCLUSION: Children with and without ASD used acoustic and semantic information similarly during auditory change deafness and speech-in-noise tasks.

7.
Psychophysiology ; 59(2): e13963, 2022 02.
Article in English | MEDLINE | ID: mdl-34743347

ABSTRACT

Synchronization of movement to music is a seemingly universal human capacity that depends on sustained beat perception. Previous research has suggested that listeners' conscious perception of musical structure (e.g., beat and meter) might be reflected in neural responses that follow the frequency of the beat. However, the extent to which these neural responses directly reflect concurrent, listener-reported perception of musical beat, rather than stimulus-driven activity, is understudied. We investigated whether steady-state evoked potentials (SSEPs), measured using electroencephalography (EEG), reflect conscious perception of beat by holding the stimulus constant while contextually manipulating listeners' perception and measuring perceptual responses on every trial. Listeners with minimal music training heard a musical excerpt that strongly supported one of two beat patterns (context phase), followed by a rhythm consistent with either beat pattern (ambiguous phase). During the final phase, listeners indicated whether or not a superimposed drum matched the perceived beat (probe phase). Participants were more likely to indicate that the probe matched the music when that probe matched the original context, suggesting an ability to maintain the beat percept through the ambiguous phase. Likewise, we observed that the spectral amplitude during the ambiguous phase was higher at frequencies that matched the beat of the preceding context. Exploratory analyses investigated whether EEG amplitude at the beat-related SSEP frequencies predicted performance on the beat induction task on a single-trial basis, but the results were inconclusive. Our findings substantiate the claim that auditory SSEPs reflect conscious perception of musical beat and not just stimulus features.


Subject(s)
Auditory Perception/physiology; Evoked Potentials/physiology; Music; Time Perception/physiology; Adult; Electroencephalography; Female; Humans; Male; Young Adult
8.
Front Neurosci ; 16: 924806, 2022.
Article in English | MEDLINE | ID: mdl-36213735

ABSTRACT

Misophonia can be characterized both as a condition and as a negative affective experience. Misophonia is described as feeling irritation or disgust in response to certain sounds, such as those of eating, drinking, gulping, and breathing. Although the earliest misophonic experiences are often described as occurring during childhood, relatively little is known about the developmental pathways that lead to individual variation in these experiences. This literature review discusses evidence of misophonic reactions during childhood and explores the possibility that early heightened sensitivities to both positive and negative sounds, such as to music, might indicate a vulnerability for misophonia and misophonic reactions. We review when misophonia may develop, how it is distinguished from other auditory conditions (e.g., hyperacusis, phonophobia, or tinnitus), and how it relates to developmental disorders (e.g., autism spectrum disorder or Williams syndrome). Finally, we explore the possibility that children with heightened musicality could be more likely to experience misophonic reactions and develop misophonia.

9.
Front Psychol ; 13: 998321, 2022.
Article in English | MEDLINE | ID: mdl-36467160

ABSTRACT

Listening to groovy music is an enjoyable experience and a common human behavior in some cultures. Specifically, many listeners agree that songs they find to be more familiar and pleasurable are more likely to induce the experience of musical groove. While the pleasurable and dance-inducing effects of musical groove are omnipresent, we know less about how subjective feelings toward music, individual musical or dance experiences, or more objective musical perception abilities are correlated with the way we experience groove. Therefore, the present study aimed to evaluate how musical and dance sophistication relates to musical groove perception. One hundred and twenty-four participants completed an online study during which they rated 20 songs, considered high- or low-groove, and completed the Goldsmiths Musical Sophistication Index, the Goldsmiths Dance Sophistication Index, the Beat and Meter Sensitivity Task, and a modified short version of the Profile for Music Perception Skills. Our results reveal that measures of perceptual abilities, musical training, and social dancing predicted the difference in groove rating between high- and low-groove music. Overall, these findings support the notion that listeners' individual experiences and predispositions may shape their perception of musical groove, although other causal directions are also possible. This research helps elucidate the correlates and possible causes of musical groove perception in a wide range of listeners.

10.
Dev Sci ; 14(4): 865-72, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21676105

ABSTRACT

Effects of culture-specific experience on musical rhythm perception are evident by 12 months of age, but the role of culture-general rhythm processing constraints during early infancy has not been explored. Using a habituation procedure with 5- and 7-month-old infants, we investigated effects of temporal interval ratio complexity on discrimination of standard from novel musical patterns containing 200-ms disruptions. Infants were tested in three ratio conditions: simple (2:1), which is typical in Western music, complex (3:2), which is typical in other musical cultures, and highly complex (7:4), which is relatively rare in music throughout the world. Unlike adults and older infants, whose accuracy was predicted by familiarity, younger infants were influenced by ratio complexity, as shown by their successful discrimination in the simple and complex conditions but not in the highly complex condition. The findings suggest that ratio complexity constrains rhythm perception even prior to the acquisition of culture-specific biases.


Subject(s)
Auditory Perception; Culture; Music; Acoustic Stimulation; Child Development; Cross-Cultural Comparison; Female; Humans; Infant; Male; Recognition, Psychology
11.
J Exp Psychol Gen ; 150(2): 314-339, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32852978

ABSTRACT

Most music is temporally organized within a metrical hierarchy, having nested periodic patterns that give rise to the experience of stronger (downbeat) and weaker (upbeat) events. Musical meter presumably makes it possible to dance, sing, and play instruments in synchrony with others. It is nevertheless unclear whether or not listeners perceive multiple levels of periodicity simultaneously, and if they do, when and how they learn to do this. We tested children, adolescents, and musically trained and untrained adults with a new meter perception task. We presented excerpts of human-performed music paired with metronomes that matched or mismatched the metrical structure of the music at 2 hierarchical levels (beat and measure), and asked listeners to provide a rating of fit of metronome and music. Fit ratings suggested that adults with and without musical training were sensitive to both levels of meter simultaneously, but ratings were more strongly influenced by beat-level than by measure-level synchrony. Sensitivity to two simultaneous levels of meter was not evident in children or adolescents. Sensitivity to the beat alone was apparent in the youngest children and increased with age, whereas sensitivity to the measure alone was not present in younger children (5- to 8-year-olds). These findings suggest a prolonged period of development and refinement of hierarchical beat perception and surprisingly weak overall ability to attend to 2 beat levels at the same time across all ages. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Auditory Perception/physiology; Music; Acoustic Stimulation; Adolescent; Adult; Female; Humans; Learning/physiology; Male; Middle Aged; Young Adult
12.
J Exp Psychol Hum Percept Perform ; 47(11): 1516-1542, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34843358

ABSTRACT

Auditory perception of time is superior to visual perception, both for simple intervals and beat-based musical rhythms. To what extent does this auditory advantage characterize perception of different hierarchical levels of musical meter, and how is it related to lifelong experience with music? We paired musical excerpts with auditory and visual metronomes that matched or mismatched the musical meter at the beat level (faster) and measure level (slower) and obtained fit ratings from adults and children (5-10 years). Adults exhibited an auditory advantage in this task for the beat level, but not for the measure level. Children also displayed an auditory advantage that increased with age for the beat level. In both modalities, their overall sensitivity to beat increased with age, but they were not sensitive to measure-level matching at any age. More musical training was related to enhanced sensitivity in both auditory and visual modalities for measure-level matching in adults and beat-level matching in children. These findings provide evidence for auditory superiority of beat perception across development, and they suggest that beat and meter perception develop quite gradually and rely on lifelong acquisition of musical knowledge. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Music; Acoustic Stimulation; Adult; Auditory Perception; Child; Humans; Visual Perception
13.
Cortex ; 45(1): 110-8, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19058799

ABSTRACT

Listeners may favour particular rhythms because of their degree of conformity to culture-specific expectations or because of perceptual constraints that are apparent early in development. In two experiments we examined adults' and 6-month-old infants' detection of subtle rhythmic and melodic changes to two sequences of tones, a conventional rhythm that musically untrained adults rated as rhythmically good and an unconventional rhythm that was rated as poor. Detection of the changes was above chance in all conditions, but adults and infants performed more accurately in the context of the conventional rhythm. Unlike adults, who benefited from rhythmic conventionality only when detecting rhythmic changes, infants benefited when detecting melodic as well as rhythmic changes. The findings point to infant and adult parallels for some aspects of rhythm processing and to integrated perception of rhythm and melody early in life.


Subject(s)
Aging/physiology; Aging/psychology; Auditory Perception/physiology; Music/psychology; Acoustic Stimulation; Adolescent; Adult; Discrimination, Psychological/physiology; Female; Humans; Infant; Male; Psychomotor Performance/physiology; Young Adult
14.
J Exp Psychol Hum Percept Perform ; 35(4): 1232-44, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19653761

ABSTRACT

When presented with alternating low and high tones, listeners are more likely to perceive 2 separate streams of tones ("streaming") than a single coherent stream when the frequency separation (Δf) between tones is greater and the number of tone presentations is greater ("buildup"). However, the same large-Δf sequence reduces streaming for subsequent patterns presented after a gap of up to several seconds. Buildup occurs at a level of neural representation with sharp frequency tuning. The authors used adaptation to demonstrate that the contextual effect of prior Δf arose from a representation with broad frequency tuning, unlike buildup. Separate adaptation did not occur in a representation of Δf independent of frequency range, suggesting that any frequency-shift detectors undergoing adaptation are also frequency specific. A separate effect of prior perception was observed, dissociating stimulus-related (i.e., Δf) and perception-related (i.e., 1 stream vs. 2 streams) adaptation. Viewing a visual analogue to auditory streaming had no effect on subsequent perception of streaming, suggesting adaptation in auditory-specific brain circuits. These results, along with previous findings on buildup, suggest that processing in at least 3 levels of auditory neural representation underlies segregation and formation of auditory streams.


Subject(s)
Adaptation, Physiological; Auditory Pathways/physiology; Auditory Perception/physiology; Adult; Female; Humans; Male; Psychoacoustics; Visual Perception/physiology
15.
Trends Cogn Sci ; 11(11): 466-72, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17981074

ABSTRACT

Musical structure is complex, consisting of a small set of elements that combine to form hierarchical levels of pitch and temporal structure according to grammatical rules. As with language, different systems use different elements and rules for combination. Drawing on recent findings, we propose that music acquisition begins with basic features, such as peripheral frequency-coding mechanisms and multisensory timing connections, and proceeds through enculturation, whereby everyday exposure to a particular music system creates, in a systematic order of acquisition, culture-specific brain structures and representations. Finally, we propose that formal musical training invokes domain-specific processes that affect salience of musical input and the amount of cortical tissue devoted to its processing, as well as domain-general processes of attention and executive functioning.


Subject(s)
Education; Learning/physiology; Music/psychology; Auditory Cortex/physiology; Culture; Humans; Language Development; Pitch Perception/physiology
16.
J Exp Psychol Hum Percept Perform ; 34(4): 1007-16, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18665741

ABSTRACT

The authors examined the effect of preceding context on auditory stream segregation. Low tones (A), high tones (B), and silences (-) were presented in an ABA- pattern. Participants indicated whether they perceived 1 or 2 streams of tones. The A tone frequency was fixed, and the B tone was the same as the A tone or had 1 of 3 higher frequencies. Perception of 2 streams in the current trial increased with greater frequency separation between the A and B tones (Δf). Larger Δf in previous trials modified this pattern, causing less streaming in the current trial. This occurred even when listeners were asked to bias their perception toward hearing 1 stream or 2 streams. The effect of previous Δf was not due to response bias because simply perceiving 2 streams in the previous trial did not cause less streaming in the current trial. Finally, the effect of previous Δf was diminished, though still present, when the silent duration between trials was increased to 5.76 s. The time course of this context effect on streaming implicates the involvement of auditory sensory memory or neural adaptation.


Subject(s)
Auditory Perception/physiology; Discrimination, Psychological/physiology; Psychomotor Performance/physiology; Acoustic Stimulation/methods; Adult; Attention/physiology; Auditory Cortex/physiology; Auditory Threshold/physiology; Female; Humans; Male; Memory/physiology; Middle Aged; Models, Neurological; Perceptual Masking/physiology; Pitch Discrimination/physiology; Psychoacoustics; Time Factors
17.
Infancy ; 22(4): 421-435, 2017.
Article in English | MEDLINE | ID: mdl-31772509

ABSTRACT

The ideal of scientific progress is that we accumulate measurements and integrate these into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research, especially with infant participants, also has discipline-specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large-scale, multi-laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will not only help us estimate how robust and replicable these phenomena are, but also gain new theoretical insights into how they vary across ages, linguistic communities, and measurement methods. This project has the potential for a variety of positive outcomes, including less-biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best-practices blueprints for future infancy research.

18.
Cognition ; 100(1): 73-99, 2006 May.
Article in English | MEDLINE | ID: mdl-16380107

ABSTRACT

We review the literature on infants' perception of pitch and temporal patterns, relating it to comparable research with human adult and non-human listeners. Although there are parallels in relative pitch processing across age and species, there are notable differences. Infants accomplish such tasks with ease, but non-human listeners require extensive training to achieve very modest levels of performance. In general, human listeners process auditory sequences in a holistic manner, and non-human listeners focus on absolute aspects of individual tones. Temporal grouping processes and categorization on the basis of rhythm are evident in non-human listeners and in human infants and adults. Although synchronization to sound patterns is thought to be uniquely human, tapping to music, synchronous firefly flashing, and other cyclic behaviors can be described by similar mathematical principles. We conclude that infants' music perception skills are a product of general perceptual mechanisms that are neither music- nor species-specific. Along with general-purpose mechanisms for the perceptual foundations of music, we suggest unique motivational mechanisms that can account for the perpetuation of musical behavior in all human societies.


Subject(s)
Auditory Perception; Child Development; Music/psychology; Child Psychology; Auditory Perception/physiology; Humans; Infant; Pitch Perception; Time Perception
19.
Psychon Bull Rev ; 23(5): 1553-1558, 2016 10.
Article in English | MEDLINE | ID: mdl-26732385

ABSTRACT

Numerous studies have shown that formal musical training is associated with sensory, motor, and cognitive advantages in individuals of various ages. However, the nature of the observed differences between musicians and nonmusicians is poorly understood, and little is known about the listening skills of individuals who engage in alternative types of everyday musical activities. Here, we show that people who have frequently played music video games outperform nonmusician controls on a battery of music perception tests. These findings reveal that enhanced musical aptitude can be found among individuals who play music video games, raising the possibility that music video games could potentially enhance music perception skills in individuals across a broad spectrum of society who are otherwise unable to invest the time and/or money required to learn a musical instrument.


Subject(s)
Aptitude; Auditory Perception; Music/psychology; Video Games/psychology; Adolescent; Adult; Aged; Female; Humans; Learning; Male; Middle Aged; Personality; Young Adult
20.
Front Psychol ; 7: 939, 2016.
Article in English | MEDLINE | ID: mdl-27445907

ABSTRACT

The available evidence indicates that the music of a culture reflects the speech rhythm of the prevailing language. The normalized pairwise variability index (nPVI) is a measure of durational contrast between successive events that can be applied to vowels in speech and to notes in music. Music-language parallels may have implications for the acquisition of language and music, but it is unclear whether native-language rhythms are reflected in children's songs. In general, children's songs exhibit greater rhythmic regularity than adults' songs, in line with their caregiving goals and frequent coordination with rhythmic movement. Accordingly, one might expect lower nPVI values (i.e., lower variability) for such songs regardless of culture. In addition to their caregiving goals, children's songs may serve an intuitive didactic function by modeling culturally relevant content and structure for music and language. One might therefore expect pronounced rhythmic parallels between children's songs and language of origin. To evaluate these predictions, we analyzed a corpus of 269 English and French songs from folk and children's music anthologies. As in prior work, nPVI values were significantly higher for English than for French children's songs. For folk songs (i.e., songs not for children), the difference in nPVI for English and French songs was small and in the expected direction but non-significant. We subsequently collected ratings from American and French monolingual and bilingual adults, who rated their familiarity with each song, how much they liked it, and whether or not they thought it was a children's song. Listeners gave higher familiarity and liking ratings to songs from their own culture, and they gave higher familiarity and preference ratings to children's songs than to other songs. Although higher child-directedness ratings were given to children's than to folk songs, French listeners drove this effect, and their ratings were uniquely predicted by nPVI. 
Together, these findings suggest that language-based rhythmic structures are evident in children's songs, and that listeners expect exaggerated language-based rhythms in children's songs. The implications of these findings for enculturation processes and for the acquisition of music and language are discussed.
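The nPVI used in the study above has a standard definition in the speech-rhythm literature: for a sequence of m durations d_1, ..., d_m, nPVI = 100/(m-1) × Σ |d_k − d_{k+1}| / ((d_k + d_{k+1})/2), i.e., the mean normalized contrast between successive durations. A minimal Python sketch of that computation (the function name and the example duration sequences are illustrative, not taken from the study's corpus):

```python
def npvi(durations):
    """Normalized pairwise variability index for a sequence of event
    durations (e.g., vowel durations in speech or note durations in music).
    Higher values indicate greater durational contrast between successive
    events; a perfectly regular sequence scores 0."""
    if len(durations) < 2:
        raise ValueError("nPVI requires at least two durations")
    # Normalized contrast for each adjacent pair: |a - b| / mean(a, b)
    contrasts = [abs(a - b) / ((a + b) / 2.0)
                 for a, b in zip(durations, durations[1:])]
    return 100.0 * sum(contrasts) / len(contrasts)

print(npvi([1.0, 1.0, 1.0, 1.0]))  # 0.0 (perfectly regular)
print(npvi([2.0, 1.0, 2.0, 1.0]))  # ~66.7 (strong long-short alternation)
```

On this measure, the greater vowel-duration contrast of English relative to French is what yields the higher nPVI values for English children's songs reported above.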
