Results 1 - 20 of 59
1.
Psychol Music ; 51(1): 172-187, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36532618

ABSTRACT

We examined pitch-error detection in well-known songs sung with or without meaningful lyrics. In Experiment 1, adults heard the initial phrase of familiar songs sung with lyrics or repeating syllables (la) and judged whether they heard an out-of-tune note. Half of the renditions had a single pitch error (50 or 100 cents); half were in tune. Listeners were poorer at pitch-error detection in songs with lyrics. In Experiment 2, within-note pitch fluctuations in the same performances were eliminated by auto-tuning. Again, pitch-error detection was worse for renditions with lyrics (50 cents), suggesting adverse effects of semantic processing. In Experiment 3, songs were sung with repeating syllables or scat syllables to ascertain the role of phonetic variability. Performance was poorer for scat than for repeating syllables, indicating adverse effects of phonetic variability, but overall performance exceeded that in Experiment 1. In Experiment 4, listeners evaluated songs in all three styles (repeating syllables, scat, lyrics) within the same session. Performance was best with repeating syllables (50 cents) and did not differ between the scat and lyric versions. In short, tracking the pitches of highly familiar songs was impaired by the presence of words, an impairment stemming primarily from phonetic variability rather than interference from semantic processing.
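
For context on the size of these mismatches, the sketch below applies the standard definition of the cent (one cent is 1/100 of an equal-tempered semitone, so an interval of c cents corresponds to a frequency ratio of 2^(c/1200)). This conversion is general music-acoustics knowledge rather than material from the article, and the helper name is illustrative.

# Illustrative only: the cents-to-ratio conversion is the textbook definition,
# not code or data from the study above.
def cents_to_ratio(cents: float) -> float:
    """Frequency ratio for a pitch interval given in cents (100 cents = 1 semitone)."""
    return 2 ** (cents / 1200)

for c in (50, 100):  # the pitch-error sizes used in the experiments
    ratio = cents_to_ratio(c)
    print(f"{c} cents -> ratio {ratio:.4f} ({(ratio - 1) * 100:.1f}% frequency change)")

The output shows that a 50-cent error corresponds to roughly a 2.9% change in frequency and a 100-cent error to roughly 5.9%.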

2.
Nat Hum Behav ; 6(11): 1545-1556, 2022 11.
Article in English | MEDLINE | ID: mdl-35851843

ABSTRACT

When interacting with infants, humans often alter their speech and song in ways thought to support communication. Theories of human child-rearing, informed by data on vocal signalling across species, predict that such alterations should appear globally. Here, we show acoustic differences between infant-directed and adult-directed vocalizations across cultures. We collected 1,615 recordings of infant- and adult-directed speech and song produced by 410 people in 21 urban, rural and small-scale societies. Infant-directedness was reliably classified from acoustic features alone, and the acoustic profile of infant-directedness differed across language and music but in a consistent fashion. We then studied listener sensitivity to these acoustic features. We played the recordings to 51,065 people from 187 countries, recruited via an English-language website, who guessed whether each vocalization was infant-directed. Their intuitions were more accurate than chance, predictable in part by common sets of acoustic features and robust to the effects of linguistic relatedness between vocalizer and listener. These findings inform hypotheses of the psychological functions and evolution of human communication.


Subject(s)
Music , Voice , Humans , Adult , Infant , Speech , Language , Acoustics
3.
Behav Brain Sci ; 44: e117, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588056

ABSTRACT

I challenge Mehr et al.'s contention that ancestral mothers were reluctant to provide all the attention demanded by their infants. The societies in which music emerged likely involved foraging mothers who engaged in extensive infant carrying, feeding, and soothing. Accordingly, their singing was multimodal, its rhythms aligned with maternal movements, with arousal regulatory consequences for singers and listeners.


Subject(s)
Music , Singing , Arousal , Attention , Female , Humans , Infant , Mothers
4.
Autism Res ; 14(6): 1127-1133, 2021 06.
Article in English | MEDLINE | ID: mdl-33398938

ABSTRACT

Adults and children with typical development (TD) remember vocal melodies (without lyrics) better than instrumental melodies, an advantage attributed to the biological and social significance of human vocalizations. Here we asked whether children with autism spectrum disorder (ASD), who have persistent difficulties with communication and social interaction, and adolescents and adults with Williams syndrome (WS), who are highly sociable, even indiscriminately friendly, exhibit a memory advantage for vocal melodies like that observed in individuals with TD. We tested 26 children with ASD, 26 adolescents and adults with WS of similar mental age, and 26 children with TD on their memory for vocal and instrumental (piano, marimba) melodies. After exposing them to 12 unfamiliar folk melodies with different timbres, we required them to indicate, in an unexpected recognition test, whether each of 24 melodies (half heard previously) was old (heard before) or new (not heard before). Although all groups successfully distinguished the old from the new melodies, they differed in overall memory. Nevertheless, they exhibited a comparable advantage for vocal melodies. In short, individuals with ASD and WS show enhanced processing of socially significant auditory signals in the context of music.
LAY SUMMARY: Typically developing children and adults remember vocal melodies better than instrumental melodies. In this study, we found that children with autism spectrum disorder, who have severe social processing deficits, and children and adults with Williams syndrome, who are highly sociable, exhibit comparable memory advantages for vocal melodies. The results have implications for musical interventions with these populations.


Subject(s)
Autism Spectrum Disorder , Music , Voice , Williams Syndrome , Adolescent , Adult , Auditory Perception , Autism Spectrum Disorder/complications , Child , Humans , Williams Syndrome/complications
5.
Dev Psychol ; 56(5): 861-868, 2020 May.
Article in English | MEDLINE | ID: mdl-32162936

ABSTRACT

Parents commonly vocalize to infants to mitigate their distress, especially when holding them is not possible. Here we examined the relative efficacy of parents' speech and singing (familiar and unfamiliar songs) in alleviating the distress of 8- and 10-month-old infants (n = 68 per age group). Parent-infant dyads participated in 3 trials of the Still-Face procedure, each featuring a 2-min Play Phase, a Still-Face Phase (parents immobile and unresponsive for 1 min or until infants became visibly distressed), and a 2-min Reunion Phase in which caregivers attempted to reverse infant distress by (a) singing a highly familiar song, (b) singing an unfamiliar song, or (c) expressive talking (order counterbalanced across dyads). In the Reunion Phase, talking led to increased negative affect in both age groups, in contrast to singing familiar or unfamiliar songs, which increased infant attention to parent and decreased negative affect. The favorable consequences were greatest for familiar songs, which also generated increased smiling. Skin conductance recorded from a subset of infants (n = 36 younger, 41 older infants) revealed that arousal levels were highest during the talking reunion, lowest during unfamiliar songs, and intermediate during familiar songs. The arousal effects, considered in conjunction with the behavioral effects, confirm that songs are more effective than speech at mitigating infant distress. We suggest, moreover, that familiar songs generate higher infant arousal than unfamiliar songs because they evoke excitement, reflected in modestly elevated arousal as well as pleasure, in contrast to the more subdued responses to unfamiliar songs.


Subject(s)
Attention/physiology , Emotions , Music/psychology , Parents/psychology , Speech , Stress, Psychological/psychology , Adult , Auditory Perception , Female , Humans , Infant , Infant Behavior/psychology , Male
6.
J Cogn Neurosci ; 32(7): 1213-1220, 2020 07.
Article in English | MEDLINE | ID: mdl-30912725

ABSTRACT

Mothers around the world sing to infants, presumably to regulate their mood and arousal. Lullabies and playsongs differ stylistically and have distinctive goals. Mothers sing lullabies to soothe and calm infants and playsongs to engage and excite them. In this study, mothers repeatedly sang "Twinkle, Twinkle, Little Star" to their infants (n = 30 dyads), alternating between soothing and playful renditions. Infant attention and mother-infant arousal (i.e., skin conductance) were recorded continuously. During soothing renditions, mother and infant arousal decreased below initial levels as the singing progressed. During playful renditions, maternal and infant arousal remained stable. Moreover, infants exhibited greater attention to mother during playful renditions than during soothing renditions. Mothers' playful renditions were faster, higher in pitch, louder, and characterized by greater pulse clarity than their soothing renditions. Mothers also produced more energetic rhythmic movements during their playful renditions. These findings highlight the contrastive nature and consequences of lullabies and playsongs.


Subject(s)
Mothers , Singing , Arousal , Emotions , Female , Humans , Infant , Play and Playthings
7.
J Exp Psychol Gen ; 149(4): 634-649, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31512903

ABSTRACT

Many scholars consider preferences for consonance, as defined by Western music theorists, to be based primarily on biological factors, whereas others emphasize experiential factors, notably the nature of musical exposure. Cross-cultural experiments suggest that consonance preferences are shaped by musical experience, implying that preferences should emerge or become stronger over development for individuals in Western cultures. However, little is known about this developmental trajectory. We measured preferences for the consonance of simultaneous sounds and related acoustic properties in children and adults to characterize their developmental course and dependence on musical experience. In Study 1, adults and children 6 to 10 years of age rated their liking of simultaneous tone combinations (dyads) and affective vocalizations. Preferences for consonance increased with age and were predicted by changing preferences for harmonicity (the degree to which a sound's frequencies are multiples of a common fundamental frequency) but not by evaluations of beating (fluctuations in amplitude that occur when frequencies are close but not identical, producing the sensation of acoustic roughness). In Study 2, musically trained adults and 10-year-old children rated the same stimuli. Age and musical training were associated with enhanced preference for consonance. Both measures of experience were associated with an enhanced preference for harmonicity but were unrelated to evaluations of beating stimuli. The findings link Western musical experience to preferences for consonance and harmonicity, consistent with cross-cultural evidence and with the effects of musicianship in Western adults.
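
As a concrete illustration of the two acoustic properties described above, the short sketch below (our own example with arbitrarily chosen frequencies, not a stimulus from the study) synthesizes two tone pairs: one whose components are close in frequency and therefore beat, and one tuned to a simple 3:2 ratio with no slow beating.

import numpy as np  # assumed available; used only to synthesize example signals

sr = 44100                       # sample rate in Hz
t = np.arange(0, 1.0, 1.0 / sr)  # one second of time samples

def dyad(f1, f2):
    """Sum of two equal-amplitude sine tones at frequencies f1 and f2 (Hz)."""
    return np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

rough = dyad(440.0, 460.0)  # components 20 Hz apart: amplitude beats at 20 Hz (roughness)
fifth = dyad(440.0, 660.0)  # 3:2 frequency ratio: harmonically related, no slow beating

# The envelope of `rough` rises and falls |f1 - f2| = 20 times per second,
# the fluctuation listeners report as roughness; `fifth` lacks that slow modulation.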


Subject(s)
Auditory Perception/physiology , Emotions/physiology , Music/psychology , Acoustic Stimulation , Adult , Child , Female , Humans , Male
8.
Front Psychol ; 10: 1073, 2019.
Article in English | MEDLINE | ID: mdl-31156507

ABSTRACT

Rhythmic movement to music, whether deliberate (e.g., dancing) or inadvertent (e.g., foot-tapping), is ubiquitous. Although parents commonly report that infants move rhythmically to music, especially to familiar music in familiar environments, there has been little systematic study of this behavior. As a preliminary exploration of infants' movement to music in their home environment, we studied V, an infant who began moving rhythmically to music at 6 months of age. Our primary goal was to generate testable hypotheses about movement to music in infancy. Across nine sessions, beginning when V was almost 19 months of age and ending 8 weeks later, she was video-recorded by her mother while listening to 60-s excerpts from two familiar and two unfamiliar songs, each presented at three tempos: the original song tempo as well as faster and slower versions. V exhibited a number of repeated dance movements such as head-bobbing, arm-pumping, torso twists, and bouncing. She danced most to Metallica's "Now That We're Dead," a recording that her father played daily in V's presence, often dancing with her while it played. Its high pulse clarity, in conjunction with familiarity, may have increased V's propensity to dance, as reflected in less dancing to familiar music with low pulse clarity and to unfamiliar music with high pulse clarity. V moved faster to faster music, but only for unfamiliar music, perhaps because arousal drove her movement to familiar music. Her movement to music was positively correlated with smiling, highlighting the pleasurable nature of the experience. Rhythmic movement to music may have enhanced her pleasure, and the joy of listening may have promoted her movement. On the basis of behavior observed in this case study, we propose a scaled-up study to obtain definitive evidence about the effects of song familiarity and specific musical features on infant rhythmic movement, the developmental trajectory of dance skills, and the typical range of variation in such skills.

9.
Prog Brain Res ; 237: 225-242, 2018.
Article in English | MEDLINE | ID: mdl-29779736

ABSTRACT

Across cultures, aspects of music and dance contribute to everyday life in a variety of ways that do not depend on artistry, aesthetics, or expertise. In this chapter, we focus on precursors to music and dance that are evident in infancy: the underlying perceptual abilities, parent-infant musical interactions that are motivated by nonmusical goals, the consequences of such interactions for mood regulation and social regulation, and the emergence of rudimentary singing and rhythmic movement to music. These precursors to music and dance lay the groundwork for our informal engagement with music throughout life and its continuing effects on mood regulation, affiliation, and well-being.


Subject(s)
Auditory Perception , Child Development/physiology , Dancing , Music , Psychomotor Performance/physiology , Affect , Child Behavior , Child, Preschool , Female , Humans , Infant , Male
10.
Ann N Y Acad Sci ; 2018 Mar 07.
Article in English | MEDLINE | ID: mdl-29512877

ABSTRACT

Infants typically experience music through social interactions with others. One such experience involves caregivers singing to infants while holding and bouncing them rhythmically. These highly social interactions shape infant music perception and may also influence social cognition and behavior. Moving in time with others (interpersonal synchrony) can direct infants' social preferences and prosocial behavior. Infants also show social preferences and selective prosociality toward singers of familiar, socially learned melodies. Here, we discuss recent studies of the influence of musical engagement on infant social cognition and behavior, highlighting the importance of rhythmic movement and socially relevant melodies.

11.
J Acoust Soc Am ; 141(5): 3123, 2017 05.
Article in English | MEDLINE | ID: mdl-28599538

ABSTRACT

The present study compared children's and adults' identification and discrimination of declarative questions and statements on the basis of terminal cues alone. Children (8-11 years, n = 41) and adults (n = 21) judged utterances as statements or questions from sentences with natural statement and question endings and with manipulated endings that featured intermediate fundamental frequency (F0) values. The same adults and a different sample of children (n = 22) were also tested on their discrimination of the utterances. Children's judgments shifted more gradually across categories than those of adults, but their category boundaries were comparable. In the discrimination task, adults found cross-boundary comparisons more salient than within-boundary comparisons. Adults' performance on the identification and discrimination tasks is consistent with, but not definitive evidence of, categorical perception of statements and questions. Children, by contrast, discriminated the cross-boundary comparisons no better than other comparisons. The findings indicate age-related sharpening in the perception of statements and questions based on terminal F0 cues and the gradual emergence of distinct perceptual categories.


Subject(s)
Cues , Discrimination, Psychological , Pitch Discrimination , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adult , Age Factors , Audiometry, Speech , Child , Child Behavior , Child Development , Female , Humans , Male , Recognition, Psychology , Young Adult
12.
Lang Speech ; 60(1): 154-166, 2017 03.
Article in English | MEDLINE | ID: mdl-28326993

ABSTRACT

This study investigates the oral gestures of 8-month-old infants in response to audiovisual presentation of lip and tongue smacks. Infants exhibited more lip gestures than tongue gestures following adult lip smacks and more tongue gestures than lip gestures following adult tongue smacks. The findings, which are consistent with predictions from Articulatory Phonology, imply that 8-month-old infants are capable of producing goal-directed oral gestures by matching the articulatory organ of an adult model.


Subject(s)
Child Language , Gestures , Imitative Behavior , Infant Behavior , Lip/physiology , Tongue/physiology , Acoustic Stimulation , Adult , Female , Humans , Infant , Male , Photic Stimulation , Tongue Habits
13.
Front Psychol ; 7: 939, 2016.
Article in English | MEDLINE | ID: mdl-27445907

ABSTRACT

The available evidence indicates that the music of a culture reflects the speech rhythm of the prevailing language. The normalized pairwise variability index (nPVI) is a measure of durational contrast between successive events that can be applied to vowels in speech and to notes in music. Music-language parallels may have implications for the acquisition of language and music, but it is unclear whether native-language rhythms are reflected in children's songs. In general, children's songs exhibit greater rhythmic regularity than adults' songs, in line with their caregiving goals and frequent coordination with rhythmic movement. Accordingly, one might expect lower nPVI values (i.e., lower variability) for such songs regardless of culture. In addition to their caregiving goals, children's songs may serve an intuitive didactic function by modeling culturally relevant content and structure for music and language. One might therefore expect pronounced rhythmic parallels between children's songs and language of origin. To evaluate these predictions, we analyzed a corpus of 269 English and French songs from folk and children's music anthologies. As in prior work, nPVI values were significantly higher for English than for French children's songs. For folk songs (i.e., songs not for children), the difference in nPVI for English and French songs was small and in the expected direction but non-significant. We subsequently collected ratings from American and French monolingual and bilingual adults, who rated their familiarity with each song, how much they liked it, and whether or not they thought it was a children's song. Listeners gave higher familiarity and liking ratings to songs from their own culture, and they gave higher familiarity and preference ratings to children's songs than to other songs. Although higher child-directedness ratings were given to children's than to folk songs, French listeners drove this effect, and their ratings were uniquely predicted by nPVI. Together, these findings suggest that language-based rhythmic structures are evident in children's songs, and that listeners expect exaggerated language-based rhythms in children's songs. The implications of these findings for enculturation processes and for the acquisition of music and language are discussed.
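
For readers unfamiliar with the measure, a minimal sketch of the nPVI computation follows, using the standard formula from the speech-rhythm literature; the duration sequences are invented examples, not data from this corpus.

def npvi(durations):
    """Normalized pairwise variability index over a sequence of event durations
    (vowel or note durations): 100 * mean of |d1 - d2| / ((d1 + d2) / 2)
    over successive pairs."""
    if len(durations) < 2:
        raise ValueError("nPVI requires at least two durations")
    pairs = zip(durations[:-1], durations[1:])
    contrasts = [abs(d1 - d2) / ((d1 + d2) / 2.0) for d1, d2 in pairs]
    return 100.0 * sum(contrasts) / len(contrasts)

print(npvi([1, 1, 1, 1]))  # 0.0   -> perfectly even durations, low rhythmic contrast
print(npvi([1, 3, 1, 3]))  # 100.0 -> strong alternation of long and short events

Higher values thus indicate greater durational contrast between successive events, the property reported here to be higher in English than in French children's songs.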

14.
J Exp Psychol Hum Percept Perform ; 42(8): 1061-5, 2016 08.
Article in English | MEDLINE | ID: mdl-27123682

ABSTRACT

Previous research reveals that vocal melodies are remembered better than instrumental renditions. Here we explored the possibility that the voice, as a highly salient stimulus, elicits greater arousal than nonvocal stimuli, resulting in greater pupil dilation for vocal than for instrumental melodies. We also explored the possibility that pupil dilation indexes memory for melodies. We tracked pupil dilation during a single exposure to 24 unfamiliar folk melodies (half sung to la la, half piano) and during a subsequent recognition test in which the previously heard melodies were intermixed with 24 novel melodies (half sung, half piano) from the same corpus. Pupil dilation was greater for vocal melodies than for piano melodies in the exposure phase and in the test phase. It was also greater for previously heard melodies than for novel melodies. Our findings provide the first evidence that pupillometry can be used to measure recognition of stimuli that unfold over several seconds. They also provide the first evidence of enhanced arousal to vocal melodies during encoding and retrieval, thereby supporting the more general notion of the voice as a privileged signal.


Subject(s)
Auditory Perception/physiology , Music , Pupil/physiology , Recognition, Psychology/physiology , Voice , Adult , Female , Humans , Male , Young Adult
15.
J Child Lang ; 43(5): 1174-91, 2016 09.
Article in English | MEDLINE | ID: mdl-26374079

ABSTRACT

Young children are slow to master conventional intonation patterns in their yes/no questions, which may stem from imperfect understanding of the links between terminal pitch contours and pragmatic intentions. In Experiment 1, five- to ten-year-old children and adults were required to judge utterances as questions or statements on the basis of intonation alone. Children eight years of age or younger performed above chance levels but less accurately than adult listeners. To ascertain whether the verbal content of utterances interfered with young children's attention to the relevant acoustic cues, low-pass filtered versions of the same utterances were presented to children and adults in Experiment 2. Low-pass filtering reduced performance comparably for all age groups, perhaps because such filtering reduced the salience of critical pitch cues. Young children's difficulty in differentiating declarative questions from statements is not attributable to basic perceptual difficulties but rather to absent or unstable intonation categories.


Subject(s)
Cues , Language Development , Linguistics , Semantics , Speech Acoustics , Speech Perception , Adult , Attention , Child , Child, Preschool , Female , Humans , Male , Sound Spectrography
16.
Proc Natl Acad Sci U S A ; 112(29): 8809-10, 2015 Jul 21.
Article in English | MEDLINE | ID: mdl-26157132
17.
Q J Exp Psychol (Hove) ; 68(5): 866-77, 2015.
Article in English | MEDLINE | ID: mdl-25835127

ABSTRACT

Nonmusicians remember vocal melodies (i.e., sung to la la) better than instrumental melodies. If greater exposure to the voice contributes to those effects, then long-term experience with instrumental timbres should elicit instrument-specific advantages. Here we evaluate this hypothesis by comparing pianists with other musicians and nonmusicians. We also evaluate the possibility that absolute pitch (AP), which involves exceptional memory for isolated pitches, influences melodic memory. Participants heard 24 melodies played in four timbres (voice, piano, banjo, marimba) and were subsequently required to distinguish the melodies heard previously from 24 novel melodies presented in the same timbres. Musicians performed better than nonmusicians, but both groups showed a comparable memory advantage for vocal melodies. Moreover, pianists performed no better on melodies played on piano than on other instruments, and AP musicians performed no differently than non-AP musicians. The findings confirm the robust nature of the voice advantage and rule out explanations based on familiarity, practice, and motor representations.


Subject(s)
Auditory Perception/physiology , Memory/physiology , Music/psychology , Paint , Voice , Acoustic Stimulation , Adult , Analysis of Variance , Female , Humans , Male , Psychoacoustics , Recognition, Psychology/physiology , Young Adult
18.
Ann N Y Acad Sci ; 1337: 186-92, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25773634

ABSTRACT

Adolescents and adults commonly use music for various forms of affect regulation, including relaxation, revitalization, distraction, and elicitation of pleasant memories. Mothers throughout the world also sing to their infants, with affect regulation as the principal goal. To date, the study of maternal singing has focused largely on its acoustic features and its consequences for infant attention. We describe recent laboratory research that explores the consequences of singing for infant affect regulation. Such work reveals that listening to recordings of play songs can maintain 6- to 9-month-old infants in a relatively contented or neutral state considerably longer than recordings of infant-directed or adult-directed speech. When 10-month-old infants fuss or cry and are highly aroused, mothers' multimodal singing is more effective than maternal speech at inducing recovery from such distress. Moreover, play songs are more effective than lullabies at reducing arousal in Western infants. We explore the implications of these findings along with possible practical applications.


Subject(s)
Affect , Mother-Child Relations , Music , Adult , Attention , Auditory Perception , Cultural Characteristics , Humans , Infant , Infant Behavior , Mothers , Play and Playthings , Speech , Stress, Psychological
19.
Philos Trans R Soc Lond B Biol Sci ; 370(1664): 20140088, 2015 Mar 19.
Article in English | MEDLINE | ID: mdl-25646511

ABSTRACT

Musicality can be defined as a natural, spontaneously developing trait based on and constrained by biology and cognition. Music, by contrast, can be defined as a social and cultural construct based on that very musicality. One critical challenge is to delineate the constituent elements of musicality. What biological and cognitive mechanisms are essential for perceiving, appreciating and making music? Progress in understanding the evolution of music cognition depends upon adequate characterization of the constituent mechanisms of musicality and the extent to which they are present in non-human species. We argue for the importance of identifying these mechanisms and delineating their functions and developmental course, as well as suggesting effective means of studying them in human and non-human animals. It is virtually impossible to underpin the evolutionary role of musicality as a whole, but a multicomponent perspective on musicality that emphasizes its constituent capacities, development and neural cognitive specificity is an excellent starting point for a research programme aimed at illuminating the origins and evolution of musical behaviour as an autonomous trait.


Subject(s)
Biological Evolution , Cognition/physiology , Music , Adaptation, Physiological , Culture , Humans
20.
Philos Trans R Soc Lond B Biol Sci ; 370(1664): 20140096, 2015 Mar 19.
Article in English | MEDLINE | ID: mdl-25646519

ABSTRACT

Musical behaviours are universal across human populations and, at the same time, highly diverse in their structures, roles and cultural interpretations. Although laboratory studies of isolated listeners and music-makers have yielded important insights into sensorimotor and cognitive skills and their neural underpinnings, they have revealed little about the broader significance of music for individuals, peer groups and communities. This review presents a sampling of musical forms and coordinated musical activity across cultures, with the aim of highlighting key similarities and differences. The focus is on scholarly and everyday ideas about music (what it is and where it originates), as well as the antiquity of music and the contribution of musical behaviour to ritual activity, social organization, caregiving and group cohesion. Synchronous arousal, action synchrony and imitative behaviours are among the means by which music facilitates social bonding. The commonalities and differences in musical forms and functions across cultures suggest new directions for ethnomusicology, music cognition and neuroscience, and a pivot away from the predominant scientific focus on instrumental music in the Western European tradition.


Subject(s)
Cross-Cultural Comparison , Music , Human Activities , Humans , Social Behavior