Results 1 - 20 of 59
1.
Behav Brain Sci ; 44: e117, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588056

ABSTRACT

I challenge Mehr et al.'s contention that ancestral mothers were reluctant to provide all the attention demanded by their infants. The societies in which music emerged likely involved foraging mothers who engaged in extensive infant carrying, feeding, and soothing. Accordingly, their singing was multimodal, its rhythms aligned with maternal movements, with arousal-regulatory consequences for singers and listeners.


Subject(s)
Music; Singing; Arousal; Attention; Female; Humans; Infant; Mothers
2.
J Cogn Neurosci ; 32(7): 1213-1220, 2020 07.
Article in English | MEDLINE | ID: mdl-30912725

ABSTRACT

Mothers around the world sing to infants, presumably to regulate their mood and arousal. Lullabies and playsongs differ stylistically and have distinctive goals. Mothers sing lullabies to soothe and calm infants and playsongs to engage and excite infants. In this study, mothers repeatedly sang Twinkle, Twinkle, Little Star to their infants (n = 30 dyads), alternating between soothing and playful renditions. Infant attention and mother-infant arousal (i.e., skin conductivity) were recorded continuously. During soothing renditions, mother and infant arousal decreased below initial levels as the singing progressed. During playful renditions, maternal and infant arousal remained stable. Moreover, infants exhibited greater attention to mother during playful renditions than during soothing renditions. Mothers' playful renditions were faster, higher in pitch, louder, and characterized by greater pulse clarity than their soothing renditions. Mothers also produced more energetic rhythmic movements during their playful renditions. These findings highlight the contrastive nature and consequences of lullabies and playsongs.


Subject(s)
Mothers; Singing; Arousal; Emotions; Female; Humans; Infant; Play and Playthings
3.
J Acoust Soc Am ; 141(5): 3123, 2017 05.
Article in English | MEDLINE | ID: mdl-28599538

ABSTRACT

The present study compared children's and adults' identification and discrimination of declarative questions and statements on the basis of terminal cues alone. Children (8-11 years, n = 41) and adults (n = 21) judged utterances as statements or questions from sentences with natural statement and question endings and with manipulated endings that featured intermediate fundamental frequency (F0) values. The same adults and a different sample of children (n = 22) were also tested on their discrimination of the utterances. Children's judgments shifted more gradually across categories than those of adults, but their category boundaries were comparable. In the discrimination task, adults found cross-boundary comparisons more salient than within-boundary comparisons. Adults' performance on the identification and discrimination tasks is consistent with but not definitive regarding categorical perception of statements and questions. Children, by contrast, discriminated the cross-boundary comparisons no better than other comparisons. The findings indicate age-related sharpening in the perception of statements and questions based on terminal F0 cues and the gradual emergence of distinct perceptual categories.


Subject(s)
Cues; Discrimination, Psychological; Pitch Discrimination; Speech Acoustics; Speech Perception; Voice Quality; Acoustic Stimulation; Adult; Age Factors; Audiometry, Speech; Child; Child Behavior; Child Development; Female; Humans; Male; Recognition, Psychology; Young Adult
4.
Lang Speech ; 60(1): 154-166, 2017 03.
Article in English | MEDLINE | ID: mdl-28326993

ABSTRACT

This study investigates the oral gestures of 8-month-old infants in response to audiovisual presentation of lip and tongue smacks. Infants exhibited more lip gestures than tongue gestures following adult lip smacks and more tongue gestures than lip gestures following adult tongue smacks. The findings, which are consistent with predictions from Articulatory Phonology, imply that 8-month-old infants are capable of producing goal-directed oral gestures by matching the articulatory organ of an adult model.


Subject(s)
Child Language; Gestures; Imitative Behavior; Infant Behavior; Lip/physiology; Tongue/physiology; Acoustic Stimulation; Adult; Female; Humans; Infant; Male; Photic Stimulation; Tongue Habits
5.
J Child Lang ; 43(5): 1174-91, 2016 09.
Article in English | MEDLINE | ID: mdl-26374079

ABSTRACT

Young children are slow to master conventional intonation patterns in their yes/no questions, which may stem from imperfect understanding of the links between terminal pitch contours and pragmatic intentions. In Experiment 1, five- to ten-year-old children and adults were required to judge utterances as questions or statements on the basis of intonation alone. Children eight years of age or younger performed above chance levels but less accurately than adult listeners. To ascertain whether the verbal content of utterances interfered with young children's attention to the relevant acoustic cues, low-pass filtered versions of the same utterances were presented to children and adults in Experiment 2. Low-pass filtering reduced performance comparably for all age groups, perhaps because such filtering reduced the salience of critical pitch cues. Young children's difficulty in differentiating declarative questions from statements is not attributable to basic perceptual difficulties but rather to absent or unstable intonation categories.


Subject(s)
Cues; Language Development; Linguistics; Semantics; Speech Acoustics; Speech Perception; Adult; Attention; Child; Child, Preschool; Female; Humans; Male; Sound Spectrography
6.
Ear Hear ; 35(1): 118-25, 2014.
Article in English | MEDLINE | ID: mdl-24213020

ABSTRACT

OBJECTIVES: Although the spectrally degraded input provided by cochlear implants (CIs) is sufficient for speech perception in quiet, it poses problems for talker identification. The present study examined the ability of normally hearing (NH) children and child CI users to recognize cartoon voices while listening to spectrally degraded speech. DESIGN: In Experiment 1, 5- to 6-year-old NH children were required to identify familiar cartoon characters in a three-alternative, forced-choice task without feedback. Children heard sentence-length utterances at six levels of spectral degradation (noise-vocoded utterances with 4, 8, 12, 16, and 24 frequency bands and the original or unprocessed stimuli). In Experiment 2, child CI users 4 to 7 years of age and a control sample of 4- to 5-year-old NH children were required to identify the unprocessed stimuli from Experiment 1. RESULTS: NH children in Experiment 1 identified the voices significantly above chance levels, and they performed more accurately with increasing spectral information. Practice with stimuli that had greater spectral information facilitated performance on subsequent stimuli with lesser spectral information. In Experiment 2, child CI users successfully recognized the cartoon voices with slightly lower accuracy (0.90 proportion correct) than NH peers who listened to unprocessed utterances (0.97 proportion correct). CONCLUSIONS: The findings indicate that both NH children and child CI users can identify cartoon voices under conditions of severe spectral degradation. In such circumstances, children may rely on talker-specific phonetic detail to distinguish one talker from another.


Subject(s)
Deafness/rehabilitation; Pattern Recognition, Physiological/physiology; Speech Perception/physiology; Voice; Acoustic Stimulation/methods; Case-Control Studies; Child; Child, Preschool; Cochlear Implantation; Cochlear Implants; Deafness/physiopathology; Humans; Sound Spectrography
7.
Psychol Res ; 77(2): 196-203, 2013 Mar.
Article in English | MEDLINE | ID: mdl-22367155

ABSTRACT

We examined the influence of incidental exposure to varied metrical patterns from different musical cultures on the perception of complex metrical structures from an unfamiliar musical culture. Adults who were familiar with Western music only (i.e., simple meters) and those who also had limited familiarity with non-Western music were tested on their perception of metrical organization in unfamiliar (Turkish) music with simple and complex meters. Adults who were familiar with Western music detected meter-violating changes in Turkish music with simple meter but not in Turkish music with complex meter. Adults with some exposure to non-Western music that was unmetered or metrically complex detected meter-violating changes in Turkish music with both simple and complex meters, but they performed better on patterns with a simple meter. The implication is that familiarity with varied metrical structures, including those with a non-isochronous tactus, enhances sensitivity to the metrical organization of unfamiliar music.


Subject(s)
Auditory Perception/physiology; Cross-Cultural Comparison; Music/psychology; Recognition, Psychology/physiology; Adult; Female; Humans; Male; Young Adult
8.
Proc Natl Acad Sci U S A ; 112(29): 8809-10, 2015 Jul 21.
Article in English | MEDLINE | ID: mdl-26157132
9.
Psychol Music ; 51(1): 172-187, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36532618

ABSTRACT

We examined pitch-error detection in well-known songs sung with or without meaningful lyrics. In Experiment 1, adults heard the initial phrase of familiar songs sung with lyrics or repeating syllables (la) and judged whether they heard an out-of-tune note. Half of the renditions had a single pitch error (50 or 100 cents); half were in tune. Listeners were poorer at pitch-error detection in songs with lyrics. In Experiment 2, within-note pitch fluctuations in the same performances were eliminated by auto-tuning. Again, pitch-error detection was worse for renditions with lyrics (50 cents), suggesting adverse effects of semantic processing. In Experiment 3, songs were sung with repeating syllables or scat syllables to ascertain the role of phonetic variability. Performance was poorer for scat than for repeating syllables, indicating adverse effects of phonetic variability, but overall performance exceeded that in Experiment 1. In Experiment 4, listeners evaluated songs in all styles (repeating syllables, scat, lyrics) within the same session. Performance was best with repeating syllables (50 cents) and did not differ between scat and lyric versions. In short, tracking the pitches of highly familiar songs was impaired by the presence of words, an impairment stemming primarily from phonetic variability rather than interference from semantic processing.

10.
Psychol Sci ; 23(10): 1074-8, 2012 Oct 01.
Article in English | MEDLINE | ID: mdl-22894936

ABSTRACT

Across species, there is considerable evidence of preferential processing for biologically significant signals such as conspecific vocalizations and the calls of individual conspecifics. Surprisingly, music cognition in human listeners is typically studied with stimuli that are relatively low in biological significance, such as instrumental sounds. The present study explored the possibility that melodies might be remembered better when presented vocally rather than instrumentally. Adults listened to unfamiliar folk melodies, with some presented in familiar timbres (voice and piano) and others in less familiar timbres (banjo and marimba). They were subsequently tested on recognition of previously heard melodies intermixed with novel melodies. Melodies presented vocally were remembered better than those presented instrumentally even though they were liked less. Factors underlying the advantage for vocal melodies remain to be determined. In line with its biological significance, vocal music may evoke increased vigilance or arousal, which in turn may result in greater depth of processing and enhanced memory for musical details.


Subject(s)
Auditory Perception/physiology; Memory/physiology; Music/psychology; Singing/physiology; Adult; Female; Humans; Male; Recognition, Psychology/physiology; Students/psychology; Young Adult
11.
J Acoust Soc Am ; 131(2): 1307-14, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22352504

ABSTRACT

Japanese 5- to 13-yr-olds who used cochlear implants (CIs) and a comparison group of normally hearing (NH) Japanese children were tested on their perception and production of speech prosody. For the perception task, they were required to judge whether semantically neutral utterances that were normalized for amplitude were spoken in a happy, sad, or angry manner. The performance of NH children was error-free. By contrast, child CI users performed well below ceiling but above chance levels on happy- and sad-sounding utterances but not on angry-sounding utterances. For the production task, children were required to imitate stereotyped Japanese utterances expressing disappointment and surprise as well as culturally typical representations of crow and cat sounds. NH 5- and 6-year-olds produced significantly poorer imitations than older hearing children, but age was unrelated to the imitation quality of child CI users. Overall, child CI users' imitations were significantly poorer than those of NH children, but they did not differ significantly from the imitations of the youngest NH group. Moreover, there was a robust correlation between the performance of child CI users on the perception and production tasks; this implies that difficulties with prosodic perception underlie their difficulties with prosodic imitation.


Subject(s)
Deafness/physiopathology; Emotions/physiology; Phonetics; Pitch Discrimination/physiology; Speech Perception/physiology; Voice Quality/physiology; Adolescent; Analysis of Variance; Child; Child, Preschool; Cochlear Implants; Cues; Deafness/congenital; Deafness/psychology; Female; Humans; Male; Speech Acoustics
12.
J Acoust Soc Am ; 131(1): 501-8, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22280611

ABSTRACT

Temporal information provided by cochlear implants enables successful speech perception in quiet, but limited spectral information precludes comparable success in voice perception. Talker identification and speech decoding by young hearing children (5-7 yr), older hearing children (10-12 yr), and hearing adults were examined by means of vocoder simulations of cochlear implant processing. In Experiment 1, listeners heard vocoder simulations of sentences from a man, woman, and girl and were required to identify the talker from a closed set. Younger children identified talkers more poorly than older listeners, but all age groups showed similar benefit from increased spectral information. In Experiment 2, children and adults provided verbatim repetition of vocoded sentences from the same talkers. The youngest children had more difficulty than older listeners, but all age groups showed comparable benefit from increasing spectral resolution. At comparable levels of spectral degradation, performance on the open-set task of speech decoding was considerably more accurate than on the closed-set task of talker identification. Hearing children's ability to identify talkers and decode speech from spectrally degraded material sheds light on the difficulty of these domains for child implant users.


Subject(s)
Aging/physiology; Cues; Speech Acoustics; Speech Perception/physiology; Analysis of Variance; Child; Child, Preschool; Female; Humans; Male; Noise; Recognition, Psychology; Young Adult
13.
Nat Hum Behav ; 6(11): 1545-1556, 2022 11.
Article in English | MEDLINE | ID: mdl-35851843

ABSTRACT

When interacting with infants, humans often alter their speech and song in ways thought to support communication. Theories of human child-rearing, informed by data on vocal signalling across species, predict that such alterations should appear globally. Here, we show acoustic differences between infant-directed and adult-directed vocalizations across cultures. We collected 1,615 recordings of infant- and adult-directed speech and song produced by 410 people in 21 urban, rural and small-scale societies. Infant-directedness was reliably classified from acoustic features alone, with acoustic profiles of infant-directedness differing across language and music but in consistent fashion. We then studied listener sensitivity to these acoustic features. We played the recordings to 51,065 people from 187 countries, recruited via an English-language website, who guessed whether each vocalization was infant-directed. Their intuitions were more accurate than chance, predictable in part by common sets of acoustic features and robust to the effects of linguistic relatedness between vocalizer and listener. These findings inform hypotheses of the psychological functions and evolution of human communication.


Subject(s)
Music; Voice; Humans; Adult; Infant; Speech; Language; Acoustics
14.
Autism Res ; 14(6): 1127-1133, 2021 06.
Article in English | MEDLINE | ID: mdl-33398938

ABSTRACT

Adults and children with typical development (TD) remember vocal melodies (without lyrics) better than instrumental melodies, which is attributed to the biological and social significance of human vocalizations. Here we asked whether children with autism spectrum disorder (ASD), who have persistent difficulties with communication and social interaction, and adolescents and adults with Williams syndrome (WS), who are highly sociable, even indiscriminately friendly, exhibit a memory advantage for vocal melodies like that observed in individuals with TD. We tested 26 children with ASD, 26 adolescents and adults with WS of similar mental age, and 26 children with TD on their memory for vocal and instrumental (piano, marimba) melodies. After exposing them to 12 unfamiliar folk melodies with different timbres, we required them to indicate whether each of 24 melodies (half heard previously) was old (heard before) or new (not heard before) during an unexpected recognition test. Although the groups successfully distinguished the old from the new melodies, they differed in overall memory. Nevertheless, they exhibited a comparable advantage for vocal melodies. In short, individuals with ASD and WS show enhanced processing of socially significant auditory signals in the context of music. LAY SUMMARY: Typically developing children and adults remember vocal melodies better than instrumental melodies. In this study, we found that children with Autistic Spectrum Disorder, who have severe social processing deficits, and children and adults with Williams syndrome, who are highly sociable, exhibit comparable memory advantages for vocal melodies. The results have implications for musical interventions with these populations.


Subject(s)
Autism Spectrum Disorder; Music; Voice; Williams Syndrome; Adolescent; Adult; Auditory Perception; Autism Spectrum Disorder/complications; Child; Humans; Williams Syndrome/complications
15.
Ear Hear ; 31(4): 555-66, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20588121

ABSTRACT

OBJECTIVES: The available research indicates that cochlear implant (CI) users have difficulty in differentiating talkers, especially those of the same gender. The goal of this study was to determine whether child CI users could differentiate talkers under favorable stimulus and task conditions. We predicted that the use of a highly familiar voice, full sentences, and a game-like task with feedback would lead to higher performance levels than those achieved in previous studies of talker identification in CI users. DESIGN: In experiment 1, 21 CI users aged 4.8 to 14.3 yrs and 16 normal-hearing (NH) 5-yr-old children were required to differentiate their mother's scripted utterances from those of an unfamiliar man, woman, and girl in a four-alternative forced-choice task with feedback. In one condition, the utterances incorporated natural prosodic variations. In another condition, nonmaternal talkers imitated the prosody of each maternal utterance. In experiment 2, 19 of the child CI users and 11 of the NH children from experiment 1 returned on a subsequent occasion to participate in a task that required them to differentiate their mother's utterances from those of unfamiliar women in a two-alternative forced-choice task with feedback. Again, one condition had natural prosodic variations and another had maternal imitations. RESULTS: Child CI users in experiment 1 succeeded in differentiating their mother's utterances from those of a man, woman, and girl. Their performance was poorer than the performance of younger NH children, which was at ceiling. Child CI users' performance was better in the context of natural prosodic variations than in the context of imitations of maternal prosody. Child CI users in experiment 2 differentiated their mother's utterances from those of other women, and they also performed better on naturally varying samples than on imitations.
CONCLUSIONS: We attribute child CI users' success on talker differentiation, even on same-gender differentiation, to their use of two types of temporal cues: variations in consonant and vowel articulation and variations in speaking rate. Moreover, we contend that child CI users' differentiation of speakers was facilitated by long-term familiarity with their mother's voice.


Subject(s)
Cochlear Implants; Deafness/psychology; Deafness/rehabilitation; Mothers; Recognition, Psychology; Voice; Adolescent; Adult; Child; Child, Preschool; Cues; Female; Humans; Male; Speech Perception; Time Factors
16.
Dev Psychol ; 56(5): 861-868, 2020 May.
Article in English | MEDLINE | ID: mdl-32162936

ABSTRACT

Parents commonly vocalize to infants to mitigate their distress, especially when holding them is not possible. Here we examined the relative efficacy of parents' speech and singing (familiar and unfamiliar songs) in alleviating the distress of 8- and 10-month-old infants (n = 68 per age group). Parent-infant dyads participated in 3 trials of the Still Face procedure, featuring a 2-min Play Phase, a Still Face phase (parents immobile and unresponsive for 1 min or until infants became visibly distressed), and a 2-min Reunion Phase in which caregivers attempted to reverse infant distress by (a) singing a highly familiar song, (b) singing an unfamiliar song, or (c) expressive talking (order counterbalanced across dyads). In the Reunion Phase, talking led to increased negative affect in both age groups, in contrast to singing familiar or unfamiliar songs, which increased infant attention to parent and decreased negative affect. The favorable consequences were greatest for familiar songs, which also generated increased smiling. Skin conductance recorded from a subset of infants (n = 36 younger, 41 older infants) revealed that arousal levels were highest for the talking reunion, lowest for unfamiliar songs, and intermediate for familiar songs. The arousal effects, considered in conjunction with the behavioral effects, confirm that songs are more effective than speech at mitigating infant distress. We suggest, moreover, that familiar songs generate higher infant arousal than unfamiliar songs because they evoke excitement, reflected in modestly elevated arousal as well as pleasure, in contrast to more subdued responses to unfamiliar songs. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Attention/physiology; Emotions; Music/psychology; Parents/psychology; Speech; Stress, Psychological/psychology; Adult; Auditory Perception; Female; Humans; Infant; Infant Behavior/psychology; Male
17.
J Exp Psychol Gen ; 149(4): 634-649, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31512903

ABSTRACT

Many scholars consider preferences for consonance, as defined by Western music theorists, to be based primarily on biological factors, while others emphasize experiential factors, notably the nature of musical exposure. Cross-cultural experiments suggest that consonance preferences are shaped by musical experience, implying that preferences should emerge or become stronger over development for individuals in Western cultures. However, little is known about this developmental trajectory. We measured preferences for the consonance of simultaneous sounds and related acoustic properties in children and adults to characterize their developmental course and dependence on musical experience. In Study 1, adults and children 6 to 10 years of age rated their liking of simultaneous tone combinations (dyads) and affective vocalizations. Preferences for consonance increased with age and were predicted by changing preferences for harmonicity-the degree to which a sound's frequencies are multiples of a common fundamental frequency-but not by evaluations of beating-fluctuations in amplitude that occur when frequencies are close but not identical, producing the sensation of acoustic roughness. In Study 2, musically trained adults and 10-year-old children also rated the same stimuli. Age and musical training were associated with enhanced preference for consonance. Both measures of experience were associated with an enhanced preference for harmonicity, but were unrelated to evaluations of beating stimuli. The findings are consistent with cross-cultural evidence and the effects of musicianship in Western adults in linking Western musical experience to preferences for consonance and harmonicity. (PsycINFO Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Auditory Perception/physiology; Emotions/physiology; Music/psychology; Acoustic Stimulation; Adult; Child; Female; Humans; Male
18.
Cortex ; 45(1): 110-8, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19058799

ABSTRACT

Listeners may favour particular rhythms because of their degree of conformity to culture-specific expectations or because of perceptual constraints that are apparent early in development. In two experiments we examined adults' and 6-month-old infants' detection of subtle rhythmic and melodic changes to two sequences of tones, a conventional rhythm that musically untrained adults rated as rhythmically good and an unconventional rhythm that was rated as poor. Detection of the changes was above chance in all conditions, but adults and infants performed more accurately in the context of the conventional rhythm. Unlike adults, who benefited from rhythmic conventionality only when detecting rhythmic changes, infants benefited when detecting melodic as well as rhythmic changes. The findings point to infant and adult parallels for some aspects of rhythm processing and to integrated perception of rhythm and melody early in life.


Subject(s)
Aging/physiology; Aging/psychology; Auditory Perception/physiology; Music/psychology; Acoustic Stimulation; Adolescent; Adult; Discrimination, Psychological/physiology; Female; Humans; Infant; Male; Psychomotor Performance/physiology; Young Adult
19.
Front Psychol ; 10: 1073, 2019.
Article in English | MEDLINE | ID: mdl-31156507

ABSTRACT

Rhythmic movement to music, whether deliberate (e.g., dancing) or inadvertent (e.g., foot-tapping), is ubiquitous. Although parents commonly report that infants move rhythmically to music, especially to familiar music in familiar environments, there has been little systematic study of this behavior. As a preliminary exploration of infants' movement to music in their home environment, we studied V, an infant who began moving rhythmically to music at 6 months of age. Our primary goal was to generate testable hypotheses about movement to music in infancy. Across nine sessions, beginning when V was almost 19 months of age and ending 8 weeks later, she was video-recorded by her mother during the presentation of 60-s excerpts from two familiar and two unfamiliar songs presented at three tempos: the original song tempo as well as faster and slower versions. V exhibited a number of repeated dance movements such as head-bobbing, arm-pumping, torso twists, and bouncing. She danced most to Metallica's Now That We're Dead, a recording that her father played daily in V's presence, often dancing with her while it played. Its high pulse clarity, in conjunction with familiarity, may have increased V's propensity to dance, as reflected in less dancing to familiar music with low pulse clarity and to unfamiliar music with high pulse clarity. V moved faster to faster music but only for unfamiliar music, perhaps because arousal drove her movement to familiar music. Her movement to music was positively correlated with smiling, highlighting the pleasurable nature of the experience. Rhythmic movement to music may have enhanced her pleasure, and the joy of listening may have promoted her movement.
On the basis of behavior observed in this case study, we propose a scaled-up study to obtain definitive evidence about the effects of song familiarity and specific musical features on infant rhythmic movement, the developmental trajectory of dance skills, and the typical range of variation in such skills.

20.
J Acoust Soc Am ; 124(3): 1759-63, 2008 Sep.
Article in English | MEDLINE | ID: mdl-19045665

ABSTRACT

Musically untrained participants in five age groups (5-, 6-, 8-, and 11-year-olds, and adults) heard sequences of three 1 s piano tones in which the first and third tones were identical (A5, or 880 Hz) but the middle tone was displaced upward or downward in pitch. Their task was to identify whether the middle tone was higher or lower than the other two tones. In experiment 1, 5-year-olds successfully identified upward and downward shifts of 4, 2, 1, 0.5, and 0.3 semitones. In experiment 2, older children (6-, 8-, and 11-year-olds) and adults successfully identified the same shifts as well as a smaller shift (0.1 semitone). For all age groups, performance accuracy decreased as the size of the shift decreased. Performance improved from 5 to 8 years of age, reaching adult levels at 8 years.


Subject(s)
Aging/physiology; Auditory Pathways/growth & development; Child Development; Music; Pitch Discrimination; Pitch Perception; Acoustic Stimulation; Adult; Age Factors; Audiometry; Child; Child, Preschool; Female; Humans; Male