Results 1 - 20 of 127
1.
J Neurosci ; 43(15): 2794-2802, 2023 04 12.
Article in English | MEDLINE | ID: mdl-36914264

ABSTRACT

The ability to extract rhythmic structure is important for the development of language, music, and social communication. Although previous studies show infants' brains entrain to the periodicities of auditory rhythms and even different metrical interpretations (e.g., groups of two vs three beats) of ambiguous rhythms, whether the premature brain tracks beat and meter frequencies has not been explored previously. We used high-resolution electroencephalography while premature infants (n = 19, 5 male; mean age, 32 ± 2.59 weeks gestational age) heard two auditory rhythms in the incubators. We observed selective enhancement of the neural response at both beat- and meter-related frequencies. Further, neural oscillations at the beat and duple (groups of 2) meter were phase aligned with the envelope of the auditory rhythmic stimuli. Comparing the relative power at beat and meter frequencies across stimuli and frequency revealed evidence for selective enhancement of duple meter. This suggests that even at this early stage of development, neural mechanisms for processing auditory rhythms beyond simple sensory coding are present. Our results add to a few previous neuroimaging studies demonstrating discriminative auditory abilities of premature neural networks. Specifically, our results demonstrate the early capacities of the immature neural circuits and networks to code both simple beat and beat grouping (i.e., hierarchical meter) regularities of auditory sequences. Considering the importance of rhythm processing for acquiring language and music, our findings indicate that even before birth, the premature brain is already learning this important aspect of the auditory world in a sophisticated and abstract way.

SIGNIFICANCE STATEMENT: Processing auditory rhythm is of great neurodevelopmental importance. In an electroencephalography experiment in premature newborns, we found converging evidence that when presented with auditory rhythms, the premature brain encodes multiple periodicities corresponding to beat and beat grouping (meter) frequencies, and even selectively enhances the neural response to meter compared with beat, as in human adults. We also found that the phase of low-frequency neural oscillations aligns to the envelope of the auditory rhythms and that this phenomenon becomes less precise at lower frequencies. These findings demonstrate the initial capacities of the developing brain to code auditory rhythm and the importance of special care to the auditory environment of this vulnerable population during a highly dynamic period of neural development.


Subjects
Auditory Perception , Music , Infant, Newborn , Adult , Humans , Male , Infant , Acoustic Stimulation/methods , Auditory Perception/physiology , Brain/physiology , Electroencephalography/methods , Hearing , Periodicity
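
A minimal sketch of the kind of frequency-tagging readout described in this abstract: compute the amplitude spectrum of a trial-averaged EEG channel and read out the bins at beat- and meter-related frequencies. The sampling rate, frequencies, and variable names below are illustrative assumptions, not the study's actual parameters or pipeline.

```python
import numpy as np

def spectral_amplitude(eeg, fs, freqs_of_interest):
    """Amplitude spectrum of an averaged EEG signal at selected frequencies.

    eeg: 1-D array, trial-averaged EEG from one channel
    fs:  sampling rate in Hz
    freqs_of_interest: frequencies (Hz) to read out, e.g. beat and meter rates
    """
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg - eeg.mean())) / n  # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Read out the bin closest to each frequency of interest.
    return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in freqs_of_interest}

# Hypothetical rates: 5 Hz beat, duple meter at 2.5 Hz, triple meter at 1.667 Hz.
fs = 250.0
t = np.arange(0, 60, 1.0 / fs)
eeg = np.random.randn(t.size) * 1e-6        # stand-in for averaged EEG data
amps = spectral_amplitude(eeg, fs, [5.0, 2.5, 1.667])
print(amps)  # enhancement at meter-related bins would suggest meter tracking
```
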
2.
Cereb Cortex ; 33(13): 8734-8747, 2023 06 20.
Article in English | MEDLINE | ID: mdl-37143183

ABSTRACT

Electroencephalography measures are of interest in developmental neuroscience as potentially reliable clinical markers of brain function. Features extracted from electroencephalography are most often averaged across individuals in a population with a particular condition and compared statistically to the mean of a typically developing group, or a group with a different condition, to define whether a feature is representative of the populations as a whole. However, there can be large variability within a population, and electroencephalography features often change dramatically with age, making comparisons difficult. Combined with often low numbers of trials and low signal-to-noise ratios in pediatric populations, establishing biomarkers can be difficult in practice. One approach is to identify electroencephalography features that are less variable between individuals and are relatively stable in a healthy population during development. To identify such features in resting-state electroencephalography, which can be readily measured in many populations, we introduce an innovative application of statistical measures of variance for the analysis of resting-state electroencephalography data. Using these statistical measures, we quantified electroencephalography features commonly used to measure brain development (including power, connectivity, phase-amplitude coupling, entropy, and fractal dimension) according to their intersubject variability. Results from 51 6-month-old infants revealed that the complexity measures, including fractal dimension and entropy, followed by connectivity, were the least variable features across participants. This stability was greatest in the right parietotemporal region for both complexity features, but no significant region of interest was found for the connectivity feature. This study deepens our understanding of physiological patterns of electroencephalography data in developing brains, provides an example of how statistical measures can be used to analyze variability in resting-state electroencephalography in a homogeneous group of healthy infants, contributes to the establishment of robust electroencephalography biomarkers of neurodevelopment through the application of variance analyses, and reveals that nonlinear measures may be the most relevant biomarkers of neurodevelopment.


Subjects
Brain , Electroencephalography , Child , Humans , Infant , Electroencephalography/methods , Brain/physiology , Entropy , Biomarkers
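
The variance-based feature ranking described above can be illustrated with a simple coefficient-of-variation computation across participants. The feature names and simulated values below are placeholders, not the study's data or exact statistics.

```python
import numpy as np

def coefficient_of_variation(values):
    """Intersubject variability of one feature: std / mean across participants."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()

# Hypothetical feature table: one value per infant for each EEG feature.
rng = np.random.default_rng(0)
features = {
    "alpha_power":       rng.lognormal(0.0, 0.6, size=51),
    "connectivity_wpli": rng.beta(2, 5, size=51),
    "fractal_dimension": rng.normal(1.5, 0.05, size=51),
    "sample_entropy":    rng.normal(1.2, 0.08, size=51),
}
ranked = sorted(features, key=lambda k: coefficient_of_variation(features[k]))
for name in ranked:
    print(f"{name}: CV = {coefficient_of_variation(features[name]):.3f}")
# Features at the top of the list vary least across infants and are candidate
# stable markers, in the spirit of the analysis described above.
```
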
3.
Sensors (Basel) ; 24(6)2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38544259

ABSTRACT

Clinical screening tests for balance and mobility often fall short of predicting fall risk. Cognitive distractors and unpredictable external stimuli, common in busy natural environments, contribute to this risk, especially in older adults. Less is known about the effects of upper sensory-motor coordination, such as coordinating one's hand with an external stimulus. We combined movement sonification and affordable inertial motion sensors to develop a task for the precise measurement and manipulation of full-body interaction with stimuli in the environment. In a dual-task design, we studied how a supra-postural activity affected quiet stance. The supra-postural task consisted of rhythmic synchronization with a repetitive auditory stimulus. The stimulus was attentionally demanding because it was modulated continuously. The participant's hand movement was sonified in real time, and their goal was to synchronize their hand movement with the stimulus. In the unpredictable condition, the tempo changed at random points in the trial. A separate sensor recorded postural fluctuations. Young healthy adults were compared to older adult (OA) participants without known risk of falling. The results supported the hypothesis that supra-postural coordination would entrain postural control. The effect was stronger in OAs, supporting the idea that diminished reserve capacities reduce the ability to isolate postural control from sensory-motor and cognitive activity.


Subjects
Movement , Posture , Humans , Aged , Hand , Motion (Physics) , Disease Susceptibility , Postural Balance , Cognition
4.
Dev Sci ; 26(5): e13360, 2023 09.
Article in English | MEDLINE | ID: mdl-36527729

ABSTRACT

The urge to move to music (groove) depends in part on rhythmic syncopation in the music. For adults, the syncopation-groove relationship has an inverted-U shape: listeners want to move most to rhythms that have some, but not too much, syncopation. However, we do not know whether the syncopation-groove relationship is relatively sensitive to, or resistant to, a listener's experience. In two sets of experiments, we tested whether the syncopation-groove relationship is affected by dance experience or changes through development in childhood. Dancers and nondancers rated groove for 50 rhythmic patterns varying in syncopation. Dancers' and nondancers' ratings did not differ (and Bayesian tests provided substantial evidence that they were equivalent) in terms of mean groove and the optimal level of syncopation. Similarly, ballet and hip-hop dancers' syncopation-groove relationships did not differ. However, dancers had more robust syncopation-groove relationships (higher goodness-of-fit) than nondancers. Children (3-6 years old) completed two tasks to assess their syncopation-groove relationships: in a two-alternative forced-choice task, children compared rhythms from two of three possible levels of syncopation (low, medium, and high) and chose which rhythm in a pair was better for dancing; in a dance task, children danced to the same rhythms. Results from both tasks indicated that for children, as for adults, medium-syncopation rhythms elicit more groove than low-syncopation rhythms. A follow-up experiment replicated the two-alternative forced-choice task results. Taken together, the results suggest the optimal level of syncopation for groove is resistant to experience, although experience may affect the robustness of the inverted-U relationship.

RESEARCH HIGHLIGHTS: In Experiment 1, dancers and nondancers rated groove (the urge to move) for musical rhythms, demonstrating the same inverted-U relationships between syncopation and groove. In Experiment 2, children and adults both chose rhythms with moderate syncopation more than low syncopation as more groove-inducing or better for dancing. Children also danced more for moderate than low syncopation, showing a close perception-behavior relationship across tasks. Similarities in the syncopation-groove relationship regardless of dance training and age suggest that this perceptual and behavioral groove response to rhythmic complexity may be quite resistant to experience.


Subjects
Dancing , Music , Adult , Humans , Child , Child, Preschool , Bayes Theorem , Dancing/physiology
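
As a rough illustration of the inverted-U analysis this abstract refers to, one can fit a quadratic to groove ratings as a function of a syncopation index, take the vertex as the optimal syncopation level, and use R² as the "robustness" of the relationship. The data below are simulated, and the quadratic fit is an assumption for illustration rather than the authors' exact method.

```python
import numpy as np

# Simulated data standing in for mean groove ratings of 50 rhythms (not the
# study's stimuli): an inverted U over a syncopation index with added noise.
rng = np.random.default_rng(1)
syncopation = np.linspace(0, 1, 50)
groove = -4.0 * (syncopation - 0.5) ** 2 + 1.0 + rng.normal(0, 0.1, 50)

# Fit a quadratic; a negative leading coefficient indicates an inverted U.
a, b, c = np.polyfit(syncopation, groove, deg=2)
optimal_syncopation = -b / (2 * a)          # vertex of the parabola
fit = np.polyval([a, b, c], syncopation)
r_squared = 1 - np.sum((groove - fit) ** 2) / np.sum((groove - groove.mean()) ** 2)

print(f"leading coefficient a = {a:.2f} (inverted U if a < 0)")
print(f"optimal syncopation   = {optimal_syncopation:.2f}")
print(f"goodness of fit R^2   = {r_squared:.2f}")  # 'robustness' compared across groups
```
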
5.
Eur J Neurosci ; 55(8): 2003-2023, 2022 04.
Article in English | MEDLINE | ID: mdl-35445451

ABSTRACT

From auditory rhythm patterns, listeners extract the underlying steady beat and perceptually group beats to form metres. While previous studies show infants discriminate different auditory metres, it remains unknown whether they can maintain (imagine) a metrical interpretation of an ambiguous rhythm through top-down processes. We investigated this via electroencephalographic mismatch responses. We primed 6-month-old infants (N = 24) to hear a 6-beat ambiguous rhythm either in duple metre (n = 13) or in triple metre (n = 11) through loudness accents either on every second or every third beat. Periods of priming were inserted before sequences of the ambiguous unaccented rhythm. To elicit mismatch responses, occasional pitch deviants occurred on either beat 4 (strong beat in triple metre; weak in duple) or beat 5 (strong in duple; weak in triple) of the unaccented trials. At frontal left sites, we found a significant interaction between beat and priming group in the predicted direction. Post-hoc analyses showed that mismatch response amplitudes were significantly larger for beat 5 in the duple-primed than triple-primed group (p = .047) and were non-significantly larger for beat 4 in the triple-primed than duple-primed group. Further, amplitudes were generally larger in infants with musically experienced parents. At frontal right sites, mismatch responses were generally larger for those in the duple compared with triple group, which may reflect a processing advantage for duple metre. These results indicate that infants can impose a top-down, internally generated metre on ambiguous auditory rhythms, an ability that would aid early language and music learning.


Subjects
Auditory Perception , Music , Acoustic Stimulation/methods , Auditory Perception/physiology , Electroencephalography , Humans , Infant , Motor Activity
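
Mismatch responses such as those reported above are conventionally quantified as a deviant-minus-standard difference wave, summarized over a latency window at frontal channels. The sketch below shows that generic computation on simulated single-channel epochs; the window, sampling rate, and trial counts are assumptions, not the study's parameters.

```python
import numpy as np

def mismatch_response(standard_trials, deviant_trials, times, window=(0.15, 0.25)):
    """Difference wave (deviant minus standard) and its mean amplitude in a window.

    standard_trials, deviant_trials: arrays of shape (n_trials, n_samples), one channel
    times: 1-D array of sample times in seconds, aligned to deviant onset
    window: latency window (s) over which to average the difference wave
    """
    difference = deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return difference, difference[mask].mean()

# Hypothetical numbers: 0.6 s epochs at 500 Hz for a frontal-left channel.
times = np.arange(-0.1, 0.5, 1 / 500)
std = np.random.randn(200, times.size) * 2e-6
dev = np.random.randn(60, times.size) * 2e-6
diff_wave, amp = mismatch_response(std, dev, times)
print(f"mean difference amplitude in window: {amp * 1e6:.2f} microvolts")
# Comparing this amplitude for beat-4 vs. beat-5 deviants across priming groups
# is the kind of contrast the abstract above reports.
```
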
6.
Eur J Neurosci ; 55(8): 1972-1985, 2022 04.
Article in English | MEDLINE | ID: mdl-35357048

ABSTRACT

The human auditory system excels at detecting patterns needed for processing speech and music. According to predictive coding, the brain predicts incoming sounds, compares predictions to sensory input and generates a prediction error whenever a mismatch between the prediction and sensory input occurs. Predictive coding can be indexed in electroencephalography (EEG) with the mismatch negativity (MMN) and P3a, two components of event-related potentials (ERP) that are elicited by infrequent deviant sounds (e.g., differing in pitch, duration and loudness) in a stream of frequent sounds. If these components reflect prediction error, they should also be elicited by omitting an expected sound, but few studies have examined this. We compared ERPs elicited by infrequent randomly occurring omissions (unexpected silences) in tone sequences presented at two tones per second to ERPs elicited by frequent, regularly occurring omissions (expected silences) within a sequence of tones presented at one tone per second. We found that unexpected silences elicited significant MMN and P3a, although the magnitude of these components was quite small and variable. These results provide evidence for hierarchical predictive coding, indicating that the brain predicts silences and sounds.


Subjects
Auditory Evoked Potentials , Evoked Potentials , Acoustic Stimulation/methods , Adult , Auditory Perception/physiology , Electroencephalography/methods , Evoked Potentials/physiology , Auditory Evoked Potentials/physiology , Humans , Sound
7.
Psychol Sci ; 32(9): 1416-1425, 2021 09.
Article in English | MEDLINE | ID: mdl-34409898

ABSTRACT

Anticipating the future is essential for efficient perception and action planning. Yet the role of anticipation in event segmentation is understudied because empirical research has focused on retrospective cues such as surprise. We address this concern in the context of perception of musical-phrase boundaries. A computational model of cognitive sequence processing was used to control the information-dynamic properties of tone sequences. In an implicit, self-paced listening task (N = 38), undergraduates dwelled longer on tones generating high entropy (i.e., high uncertainty) than on those generating low entropy (i.e., low uncertainty). Similarly, sequences that ended on tones generating high entropy were rated as sounding more complete (N = 31 undergraduates). These entropy effects were independent of both the surprise (i.e., information content) and phrase position of target tones in the original musical stimuli. Our results indicate that events generating high entropy prospectively contribute to segmentation processes in auditory sequence perception, independently of the properties of the subsequent event.


Subjects
Music , Auditory Perception , Cues (Psychology) , Humans , Retrospective Studies , Uncertainty
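
The abstract distinguishes entropy (uncertainty about what comes next) from information content (surprise of the event that actually occurred). A short worked example with a made-up next-tone distribution, not output from the authors' model:

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a next-event probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def information_content(dist, event):
    """Surprise (bits) of the event that actually occurred: -log2 p(event)."""
    return -math.log2(dist[event])

# Made-up predictive distributions over the next tone:
# a fairly uncertain context...
uncertain = {"C4": 0.3, "D4": 0.25, "E4": 0.25, "G4": 0.2}
# ...and a highly predictable one.
predictable = {"C4": 0.9, "D4": 0.05, "E4": 0.03, "G4": 0.02}

print(f"entropy (uncertain context):   {entropy(uncertain):.2f} bits")
print(f"entropy (predictable context): {entropy(predictable):.2f} bits")
print(f"surprise of hearing G4 in the predictable context: "
      f"{information_content(predictable, 'G4'):.2f} bits")
# The study found listeners dwell longer on, and hear boundaries after, tones
# generating high entropy, independent of the surprise of the following tone.
```
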
8.
Dev Sci ; 24(1): e12982, 2021 01.
Article in English | MEDLINE | ID: mdl-32358988

ABSTRACT

Accurate perception and production of emotional states is important for successful social interactions across the lifespan. Previous research has shown that when identifying emotion in faces, preschool children are more likely to confuse emotions that share valence, but differ in arousal (e.g. sadness and anger) than emotions that share arousal, but differ on valence (e.g. anger and joy). Here, we examined the influence of valence and arousal on children's production of emotion in music. Three-, 5- and 7-year-old children recruited from the greater Hamilton area (N = 74) 'performed' music to produce emotions using a self-pacing paradigm, in which participants controlled the onset and offset of each chord in a musical sequence by repeatedly pressing and lifting the same key on a MIDI piano. Key press velocity controlled the loudness of each chord. Results showed that (a) differentiation of emotions by 5-year-old children was mainly driven by arousal of the target emotion, with differentiation based on both valence and arousal at 7 years and (b) tempo and loudness were used to differentiate emotions earlier in development than articulation. The results indicate that the developmental trajectory of emotion understanding in music may differ from the developmental trajectory in other domains.


Subjects
Music , Anger , Arousal , Child , Child, Preschool , Emotions , Humans
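
A plausible way to derive the performance cues mentioned above (tempo, loudness, articulation) from self-paced MIDI key events is sketched below; the event format and the specific articulation definition are illustrative assumptions, not the study's coding scheme.

```python
import numpy as np

def performance_cues(onsets, offsets, velocities):
    """Summary cues from one self-paced performance.

    onsets, offsets: key-press and key-release times in seconds, one per chord
    velocities: MIDI key-press velocities (0-127), used here as a loudness proxy
    """
    onsets, offsets = np.asarray(onsets), np.asarray(offsets)
    iois = np.diff(onsets)                      # inter-onset intervals
    tempo_bpm = 60.0 / iois.mean()              # faster performances -> higher arousal
    loudness = np.mean(velocities)              # louder -> higher arousal
    # Articulation: how much of each inter-onset interval the key is held down
    # (values near 1 sound legato, small values sound staccato).
    articulation = np.mean((offsets[:-1] - onsets[:-1]) / iois)
    return tempo_bpm, loudness, articulation

# Hypothetical performance of a five-chord sequence.
onsets = [0.00, 0.40, 0.80, 1.22, 1.60]
offsets = [0.35, 0.75, 1.15, 1.55, 1.95]
velocities = [95, 100, 92, 98, 101]
print(performance_cues(onsets, offsets, velocities))
```
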
9.
Child Dev ; 92(5): e907-e923, 2021 09.
Article in English | MEDLINE | ID: mdl-33506491

ABSTRACT

Accurate time perception is crucial for hearing (speech, music) and action (walking, catching). Motor brain regions are recruited during auditory time perception. Therefore, the hypothesis was tested that children (age 6-7) at risk for developmental coordination disorder (rDCD), a neurodevelopmental disorder involving motor difficulties, would show nonmotor auditory time perception deficits. Psychophysical tasks confirmed that children with rDCD have poorer duration and rhythm perception than typically developing children (N = 47, d = 0.95-1.01). Electroencephalography showed delayed mismatch negativity or P3a event-related potential latency in response to duration or rhythm deviants, reflecting inefficient brain processing (N = 54, d = 0.71-0.95). These findings are among the first to characterize perceptual timing deficits in DCD, suggesting important theoretical and clinical implications.


Subjects
Music , Speech Perception , Time Perception , Acoustic Stimulation , Auditory Perception , Child , Electroencephalography , Humans , Speech
10.
Behav Brain Sci ; 44: e116, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588065

ABSTRACT

The evolutionary origins of complex capacities such as musicality are not simple, and likely involved many interacting steps of musicality-specific adaptations, exaptations, and cultural creation. A full account of the origins of musicality needs to consider the role of ancient adaptations such as credible singing, auditory scene analysis, and prediction-reward circuits in constraining the emergence of musicality.


Subjects
Music , Adaptation, Physiological , Biological Evolution , Humans , Reward
11.
Proc Natl Acad Sci U S A ; 114(21): E4134-E4141, 2017 05 23.
Article in English | MEDLINE | ID: mdl-28484007

ABSTRACT

The cultural and technological achievements of the human species depend on complex social interactions. Nonverbal interpersonal coordination, or joint action, is a crucial element of social interaction, but the dynamics of nonverbal information flow among people are not well understood. We used joint music making in string quartets, a complex, naturalistic nonverbal behavior, as a model system. Using motion capture, we recorded body sway simultaneously in four musicians, which reflected real-time interpersonal information sharing. We used Granger causality to analyze predictive relationships among the motion time series of the players to determine the magnitude and direction of information flow among the players. We experimentally manipulated which musician was the leader (followers were not informed who was leading) and whether they could see each other, to investigate how these variables affect information flow. We found that assigned leaders exerted significantly greater influence on others and were less influenced by others compared with followers. This effect was present, whether or not they could see each other, but was enhanced with visual information, indicating that visual as well as auditory information is used in musical coordination. Importantly, performers' ratings of the "goodness" of their performances were positively correlated with the overall degree of body sway coupling, indicating that communication through body sway reflects perceived performance success. These results confirm that information sharing in a nonverbal joint action task occurs through both auditory and visual cues and that the dynamics of information flow are affected by changing group relationships.


Subjects
Kinesics , Leadership , Motion Perception , Movement , Music , Adult , Female , Humans , Male
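
Directed influence between two performers' body-sway time series, as described above, is commonly assessed with Granger causality. The sketch below uses the standard test from statsmodels on simulated sway signals; the preprocessing (differencing) and lag choice are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated anterior-posterior body-sway series for two players (not real data):
# the 'follower' partially tracks the 'leader' with a short lag.
rng = np.random.default_rng(2)
n = 2000
leader = np.cumsum(rng.normal(0, 1, n))                     # random-walk-like sway
follower = 0.6 * np.roll(leader, 5) + 0.5 * np.cumsum(rng.normal(0, 1, n))
leader, follower = np.diff(leader), np.diff(follower)       # difference toward stationarity

# grangercausalitytests checks whether the second column helps predict the first.
res = grangercausalitytests(np.column_stack([follower, leader]), maxlag=10)
p_leader_to_follower = res[5][0]["ssr_ftest"][1]             # p-value at an assumed lag of 5
print(f"p(leader Granger-causes follower) at lag 5: {p_leader_to_follower:.4g}")
# Running the test in both directions for every pair of players, and comparing
# influence magnitudes across leadership and visibility conditions, is the kind
# of analysis the abstract above describes.
```
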
12.
Music Percept ; 37(3): 185-195, 2020 Feb.
Article in English | MEDLINE | ID: mdl-36936548

ABSTRACT

Many foundational questions in the psychology of music require cross-cultural approaches, yet the vast majority of work in the field to date has been conducted with Western participants and Western music. For cross-cultural research to thrive, it will require collaboration between people from different disciplinary backgrounds, as well as strategies for overcoming differences in assumptions, methods, and terminology. This position paper surveys the current state of the field and offers a number of concrete recommendations focused on issues involving ethics, empirical methods, and definitions of "music" and "culture."

13.
Neuroimage ; 198: 31-43, 2019 09.
Article in English | MEDLINE | ID: mdl-31059798

ABSTRACT

Previous studies indicate that temporal predictability can enhance timing and intensity perception, but it is not known whether it also enhances pitch perception, despite pitch being a fundamental perceptual attribute of sound. Here we investigate this in the context of rhythmic regularity, a form of predictable temporal structure common in sound streams, including music and speech. It is known that neural oscillations in low (delta: 1-3 Hz) and high (beta: 15-25 Hz) frequency bands entrain to rhythms in phase and power, respectively, but it is not clear why both low and high frequency bands entrain to external rhythms, and whether they and their coupling serve different perceptual functions. Participants discriminated near-threshold pitch deviations (targets) embedded in either rhythmic (regular/isochronous) or arrhythmic (irregular/non-isochronous) tone sequences. Psychophysically, we found superior pitch discrimination performance for target tones in rhythmic compared to arrhythmic sequences. Electroencephalography recordings from auditory cortex showed that delta phase, beta power modulation, and delta-beta coupling were all modulated by rhythmic regularity. Importantly, trial-by-trial neural-behavioural correlational analyses showed that, prior to a target, the depth of U-shaped beta power modulation predicted pitch discrimination sensitivity whereas cross-frequency coupling strength predicted reaction time. These novel findings suggest that delta phase might reflect rhythmic temporal expectation, beta power temporal attention, and delta-beta coupling auditory-motor communication. Together, low and high frequency auditory neural oscillations reflect different perceptual functions that work in concert for tracking rhythmic regularity and proactively facilitate pitch perception.


Subjects
Auditory Cortex/physiology , Beta Rhythm , Delta Rhythm , Pitch Discrimination/physiology , Acoustic Stimulation , Adolescent , Adult , Cortical Synchronization , Auditory Evoked Potentials , Female , Humans , Male , Psychoacoustics , Young Adult
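
Delta-beta phase-amplitude coupling of the kind reported above is often quantified with a mean-vector-length (modulation-index-style) measure: filter into delta and beta bands, take delta phase and beta amplitude via the Hilbert transform, and ask how strongly beta amplitude clusters at particular delta phases. The band limits and metric below are common choices, assumed here rather than taken from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase Butterworth band-pass filter."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def delta_beta_coupling(x, fs):
    """Mean-vector-length coupling between delta phase and beta amplitude."""
    delta_phase = np.angle(hilbert(bandpass(x, fs, 1.0, 3.0)))   # delta: 1-3 Hz
    beta_amp = np.abs(hilbert(bandpass(x, fs, 15.0, 25.0)))      # beta: 15-25 Hz
    # Amplitude-weighted mean vector of delta phases, normalized by mean amplitude.
    return np.abs(np.mean(beta_amp * np.exp(1j * delta_phase))) / beta_amp.mean()

# Synthetic signal in which beta bursts ride on the peaks of a 2 Hz rhythm.
fs = 500.0
t = np.arange(0, 20, 1 / fs)
delta = np.sin(2 * np.pi * 2 * t)
beta = 0.3 * (1 + delta) * np.sin(2 * np.pi * 20 * t)
signal = delta + beta + 0.2 * np.random.randn(t.size)
print(f"delta-beta coupling strength: {delta_beta_coupling(signal, fs):.3f}")
```
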
14.
Proc Natl Acad Sci U S A ; 111(28): 10383-8, 2014 Jul 15.
Article in English | MEDLINE | ID: mdl-24982142

ABSTRACT

The auditory environment typically contains several sound sources that overlap in time, and the auditory system parses the complex sound wave into streams or voices that represent the various sound sources. Music is also often polyphonic. Interestingly, the main melody (spectral/pitch information) is most often carried by the highest-pitched voice, and the rhythm (temporal foundation) is most often laid down by the lowest-pitched voice. Previous work using electroencephalography (EEG) demonstrated that the auditory cortex encodes pitch more robustly in the higher of two simultaneous tones or melodies, and modeling work indicated that this high-voice superiority for pitch originates in the sensory periphery. Here, we investigated the neural basis of carrying rhythmic timing information in lower-pitched voices. We presented simultaneous high-pitched and low-pitched tones in an isochronous stream and occasionally presented either the higher or the lower tone 50 ms earlier than expected, while leaving the other tone at the expected time. EEG recordings revealed that mismatch negativity responses were larger for timing deviants of the lower tones, indicating better timing encoding for lower-pitched compared with higher-pitched tones at the level of auditory cortex. A behavioral motor task revealed that tapping synchronization was more influenced by the lower-pitched stream. Results from a biologically plausible model of the auditory periphery suggest that nonlinear cochlear dynamics contribute to the observed effect. The low-voice superiority effect for encoding timing explains the widespread musical practice of carrying rhythm in bass-ranged instruments and complements previously established high-voice superiority effects for pitch and melody.


Subjects
Auditory Cortex/physiology , Electroencephalography , Music , Pitch Perception/physiology , Adult , Female , Humans , Male
15.
J Neurosci ; 35(45): 15187-98, 2015 Nov 11.
Article in English | MEDLINE | ID: mdl-26558788

ABSTRACT

Dancing to music involves synchronized movements, which can be at the basic beat level or higher hierarchical metrical levels, as in a march (groups of two basic beats, one-two-one-two …) or waltz (groups of three basic beats, one-two-three-one-two-three …). Our previous human magnetoencephalography studies revealed that the subjective sense of meter influences auditory evoked responses phase locked to the stimulus. Moreover, the timing of metronome clicks was represented in periodic modulation of induced (non-phase locked) ß-band (13-30 Hz) oscillation in bilateral auditory and sensorimotor cortices. Here, we further examine whether acoustically accented and subjectively imagined metric processing in march and waltz contexts during listening to isochronous beats were reflected in neuromagnetic ß-band activity recorded from young adult musicians. First, we replicated previous findings of beat-related ß-power decrease at 200 ms after the beat followed by a predictive increase toward the onset of the next beat. Second, we showed that the ß decrease was significantly influenced by the metrical structure, as reflected by differences across beat type for both perception and imagery conditions. Specifically, the ß-power decrease associated with imagined downbeats (the count "one") was larger than that for both the upbeat (preceding the count "one") in the march, and for the middle beat in the waltz. Moreover, beamformer source analysis for the whole brain volume revealed that the metric contrasts involved auditory and sensorimotor cortices; frontal, parietal, and inferior temporal lobes; and cerebellum. We suggest that the observed ß-band activities reflect a translation of timing information to auditory-motor coordination. SIGNIFICANCE STATEMENT: With magnetoencephalography, we examined ß-band oscillatory activities around 20 Hz while participants listened to metronome beats and imagined musical meters such as a march and waltz. We demonstrated that ß-band event-related desynchronization in the auditory cortex differentiates between beat positions, specifically between downbeats and the following beat. This is the first demonstration of ß-band oscillations related to hierarchical and internalized timing information. Moreover, the meter representation in the ß oscillations was widespread across the brain, including sensorimotor and premotor cortices, parietal lobe, and cerebellum. The results extend current understanding of the role of ß oscillations in neural processing of predictive timing.


Subjects
Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Beta Rhythm/physiology , Imagination/physiology , Periodicity , Adult , Auditory Evoked Potentials/physiology , Female , Humans , Magnetoencephalography/methods , Male , Young Adult
16.
J Cogn Neurosci ; 27(5): 1060-7, 2015 May.
Article in English | MEDLINE | ID: mdl-25436670

ABSTRACT

Sound waves emitted by two or more simultaneous sources reach the ear as one complex waveform. Auditory scene analysis involves parsing a complex waveform into separate perceptual representations of the sound sources [Bregman, A. S. Auditory scene analysis: The perceptual organization of sounds. London: MIT Press, 1990]. Harmonicity provides an important cue for auditory scene analysis. Normally, harmonics at integer multiples of a fundamental frequency are perceived as one sound with a pitch corresponding to the fundamental frequency. However, when one harmonic in such a complex, pitch-evoking sound is sufficiently mistuned, that harmonic emerges from the complex tone and is perceived as a separate auditory object. Previous work has shown that the percept of two objects is indexed in both children and adults by the object-related negativity component of the ERP derived from EEG recordings [Alain, C., Arnott, S. T., & Picton, T. W. Bottom-up and top-down influences on auditory scene analysis: Evidence from event-related brain potentials. Journal of Experimental Psychology: Human Perception and Performance, 27, 1072-1089, 2001]. Here we examine the emergence of object-related responses to an 8% harmonic mistuning in infants between 2 and 12 months of age. Two-month-old infants showed no significant object-related response. However, in 4- to 12-month-old infants, a significant frontally positive component was present, and by 8-12 months, a significant frontocentral object-related negativity was present, similar to that seen in older children and adults. This is in accordance with previous research demonstrating that infants younger than 4 months of age do not integrate harmonic information to perceive pitch when the fundamental is missing [He, C., Hotson, L., & Trainor, L. J. Maturation of cortical mismatch responses to occasional pitch change in early infancy: Effects of presentation rate and magnitude of change. Neuropsychologia, 47, 218-229, 2009]. The results indicate that the ability to use harmonic information to segregate simultaneous sounds emerges at the cortical level between 2 and 4 months of age.


Subjects
Aging/physiology , Auditory Cortex/physiology , Brain Mapping , Auditory Evoked Potentials/physiology , Pitch Perception/physiology , Acoustic Stimulation , Age Factors , Electroencephalography , Female , Humans , Infant , Male , Mathematics
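
The mistuned-harmonic stimuli described above can be approximated by synthesizing a harmonic complex and shifting one harmonic by a fixed proportion of its frequency. The fundamental, number of harmonics, and which harmonic is mistuned are assumptions in the sketch below; only the 8% mistuning comes from the abstract.

```python
import numpy as np

def complex_tone(f0, n_harmonics, mistuned_harmonic, mistuning, fs, dur):
    """Harmonic complex with one harmonic shifted by a proportion of its frequency.

    f0: fundamental frequency (Hz); mistuning: e.g. 0.08 for an 8% shift.
    """
    t = np.arange(int(fs * dur)) / fs
    tone = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if k == mistuned_harmonic:
            f *= 1.0 + mistuning          # e.g. the 3rd harmonic raised by 8%
        tone += np.sin(2 * np.pi * f * t)
    # 10 ms raised-cosine ramps to avoid onset/offset clicks.
    ramp = int(0.01 * fs)
    env = np.ones_like(tone)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return tone * env / np.max(np.abs(tone))

# Hypothetical stimulus: 200 Hz fundamental, 10 harmonics, 3rd harmonic mistuned by 8%.
stimulus = complex_tone(f0=200, n_harmonics=10, mistuned_harmonic=3,
                        mistuning=0.08, fs=44100, dur=0.5)
```
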
17.
Eur J Neurosci ; 40(11): 3608-19, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25308742

ABSTRACT

In human neonates, orienting behavior in response to an off-midline sound source disappears around the first postnatal month, only to re-emerge at ~4 months. To date, it is unclear whether sound localization processes continue to operate between postnatal months 1 and 3. Here, we used an event-related potential, reflecting change detection in the auditory cortices, to measure the cortical responses elicited by large (± 90° relative to midline), infrequent changes in sound source location in 2-, 5-, 8- and 13-month-old infants. Both the fast-negative mismatch negativity (MMN; Näätänen et al., 2007) and the slow-positive mismatch response (MMR; Trainor et al., 2003) were elicited in all age groups. However, both components were smaller and the fast-negative component occurred later in the 2-month-old group than in older age groups. Additionally, the slow-positive component tended to diminish in amplitude with increasing age, whereas the fast-negative component grew larger and tended to occur earlier with increasing age. These results suggest that the cortical representation of sound location matures similarly to representations of pitch and duration. A subsequent investigation of 2-month-old infants confirmed that the observed MMR and MMN were elicited by changes in sound source location, and were not merely attributable to changes in loudness cues. The presence of both the MMR and MMN in the 2-month-old group indicates that the cortex is able to detect changes in sound location despite the behavioral insensitivity observed around 1-3 months of age.


Subjects
Auditory Cortex/growth & development , Auditory Cortex/physiology , Sound Localization/physiology , Acoustic Stimulation , Electroencephalography , Auditory Evoked Potentials , Female , Humans , Infant , Male
18.
Dev Sci ; 17(1): 142-58, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24205955

ABSTRACT

Children learn the structure of the music of their culture similarly to how they learn the language to which they are exposed in their daily environment. Furthermore, as with language, children acquire this musical knowledge without formal instruction. Two critical aspects of musical pitch structure in Western tonal music are key membership (understanding which notes belong in a key and which do not) and harmony (understanding which notes combine to form chords and which notes and chords tend to follow others). The early developmental trajectory of the acquisition of this knowledge remains unclear, in part because of the difficulty of testing young children. In two experiments, we investigated 4- and 5-year-olds' enculturation to Western musical pitch using a novel age-appropriate and engaging behavioral task (Experiment 1) and electroencephalography (EEG; Experiment 2). In Experiment 1 we found behavioral evidence that 5-year-olds were sensitive to key membership but not to harmony, and no evidence that 4-year-olds were sensitive to either. However, in Experiment 2 we found neurophysiological evidence that 4-year-olds were sensitive to both key membership and harmony. Our results suggest that musical enculturation has a long developmental trajectory, and that children may have some knowledge of key membership and harmony before that knowledge can be expressed through explicit behavioral judgments.


Subjects
Child Behavior/physiology , Evoked Potentials/physiology , Music , Pitch Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Brain/physiology , Brain Mapping , Child, Preschool , Electroencephalography , Female , Humans , Judgment/physiology , Male
19.
Dev Sci ; 17(6): 1003-11, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25513669

ABSTRACT

Adults who move together to a shared musical beat synchronously as opposed to asynchronously are subsequently more likely to display prosocial behaviors toward each other. The development of musical behaviors during infancy has been described previously, but the social implications of such behaviors in infancy have been little studied. In Experiment 1, each of 48 14-month-old infants was held by an assistant and gently bounced to music while facing the experimenter, who bounced either in synchrony or out of synchrony with the way the infant was bounced. The infants were then placed in a situation in which they had the opportunity to help the experimenter by handing objects to her that she had 'accidentally' dropped. We found that 14-month-old infants were more likely to engage in altruistic behavior and help the experimenter after having been bounced to music in synchrony with her, compared to infants who were bounced to music asynchronously with her. The results of Experiment 2, using anti-phase bouncing, suggest that this is due to the contingency of the synchronous movements as opposed to movement symmetry. These findings support the hypothesis that interpersonal motor synchrony might be one key component of musical engagement that encourages social bonds among group members, and suggest that this motor synchrony to music may promote the very early development of altruistic behavior.


Subjects
Auditory Perception/physiology , Infant Behavior/physiology , Interpersonal Relations , Social Behavior , Acoustic Stimulation , Analysis of Variance , Female , Humans , Infant , Male , Movement , Statistics as Topic
20.
Cereb Cortex ; 23(3): 660-9, 2013 Mar.
Article in English | MEDLINE | ID: mdl-22419678

ABSTRACT

Infants must learn to make sense of real-world auditory environments containing simultaneous and overlapping sounds. In adults, event-related potential studies have demonstrated the existence of separate preattentive memory traces for concurrent note sequences and revealed perceptual dominance for encoding of the voice with higher fundamental frequency of 2 simultaneous tones or melodies. Here, we presented 2 simultaneous streams of notes (15 semitones apart) to 7-month-old infants. On 50% of trials, either the higher or the lower note was modified by one semitone, up or down, leaving 50% standard trials. Infants showed mismatch negativity (MMN) to changes in both voices, indicating separate memory traces for each voice. Furthermore, MMN was earlier and larger for the higher voice as in adults. When in the context of a second voice, representation of the lower voice was decreased and that of the higher voice increased compared with when each voice was presented alone. Additionally, correlations between MMN amplitude and amount of weekly music listening suggest that experience affects the development of auditory memory. In sum, the ability to process simultaneous pitches and the dominance of the highest voice emerge early during infancy and are likely important for the perceptual organization of sound in realistic environments.


Subjects
Auditory Cortex/physiology , Brain Mapping , Pitch Perception/physiology , Acoustic Stimulation , Electroencephalography , Auditory Evoked Potentials/physiology , Female , Humans , Infant , Male