Results 1 - 18 of 18
1.
Dev Sci ; : e13528, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38770599

ABSTRACT

Infants are immersed in a world of sounds from the moment their auditory system becomes functional, and experience with the auditory world shapes how their brains process the sounds in their environment. Across cultures, speech and music are two dominant auditory signals in infants' daily lives. Decades of research have repeatedly shown that both the quantity and quality of speech input play critical roles in infant language development. Less is known about the music input infants receive in their environment. This study is the first to compare music input to speech input across infancy, analyzing a longitudinal dataset of daylong audio recordings collected in English-learning infants' home environments at 6, 10, 14, 18, and 24 months of age. Using a crowdsourcing approach, 643 naïve listeners annotated 12,000 short (10 s) snippets randomly sampled from the recordings on Zooniverse, an online citizen-science platform. Results show that infants overall receive significantly more speech input than music input, and that the gap widens as infants get older. At every age point, infants were exposed to more music from an electronic device than from an in-person source; this pattern was reversed for speech. The percentage of input intended for infants remained constant over time for music, whereas it significantly increased for speech. We propose possible explanations for the limited music input, compared to speech input, observed in the present (North American) dataset and discuss future directions. We also discuss the opportunities and caveats of using a crowdsourcing approach to analyze large audio datasets. A video abstract of this article can be viewed at https://youtu.be/lFj_sEaBMN4.
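
As an illustration of the sampling procedure described above, here is a minimal Python sketch of drawing random 10-s snippet onsets from daylong recordings for crowd annotation; the file names, durations, and snippet counts are hypothetical, not the study's actual values.

    # Sketch: randomly sample 10-s snippet onsets from daylong recordings
    # for crowdsourced annotation. File names and counts are hypothetical.
    import random

    SNIPPET_S = 10          # snippet duration in seconds
    PER_RECORDING = 100     # hypothetical number of snippets per recording

    # Hypothetical daylong recordings: name -> duration in seconds
    recordings = {"infant01_06mo.wav": 12 * 3600, "infant01_10mo.wav": 14 * 3600}

    rng = random.Random(42)
    snippets = []
    for name, dur in recordings.items():
        # choose non-overlapping 10-s windows by sampling grid-aligned onsets
        onsets = rng.sample(range(0, dur - SNIPPET_S, SNIPPET_S), PER_RECORDING)
        snippets += [(name, t, t + SNIPPET_S) for t in sorted(onsets)]

    print(f"{len(snippets)} snippets queued for annotation")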

2.
Front Hum Neurosci ; 18: 1380075, 2024.
Article in English | MEDLINE | ID: mdl-38756844

ABSTRACT

Introduction: Previous studies underscore the importance of speech input, particularly infant-directed speech (IDS) during one-on-one (1:1) parent-infant interaction, for child language development. We hypothesize that infants' attention to speech input, specifically IDS, supports language acquisition. In infants, attention and orienting responses are associated with heart rate deceleration. In a longitudinal study, we examined whether individual differences in infants' heart rate measured during 1:1 mother-infant interaction are related to speech input and later language development scores. Methods: Using a sample of thirty-one 3-month-olds, we assessed infant heart rate during mother-infant face-to-face interaction in a laboratory setting. Multiple measures of speech input were gathered at 3 months of age during naturally occurring interactions at home using the Language ENvironment Analysis (LENA) system. Language outcome measures were assessed in the same children at 30 months of age using the MacArthur-Bates Communicative Development Inventory (CDI). Results: Two novel findings emerged. First, we found that both higher maternal IDS in a 1:1 context at home and more mother-infant conversational turns at home are associated with a lower heart rate measured during mother-infant social interaction in the laboratory. Second, we found significant associations between infant heart rate during mother-infant interaction in the laboratory at 3 months and prospective language development (CDI scores) at 30 months of age. Discussion: Considering the current results in conjunction with other converging theoretical and neuroscientific data, we argue that high IDS input in the context of 1:1 social interaction increases infants' attention to speech and that infants' attention to speech in early development fosters their prospective language growth.
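
A minimal sketch of how heart rate during an interaction epoch might be summarized and related to later CDI scores, assuming R-peak times extracted from ECG; all values below are simulated placeholders, not the study's data.

    # Sketch: summarize infant heart rate from ECG R-peak times and relate it
    # to later vocabulary scores. All numbers below are made-up placeholders.
    import numpy as np
    from scipy.stats import pearsonr

    def mean_heart_rate(r_peak_times_s):
        """Mean heart rate (beats per minute) from R-peak times in seconds."""
        ibi = np.diff(r_peak_times_s)          # inter-beat intervals (s)
        return float(np.mean(60.0 / ibi))      # bpm

    # Hypothetical per-infant data: R-peak times during interaction, CDI at 30 mo
    infants = [
        (np.cumsum(np.random.default_rng(i).normal(0.45, 0.02, 200)), 300 + 10 * i)
        for i in range(31)
    ]
    hr = np.array([mean_heart_rate(t) for t, _ in infants])
    cdi = np.array([score for _, score in infants], dtype=float)

    r, p = pearsonr(hr, cdi)   # lower HR (greater attention) vs. later vocabulary
    print(f"r = {r:.2f}, p = {p:.3f}")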

3.
Sci Rep ; 13(1): 6480, 2023 04 20.
Article in English | MEDLINE | ID: mdl-37081119

ABSTRACT

Comparing artificial neural networks with outputs of neuroimaging techniques has recently seen substantial advances in (computer) vision and text-based language models. Here, we propose a framework to compare biological and artificial neural computations of spoken language representations and propose several new challenges to this paradigm. The proposed technique is based on a principle similar to the one that underlies electroencephalography (EEG): averaging of neural (artificial or biological) activity across neurons in the time domain. It allows us to compare the encoding of any acoustic property in the brain and in the intermediate convolutional layers of an artificial neural network. Our approach allows a direct comparison of responses to a phonetic property in the brain and in deep neural networks that requires no linear transformations between the signals. We argue that the complex auditory brainstem response (cABR) and the response in intermediate convolutional layers to the exact same stimulus are highly similar without applying any transformations, and we quantify this observation. The proposed technique not only reveals similarities, but also allows for analysis of the encoding of actual acoustic properties in the two signals: we compare peak latency (i) in the cABR relative to the stimulus in the brain stem and (ii) in intermediate convolutional layers relative to the input/output in deep convolutional networks. We also examine and compare the effect of prior language exposure on peak latency in the cABR and in intermediate convolutional layers. Substantial similarities in peak latency encoding between the human brain and intermediate convolutional layers emerge based on results from eight trained networks (including a replication experiment). The proposed technique can be used to compare encoding between the human brain and intermediate convolutional layers for any acoustic property and for other neuroimaging techniques.
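
A minimal numpy sketch of the averaging idea described above, assuming a cABR waveform and a convolutional layer's activations are available as arrays; the random arrays stand in for real recordings and network outputs.

    # Sketch: average activity across units (EEG-style averaging) and compare
    # peak latencies in a cABR and in a convolutional layer's activations.
    # The arrays below are random stand-ins for real data.
    import numpy as np

    fs_eeg, fs_net = 16000, 16000            # sampling rates (Hz), assumed equal
    cabr = np.random.randn(2000)             # averaged brainstem response (time,)
    layer_acts = np.random.randn(256, 2000)  # conv layer output: (channels, time)

    # averaging across artificial neurons gives one time series, as with EEG
    net_response = layer_acts.mean(axis=0)

    def peak_latency_ms(signal, fs, onset_sample=0):
        """Latency of the absolute peak relative to stimulus onset, in ms."""
        peak = np.argmax(np.abs(signal[onset_sample:]))
        return 1000.0 * peak / fs

    print("cABR peak latency (ms):", peak_latency_ms(cabr, fs_eeg))
    print("layer peak latency (ms):", peak_latency_ms(net_response, fs_net))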


Subjects
Neural Networks, Computer , Speech , Humans , Electroencephalography , Brain Stem/diagnostic imaging , Language
4.
Dev Sci ; 25(6): e13323, 2022 11.
Article in English | MEDLINE | ID: mdl-36114705

ABSTRACT

The development of skills related to executive function (EF) in infancy, including their emergence, underlying neural mechanisms, and interconnections to other cognitive skills, is an area of increasing research interest. Here, we report findings from a multidimensional dataset demonstrating that infants' behavioral performance on a flexible learning task improved across development and that task performance is highly correlated with both neural structure and neural function. The flexible learning task probed infants' ability to learn two different associations concurrently over 16 trials, requiring multiple skills relevant to EF. We examined infants' neural structure by measuring myelin density in the brain, using a novel macromolecular proton fraction (MPF) mapping method. We further examined an important neural function, speech processing, by characterizing the mismatch response (MMR) to speech contrasts using magnetoencephalography (MEG). All measurements were performed longitudinally in monolingual English-learning infants at 7 and 11 months of age. At the group level, 11-month-olds, but not 7-month-olds, demonstrated evidence of learning both associations in the behavioral task. Myelin density in the prefrontal region at 7 months of age was highly predictive of behavioral task performance at 11 months of age, suggesting that myelination may support the development of these skills. Furthermore, a machine-learning regression analysis revealed that individual differences in the behavioral task are predicted by concurrent neural speech processing at both ages, suggesting that these skills do not develop in isolation. Together, these cross-modality results revealed novel insights into EF-related skills. HIGHLIGHTS: Monolingual infants demonstrated flexible learning on a task requiring executive function skills at 11 months, but not at 7 months. Infants' myelin density at 7 months was highly predictive of their behavioral performance in the flexible learning task at 11 months of age. Individual differences in flexible learning task performance were also correlated with concurrent neural processing of speech at both ages.


Subjects
Executive Function , Speech Perception , Infant , Humans , Executive Function/physiology , Speech Perception/physiology , Speech , Learning , Language
5.
Neuroimage ; 263: 119641, 2022 11.
Article in English | MEDLINE | ID: mdl-36170763

ABSTRACT

Between 6 and 12 months of age there are dramatic changes in infants' processing of language. The neurostructural underpinnings of these changes are virtually unknown. The objectives of this study were to (1) examine changes in brain myelination during this developmental period and (2) examine the relationship between myelination during this period and later language development. Macromolecular proton fraction (MPF) was used as a marker of myelination. Whole-brain MPF maps were obtained with 1.25 mm³ isotropic spatial resolution from typically developing children at 7 and 11 months of age. Effective myelin density was calculated from MPF based on a linear relationship known from the literature. Voxel-based analyses were used to identify longitudinal changes in myelin density and to calculate correlations between myelin density at these ages and later language development. Increases in myelin density were more predominant in white matter than in gray matter. A strong predictive relationship was found between myelin density at 7 months of age, language production at 24 and 30 months of age, and rate of language growth. No relationships were found between myelin density at 11 months, or change in myelin density between 7 and 11 months of age, and later language measures. Our findings suggest that critical changes in brain structure may precede periods of pronounced change in early language skills.
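
The abstract notes that effective myelin density was obtained from MPF via a linear relationship reported in the literature; the sketch below only illustrates the form of such a conversion, with placeholder coefficients rather than the values used in the study.

    # Sketch: linear conversion from macromolecular proton fraction (MPF) to an
    # effective myelin density estimate. The slope and intercept here are
    # placeholders, not the literature values used in the study.
    import numpy as np

    SLOPE, INTERCEPT = 2.0, 0.0   # hypothetical calibration coefficients

    def myelin_density(mpf_map):
        """Voxel-wise effective myelin density from an MPF map (fractions)."""
        return SLOPE * np.asarray(mpf_map) + INTERCEPT

    mpf_7mo = np.random.uniform(0.04, 0.12, size=(64, 64, 64))  # toy MPF map
    md_7mo = myelin_density(mpf_7mo)
    print("mean effective myelin density:", md_7mo.mean())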


Subjects
Brain , Magnetic Resonance Imaging , Child , Infant , Humans , Child, Preschool , Brain/diagnostic imaging , Brain Mapping , Myelin Sheath , Language Development , Protons
6.
JASA Express Lett ; 2(5): 054401, 2022 May.
Article in English | MEDLINE | ID: mdl-35578694

ABSTRACT

The frequency-following response (FFR) is a scalp-recorded signal that reflects phase-locked activity from neurons across the auditory system. In addition to capturing information about sounds, the FFR conveys biometric information, reflecting individual differences in auditory processing. To investigate the development of FFR biometric patterns, we trained a pattern recognition model to recognize infants (N = 16) from FFRs collected at 7 and 11 months. Model recognition scores were used to index the robustness of FFR biometric patterns at each time point. Results showed better recognition scores at 11 months, demonstrating the emergence of robust, idiosyncratic FFR patterns during the first year of life.
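
One way to obtain a recognition score of the kind described above is a cross-validated classifier over FFR-derived features; the sketch below uses simulated features and labels, not the study's model or data.

    # Sketch: cross-validated recognition of individual infants from FFR trials.
    # Features and labels are simulated; mean accuracy stands in for the
    # model's recognition score at a given age.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_infants, trials_per_infant, n_features = 16, 20, 100
    X = rng.normal(size=(n_infants * trials_per_infant, n_features))
    y = np.repeat(np.arange(n_infants), trials_per_infant)  # infant identity

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=5)
    print("recognition score:", scores.mean())   # chance here is 1/16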

7.
Neuroimage ; 256: 119242, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35483648

ABSTRACT

The 'sensitive period' for phonetic learning (∼6-12 months) is one of the earliest milestones in language acquisition, during which infants start to become specialized in processing the speech sounds of their native language. In the last decade, advancements in neuroimaging technologies for infants have started to shed light on the underlying neural mechanisms supporting this important learning period. The current study reports on a large longitudinal dataset with the aim of replicating and extending findings on two important questions: (1) What are the developmental changes in native and nonnative speech processing during the 'sensitive period'? (2) How do native and nonnative speech processing in infants predict later language outcomes? Fifty-four infants were recruited at 7 months of age and their neural processing of speech was measured using magnetoencephalography (MEG). Specifically, neural sensitivity to a native and a nonnative speech contrast was indexed by the mismatch response (MMR). The measurement was repeated at 11 months of age, and language development was further tracked from 12 to 30 months of age using the MacArthur-Bates Communicative Development Inventory (CDI). Using an a priori region-of-interest (ROI) approach, we observed significant increases for the Native MMR in the left inferior frontal region (IF) and superior temporal region (ST) from 7 to 11 months, but not for the Nonnative MMR. A complementary whole-brain comparison revealed more widespread developmental changes for both contrasts. However, only individual differences in the left IF and ST for the Nonnative MMR at 11 months of age were significant predictors of individual vocabulary growth up to 30 months of age. An exploratory machine-learning-based analysis further revealed that whole-brain time series for both the Native and Nonnative contrasts can robustly predict later outcomes, but with very different underlying spatio-temporal patterns. The current study extends existing knowledge and suggests that native and nonnative speech processing may follow different developmental trajectories and rely on different mechanisms that are relevant for later language skills.
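
A minimal sketch of the kind of ROI analysis described above (developmental change in an MMR and its relation to vocabulary growth), using simulated per-infant values rather than the study's data.

    # Sketch: test the 7-to-11-month change in a ROI mismatch response (MMR)
    # and relate 11-month values to later vocabulary growth. Values simulated.
    import numpy as np
    from scipy.stats import ttest_rel, pearsonr

    rng = np.random.default_rng(1)
    n = 54
    mmr_7mo = rng.normal(0.8, 0.3, n)                 # ROI MMR, arbitrary units
    mmr_11mo = mmr_7mo + rng.normal(0.3, 0.3, n)
    vocab_growth = rng.normal(50, 15, n)              # e.g., CDI slope to 30 months

    t, p = ttest_rel(mmr_11mo, mmr_7mo)               # developmental change
    r, pr = pearsonr(mmr_11mo, vocab_growth)          # brain-behavior relation
    print(f"paired t = {t:.2f} (p = {p:.3f}); r = {r:.2f} (p = {pr:.3f})")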


Subjects
Speech Perception , Speech , Child, Preschool , Humans , Infant , Language Development , Magnetoencephalography , Phonetics , Speech/physiology , Speech Perception/physiology
8.
Front Hum Neurosci ; 15: 607148, 2021.
Article in English | MEDLINE | ID: mdl-34149375

ABSTRACT

Behavioral studies examining vowel perception in infancy indicate that, for many vowel contrasts, the ease of discrimination changes depending on the order of stimulus presentation, regardless of the language from which the contrast is drawn and the ambient language that infants have experienced. By adulthood, linguistic experience has altered vowel perception; analogous asymmetries are observed for non-native contrasts but are mitigated for native contrasts. Although these directional effects are well documented behaviorally, the brain mechanisms underlying them are poorly understood. In the present study we begin to address this gap. We first review recent behavioral work showing that vowel perception asymmetries derive from phonetic encoding strategies rather than general auditory processes. Two existing theoretical models, the Natural Referent Vowel framework and the Native Language Magnet model, are invoked as a means of interpreting these findings. Then we present the results of a neurophysiological study that builds on this prior work. Using event-related brain potentials, we first measured and assessed the mismatch negativity response (MMN, a passive neurophysiological index of auditory change detection) in native English- and French-speaking adults to synthetic vowels that either spanned two different phonetic categories (/y/ vs. /u/) or fell within the same category (/u/). Stimulus presentation was organized such that each vowel was presented as standard and as deviant in different blocks. The vowels were presented with a long (1,600-ms) inter-stimulus interval to restrict access to short-term memory traces and tap into a "phonetic mode" of processing. MMN analyses revealed weak asymmetry effects regardless of (i) vowel contrast, (ii) language group, and (iii) MMN time window. We then conducted time-frequency analyses of the standard epochs for each vowel. In contrast to the MMN analysis, time-frequency analysis revealed significant differences in brain oscillations in the theta band (4-8 Hz), which have been linked to attention and processing efficiency. Collectively, these findings suggest that early-latency (pre-attentive) mismatch responses may not be a strong neurophysiological correlate of asymmetric behavioral vowel discrimination. Rather, asymmetries may reflect differences in neural processing efficiency for vowels with certain inherent acoustic-phonetic properties, as revealed by theta oscillatory activity.
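
A minimal sketch of estimating theta-band (4-8 Hz) power from standard-vowel epochs, assuming the epochs are available as a (trials x samples) array; the data, sampling rate, and spectral parameters below are simulated/assumed.

    # Sketch: estimate theta-band (4-8 Hz) power in standard-vowel epochs from
    # their power spectral density. Epoch data below are simulated.
    import numpy as np
    from scipy.signal import welch

    fs = 500                                       # sampling rate (Hz), assumed
    epochs = np.random.randn(120, int(0.8 * fs))   # (n_epochs, n_samples)

    freqs, psd = welch(epochs, fs=fs, nperseg=256, axis=-1)
    theta = (freqs >= 4) & (freqs <= 8)
    theta_power = psd[:, theta].mean(axis=-1)      # one value per epoch

    print("mean theta power:", theta_power.mean())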

9.
Dev Cogn Neurosci ; 48: 100949, 2021 04.
Article in English | MEDLINE | ID: mdl-33823366

ABSTRACT

The 'sensitive period' for phonetic learning posits that between 6 and 12 months of age, infants' discrimination of native and nonnative speech sounds diverges. Individual differences in this dynamic processing of speech have been shown to predict later language acquisition up to 30 months of age, using parental surveys. Yet, it is unclear whether infant speech discrimination can predict longer-term language outcomes and risk for developmental speech-language disorders, which affect up to 16% of the population. The current study reports a prospective prediction of speech-language skills at a much later age (6 years) from the same children's nonnative speech discrimination at 11 months of age, indexed by MEG mismatch responses. Children's speech-language skills at 6 years were comprehensively evaluated by a speech-language pathologist in two ways: individual differences in spoken grammar, and the presence versus absence of speech-language disorders. Results showed that the prefrontal MEG mismatch response at 11 months not only significantly predicted individual differences in spoken grammar skills at 6 years, but also accurately identified the presence versus absence of speech-language disorders using machine-learning classification. These results represent new evidence that advances our theoretical understanding of the neurodevelopmental trajectory of language acquisition and of early risk factors for developmental speech-language disorders.
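
A minimal sketch of cross-validated classification of disorder presence versus absence from MMR-derived features, in the spirit of the machine-learning analysis described above; the features, labels, and classifier choice are all assumptions for illustration.

    # Sketch: cross-validated classification of later speech-language disorder
    # (present vs. absent) from 11-month MMR-derived features. Data simulated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(2)
    n_children, n_features = 40, 30
    X = rng.normal(size=(n_children, n_features))    # MMR-derived features
    y = np.array([0] * 28 + [1] * 12)                # 1 = disorder at age 6

    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X, y, cv=StratifiedKFold(5), scoring="accuracy")
    print("cross-validated accuracy:", acc.mean())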


Subjects
Language Disorders , Speech Perception , Child , Female , Humans , Individuality , Male , Phonetics , Prospective Studies , Risk Factors , Speech
10.
Neuroimage ; 227: 117678, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33359342

ABSTRACT

Myelin development during adolescence is becoming an area of growing interest in view of its potential relationship to cognition, behavior, and learning. While recent investigations suggest that both white matter (WM) and gray matter (GM) undergo protracted myelination during adolescence, quantitative relations between myelin development in WM and GM have not been previously studied. We quantitatively characterized the dependence of cortical GM, WM, and subcortical myelin density across the brain on age, gender, and puberty status during adolescence with the use of a novel macromolecular proton fraction (MPF) mapping method. Whole-brain MPF maps from a cross-sectional sample of 146 adolescents (age range 9-17 years) were collected. Myelin density was calculated from MPF values in GM and WM of all brain lobes, as well as in subcortical structures. In general, myelination of cortical GM was widespread and more significantly correlated with age than that of WM. Myelination of GM in the parietal lobe was found to have a significantly stronger age dependence than that of GM in the frontal, occipital, temporal and insular lobes. Myelination of WM in the temporal lobe had the strongest association with age as compared to WM in other lobes. Myelin density was found to be higher in males as compared to females when averaged across all cortical lobes, as well as in a bilateral subcortical region. Puberty stage was significantly correlated with myelin density in several cortical areas and in the subcortical GM. These findings point to significant differences in the trajectories of myelination of GM and WM across brain regions and suggest that cortical GM myelination plays a dominant role during adolescent development.


Subjects
Brain/growth & development , Gray Matter/growth & development , Myelin Sheath , White Matter/growth & development , Adolescent , Adolescent Development , Brain Mapping/methods , Child , Cross-Sectional Studies , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Male
11.
Brain Behav ; 10(11): e01836, 2020 11.
Article in English | MEDLINE | ID: mdl-32920995

ABSTRACT

INTRODUCTION: Music is ubiquitous and powerful in the world's cultures. Music listening involves abundant information processing (e.g., pitch, rhythm) in the central nervous system and can also induce physiological changes, such as in heart rate and perspiration. Yet, previous studies have tended to examine music information processing in the brain separately from physiological changes. In the current study, we focused on the temporal structure of music (i.e., beat and meter) and examined physiological responses, neural processing, and, most importantly, the relation between the two. METHODS: Simultaneous MEG and ECG data were collected from a group of adults (N = 15) while they passively listened to duple and triple rhythmic patterns. To characterize physiology, we measured heart rate variability (HRV), indexing parasympathetic nervous system (PSNS) function. To characterize neural processing of beat and meter, we examined neural entrainment and calculated the beat-to-meter ratio to index the relation between beat-level and meter-level entrainment. Specifically, the current study investigated three related questions: (a) whether listening to musical rhythms affects HRV; (b) whether the neural beat-to-meter ratio differs between metrical conditions; and (c) whether the neural beat-to-meter ratio is related to HRV. RESULTS: Results suggest that while, at the group level, both HRV and neural processing are highly similar across metrical conditions, at the individual level the neural beat-to-meter ratio significantly predicts HRV, establishing a neural-physiological link. CONCLUSION: This observed link is discussed under the theoretical "neurovisceral integration model," and it provides important new perspectives in music cognition and auditory neuroscience research.
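
A minimal sketch of the two measures described above, assuming RR intervals (for HRV) and a neural time series with known beat and meter frequencies; the RMSSD metric, the 2.4 Hz / 0.8 Hz frequencies, and all data are assumptions for illustration, not the study's parameters.

    # Sketch: heart-rate variability (RMSSD) from RR intervals and a neural
    # beat-to-meter ratio from spectral amplitude at the two frequencies.
    # All signals and frequencies below are simulated/assumed.
    import numpy as np

    def rmssd(rr_ms):
        """Root mean square of successive RR-interval differences (ms)."""
        return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

    def beat_to_meter_ratio(signal, fs, beat_hz, meter_hz):
        """Ratio of spectral amplitude at the beat vs. the meter frequency."""
        spec = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        amp = lambda f: spec[np.argmin(np.abs(freqs - f))]
        return amp(beat_hz) / amp(meter_hz)

    rr = np.random.normal(800, 40, 300)        # RR intervals (ms), simulated
    meg = np.random.randn(60 * 1000)           # 60 s of MEG at 1000 Hz, simulated
    print("RMSSD:", rmssd(rr))
    print("beat/meter ratio:", beat_to_meter_ratio(meg, 1000, 2.4, 0.8))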


Subjects
Music , Acoustic Stimulation , Auditory Perception , Brain , Cognition
12.
Brain Lang ; 194: 77-83, 2019 07.
Article in English | MEDLINE | ID: mdl-31129300

ABSTRACT

Cross-language speech perception experiments indicate that for many vowel contrasts, discrimination is easier when the same pair of vowels is presented in one direction than in the reverse direction. According to one account, these directional asymmetries reflect a universal bias favoring "focal" vowels (i.e., vowels with prominent spectral peaks formed by the convergence of adjacent formants). An alternative account is that such effects reflect an experience-dependent bias favoring prototypical exemplars of native-language vowel categories. Here, we tested the predictions of these accounts by recording the auditory frequency-following response in English-speaking listeners to two synthetic variants of the vowel /u/ that differed in the proximity of their first and second formants and in their prototypicality, with stimuli arranged in oddball and reversed-oddball blocks. Participants showed evidence of neural discrimination when the more-focal/less-prototypic /u/ served as the deviant stimulus, but not when the less-focal/more-prototypic /u/ served as the deviant, consistent with the focalization account.


Subjects
Phonetics , Speech Acoustics , Speech Perception , Adult , Discrimination, Psychological , Female , Humans , Male , Multilingualism
13.
J Exp Psychol Hum Percept Perform ; 45(2): 285-300, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30570319

ABSTRACT

Directional asymmetries reveal a universal bias in vowel perception favoring extreme vocalic articulations, which lead to acoustic vowel signals with dynamic formant trajectories and well-defined spectral prominences because of the convergence of adjacent formants. The present experiments investigated whether this bias reflects speech-specific processes or general properties of spectral processing in the auditory system. Toward this end, we examined whether analogous asymmetries in perception arise with nonspeech tonal analogues that approximate some of the dynamic and static spectral characteristics of naturally produced /u/ vowels executed with more versus less extreme lip gestures. We found a qualitatively similar but weaker directional effect with two-component tones varying in both the dynamic changes and the proximity of their spectral energies. In subsequent experiments, we pinned down the phenomenon using tones that varied in one or both of these two acoustic characteristics. We found comparable asymmetries with tones that differed exclusively in their spectral dynamics, and no asymmetries with tones that differed exclusively in their spectral proximity or in both spectral features. We interpret these findings as evidence that dynamic spectral changes are a critical cue for eliciting asymmetries in nonspeech tone perception, but that the potential contribution of general auditory processes to asymmetries in vowel perception is limited. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
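
A directional asymmetry of the kind reported above is often summarized as the difference in sensitivity (d') between the two presentation orders of the same pair; the sketch below uses made-up response counts and a simple log-linear correction as an illustration, not the study's analysis.

    # Sketch: quantify a directional asymmetry as the difference in sensitivity
    # (d') between the two presentation orders of the same stimulus pair.
    # Hit and false-alarm counts below are made up for illustration.
    from scipy.stats import norm

    def d_prime(hits, misses, fas, crs):
        """Signal-detection d' with a simple log-linear correction."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (fas + 0.5) / (fas + crs + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # order A->B (e.g., less-focal to more-focal) vs. the reverse order
    d_ab = d_prime(hits=42, misses=8, fas=10, crs=40)
    d_ba = d_prime(hits=33, misses=17, fas=10, crs=40)
    print("asymmetry (d'_AB - d'_BA):", d_ab - d_ba)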


Subjects
Auditory Perception/physiology , Discrimination, Psychological/physiology , Psycholinguistics , Speech Acoustics , Adolescent , Adult , Female , Humans , Male , Speech Perception/physiology , Young Adult
14.
Proc Natl Acad Sci U S A ; 115(35): 8716-8721, 2018 08 28.
Article in English | MEDLINE | ID: mdl-30104356

ABSTRACT

Linguistic experience affects speech perception from early infancy, as previously evidenced by behavioral and brain measures. Current research focuses on whether linguistic effects on speech perception can be observed at an earlier stage in the neural processing of speech (i.e., the auditory brainstem). Brainstem responses reflect rapid, automatic, and preattentive encoding of sounds. Positive experiential effects have been reported by examining the frequency-following response (FFR) component of the complex auditory brainstem response (cABR) to sustained, high-energy periodic portions of speech sounds (vowels and lexical tones). The current study expands the existing literature by examining the cABR onset component in response to transient and low-energy portions of speech (consonants), employing simultaneous magnetoencephalography (MEG) in addition to electroencephalography (EEG), which provides complementary source information on the cABR. Utilizing a cross-cultural design, we behaviorally measured perceptual responses to consonants in native Spanish- and English-speaking adults, in addition to the cABR. Brain and behavioral relations were examined. Results replicated previous behavioral differences between language groups and further showed that individual consonant perception is strongly associated with EEG-cABR onset peak latency. MEG-cABR source analysis of the onset peaks complemented the EEG-cABR results by demonstrating subcortical sources for both peaks, with no group differences in peak locations. The current results demonstrate a brainstem-perception relation and show that the effects of linguistic experience on speech perception can be observed at the brainstem level.


Subjects
Brain Stem/physiology , Language , Pitch Perception/physiology , Speech Perception/physiology , Adult , Female , Humans , Male
15.
Neuropsychologia ; 106: 289-297, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28987905

ABSTRACT

Musical sounds, along with speech, are the most prominent sounds in our daily lives. They are highly dynamic, yet well structured in the temporal domain in a hierarchical manner. These temporal structures enhance the predictability of musical sounds. Western music provides an excellent example: while time intervals between musical notes are highly variable, underlying beats can be realized. The beat-level temporal structure provides a sense of regular pulses. Beats can be further organized into units, giving the percept of alternating strong and weak beats (i.e., metrical structure or meter). Examining neural processing at the meter level offers a unique opportunity to understand how the human brain extracts temporal patterns, predicts future stimuli, and optimizes neural resources for processing. The present study addresses two important questions regarding meter processing, using the mismatch negativity (MMN) obtained with electroencephalography (EEG): 1) how tempo (fast vs. slow) and type of metrical structure (duple: two beats per unit vs. triple: three beats per unit) affect the neural processing of metrical structure in non-musically trained individuals, and 2) how early music training modulates the neural processing of metrical structure. Metrical structures were established by patterns of consecutive strong and weak tones (Standard) with occasional violations that disrupted and reset the structure (Deviant). Twenty non-musicians listened passively to these tones while their neural activity was recorded. The MMN indexed neural sensitivity to the meter violations. Results suggested that MMNs were larger for the fast tempo and for the triple meter conditions. Twenty musically trained individuals were then tested using the same methods, and the results were compared with those of the non-musicians. While tempo and meter type similarly influenced MMNs in both groups, musicians overall exhibited significantly reduced MMNs compared to their non-musician counterparts. Further analyses indicated that the reduction was driven by responses to the sounds that defined the structure (Standard), not by responses to the Deviants. We argue that musicians maintain a more accurate and efficient mental model for metrical structures, which incorporates occasional disruptions using significantly fewer neural resources.
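
A minimal sketch of computing an MMN as the deviant-minus-standard difference wave and averaging it in a post-stimulus window; the epochs, sampling rate, and 150-250 ms window are simulated/assumed rather than taken from the study.

    # Sketch: compute the MMN as the deviant-minus-standard difference wave and
    # take its mean amplitude in a post-stimulus window. Epochs are simulated.
    import numpy as np

    fs = 1000                                     # sampling rate (Hz), assumed
    t = np.arange(-0.1, 0.5, 1.0 / fs)            # epoch time axis (s)
    standard = np.random.randn(200, t.size)       # (trials, samples)
    deviant = np.random.randn(60, t.size)

    mmn_wave = deviant.mean(axis=0) - standard.mean(axis=0)
    win = (t >= 0.15) & (t <= 0.25)               # assumed 150-250 ms window
    mmn_amplitude = mmn_wave[win].mean()
    print("MMN mean amplitude:", mmn_amplitude)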


Subjects
Auditory Perception/physiology , Brain/physiology , Music , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials, Auditory , Female , Humans , Male , Young Adult
16.
Proc Natl Acad Sci U S A ; 113(19): 5212-7, 2016 May 10.
Article in English | MEDLINE | ID: mdl-27114512

ABSTRACT

Individuals with music training in early childhood show enhanced processing of musical sounds, an effect that generalizes to speech processing. However, the conclusions drawn from previous studies are limited by possible confounds of predisposition and other factors that differ between musicians and nonmusicians. We used a randomized design to test the effects of a laboratory-controlled music intervention on young infants' neural processing of music and speech. Nine-month-old infants were randomly assigned to music (intervention) or play (control) activities for 12 sessions. The intervention targeted temporal structure learning using triple meter in music (e.g., waltz), which is difficult for infants, and it incorporated key characteristics of typical infant music classes to maximize learning (e.g., multimodal, social, and repetitive experiences). Controls had similar multimodal, social, repetitive play, but without music. Upon completion, infants' neural processing of temporal structure was tested in both music (tones in triple meter) and speech (foreign syllable structure). Infants' neural processing was quantified by the mismatch response (MMR) measured with a traditional oddball paradigm using magnetoencephalography (MEG). The intervention group exhibited significantly larger MMRs in response to music temporal structure violations in both auditory and prefrontal cortical regions. Identical results were obtained for temporal structure changes in speech. The intervention thus enhanced temporal structure processing not only in music, but also in speech, at 9 months of age. We argue that the intervention enhanced infants' ability to extract temporal structure information and to predict future events in time, a skill affecting both music and speech processing.


Subjects
Auditory Perception/physiology , Brain/physiology , Music , Speech Perception/physiology , Speech/physiology , Time Perception/physiology , Female , Humans , Infant , Male
17.
J Acoust Soc Am ; 138(2): EL133-7, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26328738

ABSTRACT

Native tonal-language speakers exhibit reduced sensitivity to lexical tone differences within, compared to across, categories (a higher-level, linguistic-category influence). Yet, sensitivity is enhanced among musically trained, non-tonal-language-speaking individuals (a lower-level, acoustic-processing influence). The current study investigated the relative contributions of higher- and lower-level influences when both are present. Seventeen Mandarin musicians completed music pitch and lexical tone discrimination tasks. Similar to English musicians [Zhao and Kuhl (2015). J. Acoust. Soc. Am. 137(3), 1452-1463], Mandarin musicians' overall sensitivity to lexical tone differences was associated with music pitch score, suggesting lower-level contributions. However, the musicians' sensitivities to lexical tone pairs along a continuum were similar to those of Mandarin non-musicians, reflecting dominant higher-level influences.


Subjects
Discrimination, Psychological/physiology , Language , Music , Phonetics , Pitch Discrimination/physiology , Speech Perception/physiology , Adult , Asian People , Education , Female , Humans , Linguistics , Male , Music/psychology , Psychological Tests , Young Adult
18.
J Acoust Soc Am ; 137(3): 1452-63, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25786956

ABSTRACT

Previous studies suggest that musicians show an advantage in processing and encoding foreign-language lexical tones. The current experiments examined whether musical experience influences the perceptual learning of lexical tone categories. Experiment I examined whether musicians with no prior experience of tonal languages differed from nonmusicians in the perception of a lexical tone continuum. Experiment II examined whether short-term perceptual training on lexical tones altered the perception of the lexical tone continuum differentially in English-speaking musicians and nonmusicians. Results suggested that (a) musicians exhibited higher sensitivity overall to tonal changes, but perceived the lexical tone continuum in a manner similar to nonmusicians (continuously), in contrast to native Mandarin speakers (categorically); and (b) short-term perceptual training altered perception; however, there were no significant differences between the effects of training on musicians and nonmusicians.
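
Categorical versus continuous perception along a continuum is often summarized by the slope of a fitted identification function; the sketch below fits a logistic curve to invented identification proportions as an illustration, not the study's procedure.

    # Sketch: fit a logistic identification function along a lexical tone
    # continuum; a steeper slope suggests more categorical perception.
    # The identification proportions below are invented for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, x0, k):
        return 1.0 / (1.0 + np.exp(-k * (x - x0)))

    steps = np.arange(1, 12)                                  # 11-step continuum
    p_tone2 = np.array([.02, .03, .05, .10, .20, .45, .70, .85, .93, .97, .99])

    (x0, k), _ = curve_fit(logistic, steps, p_tone2, p0=[6.0, 1.0])
    print(f"category boundary at step {x0:.1f}, slope k = {k:.2f}")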


Subjects
Learning , Music , Pitch Discrimination , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adult , Speech Audiometry , Female , Humans , Male , Memory , Phonetics , Sound Spectrography , Time Factors , Young Adult