Results 1 - 20 of 14,134
1.
Trends Hear ; 28: 23312165241256721, 2024.
Article in English | MEDLINE | ID: mdl-38773778

ABSTRACT

This study aimed to investigate the role of hearing aid (HA) usage in language outcomes among preschool children aged 3-5 years with mild bilateral hearing loss (MBHL). The data were retrieved from a total of 52 children with MBHL and 30 children with normal hearing (NH). The association between demographic and audiological factors and language outcomes was examined. Analyses of variance were conducted to compare the language abilities of HA users, non-HA users, and their NH peers. Furthermore, regression analyses were performed to identify significant predictors of language outcomes. Aided better ear pure-tone average (BEPTA) was significantly correlated with language comprehension scores. Among children with MBHL, those who used HAs outperformed those who did not across all linguistic domains. The language skills of children with MBHL were comparable to those of their peers with NH. The degree of improvement in audibility in terms of aided BEPTA was a significant predictor of language comprehension. It is noteworthy that 50% of the parents expressed reluctance regarding HA use for their children with MBHL. The findings highlight the positive impact of HA usage on language development in this population. Professionals may therefore consider HAs as a viable treatment option for children with MBHL, especially when there is a potential risk of language delay due to hearing loss. It was observed that 25% of the children with MBHL had late-onset hearing loss. Consequently, the implementation of preschool screening or a listening performance checklist is recommended to facilitate early detection.


Subject(s)
Child Language , Hearing Aids , Hearing Loss, Bilateral , Language Development , Humans , Male , Child, Preschool , Female , Hearing Loss, Bilateral/rehabilitation , Hearing Loss, Bilateral/diagnosis , Hearing Loss, Bilateral/physiopathology , Hearing Loss, Bilateral/psychology , Speech Perception , Case-Control Studies , Correction of Hearing Impairment/instrumentation , Treatment Outcome , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Severity of Illness Index , Comprehension , Hearing , Audiometry, Pure-Tone , Age Factors , Auditory Threshold , Language Tests
2.
Codas ; 36(4): e20230268, 2024.
Article in Portuguese, English | MEDLINE | ID: mdl-38775528

ABSTRACT

PURPOSE: To check the lexical repertoire of Brazilian Portuguese-speaking children at 24 and 30 months of age and the association between the number of words spoken and the following variables: socioeconomic status, parents' education, presence of siblings in the family, whether or not they attend school, and excessive use of tablets and cell phones. METHODS: 30 parents of children aged 24 months living in the state of São Paulo participated in the study. Using videoconferencing platforms, they underwent a speech-language pathology anamnesis and an interview with social services, and then completed the "MacArthur Communicative Development Inventory - First Words and Gestures" when their children were 24 and 30 months old. Inductive inferential statistics, both quantitative and qualitative, were applied. RESULTS: The median number of words produced was 283 at 24 months and 401 at 30 months, indicating an increase of around 118 words after six months. Attending a school environment had a significant relationship with increased vocabulary. CONCLUSION: The study reinforces the fact that vocabulary grows with age and corroborates the fact that children aged 24 months already have a repertoire greater than 50 words. Those who attend school every day produce at least 70 more words than those who do not.


Subject(s)
Vocabulary , Humans , Brazil , Child, Preschool , Female , Male , Language Development , Socioeconomic Factors , Child Language
3.
Int J Pediatr Otorhinolaryngol ; 180: 111968, 2024 May.
Article in English | MEDLINE | ID: mdl-38714045

ABSTRACT

AIM & OBJECTIVES: The study aimed to compare P1 latency and P1-N1 amplitude with receptive and expressive language ages in children using a cochlear implant (CI) in one ear and a hearing aid (HA) in the non-implanted ear. METHODS: The study included 30 children, consisting of 18 males and 12 females, aged between 48 and 96 months. The age at which the children received the CI ranged from 42 to 69 months. A within-subject research design was utilized, and participants were selected through purposive sampling. Auditory late latency responses (ALLR) were assessed using the Intelligent hearing system to measure P1 latency and P1-N1 amplitude. The assessment checklist for speech-language skills (ACSLS) was employed to evaluate receptive and expressive language age. Both assessments were conducted after cochlear implantation. RESULTS: A total of 30 children participated in the study, with a mean implant age of 20.03 months (SD: 8.14 months). The mean P1 latency and P1-N1 amplitude were 129.50 ms (SD: 15.05 ms) and 6.93 µV (SD: 2.24 µV), respectively. Correlation analysis revealed no significant association between ALLR measures and receptive or expressive language ages. However, there was a significant negative correlation between P1 latency and implant age (Spearman's rho = -0.371, p = 0.043). CONCLUSIONS: The study suggests that P1 latency, which is indicative of auditory maturation, may not be a reliable marker for predicting language outcomes. It can be concluded that language development is likely to be influenced by other factors beyond auditory maturation alone.


Subject(s)
Cochlear Implants , Language Development , Humans , Male , Female , Child, Preschool , Child , Cochlear Implantation/methods , Reaction Time/physiology , Deafness/surgery , Deafness/rehabilitation , Evoked Potentials, Auditory/physiology , Age Factors , Speech Perception/physiology
5.
Codas ; 36(3): e20230159, 2024.
Article in English | MEDLINE | ID: mdl-38695437

ABSTRACT

PURPOSE: Overuse of screen-based devices has been associated with developmental problems in children. Parents are an integral part of children's language development. The present study explores parental perspectives on the impact of screen time on the language skills of typically developing school-going children, using a questionnaire developed for this purpose. METHODS: 192 parents of typically developing children between 6 and 10 years of age participated in the study. Phase 1 of the study included the development of a questionnaire targeting the impact of screen devices on language development. In Phase 2, the questionnaire was converted into an online survey and circulated among the parents. Descriptive statistics were performed on the retrieved data, and a chi-square test was used to determine the association between the use of screen devices and each language parameter. RESULTS: Parents reported television and smartphones to be the most used types of device, with a large proportion of children using screen-based devices for 1-2 hours per day. Most parents reported that children prefer watching screens mainly for entertainment purposes, occasionally under supervision, without depending on them as potential rewards. The impact of screen-based devices on language skills is discussed under the semantic, syntactic, and pragmatic aspects of language. CONCLUSION: The findings of this study will help identify existing trends in the use of screen-based devices by children, thereby identifying potential contributing factors to language delays. This information will also aid parental counselling during the intervention planning of children with language delays.


Subject(s)
Language Development , Parents , Screen Time , Humans , Child , Female , Male , Surveys and Questionnaires , India , Television , Adult , Smartphone
6.
Am Ann Deaf ; 168(5): 258-273, 2024.
Article in English | MEDLINE | ID: mdl-38766938

ABSTRACT

Little information is available on d/Deaf and hard of hearing (d/DHH) learners' L2 development. Their limited auditory access may discourage them from taking standardized tests, highlighting the need for alternative ways of assessing their L2 development and proficiency. Therefore, this study suggests adopting processability theory, which posits a universal order of L2 development. Interviews with d/DHH learners and their teachers were conducted to explore their current difficulties with regard to understanding their L2 development. We also conducted brief speaking tasks to suggest alternative ways of testing the L2 development of learners who are d/DHH in comparison to typical hearing learners. The results showed that d/DHH students' L2 developmental patterns are similar to those of typical hearing peers, suggesting that d/DHH students and hearing learners share difficulties in similar areas when learning English. Teachers highlighted the lack of appropriate English tests to determine d/DHH students' L2 development.


Subject(s)
Education of Hearing Disabled , Multilingualism , Humans , Education of Hearing Disabled/methods , Female , Male , Adolescent , Persons With Hearing Impairments/psychology , Students/psychology , Child , Language Tests , Deafness/psychology , Language Development , Comprehension
7.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38771241

ABSTRACT

The functional brain connectome is highly dynamic over time. However, how brain connectome dynamics evolves during the third trimester of pregnancy and is associated with later cognitive growth remains unknown. Here, we use resting-state functional magnetic resonance imaging (fMRI) data from 39 newborns aged 32 to 42 postmenstrual weeks to investigate the maturation process of connectome dynamics and its role in predicting neurocognitive outcomes at 2 years of age. Neonatal brain dynamics is assessed using a multilayer network model. Network dynamics decreases globally but increases in both modularity and diversity with development. Regionally, module switching decreases with development primarily in the lateral precentral gyrus, medial temporal lobe, and subcortical areas, with a higher growth rate in primary regions than in association regions. Support vector regression reveals that neonatal connectome dynamics is predictive of individual cognitive and language abilities at 2 years of age. Our findings highlight network-level neural substrates underlying early cognitive development.


Subject(s)
Brain , Cognition , Connectome , Magnetic Resonance Imaging , Humans , Connectome/methods , Female , Male , Magnetic Resonance Imaging/methods , Cognition/physiology , Infant, Newborn , Brain/growth & development , Brain/diagnostic imaging , Brain/physiology , Child, Preschool , Language Development , Child Development/physiology
8.
JAMA Netw Open ; 7(5): e2410721, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38753331

ABSTRACT

Importance: Preterm children are at risk for neurodevelopmental impairments. Objective: To evaluate the effect of a music therapy (MT) intervention (parent-led, infant-directed singing) for premature children during the neonatal intensive care unit (NICU) stay and/or after hospital discharge on language development at 24 months' corrected age (CA). Design, Setting, and Participants: This predefined secondary analysis followed participants in the LongSTEP (Longitudinal Study of Music Therapy's Effectiveness for Premature Infants and Their Caregivers) randomized clinical trial, which was conducted from August 2018 to April 2022 in 8 NICUs across 5 countries (Argentina, Colombia, Israel, Norway, and Poland) and included clinic follow-up visits and extended interventions after hospital discharge. Participants were children born preterm (<35 weeks' gestation) and their parents. Intervention: Participants were randomized at enrollment to MT with standard care (SC) or SC alone; they were randomized to MT or SC again at discharge. The MT was parent-led, infant-directed singing tailored to infant responses, supported by a music therapist, and provided 3 times weekly in the NICU and/or in 7 sessions across 6 months after discharge. The SC consisted of early intervention methods of medical, nursing, and social services, without MT. Main Outcomes and Measures: The primary outcome was language development, as measured by the Bayley Scales of Infant and Toddler Development, Third Edition (BSID-III) language composite score, with the remaining BSID-III composite and subscale scores as the secondary outcomes. Group differences in treatment effects were assessed using linear mixed-effects models on all available data. Results: Of 206 participants (103 female infants [50%]; mean [SD] gestational age, 30.5 [2.7] weeks), 51 were randomized to MT and 53 to SC at enrollment; at discharge, 52 were randomized to MT and 50 to SC. A total of 112 (54%) were retained at the 24 months' CA follow-up. Most participants (79 [70%] to 93 [83%]) had BSID-III scores in the normal range (≥85). Mean differences for the language composite score were -2.36 (95% CI, -12.60 to 7.88; P = .65) for the MT at NICU with postdischarge SC group, 2.65 (95% CI, -7.94 to 13.23; P = .62) for the SC at NICU and postdischarge MT group, and -3.77 (95% CI, -13.97 to 6.43; P = .47) for the group receiving MT both at the NICU and postdischarge. There were no significant effects for cognitive or motor development. Conclusions and Relevance: This secondary analysis did not confirm an effect of parent-led, infant-directed singing on neurodevelopment in preterm children at 24 months' CA; wide CIs suggest, however, that potential effects cannot be excluded. Future research should determine the MT approaches, implementation time, and duration that are effective in targeting children at risk for neurodevelopmental impairments, and should introduce broader measurements of changes in brain development. Trial Registration: ClinicalTrials.gov Identifier: NCT03564184.


Subject(s)
Infant, Premature , Music Therapy , Humans , Music Therapy/methods , Female , Male , Infant, Newborn , Infant , Intensive Care Units, Neonatal , Child, Preschool , Language Development , Longitudinal Studies , Child Development/physiology , Neurodevelopmental Disorders/prevention & control , Colombia , Norway , Israel
9.
Autism Res ; 17(5): 989-1000, 2024 May.
Article in English | MEDLINE | ID: mdl-38690644

ABSTRACT

Prior work has examined how minimally verbal (MV) children with autism use their gestural communication during social interactions. However, interactions are exchanges between social partners. Examining parent-child social interactions is critically important given the influence of parent responsivity on children's communicative development. Specifically, parent responses that are semantically contingent on the child's communication play an important role in further shaping children's language learning. This study examines whether MV autistic children's (N = 47; 48-95 months; 10 females) modality and form of communication are associated with parent responsivity during an in-home parent-child interaction (PCI). The PCI was collected using natural language sampling methods and coded for child modality and form of communication and for parent responses. Findings from Kruskal-Wallis H tests revealed no significant difference in parent semantically contingent responses based on child communication modality (spoken language, gesture, gesture-speech combinations, and AAC) or form of communication (precise vs. imprecise). The findings highlight the importance of examining multiple modalities and forms of communication in MV children with autism to obtain a more comprehensive understanding of their communication abilities, and they support the use of interactionist models of communication to examine how children's input shapes parent responses and, in turn, language learning experiences.


Subject(s)
Autistic Disorder , Communication , Parent-Child Relations , Humans , Female , Male , Child , Child, Preschool , Autistic Disorder/psychology , Gestures , Parents , Language Development , Speech
10.
Cogn Sci ; 48(5): e13448, 2024 05.
Article in English | MEDLINE | ID: mdl-38742768

ABSTRACT

Interpreting a seemingly simple function word like "or," "behind," or "more" can require logical, numerical, and relational reasoning. How are such words learned by children? Prior acquisition theories have often relied on positing a foundation of innate knowledge. Yet recent neural-network-based visual question answering models apparently can learn to use function words as part of answering questions about complex visual scenes. In this paper, we study what these models learn about function words, in the hope of better understanding how the meanings of these words can be learned by both models and children. We show that recurrent models trained on visually grounded language learn gradient semantics for function words requiring spatial and numerical reasoning. Furthermore, we find that these models can learn the meanings of the logical connectives "and" and "or" without any prior knowledge of logical reasoning, and we find early evidence that they are sensitive to alternative expressions when interpreting language. Finally, we show that word learning difficulty depends on the frequency of words in the models' input. Our findings offer proof-of-concept evidence that it is possible to learn the nuanced interpretations of function words in a visually grounded context using non-symbolic, general statistical learning algorithms, without any prior knowledge of linguistic meaning.


Subject(s)
Language , Learning , Humans , Semantics , Language Development , Neural Networks, Computer , Child , Logic
11.
Ugeskr Laeger ; 186(18)2024 Apr 29.
Article in Danish | MEDLINE | ID: mdl-38704717

ABSTRACT

Ankyloglossia, or tongue-tie, is a condition in which anatomical variation of the sublingual frenulum can limit normal tongue function. In Denmark, as in other countries, an increase in the number of children treated for ankyloglossia has been described over the past years. Whether ankyloglossia and its release affect speech has also been increasingly discussed on Danish television and social media. In this review, the possible connection between ankyloglossia, its surgical treatment, and speech development in children is discussed.


Subject(s)
Ankyloglossia , Humans , Ankyloglossia/surgery , Child , Language Development , Tongue/surgery , Lingual Frenum/surgery , Lingual Frenum/abnormalities , Speech , Infant
13.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38566510

ABSTRACT

Statistical learning (SL) is the ability to detect and learn regularities from input and is foundational to language acquisition. Despite the dominant role of SL as a theoretical construct for language development, there is a lack of direct evidence supporting the shared neural substrates underlying language processing and SL. It is also not clear whether the similarities, if any, are related to linguistic processing, or statistical regularities in general. The current study tests whether the brain regions involved in natural language processing are similarly recruited during auditory, linguistic SL. Twenty-two adults performed an auditory linguistic SL task, an auditory nonlinguistic SL task, and a passive story listening task as their neural activation was monitored. Within the language network, the left posterior temporal gyrus showed sensitivity to embedded speech regularities during auditory, linguistic SL, but not auditory, nonlinguistic SL. Using a multivoxel pattern similarity analysis, we uncovered similarities between the neural representation of auditory, linguistic SL, and language processing within the left posterior temporal gyrus. No other brain regions showed similarities between linguistic SL and language comprehension, suggesting that a shared neurocomputational process for auditory SL and natural language processing within the left posterior temporal gyrus is specific to linguistic stimuli.


Subject(s)
Learning , Speech Perception , Adult , Humans , Language , Linguistics , Language Development , Brain , Speech Perception/physiology , Brain Mapping , Magnetic Resonance Imaging
14.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38566511

ABSTRACT

This study investigates neural processes in infant speech processing, with a focus on left frontal brain regions and hemispheric lateralization in Mandarin-speaking infants' acquisition of native tonal categories. We tested 2- to 6-month-old Mandarin learners to explore age-related improvements in tone discrimination, the role of inferior frontal regions in abstract speech category representation, and left-hemisphere lateralization during tone processing. Using a block design, we presented the four Mandarin tones via the syllable [ta] and measured oxygenated hemoglobin concentration with functional near-infrared spectroscopy. Results showed age-related improvements in tone discrimination, greater involvement of frontal regions in older infants (indicating the development of abstract tonal representations), and increased bilateral activation mirroring that of native adult Mandarin speakers. These findings contribute to our broader understanding of the relationship between native speech acquisition and infant brain development during the critical period of early language learning.


Subject(s)
Speech Perception , Speech , Adult , Infant , Humans , Aged , Speech Perception/physiology , Pitch Perception/physiology , Language Development , Brain/diagnostic imaging , Brain/physiology
15.
PLoS One ; 19(4): e0301144, 2024.
Article in English | MEDLINE | ID: mdl-38625962

ABSTRACT

INTRODUCTION: Noise exposure during pregnancy may affect a child's auditory system, which may disturb fetal learning and language development. We examined the impact of occupational noise exposure during pregnancy on children's language acquisition at the age of one. METHODS: A cohort study was conducted among women working in the food industry, as kindergarten teachers, musicians, dental nurses, or pharmacists who had a child aged <1 year. The analyses covered 408 mother-child pairs. Language acquisition was measured using the Infant-Toddler Checklist. An occupational hygienist individually assessed noise exposure as no (N = 180), low (70-78 dB; N = 108), or moderate/high (>79 dB; N = 120) exposure. RESULTS: Among the boys, the adjusted mean differences in language acquisition scores were -0.4 (95% CI -2.5, 1.8) for low and -0.7 (95% CI -2.9, 1.4) for moderate/high exposure compared to no exposure. Among the girls, the respective differences were +0.1 (95% CI -2.2, 2.5) and -0.1 (95% CI -2.3, 2.2). Among the children of kindergarten teachers, who were mainly exposed to human noise, low or moderate exposure was associated with lower language acquisition scores: the adjusted mean differences were -3.8 (95% CI -7.2, -0.4) for low and -4.9 (95% CI -8.6, -1.2) for moderate exposure. CONCLUSIONS: In general, we did not detect an association between maternal noise exposure and children's language acquisition among one-year-old children. However, the children of kindergarten teachers exposed to human noise had lower language acquisition scores than the children of the non-exposed participants. These suggestive findings merit further investigation by level and type of exposure.


Subject(s)
Noise, Occupational , Occupational Exposure , Male , Pregnancy , Infant , Humans , Female , Cohort Studies , Noise, Occupational/adverse effects , Language Development , Maternal Exposure/adverse effects
16.
BMC Public Health ; 24(1): 1050, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622610

ABSTRACT

BACKGROUND: Despite young children's widespread use of mobile devices, little research exists on this use and its association with children's language development. The aim of this study was to examine the associations between mobile device screen time and language comprehension and expressive language skills. An additional aim was to examine whether three factors related to the domestic learning environment modify the associations. METHODS: The study uses data from the Danish large-scale survey TRACES among two- and three-year-old children (n = 31,125). Mobile device screen time was measured as time spent on mobile devices on a normal day. Measurement of language comprehension and expressive language skills was based on subscales from the Five to Fifteen Toddlers questionnaire. Multivariable linear regression was used to examine the association between child mobile device screen time and language development and logistic regression to examine the risk of experiencing significant language difficulties. Joint exposure analyses were used to examine the association between child mobile device screen time and language development difficulties in combination with three other factors related to the domestic learning environment: parental education, reading to the child and child TV/PC screen time. RESULTS: High mobile device screen time of one hour or more per day was significantly associated with poorer language development scores and higher odds for both language comprehension difficulties (1-2 h: AOR = 1.30; ≥ 2 h: AOR = 1.42) and expressive language skills difficulties (1-2 h: AOR = 1.19; ≥ 2 h: AOR = 1.46). The results suggest that reading frequently to the child partly buffers the negative effect of high mobile device screen time on language comprehension difficulties but not on expressive language skills difficulties. No modifying effect of parental education and time spent by the child on TV/PC was found. 
CONCLUSIONS: Mobile device screen time of one hour or more per day is associated with poorer language development among toddlers. Reading frequently to the child may have a buffering effect on language comprehension difficulties but not on expressive language skills difficulties.


Subject(s)
Language Development Disorders , Screen Time , Humans , Child, Preschool , Language Development Disorders/epidemiology , Language Development , Computers, Handheld , Surveys and Questionnaires
17.
Proc Natl Acad Sci U S A ; 121(18): e2312323121, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38621117

ABSTRACT

Zebra finches, a species of songbirds, learn to sing by creating an auditory template through the memorization of model songs (sensory learning phase) and subsequently translating these perceptual memories into motor skills (sensorimotor learning phase). It has been traditionally believed that babbling in juvenile birds initiates the sensorimotor phase while the sensory phase of song learning precedes the onset of babbling. However, our findings challenge this notion by demonstrating that testosterone-induced premature babbling actually triggers the onset of the sensory learning phase instead. We reveal that juvenile birds must engage in babbling and self-listening to acquire the tutor song as the template. Notably, the sensory learning of the template in songbirds requires motor vocal activity, reflecting the observation that prelinguistic babbling in humans plays a crucial role in auditory learning for language acquisition.


Subject(s)
Finches , Animals , Humans , Vocalization, Animal , Learning , Language Development
18.
PLoS One ; 19(4): e0300382, 2024.
Article in English | MEDLINE | ID: mdl-38625991

ABSTRACT

The neural processes underpinning cognition and language development in infancy are of great interest. We investigated EEG power and coherence in infancy, as reflections of the underlying cortical function of single brain regions and of cross-region connectivity, and their relations to cognition and early precursors of speech and language development. EEG recordings were longitudinally collected from 21 infants with typical development between approximately 1 and 7 months of age. We investigated relative band power at 3-6 Hz and 6-9 Hz and EEG coherence in these frequency ranges at 25 electrode pairs covering key brain regions. A correlation analysis was performed to assess the relationship between the EEG measurements, across frequency bands and brain regions, and raw Bayley cognitive and language developmental scores. In the first months of life, relative band power is not correlated with the cognitive and language scales. However, 3-6 Hz coherence between frontoparietal regions is negatively correlated with receptive language scores, and 6-9 Hz coherence between frontoparietal regions is negatively correlated with expressive language scores. The results from this preliminary study contribute to the existing literature on the relationship between electrophysiological development, cognition, and early speech precursors in this age group. Future work should create norm references of early development in these domains that can be compared with infants at risk for neurodevelopmental disabilities.


Subject(s)
Electroencephalography , Speech , Infant , Humans , Electroencephalography/methods , Language Development , Cognition/physiology , Brain
19.
Cogn Sci ; 48(4): e13435, 2024 04.
Article in English | MEDLINE | ID: mdl-38564253

ABSTRACT

General principles of human cognition can help to explain why languages are more likely to have certain characteristics than others: structures that are difficult to process or produce will tend to be lost over time. One aspect of cognition implicated in language use is working memory, the component of short-term memory used for temporary storage and manipulation of information. In this study, we consider the relationship between working memory and the regularization of linguistic variation. Regularization is a well-documented process whereby languages become less variable (on some dimension) over time. This process has been argued to be driven by the behavior of individual language users, but the specific mechanism is not agreed upon. Here, we use an artificial language learning experiment to investigate whether limitations in working memory during either language learning or language production drive regularization behavior. We find that taxing working memory during production results in the loss of all types of variation, but the process by which random variation becomes more predictable is better explained by learning biases. A computational model offers a potential explanation for the production effect using a simple self-priming mechanism.


Subject(s)
Language , Learning , Humans , Language Development , Memory, Short-Term , Cognition
20.
Cogn Sci ; 48(4): e13431, 2024 04.
Article in English | MEDLINE | ID: mdl-38622981

ABSTRACT

Prediction-based accounts of language acquisition have the potential to explain several different effects in child language acquisition and adult language processing. However, evidence regarding the developmental predictions of such accounts is mixed. Here, we consider several predictions of these accounts in two large-scale developmental studies of syntactic priming of the English dative alternation. Study 1 was a cross-sectional study (N = 140) of children aged 3-9 years, in which we found strong evidence of abstract priming and the lexical boost, but little evidence that either effect was moderated by age. We found weak evidence for a prime surprisal effect; however, exploratory analyses revealed a protracted developmental trajectory for verb-structure biases, providing an explanation as to why prime surprisal effects are more elusive in developmental populations. In a longitudinal study (N = 102) of children in tightly controlled age bands at 42, 48, and 54 months, we found that priming effects emerged early on trials with verb overlap but did not observe clear evidence of priming on trials without verb overlap until 54 months. There was no evidence of a prime surprisal effect at any time point, and none of the effects were moderated by age. The results relating to the emergence of the abstract priming and lexical boost effects are consistent with prediction-based models, while the absence of age-related effects appears to reflect the structure-specific challenges the dative presents to English-acquiring children. Overall, our complex pattern of findings demonstrates the value of developmental data sets in testing psycholinguistic theory.


Subject(s)
Language , Psycholinguistics , Adult , Child , Humans , Cross-Sectional Studies , Longitudinal Studies , Language Development