Results 1 - 20 of 907

1.
Proc Natl Acad Sci U S A ; 121(18): e2312323121, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38621117

ABSTRACT

Zebra finches, a species of songbirds, learn to sing by creating an auditory template through the memorization of model songs (sensory learning phase) and subsequently translating these perceptual memories into motor skills (sensorimotor learning phase). It has been traditionally believed that babbling in juvenile birds initiates the sensorimotor phase while the sensory phase of song learning precedes the onset of babbling. However, our findings challenge this notion by demonstrating that testosterone-induced premature babbling actually triggers the onset of the sensory learning phase instead. We reveal that juvenile birds must engage in babbling and self-listening to acquire the tutor song as the template. Notably, the sensory learning of the template in songbirds requires motor vocal activity, reflecting the observation that prelinguistic babbling in humans plays a crucial role in auditory learning for language acquisition.


Subject(s)
Finches; Animals; Humans; Vocalization, Animal; Learning; Language Development
2.
Proc Natl Acad Sci U S A ; 121(38): e2321008121, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39254996

ABSTRACT

We know little about the mechanisms through which leader-follower dynamics during dyadic play shape infants' language acquisition. We hypothesized that infants' decisions to visually explore a specific object signal focal increases in endogenous attention, and that when caregivers respond to these proactive behaviors by naming the object it boosts infants' word learning. To examine this, we invited caregivers and their 14-mo-old infants to play with novel objects, before testing infants' retention of the novel object-label mappings. Meanwhile, their electroencephalograms were recorded. Results showed that infants' proactive looks toward an object during play were associated with greater neural signatures of endogenous attention. Furthermore, when caregivers named objects during these episodes, infants showed greater word learning, but only when caregivers also joined their focus of attention. Our findings support the idea that infants' proactive visual explorations guide their acquisition of a lexicon.


Subject(s)
Language Development; Humans; Infant; Female; Male; Attention/physiology; Social Interaction; Electroencephalography; Verbal Learning/physiology; Learning/physiology
3.
Proc Natl Acad Sci U S A ; 120(1): e2209153119, 2023 01 03.
Article in English | MEDLINE | ID: mdl-36574655

ABSTRACT

In the second year of life, infants begin to rapidly acquire the lexicon of their native language. A key learning mechanism underlying this acceleration is syntactic bootstrapping: the use of hidden cues in grammar to facilitate vocabulary learning. How infants forge the syntactic-semantic links that underlie this mechanism, however, remains speculative. A hurdle for theories is identifying computationally light strategies that have high precision within the complexity of the linguistic signal. Here, we presented 20-mo-old infants with novel grammatical elements in a complex natural language environment and measured their resultant vocabulary expansion. We found that infants can learn and exploit a natural language syntactic-semantic link in less than 30 min. The rapid speed of acquisition of a new syntactic bootstrap indicates that even emergent syntactic-semantic links can accelerate language learning. The results suggest that infants employ a cognitive network of efficient learning strategies to self-supervise language development.


Subject(s)
Learning; Semantics; Humans; Infant; Language; Vocabulary; Linguistics; Language Development
4.
Proc Natl Acad Sci U S A ; 119(38): e2123230119, 2022 09 20.
Article in English | MEDLINE | ID: mdl-36095175

ABSTRACT

At birth, infants discriminate most of the sounds of the world's languages, but by age 1, infants become language-specific listeners. This has generally been taken as evidence that infants have learned which acoustic dimensions are contrastive, or useful for distinguishing among the sounds of their language(s), and have begun focusing primarily on those dimensions when perceiving speech. However, speech is highly variable, with different sounds overlapping substantially in their acoustics, and after decades of research, we still do not know what aspects of the speech signal allow infants to differentiate contrastive from noncontrastive dimensions. Here we show that infants could learn which acoustic dimensions of their language are contrastive, despite the high acoustic variability. Our account is based on the cross-linguistic fact that even sounds that overlap in their acoustics differ in the contexts they occur in. We predict that this should leave a signal that infants can pick up on and show that acoustic distributions indeed vary more by context along contrastive dimensions compared with noncontrastive dimensions. By establishing this difference, we provide a potential answer to how infants learn about sound contrasts, a question whose answer in natural learning environments has remained elusive.
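
To make the proposed signal concrete, here is a minimal Python sketch (simulated measurements, not the authors' corpora or code) of the comparison described above: an acoustic dimension whose distribution shifts across phonological contexts shows higher between-context variability than one whose distribution does not.

```python
# Toy illustration: compare how much an acoustic dimension's distribution
# varies by context for a "contrastive" vs. a "noncontrastive" dimension.
import numpy as np

rng = np.random.default_rng(0)

def context_variability(values, contexts):
    """Variance of per-context means, scaled by pooled within-context variance."""
    ctxs = np.unique(contexts)
    between = np.var([values[contexts == c].mean() for c in ctxs])
    within = np.mean([values[contexts == c].var() for c in ctxs])
    return between / within

n_per_context = 500
contexts = np.repeat(np.arange(4), n_per_context)     # four phonological contexts

# Hypothetical contrastive dimension: its mean shifts substantially with context.
contrastive = rng.normal(loc=contexts * 1.5, scale=1.0)
# Hypothetical noncontrastive dimension: same spread, almost no context effect.
noncontrastive = rng.normal(loc=contexts * 0.1, scale=1.0)

print("contrastive dimension   :", round(context_variability(contrastive, contexts), 2))
print("noncontrastive dimension:", round(context_variability(noncontrastive, contexts), 2))
```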


Subject(s)
Language Development; Speech Perception; Speech; Humans; Infant; Learning
5.
Proc Natl Acad Sci U S A ; 119(18): e2123239119, 2022 05 03.
Article in English | MEDLINE | ID: mdl-35482916

ABSTRACT

Infants begin learning the visual referents of nouns before their first birthday. Despite considerable empirical and theoretical effort, little is known about the statistics of the experiences that enable infants to break into object-name learning. We used wearable sensors to collect infant experiences of visual objects and their heard names for 40 early-learned categories. The analyzed data were from one context that occurs multiple times a day and includes objects with early-learned names: mealtime. The statistics reveal two distinct timescales of experience. At the timescale of many mealtime episodes (n = 87), the visual categories were pervasively present, but naming of the objects in each of those categories was very rare. At the timescale of single mealtime episodes, names and referents did cooccur, but each name-referent pair appeared in very few of the mealtime episodes. The statistics are consistent with incremental learning of visual categories across many episodes and the rapid learning of name-object mappings within individual episodes. The two timescales are also consistent with a known cortical learning mechanism for one-episode learning of associations: new information, the heard name, is incorporated into well-established memories, the seen object category, when the new information cooccurs with the reactivation of that slowly established memory.
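
The two timescales described here can be expressed as simple co-occurrence counts. The sketch below (invented episode data, not the head-camera corpus) separates how often a category is merely visible across episodes from how often it is named while its referent is visible.

```python
# Toy sketch: per-category presence vs. naming-while-present across episodes.
from collections import Counter

# Each episode: categories visible to the infant and categories named aloud.
episodes = [
    {"seen": {"cup", "spoon", "bowl"}, "named": set()},
    {"seen": {"cup", "bowl"},          "named": {"cup"}},
    {"seen": {"cup", "spoon"},         "named": set()},
    {"seen": {"cup", "bowl", "spoon"}, "named": {"bowl"}},
]

seen_counts  = Counter(cat for ep in episodes for cat in ep["seen"])
cooccurrence = Counter(cat for ep in episodes for cat in ep["named"] & ep["seen"])

n = len(episodes)
for cat in sorted(seen_counts):
    print(f"{cat:5s}  visible in {seen_counts[cat]}/{n} episodes, "
          f"named with referent present in {cooccurrence[cat]}/{n}")
```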


Subject(s)
Names; Vocabulary; Humans; Infant; Language; Language Development; Learning
6.
Neuroimage ; 299: 120720, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-38971484

ABSTRACT

This meta-analysis summarizes evidence from 44 neuroimaging experiments and characterizes the general linguistic network in early deaf individuals. Meta-analytic comparisons with hearing individuals found that a specific set of regions (in particular the left inferior frontal gyrus and posterior middle temporal gyrus) participates in supramodal language processing. In addition to previously described modality-specific differences, the present study showed that the left calcarine gyrus and the right caudate were additionally recruited in deaf compared with hearing individuals. The analysis further showed that the bilateral posterior superior temporal gyrus is shaped by cross-modal plasticity, whereas the left frontotemporal areas are shaped by early language experience. Although an overall left-lateralized pattern for language processing was observed in the early deaf individuals, regional lateralization was altered in the inferior frontal gyrus and anterior temporal lobe. These findings indicate that the core language network functions in a modality-independent manner, and provide a foundation for determining the contributions of sensory and linguistic experiences in shaping the neural bases of language processing.


Subject(s)
Deafness; Humans; Deafness/diagnostic imaging; Deafness/physiopathology; Neuroimaging/methods; Nerve Net/diagnostic imaging; Brain Mapping/methods; Brain/diagnostic imaging; Language; Linguistics
7.
Dev Sci ; 27(4): e13502, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38482775

ABSTRACT

It is known that the rhythms of speech are visible on the face, accurately mirroring changes in the vocal tract. These low-frequency visual temporal movements are tightly correlated with speech output, and both visual speech (e.g., mouth motion) and the acoustic speech amplitude envelope entrain neural oscillations. Low-frequency visual temporal information ('visual prosody') is known from behavioural studies to be perceived by infants, but oscillatory studies are currently lacking. Here we measure cortical tracking of low-frequency visual temporal information by 5- and 8-month-old infants using a rhythmic speech paradigm (repetition of the syllable 'ta' at 2 Hz). Eye-tracking data were collected simultaneously with EEG, enabling computation of cortical tracking and phase angle during visual-only speech presentation. Significantly higher power at the stimulus frequency indicated that cortical tracking occurred across both ages. Further, individual differences in preferred phase to visual speech related to subsequent measures of language acquisition. The difference in phase between visual-only speech and the same speech presented as auditory-visual at 6 and 9 months was also examined. These neural data suggest that individual differences in early language acquisition may be related to the phase of entrainment to visual rhythmic input in infancy. RESEARCH HIGHLIGHTS: Infant preferred phase to visual rhythmic speech predicts language outcomes. Significant cortical tracking of visual speech is present at 5 and 8 months. Phase angle to visual speech at 8 months predicted greater receptive and productive vocabulary at 24 months.
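
As a rough illustration of the two EEG quantities mentioned (power at the 2 Hz stimulation rate and phase angle), here is a short Python sketch on a simulated signal. The sampling rate and the signal itself are assumptions; a real analysis would work on epoched, artifact-cleaned infant EEG (e.g., via MNE-Python).

```python
# Minimal sketch: spectral power and phase at the 2 Hz stimulus rate.
import numpy as np

fs = 250.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)     # 20 s of "EEG"
rng = np.random.default_rng(1)
eeg = 2.0 * np.cos(2 * np.pi * 2.0 * t + 0.8) + rng.normal(0, 1.5, t.size)

spectrum = np.fft.rfft(eeg)
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
idx = np.argmin(np.abs(freqs - 2.0))      # frequency bin closest to 2 Hz

power = np.abs(spectrum[idx]) ** 2
phase = np.angle(spectrum[idx])           # preferred phase, in radians
print(f"power at 2 Hz: {power:.1f}, phase angle: {phase:.2f} rad")
```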


Subject(s)
Language Development; Speech Perception; Speech; Humans; Infant; Male; Female; Speech Perception/physiology; Speech/physiology; Electroencephalography; Individuality; Visual Perception/physiology; Eye-Tracking Technology; Acoustic Stimulation; Photic Stimulation
8.
Dev Sci ; 27(5): e13510, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38597678

ABSTRACT

Although identifying the referents of single words is often cited as a key challenge for getting word learning off the ground, it overlooks the fact that young learners consistently encounter words in the context of other words. How does this company help or hinder word learning? Prior investigations into early word learning from children's real-world language input have yielded conflicting results, with some influential findings suggesting an advantage for words that keep a diverse company of other words, and others suggesting the opposite. Here, we sought to triangulate the source of this conflict, comparing different measures of diversity and approaches to controlling for correlated effects of word frequency across multiple languages. The results were striking: while different diversity measures on their own yielded conflicting results, once nonlinear relationships with word frequency were controlled, we found convergent evidence that contextual consistency supports early word learning. RESEARCH HIGHLIGHTS: The words children learn occur in a sea of other words. The company words keep ranges from highly variable to highly consistent and circumscribed. Prior findings conflict over whether variability versus consistency helps early word learning. Accounting for correlated effects of word frequency resolved the conflict across multiple languages. Results reveal convergent evidence that consistency helps early word learning.
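
One way to implement "controlling for nonlinear relationships with word frequency" is to residualize both the diversity measure and the learning outcome against a flexible function of log frequency before correlating them. The sketch below uses simulated words and a polynomial control; it illustrates the general approach, not the paper's analysis.

```python
# Partial-correlation-style check after removing a nonlinear frequency effect.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_words = 400
log_freq = rng.normal(0, 1, n_words)

# Invented generative story: diversity tracks frequency nonlinearly; holding
# frequency fixed, HIGHER diversity goes with LATER acquisition (higher AoA).
div_extra = rng.normal(0, 0.5, n_words)
diversity = 0.8 * log_freq + 0.3 * log_freq**2 + div_extra
aoa = -0.5 * log_freq + 0.4 * div_extra + rng.normal(0, 0.5, n_words)

def residualize(y, x, degree=3):
    """Residuals of y after removing a polynomial function of x."""
    coefs = np.polyfit(x, y, degree)
    return y - np.polyval(coefs, x)

r_raw, _ = pearsonr(diversity, aoa)
r_ctrl, _ = pearsonr(residualize(diversity, log_freq), residualize(aoa, log_freq))
print(f"raw diversity~AoA r = {r_raw:.2f}; frequency-controlled r = {r_ctrl:.2f}")
```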


Subject(s)
Language Development; Verbal Learning; Vocabulary; Humans; Verbal Learning/physiology; Child, Preschool; Female; Male; Learning; Child Language; Language
9.
Dev Sci ; 27(2): e13442, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37612886

ABSTRACT

Psycholinguistic research on children's early language environments has revealed many potential challenges for language acquisition. One is that in many cases, referents of linguistic expressions are hard to identify without prior knowledge of the language. Likewise, the speech signal itself varies substantially in clarity, with some productions being very clear, and others being phonetically reduced, even to the point of uninterpretability. In this study, we sought to better characterize the language-learning environment of American English-learning toddlers by testing how well phonetic clarity and referential clarity align in infant-directed speech. Using an existing Human Simulation Paradigm (HSP) corpus with referential transparency measurements and adding new measures of phonetic clarity, we found that the phonetic clarity of words' first mentions significantly predicted referential clarity (how easy it was to guess the intended referent from visual information alone) at that moment. Thus, when parents' speech was especially clear, the referential semantics were also clearer. This suggests that young children could use the phonetics of speech to identify globally valuable instances that support better referential hypotheses, by homing in on clearer instances and filtering out less-clear ones. Such multimodal "gems" offer special opportunities for early word learning. RESEARCH HIGHLIGHTS: In parent-infant interaction, parents' referential intentions are sometimes clear and sometimes unclear; likewise, parents' pronunciation is sometimes clear and sometimes quite difficult to understand. We find that clearer referential instances go along with clearer phonetic instances, more so than expected by chance. Thus, there are globally valuable instances ("gems") from which children could learn about words' pronunciations and words' meanings at the same time. Homing in on clear phonetic instances and filtering out less-clear ones would help children identify these multimodal "gems" during word learning.
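
The core test, relating a word token's phonetic clarity to whether naive observers could guess its referent, can be illustrated with a simple point-biserial correlation. The sketch below uses invented token scores, not the HSP corpus.

```python
# Toy sketch: does a continuous clarity score predict a binary "guessed" outcome?
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(3)
n_tokens = 200
phonetic_clarity = rng.normal(0, 1, n_tokens)

# Simulated link: clearer tokens are more often referentially transparent.
p_guess = 1 / (1 + np.exp(-(0.9 * phonetic_clarity - 0.5)))
guessed = (rng.random(n_tokens) < p_guess).astype(int)

r, p = pointbiserialr(guessed, phonetic_clarity)
print(f"point-biserial r = {r:.2f} (p = {p:.3f})")
```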


Subject(s)
Speech Perception; Speech; Infant; Humans; Child, Preschool; Phonetics; Language Development; Learning; Language
10.
Dev Sci ; 27(2): e13436, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37551932

ABSTRACT

The environment in which infants learn language is multimodal and rich with social cues. Yet, the effects of such cues, such as eye contact, on early speech perception have not been closely examined. This study assessed the role of ostensive speech, signalled through the speaker's eye gaze direction, on infants' word segmentation abilities. A familiarisation-then-test paradigm was used while electroencephalography (EEG) was recorded. Ten-month-old Dutch-learning infants were familiarised with audio-visual stories in which a speaker recited four sentences with one repeated target word. The speaker addressed them either with direct or with averted gaze while speaking. In the test phase following each story, infants heard familiar and novel words presented via audio-only. Infants' familiarity with the words was assessed using event-related potentials (ERPs). As predicted, infants showed a negative-going ERP familiarity effect to the isolated familiarised words relative to the novel words over the left-frontal region of interest during the test phase. While the word familiarity effect did not differ as a function of the speaker's gaze over the left-frontal region of interest, there was also a (not predicted) positive-going early ERP familiarity effect over right fronto-central and central electrodes in the direct gaze condition only. This study provides electrophysiological evidence that infants can segment words from audio-visual speech, regardless of the ostensiveness of the speaker's communication. However, the speaker's gaze direction seems to influence the processing of familiar words. RESEARCH HIGHLIGHTS: We examined 10-month-old infants' ERP word familiarity response using audio-visual stories, in which a speaker addressed infants with direct or averted gaze while speaking. Ten-month-old infants can segment and recognise familiar words from audio-visual speech, indicated by their negative-going ERP response to familiar, relative to novel, words. This negative-going ERP word familiarity effect was present for isolated words over left-frontal electrodes regardless of whether the speaker offered eye contact while speaking. An additional positivity in response to familiar words was observed for direct gaze only, over right fronto-central and central electrodes.
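
The ERP familiarity effect described here boils down to comparing mean amplitude for familiar versus novel words over an electrode cluster and time window. The sketch below uses random numbers in place of real EEG; the channel names, analysis window, and array shapes are assumptions.

```python
# Schematic ERP comparison: mean amplitude over an ROI and time window.
import numpy as np

rng = np.random.default_rng(4)
fs = 500                                     # Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / fs)         # epoch from -200 to 800 ms
channels = ["F3", "F7", "FC5", "C3"]         # assumed left-frontal cluster
n_trials = 40

# epochs[condition] has shape (trials, channels, samples); random stand-in data.
epochs = {
    "familiar": rng.normal(-1.0, 5.0, (n_trials, len(channels), times.size)),
    "novel":    rng.normal( 0.0, 5.0, (n_trials, len(channels), times.size)),
}

window = (times >= 0.3) & (times <= 0.5)     # 300-500 ms analysis window
roi = [0, 1, 2]                              # indices of the frontal channels

def mean_amplitude(data):
    """Average over trials, ROI channels, and the analysis window."""
    return data[:, roi, :][:, :, window].mean()

effect = mean_amplitude(epochs["familiar"]) - mean_amplitude(epochs["novel"])
print(f"familiarity effect (familiar - novel): {effect:.2f} µV")
```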


Subject(s)
Speech Perception; Speech; Infant; Humans; Speech/physiology; Fixation, Ocular; Language; Evoked Potentials/physiology; Speech Perception/physiology
11.
Dev Sci ; : e13551, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39036879

ABSTRACT

Test-retest reliability-establishing that measurements remain consistent across multiple testing sessions-is critical to measuring, understanding, and predicting individual differences in infant language development. However, previous attempts to establish measurement reliability in infant speech perception tasks are limited, and reliability of frequently used infant measures is largely unknown. The current study investigated the test-retest reliability of infants' preference for infant-directed speech over adult-directed speech in a large sample (N = 158) in the context of the ManyBabies1 collaborative research project. Labs were asked to bring in participating infants for a second appointment retesting infants on their preference for infant-directed speech. This approach allowed us to estimate test-retest reliability across three different methods used to investigate preferential listening in infancy: the head-turn preference procedure, central fixation, and eye-tracking. Overall, we found no consistent evidence of test-retest reliability in measures of infants' speech preference (overall r = 0.09, 95% CI [-0.06,0.25]). While increasing the number of trials that infants needed to contribute for inclusion in the analysis revealed a numeric growth in test-retest reliability, it also considerably reduced the study's effective sample size. Therefore, future research on infant development should take into account that not all experimental measures may be appropriate for assessing individual differences between infants. RESEARCH HIGHLIGHTS: We assessed test-retest reliability of infants' preference for infant-directed over adult-directed speech in a large pre-registered sample (N = 158). There was no consistent evidence of test-retest reliability in measures of infants' speech preference. Applying stricter criteria for the inclusion of participants may lead to higher test-retest reliability, but at the cost of substantial decreases in sample size. Developmental research relying on stable individual differences should consider the underlying reliability of its measures.
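
Test-retest reliability of this kind is simply the correlation between the same infants' scores at the two visits. A minimal sketch, with simulated preference scores and a Fisher-z confidence interval in the style of the interval reported above:

```python
# Pearson test-retest correlation with a Fisher-z 95% confidence interval.
import numpy as np
from scipy.stats import pearsonr

def retest_reliability(session1, session2):
    r, _ = pearsonr(session1, session2)
    n = len(session1)
    z = np.arctanh(r)                        # Fisher z-transform
    se = 1 / np.sqrt(n - 3)
    lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
    return r, (lo, hi)

rng = np.random.default_rng(5)
n_infants = 158
s1 = rng.normal(0, 1, n_infants)               # visit-1 preference score (simulated)
s2 = 0.1 * s1 + rng.normal(0, 1, n_infants)    # weakly related visit-2 score
r, ci = retest_reliability(s1, s2)
print(f"test-retest r = {r:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```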

12.
Cereb Cortex ; 33(11): 6872-6890, 2023 05 24.
Article in English | MEDLINE | ID: mdl-36807501

ABSTRACT

Although teaching animals a few meaningful signs is usually time-consuming, children acquire words easily after only a few exposures, a phenomenon termed "fast-mapping." Meanwhile, most neural network learning algorithms fail to achieve reliable information storage quickly, raising the question of whether a mechanistic explanation of fast-mapping is possible. Here, we applied brain-constrained neural models mimicking fronto-temporal-occipital regions to simulate key features of semantic associative learning. We compared networks (i) with prior encounters with phonological and conceptual knowledge, as claimed by fast-mapping theory, and (ii) without such prior knowledge. Fast-mapping simulations showed word-specific representations to emerge quickly after 1-10 learning events, whereas direct word learning showed word-meaning mappings only after 40-100 events. Furthermore, hub regions appeared to be essential for fast-mapping, and attention facilitated it, but was not strictly necessary. These findings provide a better understanding of the critical mechanisms underlying the human brain's unique ability to acquire new words rapidly.
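
As a cartoon of the comparison being simulated (and emphatically not the brain-constrained model used in the paper), the sketch below strengthens a single word-meaning link by Hebbian co-activation and shows why already-consolidated word-form and concept representations let the link reach a retrieval threshold after far fewer learning events.

```python
# Cartoon of fast-mapping vs. learning from scratch via Hebbian co-activation.
def events_to_learn(prior_knowledge, threshold=1.0, lr=0.1):
    # Consolidated (previously encountered) representations activate strongly
    # from the first exposure; unfamiliar ones start weak and consolidate slowly.
    word_act = concept_act = 1.0 if prior_knowledge else 0.1
    link = 0.0
    for event in range(1, 1001):
        link += lr * word_act * concept_act          # Hebbian co-activation update
        if not prior_knowledge:                      # representations still consolidating
            word_act = min(1.0, word_act + 0.02)
            concept_act = min(1.0, concept_act + 0.02)
        if link >= threshold:
            return event
    return None

print("with prior phonological/conceptual knowledge:", events_to_learn(True), "events")
print("learning everything from scratch            :", events_to_learn(False), "events")
```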


Subject(s)
Brain; Semantics; Child; Humans; Linguistics; Brain Mapping; Occipital Lobe
13.
J Exp Child Psychol ; 239: 105805, 2024 03.
Article in English | MEDLINE | ID: mdl-37944290

ABSTRACT

As children learn to communicate with others, they must develop an understanding of the principles that underlie human communication. Recent evidence suggests that adults expect communicative principles to govern all forms of communication, not just language, but evidence about children's ability to do so is sparse. This study investigated whether preschool children expect both pictures and words to adhere to the communicative principle of quantity using a simple matched paradigm. Children (N = 293) aged 3 to 5 years (52.5% male and 47.5% female; majority White with college-educated mothers) participated. Results show that children as young as 3.5 years can use the communicative principle of quantity to infer meaning across verbal and pictorial alternatives.


Subject(s)
Communication; Language; Adult; Humans; Male; Female; Child, Preschool; Aged; Learning; Mothers; Language Development
14.
J Exp Child Psychol ; 247: 106057, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39226857

ABSTRACT

Negation-triggered inferences are universal across human languages. Hearing "This is not X" should logically lead to the inference that all elements other than X constitute possible alternatives. However, not all logically possible alternatives are equally accessible in the real world. To qualify as a plausible alternative, it must share with the negated element as many similarities as possible, and the most plausible one is often from the same taxonomic category as the negated element. The current article reports on two experiments that investigated the development of preschool children's ability to infer plausible alternatives triggered by negation. Experiment 1 showed that in a context where children were required to determine the most plausible alternative to the negated element, the 4- and 5-year-olds, but not the 3-year-olds, exhibited a robust preference for the taxonomic associates. Experiment 2 further demonstrated that the 3-, 4- and 5-year-olds considered all the complement set members as equally possible alternatives in a context where they were not explicitly required to evaluate the plausibility of different candidates. Taken together, our findings reveal interesting developmental continuity in preschool children's ability to make inferences about plausible alternatives triggered by negation. We discuss the potential semantic and pragmatic factors that contribute to children's emerging awareness of typical alternatives triggered by negative expressions.


Subject(s)
Semantics; Humans; Child, Preschool; Male; Female; Concept Formation; Child Development/physiology; Age Factors; Language Development
15.
Acta Paediatr ; 113(8): 1852-1859, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38700433

ABSTRACT

AIM: In today's increasingly digitalised society, there is a growing need for information on how parents can support their children's language development at home. We investigated the associations between three types of parental linguistic support and children's language skills in different domains. METHODS: Between April 2019 and March 2020, 164 children aged between 2.5 and 4.1 years and their parents were recruited via daycare centres in Helsinki. Information on how frequently parents read, told free stories and sang to their children was collected. The children's lexical and grammatical skills and general language ability were assessed using validated instruments. RESULTS: More frequent reading, storytelling and singing were all separately associated with higher-level expressive lexical and general expressive language ability. More frequent reading and storytelling were also associated with higher-level phonological skills. Only reading was associated with receptive skills. The regression analyses revealed that reading had the highest explanatory value for lexical and general language ability after controlling for the effect of background factors. Furthermore, storytelling had the highest explanatory value for grammatical skills. CONCLUSION: The results highlight the benefits of parental reading. However, broad use of all parental linguistic activities is recommended to support the development of children's different language domains.
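
The hierarchical logic of the regression analysis (how much variance an activity explains beyond background factors) can be sketched as follows; the data, predictors, and effect sizes below are fabricated for illustration only.

```python
# Hierarchical regression sketch: R-squared gain from adding reading frequency.
import numpy as np

rng = np.random.default_rng(6)
n = 164
age = rng.uniform(2.5, 4.1, n)                  # child age in years
parent_edu = rng.integers(1, 5, n).astype(float)
reading = rng.integers(0, 8, n).astype(float)   # shared-reading sessions per week
language = 5 * age + 2 * parent_edu + 1.5 * reading + rng.normal(0, 4, n)

def r_squared(y, predictors):
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_background = r_squared(language, [age, parent_edu])
r2_full = r_squared(language, [age, parent_edu, reading])
print(f"R² background only: {r2_background:.2f}; adding reading: {r2_full:.2f}")
```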


Subject(s)
Language Development; Humans; Child, Preschool; Male; Female; Parents/psychology; Reading; Parent-Child Relations; Linguistics
16.
Proc Natl Acad Sci U S A ; 118(7)2021 02 09.
Article in English | MEDLINE | ID: mdl-33510040

ABSTRACT

Before they even speak, infants become attuned to the sounds of the language(s) they hear, processing native phonetic contrasts more easily than nonnative ones. For example, between 6 to 8 mo and 10 to 12 mo, infants learning American English get better at distinguishing English [ɹ] and [l], as in "rock" vs. "lock," relative to infants learning Japanese. Influential accounts of this early phonetic learning phenomenon initially proposed that infants group sounds into native vowel- and consonant-like phonetic categories (like [ɹ] and [l] in English) through a statistical clustering mechanism dubbed "distributional learning." The feasibility of this mechanism for learning phonetic categories has been challenged, however. Here, we demonstrate that a distributional learning algorithm operating on naturalistic speech can predict early phonetic learning, as observed in Japanese and American English infants, suggesting that infants might learn through distributional learning after all. We further show, however, that, contrary to the original distributional learning proposal, our model learns units too brief and too fine-grained acoustically to correspond to phonetic categories. This challenges the influential idea that what infants learn are phonetic categories. More broadly, our work introduces a mechanism-driven approach to the study of early phonetic learning, together with a quantitative modeling framework that can handle realistic input. This allows accounts of early phonetic learning to be linked to concrete, systematic predictions regarding infants' attunement.
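
Distributional learning is commonly modeled as unsupervised clustering of acoustic measurements. The sketch below fits a Gaussian mixture to simulated frames from two overlapping sound categories; it is a generic stand-in for the idea, not the model evaluated in the paper.

```python
# Generic distributional-learning sketch: cluster unlabeled acoustic frames.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Two simulated sound categories that overlap along both acoustic dimensions.
cat_a = rng.normal([0.0, 0.0], [1.0, 1.0], (500, 2))
cat_b = rng.normal([1.5, 0.5], [1.0, 1.0], (500, 2))
frames = np.vstack([cat_a, cat_b])

model = GaussianMixture(n_components=2, random_state=0).fit(frames)
labels = model.predict(frames)

# How well do the discovered clusters line up with the hidden categories?
truth = np.array([0] * 500 + [1] * 500)
agreement = max(np.mean(labels == truth), np.mean(labels != truth))
print(f"cluster-category agreement: {agreement:.2f}")
```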


Subject(s)
Language Development; Models, Neurological; Natural Language Processing; Phonetics; Humans; Speech Perception; Speech Recognition Software
17.
Proc Natl Acad Sci U S A ; 118(41)2021 10 12.
Article in English | MEDLINE | ID: mdl-34607945

ABSTRACT

The human ability to produce and understand an indefinite number of sentences is driven by syntax, a cognitive system that can combine a finite number of primitive linguistic elements to build arbitrarily complex expressions. The expressive power of syntax comes in part from its ability to encode potentially unbounded dependencies over abstract structural configurations. How does such a system develop in human minds? We show that 18-mo-old infants are capable of representing abstract nonlocal dependencies, suggesting that a core property of syntax emerges early in development. Our test case is English wh-questions, in which a fronted wh-phrase can act as the argument of a verb at a distance (e.g., What did the chef burn?). Whereas prior work has focused on infants' interpretations of these questions, we introduce a test to probe their underlying syntactic representations, independent of meaning. We ask when infants know that an object wh-phrase and a local object of a verb cannot co-occur because they both express the same argument relation (e.g., *What did the chef burn the pizza?). We find that 1) 18-mo-olds demonstrate awareness of this complementary distribution pattern and thus represent the nonlocal grammatical dependency between the wh-phrase and the verb, but 2) younger infants do not. These results suggest that the second year of life is a period of active syntactic development, during which the computational capacities for representing nonlocal syntactic dependencies become evident.


Subject(s)
Language Development; Speech/physiology; Cognition/physiology; Comprehension; Female; Humans; Infant; Male
18.
J Child Lang ; 51(2): 411-433, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37340946

ABSTRACT

Pointing plays a significant role in communication and language development. However, in spoken languages pointing has been viewed as a non-verbal gesture, whereas in sign languages, pointing is regarded to represent a linguistic unit of language. This study compared the use of pointing between seven bilingual hearing children of deaf parents (Kids of Deaf Adults [KODAs]) interacting with their deaf parents and five hearing children interacting with their hearing parents. Data were collected in 6-month intervals from the age of 1;0 to 3;0. Pointing frequency among the deaf parents and KODAs was significantly higher than among the hearing parents and their children. In signing dyads pointing frequency remained stable, whereas in spoken dyads it decreased during the follow-up. These findings suggested that pointing is a fundamental element of parent-child interaction, regardless of the language, but is guided by the modality, gestural and linguistic features of the language in question.


Subject(s)
Deafness; Language Development; Adult; Humans; Follow-Up Studies; Sign Language; Hearing; Parent-Child Relations; Parents; Gestures
19.
J Child Lang ; : 1-34, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38646693

ABSTRACT

Many studies have explored children's acquisition of temporal adverbs. However, the extent to which children's early temporal language has discursive instead of solely temporal meanings has been largely ignored. We report two corpus-based studies that investigated temporal adverbs in Finnish child-parent interaction between the children's ages of 1;7 and 4;11. Study 1 shows that the two corpus children used temporal adverbs to construe both temporal and discursive meanings from their early adverb production and that the children's usage syntactically broadly reflected the input received. Study 2 shows that the discursive uses of adverbs appeared to be learned from contextually anchored caregiver constructions that convey discourse functions like urging and reassuring, and that the usage is related to the children's and caregivers' interactional roles. Our study adds to the literature on the acquisition of temporal adverbs by demonstrating that these items are learned also with additional discursive meanings in family interaction.

20.
Folia Phoniatr Logop ; 76(2): 192-205, 2024.
Article in English | MEDLINE | ID: mdl-37604138

ABSTRACT

INTRODUCTION: Due to the heterogeneity in language trajectories and differences in language exposure, many bilingual children could benefit from extra support in acquiring the school language to reduce the risk of language problems and learning difficulties. Enhancing bilingual children's narrative abilities in the school language could be an efficient approach to advancing their general school language abilities as well. Therefore, this study aimed to investigate whether a narrative intervention could improve both the general and narrative school language abilities of typically developing bilingual (Turkish-Dutch) children. METHODS: Nineteen Turkish-Dutch bilingual children (6-9.9 years) were enrolled in this single-arm early efficacy study. The intervention procedure was administered in the school language (Dutch) and based on a test-teach-retest principle with two baseline measurements. At baseline 1, the expressive, receptive, and narrative language abilities were determined. The second baseline measurement consisted of a second measurement of the narrative abilities. Subsequently, a weekly 1-h group-based intervention was implemented during 10 sessions. After the intervention phase, the expressive, receptive, and narrative language abilities were tested again. RESULTS: After the intervention, the children produced significantly more story structure elements compared with both baseline measurements. No significant differences were found for microstructure narrative measures. The participants had significantly higher scores on the expressive and receptive language measurements post-intervention. CONCLUSION: These findings suggest that the intervention could be an efficient approach to stimulate the second language development of bilingual children.


Subject(s)
Language Development Disorders; Multilingualism; Child; Humans; Language Therapy; Language Development Disorders/therapy; Language; Language Development