Results 1 - 20 of 64
1.
Dev Sci ; 27(3): e13459, 2024 May.
Article in English | MEDLINE | ID: mdl-37987377

ABSTRACT

We report the findings of a multi-language and multi-lab investigation of young infants' ability to discriminate lexical tones as a function of their native language, age and language experience, as well as of tone properties. Given the high prevalence of lexical tones across human languages, understanding lexical tone acquisition is fundamental for comprehensive theories of language learning. While there are some similarities between the developmental course of lexical tone perception and that of vowels and consonants, findings for lexical tones tend to vary greatly across different laboratories. To reconcile these differences and to assess the developmental trajectory of native and non-native perception of tone contrasts, this study employed a single experimental paradigm with the same two pairs of Cantonese tone contrasts (perceptually similar vs. distinct) across 13 laboratories in Asia-Pacific, Europe and North America testing 5-, 10- and 17-month-old monolingual (tone, pitch-accent, non-tone) and bilingual (tone/non-tone, non-tone/non-tone) infants. Across the age range and language backgrounds, infants who were not exposed to Cantonese showed robust discrimination of the two non-native lexical tone contrasts. Contrary to this overall finding, the statistical model assessing native discrimination by Cantonese-learning infants failed to yield significant effects. These findings indicate that lexical tone sensitivity is maintained from 5 to 17 months in infants acquiring tone and non-tone languages, challenging the generalisability of the existing theoretical accounts of perceptual narrowing in the first months of life.

RESEARCH HIGHLIGHTS:
This is a multi-language and multi-lab investigation of young infants' ability to discriminate lexical tones.
This study included data from 13 laboratories testing 5-, 10-, and 17-month-old monolingual (tone, pitch-accent, non-tone) and bilingual (tone/non-tone, non-tone/non-tone) infants.
Overall, infants discriminated a perceptually similar and a distinct non-native tone contrast, although there was no evidence of a native tone-language advantage in discrimination.
These results demonstrate maintenance of tone discrimination throughout development.


Subject(s)
Pitch Perception , Speech Perception , Infant , Humans , Laboratories , Phonetics , Timbre Perception
2.
J Cogn Neurosci ; 35(11): 1741-1759, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37677057

ABSTRACT

In face-to-face conversations, listeners gather visual speech information from a speaker's talking face that enhances their perception of the incoming auditory speech signal. This auditory-visual (AV) speech benefit is evident even in quiet environments but is stronger in situations that require greater listening effort such as when the speech signal itself deviates from listeners' expectations. One example is infant-directed speech (IDS) presented to adults. IDS has exaggerated acoustic properties that are easily discriminable from adult-directed speech (ADS). Although IDS is a speech register that adults typically use with infants, no previous neurophysiological study has directly examined whether adult listeners process IDS differently from ADS. To address this, the current study simultaneously recorded EEG and eye-tracking data from adult participants as they were presented with auditory-only (AO), visual-only, and AV recordings of IDS and ADS. Eye-tracking data were recorded because looking behavior to the speaker's eyes and mouth modulates the extent of AV speech benefit experienced. Analyses of cortical tracking accuracy revealed that cortical tracking of the speech envelope was significant in AO and AV modalities for IDS and ADS. However, the AV speech benefit [i.e., AV > (A + V)] was only present for IDS trials. Gaze behavior analyses indicated differences in looking behavior during IDS and ADS trials. Surprisingly, looking behavior to the speaker's eyes and mouth was not correlated with cortical tracking accuracy. Additional exploratory analyses indicated that attention to the whole display was negatively correlated with cortical tracking accuracy of AO and visual-only trials in IDS. Our results underscore the nuances involved in the relationship between neurophysiological AV speech benefit and looking behavior.


Subject(s)
Speech Perception , Speech , Humans , Adult , Infant , Speech/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Communication
3.
Dev Sci ; 26(5): e13353, 2023 09.
Article in English | MEDLINE | ID: mdl-36415027

ABSTRACT

Music listening often entails spontaneous perception and body movement to a periodic pulse-like meter. There is increasing evidence that this cross-cultural ability relates to neural processes that selectively enhance metric periodicities, even when these periodicities are not prominent in the acoustic stimulus. However, whether these neural processes emerge early in development remains largely unknown. Here, we recorded the electroencephalogram (EEG) of 20 healthy 5- to 6-month-old infants, while they were exposed to two rhythms known to induce the perception of meter consistently across Western adults. One rhythm contained prominent acoustic periodicities corresponding to the meter, whereas the other rhythm did not. Infants showed significantly enhanced representations of meter periodicities in their EEG responses to both rhythms. This effect is unlikely to reflect the tracking of salient acoustic features in the stimulus, as it was observed irrespective of the prominence of meter periodicities in the audio signals. Moreover, as previously observed in adults, the neural enhancement of meter was greater when the rhythm was delivered by low-pitched sounds. Together, these findings indicate that the endogenous enhancement of metric periodicities beyond low-level acoustic features is a neural property that is already present soon after birth. These high-level neural processes could set the stage for internal representations of musical meter that are critical for human movement coordination during rhythmic musical behavior.

RESEARCH HIGHLIGHTS:
5- to 6-month-old infants were presented with auditory rhythms that induce the perception of a periodic pulse-like meter in adults.
Infants showed selective enhancement of EEG activity at meter-related frequencies irrespective of the prominence of these frequencies in the stimulus.
Responses at meter-related frequencies were boosted when the rhythm was conveyed by bass sounds.
High-level neural processes that transform rhythmic auditory stimuli into internal meter templates emerge early after birth.


Subject(s)
Music , Adult , Humans , Infant , Acoustic Stimulation , Electroencephalography , Sound , Periodicity , Auditory Perception/physiology
4.
Infancy ; 28(2): 277-300, 2023 03.
Article in English | MEDLINE | ID: mdl-36217702

ABSTRACT

Visual speech cues from a speaker's talking face aid speech segmentation in adults, but despite the importance of speech segmentation in language acquisition, little is known about the possible influence of visual speech on infants' speech segmentation. Here, to investigate whether there is facilitation of speech segmentation by visual information, two groups of English-learning 7-month-old infants were presented with continuous speech passages, one group with auditory-only (AO) speech and the other with auditory-visual (AV) speech. Additionally, the possible relation between infants' relative attention to the speaker's mouth versus eye regions and their segmentation performance was examined. Both the AO and the AV groups of infants successfully segmented words from the continuous speech stream, but segmentation performance persisted for longer for infants in the AV group. Interestingly, while AV group infants showed no significant relation between the relative amount of time spent fixating the speaker's mouth versus eyes and word segmentation, their attention to the mouth was greater than that of AO group infants, especially early in test trials. The results are discussed in relation to the possible pathways through which visual speech cues aid speech perception.


Subject(s)
Speech Perception , Speech , Adult , Humans , Infant , Language Development , Learning , Face
5.
Neuroimage ; 256: 119217, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35436614

ABSTRACT

An auditory-visual speech benefit, the benefit that visual speech cues bring to auditory speech perception, is experienced from early on in infancy and continues to be experienced to an increasing degree with age. While there is both behavioural and neurophysiological evidence for children and adults, only behavioural evidence exists for infants, as no neurophysiological study has provided a comprehensive examination of the auditory-visual speech benefit in infants. It is also surprising that most studies on auditory-visual speech benefit do not concurrently report looking behaviour, especially since the auditory-visual speech benefit rests on the assumption that listeners attend to a speaker's talking face and that there are meaningful individual differences in looking behaviour. To address these gaps, we simultaneously recorded electroencephalographic (EEG) and eye-tracking data of 5-month-olds, 4-year-olds and adults as they were presented with a speaker in auditory-only (AO), visual-only (VO), and auditory-visual (AV) modes. Cortical tracking analyses that involved forward encoding models of the speech envelope revealed that there was an auditory-visual speech benefit [i.e., AV > (A + V)], evident in 5-month-olds and adults but not 4-year-olds. Examination of cortical tracking accuracy in relation to looking behaviour showed that infants' relative attention to the speaker's mouth (vs. eyes) was positively correlated with cortical tracking accuracy of VO speech, whereas adults' attention to the display overall was negatively correlated with cortical tracking accuracy of VO speech. This study provides the first neurophysiological evidence of auditory-visual speech benefit in infants and our results suggest ways in which current models of speech processing can be fine-tuned.
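The cortical tracking analysis described in this abstract rests on forward (encoding) models that predict EEG from time-lagged copies of the speech envelope. As a rough illustration only, here is a minimal numpy sketch on simulated data; the sampling rate, lag range, ridge penalty, and single-channel setup are all assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: a speech envelope and a single EEG channel that
# tracks it at a 100 ms lag, plus noise (100 Hz sampling, 60 s).
fs, n = 100, 6000
envelope = np.abs(rng.standard_normal(n))
lag_true = 10                                  # 100 ms at fs = 100 Hz
eeg = np.roll(envelope, lag_true) + 0.5 * rng.standard_normal(n)

# Forward (encoding) model: predict EEG from time-lagged copies of the
# envelope with ridge regression; score by Pearson correlation
# (the "cortical tracking accuracy").
lags = np.arange(0, 25)                        # 0-240 ms of lags
X = np.column_stack([np.roll(envelope, int(l)) for l in lags])
lam = 1e2                                      # ridge penalty (arbitrary here)
w = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ eeg)
tracking = np.corrcoef(X @ w, eeg)[0, 1]
```

In this toy setup the weight vector `w` peaks at the true envelope-to-EEG lag, and the correlation between predicted and observed EEG plays the role of the tracking-accuracy score that such studies compare across AO, VO, and AV conditions.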


Subject(s)
Speech Perception , Speech , Adult , Auditory Perception/physiology , Child , Child, Preschool , Humans , Infant , Mouth , Speech Perception/physiology , Visual Perception/physiology
6.
Child Dev ; 91(6): e1211-e1230, 2020 11.
Article in English | MEDLINE | ID: mdl-32745250

ABSTRACT

This longitudinal study investigated the effects of maternal emotional health concerns on infants' home language environment, vocalization quantity, and expressive language skills. Mothers and their infants (at 6 and 12 months; 21 mothers with depression and/or anxiety and 21 controls) provided day-long home-language recordings. Compared with controls, risk group recordings contained fewer mother-infant conversational turns and infant vocalizations, but daily number of adult word counts showed no group difference. Furthermore, conversational turns and infant vocalizations were stronger predictors of infants' 18-month vocabulary size than depression and anxiety measures. However, anxiety levels moderated the effect of conversational turns on vocabulary size. These results suggest that variability in mothers' emotional health influences infants' language environment and later language ability.


Subject(s)
Anxiety , Child Language , Child of Impaired Parents , Depression, Postpartum , Mother-Child Relations , Adult , Child Development/physiology , Child of Impaired Parents/education , Child of Impaired Parents/psychology , Communication , Depression , Female , Humans , Infant , Infant, Newborn , Language Development , Longitudinal Studies , Male , Mother-Child Relations/psychology , Mothers/psychology , Puerperal Disorders , Vocabulary , Young Adult
7.
Dyslexia ; 26(1): 3-17, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31994263

ABSTRACT

Children of reading age diagnosed with dyslexia show deficits in reading and spelling skills, but early markers of later dyslexia are already present in infancy in auditory processing and phonological domains. Deficits in lexical development are not typically associated with dyslexia. Nevertheless, it is possible that early auditory/phonological deficits would have detrimental effects on the encoding and storage of novel lexical items. Word-learning difficulties have been demonstrated in school-aged dyslexic children using paired associate learning tasks, but earlier manifestations in infants who are at family risk for dyslexia have not been investigated. This study assessed novel word learning in 19-month-old infants at risk for dyslexia (by virtue of having one dyslexic parent) and infants not at risk for any developmental disorder. Infants completed a word-learning task that required them to map two novel words to their corresponding novel referents. Not-at-risk infants showed increased looking time to the novel referents at test compared with at-risk infants. These findings demonstrate, for the first time, that at-risk infants show differences in novel word-learning (fast-mapping) tasks compared with not-at-risk infants. Our findings have implications for the development and consolidation of early lexical and phonological skills in infants at family risk of later dyslexia.


Subject(s)
Dyslexia/diagnosis , Language Development Disorders/diagnosis , Paired-Associate Learning , Female , Humans , Infant , Male , Phonetics , Reading
8.
J Acoust Soc Am ; 148(6): 3399, 2020 12.
Article in English | MEDLINE | ID: mdl-33379914

ABSTRACT

This study investigated the effects of hearing loss and hearing experience on the acoustic features of infant-directed speech (IDS) to infants with hearing loss (HL) compared to controls with normal hearing (NH) matched by either chronological or hearing age (experiment 1) and across development in infants with hearing loss as well as the relation between IDS features and infants' developing lexical abilities (experiment 2). Both experiments included detailed acoustic analyses of mothers' productions of the three corner vowels /a, i, u/ and utterance-level pitch in IDS and in adult-directed speech. Experiment 1 demonstrated that IDS to infants with HL was acoustically more variable than IDS to hearing-age matched infants with NH. Experiment 2 yielded no changes in IDS features over development; however, the results did show a positive relationship between formant distances in mothers' speech and infants' concurrent receptive vocabulary size, as well as between vowel hyperarticulation and infants' expressive vocabulary. These findings suggest that despite infants' HL and thus diminished access to speech input, infants with HL are exposed to IDS with generally similar acoustic qualities as are infants with NH. However, some differences persist, indicating that infants with HL might receive less intelligible speech.


Subject(s)
Deafness , Hearing Loss , Speech Perception , Acoustics , Adult , Female , Hearing Loss/diagnosis , Humans , Infant , Speech
9.
Dev Sci ; 22(6): e12836, 2019 11.
Article in English | MEDLINE | ID: mdl-31004544

ABSTRACT

Here we report, for the first time, a relationship between sensitivity to amplitude envelope rise time in infants and their later vocabulary development. Recent research in auditory neuroscience has revealed that amplitude envelope rise time plays a mechanistic role in speech encoding. Accordingly, individual differences in infant discrimination of amplitude envelope rise times could be expected to relate to individual differences in language acquisition. A group of 50 infants taking part in a longitudinal study contributed rise time discrimination thresholds when aged 7 and 10 months, and their vocabulary development was measured at 3 years. Experimental measures of phonological sensitivity were also administered at 3 years. Linear mixed effect models taking rise time sensitivity as the dependent variable, and controlling for non-verbal IQ, showed significant predictive effects for vocabulary at 3 years, but not for the phonological sensitivity measures. The significant longitudinal relationship between amplitude envelope rise time discrimination and vocabulary development suggests that early rise time discrimination abilities have an impact on speech processing by infants.


Subject(s)
Language Development , Speech Perception/physiology , Vocabulary , Child, Preschool , Female , Humans , Infant , Longitudinal Studies , Male , Speech
10.
Neuroimage ; 175: 70-79, 2018 07 15.
Article in English | MEDLINE | ID: mdl-29609008

ABSTRACT

Developmental dyslexia is a multifaceted disorder of learning primarily manifested by difficulties in reading, spelling, and phonological processing. Neural studies suggest that phonological difficulties may reflect impairments in fundamental cortical oscillatory mechanisms. Here we examine cortical mechanisms in children (6-12 years of age) with or without dyslexia (utilising both age- and reading-level-matched controls) using electroencephalography (EEG). EEG data were recorded as participants listened to an audio-story. Novel electrophysiological measures of phonemic processing were derived by quantifying how well the EEG responses tracked phonetic features of speech. Our results provide, for the first time, evidence for impaired low-frequency cortical tracking to phonetic features during natural speech perception in dyslexia. Atypical phonological tracking was focused on the right hemisphere, and correlated with traditional psychometric measures of phonological skills used in diagnostic dyslexia assessments. Accordingly, the novel indices developed here may provide objective metrics to investigate language development and language impairment across languages.


Subject(s)
Dyslexia/physiopathology , Electroencephalography/methods , Functional Laterality/physiology , Image Processing, Computer-Assisted/methods , Psycholinguistics , Speech Perception/physiology , Child , Female , Humans , Male
11.
Dev Sci ; 21(1)2018 01.
Article in English | MEDLINE | ID: mdl-27785865

ABSTRACT

Dyslexia is a neurodevelopmental disorder manifested in deficits in reading and spelling skills that is consistently associated with difficulties in phonological processing. Dyslexia is genetically transmitted, but its manifestation in a particular individual is thought to depend on the interaction of epigenetic and environmental factors. We adopt a novel interactional perspective on early linguistic environment and dyslexia by simultaneously studying two pre-existing factors, one maternal and one infant, that may contribute to these interactions; and two behaviours, one maternal and one infant, to index the effect of these factors. The maternal factor is whether mothers are themselves dyslexic or not (with/without dyslexia) and the infant factor is whether infants are at-/not-at family risk for dyslexia (due to their mother or father being dyslexic). The maternal behaviour is mothers' infant-directed speech (IDS), which typically involves vowel hyperarticulation, thought to benefit speech perception and language acquisition. The infant behaviour is auditory perception measured by infant sensitivity to amplitude envelope rise time, which has been found to be reduced in dyslexic children. Here, at-risk infants showed significantly poorer acoustic sensitivity than not-at-risk infants and mothers only hyperarticulated vowels to infants who were not at-risk for dyslexia. Mothers' own dyslexia status had no effect on IDS quality. Parental speech input is thus affected by infant risk status, with likely consequences for later linguistic development.


Subject(s)
Dyslexia/etiology , Maternal Behavior , Mothers , Auditory Perception , Child , Female , Humans , Infant , Language Development , Male , Speech , Speech Perception
12.
J Child Lang ; 45(2): 273-289, 2018 03.
Article in English | MEDLINE | ID: mdl-28585512

ABSTRACT

Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception - lip-reading and visual influence in auditory-visual integration; (ii) the development of auditory speech perception and native language perceptual attunement; and (iii) the relationship between these and a language skill relevant at this age, receptive vocabulary. Visual speech perception skills improved even over this relatively short time period. However, regression analyses revealed that vocabulary was predicted by auditory-only speech perception, and native language attunement, but not by visual speech perception ability. The results suggest that, in contrast to infants and schoolchildren, in three- to four-year-olds the relationship between speech perception and language ability is based on auditory and not visual or auditory-visual speech perception ability. Adding these results to existing findings allows elaboration of a more complete account of the developmental course of auditory-visual speech perception.


Subject(s)
Language Development , Lipreading , Speech Perception , Vocabulary , Aptitude , Child, Preschool , Female , Humans , Language , Male , Phonetics
13.
J Child Lang ; 45(5): 1035-1053, 2018 09.
Article in English | MEDLINE | ID: mdl-29502549

ABSTRACT

This longitudinal study assessed three acoustic components of maternal infant-directed speech (IDS) - pitch, affect, and vowel hyperarticulation - in relation to infants' age and their expressive vocabulary size. These three individual components were measured in IDS addressed to infants at 7, 9, 11, 15, and 19 months (N = 18). All three components were exaggerated at all ages in mothers' IDS compared to their adult-directed speech. Importantly, the only significant predictor of infants' expressive vocabulary size at 15 and 19 months was vowel hyperarticulation, but only at 9 months and beyond, not at 7 months, and not pitch or affect at any age. These results set apart vowel hyperarticulation in IDS to infants as the critical IDS component for vocabulary development. Thus IDS, specifically the degree of vowel hyperarticulation therein, is a vehicle by which parents can provide the most optimal speech quality for their infants' linguistic and communicative development.


Subject(s)
Mother-Child Relations , Speech Acoustics , Speech , Vocabulary , Adult , Communication , Female , Humans , Infant , Longitudinal Studies , Male , Mothers , Speech Perception
14.
Dyslexia ; 22(2): 101-19, 2016 May.
Article in English | MEDLINE | ID: mdl-27146374

ABSTRACT

Visual-verbal paired associate learning (PAL) refers to the ability to establish an arbitrary association between a visual referent and an unfamiliar label. It is now established that this ability is impaired in children with dyslexia, but the source of this deficit is yet to be specified. This study assesses PAL performance in children with reading difficulties using a modified version of the PAL paradigm, comprising a comprehension and a production phase, to determine whether the PAL deficit lies in children's ability to establish and retain novel object-novel word associations or their ability to retrieve the learned novel labels for production. Results showed that while children with reading difficulties required significantly more trials to learn the object-word associations, when they were required to use these associations in a comprehension-referent selection task, their accuracy and speed did not differ from controls. Nevertheless, children with reading difficulties were significantly less successful when they were required to produce the learned novel labels in response to the visual stimuli. Thus, these results indicate that while children with reading difficulties are successful at establishing visual-verbal associations, they have a deficit in the verbal production component of PAL tasks, which may relate to a more general underlying impairment in auditory or phonological processing.


Subject(s)
Dyslexia/physiopathology , Paired-Associate Learning/physiology , Phonetics , Reading , Child , Comprehension , Dyslexia/psychology , Female , Humans , Male , Verbal Learning
15.
Lang Speech ; 59(Pt 2): 196-218, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27363253

ABSTRACT

Three naming aloud experiments and a lexical decision (LD) experiment used masked priming to index the processing of written Thai vowels and tones. Thai allows for manipulation of the mapping between orthography and phonology not possible in other orthographies, for example, the use of consonants, vowels and tone markers in both horizontal and vertical orthographic positions (HOPs and VOPs). Experiment 1 showed that changing a vowel between prime and target slowed down target naming but changing a tone mark did not. Experiment 1 used an across-item design and a different number of HOPs in the way vowels and tones were specified. Experiment 2 used a within-item design and tested vowel and tone changes for both 2-HOP and 3-HOP targets separately. The 3-HOP words showed the same tone and vowel change effect as Experiment 1, whereas 2-HOP items did not. It was speculated that the 2-HOP result was due to the variable position of the vowel affecting priming. Experiment 3 employed a more stringent control over the 2-HOP vowel and tone items and found priming for the tone changes but not for vowel changes. The final experiment retested the items from Experiment 3 with the LD task and found no priming for the tone change items, indicating that the tone effect in Experiment 3 was due to processes involved in naming aloud. In all, the results supported the view that for naming a word, the development of tone information is slower than vowel information.


Subject(s)
Phonetics , Pitch Perception , Reading , Speech Acoustics , Speech Perception , Voice Quality , Acoustics , Humans , Reaction Time , Signal Processing, Computer-Assisted , Sound Spectrography , Speech Production Measurement , Time Factors
16.
J Acoust Soc Am ; 136(1): 357-65, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24993220

ABSTRACT

The findings are reported of an investigation into rhythmic differences between infant-directed speech (IDS) and adult-directed speech (ADS) in a corpus of utterances from Australian English mothers speaking to their infants and to another adult. Given the importance of rhythmic cues to stress and word-segmentation in English, the investigation focused on the extent to which IDS makes such cues salient. Two methods of analysis were used: one focused on segmental durational properties, using a variety of durational measures; the other focused on the prominence of vocalic/sonorant segments, as determined by their duration, intensity, pitch, and spectral balance, using individual measures as well as composite measures of prominence derived from auditory-model analyses. There were few IDS/ADS differences/trends on the individual measures, though mean pitch and pitch variability were higher in IDS than ADS, while IDS vowels showed more negative spectral tilt. However, the model-based analyses suggested that differences in the prominence of vowels/sonorant segments were reduced in IDS, with further analysis suggesting that pitch contributed little to prominence. The reduction in prominence contrasts may be due to the importance of mood-regulation in speech to young infants, and may suggest that infants rely on segmental cues to stress and word-segmentation.


Subject(s)
Cues , Mother-Child Relations , Periodicity , Phonetics , Speech , Voice Quality , Adult , Affect , Australia , Child Language , Female , Humans , Infant , Infant, Newborn , Pitch Perception , Speech Perception , Speech Production Measurement , Time Factors
17.
Front Hum Neurosci ; 18: 1403677, 2024.
Article in English | MEDLINE | ID: mdl-38911229

ABSTRACT

Slow cortical oscillations play a crucial role in processing the speech amplitude envelope, which is perceived atypically by children with developmental dyslexia. Here we use electroencephalography (EEG) recorded during natural speech listening to identify neural processing patterns involving slow oscillations that may characterize children with dyslexia. In a story listening paradigm, we find that atypical power dynamics and phase-amplitude coupling between delta and theta oscillations characterize dyslexic versus other child control groups (typically-developing controls, other language disorder controls). We further isolate EEG common spatial patterns (CSP) during speech listening across delta and theta oscillations that identify dyslexic children. A linear classifier using four delta-band CSP variables predicted dyslexia status (0.77 AUC). Crucially, these spatial patterns also identified children with dyslexia when applied to EEG measured during a rhythmic syllable processing task. This transfer effect (i.e., the ability to use neural features derived from a story listening task as input features to a classifier based on a rhythmic syllable task) is consistent with a core developmental deficit in neural processing of speech rhythm. The findings are suggestive of distinct atypical neurocognitive speech encoding mechanisms underlying dyslexia, which could be targeted by novel interventions.
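The common-spatial-patterns (CSP) plus linear-classifier step described above can be illustrated with a self-contained numpy sketch. Everything here is synthetic and assumed for illustration (channel counts, toy signals, and a nearest-class-mean classifier standing in for the linear classifier); it shows the shape of the technique, not the study's actual delta-band pipeline or its 0.77 AUC result.

```python
import numpy as np

rng = np.random.default_rng(1)

def csp_filters(class_a, class_b, n_components=4):
    """Common spatial patterns: spatial filters whose output variance
    differs maximally between two classes of multichannel epochs."""
    def mean_cov(epochs):
        return sum(x @ x.T / np.trace(x @ x.T) for x in epochs) / len(epochs)
    ca, cb = mean_cov(class_a), mean_cov(class_b)
    d, u = np.linalg.eigh(ca + cb)
    p = (u / np.sqrt(d)).T                   # whitens the composite covariance
    _, b = np.linalg.eigh(p @ ca @ p.T)      # diagonalise class A in whitened space
    w = b.T @ p                              # filter bank, sorted by class-A variance
    half = n_components // 2
    return np.vstack([w[:half], w[-half:]])  # the extreme filters discriminate best

def log_var_features(epochs, filters):
    return np.array([np.log(np.var(filters @ x, axis=1)) for x in epochs])

# Toy two-class data: 8 "channels"; class A has extra variance on
# channel 0 and class B on channel 1 (stand-ins for the two groups).
def make_epochs(boost_channel, n_epochs=40, n_ch=8, n_t=200):
    eps = rng.standard_normal((n_epochs, n_ch, n_t))
    eps[:, boost_channel] *= 3.0
    return eps

train_a, train_b = make_epochs(0), make_epochs(1)
test_a, test_b = make_epochs(0), make_epochs(1)

filt = csp_filters(train_a, train_b)
mu_a = log_var_features(train_a, filt).mean(axis=0)
mu_b = log_var_features(train_b, filt).mean(axis=0)

def predict_is_b(epochs):
    # Nearest-class-mean classification in CSP log-variance feature space.
    f = log_var_features(epochs, filt)
    return np.linalg.norm(f - mu_a, axis=1) > np.linalg.norm(f - mu_b, axis=1)

accuracy = np.mean(np.r_[~predict_is_b(test_a), predict_is_b(test_b)])
```

With well-separated toy classes like these, held-out accuracy approaches 1.0; real EEG group differences are far subtler, which is why classifier transfer across tasks, as reported in the abstract, is a meaningful test.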

18.
J Exp Child Psychol ; 116(2): 120-38, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23773915

ABSTRACT

Speech perception is auditory-visual, but relatively little is known about auditory-visual compared with auditory-only speech perception. One avenue for further understanding is via developmental studies. In a recent study, Sekiyama and Burnham (2008) found that English speakers significantly increase their use of visual speech information between 6 and 8 years of age but that this development does not appear to be universal across languages. Here, the possible bases for this language-specific increase among English speakers were investigated. Four groups of English-language children (5, 6, 7, and 8 years) and a group of adults were tested on auditory-visual, auditory-only, and visual-only speech perception; language-specific speech perception with native and non-native speech sounds; articulation; and reading. Results showed that language-specific speech perception and lip-reading ability reliably predicted auditory-visual speech perception in children but that adult auditory-visual speech perception was predicted by auditory-only speech perception. The implications are discussed in terms of both auditory-visual speech perception and language development.


Subject(s)
Auditory Perception , Reading , Speech Perception , Visual Perception , Adult , Age Factors , Child , Child, Preschool , Female , Humans , Language , Male , Phonetics
19.
Brain Lang ; 236: 105217, 2023 01.
Article in English | MEDLINE | ID: mdl-36529116

ABSTRACT

Neural synchronization to amplitude-modulated noise at three frequencies (2 Hz, 5 Hz, 8 Hz) thought to be important for syllable perception was investigated in English-speaking school-aged children. The theoretically-important delta-band (∼2 Hz, stressed syllable level) was included along with two syllable-level rates. The auditory steady state response (ASSR) was recorded using EEG in 36 7-to-12-year-old children. Half of the sample had either dyslexia or dyslexia and DLD (developmental language disorder). In comparison to typically-developing children, children with dyslexia or with dyslexia and DLD showed reduced ASSRs for 2 Hz stimulation but similar ASSRs at 5 Hz and 8 Hz. These novel data for English ASSRs converge with prior data suggesting that children with dyslexia have atypical synchrony between brain oscillations and incoming auditory stimulation at ∼2 Hz, the rate of stressed syllable production across languages. This atypical synchronization likely impairs speech processing, phonological processing, and possibly syntactic processing, as predicted by Temporal Sampling theory.
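The ASSR is typically quantified as the spectral response at the stimulation frequency relative to the background. A toy numpy sketch of that measure, with simulated data and assumed parameters (sampling rate, duration, neighbouring-bin SNR measure; none of these are taken from the study), might look like:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed parameters for illustration: 250 Hz sampling, 8 s of "EEG"
# containing a small steady-state response at the 2 Hz stimulation rate.
fs, dur, f_stim = 250, 8.0, 2.0
t = np.arange(0, dur, 1 / fs)
eeg = 0.8 * np.sin(2 * np.pi * f_stim * t) + rng.standard_normal(t.size)

# ASSR estimate: amplitude spectrum, with the stimulation-frequency bin
# compared against neighbouring bins (an SNR-style measure).
spec = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = int(np.argmin(np.abs(freqs - f_stim)))
neighbours = np.r_[k - 3:k - 1, k + 2:k + 4]
snr = spec[k] / spec[neighbours].mean()
```

Under this kind of measure, the reduced 2 Hz ASSR reported for the dyslexia groups would correspond to a lower stimulation-bin SNR relative to controls.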


Subject(s)
Dyslexia , Speech Perception , Humans , Child , Speech , Acoustic Stimulation , Speech Perception/physiology , Noise
20.
Brain Sci ; 13(5)2023 May 16.
Article in English | MEDLINE | ID: mdl-37239282

ABSTRACT

The music and spoken language domains share acoustic properties such as fundamental frequency (f0, perceived as pitch), duration, resonance frequencies, and intensity. In speech, these acoustic properties form an essential part in differentiating between consonants, vowels, and lexical tones. This study investigated whether musicality confers any advantage in the perception and production of Thai speech sounds. Two groups of English-speaking adults, one comprising formally trained musicians and the other non-musicians, were tested for their perception and production of Thai consonants, vowels, and tones. For both groups, the perception and production accuracy scores were higher for vowels than consonants and tones, and in production there was also better accuracy for tones than consonants. Between the groups, musicians (defined as having more than five years of formal musical training) outperformed non-musicians (defined as having less than two years of formal musical training) in both the perception and production of all three sound types. Accuracy rates were also positively influenced by current hours of practice per week, with some indication of an additional augmentation due to musical aptitude, but only in perception. These results suggest that music training, defined as formal training for more than five years, and musical practice, expressed in hours of weekly practice, facilitate the perception and production of non-native speech sounds.
