Results 1 - 20 of 35
1.
Behav Sci (Basel) ; 13(10)2023 Sep 26.
Article in English | MEDLINE | ID: mdl-37887450

ABSTRACT

How people recognize linguistic and emotional prosody in different listening conditions is essential for understanding the complex interplay between social context, cognition, and communication. The perception of both lexical tones and emotional prosody depends on prosodic features including pitch, intensity, duration, and voice quality. However, it is unclear which aspect of prosody is perceptually more salient and resistant to noise. This study investigated the relative perceptual salience of emotional prosody and lexical tone recognition in quiet and in the presence of multi-talker babble noise. Forty young adults, randomly sampled from a pool of native Mandarin Chinese speakers with normal hearing, listened to monosyllables either with or without background babble noise and completed two identification tasks, one for emotion recognition and the other for lexical tone recognition. Accuracy and speed were recorded and analyzed using generalized linear mixed-effects models. Compared with emotional prosody, lexical tones were more perceptually salient in multi-talker babble noise: native Mandarin Chinese participants identified lexical tones more accurately and quickly than vocal emotions at the same signal-to-noise ratio. Acoustic and cognitive dissimilarities between linguistic prosody and emotional prosody may underlie this pattern, which calls for further exploration of the psychobiological and neurophysiological mechanisms involved.

2.
Am J Intellect Dev Disabil ; 128(6): 425-448, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37875276

ABSTRACT

Automated methods for processing of daylong audio recordings are efficient and may be an effective way of assessing developmental stage for typically developing children; however, their utility for children with developmental disabilities may be limited by constraints of algorithms and the scope of variables produced. Here, we present a novel utterance-level processing (ULP) system that 1) extracts utterances from daylong recordings, 2) verifies automated speaker tags using human annotation, and 3) provides vocal maturity metrics unavailable through automated systems. Study 1 examines the reliability and validity of this system in low-risk controls (LRC); Study 2 extends the ULP to children with Angelman syndrome (AS). Results showed that ULP annotations demonstrated high coder agreement across groups. Further, ULP metrics aligned with language assessments for LRC but not AS, perhaps reflecting limitations of language assessments in AS. We argue that ULP increases accuracy, efficiency, and accessibility of detailed vocal analysis for syndromic populations.


Subject(s)
Angelman Syndrome , Speech , Humans , Child , Reproducibility of Results
3.
Indian J Otolaryngol Head Neck Surg ; 75(3): 1707-1711, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37636633

ABSTRACT

Signal-to-noise ratio (SNR) is one of the important parameters to be considered for the effective perception of speech. Many researchers indicate that children with autism spectrum disorder (ASD) have a reduced capacity to integrate sensory information across different modalities and show difficulty understanding speech in the presence of background speech or noise. The present study was therefore undertaken with the aim of evaluating and comparing speech perception ability in quiet and in the presence of noise for children with and without ASD, and of comparing performance across different noise conditions. Speech perception in noise was measured for 15 children with ASD and 15 age-matched children without ASD in the age range of 8 to 12 years. The stimuli comprised standardized bisyllabic and trisyllabic Kannada words presented in quiet and at different SNR conditions. The results showed that children with ASD performed more poorly than children without ASD in all the listening conditions (quiet, speech babble, and speech noise) and syllable conditions (bisyllables and trisyllables). When performance was compared across quiet and the different SNR conditions for children with ASD, it was best in quiet, followed by the SNR conditions, and it deteriorated with decreasing SNR for both groups. Children with ASD showed poorer performance in quiet and in the presence of noise compared to children without ASD. Evaluating speech perception in the presence of noise therefore provides a more reliable predictor of the communication difficulty faced by children with ASD than evaluating in quiet conditions only.

4.
Clin Linguist Phon ; 37(4-6): 330-344, 2023 06 03.
Article in English | MEDLINE | ID: mdl-35652603

ABSTRACT

Limited evidence for early indicators of childhood apraxia of speech (CAS) precludes reliable diagnosis before 36 months, although a few prior studies have identified several potential early indicators. We examined these possible early indicators in 10 toddlers aged 14-24 months at risk for CAS due to a genetic condition: 7q11.23 duplication syndrome (Dup7). Phon Vocalisation analyses were conducted on phonetic transcriptions of each child's vocalisations during an audio-video recorded 30-minute play session with a caregiver and/or a trained research assistant. The resulting data were compared to data previously collected by Overby from similar-aged toddlers developing typically (TD), later diagnosed with CAS (LCAS), or later diagnosed with another speech sound disorder (LSSD). The Dup7 group did not differ significantly from the LCAS group on any measure. In contrast, the Dup7 group evidenced significant delays relative to the LSSD group on canonical babble frequency, volubility, consonant place diversity, and consonant manner diversity and relative to the TD group not only on these measures but also on canonical babble ratio, consonant diversity, and vocalisation structure diversity. Toddlers with Dup7 also demonstrated expressive vocabulary delay as measured by both number of word types orally produced during the play sessions and primary caregivers' responses on a standardised parent-report measure of early expressive vocabulary. Examining babble, phonetic, and phonotactic characteristics from the productions of young children may allow for earlier identification of CAS and a better understanding of the nature of CAS.


Subject(s)
Apraxias , Speech , Humans , Child, Preschool , Speech/physiology , Apraxias/diagnosis , Apraxias/genetics , Speech Disorders , Phonetics , Speech Production Measurement
5.
HGG Adv ; 3(3): 100119, 2022 Jul 14.
Article in English | MEDLINE | ID: mdl-35677809

ABSTRACT

Precision medicine is an emerging approach to managing disease by taking into consideration an individual's genetic and environmental profile toward two avenues to improved outcomes: prevention and personalized treatments. This framework is largely geared to conditions conventionally falling into the field of medical genetics. Here, we show that the same avenues to improving outcomes can be applied to conditions in the field of behavior genomics, specifically disorders of spoken language. Babble Boot Camp (BBC) is the first comprehensive and personalized program designed to proactively mitigate speech and language disorders in infants at predictable risk by fostering precursor and early communication skills via parent training. The intervention begins at child age 2 to 5 months and ends at age 24 months, with follow-up testing at 30, 42, and 54 months. To date, 44 children with a newborn diagnosis of classic galactosemia (CG) have participated in the clinical trial of BBC. CG is an inborn error of metabolism of genetic etiology that predisposes up to 85% of children to severe speech and language disorders. Of 13 children with CG who completed the intervention and all or part of the follow-up testing, only one had disordered speech and none had disordered language skills. For the treated children who completed more than one assessment, typical speech and language skills were maintained over time. This shows that knowledge of genetic risk at birth can be leveraged toward proactive and personalized management of a disorder that manifests behaviorally.

6.
Healthcare (Basel) ; 10(3)2022 Feb 28.
Article in English | MEDLINE | ID: mdl-35326936

ABSTRACT

Hearing is a complex ability that extends beyond the peripheral auditory system. A speech in noise/competition test is a valuable measure to include in the test battery when attempting to assess an individual's "hearing". The present study compared syllable vs. word scoring of the Greek Speech-in-Babble (SinB) test with 22 native Greek-speaking children (6-12-year-olds) diagnosed with auditory processing disorder (APD) and 33 native Greek-speaking typically developing children (6-12-year-olds). A three-factor analysis of variance revealed greater discriminative ability for syllable scoring than for word scoring, with significant interactions between group and scoring. Two-way analysis of variance revealed that SinB word-based measures (SNR50%) were larger (poorer performance) than syllable-based measures for both groups of children. Cohen's d values between groups were larger for syllable-based mean scores than for word-based mean scores in both ears. These findings indicate that the type of scoring affects the SinB's resolution capacity and that syllable scoring might better differentiate typically developing children from children with APD.
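The between-group effect sizes reported above follow the standard pooled-variance formula for Cohen's d. The sketch below is illustrative only: the SNR50% values are invented, not the study's data.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d using the pooled standard deviation of the two groups."""
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    pooled_sd = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical SNR50% scores in dB (lower = better speech-in-babble performance).
apd_syllable = [2.0, 1.5, 2.5, 1.8, 2.2]     # children with APD
td_syllable = [-0.5, 0.0, -1.0, -0.2, -0.8]  # typically developing children
print(round(cohens_d(apd_syllable, td_syllable), 2))  # well-separated groups give a large d
```

A larger d for syllable-based than for word-based scores, as the study reports, simply means the two group distributions overlap less under syllable scoring.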

7.
Logoped Phoniatr Vocol ; 47(1): 1-9, 2022 Apr.
Article in English | MEDLINE | ID: mdl-32696707

ABSTRACT

AIM: The present study investigates the effect of signal degradation on perceived listening effort in children with hearing loss listening in a simulated classroom context. It also examines the associations between perceived listening effort, passage comprehension performance and executive functioning. METHODS: Twenty-four children (aged 6;3-13;0 years) with hearing impairment using cochlear implants (CI) and/or hearing aids (HA) participated. The children rated perceived listening effort after completing an auditory passage comprehension task. All children performed the task in four different listening conditions: listening to a typical (i.e. normal) voice in quiet, to a dysphonic voice in quiet, to a typical voice in background noise and to a dysphonic voice in background noise. In addition, the children completed a task assessing executive function. RESULTS: Both voice quality and background noise increased perceived listening effort in children with CI/HA, but no interaction with executive function was seen. CONCLUSION: Since increased listening effort seems to be a consequence of increased cognitive resource spending, it is likely that fewer resources will be available for these children not only to comprehend but also to learn in challenging listening environments such as classrooms.


Subject(s)
Hearing Loss , Speech Perception , Child , Humans , Listening Effort , Noise/adverse effects , Voice Quality
8.
Data Brief ; 38: 107367, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34568527

ABSTRACT

The data presented in this article are related to our research article titled "Contralateral suppression of transient evoked otoacoustic emissions for various noise signals" (Kalaiah et al., 2017) [1]. Contralateral suppression of transient evoked otoacoustic emissions (TEOAEs) was measured in 19 young adults with normal hearing sensitivity. To measure contralateral suppression, TEOAEs were recorded using clicks in linear mode with and without noise presented to the contralateral ear. Initially, the TEOAE was recorded without presenting noise to the contralateral ear, referred to as the 'baseline' TEOAE. Following this, the TEOAE was recorded while presenting noise to the contralateral ear, referred to as the contralateral noise conditions. The noises used in the present study included white noise, amplitude-modulated noise, and real-life noise signals. All recordings were completed in the same session in a single probe-fit condition. The data reported here include the global amplitude of the TEOAE, the noise-floor level, and the signal-to-noise ratio across the baseline and contralateral noise conditions, and the magnitude of contralateral suppression for the various noises. Further, the data include the amplitude of TEOAEs and suppression magnitude across eight 2 ms time bands between 2 and 18 ms.
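The suppression magnitude in such datasets is simply the baseline TEOAE amplitude minus the amplitude measured under contralateral noise. A minimal sketch, with invented amplitude values (the real data are in the linked article):

```python
# Hypothetical global TEOAE amplitudes in dB SPL; values are illustrative only.
baseline_amplitude = 12.4
contralateral_amplitude = {
    "white_noise": 11.2,
    "amplitude_modulated_noise": 11.5,
    "real_life_noise": 11.0,
}

# Suppression magnitude = baseline amplitude minus amplitude with contralateral noise.
suppression = {
    noise: round(baseline_amplitude - amp, 1)
    for noise, amp in contralateral_amplitude.items()
}
print(suppression)
```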

9.
Exp Psychol ; 68(1): 49-55, 2021 Jan.
Article in English | MEDLINE | ID: mdl-34109807

ABSTRACT

The familiar talker advantage is the finding that a listener's ability to perceive and understand a talker is facilitated when the listener is familiar with the talker. However, it is unclear when the benefits of familiarity emerge and whether they strengthen over time. To better understand the time course of the familiar talker advantage, we assessed the effects of long-term, implicit voice learning on 89 young adults' sentence recognition accuracy in the presence of four-talker babble. A university professor served as the target talker in the experiment. Half the participants were students of the professor and familiar with her voice. The professor was a stranger to the remaining participants. We manipulated the listeners' degree of familiarity with the professor over the course of a semester. We used mixed effects modeling to test for the effects of the two independent variables: talker and hours of exposure. Analyses revealed a familiar talker advantage in the listeners after 16 weeks (∼32 h) of exposure to the target voice. These results imply that talker familiarity (outside of the confines of a long-term, familial relationship) seems to be a much quicker-to-emerge, reliable cue for bootstrapping spoken language perception than previous literature suggested.


Subject(s)
Noise , Speech Perception/physiology , Adult , Female , Humans , Language , Male , Young Adult
10.
Front Neurosci ; 15: 646137, 2021.
Article in English | MEDLINE | ID: mdl-34012384

ABSTRACT

OBJECTIVES: Auditory perceptual learning studies tend to focus on the nature of the target stimuli. However, features of the background noise can also have a significant impact on the amount of benefit that participants obtain from training. This study explores whether perceptual learning of speech in background babble noise generalizes to other, real-life environmental background noises (car and rain), and if the benefits are sustained over time. DESIGN: Normal-hearing native English speakers were randomly assigned to a training (n = 12) or control group (n = 12). Both groups completed a pre- and post-test session in which they identified Bamford-Kowal-Bench (BKB) target words in babble, car, or rain noise. The training group completed speech-in-babble noise training on three consecutive days between the pre- and post-tests. A follow up session was conducted between 8 and 18 weeks after the post-test session (training group: n = 9; control group: n = 7). RESULTS: Participants who received training had significantly higher post-test word identification accuracy than control participants for all three types of noise, although benefits were greatest for the babble noise condition and weaker for the car- and rain-noise conditions. Both training and control groups maintained their pre- to post-test improvement over a period of several weeks for speech in babble noise, but returned to pre-test accuracy for speech in car and rain noise. CONCLUSION: The findings show that training benefits can show some generalization from speech-in-babble noise to speech in other types of environmental noise. Both groups sustained their learning over a period of several weeks for speech-in-babble noise. As the control group received equal exposure to all three noise types, the sustained learning with babble noise, but not other noises, implies that a structural feature of babble noise was conducive to the sustained improvement. These findings emphasize the importance of considering the background noise as well as the target stimuli in auditory perceptual learning studies.

11.
Infant Behav Dev ; 63: 101531, 2021 05.
Article in English | MEDLINE | ID: mdl-33582572

ABSTRACT

The aim of the present mixed cross-sectional and longitudinal study was to observe and describe some aspects of vocal imitation in natural mother-infant interaction. Specifically, maternal imitation of infant utterances was observed in relation to the imitative modeling, mirrored equivalence, and social guided learning models of infant speech development. Nine mother-infant dyads were audio-video recorded. Infants were recruited at different ages between 6 and 11 months and followed for 3 months, providing a quasi-longitudinal series of data from 6 through 14 months of age. Maternal imitation was more frequent than infant imitation, even though vocal imitation was a rare maternal response. Importantly, mothers used a range of contingent and noncontingent vocal responses in interaction with their infants. Mothers responded to three-quarters of their infants' vocalizations, including speech-like and less mature vocalization types. The infants' phonetic repertoire expanded with age. Overall, the findings are most consistent with the social guided learning approach. Infants rarely imitated their mothers, which suggests a creative, self-motivated learning mechanism that requires further investigation.


Subject(s)
Imitative Behavior , Mothers , Cross-Sectional Studies , Female , Humans , Infant , Infant Behavior , Longitudinal Studies , Mother-Child Relations
12.
J Pers Med ; 11(2)2021 Jan 28.
Article in English | MEDLINE | ID: mdl-33525536

ABSTRACT

Type I (classic) galactosemia, a deficiency of galactose-1-phosphate uridylyltransferase (GALT), is a hereditary disorder of galactose metabolism. The current therapeutic standard of care, a galactose-restricted diet, is effective in treating neonatal complications but is inadequate in preventing burdensome complications. The development of several animal models of classic galactosemia that (partly) mimic the biochemical and clinical phenotypes, and the resolution of the crystal structure of GALT, have provided important insights; however, the precise pathophysiology remains to be elucidated. Novel therapeutic approaches currently being explored focus on several of the pathogenic factors that have been described, aiming to (i) restore GALT activity, (ii) influence the cascade of events and (iii) address the clinical picture. This review provides an overview of the latest advancements in therapy approaches.

13.
Front Artif Intell ; 4: 809321, 2021.
Article in English | MEDLINE | ID: mdl-35005616

ABSTRACT

The sophistication of artificial intelligence (AI) technologies has advanced significantly in the past decade. However, the unpredictability and variability of AI behavior in noisy signals are still underexplored and represent a challenge when trying to generalize AI behavior to real-life environments, especially for people with a speech disorder, who already experience reduced speech intelligibility. In the context of developing assistive technology for people with Parkinson's disease using automatic speech recognition (ASR), this pilot study reports on the performance of Google Cloud speech-to-text technology with dysarthric and healthy speech in the presence of multi-talker babble noise at different intensity levels. Despite sensitivities and shortcomings, it is possible to control the performance of these systems with current tools in order to measure speech intelligibility in real-life conditions.
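Preparing stimuli with babble "at different intensity levels" typically means scaling the noise so the speech-to-noise power ratio hits a target SNR before mixing. A minimal sketch with made-up sample lists (a real pipeline would operate on audio arrays loaded from files):

```python
import math

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the mixture has the requested speech-to-noise power
    ratio, then add it to `speech` sample by sample."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Target noise power follows from SNR(dB) = 10 * log10(p_speech / p_noise).
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    gain = math.sqrt(target_p_noise / p_noise)
    return [s + gain * n for s, n in zip(speech, noise)]

speech = [1.0, -1.0] * 4  # toy "speech" with unit power
babble = [0.5, -0.5] * 4  # toy "babble" with power 0.25
mixed = mix_at_snr(speech, babble, snr_db=0.0)  # equal speech and noise power
```

At 0 dB SNR the noise gain is sqrt(1 / 0.25) = 2; raising `snr_db` shrinks the gain, giving the progressively easier listening conditions such studies sweep over.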

14.
Cogn Psychol ; 122: 101308, 2020 11.
Article in English | MEDLINE | ID: mdl-32504852

ABSTRACT

Infants' early babbling allows them to engage in proto-conversations with caretakers, well before clearly articulated, meaningful words are part of their productive lexicon. Moreover, the well-rehearsed sounds from babble serve as a perceptual 'filter', drawing infants' attention towards words that match the sounds they can reliably produce. Using naturalistic home recordings of 44 10-11-month-olds (an age with high variability in early speech sound production), this study tests whether infants' early consonant productions match words and objects in their environment. We find that infants' babble matches the consonants produced in their caregivers' speech. Infants with a well-established consonant repertoire also match their babble to objects in their environment. Our findings show that infants' early consonant productions are shaped by their input: by 10 months, the sounds of babble match what infants see and hear.


Subject(s)
Language Development , Phonetics , Speech Perception/physiology , Verbal Learning , Acoustic Stimulation , Communication , Female , Humans , Infant , Male
15.
Appl Acoust ; 162: 107183, 2020 May.
Article in English | MEDLINE | ID: mdl-32362663

ABSTRACT

This project set out to develop an app for infants under one year of age that responds in real time to language-like infant utterances with attractive images on an iPad screen. Language-like vocalisations were defined as voiced utterances that were neither high-pitched squeals nor shouts. The app, BabblePlay, was intended for use in psycholinguistic research to investigate the possible causal relationship between early canonical babble and early onset of word production. It is also designed for a clinical setting: (1) to illustrate the importance of feedback as a way to encourage infant vocalisations, and (2) to provide consonant production practice for infant populations that do not vocalise enough or that vocalise in an atypical way, specifically autistic infants (once they have begun to produce consonants). This paper describes the development and testing of BabblePlay, which responds to an infant's vocalisations with colourful moving shapes on the screen that are analogous to some features of the infant's vocalisation, including loudness and duration. Validation testing showed high correlation between the app and two human judges in identifying vocalisations in 200 min of BabblePlay recordings, and a feasibility study conducted with 60 infants indicates that they can learn the contingency between their vocalisations and the appearance of shapes on the screen in one five-minute BabblePlay session. BabblePlay meets the specification of being a simple and easy-to-use app. It is a promising tool for research on infant language development and could be used in home and professional environments to demonstrate the importance of immediate reward for vocal utterances in increasing infant vocalisations.
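The core detection step, picking out vocalisations that are loud enough for long enough, can be sketched as a threshold-plus-duration pass over per-frame RMS values. This is a simplified illustration, not BabblePlay's actual algorithm, which also screens out squeals and shouts by pitch:

```python
def detect_utterances(frames_rms, threshold=0.02, min_frames=5):
    """Return (start, end) frame indices of segments whose RMS stays at or
    above `threshold` for at least `min_frames` consecutive frames."""
    segments, start = [], None
    for i, rms in enumerate(frames_rms):
        if rms >= threshold:
            if start is None:
                start = i  # a candidate utterance begins
        else:
            if start is not None and i - start >= min_frames:
                segments.append((start, i))  # long enough: keep it
            start = None
    # Handle an utterance still in progress at the end of the recording.
    if start is not None and len(frames_rms) - start >= min_frames:
        segments.append((start, len(frames_rms)))
    return segments

# Toy RMS track: silence, a 6-frame utterance, silence, a too-short 2-frame blip.
track = [0.0] * 3 + [0.05] * 6 + [0.0] * 4 + [0.05] * 2 + [0.0] * 2
print(detect_utterances(track))  # [(3, 9)]
```

The duration criterion is what distinguishes a sustained babble from a transient noise burst; a real-time version would run the same state machine over a live microphone buffer.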

16.
Int J Audiol ; 59(1): 33-38, 2020 01.
Article in English | MEDLINE | ID: mdl-31305187

ABSTRACT

Objective: The Speech in Babble (SiB) test assesses the perception of speech in noise in UK adults. Here, we define the normal range of SiB scores to enable the use of the test in the clinic. Design: In each test, 25 monosyllabic words were played in background multi-talker babble. Listeners had to repeat the word they heard. An adaptive procedure was used to determine the signal-to-noise ratio needed to reach 50% correct responses (i.e. the Speech Reception Threshold). Eight distinct equivalent lists were available. Study sample: Sixty-nine normal-hearing adults (aged 20-57 years) with no reported listening difficulties participated in the study and completed the SiB test twice in both ears. Results: Normative SiB scores varied from -0.8 dB to 3.7 dB, suggesting that patients outside these limits should be considered as having abnormal scores. No statistically significant difference between ears and no effect of age or sex was found. Test-retest reliability was "fair". Conclusion: The SiB test is a short, valid and reliable test that can be used in UK clinics, e.g. as part of a standard APD battery or for evaluating the performance of hearing-impaired patients.
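The adaptive procedure can be illustrated with a one-up one-down staircase, which converges on the 50%-correct point of the psychometric function. The simulated listener and all parameters below are invented for illustration; the published SiB test's actual step sizes and stopping rule are not specified here.

```python
import math
import random

def psychometric(snr_db, midpoint=2.0, slope=1.0):
    """Hypothetical listener: probability of a correct response at a given SNR."""
    return 1 / (1 + math.exp(-slope * (snr_db - midpoint)))

def srt_staircase(prob_correct, start_snr=10.0, step=2.0, trials=40, seed=1):
    """One-up one-down staircase: SNR decreases after a correct response and
    increases after an error, so it tracks the 50%-correct SNR (the SRT)."""
    rng = random.Random(seed)
    snr, last_direction, reversals = start_snr, None, []
    for _ in range(trials):
        correct = rng.random() < prob_correct(snr)
        direction = -step if correct else step
        if last_direction is not None and direction != last_direction:
            reversals.append(snr)  # record the SNR at each reversal
        snr += direction
        last_direction = direction
    # SRT estimate: mean SNR at the reversal points.
    return sum(reversals) / len(reversals) if reversals else snr

estimate = srt_staircase(psychometric)
```

Because a step down requires a correct answer and a step up requires an error, the track settles where both are equally likely, i.e. at the 50% point, which is why averaging the reversal SNRs estimates the Speech Reception Threshold.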


Subject(s)
Speech Discrimination Tests/statistics & numerical data , Speech Reception Threshold Test/statistics & numerical data , Adult , Auditory Threshold , Female , Healthy Volunteers , Humans , Male , Middle Aged , Noise , Perceptual Masking , Reference Values , Signal-To-Noise Ratio , Speech Perception , United Kingdom , Young Adult
17.
Logoped Phoniatr Vocol ; 45(1): 15-23, 2020 Apr.
Article in English | MEDLINE | ID: mdl-30879365

ABSTRACT

Purpose: Speech signal degradation, such as a disordered voice presented in quiet or in combination with multi-talker babble noise, could affect listening comprehension in children with hearing impairment. This study aims to investigate the effects of voice quality and multi-talker babble noise on passage comprehension in children using cochlear implants (CIs) and/or hearing aids (HAs). It also aims to examine what role executive functioning plays in passage comprehension in listening conditions with degraded signals (voice quality and multi-talker babble noise) in children using CIs/HAs. Methods: Twenty-three children (10 boys and 13 girls; mean age 9 years) using CIs and/or HAs were tested for passage comprehension in four listening conditions: a typical voice or a (hoarse) dysphonic voice, presented in quiet or in multi-talker babble noise. Results: The results show that the dysphonic voice did not affect passage comprehension in quiet or in noise. Multi-talker babble noise decreased passage comprehension compared to performance in quiet. No interactions with executive function were found. Conclusions: In conclusion, children with CIs/HAs seem to struggle with comprehension in poor sound environments, which in turn may reduce learning opportunities at school.


Subject(s)
Cochlear Implantation/instrumentation , Cochlear Implants , Comprehension , Disabled Children/rehabilitation , Executive Function , Hearing Aids , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/rehabilitation , Speech Perception , Adolescent , Adolescent Behavior , Age Factors , Child , Child Behavior , Disabled Children/psychology , Dysphonia/physiopathology , Female , Humans , Male , Persons With Hearing Impairments/psychology , Speech Intelligibility , Voice Quality
18.
Logoped Phoniatr Vocol ; 44(2): 87-94, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30204510

ABSTRACT

PURPOSE: This study examines the influence of voice quality and multi-talker babble noise on processing and storage performance in a working memory task performed by children using cochlear implants (CI) and/or hearing aids (HA). METHODS: Twenty-three children with a hearing impairment using CI and/or HA participated. Age range was between 6 and 13 years. The Competing Language Processing Task (CLPT) was assessed in three listening conditions; a typical voice presented in quiet, a dysphonic voice in quiet, and a typical voice in multi-talker babble noise (signal-to-noise ratio +10 dB). Being a dual task, the CLPT consists of a sentence processing component and a recall component. The recall component constitutes the measure of working memory capacity (WMC). Higher-level executive function was assessed using Elithorn's mazes. RESULTS: The results showed that the dysphonic voice did not affect performance in the processing component or performance in the recall component. Multi-talker babble noise decreased performance in the recall component but not in the processing component. Higher-level executive function was not significantly related to performance in any component. CONCLUSIONS: The findings indicate that multi-talker babble noise, but not a dysphonic voice quality, seems to put strain on WMC in children using CI and/or HA.


Subject(s)
Cochlear Implants , Disabled Children/rehabilitation , Hearing Aids , Mental Recall , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/rehabilitation , Speech Perception , Voice Quality , Acoustic Stimulation , Adolescent , Child , Comprehension , Disabled Children/psychology , Dysphonia/physiopathology , Electric Stimulation , Female , Humans , Male , Persons With Hearing Impairments/psychology , Speech Acoustics , Speech Intelligibility
19.
Lang Speech ; 62(3): 531-545, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30070165

ABSTRACT

The identification of English consonants in quiet and in multi-talker babble was examined for three groups of young adult listeners: Chinese listeners in China, Chinese listeners in the USA (CNU), and native English listeners. As expected, native listeners outperformed non-native listeners. The two non-native groups performed similarly in quiet, whereas CNU listeners performed significantly better than the Chinese-in-China listeners in babble. It is concluded that CNU listeners may benefit from their English experience, for example, through better use of temporal variation in noise and greater resistance to informational masking, and thus perceive English consonants better in babble. Possible explanations for the differential noise effect on the three groups are discussed.


Subject(s)
Multilingualism , Noise/adverse effects , Perceptual Masking , Speech Acoustics , Speech Intelligibility , Speech Perception , Voice Quality , Adult , Female , Humans , Male , Recognition, Psychology , Young Adult
20.
Int J Pediatr Otorhinolaryngol ; 111: 39-46, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29958612

ABSTRACT

OBJECTIVE: The present study was designed to test the hypothesis that medial olivocochlear system functionality is associated with speech-recognition-in-babble performance in children diagnosed with central auditory processing disorder. METHOD: Children diagnosed with central auditory processing disorder who specifically demonstrated speech-in-noise deficits were compared to children diagnosed with central auditory processing disorder without these deficits. Suppression effects were examined across 15 time intervals to examine variability. Right- and left-ear suppression was analysed separately to evaluate laterality. STUDY SAMPLE: 52 children diagnosed with central auditory processing disorder, aged 6-14 years, were divided into normal or abnormal groups based on Speech in Babble (SinB) performance in each ear. The cut-off value was set at SNR = 1.33 dB. Suppression of transient evoked otoacoustic emissions was measured. RESULTS: The abnormal Speech in Babble Right Ear group showed significant negative correlations with suppression levels for 7 of the 15 time intervals measured. No significant correlations with SinBR performance were observed for the remaining time intervals, as was the case for the typically evaluated R8-18 time interval and the Speech in Babble Left Ear. CONCLUSIONS: Results indicate that suppression is influenced by the time window analysed and the ear tested, and is associated with speech-recognition-in-babble performance in children with central auditory processing disorder.


Subject(s)
Language Development Disorders/physiopathology , Otoacoustic Emissions, Spontaneous/physiology , Speech Perception/physiology , Adolescent , Child , Female , Functional Laterality , Hearing Tests , Humans , Language Development Disorders/diagnosis , Male , Noise