Results 1 - 20 of 2,178
1.
Acta Otolaryngol ; : 1-6, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39351976

ABSTRACT

BACKGROUND: Hearing can be preserved in cochlear implant recipients with considerable residual low-frequency hearing. However, the most favorable electrode type for hearing preservation and speech perception remains debated. OBJECTIVE: The aim was to evaluate hearing preservation and speech discrimination one year post-implantation for all types of cochlear implant electrodes used in adult patients implanted between 2014 and 2022. METHODS: The HEARING group formula was used to calculate the degree of hearing preservation, defined as minimal (0-25%), partial (25-75%) or complete (≥ 75%). Speech perception was measured with monosyllabic words. RESULTS: Analysis of hearing preservation for the various electrode types revealed that FLEX 24 preserved hearing significantly better (p < 0.05) than FLEX 28, FLEX SOFT, and Contour Advance. FLEX 20 also preserved hearing significantly better (p < 0.05) than Contour Advance. No statistically significant difference in monosyllabic word scores was found between the electrode types. DISCUSSION: There was a statistically significant difference between the electrode types in hearing preservation but not in speech perception. This study contributes important information about hearing preservation and speech perception that can be used for pre-surgery patient counselling.
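For reference, the HEARING group formula relates the pre- and post-operative pure-tone averages (PTA) to the audiometer output limit. A minimal sketch, assuming the commonly cited form of the formula and a 120 dB HL audiometer limit (an assumed typical value, not stated in the abstract):

```python
def hearing_preservation(pta_pre: float, pta_post: float, pta_max: float = 120.0) -> float:
    """HEARING group formula: percentage of residual hearing preserved.

    pta_pre, pta_post: pure-tone averages (dB HL) before and after surgery.
    pta_max: audiometer output limit; 120 dB HL is an assumed typical value.
    """
    return (1 - (pta_post - pta_pre) / (pta_max - pta_pre)) * 100


def classify(hp_percent: float) -> str:
    # Category boundaries as given in the abstract.
    if hp_percent >= 75:
        return "complete"
    if hp_percent > 25:
        return "partial"
    return "minimal"


# Illustrative values only: 45 dB HL pre-op, 85 dB HL post-op -> ~46.7%.
print(classify(hearing_preservation(pta_pre=45, pta_post=85)))  # partial
```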

2.
Front Psychol ; 15: 1484655, 2024.
Article in English | MEDLINE | ID: mdl-39355294

ABSTRACT

[This corrects the article DOI: 10.3389/fpsyg.2023.1270743.].

3.
Front Hum Neurosci ; 18: 1424920, 2024.
Article in English | MEDLINE | ID: mdl-39234407

ABSTRACT

Past studies have explored formant centering, a corrective behavior in which formants converge over the duration of an utterance toward those of a putative target vowel. In this study, we establish the existence of a similar centering phenomenon for pitch in healthy elderly controls and examine how this corrective behavior is altered in Alzheimer's disease (AD). We found that the pitch centering response in healthy elderly controls was similar when correcting pitch errors below and above the target (median) pitch. In contrast, patients with AD showed an asymmetry, with a larger correction for pitch errors below the target phonation than above it. These findings indicate that pitch centering is a robust compensatory behavior in human speech. They also point to how neurodegenerative processes that affect speech in AD may alter this compensation.
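Pitch centering is typically quantified as the reduction in absolute pitch error between utterance onset and mid-utterance, relative to a target pitch. A sketch under assumed analysis windows (the study's actual windows and parameters are not given in the abstract):

```python
import numpy as np

def pitch_centering(f0_track: np.ndarray, target_hz: float, win: int = 50) -> float:
    """Centering: reduction in absolute pitch error from utterance onset
    to mid-utterance, relative to a target (e.g., median) pitch.

    f0_track: voiced F0 samples (Hz) for one utterance.
    win: samples per analysis window -- an assumed choice, not the
    study's actual parameter. A positive return value indicates
    corrective convergence toward the target.
    """
    onset_err = abs(np.mean(f0_track[:win]) - target_hz)
    mid = len(f0_track) // 2
    mid_err = abs(np.mean(f0_track[mid - win // 2 : mid + win // 2]) - target_hz)
    return onset_err - mid_err
```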

4.
J Clin Med ; 13(17)2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39274482

ABSTRACT

(1) Background: The aim of the present study was to assess the impact of reverberation on speech perception in noise and on spatial release from masking (SRM) in bimodal and bilateral cochlear implant (CI) users and in CI subjects with low-frequency residual hearing using combined electric-acoustic stimulation (EAS). (2) Methods: In total, 10 bimodal CI users, 14 bilateral CI users, 14 EAS users, and 17 normal-hearing (NH) controls took part in the study. Speech reception thresholds (SRTs) in unmodulated noise were assessed in a co-located masker condition (S0N0) and with a spatial separation of speech and noise (S0N60), both in free field and in a loudspeaker-based room simulation with two different reverberation times. (3) Results: There was a significant detrimental effect of reverberation on SRTs and SRM in all subject groups. A significant difference between the NH group and all CI/EAS groups was found, whereas there was no significant difference in SRTs between any of the CI and EAS groups. Only NH subjects achieved spatial release from masking in reverberation; no beneficial effect of spatially separating speech and noise was found in any CI/EAS group. (4) Conclusions: The electric-acoustic stimulation group did not yield a superior outcome in terms of speech perception in noise under reverberation when the noise was presented towards the better-hearing ear.
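Spatial release from masking is conventionally the SRT improvement gained by separating target and masker; a minimal sketch of that bookkeeping, with illustrative values only:

```python
def spatial_release_from_masking(srt_colocated: float, srt_separated: float) -> float:
    """SRM (dB): SRT in the co-located condition (S0N0) minus the SRT
    with speech and noise spatially separated (S0N60). Lower SRTs are
    better, so a positive SRM reflects a benefit of spatial separation."""
    return srt_colocated - srt_separated

# Illustrative values only (not from the study):
print(spatial_release_from_masking(srt_colocated=-4.0, srt_separated=-9.5))  # 5.5 dB
```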

5.
J Cogn ; 7(1): 69, 2024.
Article in English | MEDLINE | ID: mdl-39280724

ABSTRACT

Music making across cultures arguably involves a blend of innovation and adherence to established norms. This integration allows listeners to recognise a range of innovative, surprising, and functional elements in music, while also associating them with a certain tradition or style. In this light, musical creativity may be seen to involve the novel recombination of shared elements and rules, which can itself give rise to new cultural conventions. Put simply, future norms rely on past knowledge and present action; this holds for music as it does for other cultural domains. A key process permeating this temporal transition, with regard to both music making and music listening, is prediction. Recent findings suggest that as we listen to music, our brain constantly generates predictions based on prior knowledge acquired in a given enculturation context. Those predictions, in turn, can shape our appraisal of the music, in a continual perception-action loop. This dynamic process of predicting and calibrating expectations may enable shared musical realities, that is, sets of norms that are transmitted, with some modification, either vertically between generations of a given musical culture or horizontally between peers of the same or different cultures. As music transforms through cultural evolution, so do the predictive models in our minds and the expectancies they give rise to, shaped by cultural exposure and individual experience. Thus, creativity and prediction are both fundamental and complementary to the transmission of cultural systems, including music, across generations and societies. For these reasons, prediction, creativity, and cultural evolution were the central themes of a symposium we organised in 2022, which aimed to study their interplay from an interdisciplinary perspective, guided by contemporary theories and methodologies. This special issue compiles research discussed during or inspired by that symposium, concluding with potential directions for the field of music cognition in that spirit.

6.
Infant Behav Dev ; 77: 101992, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39298930

ABSTRACT

In the current preregistered study, we tested n = 67 6-month-old Norwegian infants' discrimination of a native vowel contrast /y-i/ and a non-native (British) vowel contrast /ʌ-æ/ in an eye-tracking habituation paradigm. Our results showed that, on a group level, infants did not discriminate either contrast. Yet, exploratory analyses revealed a negative association between infants' performance in each experiment, that is, better discrimination of the native contrast was associated with worse discrimination of the non-native contrast. Potentially, infants in this study might have been on the cusp of perceptual reorganisation towards their native language.

7.
Article in English | MEDLINE | ID: mdl-39299967

ABSTRACT

OBJECTIVE: To evaluate objective and subjective hearing outcomes in experienced cochlear implant users with single-sided deafness (SSD CI) who used fitting maps created via anatomy-based fitting (ABF) and clinically-based fitting (CBF). PARTICIPANTS: Twelve SSD CI users with postlingual hearing loss. INTERVENTION: OTOPLAN (version 3, MED-EL) was used to determine intracochlear electrode contact positions from post-operative high-resolution flat-panel volume computed tomography. From these positions, the corresponding center frequencies and bandwidths were derived for each channel and implemented in the clinical fitting software MAESTRO to yield an ABF map individualized to each user. MAIN OUTCOME MEASURES: ABF and CBF maps were compared. Objective speech perception in quiet and in noise, binaural effects, and self-perceived sound quality were evaluated. RESULTS: Significantly higher speech perception in noise scores were observed with the ABF map compared to the CBF map (mean SRT50: -6.49 vs. -4.8 dB SNR for the S0NCI configuration and -3.85 vs. -2.75 dB SNR for the S0N0 configuration). Summation and squelch effects were significantly increased with the ABF map (0.86 vs. 0.21 dB SNR for summation and 0.85 vs. -0.09 dB SNR for squelch). No improvement in speech perception in quiet or in spatial release from masking was observed with the ABF map. A similar level of self-perceived sound quality was reported for each map. At the end of the study, all users opted to keep the ABF map. This preference was independent of the angular insertion depth of the electrode array. CONCLUSIONS: Experienced SSD CI users preferred the ABF map, which gave them significant improvements in binaural hearing and in some aspects of speech perception.
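Summation and squelch are commonly computed as SRT differences between monaural and binaural listening; a sketch under those common definitions (the study's exact test configurations may differ):

```python
def binaural_benefit_db(srt_monaural: float, srt_binaural: float) -> float:
    """Generic binaural benefit (dB): monaural SRT minus binaural SRT.

    Lower SRTs are better, so positive values mean the added ear helps.
    Summation uses SRTs from the co-located condition (S0N0); squelch
    uses SRTs with noise toward the side of the added (CI) ear, so the
    added ear receives the poorer signal-to-noise ratio.
    """
    return srt_monaural - srt_binaural

# Illustrative values only (not from the study):
print(binaural_benefit_db(-4.0, -4.86))  # ~0.86 dB, a summation-sized effect
```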

8.
Hum Brain Mapp ; 45(13): e70023, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39268584

ABSTRACT

The relationship between speech production and perception is a topic of ongoing debate. Some argue that there is little interaction between the two, while others claim they share representations and processes. One perspective suggests increased recruitment of the speech motor system in demanding listening situations to facilitate perception. However, uncertainties persist regarding the specific regions involved and the listening conditions that influence its engagement. This study used activation likelihood estimation in coordinate-based meta-analyses to investigate the neural overlap between speech production and three speech perception conditions: speech-in-noise, spectrally degraded speech and linguistically complex speech. Neural overlap was observed in left frontal, insular and temporal regions. Key nodes included the left frontal operculum (FOC), the left posterior lateral part of the inferior frontal gyrus (IFG), the left planum temporale (PT), and the left pre-supplementary motor area (pre-SMA). Left IFG activation was consistently observed during linguistic processing, suggesting sensitivity to the linguistic content of speech. In comparison, left pre-SMA activation was observed when processing degraded and noisy signals, indicating sensitivity to signal quality. The left PT and FOC were activated in all conditions, with the posterior FOC area overlapping across conditions. Our meta-analysis reveals context-independent (FOC, PT) and context-dependent (pre-SMA, posterior lateral IFG) regions within the speech motor system during challenging speech perception. These regions could contribute to sensorimotor integration and executive cognitive control for perception and production.


Subject(s)
Speech Perception , Speech , Humans , Speech Perception/physiology , Speech/physiology , Brain Mapping , Likelihood Functions , Motor Cortex/physiology , Cerebral Cortex/physiology , Cerebral Cortex/diagnostic imaging
9.
Brain Sci ; 14(9)2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39335391

ABSTRACT

Transcranial magnetic stimulation (TMS) has been widely used to study the mechanisms that underlie motor output. Yet, the extent to which TMS acts upon the cortical neurons implicated in volitional motor commands and the focal limitations of TMS remain subject to debate. Previous research links TMS to improved subject performance in behavioral tasks, including a bias in phoneme discrimination. Our study replicates this result, which implies a causal relationship between electromagnetic stimulation and psychomotor activity, and tests whether TMS-facilitated psychomotor activity recorded via electroencephalography (EEG) may thus serve as a superior input for neural decoding. First, we illustrate that site-specific TMS elicits a double dissociation in discrimination ability for two phoneme categories. Next, we perform a classification analysis on the EEG signals recorded during TMS and find a dissociation between stimulation site and decoding accuracy that parallels the behavioral results. We observe weak to moderate evidence for the alternative hypothesis in a Bayesian analysis of group means, with more robust results upon stimulation of a brain region governing multiple phoneme features. Overall, task accuracy was a significant predictor of decoding accuracy for phoneme categories (F(1,135) = 11.51, p < 0.0009) and individual phonemes (F(1,119) = 13.56, p < 0.0003), providing new evidence for a causal link between TMS, neural function, and behavior.

10.
Article in English | MEDLINE | ID: mdl-39311007

ABSTRACT

Recent studies suggest that benefiting early from both a cochlear implant (CI) and exposure to cued speech (CS, a support system for the perception of oral language) positively impacts deaf children's speech perception, speech intelligibility, and reading. This study aims to show that: (1) CS-based speech perception ("cue reading") and speech intelligibility might also constitute precise measures for determining the impact of CI and CS on deaf students' literacy performance; and (2) print exposure might also be a predictive factor in this equation. We conducted regression analyses to examine the impact of these three variables in two experiments, one with Grade 2-3 deaf children and one with Grade 6-9 deaf adolescents. Results indicate that print exposure contributes significantly to literacy skills across experiments, with additional contributions from cue reading and speech intelligibility in older students. The predictive value of the print exposure, cue reading, and speech intelligibility variables will be discussed, as will the consequences for educational and pedagogical practices.

11.
Lang Speech ; : 238309241269059, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39297582

ABSTRACT

Kiezdeutsch is a multiethnolectal variety of German spoken by young people from multicultural communities that exhibits lexical, syntactic, and phonetic differences from standard German. A rather salient and pervasive feature of this variety is the fronting of the standard palatal fricative /ç/ (as in ich "I") to [ɕ] or [ʃ]. Previous perception work shows that this difference is salient and carries social meaning, but that this depends on the listener group. Further investigations also point to the significance of /ɔɪ/-fronting in production; however, whether this feature is salient in perception has not yet been investigated. In several (multi)ethnolectal varieties, differences in voice quality compared to the standard have been identified. Therefore, in this study, we present an acoustic comparison of voice quality in adolescent speakers of Kiezdeutsch and standard German, with results showing that Kiezdeutsch speakers produce a breathier voice quality. In addition, we report on a perception test designed to examine the social meaning of voice quality in combination with two segmental cues: coronalization of /ç/ and /ɔɪ/-fronting. The results indicate perceptual gradience for the phonetic alternations detected in Kiezdeutsch, with coronalization of /ç/ being a highly salient and reliable marker, whereas fronting of /ɔɪ/ and breathy voice do not appear to be clearly enregistered features of Kiezdeutsch for all listeners. Thus, even though we find differences in production, these may not necessarily be relevant in perception, pointing toward enregisterment, like sound change, being a continuous process of forming learned associations through tokens of experience.

12.
Front Psychol ; 15: 1446240, 2024.
Article in English | MEDLINE | ID: mdl-39315043

ABSTRACT

The temporal dynamics of the perception of within-word coarticulatory cues remain a subject of ongoing debate in speech perception research. This behavioral gating study sheds light on the unfolding predictive use of anticipatory coarticulation in onset fricatives. Word onset fricatives (/f/ and /s/) were split into four gates (15, 35, 75 and 135 milliseconds). Listeners made a forced choice about the word they were listening to, based on the stimulus gates. The results showed fast predictive use of coarticulatory lip rounding during /s/ word onsets, as early as 15 ms from word onset. For /f/ onsets, coarticulatory backness and height began to be used predictively after 75 ms. These findings indicate that onset times of the occurrence and use of coarticulatory cues can be extremely fast and have a time course that differs depending on fricative type.

13.
Front Psychol ; 15: 1394309, 2024.
Article in English | MEDLINE | ID: mdl-39323581

ABSTRACT

Previous research on the perception of segmental features of languages has established a correlation between the phoneme inventory of a language and its speakers' perceptual abilities, as indexed by discrimination tasks and Mismatch Negativity (MMN). Building on this background, the current study elucidated the relationship between perceptual ability and tonal inventory by utilizing two tonal languages. Two groups of participants were included in the present experiment: Mandarin speakers and Hakka-Mandarin speakers. Onset latency analysis revealed a significant difference in the Mandarin syllable condition, with Hakka-Mandarin speakers demonstrating earlier MMN latency than Mandarin speakers. This suggests a more efficient auditory processing mechanism in Hakka-Mandarin speakers. Both groups, however, showed similar MMN latency in the Hakka syllable condition. The interaction between language background and syllable type indicates that other factors, such as syllable sonority, also influence MMN responses. These findings highlight the importance of considering multiple phonemic inventories and syllable characteristics in studies of tonal perception.

14.
Dev Cogn Neurosci ; 70: 101444, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39332108

ABSTRACT

Prenatal listening experience reportedly modulates how humans process speech at birth, but little is known about how speech perception develops throughout the perinatal period. The present experiment assessed neural event-related potentials (ERPs) and mismatch responses (MMRs) to native vowels in 99 neonates born between 32 and 42 weeks of gestation. The vowels elicited reliable ERPs in newborns whose gestational age at the time of the experiment was at least 36 weeks + 1 day. The ERPs reflected spectral distinctions between vowel onsets from age 36 weeks + 6 days and durational distinctions at vowel offsets from age 37 weeks + 6 days. Starting at age 40 weeks + 4 days, there was evidence of neural discrimination of vowel length, indexed by a negative MMR response. The present findings extend our understanding of the earliest stages of speech perception development in that they pinpoint the ages at which the cortex reliably responds to the phonetic characteristics of individual speech sounds and discriminates a native phoneme contrast. The age at which the brain reliably differentiates vowel onsets coincides with what is considered term age in many countries (37 weeks + 0 days of gestational age). Future studies should investigate to what extent the perinatal maturation of cortical responses to speech sounds is modulated by the ambient language.
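The MMR reported here is conventionally derived as a deviant-minus-standard difference wave over averaged epochs; a minimal sketch, with filtering and channel selection omitted:

```python
import numpy as np

def mismatch_response(epochs: np.ndarray, is_deviant: np.ndarray) -> np.ndarray:
    """Deviant-minus-standard difference wave.

    epochs: baseline-corrected EEG, shape (n_trials, n_samples).
    is_deviant: boolean mask, shape (n_trials,).
    A reliable negativity in the expected time window of the returned
    wave indexes neural discrimination of the contrast.
    """
    epochs = np.asarray(epochs, dtype=float)
    is_deviant = np.asarray(is_deviant, dtype=bool)
    return epochs[is_deviant].mean(axis=0) - epochs[~is_deviant].mean(axis=0)
```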

15.
Cereb Cortex ; 34(9)2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39329356

ABSTRACT

Evidence suggests that the articulatory motor system contributes to speech perception in a context-dependent manner. This study tested 2 hypotheses using magnetoencephalography: (i) the motor cortex is involved in phonological processing, and (ii) it aids in compensating for speech-in-noise challenges. A total of 32 young adults performed a phonological discrimination task under 3 noise conditions while their brain activity was recorded using magnetoencephalography. We observed simultaneous activation in the left ventral primary motor cortex and bilateral posterior-superior temporal gyrus when participants correctly identified pairs of syllables. This activation was significantly more pronounced for phonologically different than identical syllable pairs. Notably, phonological differences were resolved more quickly in the left ventral primary motor cortex than in the left posterior-superior temporal gyrus. Conversely, the noise level did not modulate the activity in frontal motor regions and the involvement of the left ventral primary motor cortex in phonological discrimination was comparable across all noise conditions. Our results show that the ventral primary motor cortex is crucial for phonological processing but not for compensation in challenging listening conditions. Simultaneous activation of left ventral primary motor cortex and bilateral posterior-superior temporal gyrus supports an interactive model of speech perception, where auditory and motor regions shape perception. The ventral primary motor cortex may be involved in a predictive coding mechanism that influences auditory-phonetic processing.


Subject(s)
Magnetoencephalography , Motor Cortex , Phonetics , Speech Perception , Humans , Male , Female , Motor Cortex/physiology , Young Adult , Speech Perception/physiology , Adult , Functional Laterality/physiology , Discrimination, Psychological/physiology , Acoustic Stimulation , Brain Mapping , Noise
16.
Entropy (Basel) ; 26(9)2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39330066

ABSTRACT

Misunderstandings in dyadic interactions often persist despite our best efforts, particularly between native and non-native speakers, resembling a broken duet that refuses to harmonise. This paper delves into the computational mechanisms underpinning these misunderstandings through the lens of the broken Lorenz system-a continuous dynamical model. By manipulating a specific parameter regime, we induce bistability within the Lorenz equations, thereby confining trajectories to distinct attractors based on initial conditions. This mirrors the persistence of divergent interpretations that often result in misunderstandings. Our simulations reveal that differing prior beliefs between interlocutors result in misaligned generative models, leading to stable yet divergent states of understanding when exposed to the same percept. Specifically, native speakers equipped with precise (i.e., overconfident) priors expect inputs to align closely with their internal models, thus struggling with unexpected variations. Conversely, non-native speakers with imprecise (i.e., less confident) priors exhibit a greater capacity to adjust and accommodate unforeseen inputs. Our results underscore the important role of generative models in facilitating mutual understanding (i.e., establishing a shared narrative) and highlight the necessity of accounting for multistable dynamics in dyadic interactions.
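The bistability described here can be reproduced with the classic Lorenz equations in a parameter window where a chaotic attractor coexists with stable fixed points. The regime below is an illustrative assumption, not necessarily the paper's "broken" variant:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classic Lorenz equations. sigma and beta are standard; rho = 24.4 lies in
# the window (roughly 24.06-24.74) where a chaotic attractor coexists with
# two stable fixed points, giving bistability. These values are an
# illustrative assumption, not taken from the paper.
SIGMA, BETA, RHO = 10.0, 8.0 / 3.0, 24.4

def lorenz(t, state):
    x, y, z = state
    return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]

# Two "interlocutors" with different initial conditions (priors): one
# settles onto a stable fixed point, the other wanders chaotically.
fp = np.sqrt(BETA * (RHO - 1.0))          # x = y coordinate of fixed point C+
ic_near_fp = [fp + 0.1, fp, RHO - 1.0]    # converges to C+
ic_far = [0.0, 1.0, 1.05]                 # remains on the chaotic attractor

for ic in (ic_near_fp, ic_far):
    sol = solve_ivp(lorenz, (0.0, 200.0), ic, max_step=0.01)
    print(ic, "->", sol.y[:, -1].round(2))
```

The same percept (equations) with different priors (initial conditions) yields stable yet divergent end states, which is the computational picture of persistent misunderstanding the abstract describes.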

17.
J Neurosci Methods ; 412: 110277, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39245330

ABSTRACT

BACKGROUND: Speech sounds are processed in the human brain through intricate and interconnected cortical and subcortical structures. Two neural signatures, one largely from cortical sources (the mismatch response, MMR) and one largely from subcortical sources (the frequency-following response, FFR), are critical for assessing speech processing, as both show sensitivity to high-level linguistic information. However, the recording prerequisites for the MMR and FFR are distinct, making them difficult to acquire simultaneously. NEW METHOD: Using a new paradigm, our study aims to capture both signals concurrently and test them against the following criteria: (1) replicating the effect that the MMR to a native speech contrast significantly differs from the MMR to a nonnative speech contrast, and (2) demonstrating that FFRs to three speech sounds can be reliably differentiated. RESULTS: Using EEG from 18 adults, we observed a decoding accuracy of 72.2% between the MMRs to native vs. nonnative speech contrasts. A significantly larger native MMR was shown in the expected time window. Similarly, a significant decoding accuracy of 79.6% was found for the FFR. A high stimulus-to-response cross-correlation with a 9 ms lag suggested that the FFR closely tracks speech sounds. COMPARISON WITH EXISTING METHOD(S): These findings demonstrate that our paradigm reliably captures both MMR and FFR concurrently, replicating and extending past research with far fewer trials (MMR: 50 trials; FFR: 200 trials) and a shorter experiment time (12 minutes). CONCLUSIONS: This study paves the way to understanding cortical-subcortical interactions for speech and language processing, with the ultimate goal of developing an assessment tool specific to early development.
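The 9 ms figure is the kind of estimate obtained by locating the peak of the stimulus-to-response cross-correlation; a minimal sketch, assuming stimulus and FFR share one sampling rate:

```python
import numpy as np

def best_lag_ms(stimulus: np.ndarray, response: np.ndarray, fs: float) -> float:
    """Lag (ms) at which the stimulus-to-response cross-correlation peaks.

    A positive lag means the response trails the stimulus, as expected for
    an FFR tracking a speech sound after neural conduction delay.
    """
    xcorr = np.correlate(response, stimulus, mode="full")
    lags = np.arange(-(len(stimulus) - 1), len(response))
    return lags[np.argmax(np.abs(xcorr))] / fs * 1000.0

# Toy check: a response that is a 9 ms delayed copy of the stimulus.
fs = 16000
rng = np.random.default_rng(0)
stim = rng.standard_normal(4000)
resp = np.concatenate([np.zeros(int(0.009 * fs)), stim])[:4000]
print(best_lag_ms(stim, resp, fs))  # ~9.0
```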

18.
Neuroimage ; : 120875, 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39341475

ABSTRACT

In speech perception, low-frequency cortical activity tracks hierarchical linguistic units (e.g., syllables, phrases, and sentences) on top of acoustic features (e.g., speech envelope). Since the fluctuation of speech envelope typically corresponds to the syllabic boundaries, one common interpretation is that the acoustic envelope underlies the extraction of discrete syllables from continuous speech for subsequent linguistic processing. However, it remains unclear whether and how cortical activity encodes linguistic information when the speech envelope does not provide acoustic correlates of syllables. To address the issue, we introduced a frequency-tagging speech stream where the syllabic rhythm was obscured by echoic envelopes and investigated neural encoding of hierarchical linguistic information using electroencephalography (EEG). When listeners attended to the echoic speech, cortical activity showed reliable tracking of syllable, phrase, and sentence levels, among which the higher-level linguistic units elicited more robust neural responses. When attention was diverted from the echoic speech, reliable neural tracking of the syllable level was also observed in contrast to deteriorated neural tracking of the phrase and sentence levels. Further analyses revealed that the envelope aligned with the syllabic rhythm could be recovered from the echoic speech through a neural adaptation model, and the reconstructed envelope yielded higher predictive power for the neural tracking responses than either the original echoic envelope or anechoic envelope. Taken together, these results suggest that neural adaptation and attentional modulation jointly contribute to neural encoding of linguistic information in distorted speech where the syllabic rhythm is obscured by echoes.
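The abstract does not specify the neural adaptation model used to recover the syllable-aligned envelope. One purely illustrative stand-in is divisive adaptation of the Hilbert envelope, which attenuates echo-delayed repetitions of energy relative to fresh syllable onsets; the time constant below is an assumption, not a parameter from the paper:

```python
import numpy as np
from scipy.signal import hilbert

def adapted_envelope(audio: np.ndarray, fs: float, tau: float = 0.1) -> np.ndarray:
    """Envelope after a simple neural-adaptation stand-in.

    The raw Hilbert envelope is divided by a running, exponentially
    weighted history of itself, so echo-delayed repetitions of energy
    are suppressed relative to fresh onsets. tau (seconds) is an
    assumed adaptation time constant, not taken from the paper.
    """
    env = np.abs(hilbert(audio))
    alpha = np.exp(-1.0 / (tau * fs))     # per-sample decay of the history
    history, out = 1e-6, np.empty_like(env)
    for n, e in enumerate(env):
        out[n] = e / (history + 1e-6)     # divisive gain: adapted response
        history = alpha * history + (1 - alpha) * e
    return out
```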

19.
Front Hum Neurosci ; 18: 1441854, 2024.
Article in English | MEDLINE | ID: mdl-39345947

ABSTRACT

Introduction: The aided auditory late latency response (LLR) serves as an objective tool for evaluating auditory cortical maturation following cochlear implantation in children. While the aided LLR is commonly measured using sound-field acoustic stimulation, recording the electrically evoked LLR (eLLR) offers distinct advantages, such as improved stimulus control and the capability for single-electrode stimulation. Hence, this study aimed to compare eLLR responses with single-electrode stimulation in the apical, middle, and basal regions and to evaluate their relationship with speech perception in paediatric cochlear implant (CI) recipients. Method: eLLR responses with single-electrode stimulation were measured in 27 paediatric unilateral CI users with an active recording electrode placed at Cz. The stimuli consisted of 36 ms biphasic pulse trains presented at three electrode sites (apical-E20, middle-E11, and basal-E03). eLLR responses were compared across these electrode sites, and the relationship of speech recognition scores in quiet and age at implantation with eLLR components was evaluated. Results: eLLR responses were detected in 77 of the 81 tested electrodes across all participants (27 apical, 26 middle, and 24 basal). There were no significant differences in P1 or N1 latencies or in P1 amplitude across electrode sites. However, significantly larger N1 and P1-N1 amplitudes were observed for apical compared to basal stimulation. No differences in N1 amplitude were found between middle and apical stimulation, and the P1-N1 amplitude was significantly larger for middle compared to basal electrode stimulation, with no difference between apical and middle electrode stimulation. A moderate positive correlation was present between speech recognition scores in quiet and both N1 and P1-N1 amplitudes for apical stimulation. Age at implantation was negatively correlated with N1 amplitude for apical stimulation and with P1-N1 amplitude for basal stimulation. Discussion: eLLR responses could be elicited in the majority of paediatric CI users across electrode sites. Variations in eLLR responses across electrode sites suggest disparities in auditory cortical maturation. The findings underscore the significance of the N1 biomarker in evaluating higher-order auditory cortical development. Therefore, eLLR with single-electrode stimulation may serve as a valuable tool for assessing post-implantation maturational changes in paediatric populations.

20.
J Clin Med ; 13(16)2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39200929

ABSTRACT

Background/Objectives: Autism spectrum disorder (ASD) is a lifelong neurodevelopmental condition characterised by impairments in social communication, sensory abnormalities, and attentional deficits. Children with ASD often face significant challenges with speech perception and auditory attention, particularly in noisy environments. This study aimed to assess the effectiveness of noise-cancelling Bluetooth earbuds (Nuheara IQbuds Boost) in improving speech perception and auditory attention in children with ASD. Methods: Thirteen children aged 6-13 years diagnosed with ASD participated. Pure-tone audiometry confirmed normal hearing levels. Speech perception in noise was measured using the Consonant-Nucleus-Consonant Word test, and auditory/visual attention was evaluated via the Integrated Visual and Auditory Continuous Performance Task. Participants completed these assessments both with and without the IQbuds in situ. A two-week device trial evaluated classroom listening and communication improvements using the Listening Inventory for Education-Revised (teacher version) questionnaire. Results: Speech perception in noise was significantly poorer for the ASD group compared to typically developing peers and did not change with the IQbuds. Auditory attention, however, significantly improved when the children were using the earbuds. Additionally, classroom listening and communication improved significantly after the two-week device trial. Conclusions: While the noise-cancelling earbuds did not enhance speech perception in noise for children with ASD, they significantly improved auditory attention and classroom listening behaviours. These findings suggest that Bluetooth earbuds could be a viable alternative to remote microphone systems for enhancing auditory attention in children with ASD, offering benefits in classroom settings and potentially minimising the stigma associated with traditional assistive listening devices.
