1 - 20 of 30,071
1.
Article Zh | MEDLINE | ID: mdl-38811173

Objective: To investigate the auditory and speech abilities of children with congenital auditory neuropathy (AN) after cochlear implantation (CI), and to analyze the role of genetic testing in predicting the postoperative outcomes of CI in AN patients. Methods: Fourteen children (9 males and 5 females) diagnosed with AN by an audiological test battery who underwent CI surgery at Xijing Hospital of the Air Force Medical University between 2002 and 2021 were included in this study, with an implantation age of (3.1±1.7) years (mean±standard deviation, hereafter). Preoperative audiological results and deafness gene test results were analyzed. Another 52 children with ordinary sensorineural hearing loss (SNHL) were selected as the control group (30 males and 22 females), with an implantation age of (2.2±0.9) years; demographic factors such as age and gender were matched with those of the AN group. The modified Categories of Auditory Performance (CAP-II) and Speech Intelligibility Rating (SIR) were used to evaluate the development of postoperative auditory and speech abilities in the two groups. The Mandarin Speech Test System was used to test the speech recognition rates for monosyllabic words, disyllabic words, and sentences. MATLAB 2022 was used to analyze the data. Results: Genetic testing of the 14 children with AN showed that 6 cases had OTOF gene mutations, 2 cases (siblings) were confirmed to have TNN gene mutations through whole-exome sequencing, and no clear pathogenic gene mutations were found in the remaining 6 cases. In all subjects the electrodes were implanted into the cochlea smoothly during CI surgery, and there were no postoperative complications. After surgery, all AN children showed improved auditory and speech abilities, but only 64% (9/14) of the AN children with CI had auditory ability scores comparable to the SNHL control group (including the 2 children with TNN gene mutations), while 36% (5/14) had lower scores than the control group. The average speech recognition rate of the two children with TNN gene mutations was 86.5%, and that of two children with OTOF gene mutations was 83.2%. Conclusions: AN children achieved varying degrees of auditory and speech ability after CI, but postoperative outcomes varied greatly: some children achieved results similar to those of ordinary SNHL children, while others performed worse. The postoperative efficacy of CI in the two children with AN caused by pathogenic TNN variants was comparable to that of ordinary SNHL children. Genetic testing has reference value for predicting the postoperative outcome of CI in AN children.


Cochlear Implantation , Cochlear Implants , Hearing Loss, Central , Hearing Loss, Sensorineural , Humans , Male , Female , Child, Preschool , Hearing Loss, Central/genetics , Hearing Loss, Central/surgery , Hearing Loss, Sensorineural/surgery , Treatment Outcome , Child , Speech Perception
2.
Cogn Sci ; 48(5): e13449, 2024 May.
Article En | MEDLINE | ID: mdl-38773754

We recently reported strong, replicable (i.e., replicated) evidence for lexically mediated compensation for coarticulation (LCfC; Luthra et al., 2021), whereby lexical knowledge influences a prelexical process. Critically, evidence for LCfC provides robust support for interactive models of cognition that include top-down feedback and is inconsistent with autonomous models that allow only feedforward processing. McQueen, Jesse, and Mitterer (2023) offer five counter-arguments against our interpretation; we respond to each of those arguments here and conclude that top-down feedback provides the most parsimonious explanation of extant data.


Speech Perception , Humans , Speech Perception/physiology , Cognition , Language
3.
Sci Rep ; 14(1): 11491, 2024 05 20.
Article En | MEDLINE | ID: mdl-38769115

Several attempts at speech brain-computer interfacing (BCI) have been made to decode phonemes, sub-words, words, or sentences using invasive measurements, such as the electrocorticogram (ECoG), during auditory speech perception, overt speech, or imagined (covert) speech. Decoding sentences from covert speech is a challenging task. Sixteen epilepsy patients with intracranially implanted electrodes participated in this study, and ECoGs were recorded during overt and covert speech of eight Japanese sentences, each consisting of three tokens. In particular, a Transformer neural network model was applied to decode text sentences from covert speech, trained using ECoGs obtained during overt speech. We first examined the proposed Transformer model using the same task for training and testing, and then evaluated the model's performance when trained on the overt task and tested on covert speech. The Transformer model trained on covert speech achieved an average token error rate (TER) of 46.6% for decoding covert speech, whereas the model trained on overt speech achieved a TER of 46.3% (p > 0.05; d = 0.07). Therefore, the challenge of collecting training data for covert speech can be addressed using overt speech. Decoding performance for covert speech may improve further as more overt speech data are used for training.
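
As context for the token error rate (TER) reported above: by analogy with word error rate, TER is typically computed as the edit distance between the decoded token sequence and the reference sequence, normalized by the reference length. A minimal sketch follows; the token sequences are placeholders for illustration, not data from the study.

```python
# Minimal sketch: token error rate (TER) as normalized edit distance between
# a reference token sequence and a decoded hypothesis.
# The example tokens below are placeholders, not data from the study.

def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]

def token_error_rate(ref, hyp):
    """TER = (substitutions + insertions + deletions) / reference length."""
    return edit_distance(ref, hyp) / len(ref)

if __name__ == "__main__":
    reference = ["token_a", "token_b", "token_c"]   # hypothetical 3-token sentence
    decoded   = ["token_a", "token_x", "token_c"]
    print(f"TER = {token_error_rate(reference, decoded):.2f}")  # one substitution -> 0.33
```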


Brain-Computer Interfaces , Electrocorticography , Speech , Humans , Female , Male , Adult , Speech/physiology , Speech Perception/physiology , Young Adult , Feasibility Studies , Epilepsy/physiopathology , Neural Networks, Computer , Middle Aged , Adolescent
4.
Cogn Res Princ Implic ; 9(1): 29, 2024 05 12.
Article En | MEDLINE | ID: mdl-38735013

Auditory stimuli that are relevant to a listener have the potential to capture focal attention even when unattended, the listener's own name being a particularly effective stimulus. We report two experiments to test the attention-capturing potential of the listener's own name in normal speech and time-compressed speech. In Experiment 1, 39 participants were tested with a visual word categorization task with uncompressed spoken names as background auditory distractors. Participants' word categorization performance was slower when hearing their own name rather than other names, and in a final test, they were faster at detecting their own name than other names. Experiment 2 used the same task paradigm, but the auditory distractors were time-compressed names. Three compression levels were tested with 25 participants in each condition. Participants' word categorization performance was again slower when hearing their own name than when hearing other names; the slowing was strongest with slight compression and weakest with intense compression. Personally relevant time-compressed speech has the potential to capture attention, but the degree of capture depends on the level of compression. Attention capture by time-compressed speech has practical significance and provides partial evidence for the duplex-mechanism account of auditory distraction.


Attention , Names , Speech Perception , Humans , Attention/physiology , Female , Male , Speech Perception/physiology , Adult , Young Adult , Speech/physiology , Reaction Time/physiology , Acoustic Stimulation
5.
Otol Neurotol ; 45(5): e381-e384, 2024 Jun 01.
Article En | MEDLINE | ID: mdl-38728553

OBJECTIVE: To examine patient preference after stapedotomy versus cochlear implantation in a unique case of a patient with symmetrical profound mixed hearing loss and similar postoperative speech perception improvement. PATIENTS: An adult patient with bilateral symmetrical far advanced otosclerosis and profound mixed hearing loss. INTERVENTION: Stapedotomy in the left ear, cochlear implantation in the right ear. MAIN OUTCOME MEASURE: Performance on behavioral audiometry and subjective report of hearing and intervention preference. RESULTS: The patient successfully underwent left stapedotomy and subsequent right cochlear implantation, per patient preference. Preoperative audiometric characteristics were similar between ears (pure-tone average [PTA]: R 114 dB, L 113 dB; word recognition score [WRS]: 22%). Postprocedural audiometry demonstrated significant improvement after stapedotomy (PTA: 59 dB, WRS: 75%) and after cochlear implantation (PTA: 20 dB, WRS: 60%). The patient subjectively reported a preference for the cochlear implant ear despite having substantial gains from stapedotomy. A nuanced discussion highlighting potentially overlooked benefits of cochlear implants in far advanced otosclerosis is presented. CONCLUSION: In comparison with stapedotomy and hearing aids, cochlear implantation generally permits greater access to sound among patients with far advanced otosclerosis. Though the cochlear implant literature focuses mainly on speech perception outcomes, an underappreciated benefit of cochlear implantation is the high likelihood of achieving "normal" sound levels across the audiogram.


Cochlear Implantation , Otosclerosis , Speech Perception , Stapes Surgery , Humans , Otosclerosis/surgery , Stapes Surgery/methods , Cochlear Implantation/methods , Speech Perception/physiology , Treatment Outcome , Male , Middle Aged , Hearing Loss, Mixed Conductive-Sensorineural/surgery , Audiometry, Pure-Tone , Patient Preference , Female , Adult
6.
Trends Hear ; 28: 23312165241239541, 2024.
Article En | MEDLINE | ID: mdl-38738337

Cochlear synaptopathy, a form of cochlear deafferentation, has been demonstrated in a number of animal species, including non-human primates. Both age and noise exposure contribute to synaptopathy in animal models, indicating that it may be a common type of auditory dysfunction in humans. Temporal bone and auditory physiological data suggest that age and occupational/military noise exposure also lead to synaptopathy in humans. The predicted perceptual consequences of synaptopathy include tinnitus, hyperacusis, and difficulty with speech-in-noise perception. However, confirming the perceptual impacts of this form of cochlear deafferentation presents a particular challenge because synaptopathy can only be confirmed through post-mortem temporal bone analysis and auditory perception is difficult to evaluate in animals. Animal data suggest that deafferentation leads to increased central gain, signs of tinnitus and abnormal loudness perception, and deficits in temporal processing and signal-in-noise detection. If equivalent changes occur in humans following deafferentation, this would be expected to increase the likelihood of developing tinnitus, hyperacusis, and difficulty with speech-in-noise perception. Physiological data from humans is consistent with the hypothesis that deafferentation is associated with increased central gain and a greater likelihood of tinnitus perception, while human data on the relationship between deafferentation and hyperacusis is extremely limited. Many human studies have investigated the relationship between physiological correlates of deafferentation and difficulty with speech-in-noise perception, with mixed findings. A non-linear relationship between deafferentation and speech perception may have contributed to the mixed results. When differences in sample characteristics and study measurements are considered, the findings may be more consistent.


Cochlea , Speech Perception , Tinnitus , Humans , Cochlea/physiopathology , Tinnitus/physiopathology , Tinnitus/diagnosis , Animals , Speech Perception/physiology , Hyperacusis/physiopathology , Noise/adverse effects , Auditory Perception/physiology , Synapses/physiology , Hearing Loss, Noise-Induced/physiopathology , Hearing Loss, Noise-Induced/diagnosis , Loudness Perception
7.
Trends Hear ; 28: 23312165241246596, 2024.
Article En | MEDLINE | ID: mdl-38738341

The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have been recently detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphones and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
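
A temporal response function of the kind described above is commonly estimated as a regularized linear mapping from time-lagged versions of a preprocessed stimulus (here, a rectified speech signal) to the recorded EEG. The sketch below uses synthetic data, half-wave rectification, and plain ridge regression as stand-ins; the study's actual pipeline (auditory nerve model, sampling rates, lag range) will differ.

```python
# Minimal sketch of temporal response function (TRF) estimation with ridge
# regression on synthetic data. Stimulus rectification stands in for the
# auditory-periphery model used in the study; all parameters are illustrative.
import numpy as np

fs = 1000                     # sampling rate (Hz), illustrative
n = 20 * fs                   # 20 s of synthetic data
rng = np.random.default_rng(0)

stim = rng.standard_normal(n)          # stand-in for a speech waveform
stim_rect = np.maximum(stim, 0.0)      # half-wave rectification

lags = np.arange(0, int(0.015 * fs))   # 0-15 ms lags (subcortical range)
true_trf = np.exp(-lags / 5.0) * np.sin(2 * np.pi * lags / 8.0)

# Lagged design matrix: X[t, k] = stim_rect[t - k]
X = np.zeros((n, lags.size))
for k in lags:
    X[k:, k] = stim_rect[:n - k]

eeg = X @ true_trf + 0.5 * rng.standard_normal(n)   # synthetic EEG channel

lam = 1e2                                            # ridge parameter
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ eeg)

print("correlation with true TRF:",
      np.corrcoef(trf_hat, true_trf)[0, 1].round(3))
```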


Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory, Brain Stem , Speech Perception , Humans , Evoked Potentials, Auditory, Brain Stem/physiology , Male , Female , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Young Adult , Auditory Threshold/physiology , Time Factors , Cochlear Nerve/physiology , Healthy Volunteers
8.
J Acoust Soc Am ; 155(5): 2934-2947, 2024 May 01.
Article En | MEDLINE | ID: mdl-38717201

Spatial separation and fundamental frequency (F0) separation are effective cues for improving the intelligibility of target speech in multi-talker scenarios. Previous studies predominantly focused on spatial configurations within the frontal hemifield, overlooking the ipsilateral side and the entire median plane, where localization confusion often occurs. This study investigated the impact of spatial and F0 separation on intelligibility under these underexplored spatial configurations. Speech reception thresholds were measured in three experiments for scenarios involving two to four talkers, either in the ipsilateral horizontal plane or in the entire median plane, using monotonized speech with varying F0s as stimuli. The results revealed that spatial separation in symmetrical positions (front-back symmetry in the ipsilateral horizontal plane, or front-back and up-down symmetry in the median plane) contributes positively to intelligibility. Both target direction and relative target-masker separation influence the masking release attributed to spatial separation. As the number of talkers exceeds two, the masking release from spatial separation diminishes. Nevertheless, F0 separation remains a remarkably effective cue and could even facilitate spatial separation in improving intelligibility. Further analysis indicated that current intelligibility models encounter difficulties in accurately predicting intelligibility in the scenarios explored in this study.


Cues , Perceptual Masking , Sound Localization , Speech Intelligibility , Speech Perception , Humans , Female , Male , Young Adult , Adult , Speech Perception/physiology , Acoustic Stimulation , Auditory Threshold , Speech Acoustics , Speech Reception Threshold Test , Noise
9.
J Acoust Soc Am ; 155(5): 3060-3070, 2024 May 01.
Article En | MEDLINE | ID: mdl-38717210

Speakers tailor their speech to different types of interlocutors. For example, speech directed to voice technology has different acoustic-phonetic characteristics than speech directed to a human. The present study investigates the perceptual consequences of human- and device-directed registers in English. We compare two groups of speakers: participants whose first language is English (L1) and bilingual L1 Mandarin-L2 English talkers. Participants produced short sentences in several conditions: an initial production and a repeat production after a human or device guise indicated either understanding or misunderstanding. In experiment 1, a separate group of L1 English listeners heard these sentences and transcribed the target words. In experiment 2, the same productions were transcribed by an automatic speech recognition (ASR) system. Results show that transcription accuracy was highest for L1 talkers for both human and ASR transcribers. Furthermore, there were no overall differences in transcription accuracy between human- and device-directed speech. Finally, while human listeners showed an intelligibility benefit for coda repair productions, the ASR transcriber did not benefit from these enhancements. Findings are discussed in terms of models of register adaptation, phonetic variation, and human-computer interaction.


Multilingualism , Speech Intelligibility , Speech Perception , Humans , Male , Female , Adult , Young Adult , Speech Acoustics , Phonetics , Speech Recognition Software
10.
J Acoust Soc Am ; 155(5): 2990-3004, 2024 May 01.
Article En | MEDLINE | ID: mdl-38717206

Speakers can place their prosodic prominence on any locations within a sentence, generating focus prosody for listeners to perceive new information. This study aimed to investigate age-related changes in the bottom-up processing of focus perception in Jianghuai Mandarin by clarifying the perceptual cues and the auditory processing abilities involved in the identification of focus locations. Young, middle-aged, and older speakers of Jianghuai Mandarin completed a focus identification task and an auditory perception task. The results showed that increasing age led to a decrease in listeners' accuracy rate in identifying focus locations, with all participants performing the worst when dynamic pitch cues were inaccessible. Auditory processing abilities did not predict focus perception performance in young and middle-aged listeners but accounted significantly for the variance in older adults' performance. These findings suggest that age-related deteriorations in focus perception can be largely attributed to declined auditory processing of perceptual cues. Poor ability to extract frequency modulation cues may be the most important underlying psychoacoustic factor for older adults' difficulties in perceiving focus prosody in Jianghuai Mandarin. The results contribute to our understanding of the bottom-up mechanisms involved in linguistic prosody processing in aging adults, particularly in tonal languages.


Aging , Cues , Speech Perception , Humans , Middle Aged , Aged , Male , Female , Aging/psychology , Aging/physiology , Young Adult , Adult , Speech Perception/physiology , Age Factors , Speech Acoustics , Acoustic Stimulation , Pitch Perception , Language , Voice Quality , Psychoacoustics , Audiometry, Speech
11.
J Acoust Soc Am ; 155(5): 3090-3100, 2024 May 01.
Article En | MEDLINE | ID: mdl-38717212

The perceived level of femininity and masculinity is a prominent property by which a speaker's voice is indexed, and a vocal expression incongruent with the speaker's gender identity can greatly contribute to gender dysphoria. Our understanding of the acoustic cues to the levels of masculinity and femininity perceived by listeners in voices is not well developed, and an increased understanding of them would benefit communication of therapy goals and evaluation in gender-affirming voice training. We developed a voice bank with 132 voices with a range of levels of femininity and masculinity expressed in the voice, as rated by 121 listeners in independent, individually randomized perceptual evaluations. Acoustic models were developed from measures identified as markers of femininity or masculinity in the literature using penalized regression and tenfold cross-validation procedures. The 223 most important acoustic cues explained 89% and 87% of the variance in the perceived level of femininity and masculinity in the evaluation set, respectively. The median fo was confirmed to provide the primary cue, but other acoustic properties must be considered in accurate models of femininity and masculinity perception. The developed models are proposed to afford communication and evaluation of gender-affirming voice training goals and improve voice synthesis efforts.
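
Penalized regression with tenfold cross-validation, as described above, can be sketched with scikit-learn; the synthetic ratings, the number of cues, and the placeholder feature names below are illustrative assumptions, not the study's voice bank data or its exact model.

```python
# Minimal sketch: LASSO-penalized regression with tenfold cross-validation,
# mapping acoustic measures to perceived femininity ratings.
# All data are synthetic placeholders; feature names are illustrative.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_voices, n_cues = 132, 20            # 132 voices as in the study; 20 cues here
feature_names = [f"cue_{i}" for i in range(n_cues)]  # e.g., median f0, formants

X = rng.standard_normal((n_voices, n_cues))
# Synthetic ratings driven mostly by the first cue (stand-in for median f0)
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n_voices)

model = make_pipeline(StandardScaler(), LassoCV(cv=10))
model.fit(X, y)

lasso = model.named_steps["lassocv"]
kept = [(name, round(coef, 2))
        for name, coef in zip(feature_names, lasso.coef_) if abs(coef) > 1e-6]
print("selected cues:", kept)
print("R^2 on the training voices:", round(model.score(X, y), 3))
```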


Cues , Speech Acoustics , Speech Perception , Voice Quality , Humans , Female , Male , Adult , Young Adult , Masculinity , Middle Aged , Femininity , Adolescent , Gender Identity , Acoustics
12.
JASA Express Lett ; 4(5)2024 May 01.
Article En | MEDLINE | ID: mdl-38717469

The perceptual boundary between short and long categories depends on speech rate. We investigated the influence of speech rate on perceptual boundaries for short and long vowel and consonant contrasts by Spanish-English bilingual listeners and English monolinguals. Listeners tended to adapt their perceptual boundaries to speech rates, but the strategy differed between groups, especially for consonants. Understanding the factors that influence auditory processing in this population is essential for developing appropriate assessments of auditory comprehension. These findings have implications for the clinical care of older populations whose ability to rely on spectral and/or temporal information in the auditory signal may decline.


Multilingualism , Speech Perception , Humans , Speech Perception/physiology , Female , Male , Adult , Phonetics , Young Adult
13.
JASA Express Lett ; 4(5)2024 May 01.
Article En | MEDLINE | ID: mdl-38717468

This study evaluated whether adaptive training with time-compressed speech produces an age-dependent improvement in speech recognition in 14 adult cochlear-implant users. The protocol consisted of a pretest, 5 h of training, and a posttest using time-compressed speech and an adaptive procedure. There were significant improvements in time-compressed speech recognition at the posttest session following training (>5% in the average time-compressed speech recognition threshold) but no effects of age. These results are promising for the use of adaptive training in aural rehabilitation strategies for cochlear-implant users across the adult lifespan and possibly using speech signals, such as time-compressed speech, to train temporal processing.
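
The adaptive procedure mentioned above can be illustrated with a simple one-up/one-down staircase that adjusts the degree of time compression after each response; the simulated listener, step size, and stopping rule below are assumptions for illustration only, not the study's training protocol.

```python
# Minimal sketch of an adaptive (one-up/one-down) staircase tracking a
# time-compressed speech recognition threshold. The simulated listener and
# all step sizes are illustrative assumptions, not the study's protocol.
import math
import random

def simulated_listener(compression, true_threshold=0.45, slope=12.0):
    """Probability of a correct response decreases as compression increases."""
    p_correct = 1.0 / (1.0 + math.exp(slope * (compression - true_threshold)))
    return random.random() < p_correct

def run_staircase(n_trials=60, start=0.20, step=0.05):
    compression = start            # fraction of duration removed (0 = uncompressed)
    reversals, last_direction = [], None
    for _ in range(n_trials):
        correct = simulated_listener(compression)
        direction = 1 if correct else -1       # harder after correct, easier after error
        if last_direction is not None and direction != last_direction:
            reversals.append(compression)
        last_direction = direction
        compression = min(max(compression + direction * step, 0.0), 0.9)
    # Threshold estimate: mean compression at the last few reversals
    tail = reversals[-6:]
    return sum(tail) / max(len(tail), 1)

if __name__ == "__main__":
    random.seed(3)
    print(f"estimated compression threshold: {run_staircase():.2f}")
```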


Cochlear Implants , Speech Perception , Humans , Speech Perception/physiology , Aged , Male , Middle Aged , Female , Adult , Aged, 80 and over , Cochlear Implantation/methods , Time Factors
14.
Multisens Res ; 37(2): 125-141, 2024 Apr 03.
Article En | MEDLINE | ID: mdl-38714314

Trust is an aspect critical to human social interaction and research has identified many cues that help in the assimilation of this social trait. Two of these cues are the pitch of the voice and the width-to-height ratio of the face (fWHR). Additionally, research has indicated that the content of a spoken sentence itself has an effect on trustworthiness; a finding that has not yet been brought into multisensory research. The current research aims to investigate previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness extending into multimodality. Further, the mean pitch of the voice and fWHR of the face appeared to be useful indicators in a multimodal setting. These effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.


Face , Trust , Voice , Humans , Female , Voice/physiology , Young Adult , Adult , Face/physiology , Speech Perception/physiology , Pitch Perception/physiology , Facial Recognition/physiology , Cues , Adolescent
15.
Cereb Cortex ; 34(5)2024 May 02.
Article En | MEDLINE | ID: mdl-38715408

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses during speech perception can serve as an index of aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task, which comprised a quiet condition and 4 noisy conditions with different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decline with age under the 4 noisy conditions, but not under the quiet condition. Brain activation in the SNR = 10 dB listening condition successfully predicted individual age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog of Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activation of regions involved in the sensory-motor mapping of sound, especially in noisy conditions, may be a more sensitive measure for age prediction than external behavioral measures.
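
Brain-age predictive modeling of the kind described above is commonly implemented as cross-validated regression from region-wise activation features to chronological age. The sketch below uses synthetic fNIRS-like features and ridge regression as stand-ins for the study's specific region-based pipeline; the number of regions, model, and noise structure are illustrative assumptions.

```python
# Minimal sketch of brain-age prediction: cross-validated ridge regression
# from region-wise activation features to chronological age.
# Synthetic data; the number of regions and the model are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(42)
n_subjects, n_regions = 93, 40          # 93 participants as in the study
age = rng.uniform(20, 70, n_subjects)   # ages 20-70 years

# Synthetic activations: a few regions scale with age, the rest are noise
activation = rng.standard_normal((n_subjects, n_regions))
activation[:, :5] += 0.03 * (age[:, None] - age.mean())

cv = KFold(n_splits=10, shuffle=True, random_state=0)
predicted_age = cross_val_predict(Ridge(alpha=1.0), activation, age, cv=cv)

mae = np.mean(np.abs(predicted_age - age))
r = np.corrcoef(predicted_age, age)[0, 1]
print(f"cross-validated MAE = {mae:.1f} years, r = {r:.2f}")
```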


Aging , Brain , Comprehension , Noise , Spectroscopy, Near-Infrared , Speech Perception , Humans , Adult , Speech Perception/physiology , Male , Female , Spectroscopy, Near-Infrared/methods , Middle Aged , Young Adult , Aged , Comprehension/physiology , Brain/physiology , Brain/diagnostic imaging , Aging/physiology , Brain Mapping/methods , Acoustic Stimulation/methods
16.
Nat Commun ; 15(1): 3692, 2024 May 01.
Article En | MEDLINE | ID: mdl-38693186

Over the last decades, cognitive neuroscience has identified a distributed set of brain regions that are critical for attention. Strong anatomical overlap with brain regions critical for oculomotor processes suggests a joint network for attention and eye movements. However, the role of this shared network in complex, naturalistic environments remains understudied. Here, we investigated eye movements in relation to (un)attended sentences of natural speech. Combining simultaneously recorded eye tracking and magnetoencephalographic data with temporal response functions, we show that gaze tracks attended speech, a phenomenon we termed ocular speech tracking. Ocular speech tracking even differentiates a target from a distractor in a multi-speaker context and is further related to intelligibility. Moreover, we provide evidence for its contribution to neural differences in speech processing, emphasizing the necessity to consider oculomotor activity in future research and in the interpretation of neural differences in auditory cognition.


Attention , Eye Movements , Magnetoencephalography , Speech Perception , Speech , Humans , Attention/physiology , Eye Movements/physiology , Male , Female , Adult , Young Adult , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation , Brain/physiology , Eye-Tracking Technology
17.
JASA Express Lett ; 4(5)2024 May 01.
Article En | MEDLINE | ID: mdl-38804812

Adding to limited research on clear speech in tone languages, productions of Mandarin lexical tones were examined in pentasyllabic sentences. Fourteen participants read sentences imagining a hard-of-hearing addressee or a friend in a casual social setting. Tones produced in clear speech had longer duration, higher intensity, and larger F0 values. This style effect was rarely modulated by tone, preceding tonal context, or syllable position, consistent with an overall signal enhancement strategy. Possible evidence for tone enhancement was observed only in one set of analyses of F0 minimum and F0 range, contrasting tones with low targets against tones with high targets.


Language , Humans , Female , Male , Speech Acoustics , Adult , Young Adult , Speech , Speech Perception/physiology , Phonetics
18.
Trends Hear ; 28: 23312165241256721, 2024.
Article En | MEDLINE | ID: mdl-38773778

This study aimed to investigate the role of hearing aid (HA) usage in language outcomes among preschool children aged 3-5 years with mild bilateral hearing loss (MBHL). Data were retrieved from a total of 52 children with MBHL and 30 children with normal hearing (NH). The association between demographic and audiological factors and language outcomes was examined. Analyses of variance were conducted to compare the language abilities of HA users, non-HA users, and their NH peers. Furthermore, regression analyses were performed to identify significant predictors of language outcomes. Aided better-ear pure-tone average (BEPTA) was significantly correlated with language comprehension scores. Among children with MBHL, those who used HAs outperformed those who did not across all linguistic domains. The language skills of children with MBHL were comparable to those of their peers with NH. The degree of improvement in audibility in terms of aided BEPTA was a significant predictor of language comprehension. It is noteworthy that 50% of the parents expressed reluctance regarding HA use for their children with MBHL. The findings highlight the positive impact of HA usage on language development in this population. Professionals may therefore consider HAs a viable treatment option for children with MBHL, especially when there is a potential risk of language delay due to hearing loss. It was observed that 25% of the children with MBHL had late-onset hearing loss. Consequently, the implementation of preschool screening or a listening performance checklist is recommended to facilitate early detection.


Child Language , Hearing Aids , Hearing Loss, Bilateral , Language Development , Humans , Male , Child, Preschool , Female , Hearing Loss, Bilateral/rehabilitation , Hearing Loss, Bilateral/diagnosis , Hearing Loss, Bilateral/physiopathology , Hearing Loss, Bilateral/psychology , Speech Perception , Case-Control Studies , Correction of Hearing Impairment/instrumentation , Treatment Outcome , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Severity of Illness Index , Comprehension , Hearing , Audiometry, Pure-Tone , Age Factors , Auditory Threshold , Language Tests
19.
Hear Res ; 447: 109023, 2024 Jun.
Article En | MEDLINE | ID: mdl-38733710

Limited auditory input, whether caused by hearing loss or by electrical stimulation through a cochlear implant (CI), can be compensated by the remaining senses. Specifically for CI users, previous studies reported not only improved visual skills, but also altered cortical processing of unisensory visual and auditory stimuli. However, in multisensory scenarios, it is still unclear how auditory deprivation (before implantation) and electrical hearing experience (after implantation) affect cortical audiovisual speech processing. Here, we present a prospective longitudinal electroencephalography (EEG) study which systematically examined the deprivation- and CI-induced alterations of cortical processing of audiovisual words by comparing event-related potentials (ERPs) in postlingually deafened CI users before and after implantation (five weeks and six months of CI use). A group of matched normal-hearing (NH) listeners served as controls. The participants performed a word-identification task with congruent and incongruent audiovisual words, focusing their attention on either the visual (lip movement) or the auditory speech signal. This allowed us to study the (top-down) attention effect on the (bottom-up) sensory cortical processing of audiovisual speech. When compared to the NH listeners, the CI candidates (before implantation) and the CI users (after implantation) exhibited enhanced lipreading abilities and an altered cortical response at the N1 latency range (90-150 ms) that was characterized by a decreased theta oscillation power (4-8 Hz) and a smaller amplitude in the auditory cortex. After implantation, however, the auditory-cortex response gradually increased and developed a stronger intra-modal connectivity. Nevertheless, task efficiency and activation in the visual cortex was significantly modulated in both groups by focusing attention on the visual as compared to the auditory speech signal, with the NH listeners additionally showing an attention-dependent decrease in beta oscillation power (13-30 Hz). In sum, these results suggest remarkable deprivation effects on audiovisual speech processing in the auditory cortex, which partially reverse after implantation. Although even experienced CI users still show distinct audiovisual speech processing compared to NH listeners, pronounced effects of (top-down) direction of attention on (bottom-up) audiovisual processing can be observed in both groups. However, NH listeners but not CI users appear to show enhanced allocation of cognitive resources in visually as compared to auditory attended audiovisual speech conditions, which supports our behavioural observations of poorer lipreading abilities and reduced visual influence on audition in NH listeners as compared to CI users.
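
The theta-band (4-8 Hz) oscillation power contrasted above is typically obtained from band-limited spectral analysis of the EEG epochs. A minimal sketch using Welch's method on a single synthetic epoch is shown below; the sampling rate, epoch length, and signal are illustrative assumptions, not parameters of the study's time-frequency pipeline.

```python
# Minimal sketch: theta-band (4-8 Hz) power from one synthetic EEG epoch
# using Welch's method. Sampling rate, epoch length, and the synthetic
# signal are illustrative assumptions, not parameters of the study.
import numpy as np
from scipy.signal import welch

fs = 500                                  # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)             # 1-s epoch
rng = np.random.default_rng(7)

# Synthetic epoch: 6 Hz theta component embedded in broadband noise
epoch = 2.0 * np.sin(2 * np.pi * 6 * t) + rng.standard_normal(t.size)

freqs, psd = welch(epoch, fs=fs, nperseg=fs // 2)
df = freqs[1] - freqs[0]

theta = (freqs >= 4) & (freqs <= 8)
theta_power = psd[theta].sum() * df        # integrate PSD over 4-8 Hz
total_power = psd.sum() * df
print(f"theta power = {theta_power:.2f}, relative = {theta_power / total_power:.2f}")
```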


Acoustic Stimulation , Attention , Cochlear Implantation , Cochlear Implants , Deafness , Electroencephalography , Persons With Hearing Impairments , Photic Stimulation , Speech Perception , Humans , Male , Female , Middle Aged , Cochlear Implantation/instrumentation , Adult , Prospective Studies , Longitudinal Studies , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Deafness/physiopathology , Deafness/rehabilitation , Deafness/psychology , Case-Control Studies , Aged , Visual Perception , Lipreading , Time Factors , Hearing , Evoked Potentials, Auditory , Auditory Cortex/physiopathology , Evoked Potentials
20.
Cereb Cortex ; 34(13): 84-93, 2024 May 02.
Article En | MEDLINE | ID: mdl-38696598

Multimodal integration is crucial for human interaction, in particular for social communication, which relies on integrating information from various sensory modalities. Recently, a third visual pathway specialized in social perception was proposed, in which the right superior temporal sulcus (STS) plays a key role in processing socially relevant cues and high-level social perception. Importantly, it has also recently been proposed that the left STS contributes to audiovisual integration in speech processing. In this article, we propose that brain areas along the right STS that support multimodal integration for social perception and cognition can be considered homologs of those in the left, language-dominant hemisphere, which sustain multimodal integration of speech and semantic concepts fundamental for social communication. Emphasizing the significance of the left STS in multimodal integration and associated processes, such as multimodal attention to socially relevant stimuli, we underscore its potential relevance for understanding neurodevelopmental conditions characterized by challenges in social communication, such as autism spectrum disorder (ASD). Further research into this left lateral processing stream holds the promise of enhancing our understanding of social communication in both typical development and ASD, which may lead to more effective interventions that could improve the quality of life for individuals with atypical neurodevelopment.


Social Cognition , Speech Perception , Temporal Lobe , Humans , Temporal Lobe/physiology , Temporal Lobe/physiopathology , Speech Perception/physiology , Social Perception , Autistic Disorder/physiopathology , Autistic Disorder/psychology , Functional Laterality/physiology
...