Results 1 - 20 of 38

1.
Mol Ther ; 30(2): 519-533, 2022 02 02.
Article in English | MEDLINE | ID: mdl-34298130

ABSTRACT

Moderate noise exposure may cause acute loss of cochlear synapses without affecting the cochlear hair cells and hearing threshold; thus, it remains "hidden" to standard clinical tests. This cochlear synaptopathy is one of the main pathologies of noise-induced hearing loss (NIHL). There is no effective treatment for NIHL, mainly because of the lack of a proper drug-delivery technique. We hypothesized that local magnetic delivery of gene therapy into the inner ear could be beneficial for NIHL. In this study, we used superparamagnetic iron oxide nanoparticles (SPIONs) and a recombinant adeno-associated virus (AAV) vector (AAV2(quad Y-F)) to deliver brain-derived neurotrophic factor (BDNF) gene therapy into the rat inner ear via minimally invasive magnetic targeting. We found that the magnetic targeting effectively accumulates and distributes the SPION-tagged AAV2(quad Y-F)-BDNF vector into the inner ear. We also found that AAV2(quad Y-F) efficiently transfects cochlear hair cells and enhances BDNF gene expression. Enhanced BDNF gene expression substantially recovers noise-induced BDNF gene downregulation, auditory brainstem response (ABR) wave I amplitude reduction, and synapse loss. These results suggest that magnetic targeting of AAV2(quad Y-F)-mediated BDNF gene therapy could reverse cochlear synaptopathy after NIHL.


Subjects
Brain-Derived Neurotrophic Factor , Dependovirus , Animals , Brain-Derived Neurotrophic Factor/genetics , Brain-Derived Neurotrophic Factor/metabolism , Cochlea/metabolism , Dependovirus/genetics , Evoked Potentials, Auditory, Brain Stem , Genetic Therapy/methods , Hearing , Magnetic Phenomena , Rats
2.
Ear Hear ; 43(6): 1904-1916, 2022.
Article in English | MEDLINE | ID: mdl-35544449

ABSTRACT

OBJECTIVE: Evidence suggests that hearing loss increases the risk of cognitive impairment. However, the relationship between hearing loss and cognition can vary considerably across studies, which may be partially explained by demographic and health factors that are not systematically accounted for in statistical models. DESIGN: Middle-aged to older adult participants (N = 149) completed a web-based assessment that included speech-in-noise (SiN) and self-report measures of hearing, as well as auditory and visual cognitive interference (Stroop) tasks. Correlations between hearing and cognitive interference measures were performed with and without controlling for age, sex, education, depression, anxiety, and self-rated health. RESULTS: The risk of having objective SiN difficulties differed between males and females. All demographic and health variables, except education, influenced the likelihood of reporting hearing difficulties. Small but significant relationships between objective and reported hearing difficulties and the measures of cognitive interference were observed when analyses were controlled for demographic and health factors. Furthermore, when stratifying analyses for males and females, different relationships between hearing and cognitive interference measures were found. Self-reported difficulty with spatial hearing and objective SiN performance were better predictors of inhibitory control in females, whereas self-reported difficulty with speech was a better predictor of inhibitory control in males. This suggests that inhibitory control is associated with different listening abilities in males and females. CONCLUSIONS: The results highlight the importance of controlling for participant characteristics when assessing the relationship between hearing and cognitive interference, which may also be the case for other cognitive functions, but this requires further investigations. Furthermore, this study is the first to show that the relationship between hearing and cognitive interference can be captured using web-based tasks that are simple to implement and administer at home without any assistance, paving the way for future online screening tests assessing the effects of hearing loss on cognition.
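The key analysis described above, relating a hearing measure to a cognitive-interference score while controlling for demographic and health covariates, can be illustrated with a minimal partial-correlation sketch in Python. The DataFrame and column names below are hypothetical placeholders, not the study's actual variables, and categorical covariates such as sex are assumed to be numerically coded.

```python
# Minimal sketch: partial correlation between a hearing measure and a Stroop
# interference score, controlling for demographic/health covariates.
import numpy as np
from scipy import stats

def partial_corr(df, x, y, covars):
    """Correlate the residuals of x and y after regressing out the covariates.

    df is a pandas DataFrame; covariates must be numeric (e.g., sex coded 0/1).
    """
    Z = np.column_stack([np.ones(len(df))] + [df[c].to_numpy(float) for c in covars])

    def residualize(v):
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta

    return stats.pearsonr(residualize(df[x].to_numpy(float)),
                          residualize(df[y].to_numpy(float)))

# Hypothetical usage with placeholder column names:
# r, p = partial_corr(data, "sin_threshold", "stroop_interference",
#                     ["age", "sex", "education", "depression", "anxiety", "health"])
```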


Subjects
Deafness , Hearing Loss , Speech Perception , Middle Aged , Male , Female , Humans , Aged , Noise , Hearing , Auditory Perception , Cognition
3.
Eur J Neurosci ; 54(3): 5016-5037, 2021 08.
Article in English | MEDLINE | ID: mdl-34146363

ABSTRACT

A common concern for individuals with severe-to-profound hearing loss fitted with cochlear implants (CIs) is difficulty following conversations in noisy environments. Recent work has suggested that these difficulties are related to individual differences in brain function, including verbal working memory and the degree of cross-modal reorganization of auditory areas for visual processing. However, the neural basis for these relationships is not fully understood. Here, we investigated neural correlates of visual verbal working memory and sensory plasticity in 14 CI users and age-matched normal-hearing (NH) controls. While we recorded the high-density electroencephalogram (EEG), participants completed a modified Sternberg visual working memory task where sets of letters and numbers were presented visually and then recalled at a later time. Results suggested that CI users had behavioural working memory performance comparable to that of NH controls. However, CI users had more pronounced neural activity during visual stimulus encoding, including stronger visual-evoked activity in auditory and visual cortices, larger modulations of neural oscillations and increased frontotemporal connectivity. In contrast, during memory retention of the characters, CI users had descriptively weaker neural oscillations and significantly lower frontotemporal connectivity. We interpret the differences in neural correlates of visual stimulus processing in CI users through the lens of cross-modal and intramodal plasticity.
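One analysis step implied by this abstract, contrasting oscillatory power between the encoding and retention periods of a Sternberg-style trial, can be sketched as follows. The sampling rate, band edges, window boundaries and random data are assumptions for illustration, not the study's parameters.

```python
# Sketch: alpha-band power during encoding vs. retention windows of a trial,
# via band-pass filtering and the Hilbert transform. All data are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250
n_trials, n_samp = 40, 4 * fs                  # 4-s epochs: encoding then retention
eeg = np.random.randn(n_trials, n_samp)        # one channel, trials x samples

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha_env = np.abs(hilbert(filtfilt(b, a, eeg, axis=1), axis=1)) ** 2

encoding_power = alpha_env[:, :2 * fs].mean()   # first 2 s = stimulus encoding
retention_power = alpha_env[:, 2 * fs:].mean()  # last 2 s = memory retention
```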


Subjects
Auditory Cortex , Cochlear Implantation , Cochlear Implants , Deafness , Hearing , Humans , Memory, Short-Term
4.
Ear Hear ; 37(5): e322-35, 2016.
Article in English | MEDLINE | ID: mdl-27556365

ABSTRACT

OBJECTIVE: To record envelope following responses (EFRs) to monaural amplitude-modulated broadband noise carriers in which amplitude modulation (AM) depth was slowly changed over time and to compare these objective electrophysiological measures to subjective behavioral thresholds in young normal hearing and older subjects. PARTICIPANTS: Three groups of subjects included a young normal-hearing group (YNH 18 to 28 years; pure-tone average = 5 dB HL), a first older group ("O1"; 41 to 62 years; pure-tone average = 19 dB HL), and a second older group ("O2"; 67 to 82 years; pure-tone average = 35 dB HL). Electrophysiology: In condition 1, the AM depth (41 Hz) of a white noise carrier was continuously varied from 2% to 100% (5%/s). EFRs were analyzed as a function of the AM depth. In condition 2, auditory steady-state responses were recorded to fixed AM depths (100%, 75%, 50%, and 25%) at a rate of 41 Hz. Psychophysics: A 3-AFC (alternative forced choice) procedure was used to track the AM depth needed to detect AM at 41 Hz (AM detection). The minimum AM depth capable of eliciting a statistically detectable EFR was defined as the physiological AM detection threshold. RESULTS: Across all ages, the fixed AM depth auditory steady-state response and swept AM EFR yielded similar response amplitudes. Statistically significant correlations (r = 0.48) were observed between behavioral and physiological AM detection thresholds. Older subjects had slightly higher (not significant) behavioral AM detection thresholds than younger subjects. AM detection thresholds did not correlate with age. All groups showed a sigmoidal EFR amplitude versus AM depth function, but the shape of the function differed across groups. The O2 group reached EFR amplitude plateau levels at lower modulation depths than the normal-hearing group and had a narrower neural dynamic range. In the young normal-hearing group, the EFR phase did not differ with AM depth, whereas in the older group, EFR phase showed a consistent decrease with increasing AM depth. The degree of phase change (or phase slope) was significantly correlated to the pure-tone threshold at 4 kHz. CONCLUSIONS: EFRs can be recorded using either the swept modulation depth or the discrete AM depth techniques. Sweep recordings may provide additional valuable information at suprathreshold intensities, including the plateau level, slope, and dynamic range. Older subjects had a reduced neural dynamic range compared with younger subjects, suggesting that aging affects the ability of the auditory system to encode subtle differences in the depth of AM. The phase-slope differences are likely related to differences in low- and high-frequency contributions to the EFR. The behavioral-physiological AM depth threshold relationship was significant but likely too weak to be clinically useful in the present individual subjects, who did not suffer from apparent temporal processing deficits.
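The condition-1 stimulus described here, a white-noise carrier whose 41-Hz AM depth sweeps from 2% to 100% at 5%/s, can be sketched in a few lines of Python. The sampling rate and the envelope normalization are assumptions; only the modulation parameters are taken from the abstract.

```python
# Sketch of the swept-depth AM noise: 41-Hz modulation, depth 2% -> 100% at 5%/s.
import numpy as np

fs = 32000                                         # assumed sampling rate, Hz
fm = 41.0                                          # modulation rate, Hz
depth_start, depth_end, rate = 0.02, 1.00, 0.05    # 2% to 100% at 5%/s
dur = (depth_end - depth_start) / rate             # ~19.6-s sweep

t = np.arange(int(dur * fs)) / fs
depth = depth_start + rate * t                     # linear depth trajectory
carrier = np.random.randn(t.size)                  # broadband (white) noise carrier
envelope = (1.0 + depth * np.sin(2 * np.pi * fm * t)) / (1.0 + depth)  # normalized AM
stimulus = envelope * carrier
```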


Subjects
Aging/physiology , Evoked Potentials, Auditory, Brain Stem/physiology , Hearing/physiology , Adolescent , Adult , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Middle Aged , Young Adult
5.
Neuroimage ; 87: 356-62, 2014 Feb 15.
Article in English | MEDLINE | ID: mdl-24188814

ABSTRACT

There have been a number of studies suggesting that oscillatory alpha activity (~10 Hz) plays a pivotal role in attention by gating information flow to relevant sensory regions. The vast majority of these studies have looked at shifts of attention in the spatial domain and only in a single modality (often visual or sensorimotor). In the current magnetoencephalography (MEG) study, we investigated the role of alpha activity in the suppression of a distracting modality stream. We used a cross-modal attention task where visual cues indicated whether participants had to judge a visual orientation or discriminate the auditory pitch of an upcoming target. The visual and auditory targets were presented either simultaneously or alone, allowing us to behaviorally gauge the "cost" of having a distractor present in each modality. We found that the preparation for visual discrimination (relative to pitch discrimination) resulted in a decrease of alpha power (9-11 Hz) in the early visual cortex, with a concomitant increase in alpha/beta power (14-16 Hz) in the supramarginal gyrus, a region suggested to play a vital role in short-term storage of pitch information (Gaab et al., 2003). On a trial-by-trial basis, alpha power over the visual areas was significantly correlated with increased visual discrimination times, whereas alpha power over the precuneus and right superior temporal gyrus was correlated with increased auditory discrimination times. However, these correlations were only significant when the targets were paired with distractors. Our work adds to increasing evidence that the top-down (i.e., attentional) modulation of alpha activity is a mechanism by which stimulus processing can be gated within the cortex. Here, we find that this phenomenon is not restricted to the domain of spatial attention and can be generalized to sensory modalities other than vision.
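A minimal sketch of the kind of single-trial alpha-power estimate that could be correlated with discrimination times follows; the sampling rate, trial length, alpha band and data are placeholders rather than the study's actual pipeline.

```python
# Sketch: single-trial alpha (9-11 Hz) power from one sensor time series via
# Welch's method; per-trial values can then be correlated with reaction times.
import numpy as np
from scipy.signal import welch

fs = 1000                                   # assumed sampling rate, Hz
trial = np.random.randn(2 * fs)             # placeholder for one 2-s trial

f, psd = welch(trial, fs=fs, nperseg=fs)    # 1-Hz frequency resolution
alpha = (f >= 9) & (f <= 11)
alpha_power = psd[alpha].mean()

# np.corrcoef(alpha_power_per_trial, reaction_times) would then give the
# trial-by-trial association described in the abstract.
```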


Subjects
Attention/physiology , Auditory Perception/physiology , Brain/physiology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Alpha Rhythm , Cues , Female , Humans , Magnetoencephalography , Male , Photic Stimulation , Reaction Time/physiology , Young Adult
6.
Brain ; 136(Pt 5): 1626-38, 2013 May.
Article in English | MEDLINE | ID: mdl-23503620

ABSTRACT

Abnormal auditory adaptation is a standard clinical tool for diagnosing auditory nerve disorders due to acoustic neuromas. In the present study we investigated auditory adaptation in auditory neuropathy owing to disordered function of inner hair cell ribbon synapses (temperature-sensitive auditory neuropathy) or auditory nerve fibres. Subjects were tested when afebrile for (i) psychophysical loudness adaptation to comfortably-loud sustained tones; and (ii) physiological adaptation of auditory brainstem responses to clicks as a function of their position in brief 20-click stimulus trains (#1, 2, 3 … 20). Results were compared with normal hearing listeners and other forms of hearing impairment. Subjects with ribbon synapse disorder had abnormally increased magnitude of loudness adaptation to both low (250 Hz) and high (8000 Hz) frequency tones. Subjects with auditory nerve disorders had normal loudness adaptation to low frequency tones; all but one had abnormal adaptation to high frequency tones. Adaptation was both more rapid and of greater magnitude in ribbon synapse than in auditory nerve disorders. Auditory brainstem response measures of adaptation in ribbon synapse disorder showed Wave V to the first click in the train to be abnormal both in latency and amplitude, and these abnormalities increased in magnitude or Wave V was absent to subsequent clicks. In contrast, auditory brainstem responses in four of the five subjects with neural disorders were absent to every click in the train. The fifth subject had normal latency and abnormally reduced amplitude of Wave V to the first click and abnormal or absent responses to subsequent clicks. Thus, dysfunction of both synaptic transmission and auditory neural function can be associated with abnormal loudness adaptation and the magnitude of the adaptation is significantly greater with ribbon synapse than neural disorders.
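A minimal sketch of how responses might be averaged by click position (1 to 20) within the trains to track Wave V adaptation is shown below; the array shapes, analysis window and crude peak-picking are assumptions for illustration only.

```python
# Sketch: average ABR epochs by click position within 20-click trains,
# then take a crude Wave V amplitude/latency estimate per position.
import numpy as np

fs = 16000                                            # assumed sampling rate, Hz
n_trains, n_clicks, n_samp = 200, 20, int(0.012 * fs) # 12-ms window per click

# epochs[i, j, :] = response to click j of train i (placeholder data)
epochs = np.random.randn(n_trains, n_clicks, n_samp)

per_position = epochs.mean(axis=0)                    # (20, n_samp) grand averages
wave_v_amp = per_position.max(axis=1)                 # crude amplitude per position
wave_v_latency_ms = per_position.argmax(axis=1) / fs * 1000
```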


Subjects
Acoustic Stimulation/methods , Adaptation, Physiological/physiology , Cochlear Nerve/pathology , Hair Cells, Auditory, Inner/physiology , Hyperacusis/physiopathology , Adolescent , Adult , Aged , Auditory Perception/physiology , Child , Cochlear Nerve/physiology , Female , Hearing Disorders/diagnosis , Hearing Disorders/physiopathology , Humans , Hyperacusis/diagnosis , Loudness Perception/physiology , Male , Middle Aged , Young Adult
7.
Audiol Res ; 14(4): 611-624, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-39051196

ABSTRACT

BACKGROUND: A cochlear implant (CI) enables deaf people to understand speech, but due to technical restrictions, users face great limitations in noisy conditions. Music training has been shown to augment shared auditory and cognitive neural networks for processing speech and music and to improve auditory-motor coupling, which benefits speech perception in noisy listening conditions. These are promising prerequisites for studying multi-modal neurologic music training (NMT) for speech-in-noise (SIN) perception in adult cochlear implant (CI) users. Furthermore, a better understanding of the neurophysiological correlates of performing working memory (WM) and SIN tasks after multi-modal music training with CI users may provide clinicians with a better understanding of optimal rehabilitation. METHODS: Within 3 months, 81 post-lingually deafened adult CI recipients will undergo electrophysiological recordings and four weeks of neurologic music therapy multi-modal training, randomly assigned to one of three training focuses (pitch, rhythm, and timbre). Pre- and post-tests will analyze behavioral outcomes and apply a novel electrophysiological measurement approach that includes neural tracking to speech and alpha oscillation modulations during the sentence-final-word-identification-and-recall test (SWIR-EEG). EXPECTED OUTCOME: Short-term multi-modal music training will enhance WM and SIN performance in post-lingually deafened adult CI recipients and will be reflected in greater neural tracking and alpha oscillation modulations in prefrontal areas. Prospectively, outcomes could contribute to understanding the relationship between cognitive functioning and SIN beyond the technical deficits of the CI. Targeted clinical application of music training for post-lingually deafened adult CI users could then be realized to significantly improve SIN perception and positively impact quality of life.

8.
Brain Sci ; 14(1)2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38275515

ABSTRACT

Tinnitus is a prevalent hearing-loss deficit manifested as a phantom (internally generated by the brain) sound that is heard as a high-frequency tone in the majority of afflicted persons. Chronic tinnitus is debilitating, leading to distress, sleep deprivation, anxiety, and even suicidal thoughts. It has been theorized that, in the majority of afflicted persons, tinnitus can be attributed to the loss of high-frequency input from the cochlea to the auditory cortex, known as deafferentation. Deafferentation due to hearing loss develops with aging, which progressively causes tonotopic regions that coded for the lost high frequencies to synchronize, leading to a phantom high-frequency sound sensation. Approaches to tinnitus remediation that have demonstrated promise include inhibitory drugs, the use of tinnitus-specific frequency notching to increase lateral inhibition to the deafferented neurons, and multisensory approaches (auditory-motor and audiovisual) that work by coupling multisensory stimulation to the deafferented neural populations. The goal of this review is to put forward a theoretical framework of a multisensory approach to remedy tinnitus. Our theoretical framework posits that, due to vision's modulatory (inhibitory, excitatory) influence on the auditory pathway, prolonged engagement in audiovisual activity, especially during daily discourse, as opposed to auditory-only activity/discourse, can progressively reorganize deafferented neural populations, resulting in reduced synchrony of the deafferented neurons and a reduction in tinnitus severity over time.

9.
Sci Rep ; 13(1): 15849, 2023 09 22.
Article in English | MEDLINE | ID: mdl-37740012

ABSTRACT

Language comprehension is a complex process involving an extensive brain network. Brain regions responsible for prosodic processing have been studied in adults; however, much less is known about the neural bases of prosodic processing in children. Using magnetoencephalography (MEG), we mapped regions supporting speech envelope tracking (a marker of prosodic processing) in 80 typically developing children, ages 4-18 years, completing a stories listening paradigm. Neuromagnetic signals coherent with the speech envelope were localized using dynamic imaging of coherent sources (DICS). Across the group, we observed coherence in bilateral perisylvian cortex. We observed age-related increases in coherence to the speech envelope in the right superior temporal gyrus (r = 0.31, df = 78, p = 0.0047) and primary auditory cortex (r = 0.27, df = 78, p = 0.016); age-related decreases in coherence to the speech envelope were observed in the left superior temporal gyrus (r = - 0.25, df = 78, p = 0.026). This pattern may indicate a refinement of the networks responsible for prosodic processing during development, where language areas in the right hemisphere become increasingly specialized for prosodic processing. Altogether, these results reveal a distinct neurodevelopmental trajectory for the processing of prosodic cues, highlighting the presence of supportive language functions in the right hemisphere. Findings from this dataset of typically developing children may serve as a potential reference timeline for assessing children with neurodevelopmental hearing and speech disorders.
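A minimal sketch of speech-envelope coherence for a single source time course, followed by a Pearson correlation of coherence with age across participants, is shown below. The sampling rate, frequency band and all data are placeholders; the study itself used DICS beamforming in source space rather than this simplified calculation.

```python
# Sketch: magnitude-squared coherence between a speech envelope and one neural
# time course, then a correlation of per-subject coherence with age.
import numpy as np
from scipy.signal import coherence, hilbert
from scipy.stats import pearsonr

fs = 600                                    # assumed sampling rate, Hz
speech = np.random.randn(60 * fs)           # placeholder audio-band signal
envelope = np.abs(hilbert(speech))          # broadband speech envelope
source_tc = np.random.randn(60 * fs)        # placeholder source time course

f, coh = coherence(envelope, source_tc, fs=fs, nperseg=4 * fs)
band = (f >= 0.5) & (f <= 4)                # assumed prosodic-rate band
env_coherence = coh[band].mean()

# Across participants (hypothetical values):
ages = np.array([5.0, 8.0, 11.0, 14.0, 17.0])
coh_by_subject = np.array([0.02, 0.03, 0.05, 0.06, 0.08])
r, p = pearsonr(ages, coh_by_subject)       # the kind of age association reported
```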


Subjects
Brain , Cerebral Cortex , Adult , Humans , Child , Cues , Hearing , Language
10.
PLoS One ; 18(9): e0291600, 2023.
Article in English | MEDLINE | ID: mdl-37713394

ABSTRACT

BACKGROUND: The cochlear implant (CI) has proven to be a successful treatment for patients with severe-to-profound sensorineural hearing loss; however, outcome variance exists. We sought to evaluate particular mutations discovered in previously established sensory and neural partition genes and compare post-operative CI outcomes. MATERIALS AND METHODS: Utilizing a prospective cohort study design, blood samples collected from adult patients with non-syndromic hearing loss undergoing CI were tested for 54 genes of interest with high-throughput sequencing. Patients were categorized as having a pathogenic variant in the sensory partition, a pathogenic variant in the neural partition, a pathogenic variant in both sensory and neural partitions, or no variant identified. Speech perception performance was assessed pre-operatively and 12 months post-operatively. Performance measures were compared to genetic mutation and variant status utilizing a Wilcoxon rank sum test, with P < 0.05 considered statistically significant. RESULTS: Thirty-six cochlear implant patients underwent genetic testing and speech understanding measurements. Of the 54 genes that were interrogated, three patients (8.3%) demonstrated a pathogenic mutation in the neural partition (within the TMPRSS3 gene) and one patient (2.8%) demonstrated a pathogenic mutation in the sensory partition (within the POU4F3 gene). In addition, 3 patients (8.3%) had an isolated neural partition variant of unknown significance (VUS), 5 patients (13.9%) had an isolated sensory partition VUS, 1 patient (2.8%) had a variant in both neural and sensory partitions, and 23 patients (63.9%) had no mutation or variant identified. There was no statistically significant difference in speech perception scores between patients with sensory or neural partition pathogenic mutations or VUS. Variable performance was found among patients with TMPRSS3 gene mutations. CONCLUSION: The impact of genetic mutations on post-operative outcomes in CI patients was heterogeneous. Future research and dissemination of mutations and subsequent CI performance are warranted to elucidate the exact mutations within target genes that provide the best non-invasive prognostic capability.
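The group comparison described above reduces to a Wilcoxon rank-sum test on speech scores; a minimal sketch with made-up scores follows, purely to illustrate the statistic used.

```python
# Sketch: Wilcoxon rank-sum test comparing speech scores by variant status.
# Scores below are invented placeholders, not the study's data.
import numpy as np
from scipy.stats import ranksums

scores_variant = np.array([42.0, 55.0, 61.0])                 # hypothetical % correct
scores_no_variant = np.array([48.0, 72.0, 65.0, 58.0, 80.0])

stat, p = ranksums(scores_variant, scores_no_variant)
print(f"rank-sum statistic = {stat:.2f}, p = {p:.3f}")        # P < 0.05 = significant
```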


Subjects
Cochlear Implantation , Cochlear Implants , Humans , Adult , Prospective Studies , Mutation , Genetic Testing , Membrane Proteins , Neoplasm Proteins , Serine Endopeptidases/genetics
11.
Front Hum Neurosci ; 16: 1043499, 2022.
Article in English | MEDLINE | ID: mdl-36419642

ABSTRACT

There is a weak relationship between clinical and self-reported speech perception outcomes in cochlear implant (CI) listeners. Such poor correspondence may be due to differences in clinical and "real-world" listening environments and stimuli. Speech in the real world is often accompanied by visual cues and background environmental noise, and is generally in a conversational context, all factors that could affect listening demand. Thus, our objectives were to determine if brain responses to naturalistic speech could index speech perception and listening demand in CI users. Accordingly, we recorded high-density electroencephalogram (EEG) while CI users listened to/watched a naturalistic stimulus (i.e., the television show "The Office"). We used continuous EEG to quantify "speech neural tracking" (i.e., TRFs, temporal response functions) to the show's soundtrack and 8-12 Hz (alpha) brain rhythms commonly related to listening effort. Background noise at three different signal-to-noise ratios (SNRs), +5, +10, and +15 dB, was presented to vary the difficulty of following the television show, mimicking a natural noisy environment. The task also included an audio-only (no video) condition. After each condition, participants subjectively rated listening demand and the degree of words and conversations they felt they understood. Fifteen CI users reported progressively higher degrees of listening demand and understood fewer words and conversations with increasing background noise. Listening demand and conversation understanding in the audio-only condition were comparable to those of the highest noise condition (+5 dB). Increasing background noise affected speech neural tracking at a group level, in addition to eliciting strong individual differences. Mixed-effect modeling showed that listening demand and conversation understanding were correlated with early cortical speech tracking, such that high demand and low conversation understanding occurred with lower-amplitude TRFs. In the high-noise condition, greater listening demand was negatively correlated with parietal alpha power, where higher demand was related to lower alpha power. No significant correlations were observed between TRF/alpha measures and clinical speech perception scores. These results are similar to previous findings showing little relationship between clinical speech perception and quality of life in CI users. However, physiological responses to complex natural speech may provide an objective measure of aspects of quality-of-life measures like self-perceived listening demand.
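The "speech neural tracking" idea referred to above can be illustrated with a minimal TRF estimate: ridge regression of EEG on time-lagged copies of the speech envelope. The sampling rate, lag range, regularization strength and random data are assumptions, not the study's actual pipeline.

```python
# Sketch: temporal response function (TRF) by ridge regression of one EEG channel
# on lagged copies of the speech envelope. All signals are placeholders.
import numpy as np

fs = 128                                    # assumed EEG sampling rate, Hz
n = 60 * fs
envelope = np.random.randn(n)               # placeholder speech envelope
eeg = np.random.randn(n)                    # placeholder single-channel EEG

lags = np.arange(0, int(0.4 * fs))          # 0-400 ms lags
X = np.column_stack([np.roll(envelope, L) for L in lags])
X[:lags.max(), :] = 0                       # zero out wrapped-around samples

lam = 1e2                                   # ridge parameter (assumed)
trf = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)
# trf[k] is the TRF weight at lag k / fs seconds.
```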

12.
Sci Rep ; 12(1): 17749, 2022 10 22.
Article in English | MEDLINE | ID: mdl-36273017

ABSTRACT

Deaf individuals who use a cochlear implant (CI) have remarkably different outcomes for auditory speech communication ability. One factor assumed to affect CI outcomes is visual crossmodal plasticity in auditory cortex, where deprived auditory regions begin to support non-auditory functions such as vision. Some previous research has viewed crossmodal plasticity as harmful for speech outcomes in CI users if it interferes with sound processing, while other work has demonstrated that plasticity related to visual language may be beneficial for speech recovery. To clarify, we used electroencephalography (EEG) to measure brain responses to a partial face speaking a silent single-syllable word (visual language) in 15 CI users and 13 age-matched typical-hearing controls. We used source analysis of EEG activity to measure crossmodal visual responses in auditory cortex and then compared them to CI users' speech-in-noise listening ability. CI users' brain response to the onset of the video stimulus (face) was larger than that of controls in left auditory cortex, consistent with crossmodal activation after deafness. CI users also produced a mixture of alpha (8-12 Hz) synchronization and desynchronization in auditory cortex while watching lip movement, whereas controls showed desynchronization. CI users with higher speech scores had stronger crossmodal responses in auditory cortex to the onset of the video, but those with lower speech scores had increases in alpha power during lip movement in auditory areas. Therefore, evidence of crossmodal reorganization in CI users does not necessarily predict poor speech outcomes, and differences in crossmodal activation during lip reading may instead relate to the strategies that CI users adopt in audiovisual speech communication.


Subjects
Cochlear Implantation , Cochlear Implants , Deafness , Speech Perception , Humans , Speech , Deafness/surgery , Speech Perception/physiology
13.
PLoS One ; 16(7): e0254162, 2021.
Article in English | MEDLINE | ID: mdl-34242290

ABSTRACT

Listening to speech in noise is effortful for individuals with hearing loss, even if they have received a hearing prosthesis such as a hearing aid or cochlear implant (CI). At present, little is known about the neural functions that support listening effort. One form of neural activity that has been suggested to reflect listening effort is the power of 8-12 Hz (alpha) oscillations measured by electroencephalography (EEG). Alpha power in two cortical regions, the left inferior frontal gyrus (IFG) and parietal cortex, has been associated with effortful listening, but these relationships have not been examined in the same listeners. Further, few studies have investigated neural correlates of effort in individuals with cochlear implants. Here we tested 16 CI users in a novel effort-focused speech-in-noise listening paradigm and confirmed a relationship between alpha power and self-reported effort ratings in parietal regions, but not left IFG. The parietal relationship was not linear but quadratic, with alpha power comparatively lower when effort ratings were at the top and bottom of the effort scale, and higher when effort ratings were in the middle of the scale. Results are discussed in terms of the cognitive systems that are engaged in difficult listening situations, and the implications for clinical translation.
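The quadratic (inverted-U-like) relationship reported above can be tested with a second-order polynomial fit, sketched below with invented ratings and power values for illustration only.

```python
# Sketch: quadratic fit of alpha power against self-reported effort ratings.
# Data are made up; a negative quadratic coefficient indicates a mid-scale peak.
import numpy as np

effort = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)   # hypothetical ratings
alpha_power = np.array([0.8, 1.1, 1.6, 1.9, 2.0, 1.8, 1.5, 1.0, 0.7])

coeffs = np.polyfit(effort, alpha_power, deg=2)   # [a, b, c] for a*x**2 + b*x + c
predicted = np.polyval(coeffs, effort)
print("quadratic coefficient:", coeffs[0])
```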


Subjects
Cochlear Implants , Speech , Adult , Auditory Perception , Humans , Male , Middle Aged , Noise
14.
Front Neurosci ; 14: 124, 2020.
Article in English | MEDLINE | ID: mdl-32132897

ABSTRACT

OBJECTIVES: The ability to understand speech is highly variable in people with cochlear implants (CIs) and, to date, there are no objective measures that identify the root of this discrepancy. However, behavioral measures of temporal processing such as the temporal modulation transfer function (TMTF) have previously been found to be related to vowel and consonant identification in CI users. The acoustic change complex (ACC) is a cortical auditory-evoked potential response that can be elicited by a "change" in an ongoing stimulus. In this study, the ACC elicited by an amplitude modulation (AM) change was related to measures of speech perception as well as the AM detection threshold in CI users. METHODS: Ten CI users (mean age: 50 years) participated in this study. All subjects participated in behavioral tests that included both speech and amplitude modulation detection to obtain a TMTF. CI users were categorized as "good" (n = 6) or "poor" (n = 4) based on their speech-in-noise score (<50%). 64-channel electroencephalographic recordings were conducted while CI users passively listened to AM change sounds that were presented in a free-field setting. The AM change stimulus was white noise with four different AM rates (4, 40, 100, and 300 Hz). RESULTS: Behavioral results show that AM detection thresholds in CI users were higher compared to the normal-hearing (NH) group for all AM rates. The electrophysiological data suggest that N1 responses were significantly decreased in amplitude and their latencies were increased in CI users compared to NH controls. In addition, the N1 latencies for the poor CI performers were delayed compared to the good CI performers. The N1 latency for 40 Hz AM was correlated with various speech perception measures. CONCLUSION: Our data suggest that the ACC to AM change provides an objective index of speech perception abilities that can be used to explain some of the variation in speech perception observed among CI users.

15.
Sci Rep ; 10(1): 6141, 2020 04 09.
Article in English | MEDLINE | ID: mdl-32273536

ABSTRACT

Hearing impairment disrupts processes of selective attention that help listeners attend to one sound source over competing sounds in the environment. Hearing prostheses (hearing aids and cochlear implants, CIs) do not fully remedy these issues. In normal hearing, mechanisms of selective attention arise through the facilitation and suppression of neural activity that represents sound sources. However, it is unclear how hearing impairment affects these neural processes, which is key to understanding why listening difficulty remains. Here, severely impaired listeners treated with a CI, and age-matched normal-hearing controls, attended to one of two identical but spatially separated talkers while multichannel EEG was recorded. Whereas neural representations of attended and ignored speech were differentiated at early (~150 ms) cortical processing stages in controls, differentiation of talker representations only occurred later (~250 ms) in CI users. CI users, but not controls, also showed evidence for spatial suppression of the ignored talker through lateralized alpha (7-14 Hz) oscillations. However, CI users' perceptual performance was only predicted by early-stage talker differentiation. We conclude that multi-talker listening difficulty remains for impaired listeners due to deficits in early-stage separation of cortical speech representations, despite neural evidence that they use spatial information to guide selective attention.
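A minimal sketch of an alpha lateralization index, one common way to quantify the lateralized alpha oscillations mentioned above, follows; the electrode groupings and values are placeholders and are not the study's actual measure.

```python
# Sketch: alpha lateralization index contrasting left- vs. right-hemisphere
# parietal alpha power. Values are invented placeholders.
import numpy as np

alpha_left = np.array([1.2, 0.9, 1.1])      # e.g., left parietal channels (made up)
alpha_right = np.array([1.6, 1.4, 1.5])     # e.g., right parietal channels (made up)

ali = (alpha_right.mean() - alpha_left.mean()) / (alpha_right.mean() + alpha_left.mean())
# Positive values indicate relatively stronger right-hemisphere alpha, the kind of
# asymmetry taken as evidence of spatial suppression of one talker.
```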


Subjects
Cerebral Cortex/physiopathology , Hearing Loss/physiopathology , Speech Perception/physiology , Speech/physiology , Adolescent , Adult , Aged , Attention/physiology , Case-Control Studies , Cerebral Cortex/physiology , Cochlear Implants , Electroencephalography , Hearing Loss/psychology , Hearing Loss/therapy , Humans , Male , Middle Aged , Young Adult
16.
Sci Rep ; 9(1): 11278, 2019 08 02.
Article in English | MEDLINE | ID: mdl-31375712

ABSTRACT

Listening in a noisy environment is challenging for individuals with normal hearing and can be a significant burden for those with hearing impairment. The extent to which this burden is alleviated by a hearing device is a major, unresolved issue for rehabilitation. Here, we found that adult users of cochlear implants (CIs) self-reported listening effort during a speech-in-noise task that was positively related to alpha oscillatory activity in the left inferior frontal cortex (canonical Broca's area) and inversely related to speech envelope coherence in the 2-5 Hz range originating in the superior temporal plane encompassing auditory cortex. Left frontal cortex coherence in the 2-5 Hz range also predicted speech-in-noise identification. These data demonstrate that neural oscillations predict both speech perception ability in noise and listening effort.


Subjects
Auditory Cortex/physiology , Broca Area/physiology , Frontal Lobe/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Aged , Auditory Perception/physiology , Brain Mapping , Cochlear Implantation/methods , Female , Hearing Loss/diagnostic imaging , Hearing Loss/physiopathology , Hearing Tests , Humans , Male , Middle Aged , Noise/adverse effects
17.
Clin Neurophysiol ; 119(9): 2111-24, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18635394

ABSTRACT

OBJECTIVE: We examined auditory cortical potentials in normal-hearing subjects to spectral changes in continuous low- and high-frequency pure tones. METHODS: Cortical potentials were recorded to increments of frequency from continuous 250 or 4000 Hz tones. The magnitude of change was random and varied from 0% to 50% above the base frequency. RESULTS: Potentials consisted of the N100, P200, and a slow negative wave (SN). N100 amplitude, latency, and dipole magnitude with frequency increments were significantly greater for low compared to high frequencies. Dipole amplitudes were greater in the right than the left hemisphere for both base frequencies. The SN amplitude to frequency changes between 4% and 50% was not significantly related to the magnitude of spectral change. CONCLUSIONS: Modulation of N100 amplitude and latency elicited by spectral change is more pronounced for low compared to high frequencies. SIGNIFICANCE: These data provide electrophysiological evidence that central processing of spectral changes in the cortex differs for low and high frequencies. Some of these differences may be related to both temporal- and spectral-based coding at the auditory periphery. Central representation of frequency change may be related to the different temporal windows of integration across frequencies.
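The frequency-increment stimulus described here, a continuous tone that steps up by a random 0-50% while remaining sample-continuous in phase, can be sketched as follows; the sampling rate and segment durations are assumptions.

```python
# Sketch: continuous 250-Hz tone with a random 0-50% frequency increment at the
# segment boundary, keeping the sampled phase continuous across the change.
import numpy as np

fs = 44100                                       # assumed sampling rate, Hz
base_f = 250.0
increment = np.random.uniform(0.0, 0.5)          # random 0-50% frequency increase
seg = int(1.0 * fs)                              # assumed 1 s per segment

t1 = np.arange(seg) / fs
t2 = np.arange(seg) / fs
phase0 = 2 * np.pi * base_f * seg / fs           # phase where the first segment ends
tone = np.concatenate([
    np.sin(2 * np.pi * base_f * t1),
    np.sin(phase0 + 2 * np.pi * base_f * (1 + increment) * t2),
])
```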


Subjects
Auditory Cortex/physiology , Auditory Perception/physiology , Auditory Threshold/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation/methods , Adult , Analysis of Variance , Brain Mapping , Female , Functional Laterality , Humans , Magnetic Resonance Imaging/methods , Male , Psychophysics , Reaction Time/physiology
18.
Front Hum Neurosci ; 11: 88, 2017.
Article in English | MEDLINE | ID: mdl-28286478

ABSTRACT

Understanding speech in noise (SiN) is a complex task involving sensory encoding and cognitive resources including working memory and attention. Previous work has shown that brain oscillations, particularly alpha rhythms (8-12 Hz) play important roles in sensory processes involving working memory and attention. However, no previous study has examined brain oscillations during performance of a continuous speech perception test. The aim of this study was to measure cortical alpha during attentive listening in a commonly used SiN task (digits-in-noise, DiN) to better understand the neural processes associated with "top-down" cognitive processing in adverse listening environments. We recruited 14 normal hearing (NH) young adults. DiN speech reception threshold (SRT) was measured in an initial behavioral experiment. EEG activity was then collected: (i) while performing the DiN near SRT; and (ii) while attending to a silent, close-caption video during presentation of identical digit stimuli that the participant was instructed to ignore. Three main results were obtained: (1) during attentive ("active") listening to the DiN, a number of distinct neural oscillations were observed (mainly alpha with some beta; 15-30 Hz). No oscillations were observed during attention to the video ("passive" listening); (2) overall, alpha event-related synchronization (ERS) of central/parietal sources were observed during active listening when data were grand averaged across all participants. In some participants, a smaller magnitude alpha event-related desynchronization (ERD), originating in temporal regions, was observed; and (3) when individual EEG trials were sorted according to correct and incorrect digit identification, the temporal alpha ERD was consistently greater on correctly identified trials. No such consistency was observed with the central/parietal alpha ERS. These data demonstrate that changes in alpha activity are specific to listening conditions. To our knowledge, this is the first report that shows almost no brain oscillatory changes during a passive task compared to an active task in any sensory modality. Temporal alpha ERD was related to correct digit identification.
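A minimal sketch of the ERD/ERS computation described above, alpha power expressed as percent change from a pre-stimulus baseline and split by response accuracy, is shown below. All arrays, the baseline window and the trial labels are placeholders.

```python
# Sketch: alpha ERD/ERS as % change from a pre-stimulus baseline, averaged
# separately over correct and incorrect trials. Data are placeholders.
import numpy as np

fs = 250
n_trials, n_samp = 40, 3 * fs                     # 3-s epochs; first 1 s = baseline
alpha_power = np.abs(np.random.randn(n_trials, n_samp))  # placeholder alpha envelope
correct = np.arange(n_trials) % 3 != 0            # placeholder accuracy labels

baseline = alpha_power[:, :fs].mean(axis=1, keepdims=True)
erd_ers = 100 * (alpha_power - baseline) / baseline       # % change per trial

erd_correct = erd_ers[correct].mean(axis=0)
erd_incorrect = erd_ers[~correct].mean(axis=0)
# Negative values = desynchronization (ERD); positive values = synchronization (ERS).
```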

19.
Neuroreport ; 17(11): 1133-7, 2006 Jul 31.
Article in English | MEDLINE | ID: mdl-16837841

ABSTRACT

Event-related potential correlates of the buildup of the precedence effect were examined. Buildup is a type of precedence-effect illusion in which perception changes (from hearing two clicks to hearing one click) during a click train. Buildup occurs faster for right-leading than left-leading clicks. Continuous click trains that changed leading sides every 15 clicks were presented. Event-related potential N1 amplitudes became smaller over the course of the click train for right-leading clicks only. N1 latency decreased over the click train. Mismatch negativity was seen after lead-lag sides were changed. When the perceived change differed in location only (left-to-right), mismatch negativity peaked earlier than when the perceived change differed in both location and number of clicks (right-to-left). Results suggest that buildup relates to N1 refractoriness, event-related potential 'lead domination' and mismatch negativity differences.
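A minimal sketch of a single lead-lag click pair of the kind concatenated into precedence-effect trains follows; the lead-lag delay and buffer length are assumptions rather than the study's stimulus parameters.

```python
# Sketch: one right-leading lead-lag click pair (lead on the right channel,
# lag a few milliseconds later on the left). Parameters are assumed.
import numpy as np

fs = 48000
lag_ms = 4.0                                      # assumed lead-lag delay
n = int(0.05 * fs)                                # 50-ms buffer per pair
click = np.zeros(n)
click[0] = 1.0                                    # unit impulse as the click

right = click.copy()                              # leading click
left = np.roll(click, int(lag_ms / 1000 * fs))    # lagging click
stereo_pair = np.column_stack([left, right])      # right-leading pair
```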


Subjects
Brain/physiology , Hearing/physiology , Perception/physiology , Acoustic Stimulation , Adult , Dominance, Cerebral , Electrophysiology/methods , Female , Functional Laterality , Humans , Male , Reaction Time
20.
Brain Connect ; 6(1): 76-83, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26456242

ABSTRACT

Using noninvasive neuroimaging, researchers have shown that young children have bilateral and diffuse language networks, which become increasingly left-lateralized and focal with development. Connectivity within the distributed pediatric language network has been minimally studied, and conventional neuroimaging approaches do not distinguish task-related signal changes from those that are task-essential. In this study, we propose a novel multimodal method to map core language sites from patterns of information flux. We retrospectively analyze neuroimaging data collected in two groups of children, ages 5-18 years, performing verb generation in functional magnetic resonance imaging (fMRI) (n = 343) and magnetoencephalography (MEG) (n = 21). The fMRI data were conventionally analyzed and the group activation map parcellated to define node locations. Neuronal activity at each node was estimated from the MEG data using a linearly constrained minimum variance beamformer, and effective connectivity within canonical frequency bands was computed using the phase slope index metric. We observed significant (p ≤ 0.05) effective connections in all subjects. The number of suprathreshold connections was significantly and linearly correlated with participants' age (r = 0.50, n = 21, p ≤ 0.05), suggesting that core language sites emerge as part of the normal developmental trajectory. Across frequencies, we observed significant effective connectivity among proximal left frontal nodes. Within the low-frequency bands, information flux was rostrally directed within a focal left frontal region approximating Broca's area. At higher frequencies, we observed increased connectivity involving bilateral perisylvian nodes. Frequency-specific differences in patterns of information flux were resolved through fast (i.e., MEG) neuroimaging.
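A minimal sketch of the phase slope index (PSI) between two node time courses follows, based on the Nolte et al. (2008) definition: the imaginary part of the sum over neighbouring frequencies of the conjugate coherency product. The sampling rate, band and data are placeholders, and the study's actual implementation may differ in detail.

```python
# Sketch: phase slope index (PSI) between two signals from their complex coherency.
import numpy as np
from scipy.signal import csd, welch

fs = 600                                    # assumed sampling rate, Hz
x = np.random.randn(30 * fs)                # placeholder node time courses
y = np.random.randn(30 * fs)

f, Sxy = csd(x, y, fs=fs, nperseg=2 * fs)   # cross-spectral density (complex)
_, Sxx = welch(x, fs=fs, nperseg=2 * fs)
_, Syy = welch(y, fs=fs, nperseg=2 * fs)
C = Sxy / np.sqrt(Sxx * Syy)                # complex coherency

band = (f >= 8) & (f <= 12)                 # e.g., the alpha band (assumed)
Cb = C[band]
psi = np.imag(np.sum(np.conj(Cb[:-1]) * Cb[1:]))
# The sign of psi indicates the dominant direction of information flux between x and y.
```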


Subjects
Brain Mapping , Frontal Lobe/growth & development , Frontal Lobe/physiology , Language , Magnetic Resonance Imaging , Neural Pathways/physiology , Adolescent , Brain Mapping/methods , Child , Female , Humans , Magnetic Resonance Imaging/methods , Magnetoencephalography/methods , Male