Results 1 - 20 of 26
1.
Ear Hear ; 44(5): 1107-1120, 2023.
Article in English | MEDLINE | ID: mdl-37144890

ABSTRACT

OBJECTIVES: Understanding speech-in-noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al. 2021, Neuroimage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The present study examined neural predictors of SiN ability in a large cohort of cochlear-implant (CI) users. DESIGN: We recorded electroencephalography in 114 postlingually deafened CI users while they completed the California consonant test: a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (consonant-nucleus-consonant words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a vertex electrode (Cz), which could help maximize eventual generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of SiN performance. RESULTS: In general, there was good agreement among the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by the duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the California consonant test (conducted simultaneously with the electroencephalography recording) and the consonant-nucleus-consonant test (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds. In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise. CONCLUSIONS: These data indicate a neurophysiological correlate of SiN performance, thereby revealing a richer profile of an individual's hearing performance than shown by psychoacoustic measures alone. These results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners on the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.
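
A minimal sketch of the kind of multiple linear regression described above, using synthetic data; the cohort size is taken from the abstract, but all variable names and coefficients are hypothetical placeholders, not the study's dataset or results.

```python
# Sketch: regress a word-in-noise score on an ERP measure plus
# demographic/hearing predictors. Data are synthetic and illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 114                                   # cohort size reported above

n1p2_amp = rng.normal(5.0, 1.5, n)        # N1-P2 amplitude at Cz (hypothetical, uV)
age = rng.uniform(30, 80, n)              # years
device_use = rng.uniform(0.5, 20, n)      # years of CI use
low_freq_thresh = rng.uniform(20, 90, n)  # residual low-frequency threshold (dB HL)

# Synthetic outcome loosely tied to the ERP predictor, for illustration only
cct_score = 40 + 4 * n1p2_amp - 0.1 * age + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([n1p2_amp, age, device_use, low_freq_thresh]))
fit = sm.OLS(cct_score, X).fit()
print(fit.summary())                      # per-predictor coefficients, t-values, R^2
```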


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Humans, Speech, Individuality, Noise, Speech Perception/physiology
2.
Ear Hear ; 43(3): 849-861, 2022.
Article in English | MEDLINE | ID: mdl-34751679

ABSTRACT

OBJECTIVES: Despite the widespread use of noise reduction (NR) in modern digital hearing aids, our neurophysiological understanding of how NR affects speech-in-noise perception, and why its effect is variable, is limited. The current study aimed to (1) characterize the effect of NR on the neural processing of target speech and (2) seek neural determinants of individual differences in the NR effect on speech-in-noise performance, hypothesizing that an individual's own capability to inhibit background noise would inversely predict NR benefits in speech-in-noise perception. DESIGN: Thirty-six adult listeners with normal hearing participated in the study. Behavioral and electroencephalographic responses were simultaneously obtained during a speech-in-noise task in which natural monosyllabic words were presented at three different signal-to-noise ratios, each with NR off and on. A within-subject analysis assessed the effect of NR on cortical evoked responses to target speech in temporal-frontal speech and language brain regions, including the supramarginal gyrus and inferior frontal gyrus in the left hemisphere. In addition, an across-subject analysis related an individual's tolerance to noise, measured as the amplitude ratio of auditory-cortical responses to target speech and background noise, to their speech-in-noise performance. RESULTS: At the group level, in the poorest signal-to-noise ratio condition, NR significantly increased early supramarginal gyrus activity and decreased late inferior frontal gyrus activity, indicating a switch to more immediate lexical access and less effortful cognitive processing, although no improvement in behavioral performance was found. The across-subject analysis revealed that the cortical index of individual noise tolerance significantly correlated with NR-driven changes in speech-in-noise performance. CONCLUSIONS: NR can facilitate speech-in-noise processing even without improving behavioral performance. Findings from the current study also indicate that people with lower noise tolerance are more likely to benefit from NR. Overall, the results suggest that future research should take a mechanistic approach to NR outcomes and individual noise tolerance.


Subject(s)
Hearing Aids, Speech Perception, Adult, Humans, Noise, Signal-To-Noise Ratio, Speech, Speech Perception/physiology
3.
J Integr Neurosci ; 21(1): 29, 2022 Jan 28.
Article in English | MEDLINE | ID: mdl-35164465

ABSTRACT

Background: Verbal communication requires the retrieval of semantic and syntactic information elicited by various kinds of words (i.e., parts of speech) in a sentence. Content words, such as nouns and verbs, convey essential information about the overall meaning (semantics) of a sentence, whereas function words, such as prepositions and pronouns, carry less meaning and support the syntax of the sentence. Methods: This study aimed to identify neural correlates of the differential information retrieval processes for several parts of speech (i.e., content and function words, nouns and verbs, and objects and subjects) via electroencephalography performed during English spoken-sentence comprehension in thirteen participants with normal hearing. Recently, phoneme-related information has emerged as a useful acoustic feature for investigating human speech processing. Therefore, in this study, we examined the relative importance of various parts of speech in sentence processing using information about phoneme onset times. Results: Differences in the strength of cortical responses in language-related brain regions provide neurological evidence that content words, nouns, and objects are dominant over function words, verbs, and subjects, respectively, in spoken sentences. Conclusions: The findings of this study may provide insights into the different contributions of certain types of words, relative to others, to the overall process of sentence understanding.


Asunto(s)
Mapeo Encefálico , Corteza Cerebral/fisiología , Comprensión/fisiología , Electroencefalografía , Psicolingüística , Percepción del Habla/fisiología , Adulto , Femenino , Humanos , Masculino , Adulto Joven
4.
Neuroimage ; 228: 117699, 2021 03.
Article in English | MEDLINE | ID: mdl-33387631

ABSTRACT

Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. There is variance in individuals' ability to understand SiN that cannot be explained by simple hearing profiles, which suggests that central factors may underlie this variance. Here, we elucidated several cortical functions engaged during a SiN task and their contributions to individual variance, using both within- and across-subject approaches. Through a within-subject analysis of source-localized electroencephalography, we investigated how the acoustic signal-to-noise ratio (SNR) alters cortical evoked responses to a target word across the speech recognition areas, finding stronger responses in the left supramarginal gyrus (SMG; BA40, the dorsal lexicon area) with quieter noise. Through an individual-differences approach, we found that listeners show different neural sensitivity to the background noise and the target speech, reflected in the amplitude ratio of earlier auditory-cortical responses to speech and noise, termed the internal SNR. Listeners with better internal SNR showed better SiN performance. Further, we found that post-speech-time SMG activity explains additional variance in SiN performance that is not accounted for by internal SNR. This result demonstrates that at least two cortical processes contribute to SiN performance independently: pre-target-time processing that attenuates the neural representation of background noise, and post-target-time processing that extracts information from speech sounds.
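
A minimal sketch of the internal-SNR idea described above: the amplitude ratio of early auditory-cortical evoked responses to target speech versus background noise. The epochs here are synthetic stand-ins for source-localized EEG, and the amplitude measure (peak-to-peak of the trial-averaged response) is one plausible choice, not necessarily the study's exact metric.

```python
# Sketch: compute an "internal SNR" from evoked responses to speech vs. noise.
import numpy as np

rng = np.random.default_rng(1)
fs = 500                                   # Hz, EEG sampling rate (illustrative)
t = np.arange(0, 0.5, 1 / fs)              # 500-ms analysis window

def evoked_amplitude(epochs):
    """Peak-to-peak amplitude of the across-trial average response."""
    erp = epochs.mean(axis=0)
    return erp.max() - erp.min()

# Synthetic single-trial responses to the noise onset and to the target word
noise_epochs = rng.normal(0, 1, (60, t.size)) + 0.8 * np.sin(2 * np.pi * 4 * t)
speech_epochs = rng.normal(0, 1, (60, t.size)) + 1.6 * np.sin(2 * np.pi * 4 * t)

internal_snr = evoked_amplitude(speech_epochs) / evoked_amplitude(noise_epochs)
print(f"internal SNR = {internal_snr:.2f}")  # >1: stronger speech than noise encoding
```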


Asunto(s)
Atención/fisiología , Enmascaramiento Perceptual/fisiología , Percepción del Habla/fisiología , Adulto , Corteza Auditiva , Umbral Auditivo/fisiología , Electroencefalografía , Potenciales Evocados Auditivos/fisiología , Femenino , Humanos , Masculino , Ruido , Procesamiento de Señales Asistido por Computador , Relación Señal-Ruido , Adulto Joven
5.
J Acoust Soc Am ; 150(3): 2131, 2021 09.
Article in English | MEDLINE | ID: mdl-34598595

ABSTRACT

Speech perception (especially in background noise) is a critical problem for hearing-impaired listeners and an important issue for cognitive hearing science. Despite a plethora of standardized measures, few single-word closed-set tests uniformly sample the most frequently used phonemes and use response choices that equally sample phonetic features like place and voicing. The Iowa Test of Consonant Perception (ITCP) attempts to solve this. It is a proportionally balanced phonemic word recognition task designed to assess perception of the initial consonant of monosyllabic consonant-vowel-consonant (CVC) words. The ITCP consists of 120 sampled CVC words. Words were recorded from four different talkers (two female) and uniformly sampled from all four quadrants of the vowel space to control for coarticulation. Response choices on each trial are balanced to equate difficulty and sample a single phonetic feature. This study evaluated the psychometric properties of the ITCP by examining reliability (test-retest) and validity in an online sample of normal-hearing participants. Ninety-eight participants completed two sessions of the ITCP along with standardized tests of word and sentence recognition in noise (CNC words and AzBio sentences). The ITCP showed good test-retest reliability and convergent validity with two popular tests presented in noise. All the materials needed to use the ITCP, or to construct your own version of it, are freely available [Geller, McMurray, Holmes, and Choi (2020). https://osf.io/hycdu/].
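
A minimal sketch of the two psychometric checks named above: test-retest reliability (session 1 vs. session 2 scores) and convergent validity (ITCP vs. another speech-in-noise score), both as Pearson correlations. The scores are simulated; only the sample size comes from the abstract.

```python
# Sketch: test-retest reliability and convergent validity on simulated scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 98                                   # participants, per the abstract
ability = rng.normal(0.75, 0.10, n)      # latent per-subject ability (hypothetical)

itcp_s1 = np.clip(ability + rng.normal(0, 0.04, n), 0, 1)   # session 1
itcp_s2 = np.clip(ability + rng.normal(0, 0.04, n), 0, 1)   # session 2
azbio = np.clip(ability + rng.normal(0, 0.06, n), 0, 1)     # comparison test

r_retest, p1 = pearsonr(itcp_s1, itcp_s2)
r_valid, p2 = pearsonr(itcp_s1, azbio)
print(f"test-retest r = {r_retest:.2f} (p = {p1:.3g})")
print(f"convergent validity r = {r_valid:.2f} (p = {p2:.3g})")
```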


Asunto(s)
Percepción del Habla , Femenino , Humanos , Iowa , Ruido/efectos adversos , Fonética , Reproducibilidad de los Resultados
6.
Neuroimage ; 207: 116360, 2020 02 15.
Article in English | MEDLINE | ID: mdl-31760150

ABSTRACT

Visual and somatosensory spatial attention both induce parietal alpha (8-14 Hz) oscillations whose topographical distribution depends on the direction of spatial attentional focus. In the auditory domain, contrasts of parietal alpha power for leftward versus rightward attention reveal qualitatively similar lateralization; however, it is not clear whether alpha lateralization changes monotonically with the direction of auditory attention as it does for visual spatial attention. In addition, most previous studies of alpha oscillation did not consider individual differences in alpha frequency, but simply analyzed power in a fixed spectral band. Here, we recorded electroencephalography in human subjects while they directed attention to one of five azimuthal locations. After a cue indicating the direction of an upcoming target sequence of spoken syllables (yet before the target began), alpha power changed in a task-specific manner. Individual peak alpha frequencies differed consistently between central electrodes and parieto-occipital electrodes, suggesting multiple neural generators of task-related alpha. Parieto-occipital alpha increased over the hemisphere ipsilateral to attentional focus compared to the contralateral hemisphere, and changed systematically as the direction of attention shifted from far left to far right. These results show that parietal alpha lateralization changes smoothly with the direction of auditory attention, as in visual spatial attention, providing further support for the growing evidence that the frontoparietal attention network is supramodal.
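
A minimal sketch of two analyses mentioned above: estimating an individual's peak alpha frequency from a power spectrum, and computing an alpha lateralization index from ipsi- vs. contralateral parietal power. The signals are synthetic stand-ins for EEG, and the index formula is a common convention assumed here, not necessarily the study's exact computation.

```python
# Sketch: individual peak alpha frequency and an alpha lateralization index.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
fs = 250
samples = 30 * fs                                # 30 s of synthetic "EEG"
x = rng.normal(0, 1, samples) + 1.5 * np.sin(2 * np.pi * 10.4 * np.arange(samples) / fs)

f, pxx = welch(x, fs=fs, nperseg=2 * fs)         # 0.5-Hz frequency resolution
band = (f >= 8) & (f <= 14)
peak_alpha = f[band][np.argmax(pxx[band])]       # individual alpha frequency
print(f"peak alpha = {peak_alpha:.1f} Hz")

def alpha_power(sig):
    f, p = welch(sig, fs=fs, nperseg=2 * fs)
    return p[(f >= 8) & (f <= 14)].sum()

ipsi, contra = alpha_power(x), alpha_power(rng.normal(0, 1, samples))
ali = (ipsi - contra) / (ipsi + contra)          # >0: ipsilateral alpha dominance
print(f"alpha lateralization index = {ali:.2f}")
```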


Asunto(s)
Ritmo alfa/fisiología , Atención/fisiología , Lateralidad Funcional/fisiología , Percepción Espacial/fisiología , Adolescente , Adulto , Mapeo Encefálico/métodos , Electroencefalografía/métodos , Femenino , Humanos , Masculino , Adulto Joven
7.
Neuroimage ; 202: 116151, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31493531

ABSTRACT

Spatial selective attention enables listeners to process a signal of interest in natural settings. However, most past studies of auditory spatial attention used impoverished spatial cues: presenting competing sounds to different ears, using only interaural differences in time (ITDs) and/or intensity (IIDs), or using non-individualized head-related transfer functions (HRTFs). Here we tested the hypothesis that impoverished spatial cues impair spatial auditory attention by only weakly engaging relevant cortical networks. Eighteen normal-hearing listeners reported the content of one of two competing syllable streams simulated at roughly +30° and -30° azimuth. The competing streams consisted of syllables from two different-sex talkers. Spatialization was based on natural spatial cues (individualized HRTFs), individualized IIDs, or generic ITDs. We measured behavioral performance as well as electroencephalographic markers of selective attention. Behaviorally, subjects recalled target streams most accurately with natural cues. Neurally, spatial attention significantly modulated early evoked sensory response magnitudes only for natural cues, not in conditions using only ITDs or IIDs. Consistent with this, parietal oscillatory power in the alpha band (8-14 Hz; associated with filtering out distracting events from unattended directions) showed significantly less attentional modulation with isolated spatial cues than with natural cues. Our findings support the hypothesis that spatial selective attention networks are only partially engaged by impoverished spatial auditory cues. These results not only suggest that studies using unnatural spatial cues underestimate the neural effects of spatial auditory attention, but also illustrate the importance of preserving natural spatial cues in assistive listening devices to support robust attentional control.


Asunto(s)
Atención/fisiología , Percepción Auditiva/fisiología , Encéfalo/fisiología , Señales (Psicología) , Procesamiento Espacial/fisiología , Estimulación Acústica , Adolescente , Adulto , Electroencefalografía , Femenino , Humanos , Masculino , Vías Nerviosas/fisiología , Percepción del Habla/fisiología , Adulto Joven
8.
Hear Res ; 427: 108649, 2023 01.
Article in English | MEDLINE | ID: mdl-36462377

ABSTRACT

Cochlear implants (CIs) have evolved to combine residual acoustic hearing with electric hearing. CI users with residual acoustic hearing are expected to show better speech-in-noise perception than CI-only listeners, because preserved acoustic cues aid the unmasking of speech from background noise. This study sought the neural substrate of better speech unmasking in CI users with preserved acoustic hearing compared to those with a lower degree of acoustic hearing. Cortical evoked responses to speech in multi-talker babble noise were compared between 29 Hybrid (i.e., electric acoustic stimulation, or EAS) and 29 electric-only CI users. The amplitude ratio of evoked responses to speech and noise, or internal SNR, was significantly larger in the CI users with EAS. This result indicates that CI users with better residual acoustic hearing exhibit enhanced unmasking of speech from background noise.


Asunto(s)
Implantación Coclear , Implantes Cocleares , Percepción del Habla , Habla , Percepción del Habla/fisiología , Audición , Estimulación Acústica , Estimulación Eléctrica
9.
J Assoc Res Otolaryngol ; 24(6): 607-617, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38062284

ABSTRACT

OBJECTIVES: Cochlear implant (CI) users exhibit large variability in understanding speech in noise. Past work in CI users found that spectral and temporal resolution correlates with speech-in-noise ability, but a large portion of variance remains unexplained. Recent work on normal-hearing listeners showed that the ability to group temporally and spectrally coherent tones in a complex auditory scene predicts speech-in-noise ability independently of the audiogram, highlighting a central mechanism for auditory scene analysis that contributes to speech-in-noise understanding. The current study examined whether this auditory grouping ability also contributes to speech-in-noise understanding in CI users. DESIGN: Forty-seven post-lingually deafened CI users were tested with psychophysical measures of spectral and temporal resolution, a stochastic figure-ground task that depends on detecting a figure by grouping multiple fixed-frequency elements against a random background, and a sentence-in-noise measure. Multiple linear regression was used to predict sentence-in-noise performance from the other tasks. RESULTS: No collinearity was found among the predictor variables. All three predictors (spectral and temporal resolution plus the figure-ground task) contributed significantly to the multiple linear regression model, indicating that auditory grouping ability in a complex auditory scene explains a proportion of variance in CI users' speech-in-noise performance beyond that explained by spectral and temporal resolution. CONCLUSION: Measures of cross-frequency grouping reflect an auditory cognitive mechanism that determines speech-in-noise understanding independently of cochlear function. Such measures are easily implemented clinically as predictors of CI success and suggest potential strategies for rehabilitation based on training with non-speech stimuli.
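
A minimal sketch of a stochastic figure-ground stimulus of the kind described above: a random multi-tone background redrawn on every time chord, plus a "figure" of fixed-frequency tones that repeats coherently across chords. All parameters (chord duration, tone counts, frequency range) are illustrative assumptions, not the study's exact values.

```python
# Sketch: generate a stochastic figure-ground stimulus (tone clouds + coherent figure).
import numpy as np

rng = np.random.default_rng(4)
fs = 16000
chord_dur = 0.05                          # 50-ms chords (assumed)
n_chords, bg_tones, fig_tones = 20, 10, 4

freq_pool = np.geomspace(200, 7000, 60)   # candidate tone frequencies
figure_freqs = rng.choice(freq_pool, fig_tones, replace=False)  # fixed across chords

t = np.arange(0, chord_dur, 1 / fs)
chords = []
for _ in range(n_chords):
    bg = rng.choice(freq_pool, bg_tones, replace=False)  # background: redrawn per chord
    freqs = np.concatenate([bg, figure_freqs])
    chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    chords.append(chord / len(freqs))

# The figure is detectable only by grouping its coherent tones across chords.
stimulus = np.concatenate(chords)
print(stimulus.shape)
```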


Asunto(s)
Implantación Coclear , Implantes Cocleares , Percepción del Habla , Habla , Ruido
10.
Nat Commun ; 14(1): 6264, 2023 10 07.
Article in English | MEDLINE | ID: mdl-37805497

ABSTRACT

The human brain extracts meaning using an extensive neural system for semantic knowledge. Whether such broadly distributed systems depend on, or can compensate for the loss of, a highly interconnected hub is controversial. We report intracranial recordings from two patients during a speech prediction task, obtained minutes before and after neurosurgical treatment requiring disconnection of the left anterior temporal lobe (ATL), a candidate semantic knowledge hub. Informed by modern diaschisis and predictive coding frameworks, we tested hypotheses ranging from neural network disruption alone to complete compensation by the indirectly affected language-related and speech-processing sites. Immediately after ATL disconnection, we observed neurophysiological alterations in the recorded frontal and auditory sites, providing direct evidence for the importance of the ATL as a semantic hub. We also obtained evidence for rapid, albeit incomplete, attempts at neural network compensation, with the neural impact largely taking the forms stipulated by the predictive coding framework specifically, and by the modern diaschisis framework more generally. The overall results validate these frameworks and reveal both the immediate impact of losing a brain hub and the human brain's capability to adjust to that loss.


Asunto(s)
Diásquisis , Semántica , Humanos , Mapeo Encefálico/métodos , Imagen por Resonancia Magnética , Lóbulo Temporal/cirugía , Lóbulo Temporal/fisiología
11.
Trends Hear ; 26: 23312165221141143, 2022.
Article in English | MEDLINE | ID: mdl-36464791

ABSTRACT

Auditory selective attention is a crucial top-down cognitive mechanism for understanding speech in noise. Cochlear implant (CI) users display great variability in speech-in-noise performance that is not easily explained by peripheral auditory profile or demographic factors. Thus, it is imperative to understand whether auditory cognitive processes such as selective attention explain this variability. The present study directly addressed this question by quantifying attentional modulation of cortical auditory responses during an attention task and comparing its individual differences with speech-in-noise performance. In our attention experiment, participants with CIs were given a pre-stimulus visual cue that directed their attention to one of two speech streams and were asked to select a deviant syllable in the target stream. The two speech streams consisted of a female voice saying "Up" five times every 800 ms and a male voice saying "Down" four times every 1 s. The onset of each syllable elicited distinct event-related potentials (ERPs). At each syllable onset, the difference in ERP amplitude between the two attentional conditions (attended minus ignored) was computed; this difference served as a proxy for attentional modulation strength. Our group-level analysis showed that ERP amplitudes were greater when the syllable was attended than when it was ignored, demonstrating that attention modulated cortical auditory responses. Moreover, the strength of attentional modulation correlated significantly with speech-in-noise performance. These results suggest that the attentional modulation of cortical auditory responses may provide a neural marker for predicting CI users' success on clinical tests of speech-in-noise listening.
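
A minimal sketch of the attentional-modulation measure described above: the per-subject difference in ERP amplitude between attended and ignored syllables, correlated with speech-in-noise scores. All data here are simulated; only the logic (attended minus ignored, then a Pearson correlation) comes from the abstract.

```python
# Sketch: ERP attentional modulation (attended - ignored) vs. speech-in-noise scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_subj = 30                               # hypothetical group size

# Mean ERP amplitude at syllable onsets in each attention condition (simulated, uV)
attended = rng.normal(3.0, 0.8, n_subj)
ignored = rng.normal(2.0, 0.8, n_subj)

modulation = attended - ignored           # attentional modulation strength
sin_score = 50 + 8 * modulation + rng.normal(0, 5, n_subj)  # synthetic link

r, p = pearsonr(modulation, sin_score)
print(f"modulation vs. speech-in-noise: r = {r:.2f}, p = {p:.3g}")
```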


Asunto(s)
Implantación Coclear , Implantes Cocleares , Femenino , Masculino , Humanos , Habla , Potenciales Evocados Auditivos , Percepción Auditiva
12.
Front Hum Neurosci ; 15: 676992, 2021.
Article in English | MEDLINE | ID: mdl-34239430

ABSTRACT

Selective attention enhances cortical responses to attended sensory inputs while suppressing responses to others, which can be an effective strategy for speech-in-noise (SiN) understanding. Emerging evidence shows large variance in attentional control during SiN tasks, even among normal-hearing listeners. Yet whether training can enhance the efficacy of attentional control and, if so, whether the training effects transfer to performance on a SiN task has not been explicitly studied. Here, we introduce a neurofeedback training paradigm designed to reinforce the attentional modulation of auditory evoked responses. Young normal-hearing adults attended one of two competing speech streams, consisting of five repetitions of the word "up" spoken in a regular rhythm by a female speaker and four repetitions of the word "down" spoken by a male speaker. Our electroencephalography-based attention decoder classified every single trial using a template-matching method based on pre-defined patterns of cortical auditory responses elicited by either the "up" or the "down" stream. The result of the decoding was provided on the screen as online feedback. After four sessions of this neurofeedback training over 4 weeks, the subjects exhibited improved attentional modulation of evoked responses to the training stimuli, as well as enhanced cortical responses to target speech and better performance on a post-training SiN task. Such training effects were not found in the placebo group, which underwent similar attention training except that feedback was based only on behavioral accuracy. These results indicate that neurofeedback training may reinforce the strength of attentional modulation, which likely improves SiN understanding. Our finding suggests a potential rehabilitation strategy for SiN deficits.
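
A minimal sketch of the template-matching decoder described above: correlate each single-trial response with a pre-defined "up"-attended template and a "down"-attended template, then classify by the better match. The templates here are synthetic rhythmic placeholders built from the stream rates given in the abstract, not the study's actual response patterns.

```python
# Sketch: template-matching classification of single-trial attention.
import numpy as np

rng = np.random.default_rng(6)
n_samples = 1000
time = np.linspace(0, 4, n_samples)                       # 4-s trial (assumed)
template_up = np.sin(2 * np.pi * 1.25 * time)             # 800-ms word rhythm
template_down = np.sin(2 * np.pi * 1.0 * time)            # 1-s word rhythm

def decode(trial):
    """Return the attended stream whose template correlates best with the trial."""
    r_up = np.corrcoef(trial, template_up)[0, 1]
    r_down = np.corrcoef(trial, template_down)[0, 1]
    return "up" if r_up > r_down else "down"               # shown as online feedback

trial = template_up + rng.normal(0, 1.0, n_samples)       # noisy "attend-up" trial
print(decode(trial))
```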

13.
PLoS One ; 15(8): e0236784, 2020.
Article in English | MEDLINE | ID: mdl-32745116

ABSTRACT

Spectral ripple discrimination (SRD) has been widely used to evaluate spectral resolution in cochlear implant (CI) recipients based on its strong correlation with speech perception performance. However, despite its usefulness for predicting speech perception outcomes, SRD performance exhibits large across-subject variability even among subjects implanted with the same CIs and sound processors. Potential factors underlying this observation include current spread, nerve survival, and CI mapping. Previous studies have found that spectral resolution decreases with increasing distance of the stimulation electrode from the auditory nerve fibers (ANFs), attributable to increasing current spread. However, it remains unclear whether the spread of excitation is the only cause of this observation, or whether other factors such as temporal interaction also contribute. In this study, we used a computational model to investigate channel interaction upon non-simultaneous stimulation with respect to the electrode-ANF distance, and evaluated SRD performance for five electrode-ANF distances. SRD performance was determined from the similarity between two neurograms in response to standard and inverted stimuli, and was used to evaluate spectral resolution in the computational model. The spread of excitation increased with increasing electrode-ANF distance, consistent with previous findings. Additionally, preceding pulses delivered from neighboring channels induced a channel interaction that either inhibited or facilitated the neural responses to subsequent pulses, depending on the electrode-ANF distance. SRD performance also decreased with increasing electrode-ANF distance. These findings suggest that variation of the neural responses (inhibition or facilitation) with the electrode-ANF distance in CI users may cause spectral smearing, and hence poor spectral resolution. A computational model such as that used in this study is a useful tool for understanding the neural factors related to CI outcomes in ways that cannot be accomplished by behavioral studies alone.
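
A minimal sketch of the neurogram-similarity metric described above: correlate the model neurogram for the standard ripple stimulus with the one for the inverted stimulus. Lower similarity means the two are more discriminable, implying better modeled spectral resolution. The arrays here are synthetic; the flattened Pearson correlation is one plausible similarity measure, assumed rather than taken from the paper.

```python
# Sketch: similarity between standard- and inverted-stimulus neurograms.
import numpy as np

rng = np.random.default_rng(7)
standard = rng.random((64, 200))                 # neurogram: channels x time bins
inverted = 0.6 * standard + 0.4 * rng.random((64, 200))  # partially smeared version

def similarity(a, b):
    """Pearson correlation of the flattened neurograms."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Lower similarity -> more discriminable ripples -> better spectral resolution.
print(f"neurogram similarity = {similarity(standard, inverted):.2f}")
```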


Asunto(s)
Estimulación Acústica/métodos , Implantes Cocleares , Umbral Auditivo/fisiología , Implantación Coclear/métodos , Nervio Coclear/fisiología , Simulación por Computador , Humanos , Percepción del Habla/fisiología
14.
Clin Interv Aging ; 15: 395-406, 2020.
Article in English | MEDLINE | ID: mdl-32231429

ABSTRACT

INTRODUCTION: Older listeners have difficulty understanding speech in unfavorable listening conditions. To compensate for acoustic degradation, cognitive processing skills, such as working memory, need to be engaged. Despite prior findings on the association between working memory and speech recognition in various listening conditions, it is not yet clear whether the stimuli in working memory tasks should be presented in the auditory or the visual modality. Given the modality-specific characteristics of working memory, we hypothesized that auditory working memory capacity would predict speech recognition performance in adverse listening conditions for older listeners, and that the contribution of auditory working memory to speech recognition would depend on the task and listening condition. METHODS: Seventy-six older listeners and twenty younger listeners completed four auditory working memory tasks, including digit and speech span tasks, and sentence recognition tasks in four different listening conditions involving multi-talker noise and time compression. For older listeners, cognitive function was screened using the Mini-Mental Status Examination, and audibility was assured. RESULTS: Auditory working memory, as measured by listening span, significantly predicted speech recognition performance in adverse listening conditions for older listeners. A linear regression model showed that speech recognition performance for older listeners could be explained by auditory working memory while controlling for the effects of age and hearing sensitivity. DISCUSSION: Measuring working memory in the auditory modality helped explain the variance in speech recognition in adverse listening conditions for older listeners. The linguistic features and the complexity of the auditory stimuli may affect the association between working memory and speech recognition performance. CONCLUSION: We demonstrated the contribution of auditory working memory to speech recognition in unfavorable listening conditions in older populations. Taking the modality-specific characteristics of working memory into account may be key to better understanding the difficulty of speech recognition in daily listening conditions for older listeners.


Asunto(s)
Cognición/fisiología , Memoria a Corto Plazo/fisiología , Percepción del Habla/fisiología , Estimulación Acústica/métodos , Adulto , Anciano , Estudios de Casos y Controles , Femenino , Pruebas Auditivas , Humanos , Lenguaje , Masculino , Persona de Mediana Edad , Reconocimiento en Psicología
15.
Elife ; 8, 2019 11 29.
Article in English | MEDLINE | ID: mdl-31782732

ABSTRACT

Both visual and auditory spatial selective attention result in lateralized alpha (8-14 Hz) oscillatory power in parietal cortex: alpha increases in the hemisphere ipsilateral to the attentional focus. Brain stimulation studies suggest a causal relationship between parietal alpha and suppression of the representation of contralateral visual space. However, there is no evidence that parietal alpha controls auditory spatial attention. Here, we applied high-definition transcranial alternating current stimulation (HD-tACS) to human subjects performing an auditory task in which they directed attention based on either spatial or nonspatial features. Alpha (10 Hz) but not theta (6 Hz) HD-tACS of right parietal cortex interfered with attending to left but not right auditory space. Parietal stimulation had no effect on nonspatial auditory attention. Moreover, performance in post-stimulation trials returned rapidly to baseline. These results demonstrate a causal, frequency-, hemisphere-, and task-specific effect of parietal alpha brain stimulation on top-down control of auditory spatial attention.


Asunto(s)
Ritmo alfa , Atención , Percepción Auditiva , Lóbulo Parietal/fisiología , Procesamiento Espacial , Adolescente , Adulto , Femenino , Voluntarios Sanos , Humanos , Masculino , Estimulación Transcraneal de Corriente Directa , Adulto Joven
16.
J Neurosci Methods ; 311: 253-258, 2019 01 01.
Article in English | MEDLINE | ID: mdl-30389490

ABSTRACT

Classification of spoken word-evoked potentials is useful for both neuroscientific and clinical applications, including brain-computer interfaces (BCIs). By evaluating whether adopting a biology-based structure improves a classifier's accuracy, we can investigate the importance of such structure in human brain circuitry and advance BCI performance. In this study, we propose a semantic-hierarchical structure for classifying spoken word-evoked cortical responses. The proposed structure first decodes the semantic grouping of the words (e.g., a body part vs. a number) and then decodes which exact word was heard. The proposed classifier structure exhibited a consistent ~10% improvement in classification accuracy compared with a non-hierarchical structure. Our result provides a tool for investigating the neural representation of semantic hierarchy and the acoustic properties of spoken words in the human brain. Our results suggest an improved algorithm for BCIs operated by decoding heard, and possibly imagined, words.
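
A minimal sketch of the two-stage semantic-hierarchical classifier described above: stage 1 decodes the semantic group (body part vs. number), stage 2 decodes the exact word within the predicted group. The features, word set, and choice of linear discriminant analysis are all illustrative assumptions, not the study's actual pipeline.

```python
# Sketch: hierarchical decoding (semantic group first, then word within group).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(8)
words = ["arm", "leg", "one", "two"]                       # hypothetical word set
groups = {"arm": "body", "leg": "body", "one": "number", "two": "number"}

# Synthetic "neural features" with some word-dependent structure injected
X = rng.normal(0, 1, (200, 20))
y_word = rng.choice(words, 200)
X += np.array([words.index(w) for w in y_word])[:, None] * 0.3
y_group = np.array([groups[w] for w in y_word])

group_clf = LinearDiscriminantAnalysis().fit(X, y_group)  # stage 1
word_clfs = {g: LinearDiscriminantAnalysis().fit(X[y_group == g], y_word[y_group == g])
             for g in set(groups.values())}               # stage 2, per group

def predict(x):
    g = group_clf.predict(x[None, :])[0]                   # decode semantic group
    return word_clfs[g].predict(x[None, :])[0]             # decode word within group

print(predict(X[0]))
```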


Asunto(s)
Encéfalo/fisiología , Modelos Neurológicos , Reconocimiento de Normas Patrones Automatizadas/métodos , Semántica , Procesamiento de Señales Asistido por Computador , Percepción del Habla/fisiología , Adulto , Algoritmos , Electrocorticografía , Potenciales Evocados , Humanos , Masculino , Habla , Adulto Joven
17.
Neuroscience ; 407: 53-66, 2019 05 21.
Article in English | MEDLINE | ID: mdl-30853540

ABSTRACT

Studies in multiple species, including post-mortem human tissue, have shown that normal aging and/or acoustic overexposure can lead to a significant loss of afferent synapses innervating the cochlea. Hypothetically, this cochlear synaptopathy can lead to perceptual deficits in challenging environments and can contribute to central neural effects such as tinnitus. However, because cochlear synaptopathy can occur without any measurable changes in audiometric thresholds, it can remain hidden from standard clinical diagnostics. To understand the perceptual sequelae of synaptopathy and to evaluate the efficacy of emerging therapies, sensitive and specific non-invasive measures at the individual patient level need to be established. Pioneering experiments in specific mouse strains have helped identify many candidate assays. These include auditory brainstem responses, the middle-ear muscle reflex, envelope-following responses, and extended high-frequency audiograms. Unfortunately, because these non-invasive measures can also be affected by extraneous factors other than synaptopathy, their application and interpretation in humans are not straightforward. Here, we systematically examine six extraneous factors through a series of interrelated human experiments aimed at understanding their effects. Using strategies that may help mitigate the effects of such extraneous factors, we then show that these suprathreshold physiological assays exhibit across-individual correlations with each other, indicative of contributions from a common physiological source consistent with cochlear synaptopathy. Finally, we discuss the application of these assays to two key outstanding questions, and discuss some barriers that still remain. This article is part of a Special Issue entitled: Hearing Loss, Tinnitus, Hyperacusis, Central Gain.


Asunto(s)
Umbral Auditivo/fisiología , Potenciales Evocados Auditivos del Tronco Encefálico/fisiología , Individualidad , Acúfeno/etiología , Cóclea/fisiología , Audición/fisiología , Pérdida Auditiva Provocada por Ruido/complicaciones , Humanos , Sinapsis/fisiología , Acúfeno/fisiopatología
18.
Hear Res ; 367: 223-230, 2018 09.
Article in English | MEDLINE | ID: mdl-29980380

ABSTRACT

BACKGROUND: Pitch perception of complex tones relies on place or temporal fine-structure-based mechanisms for resolved harmonics and on the temporal envelope of unresolved harmonics. Combining this information is essential for speech-in-noise performance, as it allows segregation of a target speaker from background noise. In hybrid cochlear implant (H-CI) users, low-frequency acoustic hearing should provide pitch from resolved harmonics while high-frequency electric hearing should provide temporal envelope pitch from unresolved harmonics. How the acoustic and electric auditory inputs interact in H-CI users is largely unknown. Harmonicity and inharmonicity are emergent features of sound in which overtones are concordant or discordant with the fundamental frequency. We hypothesized that some H-CI users would be able to integrate acoustic and electric information for complex tone pitch perception, and that this ability would correlate with speech-in-noise performance. In this study, we used the perception of inharmonicity to demonstrate this integration. METHODS: Fifteen H-CI users with only acoustic hearing below 500 Hz, only electric hearing above 2 kHz, and more than 6 months of CI experience, along with eighteen normal-hearing (NH) controls, were presented with harmonic and inharmonic sounds. Each stimulus was created with a low-frequency component, corresponding to the H-CI user's acoustic hearing (fundamental frequency between 125 and 174 Hz), and a high-frequency component, corresponding to electric hearing. Subjects were asked to identify the more inharmonic sound, which requires perceptual integration of the low and high components. Speech-in-noise performance was tested in both groups using the California Consonant Test (CCT), and perception of consonant-nucleus-consonant (CNC) words in quiet and AzBio sentences in noise was tested for the H-CI users. RESULTS: Eight of the H-CI subjects (53%), and all of the NH subjects, scored significantly above chance level on at least one subset of the inharmonicity detection task. Inharmonicity detection ability, but not age or pure-tone average, predicted speech scores in a linear model. Inharmonicity detection was significantly correlated with speech scores in both quiet and noise for H-CI users, but not with speech-in-noise performance for NH listeners. Musical experience predicted inharmonicity detection ability but did not predict speech performance. CONCLUSIONS: We demonstrate integration of acoustic and electric information in H-CI users for complex pitch sensation. The correlation with speech scores in H-CI users may reflect the ability to segregate a target speaker from background noise using the speaker's fundamental frequency.
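
A minimal sketch of a harmonic vs. inharmonic complex tone of the kind described above: a low-frequency component (f0 region, within the acoustic-hearing range) plus a high-frequency component (above 2 kHz, in the electric-hearing range), where shifting the high harmonics off integer multiples of f0 creates inharmonicity. The f0 is chosen from the range cited in the abstract; the harmonic numbers, shift size, and levels are illustrative assumptions.

```python
# Sketch: synthesize harmonic and inharmonic complex tones with separate
# low (<500 Hz) and high (>2 kHz) components.
import numpy as np

fs, f0, dur = 22050, 150, 0.5             # f0 within the 125-174 Hz range cited
t = np.arange(0, dur, 1 / fs)

def complex_tone(f0, shift_hz=0.0):
    low = sum(np.sin(2 * np.pi * f0 * k * t) for k in (1, 2, 3))   # 150-450 Hz
    high = sum(np.sin(2 * np.pi * (f0 * k + shift_hz) * t)
               for k in range(14, 20))                              # 2100-2850 Hz
    return (low + high) / 9

harmonic = complex_tone(f0)                    # overtones at exact multiples of f0
inharmonic = complex_tone(f0, shift_hz=45.0)   # high component shifted off-grid
print(harmonic.shape, inharmonic.shape)
```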


Asunto(s)
Implantación Coclear/instrumentación , Implantes Cocleares , Señales (Psicología) , Ruido/efectos adversos , Enmascaramiento Perceptual , Personas con Deficiencia Auditiva/rehabilitación , Percepción de la Altura Tonal , Percepción del Habla , Estimulación Acústica , Anciano , Anciano de 80 o más Años , Audiometría de Tonos Puros , Audiometría del Habla , Umbral Auditivo , Estudios de Casos y Controles , Comprensión , Estimulación Eléctrica , Femenino , Humanos , Masculino , Persona de Mediana Edad , Personas con Deficiencia Auditiva/psicología , Inteligibilidad del Habla , Factores de Tiempo
20.
PLoS One ; 11(6): e0157722, 2016.
Article in English | MEDLINE | ID: mdl-27351198

ABSTRACT

OBJECTIVE: Although vascular pulsatile tinnitus (VPT) has been classified as "objective", VPT is not easily recognizable or documentable in most cases. In response, we developed transcanal sound recording (TSR) and spectro-temporal analysis (STA) for the objective diagnosis of VPT. By refining our initial method, we were able to apply TSR/STA to post-treatment outcome evaluation as well as pre-treatment objective diagnosis. METHODS: TSR was performed on seven VPT patients and five normal controls before and after surgical or interventional treatment. VPT was recorded using an inserted microphone with the subjects placed in both upright and supine positions with 1) a neutral head position, 2) the head rotated to the tinnitus side, 3) the head rotated to the non-tinnitus side, and 4) a neutral position with ipsi-lesional manual cervical compression. The recorded signals were analyzed in both the time and time-frequency domains by performing a short-time Fourier transformation. RESULTS: The pre-treatment ear canal signals of all VPT patients demonstrated pulse-synchronous periodic structures and acoustic characteristics representative of their presumptive vascular pathologies, whereas those of the controls exhibited smaller peaks and weak periodicities. Compared with the pre-treatment signals, the post-treatment signals exhibited significantly reduced peak and root-mean-square amplitudes in the time-domain analysis. Additionally, further sub-band analysis confirmed that the pulse-synchronous signal was no longer identifiable in any subject after treatment and, in particular, that the signal decrement was statistically significant at low frequencies. Moreover, the post-treatment signals of the VPT subjects revealed no significant differences compared with those of the control group. CONCLUSION: We reconfirmed that the TSR/STA method is an effective modality for objectifying VPT. In addition, the potential role of the TSR/STA method in the objective evaluation of treatment outcomes in patients with VPT was demonstrated. Further studies incorporating a larger sample size and more refined recording techniques are warranted.
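
A minimal sketch of the TSR/STA analysis described above: a short-time Fourier transform of the ear-canal recording, followed by a check for pulse-synchronous periodicity in the low-frequency band energy. The "recording" is a synthetic pulsatile bruit in noise; the sampling rate, band edge, and pulse rate are illustrative assumptions.

```python
# Sketch: STFT of an ear-canal recording and detection of pulse-synchronous energy.
import numpy as np
from scipy.signal import stft, butter, filtfilt

rng = np.random.default_rng(9)
fs = 8000
t = np.arange(0, 10, 1 / fs)                  # 10-s recording
pulse_rate = 1.2                               # Hz, ~72 beats per minute
envelope = 0.5 * (1 + np.sin(2 * np.pi * pulse_rate * t))

b, a = butter(4, 300 / (fs / 2))               # vascular bruits: mostly low-frequency
bruit = filtfilt(b, a, rng.normal(0, 1, t.size)) * envelope
recording = bruit + 0.05 * rng.normal(0, 1, t.size)

f, frames, Z = stft(recording, fs=fs, nperseg=1024)
low_band = (np.abs(Z) ** 2)[f <= 500].sum(axis=0)   # low-band energy per frame

# A pulse-synchronous signal appears as periodicity of the low-band energy
# at the pulse rate.
spec = np.abs(np.fft.rfft(low_band - low_band.mean()))
mod_freqs = np.fft.rfftfreq(low_band.size, d=frames[1] - frames[0])
print(f"dominant modulation ~ {mod_freqs[np.argmax(spec)]:.2f} Hz")
```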


Asunto(s)
Sonido , Acúfeno/diagnóstico , Adulto , Estudios de Casos y Controles , Femenino , Análisis de Fourier , Pruebas Auditivas/instrumentación , Pruebas Auditivas/métodos , Pruebas Auditivas/normas , Humanos , Masculino , Persona de Mediana Edad , Acúfeno/terapia