Results 1 - 20 of 210
1.
J Assoc Res Otolaryngol ; 24(6): 619-631, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38079021

ABSTRACT

PURPOSE: The role of the medial olivocochlear system in speech perception in noise has been debated over the years, with studies showing mixed results. One possible reason for this could be the dependence of this relationship on the parameters used in assessing the speech perception ability (age, stimulus, and response-related variables). METHODS: The current study assessed the influence of the type of speech stimuli (monosyllables, words, and sentences), the signal-to-noise ratio (+5, 0, -5, and -10 dB), the metric used to quantify the speech perception ability (percent-correct, SNR-50, and slope of the psychometric function) and age (young vs old) on the relationship between medial olivocochlear reflex (quantified by contralateral inhibition of transient evoked otoacoustic emissions) and speech perception in noise. RESULTS: A linear mixed-effects model revealed no significant contributions of the medial olivocochlear reflex to speech perception in noise. CONCLUSION: The results suggest that there was no evidence of any modulatory influence of the indirectly measured medial olivocochlear reflex strength on speech perception in noise.
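
For readers who want to see the shape of such an analysis, the following is a minimal Python sketch of a linear mixed-effects model relating contralateral TEOAE inhibition to speech-in-noise scores, using statsmodels. The simulated data, column names, and fixed-effects structure are hypothetical placeholders, not the authors' actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: one row per participant x SNR condition.
# All column names and values are hypothetical placeholders.
rng = np.random.default_rng(0)
n_subj, snrs = 40, [-10, -5, 0, 5]
df = pd.DataFrame({
    "participant_id": np.repeat(np.arange(n_subj), len(snrs)),
    "snr_db": np.tile(snrs, n_subj),
    "age_group": np.repeat(rng.choice(["young", "old"], n_subj), len(snrs)),
    "moc_inhibition_db": np.repeat(rng.normal(1.0, 0.5, n_subj), len(snrs)),
    "sin_score": rng.normal(70, 15, n_subj * len(snrs)),   # percent correct
})

# Fixed effects: MOC reflex strength and its interactions with SNR and age group;
# a random intercept per participant absorbs the repeated measures.
model = smf.mixedlm(
    "sin_score ~ moc_inhibition_db * C(snr_db) * C(age_group)",
    data=df, groups=df["participant_id"],
)
print(model.fit(method="lbfgs").summary())
```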


Subject(s)
Speech Perception , Speech Perception/physiology , Otoacoustic Emissions, Spontaneous/physiology , Speech , Noise , Reflex , Cochlea/physiology , Olivary Nucleus/physiology , Acoustic Stimulation
2.
Brain Lang ; 247: 105359, 2023 12.
Article in English | MEDLINE | ID: mdl-37951157

ABSTRACT

Visual information from a speaker's face enhances auditory neural processing and speech recognition. To determine whether auditory memory can be influenced by visual speech, the degree of auditory neural adaptation of an auditory syllable preceded by an auditory, visual, or audiovisual syllable was examined using EEG. Consistent with previous findings, and with additional adaptation of auditory neurons tuned to acoustic features, stronger adaptation of N1, P2 and N2 auditory evoked responses was observed when the auditory syllable was preceded by an auditory compared to a visual syllable. However, adaptation was weaker when the auditory syllable was preceded by an audiovisual syllable than by an auditory one, although still stronger than when it was preceded by a visual syllable, and N1 and P2 latencies were longer in the audiovisual condition. These results further demonstrate that visual speech acts on auditory memory but suggest competing visual influences in the case of audiovisual stimulation.


Subject(s)
Speech Perception , Humans , Speech Perception/physiology , Speech , Electroencephalography , Visual Perception/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation , Photic Stimulation
3.
Trends Hear ; 27: 23312165231192290, 2023.
Article in English | MEDLINE | ID: mdl-37551089

ABSTRACT

Speech and music both play fundamental roles in daily life. Speech is important for communication while music is important for relaxation and social interaction. Both speech and music have a large dynamic range. This does not pose problems for listeners with normal hearing. However, for hearing-impaired listeners, elevated hearing thresholds may result in low-level portions of sound being inaudible. Hearing aids with frequency-dependent amplification and amplitude compression can partly compensate for this problem. However, the gain required for low-level portions of sound to compensate for the hearing loss can be larger than the maximum stable gain of a hearing aid, leading to acoustic feedback. Feedback control is used to avoid such instability, but this can lead to artifacts, especially when the gain is only just below the maximum stable gain. We previously proposed a deep-learning method called DeepMFC for controlling feedback and reducing artifacts and showed that when the sound source was speech DeepMFC performed much better than traditional approaches. However, its performance using music as the sound source was not assessed and the way in which it led to improved performance for speech was not determined. The present paper reveals how DeepMFC addresses feedback problems and evaluates DeepMFC using speech and music as sound sources with both objective and subjective measures. DeepMFC achieved good performance for both speech and music when it was trained with matched training materials. When combined with an adaptive feedback canceller it provided over 13 dB of additional stable gain for hearing-impaired listeners.
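
DeepMFC itself is a deep network and is not reproduced here. As background for the "adaptive feedback canceller" it is combined with, below is a minimal Python sketch of a conventional normalized-LMS (NLMS) feedback canceller; the signal names, filter length, and step size are illustrative assumptions rather than the system described in the paper.

```python
import numpy as np

def nlms_afc(mic, loudspeaker, n_taps=64, mu=0.005, eps=1e-8):
    """Estimate the loudspeaker-to-microphone feedback path with NLMS and
    subtract the predicted feedback from the microphone signal."""
    w = np.zeros(n_taps)                  # feedback-path estimate (FIR taps)
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        # most recent n_taps loudspeaker samples, newest first, zero-padded
        x = loudspeaker[max(0, n - n_taps + 1): n + 1][::-1]
        x = np.pad(x, (0, n_taps - len(x)))
        y_hat = w @ x                     # predicted feedback component
        e = mic[n] - y_hat                # feedback-compensated output sample
        w += mu * e * x / (x @ x + eps)   # NLMS update
        out[n] = e
    return out

# Illustrative call with placeholder signals (simple delayed feedback path plus noise)
fs = 16000
loudspeaker = np.random.randn(fs)
mic = 0.1 * np.roll(loudspeaker, 8) + 0.01 * np.random.randn(fs)
cleaned = nlms_afc(mic, loudspeaker)
```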


Subject(s)
Hearing Aids , Music , Speech Perception , Humans , Speech , Feedback , Acoustic Stimulation , Signal Processing, Computer-Assisted
4.
Neuroimage ; 278: 120271, 2023 09.
Article in English | MEDLINE | ID: mdl-37442310

ABSTRACT

Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the face of the talker improves intelligibility and that individuals differ in their ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations. In two independent fMRI studies, clear and noisy speech was presented in visual, auditory and audiovisual formats to thirty-seven participants who rated intelligibility. An event-related design was used to sort noisy speech trials by their intelligibility. Individual-differences multidimensional scaling was applied to fMRI response patterns in superior temporal cortex and the dissimilarity between responses to clear speech and noisy (but intelligible) speech was measured. Neural dissimilarity was less for audiovisual speech than auditory-only speech, corresponding to the greater intelligibility of noisy audiovisual speech. Dissimilarity was less in participants with better noisy speech perception, corresponding to individual differences. These relationships held for both single word and entire sentence stimuli, suggesting that they were driven by intelligibility rather than the specific stimuli tested. A neural measure of perceptual intelligibility may aid in the development of strategies for helping those with impaired speech perception.
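
As a rough illustration of the dissimilarity measure at the heart of this approach, the sketch below computes the correlation distance between mean response patterns to clear and to noisy speech in a region of interest. The arrays and their dimensions are placeholders, and the paper's individual-differences multidimensional scaling step is not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import correlation

# Placeholder fMRI response patterns in a superior temporal ROI:
# voxels x trials, separately for clear and noisy (but intelligible) speech.
rng = np.random.default_rng(1)
clear = rng.standard_normal((500, 40))
noisy = rng.standard_normal((500, 40))

# Neural dissimilarity as correlation distance (1 - Pearson r) between the mean
# voxel patterns; smaller values mean noisy speech is encoded more like clear speech.
dissimilarity = correlation(clear.mean(axis=1), noisy.mean(axis=1))
print(f"clear vs noisy pattern dissimilarity: {dissimilarity:.3f}")
```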


Subject(s)
Speech Perception , Speech , Humans , Magnetic Resonance Imaging , Individuality , Visual Perception/physiology , Speech Perception/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Speech Intelligibility , Acoustic Stimulation/methods
5.
Neurobiol Aging ; 126: 1-13, 2023 06.
Article in English | MEDLINE | ID: mdl-36881943

ABSTRACT

Speech comprehension under dynamic cocktail party conditions requires auditory search for relevant speech content and focusing spatial attention on the target talker. Here, we investigated the development of these cognitive processes in a population of 329 participants aged 20-70 years. We used a multi-talker speech detection and perception task in which pairs of words (each consisting of a cue and a target word) were simultaneously presented from lateralized positions. Participants attended to predefined cue words and responded to the corresponding target. Task difficulty was varied by presenting cue and target stimuli at different intensity levels. Decline in performance was observed only in the oldest group (age range 53-70 years) and only in the most difficult condition. The EEG analysis of neurocognitive correlates of lateralized auditory attention and stimulus evaluation (N2ac, LPCpc, alpha power lateralization) revealed age-associated changes in focussing on and processing of task-relevant information, while no such deficits were found on early auditory search and target segregation. Irrespective of age, more challenging listening conditions were associated with an increased allocation of attentional resources.


Subject(s)
Speech Perception , Humans , Aged , Auditory Perception , Attention , Speech , Electroencephalography , Acoustic Stimulation
6.
Cereb Cortex ; 33(12): 7595-7607, 2023 06 08.
Article in English | MEDLINE | ID: mdl-36967114

ABSTRACT

The establishment of cortical representations critical for mounting language is supported by both ongoing neural maturation and experience-expectant plasticity as infants increasingly recognize the linguistic events that occur most often in their surrounding environment. Previous research has demonstrated that enhanced efficiency of syllabic representation and discrimination is facilitated by interactive attention-driven, nonspeech auditory experience. However, experience-dependent effects on syllable processing as a function of nonspeech, passive auditory exposure (PAE), remain unclear. As theta band-specific activity has been shown to support syllabic processing, we chose theta inter-trial phase synchrony to examine the experience-dependent effects of PAE on the processing of a syllable contrast. Results demonstrated that infants receiving PAE increased syllabic processing efficiency. Specifically, compared with controls, the group receiving PAE showed more mature, efficient processing, exhibiting less theta phase synchrony for the standard syllable at 9 months, and at 18 months, for the deviant syllable. Furthermore, the PAE modulatory effect on theta phase synchrony at 7 and 9 months was associated with language scores at 12 and 18 months. These findings confirm that supporting emerging perceptual abilities during early sensitive periods impacts syllabic processing efficiency and aligns with literature demonstrating associations between infant auditory perceptual abilities and later language outcomes.
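
Theta inter-trial phase synchrony of the kind used here can be computed roughly as in the Python sketch below (band-pass filter, Hilbert phase, mean resultant vector across trials). The epoch array, sampling rate, and band edges are assumed values, not the study's exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                  # Hz, assumed sampling rate
epochs = np.random.randn(100, fs)         # trials x time, placeholder syllable epochs

# Band-pass to the theta range (4-8 Hz assumed) and extract instantaneous phase
b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
theta = filtfilt(b, a, epochs, axis=1)
phase = np.angle(hilbert(theta, axis=1))

# Inter-trial phase coherence: length of the mean unit phase vector across trials
itpc = np.abs(np.mean(np.exp(1j * phase), axis=0))
print("peak theta ITPC:", itpc.max())
```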


Subject(s)
Electroencephalography , Speech Perception , Humans , Infant , Electroencephalography/methods , Evoked Potentials, Auditory , Language , Language Development , Linguistics , Acoustic Stimulation/methods
7.
Cereb Cortex ; 33(10): 6486-6493, 2023 05 09.
Article in English | MEDLINE | ID: mdl-36587299

ABSTRACT

Humans excel at constructing mental representations of speech streams in the absence of external auditory input: the internal experience of speech imagery. Elucidating the neural processes underlying speech imagery is critical to understanding this higher-order brain function in humans. Here, using functional magnetic resonance imaging, we investigated the shared and distinct neural correlates of imagined and perceived speech by asking participants to listen to poems articulated by a male voice (perception condition) and to imagine hearing poems spoken by that same voice (imagery condition). We found that compared to baseline, speech imagery and perception activated overlapping brain regions, including the bilateral superior temporal gyri and supplementary motor areas. The left inferior frontal gyrus was more strongly activated by speech imagery than by speech perception, suggesting functional specialization for generating speech imagery. Although more research with a larger sample size and a direct behavioral indicator is needed to clarify the neural systems underlying the construction of complex speech imagery, this study provides valuable insights into the neural mechanisms of the closely associated but functionally distinct processes of speech imagery and perception.


Subject(s)
Speech Perception , Speech , Humans , Male , Brain Mapping , Imagination , Auditory Perception , Magnetic Resonance Imaging
8.
Cereb Cortex ; 33(10): 6273-6281, 2023 05 09.
Article in English | MEDLINE | ID: mdl-36627246

ABSTRACT

When we attentively listen to an individual's speech, our brain activity dynamically aligns to the incoming acoustic input at multiple timescales. Although this systematic alignment between ongoing brain activity and speech in auditory brain areas is well established, the acoustic events that drive this phase-locking are not fully understood. Here, we use magnetoencephalographic recordings of 24 human participants (12 females) while they were listening to a 1 h story. We show that whereas speech-brain coupling is associated with sustained acoustic fluctuations in the speech envelope in the theta-frequency range (4-7 Hz), speech tracking in the low-frequency delta band (below 1 Hz) was strongest around speech onsets, such as the beginning of a sentence. Crucially, delta tracking in bilateral auditory areas was not sustained after onsets, suggesting that delta tracking during continuous speech perception is driven by speech onsets. We conclude that both onsets and sustained components of speech contribute differentially to speech tracking in the delta- and theta-frequency bands, orchestrating the sampling of continuous speech. Thus, our results suggest a temporal dissociation of acoustically driven oscillatory activity in auditory areas during speech tracking, providing valuable implications for the orchestration of speech tracking at multiple time scales.
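
A rough Python sketch of band-limited speech-brain coupling is shown below, using magnitude-squared coherence between a speech envelope and a neural time series, averaged within delta and theta bands. The signals and parameters are placeholders; the paper's actual source-level MEG tracking analysis is considerably more involved.

```python
import numpy as np
from scipy.signal import coherence

fs = 200                                   # Hz, assumed common sampling rate
n = fs * 600                               # 10 minutes of data (placeholder)
envelope = np.random.randn(n)              # speech envelope (placeholder)
neural = np.random.randn(n)                # one auditory sensor/source (placeholder)

f, coh = coherence(envelope, neural, fs=fs, nperseg=fs * 10)
delta_coh = coh[(f > 0) & (f < 1)].mean()      # delta: below 1 Hz
theta_coh = coh[(f >= 4) & (f <= 7)].mean()    # theta: 4-7 Hz
print(f"delta coherence: {delta_coh:.3f}, theta coherence: {theta_coh:.3f}")
```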


Subject(s)
Auditory Cortex , Speech Perception , Female , Humans , Speech , Acoustic Stimulation/methods , Magnetoencephalography/methods , Auditory Perception
9.
Hear Res ; 427: 108663, 2023 01.
Article in English | MEDLINE | ID: mdl-36502543

ABSTRACT

Noise exposure may damage the synapses that connect inner hair cells with auditory nerve fibers, before outer hair cells are lost. In humans, this cochlear synaptopathy (CS) is thought to decrease the fidelity of peripheral auditory temporal coding. In the current study, the primary hypothesis was that higher middle ear muscle reflex (MEMR) thresholds, as a proxy measure of CS, would be associated with smaller values of the binaural intelligibility level difference (BILD). The BILD, which is a measure of binaural temporal coding, is defined here as the difference in thresholds between the diotic and the antiphasic versions of the digits in noise (DIN) test. This DIN BILD may control for factors unrelated to binaural temporal coding such as linguistic, central auditory, and cognitive factors. Fifty-six audiometrically normal adults (34 females) aged 18 - 30 were tested. The test battery included standard pure tone audiometry, tympanometry, MEMR using a 2 kHz elicitor and 226 Hz and 1 kHz probes, the Noise Exposure Structured Interview, forward digit span test, extended high frequency (EHF) audiometry, and diotic and antiphasic DIN tests. The study protocol was pre-registered prior to data collection. MEMR thresholds did not predict the DIN BILD. Secondary analyses showed no association between MEMR thresholds and the individual diotic and antiphasic DIN thresholds. Greater lifetime noise exposure was non-significantly associated with higher MEMR thresholds, larger DIN BILD values, and lower (better) antiphasic DIN thresholds, but not with diotic DIN thresholds, nor with EHF thresholds. EHF thresholds were associated with neither MEMR thresholds nor any of the DIN outcomes, including the DIN BILD. Results provide no evidence that young, audiometrically normal people incur CS with impacts on binaural temporal processing.
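
The core pre-registered comparison can be illustrated with a short Python sketch: compute the DIN BILD as the diotic minus the antiphasic speech reception threshold and test its association with MEMR thresholds using a Spearman rank correlation. All values below are simulated placeholders, not study data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 56                                              # participants
diotic_srt = rng.normal(-10.0, 1.5, n)              # dB SNR (placeholder)
antiphasic_srt = diotic_srt - rng.normal(6.0, 1.0, n)
memr_threshold = rng.normal(85.0, 5.0, n)           # dB SPL (placeholder)

bild = diotic_srt - antiphasic_srt                  # larger = more binaural benefit
rho, p = spearmanr(memr_threshold, bild)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```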


Subject(s)
Ear, Middle , Reflex , Female , Humans , Young Adult , Acoustic Stimulation , Auditory Threshold , Muscles , Audiometry, Pure-Tone
10.
Int J Audiol ; 62(2): 110-117, 2023 02.
Article in English | MEDLINE | ID: mdl-35195043

ABSTRACT

OBJECTIVE: The medial olivocochlear (MOC) reflex provides unmasking of sounds in noise, but its contribution to speech-in-noise perception remains unclear due to conflicting results. This study determined associations between MOC reflex strength and sentence recognition in noise in individuals with normal hearing. DESIGN: MOC reflex strength was assessed using contralateral inhibition of transient-evoked otoacoustic emissions (TEOAEs). Scores on the AzBio sentence task were quantified at three signal-to-noise ratios (SNRs). Additionally, slope and threshold of the psychometric function were computed. Associations between MOC reflex strength and speech-in-noise outcomes were assessed using Spearman rank correlations. STUDY SAMPLE: Nineteen young adults with normal hearing participated, with data from 17 individuals (mean age = 21.8 years) included in the analysis. RESULTS: Contralateral noise significantly decreased the amplitude of TEOAEs. A range of contralateral inhibition values was exhibited across participants. Scores increased significantly with increasing SNR. Contrary to hypotheses, there were no significant correlations between MOC reflex strength and score, nor were there any significant correlations between MOC reflex strength and measures of the psychometric function. CONCLUSIONS: Results found no significant monotonic relationship between MOC reflex strength and sentence recognition in noise. Future work is needed to determine the functional role of the MOC reflex.


Subject(s)
Olivary Nucleus , Otoacoustic Emissions, Spontaneous , Young Adult , Humans , Adult , Otoacoustic Emissions, Spontaneous/physiology , Cochlea/physiology , Noise/adverse effects , Reflex/physiology , Acoustic Stimulation
11.
Cereb Cortex ; 33(7): 3910-3921, 2023 03 21.
Article in English | MEDLINE | ID: mdl-35972410

ABSTRACT

Speech perception depends on the dynamic interplay of bottom-up and top-down information along a hierarchically organized cortical network. Here, we test, for the first time in the human brain, whether neural processing of attended speech is dynamically modulated by task demand using a context-free discrimination paradigm. Electroencephalographic signals were recorded during 3 parallel experiments that differed only in the phonological feature of discrimination (word, vowel, and lexical tone, respectively). The event-related potentials (ERPs) revealed the task modulation of speech processing at approximately 200 ms (P2) after stimulus onset, probably influencing what phonological information to retain in memory. For the phonological comparison of sequential words, task modulation occurred later at approximately 300 ms (N3 and P3), reflecting the engagement of task-specific cognitive processes. The ERP results were consistent with the changes in delta-theta neural oscillations, suggesting the involvement of cortical tracking of speech envelopes. The study thus provides neurophysiological evidence for goal-oriented modulation of attended speech and calls for speech perception models incorporating limited memory capacity and goal-oriented optimization mechanisms.


Subject(s)
Speech Perception , Humans , Speech Perception/physiology , Acoustic Stimulation/methods , Goals , Evoked Potentials/physiology , Speech/physiology , Electroencephalography/methods
12.
J Pers Med ; 12(10)2022 Oct 05.
Article in English | MEDLINE | ID: mdl-36294797

ABSTRACT

Research suggests that cochlear implant (CI) use in elderly people improves speech perception and health-related quality of life (HRQOL). CI provision could also prevent dementia and other comorbidities and support healthy aging. The aim of this study was (1) to prospectively investigate potential changes in HRQOL and speech perception and (2) to identify clinical action points to improve CI treatment. Participants (n = 45) were CI recipients aged 60-90 with postlingual deafness. They were divided into two groups according to age: group 1 (n = 20) received a CI at ages 60-70 years; group 2 (n = 25) at ages 71-90 years. HRQOL and speech perception were assessed preoperatively, and three and twelve months postoperatively. HRQOL and speech perception increased significantly within one year postoperatively in both groups. No difference between groups was found. We conclude that CI treatment improves speech perception and HRQOL in elderly users. Improvement of the referral process for CI treatment and a holistic approach when discussing CI treatment in the elderly population could prevent auditory deprivation and the deterioration of cognitive abilities.

13.
Front Neurosci ; 16: 786939, 2022.
Article in English | MEDLINE | ID: mdl-35733938

ABSTRACT

Understanding speech is essential for adequate social interaction, and its functioning affects health, wellbeing, and quality of life (QoL). Untreated hearing loss (HL) is associated with reduced social activity, depression and cognitive decline. Severe and profound HL is routinely rehabilitated with cochlear implantation. The success of treatment is mostly assessed by performance-based outcome measures such as speech perception. The ultimate goal of cochlear implantation, however, is to improve the patient's QoL. Therefore, patient-reported outcome measures (PROMs) would be clinically valuable as they assess subjective benefits and overall effectiveness of treatment. The aim of this study was to assess the patient-reported benefits of unilateral cochlear implantation in an unselected Finnish cohort of patients with bilateral HL. The study design was a prospective evaluation of 118 patients. The patient cohort was longitudinally followed up with repeated within-subject measurements preoperatively and at 6 and 12 months postoperatively. The main outcome measures were one performance-based speech-in-noise (SiN) test (Finnish Matrix Sentence Test), and two PROMs [Finnish versions of the Speech, Spatial, Qualities of Hearing questionnaire (SSQ) and the Nijmegen Cochlear Implant Questionnaire (NCIQ)]. The results showed significant average improvements in SiN scores, from +0.8 dB signal-to-noise ratio (SNR) preoperatively to -3.7 and -3.8 dB SNR at 6- and 12-month follow-up, respectively. Significant improvements were also found for SSQ and NCIQ scores in all subdomains from the preoperative state to 6 and 12 months after first fitting. No clinically significant improvements were observed in any of the outcome measures between 6 and 12 months. Preoperatively, poor SiN scores were associated with low scoring in several subdomains of the SSQ and NCIQ. Poor preoperative SiN scores and low PROMs scoring were significantly associated with larger postoperative improvements. No significant association was found between SiN scores and PROMs postoperatively. This study demonstrates significant benefits of cochlear implantation in the performance-based and patient-reported outcomes in an unselected patient sample. The lack of association between performance and PROMs scores postoperatively suggests that both capture unique aspects of benefit, highlighting the need to clinically implement PROMs in addition to performance-based measures for a more holistic assessment of treatment benefit.

14.
Neuroimage ; 258: 119395, 2022 09.
Article in English | MEDLINE | ID: mdl-35718023

ABSTRACT

The systematic alignment of low-frequency brain oscillations with the acoustic speech envelope signal is well established and has been proposed to be crucial for actively perceiving speech. Previous studies investigating speech-brain coupling in source space are restricted to univariate pairwise approaches between brain and speech signals, and therefore speech tracking information in frequency-specific communication channels might be lacking. To address this, we propose a novel multivariate framework for estimating speech-brain coupling where neural variability from source-derived activity is taken into account along with the rate of the envelope's amplitude change (derivative). We applied it in magnetoencephalographic (MEG) recordings while human participants (male and female) listened to one hour of continuous naturalistic speech, showing that a multivariate approach outperforms the corresponding univariate method in low and high frequencies across frontal, motor, and temporal areas. Systematic comparisons revealed that the gain in low frequencies (0.6 - 0.8 Hz) was related to the envelope's rate of change, whereas in higher frequencies (from 0.8 to 10 Hz) it was mostly related to the increased neural variability from source-derived cortical areas. Furthermore, following a non-negative matrix factorization approach we found distinct speech-brain components across time and cortical space related to speech processing. We confirm that speech envelope tracking operates mainly in two timescales (δ and θ frequency bands) and we extend those findings by showing shorter coupling delays in auditory-related components and longer delays in higher-association frontal and motor components, indicating temporal differences of speech tracking and providing implications for hierarchical stimulus-driven speech processing.
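
The non-negative matrix factorization step, which separates speech-brain coupling into components over time and cortical space, might look roughly like the sketch below. The coupling matrix, its dimensions, and the number of components are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import NMF

# Placeholder coupling matrix: time lags x cortical sources, non-negative by construction
rng = np.random.default_rng(3)
coupling = np.abs(rng.standard_normal((50, 360)))

nmf = NMF(n_components=4, init="nndsvd", max_iter=500)
temporal_profiles = nmf.fit_transform(coupling)   # component weights across lags
spatial_maps = nmf.components_                    # component weights across sources
print(temporal_profiles.shape, spatial_maps.shape)
```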


Subject(s)
Auditory Cortex , Speech Perception , Acoustic Stimulation , Female , Humans , Magnetoencephalography , Male , Multivariate Analysis , Speech
15.
BMC Neurosci ; 23(1): 27, 2022 05 06.
Article in English | MEDLINE | ID: mdl-35524192

ABSTRACT

BACKGROUND: Auditory temporal processing plays an important role in speech comprehension. Usually, behavioral tests that require subjects to detect silent gaps embedded within a continuous sound are used to assess the ability of auditory temporal processing in humans. To evaluate auditory temporal processing objectively, the present study aimed to measure the auditory steady state responses (ASSRs) elicited by silent gaps of different lengths embedded within a broadband noise. We presented a broadband noise with 40-Hz silent gaps of 3.125, 6.25, and 12.5 ms. RESULTS: The 40-Hz silent gaps of 3.125, 6.25, and 12.5 ms elicited clear ASSRs. Longer silent gaps elicited larger ASSR amplitudes and ASSR phases significantly differed between conditions. CONCLUSION: The 40 Hz gap-evoked ASSR contributes to our understanding of the neural mechanisms underlying auditory temporal processing and may lead to the development of objective measures of auditory temporal acuity in humans.
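
A stimulus of the kind described, broadband noise with silent gaps repeating at a 40-Hz rate, can be generated along the lines of the sketch below. The sampling rate, duration, and placement of the gap within each 25-ms period are illustrative assumptions, not the authors' stimulus code.

```python
import numpy as np

fs = 48000                          # Hz, assumed sampling rate
duration = 1.0                      # s
gap_rate = 40                       # gaps per second (25-ms period)
gap_ms = 6.25                       # gap duration; also try 3.125 or 12.5 ms

noise = np.random.randn(int(fs * duration))
gap_samples = int(fs * gap_ms / 1000)
period_samples = int(fs / gap_rate)

# Silence the last gap_samples of every 25-ms period
for start in range(0, len(noise), period_samples):
    noise[start + period_samples - gap_samples: start + period_samples] = 0.0
```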


Subject(s)
Electroencephalography , Noise , Acoustic Stimulation , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Humans
16.
Neuroimage ; 257: 119311, 2022 08 15.
Article in English | MEDLINE | ID: mdl-35589000

ABSTRACT

Viewing a speaker's lip movements facilitates speech perception, especially under adverse listening conditions, but the neural mechanisms of this perceptual benefit at the phonemic and feature levels remain unclear. This fMRI study addressed this question by quantifying regional multivariate representation and network organization underlying audiovisual speech-in-noise perception. Behaviorally, valid lip movements improved recognition of place of articulation to aid phoneme identification. Meanwhile, lip movements enhanced neural representations of phonemes in left auditory dorsal stream regions, including frontal speech motor areas and the supramarginal gyrus (SMG). Moreover, neural representations of place of articulation and voicing features were promoted differentially by lip movements in these regions, with voicing enhanced in Broca's area while place of articulation was better encoded in left ventral premotor cortex and SMG. Next, dynamic causal modeling (DCM) analysis showed that such local changes were accompanied by strengthened effective connectivity along the dorsal stream. Moreover, the neurite orientation dispersion of the left arcuate fasciculus, the structural backbone of the auditory dorsal stream, predicted the visual enhancements of neural representations and effective connectivity. Our findings provide novel insight into speech science: lip movements promote both local phonemic and feature encoding and network connectivity in the dorsal pathway, and this functional enhancement is mediated by the microstructural architecture of the circuit.


Subject(s)
Auditory Cortex , Speech Perception , Acoustic Stimulation , Auditory Perception , Brain Mapping , Humans , Lip , Speech
17.
Brain Lang ; 230: 105122, 2022 07.
Article in English | MEDLINE | ID: mdl-35460953

ABSTRACT

Understanding the effects of statistical regularities on speech processing is a central issue in auditory neuroscience. To investigate the effects of distributional covariance on the neural processing of speech features, we introduce and validate a novel approach: decomposition of time-varying signals into patterns of covariation extracted with Principal Component Analysis. We used this decomposition to assay the sensory representation of pitch covariation patterns in native Chinese listeners and non-native learners of Mandarin Chinese tones. Sensory representations were examined using the frequency-following response, a far-field potential that reflects phase-locked activity from neural ensembles along the auditory pathway. We found a more efficient representation of the covariation patterns that accounted for more redundancy in the form of distributional covariance. Notably, long-term language and short-term training experiences enhanced the sensory representation of these covariation patterns.
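
The decomposition step, extracting patterns of covariation from time-varying pitch contours with PCA, can be illustrated roughly as below; the contour matrix and number of components are placeholders rather than the study's actual parameters.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder F0 contours: rows = individual tone productions, columns = time points
rng = np.random.default_rng(4)
f0_contours = rng.standard_normal((200, 100))

pca = PCA(n_components=3)
scores = pca.fit_transform(f0_contours)    # per-production weights on each pattern
patterns = pca.components_                 # covariation patterns over time
print("variance explained:", pca.explained_variance_ratio_)
```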


Subject(s)
Speech Perception , Speech , Acoustic Stimulation , Acoustics , Electroencephalography , Humans , Pitch Perception/physiology , Speech Perception/physiology
18.
Braz J Otorhinolaryngol ; 88 Suppl 3: S59-S65, 2022.
Article in English | MEDLINE | ID: mdl-35177355

ABSTRACT

OBJECTIVE: To analyze the effect of noise on electrophysiological measurements (P1-N1-P2 complex) of cortical auditory evoked potentials in normal hearing individuals of different ages. METHODS: The study included young, adult, and elderly individuals aged 18-75 years with auditory thresholds of up to 25 dB. Participants were separated according to their age group: G1 (18-25 years old), G2 (31-59 years old) and G3 (60-75 years old). Cortical auditory evoked potentials were elicited with the synthetic speech stimulus /da/ presented in two conditions: without masking and with masking (Delta-t 64 ms). The results were analyzed using statistical measures. RESULTS: Higher latencies and reduced amplitudes were observed in the Delta-t 64 ms condition in all age groups. There were significant differences between the groups, both in P1 latencies for the two conditions and in N1 latencies in the Delta-t 64 ms condition. P1 latencies in the condition without masking were lower in G1, and P1 and N1 latencies in the Delta-t 64 ms condition were higher in G3. These results show the influence of noise on cortical responses in all age groups, with G3 being the most affected by the masking presentation. CONCLUSION: The latency and amplitude measurements vary according to the stimulus presentation condition and age group. The forward masking phenomenon was most pronounced in G3. LEVEL OF EVIDENCE: (2c).


Subject(s)
Auditory Cortex , Speech Perception , Adult , Aged , Humans , Adolescent , Young Adult , Middle Aged , Acoustic Stimulation/methods , Speech Perception/physiology , Evoked Potentials, Auditory/physiology , Noise , Speech , Auditory Cortex/physiology
19.
Brain Res ; 1781: 147778, 2022 04 15.
Article in English | MEDLINE | ID: mdl-35007548

ABSTRACT

Covert speech, the mental imagery of speaking, has been studied increasingly to understand and decode thoughts in the context of brain-computer interfaces. In studies of speech comprehension, neural oscillations are thought to play a key role in the temporal encoding of speech. However, little is known about the role of oscillations in covert speech. In this study, we investigated the oscillatory involvements in covert speech and speech perception. Data were collected from 10 participants with 64-channel EEG. Participants heard the words 'blue' and 'orange', and subsequently mentally rehearsed them. First, continuous wavelet transform was performed on epoched signals and subsequently two-tailed t-tests between the two classes (tasks) were conducted to determine statistical differences in frequency and time (t-CWT). In the current experiment, a task comprised speech perception or covert rehearsal of a word, while a condition was the discrimination between tasks. Features were extracted using t-CWT and subsequently classified using a support vector machine. θ and γ phase amplitude coupling (PAC) was also assessed within tasks and across conditions between perception and covert activities (i.e. cross-task). All binary classification accuracies (80-90%) significantly exceeded chance level, supporting the use of t-CWT in determining relative oscillatory involvements. While the perception condition dynamically invoked all frequencies with more prominent θ and α activity, the covert condition favoured higher frequencies with significantly higher γ activity than perception. Moreover, the perception condition produced significant θ-γ PAC, possibly corroborating a reported linkage between syllabic and phonemic sampling. Although this coupling was found to be suppressed in the covert condition, we found significant cross-task coupling between perception θ and covert speech γ. Covert speech processing appears to be largely associated with higher frequencies of EEG. Importantly, the significant cross-task coupling between speech perception and covert speech, in the absence of within-task covert speech PAC, seems to support the notion that the γ- and θ-bands reflect, respectively, shared and unique encoding processes across tasks.
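
The t-CWT-plus-SVM pipeline can be sketched roughly as below for a single channel. Epochs, labels, wavelet choice, and feature counts are illustrative assumptions, and a real analysis would nest the feature selection inside the cross-validation folds to avoid leakage.

```python
import numpy as np
import pywt
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 256
epochs = np.random.randn(120, fs)         # trials x time, one channel (placeholder)
labels = np.repeat([0, 1], 60)            # e.g. 'blue' vs 'orange'

# Continuous wavelet transform of each epoch (Morlet), magnitude only
scales = np.arange(2, 64)
tfr = np.array([np.abs(pywt.cwt(ep, scales, "morl")[0]) for ep in epochs])
flat = tfr.reshape(len(epochs), -1)       # trials x (scale * time) features

# t-CWT: keep the time-frequency points with the strongest class separation
# (selection is done on all trials here for brevity; nest it in CV in practice)
t, _ = ttest_ind(flat[labels == 0], flat[labels == 1], axis=0)
features = flat[:, np.argsort(np.abs(t))[-200:]]

acc = cross_val_score(SVC(kernel="linear"), features, labels, cv=5)
print("mean cross-validated accuracy:", acc.mean())
```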


Subject(s)
Brain-Computer Interfaces , Speech Perception , Electroencephalography , Humans , Speech , Wavelet Analysis
20.
J Neurosci ; 42(8): 1477-1490, 2022 02 23.
Article in English | MEDLINE | ID: mdl-34983817

ABSTRACT

Listeners with sensorineural hearing loss (SNHL) struggle to understand speech, especially in noise, despite audibility compensation. These real-world suprathreshold deficits are hypothesized to arise from degraded frequency tuning and reduced temporal-coding precision; however, peripheral neurophysiological studies testing these hypotheses have been largely limited to in-quiet artificial vowels. Here, we measured single auditory-nerve-fiber responses to a connected speech sentence in noise from anesthetized male chinchillas with normal hearing (NH) or noise-induced hearing loss (NIHL). Our results demonstrated that temporal precision was not degraded following acoustic trauma, and furthermore that sharpness of cochlear frequency tuning was not the major factor affecting impaired peripheral coding of connected speech in noise. Rather, the loss of cochlear tonotopy, a hallmark of NH, contributed the most to both consonant-coding and vowel-coding degradations. Because distorted tonotopy varies in degree across etiologies (e.g., noise exposure, age), these results have important implications for understanding and treating individual differences in speech perception for people suffering from SNHL. SIGNIFICANCE STATEMENT: Difficulty understanding speech in noise is the primary complaint in audiology clinics and can leave people with sensorineural hearing loss (SNHL) suffering from communication difficulties that affect their professional, social, and family lives, as well as their mental health. We measured single-neuron responses from a preclinical SNHL animal model to characterize salient neural-coding deficits for naturally spoken speech in noise. We found the major mechanism affecting neural coding was not a commonly assumed factor, but rather a disruption of tonotopicity, the systematic mapping of acoustic frequency to cochlear place that is a hallmark of normal hearing. Because the degree of distorted tonotopy varies across hearing-loss etiologies, these results have important implications for precision audiology approaches to diagnosis and treatment of SNHL.


Subject(s)
Hearing Loss, Noise-Induced , Hearing Loss, Sensorineural , Speech Perception , Acoustic Stimulation/methods , Animals , Auditory Threshold/physiology , Hearing Loss, Sensorineural/etiology , Humans , Male , Noise , Speech , Speech Perception/physiology