Results 1 - 20 of 98
1.
Hum Brain Mapp ; 45(8): e26676, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38798131

ABSTRACT

Aphasia is a communication disorder that affects processing of language at different levels (e.g., acoustic, phonological, semantic). Recording brain activity via electroencephalography (EEG) while people listen to a continuous story makes it possible to analyze brain responses to acoustic and linguistic properties of speech. When the neural activity aligns with these speech properties, it is referred to as neural tracking. Even though measuring neural tracking of speech may present an interesting approach to studying aphasia in an ecologically valid way, it has not yet been investigated in individuals with stroke-induced aphasia. Here, we explored processing of acoustic and linguistic speech representations in individuals with aphasia in the chronic phase after stroke and in age-matched healthy controls. We found decreased neural tracking of acoustic speech representations (envelope and envelope onsets) in individuals with aphasia. In addition, word surprisal displayed decreased amplitudes in individuals with aphasia around 195 ms over frontal electrodes, although this effect was not corrected for multiple comparisons. These results show that there is potential to capture language processing impairments in individuals with aphasia by measuring neural tracking of continuous speech. However, more research is needed to validate these results. Nonetheless, this exploratory study shows that neural tracking of naturalistic, continuous speech presents a powerful approach to studying aphasia.
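Neural tracking of this kind is typically quantified by relating the EEG to a speech feature such as the amplitude envelope. As a rough illustration (a synthetic-data sketch of the general idea, not the authors' pipeline; the sampling rate, delay, and noise level are assumptions), one can correlate an EEG channel with the envelope over a range of time lags and take the peak:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 64                       # Hz; assumed common rate for EEG and envelope
n = 60 * fs                   # one minute of signal

# Toy speech envelope: smoothed noise stands in for slow amplitude modulations
env = np.convolve(rng.standard_normal(n), np.ones(16) / 16, mode="same")

# Toy EEG channel: the envelope delayed by ~100 ms plus background activity
delay = int(0.1 * fs)
eeg = np.roll(env, delay) + rng.standard_normal(n)

def tracking(eeg, env, max_lag):
    """Envelope tracking: Pearson r between EEG and lagged envelope, best lag."""
    corrs = [np.corrcoef(np.roll(env, lag), eeg)[0, 1]
             for lag in range(max_lag + 1)]
    best = int(np.argmax(corrs))
    return best / fs, corrs[best]

lag_s, r = tracking(eeg, env, max_lag=int(0.3 * fs))
print(round(lag_s, 3), round(r, 2))   # peak near the simulated 0.1 s delay
```

In real data the correlation is far smaller and forward/backward models with many lags are used, but the logic is the same: tracking is the degree to which lagged stimulus features explain the neural signal.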


Subjects
Aphasia, Electroencephalography, Stroke, Humans, Aphasia/physiopathology, Aphasia/etiology, Aphasia/diagnostic imaging, Male, Female, Middle Aged, Stroke/complications, Stroke/physiopathology, Aged, Speech Perception/physiology, Adult, Speech/physiology
2.
Ear Hear ; 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39085997

ABSTRACT

OBJECTIVES: This study investigated the efficiency of a multiplexed amplitude-modulated (AM) stimulus in eliciting auditory steady-state responses. The multiplexed AM stimulus was created by simultaneously modulating speech-shaped noise with three frequencies chosen to elicit different neural generators: 3.1, 40.1, and 102.1 Hz. For comparison, a single AM stimulus was created for each of these frequencies, resulting in three single AM conditions and one multiplex AM condition. DESIGN: Twenty-two bilaterally normal-hearing participants (18 females) listened for 8 minutes to each type of stimulus. The analysis compared the signal-to-noise ratios (SNRs) and amplitudes of the evoked responses to the single and multiplexed conditions. RESULTS: The results revealed that the SNRs elicited by single AM conditions were, on average, 1.61 dB higher than those evoked by the multiplexed AM condition (p < 0.05). The single conditions consistently produced a significantly higher SNR when examining various stimulus durations ranging from 1 to 8 minutes. Despite these SNR differences, the frequency spectrum was very similar across and within subjects. In addition, the sensor space patterns across the scalp demonstrated similar trends between the single and multiplexed stimuli for both SNR and amplitudes. Both the single and multiplexed conditions evoked significant auditory steady-state responses within subjects. On average, the multiplexed AM stimulus took 31 minutes for the lower bound of the 95% prediction interval to cross the significance threshold across all three frequencies. In contrast, the single AM stimuli took 45 minutes and 42 seconds. CONCLUSIONS: These findings show that the multiplexed AM stimulus is a promising method to reduce the recording time when simultaneously obtaining information from various neural generators.
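The multiplexing idea can be illustrated with a toy construction (the exact recipe here is an assumption, not the paper's; a white-noise carrier stands in for speech-shaped noise, and full-depth sinusoidal modulators are assumed): apply all three modulators to one carrier at once, then verify with a lock-in measure that the stimulus envelope carries energy at each modulation frequency simultaneously.

```python
import numpy as np

fs, dur = 8000, 1.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(1)
carrier = rng.standard_normal(t.size)   # stand-in for speech-shaped noise

def am(f_mod):
    """Full-depth sinusoidal amplitude modulator at f_mod Hz (assumed depth)."""
    return 1.0 + np.sin(2 * np.pi * f_mod * t)

mod_freqs = (3.1, 40.1, 102.1)

# Single-AM conditions: one modulator per stimulus
singles = {f: carrier * am(f) for f in mod_freqs}

# Multiplexed condition: all three modulators applied simultaneously
multiplexed = carrier * am(3.1) * am(40.1) * am(102.1)

def lockin(envelope, f):
    """Magnitude of the envelope's component at f Hz (lock-in detection)."""
    return abs(np.mean(envelope * np.exp(-2j * np.pi * f * t)))

envelope = np.abs(multiplexed)
for f in mod_freqs:
    print(f, round(lockin(envelope, f), 2))     # clear energy at each f
print("off", round(lockin(envelope, 70.0), 2))  # little energy off-frequency
```

The same lock-in logic, applied to the EEG rather than the stimulus, is what an auditory steady-state response analysis measures at each modulation frequency.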

3.
J Neurosci ; 2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36041851

ABSTRACT

When listening to continuous speech, the human brain can track features of the presented speech signal. It has been shown that neural tracking of acoustic features is a prerequisite for speech understanding and can predict speech understanding in controlled circumstances. However, the brain also tracks linguistic features of speech, which may be more directly related to speech understanding. We investigated acoustic and linguistic speech processing as a function of varying speech understanding by manipulating the speech rate. In this paradigm, acoustic and linguistic speech processing are affected simultaneously but in opposite directions: when the speech rate increases, more acoustic information per second is present, whereas tracking of linguistic information becomes more challenging because speech is less intelligible at higher speech rates. We measured the EEG of 18 participants (4 male) who listened to speech at various speech rates. As expected and confirmed by the behavioral results, speech understanding decreased with increasing speech rate. Accordingly, linguistic neural tracking decreased with increasing speech rate, but acoustic neural tracking increased. This indicates that neural tracking of linguistic representations can capture the gradual effect of decreasing speech understanding. In addition, increased acoustic neural tracking does not necessarily imply better speech understanding. This suggests that, although more challenging to measure because of the low signal-to-noise ratio, linguistic neural tracking may be a more direct predictor of speech understanding. Significance Statement: An increasingly popular method to investigate neural speech processing is to measure neural tracking. Although much research has been done on how the brain tracks acoustic speech features, linguistic speech features have received less attention. In this study, we disentangled acoustic and linguistic characteristics of neural speech tracking by manipulating the speech rate. A proper way of objectively measuring auditory and language processing paves the way toward clinical applications: an objective measure of speech understanding would allow for behavior-free evaluation, making it possible to evaluate hearing loss and adjust hearing aids based on brain responses. This objective measure would benefit populations from whom obtaining behavioral measures may be difficult, such as young children or people with cognitive impairments.

4.
Neuroimage ; 267: 119841, 2023 02 15.
Article in English | MEDLINE | ID: mdl-36584758

ABSTRACT

BACKGROUND: Older adults process speech differently, but it is not yet clear how aging affects different levels of processing natural, continuous speech, both in terms of bottom-up acoustic analysis and top-down generation of linguistic-based predictions. We studied natural speech processing across the adult lifespan via electroencephalography (EEG) measurements of neural tracking. GOALS: Our goals were to analyze the unique contribution of linguistic speech processing across the adult lifespan using natural speech, while controlling for the influence of acoustic processing, and to study acoustic processing across age. In particular, we focused on changes in spatial and temporal activation patterns in response to natural speech across the lifespan. METHODS: 52 normal-hearing adults between 17 and 82 years of age listened to a naturally spoken story while the EEG signal was recorded. We investigated the effect of age on acoustic and linguistic processing of speech. Because age correlated with hearing capacity and measures of cognition, we investigated whether the observed age effect is mediated by these factors. Furthermore, we investigated whether there is an effect of age on hemisphere lateralization and on spatiotemporal patterns of the neural responses. RESULTS: Our EEG results showed that linguistic speech processing declines with advancing age. Moreover, as age increased, the neural response latency to certain aspects of linguistic speech processing increased. Acoustic neural tracking also decreased with increasing age, which is at odds with the literature. In contrast to linguistic processing, older subjects showed shorter latencies for early acoustic responses to speech. No evidence was found for hemispheric lateralization in either younger or older adults during linguistic speech processing. Most of the observed aging effects on acoustic and linguistic processing were not explained by age-related decline in hearing capacity or cognition. However, our results suggest that the decrease in word-level linguistic neural tracking with advancing age is partially due to an age-related decline in cognition rather than a robust effect of age alone. CONCLUSION: Spatial and temporal characteristics of the neural responses to continuous speech change across the adult lifespan for both acoustic and linguistic speech processing. These changes may be traces of structural and/or functional change that occurs with advancing age.


Subjects
Speech Perception, Speech, Humans, Aged, Speech/physiology, Acoustic Stimulation/methods, Speech Perception/physiology, Electroencephalography/methods, Linguistics, Acoustics
5.
Ear Hear ; 44(3): 477-493, 2023.
Article in English | MEDLINE | ID: mdl-36534665

ABSTRACT

OBJECTIVES: Audiological rehabilitation includes sensory management, auditory training (AT), and counseling and can alleviate the negative consequences associated with (untreated) hearing impairment. AT aims at improving auditory skills through structured analytical (bottom-up) or synthetic (top-down) listening exercises. The evidence for AT to improve auditory outcomes of postlingually deafened adults with a cochlear implant (CI) remains a point of debate due to the relatively limited number of studies and methodological shortcomings. There is a general agreement that more rigorous scientific study designs are needed to determine the effectiveness, generalization, and consolidation of AT for CI users. The present study aimed to investigate the effectiveness of a personalized AT program compared to a nonpersonalized Active Control program with adult CI users in a stratified randomized controlled clinical trial. DESIGN: Off-task outcomes were sentence understanding in noise, executive functioning, and health-related quality of life. Participants were tested before and after 16 weeks of training and after a further 8 months without training. Participant expectations of the training program were assessed before the start of training. RESULTS: The personalized and nonpersonalized AT programs yielded similar results. Significant on-task improvements were observed. Moreover, AT generalized to improved speech understanding in noise for both programs. Half of the CI users reached a clinically relevant improvement in speech understanding in noise of at least 2 dB SNR post-training. These improvements were maintained 8 months after completion of the training. In addition, a significant improvement in quality of life was observed for participants in both treatment groups. Adherence to the training programs was high, and both programs were considered user-friendly. CONCLUSIONS: Training in both treatments yielded similar results. 
For half of the CI users, AT transferred to better performance, with learning generalizing to speech understanding in noise and quality of life. Our study supports previous findings that AT can be beneficial for some CI users.


Subjects
Cochlear Implantation, Cochlear Implants, Hearing Loss, Speech Perception, Adult, Humans, Quality of Life, Hearing Loss/rehabilitation
6.
J Neurosci ; 41(50): 10316-10329, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34732519

ABSTRACT

When listening to speech, our brain responses time-lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for acoustic properties. Therefore, the response to these linguistic representations might reflect unaccounted-for acoustic processing rather than language processing. Here, we evaluated the potential of several recently proposed linguistic representations as neural markers of speech comprehension. To do so, we investigated EEG responses to audiobook speech of 29 participants (22 females). We examined whether these representations contribute unique information over and beyond acoustic neural tracking and each other. Indeed, not all of these linguistic representations were significantly tracked after controlling for acoustic properties. However, phoneme surprisal, cohort entropy, word surprisal, and word frequency were all significantly tracked over and beyond acoustic properties. We also tested the generality of the associated responses by training on one story and testing on another. In general, the linguistic representations are tracked similarly across different stories spoken by different readers. These results suggest that these representations characterize the processing of the linguistic content of speech. SIGNIFICANCE STATEMENT For clinical applications, it would be desirable to develop a neural marker of speech comprehension derived from neural responses to continuous speech. Such a measure would allow for behavior-free evaluation of speech understanding; this would open doors toward better quantification of speech understanding in populations from whom obtaining behavioral measures may be difficult, such as young children or people with cognitive impairments, to allow better targeted interventions and better fitting of hearing devices.


Subjects
Comprehension/physiology, Linguistics, Speech Acoustics, Speech Perception/physiology, Electroencephalography/methods, Female, Humans, Male, Signal Processing, Computer-Assisted
7.
Eur J Neurosci ; 55(6): 1671-1690, 2022 03.
Article in English | MEDLINE | ID: mdl-35263814

ABSTRACT

We investigated the impact of hearing loss on the neural processing of speech. Using a forward modelling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers. Compared with their normal-hearing peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal-hearing listeners, the latency increased with increasing background noise level. However, for hearing-impaired listeners, this increase was not observed. Our results support the idea that the neural response latency indicates the efficiency of neural speech processing: more or different brain regions are involved in processing speech, which causes longer communication pathways in the brain. These longer communication pathways hamper the information integration among these brain regions, reflected in longer processing times. Altogether, this suggests decreased neural speech processing efficiency in hearing-impaired listeners, as more time and more or different brain regions are required to process speech. Our results suggest that this reduction in neural speech processing efficiency occurs gradually as hearing deteriorates. From our results, it is apparent that sound amplification does not solve hearing loss. Even when listening to speech in silence at a comfortable loudness, hearing-impaired listeners process speech less efficiently.


Subjects
Deafness, Hearing Loss, Sensorineural, Hearing Loss, Speech Perception, Adult, Humans, Noise, Speech, Speech Perception/physiology
8.
J Neurophysiol ; 126(3): 791-802, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34232756

ABSTRACT

Auditory processing is affected by advancing age and hearing loss, but the underlying mechanisms are still unclear. We investigated the effects of age and hearing loss on temporal processing of naturalistic stimuli in the auditory system. We used a recently developed objective measure of neural phase-locking to the fundamental frequency of the voice (f0), which uses continuous natural speech as a stimulus, that is, "f0-tracking." The f0-tracking responses from 54 normal-hearing and 14 hearing-impaired adults of varying ages were analyzed. The responses were evoked by a Flemish story with a male talker and contained contributions from both subcortical and cortical sources. Results indicated that advancing age was related to smaller responses with weaker cortical response contributions. This is consistent with an age-related decrease in neural phase-locking ability at frequencies in the range of the f0, possibly due to decreased inhibition in the auditory system. Conversely, hearing-impaired subjects displayed larger responses compared with age-matched normal-hearing controls. This was due to additional cortical response contributions in the 38- to 50-ms latency range, which were stronger for participants with more severe hearing loss. This is consistent with hearing-loss-induced cortical reorganization and recruitment of additional neural resources to aid in speech perception. NEW & NOTEWORTHY Previous studies disagree on the effects of age and hearing loss on the neurophysiological processing of the fundamental frequency of the voice (f0), in part due to confounding effects. Using a novel electrophysiological technique, natural speech stimuli, and a controlled study design, we quantified and disentangled the effects of age and hearing loss on neural f0 processing. We uncovered evidence for underlying neurophysiological mechanisms, including a cortical compensation mechanism for hearing loss, but not for age.


Subjects
Adaptation, Physiological, Cerebral Cortex/physiology, Hearing Loss/physiopathology, Speech Acoustics, Speech Perception, Adolescent, Adult, Aged, Aged, 80 and over, Auditory Pathways/physiology, Auditory Pathways/physiopathology, Cerebral Cortex/cytology, Cerebral Cortex/growth & development, Cerebral Cortex/physiopathology, Evoked Potentials, Auditory, Female, Humans, Male, Middle Aged, Reaction Time
9.
Eur J Neurosci ; 53(11): 3640-3653, 2021 06.
Article in English | MEDLINE | ID: mdl-33861480

ABSTRACT

Traditional electrophysiological methods to study temporal auditory processing of the fundamental frequency of the voice (f0) often use unnaturally repetitive stimuli. In this study, we investigated f0 processing of meaningful continuous speech. EEG responses evoked by stories in quiet were analysed with a novel method based on linear modelling that characterizes the neural tracking of the f0. We studied both the strength and the spatio-temporal properties of the f0-tracking response. Moreover, different samples of continuous speech (six stories by four speakers: two male and two female) were used to investigate the effect of voice characteristics on the f0 response. The results indicated that response strength is inversely related to f0 frequency and rate of f0 change throughout the story. As a result, the male-narrated stories in this study (low and steady f0) evoked stronger f0-tracking compared to female-narrated stories (high and variable f0), for which many responses were not significant. The spatio-temporal analysis revealed that f0-tracking response generators were not fixed in the brainstem but were voice-dependent as well. Voices with high and variable f0 evoked subcortically dominated responses with a latency between 7 and 12 ms. Voices with low and steady f0 evoked responses that are both subcortically (latency of 13-15 ms) and cortically (latency of 23-26 ms) generated, with the right primary auditory cortex as a likely cortical source. Finally, additional experiments revealed that response strength greatly improves for voices with strong higher harmonics, which is particularly useful to boost the small responses evoked by voices with high f0.


Subjects
Auditory Cortex, Speech Perception, Voice, Acoustic Stimulation, Auditory Perception, Brain Stem, Female, Humans, Male, Speech
10.
Neuroimage ; 204: 116211, 2020 01 01.
Article in English | MEDLINE | ID: mdl-31546052

ABSTRACT

A common problem in neural recordings is the low signal-to-noise ratio (SNR), particularly when using non-invasive techniques like magneto- or electroencephalography (M/EEG). To address this problem, experimental designs often include repeated trials, which are then averaged to improve the SNR or to infer statistics that can be used in the design of a denoising spatial filter. However, collecting enough repeated trials is often impractical and even impossible in some paradigms, while analyses on existing data sets may be hampered when these do not contain such repeated trials. Therefore, we present a data-driven method that takes advantage of the knowledge of the presented stimulus, to achieve a joint noise reduction and dimensionality reduction without the need for repeated trials. The method first estimates the stimulus-driven neural response using the given stimulus, which is then used to find a set of spatial filters that maximize the SNR based on a generalized eigenvalue decomposition. As the method is fully data-driven, the dimensionality reduction enables researchers to perform their analyses without having to rely on their knowledge of brain regions of interest, which increases accuracy and reduces the human factor in the results. In the context of neural tracking of a speech stimulus using EEG, our method resulted in more accurate short-term temporal response function (TRF) estimates, higher correlations between predicted and actual neural responses, and higher attention decoding accuracies compared to existing TRF-based decoding methods. We also provide an extensive discussion on the central role played by the generalized eigenvalue decomposition in various denoising methods in the literature, and address the conceptual similarities and differences with our proposed method.


Subjects
Algorithms, Attention/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Electroencephalography/methods, Electroencephalography/standards, Functional Neuroimaging/methods, Signal Processing, Computer-Assisted, Adolescent, Adult, Artifacts, Female, Functional Neuroimaging/standards, Humans, Male, Reproducibility of Results, Single-Case Studies as Topic, Speech Perception/physiology, Time Factors, Young Adult
11.
Eur J Neurosci ; 52(5): 3375-3393, 2020 09.
Article in English | MEDLINE | ID: mdl-32306466

ABSTRACT

When listening to natural speech, our brain activity tracks the slow amplitude modulations of speech, also called the speech envelope. Moreover, recent research has demonstrated that this neural envelope tracking can be affected by top-down processes. The present study was designed to examine whether neural envelope tracking is modulated by the effort that a person expends during listening. Five measures were included to quantify listening effort: two behavioral measures based on a novel dual-task paradigm, a self-report effort measure, and two neural measures related to phase synchronization and alpha power. Electroencephalography responses to sentences, presented at a wide range of subject-specific signal-to-noise ratios, were recorded in thirteen young, normal-hearing adults. A comparison of the five measures revealed different effects of listening effort as a function of speech understanding. Reaction times on the primary task and self-reported effort decreased with increasing speech understanding. In contrast, reaction times on the secondary task and alpha power showed a peak-shaped behavior with highest effort at intermediate speech understanding levels. With regard to neural envelope tracking, we found that the reaction times on the secondary task and self-reported effort explained a small part of the variability in theta-band envelope tracking. Speech understanding was found to strongly modulate neural envelope tracking. More specifically, our results demonstrated a robust increase in envelope tracking with increasing speech understanding. The present study provides new insights into the relations among different effort measures and highlights the potential of neural envelope tracking to objectively measure speech understanding in young, normal-hearing adults.


Subjects
Speech Perception, Adult, Auditory Perception, Humans, Reaction Time, Self Report, Speech
12.
Ear Hear ; 41(5): 1158-1171, 2020.
Article in English | MEDLINE | ID: mdl-32833388

ABSTRACT

OBJECTIVES: To investigate the mechanisms behind binaural and spatial effects in speech understanding for bimodal cochlear implant listeners; in particular, to test our hypothesis that their speech understanding can be characterized by means of monaural signal-to-noise ratios, rather than complex binaural cue processing such as binaural unmasking. DESIGN: We applied a semantic framework to characterize binaural and spatial effects in speech understanding on an extensive selection of the literature on bimodal listeners. In addition, we performed two experiments in which we measured speech understanding in different masker types: (1) using head-related transfer functions, and (2) while adapting the broadband signal-to-noise ratios in both ears independently. We simulated bimodal hearing with a vocoder in one ear (the cochlear implant side) and a low-pass filter in the other ear (the hearing aid side). By design, the cochlear implant side was the main contributor to speech understanding in our simulation. RESULTS: We found that spatial release from masking can be explained as a simple trade-off between a monaural change in signal-to-noise ratio at the cochlear implant side (quantified as the head shadow effect) and an opposite change in signal-to-noise ratio at the hearing aid side (quantified as a change in bimodal benefit). In simulated bimodal listeners, we found that for every 1 dB increase in signal-to-noise ratio at the hearing aid side, the bimodal benefit improved by approximately 0.4 dB in signal-to-noise ratio. CONCLUSIONS: Although complex binaural cue processing is often implicated when discussing speech intelligibility in adverse listening conditions, performance can simply be explained based on monaural signal-to-noise ratios for bimodal listeners.
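The reported trade-off lends itself to a back-of-the-envelope model (a sketch of the paper's linear relation; the 0.4 dB/dB slope is taken from the reported results, while the function name and the 6 dB scenario are illustrative):

```python
BENEFIT_SLOPE = 0.4  # dB of bimodal benefit per dB of SNR at the hearing-aid ear

def srm_db(delta_snr_ci_db, delta_snr_ha_db):
    """Spatial release from masking as a monaural trade-off: head-shadow gain
    at the CI ear plus the (possibly negative) change in bimodal benefit
    driven by the SNR change at the hearing-aid ear."""
    head_shadow = delta_snr_ci_db
    bimodal_benefit_change = BENEFIT_SLOPE * delta_snr_ha_db
    return head_shadow + bimodal_benefit_change

# Moving the masker toward the hearing-aid side: the CI ear gains 6 dB SNR
# while the HA ear loses 6 dB, so net SRM falls short of the head shadow alone
print(round(srm_db(6.0, -6.0), 2))  # 6.0 - 2.4 = 3.6 dB
```

The point of the model is exactly the paper's conclusion: both terms are monaural SNR effects, so no binaural unmasking term is needed to predict the spatial benefit.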


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Cues, Humans, Signal-To-Noise Ratio
13.
Ear Hear ; 41(6): 1586-1597, 2020.
Article in English | MEDLINE | ID: mdl-33136634

ABSTRACT

OBJECTIVES: Recently, an objective measure of speech intelligibility (SI), based on brain responses derived from the electroencephalogram (EEG), has been developed using isolated Matrix sentences as a stimulus. We investigated whether this objective measure of SI can also be used with natural speech as a stimulus, as this would be beneficial for clinical applications. DESIGN: We recorded the EEG in 19 normal-hearing participants while they listened to two types of stimuli: Matrix sentences and a natural story. Each stimulus was presented at different levels of SI by adding speech-weighted noise. SI was assessed in two ways for both stimuli: (1) behaviorally and (2) objectively, by reconstructing the speech envelope from the EEG using a linear decoder and correlating it with the acoustic envelope. We also calculated temporal response functions (TRFs) to investigate the temporal characteristics of the brain responses in the EEG channels covering different brain areas. RESULTS: For both stimulus types, the correlation between the speech envelope and the reconstructed envelope increased with increasing SI. In addition, correlations were higher for the natural story than for the Matrix sentences. Similar to the linear decoder analysis, TRF amplitudes increased with increasing SI for both stimuli. Remarkably, although SI remained unchanged under the no-noise and +2.5 dB SNR conditions, neural speech processing was affected by the addition of this small amount of noise: TRF amplitudes across the entire scalp decreased between 0 and 150 ms, while amplitudes between 150 and 200 ms increased in the presence of noise. TRF latency changes as a function of SI appeared to be stimulus-specific: the latency of the prominent negative peak in the early responses (50 to 300 ms) increased with increasing SI for the Matrix sentences, but remained unchanged for the natural story.
CONCLUSIONS: These results show (1) the feasibility of natural speech as a stimulus for the objective measure of SI; (2) that neural tracking of speech is enhanced using a natural story compared to Matrix sentences; and (3) that noise and the stimulus type can change the temporal characteristics of the brain responses. These results might reflect the integration of incoming acoustic features and top-down information, suggesting that the choice of the stimulus has to be considered based on the intended purpose of the measurement.
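The backward-decoder analysis described under DESIGN can be sketched on synthetic data (a minimal illustration: ridge regression from time-lagged EEG to the envelope, evaluated on held-out data; the sampling rate, lags, delays, and regularization are assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 64, 120 * 64                      # 2 minutes at an assumed 64 Hz

# Toy envelope and 4 EEG channels that carry it at different delays
env = np.convolve(rng.standard_normal(n), np.ones(16) / 16, mode="same")
eeg = np.stack([np.roll(env, d) + rng.standard_normal(n)
                for d in (4, 6, 8, 10)])  # delays in samples

def lagged(x, max_lag):
    """Stack channels at lags 0..max_lag: decode env(t) from eeg(t..t+lag)."""
    return np.vstack([np.roll(x, -lag, axis=1) for lag in range(max_lag + 1)])

X = lagged(eeg, max_lag=16)
half = n // 2

# Ridge-regularized backward model, trained on the first half of the data
R = X[:, :half] @ X[:, :half].T / half
b = X[:, :half] @ env[:half] / half
g = np.linalg.solve(R + 0.1 * np.eye(len(R)), b)

# Reconstruction quality on the held-out second half
recon = g @ X[:, half:]
r = np.corrcoef(recon, env[half:])[0, 1]
print(round(r, 2))
```

In the study, this correlation `r` is the objective SI measure: it rises as the stimulus becomes more intelligible, because the EEG then carries more envelope information for the decoder to exploit.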


Subjects
Speech Intelligibility, Speech Perception, Acoustic Stimulation, Auditory Perception, Electroencephalography, Humans, Noise
14.
J Acoust Soc Am ; 148(2): 815, 2020 08.
Article in English | MEDLINE | ID: mdl-32873012

ABSTRACT

Cochlear implants (CIs) often replace acoustic temporal fine structure by a fixed-rate pulse train. If the pulse timing is arbitrary (that is, not based on the phase information of the acoustic signal), temporal information is quantized by the pulse period. This temporal quantization is probably imperceptible with current clinical devices. However, it could result in large temporal jitter for strategies that aim to improve bilateral and bimodal CI users' perception of interaural time differences (ITDs), such as envelope enhancement. In an experiment with 16 normal-hearing listeners, it is shown that such jitter could deteriorate ITD perception for temporal quantization that corresponds to the often-used stimulation rate of 900 pulses per second (pps): the just-noticeable difference in ITD with quantization was 177 µs as compared to 129 µs without quantization. For smaller quantization step sizes, no significant deterioration of ITD perception was found. In conclusion, the binaural system can only average out the effect of temporal quantization to some extent, such that pulse timing should be well-considered. As this psychophysical procedure was somewhat unconventional, different procedural parameters were compared by simulating a number of commonly used two-down one-up adaptive procedures in Appendix B.
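The quantization effect can be simulated under a simple assumption (each ear's pulse grid has an independent, uniformly random phase; this is an illustrative model, not the paper's exact stimulation scheme): snap each ear's event time to its grid and inspect the distribution of the resulting ITDs.

```python
import numpy as np

RATE_PPS = 900
PERIOD_US = 1e6 / RATE_PPS          # ~1111 microseconds between pulses
rng = np.random.default_rng(4)

def quantized_itds(itd_us, n_trials=10000):
    """ITDs after snapping each ear's event to its own randomly phased
    pulse grid (the 'arbitrary pulse timing' case)."""
    phase_l = rng.uniform(0.0, PERIOD_US, n_trials)
    phase_r = rng.uniform(0.0, PERIOD_US, n_trials)
    snap = lambda t, ph: np.round((t - ph) / PERIOD_US) * PERIOD_US + ph
    return snap(itd_us, phase_r) - snap(0.0, phase_l)

itds = quantized_itds(500.0)
# Quantization preserves the mean ITD but adds jitter on the order of the
# pulse period, which is what can degrade ITD discrimination
print(round(itds.mean()), round(itds.std()))
```

Under this model the mean ITD survives, but single-event ITDs jitter by hundreds of microseconds at 900 pps, so the binaural system can only recover the cue by averaging over many pulses, consistent with the elevated just-noticeable difference reported above.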


Subjects
Cochlear Implantation, Cochlear Implants, Sound Localization, Acoustic Stimulation, Hearing Tests
15.
J Neurophysiol ; 122(2): 601-615, 2019 08 01.
Article in English | MEDLINE | ID: mdl-31141449

ABSTRACT

When we grow older, understanding speech in noise becomes more challenging. Research has demonstrated the role of auditory temporal and cognitive deficits in these age-related speech-in-noise difficulties. To better understand the underlying neural mechanisms, we recruited young, middle-aged, and older normal-hearing adults and investigated the interplay between speech understanding, cognition, and neural tracking of the speech envelope using electroencephalography. The stimuli consisted of natural speech masked by speech-weighted noise or a competing talker and were presented at several subject-specific speech understanding levels. In addition to running speech, we recorded auditory steady-state responses at low modulation frequencies to assess the effect of age on nonspeech sounds. The results show that healthy aging resulted in a supralinear increase in the speech reception threshold, i.e., worse speech understanding, most pronounced for the competing talker. Similarly, advancing age was associated with a supralinear increase in envelope tracking, with a pronounced enhancement for older adults. Additionally, envelope tracking was found to increase with speech understanding, most apparent for older adults. Because we found that worse cognitive scores were associated with enhanced envelope tracking, our results support the hypothesis that enhanced envelope tracking in older adults results from higher activation of speech-processing brain regions compared with younger adults. From a cognitive perspective, this could reflect the inefficient use of cognitive resources, often observed in behavioral studies. Interestingly, the opposite effect of age was found for auditory steady-state responses, suggesting a complex interplay of different neural mechanisms with advancing age. NEW & NOTEWORTHY We measured neural tracking of the speech envelope across the adult lifespan and found a supralinear increase in envelope tracking with age. Using a more ecologically valid approach than auditory steady-state responses, we found that young, middle-aged, and older normal-hearing adults showed an increase in envelope tracking with increasing speech understanding, and that this association was stronger for older adults.
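Envelope tracking, as studied above, is commonly quantified by relating the recorded EEG to the acoustic envelope of the speech stimulus. The sketch below is purely illustrative (synthetic signals, a simple Pearson correlation as the tracking score) and is not the study's actual analysis pipeline, which is not detailed in this abstract:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs = 64  # Hz; EEG and envelope are often downsampled to a low rate

# Toy amplitude-modulated "speech" signal and its Hilbert envelope
t = np.arange(0, 60, 1 / fs)                       # 60 s of signal
carrier = np.sin(2 * np.pi * 7 * t)
modulation = 0.5 * (1 + np.sin(2 * np.pi * 0.5 * t))
speech = carrier * modulation
envelope = np.abs(hilbert(speech))                 # ~ the modulation

# Simulated single-channel EEG: attenuated envelope buried in noise
eeg = 0.3 * envelope + rng.normal(0, 0.5, envelope.size)

def tracking_score(eeg, envelope):
    """Neural tracking score: Pearson correlation EEG vs. envelope."""
    return np.corrcoef(eeg, envelope)[0, 1]

r = tracking_score(eeg, envelope)
print(f"envelope tracking r = {r:.2f}")
```

Real analyses typically use linear forward or backward models with multiple EEG channels and time lags rather than a single-channel correlation, but the quantity being estimated is the same: how strongly the neural signal covaries with the stimulus envelope.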


Subjects
Aging/physiology, Cerebral Cortex/physiology, Comprehension/physiology, Speech Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Aged, Aged, 80 and over, Electroencephalography, Female, Humans, Male, Middle Aged, Perceptual Masking/physiology, Psycholinguistics, Young Adult
16.
Ear Hear; 40(3): 545-554, 2019.
Article in English | MEDLINE | ID: mdl-30299342

ABSTRACT

OBJECTIVES: To establish a framework to unambiguously define and relate the different spatial effects in speech understanding: head shadow, redundancy, squelch, spatial release from masking (SRM), and so on. Next, to investigate the contribution of interaural time and level differences to these spatial effects in speech understanding and how this is influenced by the type of masking noise. DESIGN: In our framework, SRM is uniquely characterized as a linear combination of head shadow, binaural redundancy, and binaural squelch. The latter two terms are combined into one binaural term, which we define as binaural contrast: a benefit of interaural differences. In this way, SRM is a simple sum of a monaural and a binaural term. We used the framework to quantify these spatial effects in 10 listeners with normal hearing. The participants performed speech intelligibility tasks in different spatial setups. We used head-related transfer functions to manipulate the presence of interaural time and level differences. We used three spectrally matched masker types: stationary speech-weighted noise, a competing talker, and speech-weighted noise that was modulated with the broadband temporal envelope of the competing talker. RESULTS: We found that (1) binaural contrast was increased by interaural time differences, but reduced by interaural level differences, irrespective of masker type, and (2) large redundancy (the benefit of having identical information in two ears) could reduce binaural contrast and thus also reduce SRM. CONCLUSIONS: Our framework yielded new insights into binaural processing in speech intelligibility. First, interaural level differences disturb speech intelligibility in realistic listening conditions. Therefore, to optimize speech intelligibility in hearing aids, it is more beneficial to improve monaural signal-to-noise ratios than to preserve interaural level differences.
Second, although redundancy is mostly ignored when considering spatial hearing, it might explain reduced SRM in some cases.
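In the framework above, SRM decomposes into a monaural head-shadow term plus a binaural-contrast term. With speech reception thresholds (SRTs, in dB SNR; lower is better) measured in the relevant spatial configurations, the bookkeeping is simple arithmetic. The sketch below uses one common operationalization of these terms and purely illustrative numbers, not the study's data:

```python
# SRTs in dB SNR; lower values mean better speech understanding.
# All numbers are illustrative, not measured data.
srt_colocated     = -2.0   # speech and masker both in front, both ears
srt_separated_mon = -6.0   # masker moved aside, better ear alone
srt_separated_bin = -8.5   # masker moved aside, both ears

# Monaural benefit of the acoustic head shadow at the better ear
head_shadow = srt_colocated - srt_separated_mon           # 4.0 dB

# Binaural contrast: benefit of interaural differences on top of
# the better-ear signal (combines redundancy and squelch)
binaural_contrast = srt_separated_mon - srt_separated_bin  # 2.5 dB

# Total spatial release from masking
srm = srt_colocated - srt_separated_bin                    # 6.5 dB

# SRM is exactly the sum of the monaural and binaural terms
assert abs(srm - (head_shadow + binaural_contrast)) < 1e-9
print(head_shadow, binaural_contrast, srm)
```

The decomposition is telescoping by construction, which is what makes SRM "a simple sum of a monaural and a binaural term" in the authors' framework.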


Subjects
Noise, Perceptual Masking/physiology, Spatial Processing/physiology, Speech Perception/physiology, Speech, Adult, Healthy Volunteers, Humans, Young Adult
17.
Int J Audiol; 58(3): 132-140, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30513024

ABSTRACT

OBJECTIVE: The recent integration of automated real-ear measurements (REM) into fitting software facilitates the hearing aid fitting process. One such fitting strategy, TargetMatch (TM), was evaluated: test-retest reliability and matching accuracy were quantified and compared with a REM-based fitting with manual adjustment. We also investigated whether TM leads to better perceptual outcomes than a FirstFit (FF) approach, which uses software predictions only. DESIGN AND STUDY SAMPLE: Ten hearing-impaired participants were enrolled in a counterbalanced, single-blinded cross-over study comparing TM and FF. Aided audibility, speech intelligibility, and real-life benefit were assessed. Repeated measurements of both TM and REMs with manual adjustment were performed. RESULTS: Compared with a REM-based fitting with manual adjustment, TM had higher test-retest reliability. TM also outperformed the other fitting strategies in matching accuracy. Compared with FF, improved aided audibility and real-life benefit were found; speech intelligibility did not improve. CONCLUSIONS: Preliminary data suggest that automated REMs increase the likelihood of meeting amplification targets compared with a FF. REMs integrated in the fitting software provide additional reliability and accuracy compared to traditional REMs. Findings need to be verified in a larger and more varied sample.


Subjects
Hearing Aids, Software, Adult, Aged, Cross-Over Studies, Humans, Middle Aged, Speech Discrimination Tests, Young Adult
18.
Ear Hear; 39(2): 260-268, 2018.
Article in English | MEDLINE | ID: mdl-28857787

ABSTRACT

OBJECTIVES: Auditory steady state responses (ASSRs) are used in clinical practice for objective hearing assessments. The response is called steady state because it is assumed to be stable over time, and because it is evoked by a stimulus with a certain periodicity, which leads to discrete frequency components that are stable in amplitude and phase over time. However, the stimuli commonly used to evoke ASSRs are also known to be able to induce loudness adaptation behaviorally. Researchers and clinicians using ASSRs assume that the response remains stable over time. This study investigates (1) the stability of ASSR amplitudes over time, within one recording, and (2) whether loudness adaptation can be reflected in ASSRs. DESIGN: ASSRs were measured from 14 normal-hearing participants. The ASSRs were evoked by the stimuli that caused the most loudness adaptation in a previous behavioral study, that is, mixed-modulated sinusoids with carrier frequencies of either 500 or 2000 Hz, a modulation frequency of 40 Hz, and a low sensation level of 30 dB SL. For each carrier frequency and participant, 40 repetitions of 92 sec recordings were made. Two types of analyses were used to investigate the ASSR amplitudes over time: the traditional fast Fourier transform and a novel Kalman filtering approach. Robust correlations between the ASSR amplitudes and behavioral loudness adaptation ratings were also calculated. RESULTS: Overall, ASSR amplitudes were stable. Over all individual recordings, the median change of the amplitudes over time was -0.0001 µV/s. Based on group analysis, a significant but very weak decrease in amplitude over time was found, with the decrease in amplitude over time around -0.0002 µV/s. Correlation coefficients between ASSR amplitudes and behavioral loudness adaptation ratings were significant but low to moderate, with r = 0.27 and r = 0.39 for the 500 and 2000 Hz carrier frequency, respectively.
CONCLUSIONS: The decrease in amplitude of ASSRs over time (92 sec) is small. Consequently, it is safe to use ASSRs in clinical practice, and additional correction factors for objective hearing assessments are not needed. Because only small decreases in amplitudes were found, loudness adaptation is probably not reflected by the ASSRs.
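The FFT analysis referred to above amounts to reading off the spectral amplitude at the 40 Hz modulation frequency of the stimulus. A minimal sketch on a synthetic recording (the sampling rate, response amplitude, and noise level are assumed for illustration; this is not the study's Kalman-filter analysis):

```python
import numpy as np

fs = 1000          # Hz, assumed EEG sampling rate
dur = 92           # s, matching the recording length in the study
f_mod = 40         # Hz, modulation frequency of the stimulus

t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic EEG: a 0.5 µV, 40 Hz steady-state response buried in noise
eeg = 0.5 * np.sin(2 * np.pi * f_mod * t) + rng.normal(0, 5, t.size)

# One-sided amplitude spectrum; with an integer number of stimulus
# cycles in the window, the response falls exactly into one bin
spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
bin_40 = int(np.argmin(np.abs(freqs - f_mod)))
assr_amplitude = spectrum[bin_40]
print(f"ASSR amplitude at {f_mod} Hz: {assr_amplitude:.2f} µV")
```

Tracking the amplitude of this single frequency bin across repetitions (or, as in the study, with a Kalman filter over time) is what allows a statement like "the median change of the amplitudes over time was -0.0001 µV/s".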


Subjects
Auditory Perception/physiology, Hearing Tests/methods, Hearing/physiology, Acoustic Stimulation, Auditory Threshold, Electroencephalography, Female, Humans, Male, Reference Values, Young Adult
19.
J Acoust Soc Am; 143(6): 3720, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29960470

ABSTRACT

Binaural loudness balancing is performed in research and clinical practice when fitting bilateral hearing devices, and is particularly important for bimodal listeners, who have a bilateral combination of a hearing aid and a cochlear implant. In this study, two psychophysical binaural loudness balancing procedures were compared across two experiments. In the first experiment, the effect of procedure (adaptive or adjustment) on the balanced loudness levels was investigated using noise-band stimuli, some of which had a frequency shift to simulate bimodal hearing. In the second experiment, the adjustment procedure was extended: the effect of its starting level was investigated, and the two procedures were again compared for different reference levels and carrier frequencies. Fourteen normal-hearing volunteers participated in the first experiment, and 38 in the second. Although the final averaged loudness-balanced levels of both procedures were similar, the adjustment procedure yielded smaller standard deviations across four test sessions. The results of experiment 2 demonstrated that, in order to avoid bias, the adjustment procedure should be conducted twice, once starting from below and once from above the expected balanced loudness level.


Subjects
Cochlear Implants, Hearing Aids, Loudness Perception, Acoustic Stimulation, Adaptation, Psychological, Electric Stimulation, Female, Humans, Male, Psychoacoustics, Young Adult
20.
J Acoust Soc Am; 144(2): 940, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30180705

ABSTRACT

Different computational models have been developed to study interaural time difference (ITD) perception. However, only a few have used a physiologically inspired architecture to study ITD discrimination. Furthermore, they do not include aspects of hearing impairment. In this work, a framework was developed to predict ITD thresholds in listeners with normal and impaired hearing. It combines the physiologically inspired model of the auditory periphery proposed by Zilany, Bruce, Nelson, and Carney [(2009). J. Acoust. Soc. Am. 126(5), 2390-2412] as a front end with a coincidence detection stage and a neurometric decision device as a back end. It was validated by comparing its predictions against behavioral data for narrowband stimuli from the literature. The framework is able to model ITD discrimination of normal-hearing and hearing-impaired listeners at a group level. Additionally, it was used to explore the effect of different proportions of outer- and inner-hair cell impairment on ITD discrimination.
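The coincidence-detection back end described above can be caricatured as finding the internal delay at which the two ears' signals agree best, i.e., the peak of a cross-correlation over physiologically plausible lags. The toy sketch below (synthetic narrowband noise, an assumed 500 µs ITD) illustrates only this back-end idea; it does not implement the Zilany et al. periphery model:

```python
import numpy as np

fs = 44100                         # Hz, audio sampling rate (assumed)
itd_true = 500e-6                  # s, a plausible ITD of 500 µs
shift = int(round(itd_true * fs))  # ITD in samples (~22)

t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(2)
# Narrowband-ish noise token as the left-ear signal (smoothed noise)
left = np.convolve(rng.normal(size=t.size), np.ones(64) / 64, mode="same")
right = np.roll(left, shift)       # right ear lags by the ITD

# Coincidence detection caricature: for each candidate internal delay
# within ±1 ms, measure agreement between the ears; the best delay is
# the model's ITD estimate.
max_lag = int(1e-3 * fs)
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
itd_est = lags[int(np.argmax(xcorr))] / fs
print(f"estimated ITD: {itd_est * 1e6:.0f} µs")
```

A full model like the one in the paper replaces the raw waveforms with simulated auditory-nerve discharge patterns (where hair-cell impairment can be introduced) and adds a neurometric decision stage to turn such internal estimates into predicted discrimination thresholds.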


Subjects
Auditory Perception, Ear/physiology, Hearing Loss/physiopathology, Models, Neurological, Reaction Time, Adult, Auditory Pathways/physiology, Auditory Pathways/physiopathology, Ear/physiopathology, Female, Functional Laterality, Humans, Male, Middle Aged