Results 1 - 20 of 41
1.
PLoS One ; 19(2): e0297826, 2024.
Article in English | MEDLINE | ID: mdl-38330068

ABSTRACT

Perception of sounds and speech involves structures in the auditory brainstem that rapidly process ongoing auditory stimuli. The role of these structures in speech processing can be investigated by measuring their electrical activity using scalp-mounted electrodes. However, typical analysis methods involve averaging neural responses to many short repetitive stimuli that bear little relevance to daily listening environments. Recently, subcortical responses to more ecologically relevant continuous speech were detected using linear encoding models. These methods estimate the temporal response function (TRF), which is a regression model that minimises the error between the measured neural signal and a predictor derived from the stimulus. Using predictors that model the highly non-linear peripheral auditory system may improve linear TRF estimation accuracy and peak detection. Here, we compare predictors from both simple and complex peripheral auditory models for estimating brainstem TRFs on electroencephalography (EEG) data from 24 participants listening to continuous speech. We also investigate the data length required for estimating subcortical TRFs, and find that around 12 minutes of data is sufficient for clear wave V peaks (>3 dB SNR) to be seen in nearly all participants. Interestingly, predictors derived from simple filterbank-based models of the peripheral auditory system yield TRF wave V peak SNRs that are not significantly different from those estimated using a complex model of the auditory nerve, provided that the nonlinear effects of adaptation in the auditory system are appropriately modelled. Crucially, computing predictors from these simpler models is more than 50 times faster compared to the complex model. This work paves the way for efficient modelling and detection of subcortical processing of continuous speech, which may lead to improved diagnosis metrics for hearing impairment and assistive hearing technology.
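The TRF described above is a linear mapping from a stimulus-derived predictor to the EEG, estimated by regularised regression over a range of time lags. A minimal sketch on simulated data (the lag count, ridge parameter, and signal lengths are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
predictor = rng.standard_normal(5000)         # stimulus-derived predictor (simulated)
true_trf = np.exp(-np.arange(40) / 8.0)       # toy ground-truth kernel
eeg = np.convolve(predictor, true_trf)[:5000] + 0.1 * rng.standard_normal(5000)

n_lags = 40
# Design matrix: column k holds the predictor delayed by k samples
X = np.column_stack([np.roll(predictor, k) for k in range(n_lags)])
X[:n_lags] = 0                                # discard wrap-around from np.roll

lam = 1e-2                                    # ridge parameter (cross-validated in practice)
trf = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)
```

In practice the predictor would come from a peripheral auditory model, as the abstract describes, and the regularisation would be tuned by cross-validation.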


Subjects
Speech Perception, Speech, Humans, Speech Perception/physiology, Hearing/physiology, Brain Stem/physiology, Electroencephalography/methods, Acoustic Stimulation
2.
Sci Rep ; 13(1): 22657, 2023 12 19.
Article in English | MEDLINE | ID: mdl-38114599

ABSTRACT

Vibrotactile stimulation is believed to enhance auditory speech perception, offering potential benefits for cochlear implant (CI) users who may utilize compensatory sensory strategies. Our study advances previous research by directly comparing tactile speech intelligibility enhancements in normal-hearing (NH) and CI participants, using the same paradigm. Moreover, we assessed tactile enhancement considering stimulus non-specific, excitatory effects through an incongruent audio-tactile control condition that did not contain any speech-relevant information. In addition to this incongruent audio-tactile condition, we presented sentences in an auditory only and a congruent audio-tactile condition, with the congruent tactile stimulus providing low-frequency envelope information via a vibrating probe on the index fingertip. The study involved 23 NH listeners and 14 CI users. In both groups, significant tactile enhancements were observed for congruent tactile stimuli (5.3% for NH and 5.4% for CI participants), but not for incongruent tactile stimulation. These findings replicate previously observed tactile enhancement effects. Juxtaposing our study with previous research, the informational content of the tactile stimulus emerges as a modulator of intelligibility: Generally, congruent stimuli enhanced, non-matching tactile stimuli reduced, and neutral stimuli did not change test outcomes. We conclude that the temporal cues provided by congruent vibrotactile stimuli may aid in parsing continuous speech signals into syllables and words, consequently leading to the observed improvements in intelligibility.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Humans, Speech Intelligibility, Acoustic Stimulation, Auditory Perception/physiology, Speech Perception/physiology
3.
Semin Hear ; 44(2): 95-105, 2023 May.
Article in English | MEDLINE | ID: mdl-37122882

ABSTRACT

About one-third of all recently published studies on listening effort have used at least one physiological measure, providing evidence of the popularity of such measures in listening effort research. However, the specific measures employed, as well as the rationales used to justify their inclusion, vary greatly between studies, leading to a literature that is fragmented and difficult to integrate. A unified approach that assesses multiple psychophysiological measures justified by a single rationale would be preferable because it would advance our understanding of listening effort. However, such an approach comes with a number of challenges, including the need to develop a clear definition of listening effort that links to specific physiological measures, customized equipment that enables the simultaneous assessment of multiple measures, awareness of problems caused by the different timescales on which the measures operate, and statistical approaches that minimize the risk of type-I error inflation. This article discusses in detail the various obstacles for combining multiple physiological measures in listening effort research and provides recommendations on how to overcome them.

4.
J Acoust Soc Am ; 152(6): 3396, 2022 12.
Article in English | MEDLINE | ID: mdl-36586853

ABSTRACT

Music listening experiences can be enhanced with tactile vibrations. However, it is not known which parameters of the tactile vibration must be congruent with the music to enhance it. Devices that aim to enhance music with tactile vibrations often require coding an acoustic signal into a congruent vibrotactile signal, so understanding which of these audio-tactile congruences matter is crucial. Participants were presented with a simple sine wave melody through supra-aural headphones and a haptic actuator held between the thumb and forefinger. Incongruent versions of the stimuli were made by randomizing physical parameters of the tactile stimulus independently of the auditory stimulus. Participants were instructed to rate the congruent stimuli against the incongruent stimuli based on preference. It was found that making the intensity of the tactile stimulus incongruent with the intensity of the auditory stimulus, as well as misaligning the two modalities in time, had the largest negative effect on ratings for the melody used. Future vibrotactile music enhancement devices can use time alignment and intensity congruence as a baseline coding strategy against which improved strategies can be tested.


Subjects
Music, Touch Perception, Humans, Touch, Auditory Perception, Vibration
5.
iScience ; 25(5): 104181, 2022 May 20.
Article in English | MEDLINE | ID: mdl-35494228

ABSTRACT

Sounds reach the ears as a mixture of energy generated by different sources. Listeners extract cues that distinguish different sources from one another, including the similarity of the sounds arriving at the two ears, the interaural coherence (IAC). Here, we find that listeners cannot reliably distinguish two completely interaurally coherent sounds from a single sound with reduced IAC. Pairs of sounds heard toward the front were readily confused with single sounds with high IAC, whereas those heard to the sides were confused with single sounds with low IAC. Sounds carrying supra-ethological spatial cues are perceived as more diffuse than their IAC alone can account for, an effect captured by a computational model comprising a restricted, sound-frequency-dependent distribution of auditory-spatial detectors. We observed elevated cortical hemodynamic responses for sounds with low IAC, suggesting that the ambiguity elicited by sounds with low interaural similarity imposes an elevated cortical load.
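IAC is commonly quantified as the maximum of the normalized cross-correlation between the two ear signals over a small lag range. A toy illustration on simulated signals (signal length, lag range, and mixing weights are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
common = rng.standard_normal(4000)            # simulated source reaching both ears
noise = rng.standard_normal(4000)             # uncorrelated component

def iac(left, right, max_lag=32):
    """Maximum of the normalized cross-correlation over a small lag range."""
    left = left - left.mean()
    right = right - right.mean()
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return max(np.sum(left * np.roll(right, lag)) / denom
               for lag in range(-max_lag, max_lag + 1))

coherent = iac(common, common)                    # identical signals: IAC near 1
mixed = iac(common, 0.7 * common + 0.7 * noise)   # partially decorrelated: IAC near 0.7
```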

6.
Int J Audiol ; 61(2): 166-172, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34106802

ABSTRACT

OBJECTIVE: To develop and validate an Australian version of a behavioural test for assessing listening task difficulty at high speech intelligibility levels. DESIGN: In the SWIR-Aus test, listeners perform two tasks: identifying the last word of each of seven sentences in a list and recalling the identified words after each list. First, the test material was developed by creating seven-sentence lists with similar final-word features. Then, for the validation, participants' performance on the SWIR-Aus test was compared with a binary mask noise reduction algorithm switched on and off. STUDY SAMPLE: All participants in this study had normal hearing thresholds. Nine participants (23.8-56.0 years) took part in the characterisation of the speech material. Another thirteen participants (18.4-59.1 years) took part in a pilot test to determine the SNR to use at the validation stage. Finally, twenty-four new participants (20.0-56.9 years) took part in the validation of the test. RESULTS: The validation showed that recall and identification scores were significantly better when the binary mask noise reduction algorithm was on than when it was off. CONCLUSIONS: The SWIR-Aus test was developed using Australian speech material and can be used for assessing task difficulty at high speech intelligibility levels.


Subjects
Speech Intelligibility, Speech Perception, Auditory Perception, Australia, Humans, Noise/adverse effects
7.
Neurophotonics ; 8(2): 025008, 2021 Apr.
Article in English | MEDLINE | ID: mdl-34036117

ABSTRACT

Significance: Functional near-infrared spectroscopy (fNIRS) is an increasingly popular tool in auditory research, but the range of analysis procedures employed across studies may complicate the interpretation of data. Aim: We aim to assess the impact of different analysis procedures on the morphology, detection, and lateralization of auditory responses in fNIRS. Specifically, we determine whether averaging or generalized linear model (GLM)-based analysis generates different experimental conclusions when applied to a block-protocol design. The impact of parameter selection of GLMs on detecting auditory-evoked responses was also quantified. Approach: 17 listeners were exposed to three commonly employed auditory stimuli: noise, speech, and silence. A block design, comprising sounds of 5 s duration and 10 to 20 s silent intervals, was employed. Results: Both analysis procedures generated similar response morphologies and amplitude estimates, and both indicated that responses to speech were significantly greater than to noise or silence. Neither approach indicated a significant effect of brain hemisphere on responses to speech. Methods to correct for systemic hemodynamic responses using short channels improved detection at the individual level. Conclusions: Consistent with theoretical considerations, simulations, and other experimental domains, GLM and averaging analyses generate the same group-level experimental conclusions. We release this dataset publicly for use in future development and optimization of algorithms.
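The GLM route compared above can be illustrated in miniature: a boxcar marking the stimulus blocks is convolved with a hemodynamic response function and regressed against the measured channel. This sketch uses an invented gamma-shaped HRF and simulated data, not the study's pipeline:

```python
import numpy as np

fs = 10.0                                     # Hz, assumed fNIRS sampling rate
t = np.arange(0, 200, 1 / fs)                 # 200 s recording

def hrf(tt, peak=6.0):
    # gamma-like hemodynamic response, an assumption for illustration
    return (tt / peak) ** 2 * np.exp(-tt / peak)

boxcar = np.zeros_like(t)
for onset in (20, 60, 100, 140):              # 5 s stimulus blocks
    boxcar[int(onset * fs):int((onset + 5) * fs)] = 1.0

regressor = np.convolve(boxcar, hrf(np.arange(0, 30, 1 / fs)))[:t.size]
rng = np.random.default_rng(2)
signal = 0.8 * regressor + 0.3 * rng.standard_normal(t.size)  # simulated channel

X = np.column_stack([regressor, np.ones_like(t)])  # regressor + intercept
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)  # beta[0] estimates the response
```

The averaging route would instead epoch the channel around each onset and take the mean; as the abstract reports, the two yield the same group-level conclusions for block designs.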

8.
J Neural Eng ; 18(4)2021 06 04.
Article in English | MEDLINE | ID: mdl-34010826

ABSTRACT

Objective. Stimulus-elicited changes in electroencephalography (EEG) recordings can be represented using Fourier magnitude and phase features (Makeig et al. 2004 Trends Cogn. Sci. 8 204-10). The present study aimed to quantify how much information about hearing responses is contained in the magnitude, quantified by event-related spectral perturbations (ERSPs), and in the phase, quantified by inter-trial coherence (ITC). By testing whether one feature contained more information and whether this information was mutually exclusive between the features, we aimed to relate specific EEG magnitude and phase features to hearing perception. Approach. EEG responses were recorded from 20 adults presented with acoustic stimuli, and 20 adult cochlear implant users presented with electrical stimuli. Both groups were presented with short, 50 ms stimuli at varying intensity levels relative to their hearing thresholds. Extracted ERSP and ITC features were inputs for a linear discriminant analysis classifier (Wong et al. 2016 J. Neural Eng. 13 036003). The classifier then predicted whether the EEG signal contained information about the sound stimuli based on the input features. Classifier decoding accuracy was quantified with the mutual information measure (Cottaris and Elfar 2009 J. Neural Eng. 6 026007; Hawellek et al. 2016 Proc. Natl Acad. Sci. 113 13492-7) and compared across the two feature sets, and to when both feature sets were combined. Main results. We found that classifiers using either ITC or ERSP feature sets were able to decode hearing perception, but ITC-feature classifiers could decode responses at a lower but still audible stimulation intensity, making ITC more useful than ERSP for hearing threshold estimation. We also found that combining the information from both feature sets did not improve decoding significantly, implying that ERSP brain dynamics make a limited contribution to the EEG response, possibly due to the stimuli used in this study. Significance. We successfully related hearing perception to an EEG measure that does not require behavioral feedback from the listener; such an objective measure is important in both neuroscience research and clinical audiology.
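The phase feature above has a compact definition: ITC is the length of the mean unit phase vector across trials, 1 for perfect phase locking and near 0 for random phases. A simulated illustration (trial count and phase spread are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 200
# Complex spectral estimates for one time-frequency bin per trial:
locked = np.exp(1j * rng.normal(0.0, 0.3, n_trials))       # phases cluster near 0
random_phase = np.exp(1j * rng.uniform(-np.pi, np.pi, n_trials))

def itc(spectra):
    # length of the mean unit phase vector: 1 = perfect phase locking
    return np.abs(np.mean(spectra / np.abs(spectra)))

itc_locked = itc(locked)        # high: phases repeat across trials
itc_random = itc(random_phase)  # low: phases are unrelated across trials
```

ERSP, by contrast, discards phase and tracks the trial-averaged change in spectral magnitude relative to baseline.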


Subjects
Cochlear Implants, Auditory Evoked Potentials, Acoustic Stimulation, Acoustics, Auditory Threshold, Electroencephalography, Hearing
9.
Front Neurosci ; 15: 636060, 2021.
Article in English | MEDLINE | ID: mdl-33841081

ABSTRACT

OBJECTIVES: Previous research using non-invasive (magnetoencephalography, MEG) and invasive (electrocorticography, ECoG) neural recordings has demonstrated the progressive and hierarchical representation and processing of complex multi-talker auditory scenes in the auditory cortex. Early responses (<85 ms) in primary-like areas appear to represent the individual talkers with almost equal fidelity and are independent of attention in normal-hearing (NH) listeners. However, late responses (>85 ms) in higher-order non-primary areas selectively represent the attended talker with significantly higher fidelity than unattended talkers in NH and hearing-impaired (HI) listeners. Motivated by these findings, the objective of this study was to investigate the effect of a noise reduction scheme (NR) in a commercial hearing aid (HA) on the representation of complex multi-talker auditory scenes in distinct hierarchical stages of the auditory cortex by using high-density electroencephalography (EEG). DESIGN: We addressed this issue by investigating early (<85 ms) and late (>85 ms) EEG responses recorded in 34 HI subjects fitted with HAs. The HA noise reduction (NR) was either on or off while the participants listened to a complex auditory scene. Participants were instructed to attend to one of two simultaneous talkers in the foreground while multi-talker babble noise played in the background (+3 dB SNR). After each trial, a two-choice question about the content of the attended speech was presented. RESULTS: Using a stimulus reconstruction approach, our results suggest that the attention-related enhancement of neural representations of target and masker talkers located in the foreground, as well as suppression of the background noise in distinct hierarchical stages is significantly affected by the NR scheme. 
Specifically, the NR scheme enhanced the representation of the foreground and of the entire acoustic scene in the early responses, an enhancement driven by a better representation of the target speech. In late responses, the target talker was selectively represented in HI listeners, and use of the NR scheme enhanced the representations of the target and masker speech in the foreground while suppressing the representation of the background noise. There was also a significant effect of EEG time window on the strength of the cortical representation of the target and masker. CONCLUSION: Together, our analyses of the early and late responses obtained from HI listeners support the existing view of hierarchical processing in the auditory cortex. Our findings demonstrate the benefits of an NR scheme on the representation of complex multi-talker auditory scenes in different areas of the auditory cortex in HI listeners.
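The stimulus reconstruction approach mentioned above trains a linear backward model from multichannel EEG to a talker's envelope; reconstruction accuracy then indexes how strongly that talker is represented. A simulated sketch (channel count, mixing, and regularisation are illustrative, and real decoders also span multiple time lags):

```python
import numpy as np

rng = np.random.default_rng(6)
n_samples, n_channels = 4000, 8
envelope = rng.standard_normal(n_samples)          # attended-talker envelope (simulated)
mixing = rng.standard_normal(n_channels)           # how the envelope projects to channels
eeg = np.outer(envelope, mixing) + 0.5 * rng.standard_normal((n_samples, n_channels))

lam = 1e-3                                         # small ridge term for stability
decoder = np.linalg.solve(eeg.T @ eeg + lam * np.eye(n_channels), eeg.T @ envelope)
reconstruction = eeg @ decoder
accuracy = np.corrcoef(reconstruction, envelope)[0, 1]   # reconstruction accuracy
```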

10.
Ear Hear ; 41(5): 1187-1195, 2020.
Article in English | MEDLINE | ID: mdl-31985534

ABSTRACT

OBJECTIVES: Functional near-infrared spectroscopy (fNIRS) is a brain imaging technique particularly suitable for hearing studies. However, the nature of fNIRS responses to auditory stimuli presented at different stimulus intensities is not well understood. In this study, we investigated whether fNIRS response amplitude was better predicted by stimulus properties (intensity) or individually perceived attributes (loudness). DESIGN: Twenty-two young adults were included in this experimental study. Four different stimulus intensities of a broadband noise were used as stimuli. First, loudness estimates for each stimulus intensity were measured for each participant. Then, the 4 stimulation intensities were presented in counterbalanced order while recording hemoglobin saturation changes from cortical auditory brain areas. The fNIRS response was analyzed in a general linear model design, using 3 different regressors: a non-modulated, an intensity-modulated, and a loudness-modulated regressor. RESULTS: Higher intensity stimuli resulted in higher amplitude fNIRS responses. The relationship between stimulus intensity and fNIRS response amplitude was better explained using a regressor based on individually estimated loudness estimates compared with a regressor modulated by stimulus intensity alone. CONCLUSIONS: Brain activation in response to different stimulus intensities is more reliant upon individual loudness sensation than physical stimulus properties. Therefore, in measurements using different auditory stimulus intensities or subjective hearing parameters, loudness estimates should be examined when interpreting results.


Subjects
Auditory Cortex, Near-Infrared Spectroscopy, Acoustic Stimulation, Brain, Hearing, Humans, Loudness Perception, Sound, Young Adult
11.
PLoS One ; 14(2): e0212940, 2019.
Article in English | MEDLINE | ID: mdl-30817808

ABSTRACT

Functional near-infrared spectroscopy (fNIRS) is a non-invasive brain imaging technique that measures changes in oxygenated and de-oxygenated hemoglobin concentration and can provide a measure of brain activity. In addition to neural activity, fNIRS signals contain components that can be used to extract physiological information such as cardiac measures. Previous studies have shown changes in cardiac activity in response to different sounds. This study investigated whether cardiac responses collected using fNIRS differ for sounds of different loudness. fNIRS data were collected from 28 normal hearing participants. Cardiac response measures evoked by broadband, amplitude-modulated sounds were extracted for four sound intensities ranging from near-threshold to comfortably loud levels (15, 40, 65 and 90 dB Sound Pressure Level (SPL)). Following onset of the noise stimulus, heart rate initially decreased for sounds of 15 and 40 dB SPL, reaching a significantly lower rate at 15 dB SPL. For sounds at 65 and 90 dB SPL, increases in heart rate were seen. To quantify the timing of significant changes, inter-beat intervals were assessed. For sounds at 40 dB SPL, an immediate significant change in the first two inter-beat intervals following sound onset was found. At other levels, the most significant change appeared later (beats 3 to 5 following sound onset). In conclusion, changes in heart rate were associated with the level of sound, with a clear difference in response to near-threshold sounds compared to comfortably loud sounds. These findings may be used alone or in conjunction with other measures such as fNIRS brain activity for evaluation of hearing ability.
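The inter-beat-interval analysis described above reduces to differencing detected pulse-peak times. A toy example with synthetic peak times (not data from the study):

```python
import numpy as np

# Synthetic pulse-peak times showing a gradually slowing heart (seconds)
peak_times = np.array([0.00, 0.85, 1.71, 2.58, 3.50, 4.47])

ibis = np.diff(peak_times)        # inter-beat intervals in seconds
heart_rate = 60.0 / ibis          # instantaneous heart rate in beats per minute

# A heart-rate deceleration (as seen for near-threshold sounds) shows as lengthening IBIs
slowing = bool(np.all(np.diff(ibis) > 0))
```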


Assuntos
Audição/fisiologia , Frequência Cardíaca/fisiologia , Percepção Sonora/fisiologia , Estimulação Acústica , Adulto , Limiar Auditivo/fisiologia , Encéfalo/fisiologia , Feminino , Neuroimagem Funcional , Ruídos Cardíacos/fisiologia , Humanos , Masculino , Espectroscopia de Luz Próxima ao Infravermelho , Adulto Jovem
12.
Hear Res ; 377: 24-33, 2019 06.
Article in English | MEDLINE | ID: mdl-30884368

ABSTRACT

Cochlear implant users require fitting of electrical threshold and comfort levels for optimal access to sound. In this study, we used single-channel cortical auditory evoked responses (CAEPs) obtained from 20 participants using a Nucleus device. A fully objective method to estimate threshold levels was developed, using growth function fitting and the peak phase-locking value feature. Results demonstrated that growth function fitting is a viable method for estimating threshold levels in cochlear implant users, with a strong correlation (r = 0.979, p < 0.001) with behavioral thresholds. Additionally, we compared the threshold estimates using CAEPs acquired from a standard montage (Cz to mastoid) against those using a montage of recording channels near the cochlear implant, simulating recording from the device itself. The correlation between estimated and behavioral thresholds remained strong (r = 0.966, p < 0.001); however, the recording time needed to be increased to produce a similar estimate accuracy. Finally, a method for estimating comfort levels was investigated, and showed that the comfort level estimates were mildly correlated with behavioral comfort levels (r = 0.50, p = 0.024).
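Growth function fitting, as used above, fits a response feature against stimulus level and reads off the level at which the fit meets a noise floor. A linear toy version (all levels, feature values, and the noise floor are invented):

```python
import numpy as np

levels = np.array([40.0, 50.0, 60.0, 70.0])    # stimulus levels in dB (invented)
feature = np.array([0.12, 0.21, 0.29, 0.41])   # e.g. peak PLV at each level (invented)
noise_floor = 0.05                             # feature value with no response (invented)

# Fit the growth function (linear here; a sigmoid may suit real data better)
slope, intercept = np.polyfit(levels, feature, 1)

# Objective threshold: the level where the fitted line meets the noise floor
threshold_estimate = (noise_floor - intercept) / slope
```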


Subjects
Auditory Threshold, Cochlear Implantation/instrumentation, Cochlear Implants, Electroencephalography, Auditory Evoked Potentials, Loudness Perception, Persons With Hearing Impairments/rehabilitation, Prosthesis Fitting, Acoustic Stimulation, Adult, Aged, Aged 80 and over, Electric Stimulation, Female, Humans, Male, Middle Aged, Persons With Hearing Impairments/psychology, Predictive Value of Tests, Prosthesis Design, Treatment Outcome
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 4682-4685, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946907

ABSTRACT

Cortical pitch responses (CPRs) are generated at the initiation of pitch-bearing sounds. CPR components have been shown to reflect the pitch salience of iterated rippled noise with different temporal periodicity. However, it is unclear whether features of the CPR correlate with the pitch salience of resolved and unresolved harmonics of speech when the temporal periodicity is identical, and whether CPRs could be a neural index for auditory cortical pitch processing. In this study, CPRs were recorded in response to two sets of speech sounds: one including only resolved harmonics and one including only unresolved harmonics. Speech-shaped noise preceding and following the speech was used to temporally discriminate the neural activity coding the onset of acoustic energy from the onset of time-varying pitch. Analysis of CPR peak latency and peak amplitude (Na) showed that the peak latency to speech sounds with only resolved harmonics was significantly shorter than for sounds with unresolved harmonics (p = 0.01), and that peak amplitude to sounds with only resolved harmonics was significantly higher than for sounds with unresolved harmonics (p < 0.001). Further, the CPR peak phase-locking value in response to sounds with only resolved harmonics was significantly higher than to sounds with only unresolved harmonics (p < 0.001). Our findings suggest that the CPR changes with pitch salience and that the CPR is a potentially useful indicator of auditory cortical pitch processing.


Subjects
Cerebral Cortex, Noise, Pitch Perception, Acoustic Stimulation, Cerebral Cortex/physiology, Auditory Evoked Potentials, Humans, Language, Phonetics, Sound
14.
J Acoust Soc Am ; 146(6): 4144, 2019 12.
Article in English | MEDLINE | ID: mdl-31893708

ABSTRACT

This study aimed to investigate differences in audio-visual (AV) integration between cochlear implant (CI) listeners and normal-hearing (NH) adults. A secondary aim was to investigate the effect of age differences by examining AV integration in groups of older and younger NH adults. Seventeen CI listeners, 13 similarly aged NH adults, and 16 younger NH adults were recruited. Two speech identification experiments were conducted to evaluate AV integration of speech cues. In the first experiment, reaction times in audio-alone (A-alone), visual-alone (V-alone), and AV conditions were measured during a speeded task in which participants were asked to identify a target sound /aSa/ among 11 alternatives. A race model was applied to evaluate AV integration. In the second experiment, identification accuracies were measured using a closed set of consonants and an open set of consonant-nucleus-consonant words. The authors quantified AV integration using a combination of a probability model and a cue integration model (which model participants' AV accuracy by assuming no or optimal integration, respectively). The results showed that experienced CI listeners exhibited no better AV integration than the similarly aged NH adults. Further, there was no significant difference in AV integration between the younger and older NH adults.
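A common form of the race model test compares the audiovisual reaction-time CDF with the sum of the unimodal CDFs (Miller's bound); exceeding the bound suggests integration beyond statistical facilitation. A simulated sketch (the RT distributions are invented, and the study's exact formulation may differ):

```python
import numpy as np

rng = np.random.default_rng(4)
rt_a = rng.normal(520, 60, 300)    # audio-alone reaction times in ms (simulated)
rt_v = rng.normal(540, 60, 300)    # visual-alone
rt_av = rng.normal(460, 50, 300)   # audiovisual: faster than either alone

t_grid = np.arange(300, 800, 10)   # evaluation times in ms

def ecdf(rts):
    # empirical cumulative distribution of reaction times on t_grid
    return np.mean(rts[:, None] <= t_grid, axis=0)

race_bound = np.minimum(ecdf(rt_a) + ecdf(rt_v), 1.0)   # Miller's bound
violation = bool(np.any(ecdf(rt_av) > race_bound))      # True suggests integration
```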


Subjects
Age Factors, Auditory Perception/physiology, Cochlear Implants/adverse effects, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Aged, Cochlear Implantation/methods, Female, Humans, Male, Middle Aged, Speech/physiology, Visual Perception/physiology, Young Adult
15.
Front Neurosci ; 12: 820, 2018.
Article in English | MEDLINE | ID: mdl-30505262

ABSTRACT

Accurate perception of voice pitch plays a vital role in speech understanding, especially for tonal languages such as Mandarin. Lexical tones are primarily distinguished by the fundamental frequency (F0) contour of the acoustic waveform. It has been shown that the auditory system can extract the F0 from both resolved and unresolved harmonics, and that tone identification performance is better for resolved than for unresolved harmonics. To evaluate the neural response to the resolved and unresolved components of Mandarin tones in quiet and in speech-shaped noise, we recorded the frequency-following response (FFR). In this study, four types of stimuli were used: speech with either only resolved or only unresolved harmonics, each in quiet and in speech-shaped noise. FFRs were recorded to alternating-polarity stimuli and were added or subtracted to enhance the neural response to the envelope (FFRENV) or fine structure (FFRTFS), respectively. The neural representation of the F0 strength reflected by the FFRENV was evaluated by the peak autocorrelation value in the temporal domain and the peak phase-locking value (PLV) at F0 in the spectral domain. Both evaluation methods showed that the FFRENV F0 strength in quiet was significantly stronger than in noise for speech including unresolved harmonics, but not for speech including resolved harmonics. The neural representation of the temporal fine structure reflected by the FFRTFS was assessed by the PLV at the harmonic nearest to F1 (the 4th harmonic of F0), which was significantly larger for resolved than for unresolved harmonics. Spearman's correlation showed that the FFRENV F0 strength for unresolved harmonics was correlated with tone identification performance in noise (0 dB SNR). These results show that the FFRENV F0 strength for speech sounds with resolved harmonics was not affected by noise, whereas the response to speech sounds with unresolved harmonics was significantly smaller in noise than in quiet. Our results suggest that coding of resolved harmonics is more important than envelope coding for tone identification in noise.
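The alternating-polarity trick described above is simple arithmetic: envelope-following activity is polarity-invariant, while fine-structure activity inverts with the stimulus. A synthetic illustration (frequencies and waveforms are invented stand-ins for neural responses):

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
f0, fine = 100.0, 400.0                 # Hz: F0 envelope vs fine structure (invented)

env_part = np.sin(2 * np.pi * f0 * t)   # polarity-invariant envelope response
tfs_part = np.sin(2 * np.pi * fine * t) # inverts with stimulus polarity

resp_pos = env_part + tfs_part          # response to original-polarity stimulus
resp_neg = env_part - tfs_part          # response to inverted-polarity stimulus

ffr_env = (resp_pos + resp_neg) / 2     # addition recovers the envelope response
ffr_tfs = (resp_pos - resp_neg) / 2     # subtraction recovers the fine structure
```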

16.
JMIR Res Protoc ; 7(10): e174, 2018 Oct 26.
Article in English | MEDLINE | ID: mdl-30368434

ABSTRACT

BACKGROUND: Older adults with postlingual sensorineural hearing loss (SNHL) exhibit a poor prognosis that includes not only impaired auditory function but also rapid cognitive decline, especially in speech-related cognition, in addition to psychosocial dysfunction and an increased risk of dementia. Consistent with this prognosis, individuals with SNHL exhibit global atrophic brain alteration as well as altered neural function and regional brain organization within the cortical substrates that underlie auditory and speech processing. Recent evidence suggests that the use of hearing aids might ameliorate this prognosis. OBJECTIVE: The objective was to study the effects of a hearing aid use intervention on neurocognitive and psychosocial functioning in individuals with SNHL aged ≥55 years. METHODS: All aspects of this study will be conducted at Swinburne University of Technology (Hawthorn, Victoria, Australia). We will recruit 2 groups (n=30 per group) of individuals with mild to moderate SNHL from both the community and audiology health clinics (Alison Hennessy Audiology, Chelsea Hearing Pty Ltd): individuals who have worn a hearing aid for at least 12 months, and individuals who have never worn one. All participants will be asked to complete, at two time points (baseline, t=0, and follow-up, t=6 months), tests of hearing and of psychosocial and cognitive function, and to attend a magnetic resonance imaging (MRI) session. The MRI session will include both structural and functional MRI (sMRI and fMRI) scans, the latter involving the performance of a novel speech processing task. RESULTS: This research is funded by the Barbara Dicker Brain Sciences Foundation Grants, the Australian Research Council, Alison Hennessy Audiology, and Chelsea Hearing Pty Ltd under the Industry Transformation Training Centre Scheme (ARC Project #IC140100023).
We obtained ethics approval on November 18, 2017 (Swinburne University Human Research Ethics Committee protocol number SHR Project 2017/266). Recruitment began in December 2017 and will be completed by December 2020. CONCLUSIONS: This is the first study to assess the effect of hearing aid use on neural, cognitive, and psychosocial factors in individuals with SNHL who have never used hearing aids. Furthermore, this study is expected to clarify the relationships among altered brain structure and function, psychosocial factors, and cognition in response to hearing aid use. TRIAL REGISTRATION: Australian New Zealand Clinical Trials Registry: ACTRN12617001616369; https://anzctr.org.au/Trial/Registration/TrialReview.aspx?ACTRN=12617001616369 (Accessed by WebCite at http://www.webcitation.org/70yatZ9ze). INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR1-10.2196/9916.

17.
Hear Res ; 370: 74-83, 2018 12.
Article in English | MEDLINE | ID: mdl-30336355

ABSTRACT

Cortical auditory evoked potential (CAEP) thresholds have been shown to correlate well with behaviourally determined hearing thresholds. Growth functions of CAEPs show promise as an alternative to single-level detection for objective hearing threshold estimation; however, the accuracy and clinical relevance of this method are not well examined. In this study, we used temporal and spectral CAEP features to generate feature growth functions. Spectral features may be more robust than traditional peak-picking methods where CAEP morphology is variable, such as in children or hearing device users. Behavioural hearing thresholds were obtained and CAEPs were recorded in response to a 1 kHz pure tone from twenty adults with no hearing loss. Four features, peak-to-peak amplitude, root-mean-square, peak spectral power and peak phase-locking value (PLV), were extracted from the CAEPs. Functions relating each feature with stimulus level were used to calculate objective hearing threshold estimates. We assessed the performance of each feature by calculating the difference between the objective estimate and the behaviourally determined threshold. Comparing the accuracy of the estimates, we found that the peak PLV feature performed best, with a mean threshold error of 2.7 dB and standard deviation of 5.9 dB from behavioural threshold across subjects. We also examined the relation between recording time, data quality and threshold estimate errors, and found that on average, for a single threshold, 12.7 minutes of recording were needed for 95% confidence that the threshold estimate was within 20 dB of the behavioural threshold using the peak-to-peak amplitude feature, while 14 minutes were needed for the peak PLV feature. These results show that the PLV of CAEPs can be used to find a clinically relevant hearing threshold estimate. Its potential stability under differing morphology may be an advantage in testing infants or cochlear implant users.
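Two of the CAEP features named above can be sketched on simulated epochs: peak-to-peak amplitude of the averaged waveform, and the peak phase-locking value across epochs. All epoch parameters and the evoked shape are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 250
t = np.arange(0, 0.5, 1 / fs)                       # 500 ms epochs
evoked = 2.0 * np.sin(2 * np.pi * 4 * t) * np.exp(-t / 0.15)  # invented evoked shape
epochs = evoked + rng.standard_normal((100, t.size))          # 100 noisy epochs

avg = epochs.mean(axis=0)
peak_to_peak = avg.max() - avg.min()                # amplitude feature

spectra = np.fft.rfft(epochs, axis=1)               # per-epoch spectra
plv = np.abs(np.mean(spectra / np.abs(spectra), axis=0))
peak_plv = plv[1:].max()                            # peak PLV, ignoring the DC bin
```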


Subjects
Auditory Cortex/physiology , Auditory Threshold , Electroencephalography , Evoked Potentials, Auditory , Hearing Tests/methods , Acoustic Stimulation , Adult , Audiometry, Pure-Tone , Humans , Predictive Value of Tests , Psychoacoustics , Reproducibility of Results , Time Factors , Young Adult
18.
Front Neurosci ; 12: 581, 2018.
Article in English | MEDLINE | ID: mdl-30186105

ABSTRACT

The role of the spatial separation between the stimulating electrodes (electrode separation) in sequential stream segregation was explored in cochlear implant (CI) listeners using a deviant detection task. Twelve CI listeners were instructed to attend to a series of target sounds in the presence of interleaved distractor sounds. A deviant was randomly introduced into the target stream at the beginning, middle or end of each trial. The listeners were asked to detect sequences that contained a deviant and to report its location within the trial. Perceptual segregation of the streams should, therefore, improve deviant detection performance. The electrode range for the distractor sounds was varied, resulting in different amounts of overlap between the target and the distractor streams. For the largest electrode separation condition, event-related potentials (ERPs) were recorded under active and passive listening conditions. The listeners performed the behavioral task in the active listening condition and were encouraged to watch a muted movie in the passive listening condition. Deviant detection performance improved with increasing electrode separation between the streams, suggesting that larger electrode differences facilitate segregation of the streams. Deviant detection performance was best for deviants occurring late in the sequence, indicating that a segregated percept builds up over time. Analysis of the ERP waveforms revealed that auditory selective attention modulates the ERP responses in CI listeners. Specifically, the responses to the target stream were, overall, larger in the active than in the passive listening condition. Conversely, the ERP responses to the distractor stream were not affected by selective attention. However, no significant correlation was observed between behavioral performance and the amount of attentional modulation. Overall, the findings from the present study suggest that CI listeners can use electrode separation to perceptually group sequential sounds. Moreover, selective attention can be deployed on the resulting auditory objects, as reflected by the attentional modulation of the ERPs at the group level.

19.
Front Neural Circuits ; 12: 55, 2018.
Article in English | MEDLINE | ID: mdl-30087597

ABSTRACT

Accurate perception of time-variant pitch is important for speech recognition, particularly for tonal languages such as Mandarin, in which different lexical tones convey different semantic information. Previous studies reported that the auditory nerve and cochlear nucleus can encode different pitches through phase-locked neural activity. However, little is known about how the inferior colliculus (IC) encodes the time-variant periodicity pitch of natural speech. In this study, the Mandarin syllable /ba/ pronounced with four lexical tones (flat, rising, falling-then-rising and falling) was used as the stimulus set. Local field potentials (LFPs) and single-neuron activity were simultaneously recorded from 90 sites within the contralateral IC of six urethane-anesthetized and decerebrate guinea pigs in response to the four stimuli. Analysis of the temporal information of the LFPs showed that 93% of the LFPs exhibited robust encoding of periodicity pitch. Pitch strength of the LFPs, derived from the autocorrelogram, was significantly (p < 0.001) stronger for rising tones than for flat and falling tones. Pitch strength also increased significantly (p < 0.05) with characteristic frequency (CF). In contrast, only 47% (42 of 90) of single-neuron responses were significantly synchronized to the fundamental frequency of the stimulus, suggesting that the temporal spiking pattern of a single IC neuron does not robustly encode the time-variant periodicity pitch of speech. The difference between the number of LFPs and the number of single neurons that encode the time-variant F0 voice pitch supports the notion of a transition at the level of the IC from direct temporal coding in the spike trains of individual neurons to other forms of neural representation.
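An autocorrelogram-based pitch strength, as described above, is commonly defined as the height of the normalized autocorrelation peak near the expected pitch period 1/F0. A minimal sketch under that definition (function name, search window and synthetic signals are illustrative assumptions, not the study's exact analysis):

```python
import numpy as np

def pitch_strength(lfp, fs, f0):
    """Peak of the normalized autocorrelation near the pitch period 1/f0.

    Returns a value near 1 for a strongly periodic signal at f0,
    and near 0 for an aperiodic signal.
    """
    x = np.asarray(lfp, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]   # non-negative lags
    ac = ac / ac[0]                                     # unit value at zero lag
    lag = fs / f0                                       # expected period, samples
    lo, hi = int(0.8 * lag), int(np.ceil(1.2 * lag))    # window around the period
    return float(ac[lo:hi + 1].max())

# A periodic 120 Hz signal should score near 1, white noise near 0.
rng = np.random.default_rng(1)
fs = 2000
t = np.arange(2 * fs) / fs
strength_tone = pitch_strength(np.sin(2 * np.pi * 120 * t), fs, 120.0)
strength_noise = pitch_strength(rng.standard_normal(t.size), fs, 120.0)
```

For a time-variant pitch such as a lexical tone, this measure would be computed on short sliding windows with the F0 trajectory of the stimulus as the reference.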


Subjects
Electroencephalography/methods , Electrophysiological Phenomena , Inferior Colliculi/physiology , Pitch Perception/physiology , Speech Perception/physiology , Acoustic Stimulation , Animals , China , Female , Guinea Pigs , Male
20.
Trends Hear ; 22: 2331216518786850, 2018.
Article in English | MEDLINE | ID: mdl-30022732

ABSTRACT

An experiment was conducted to investigate the feasibility of using functional near-infrared spectroscopy (fNIRS) to image cortical activity in the language areas of cochlear implant (CI) users and to explore the association between that activity and their speech understanding ability. Using fNIRS, 15 experienced CI users and 14 normal-hearing participants were imaged while presented with either visual speech or auditory speech. Brain activation was measured from the prefrontal, temporal, and parietal lobes in both hemispheres, including the language-associated regions. In response to visual speech, the activation levels of CI users in an a priori region of interest (ROI), the left superior temporal gyrus or sulcus, were negatively correlated with auditory speech understanding. This result suggests that increased cross-modal activity in the auditory cortex is predictive of poor auditory speech understanding. In another two ROIs, in which CI users showed significantly different mean activation levels in response to auditory speech compared with normal-hearing listeners, activation levels were also significantly negatively correlated with CI users' auditory speech understanding. These ROIs were located in the right anterior temporal lobe (including a portion of the prefrontal lobe) and the left middle superior temporal lobe. In conclusion, fNIRS successfully revealed activation patterns in CI users associated with their auditory speech understanding.
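The core analysis above is a correlation between per-participant ROI activation and a speech understanding score. A minimal sketch of such a negative brain-behavior correlation (the variable names and numbers are invented for illustration; the study's statistics may differ):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two per-participant vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# Hypothetical data: higher cross-modal ROI activation paired with
# lower auditory speech understanding (% correct).
activation = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.1])
speech_pct = np.array([92.0, 80.0, 75.0, 60.0, 41.0, 35.0])
r = pearson_r(activation, speech_pct)   # strongly negative
```

A negative r of this kind is what the abstract reports for the left superior temporal ROI under visual speech.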


Subjects
Auditory Cortex/physiopathology , Cochlear Implants , Spectroscopy, Near-Infrared/methods , Speech Intelligibility/physiology , Speech Perception/physiology , Aged , Deafness , Feasibility Studies , Female , Humans , Male , Middle Aged , Victoria