Results 1 - 20 of 295
1.
Acta Otolaryngol ; 141(sup1): 22-62, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33818263

ABSTRACT

Electric-acoustic stimulation (EAS) is a special treatment modality for patients who are profoundly deaf in the high-frequency (HF) region but retain usable hearing in the low-frequency (LF) region. EAS combines electric stimulation of the HF region via a cochlear implant (CI) with acoustic amplification of residual LF hearing using a conventional hearing aid (HA). The EAS concept was first proposed by C. von Ilberg of Frankfurt, Germany, in 1997. In association with MED-EL, all the necessary safety studies were performed in non-human subjects before the first patient received EAS in 1999. For a patient to use the EAS concept successfully, residual hearing must be preserved to a high degree and over several years. This requires a highly flexible electrode array that safeguards the intra-cochlear structures during and after CI electrode array insertion. Combining the HA unit with the audio processor unit of the CI was necessary for convenient wearing of the unified audio processor, and fitting of the unified audio processor is another important factor in the overall success of EAS treatment. The key translational research efforts at MED-EL were the development of flexible electrode arrays, a unified audio processor, innovations in the fitting process, intra-operative monitoring of cochlear health during electrode insertion, a pre-operative software tool to evaluate cochlear size and guide electrode selection, and further innovations explored within the EAS field. This article covers the milestones of translational research from the first concept to the widespread clinical use of EAS.


Subject(s)
Acoustic Stimulation/trends , Cochlear Implantation/trends , Cochlear Implants/trends , Electric Stimulation , Acoustic Stimulation/history , Audiometry, Pure-Tone , Auditory Threshold , Cochlear Implantation/history , Cochlear Implants/history , History, 20th Century , History, 21st Century , Humans , Speech Discrimination Tests , Speech Perception
2.
PLoS One ; 15(12): e0244632, 2020.
Article in English | MEDLINE | ID: mdl-33373427

ABSTRACT

A vocoder can be used to simulate cochlear-implant sound processing in normal-hearing listeners. Typically, vocoded speech recognition improves rapidly with exposure, but it is unclear whether the improvement rate differs across age groups and speech materials. Children (8-10 years) and young adults (18-26 years) were trained and tested over 2 days (4 hours) on recognition of eight-channel noise-vocoded words and sentences, in quiet and in the presence of multi-talker babble at signal-to-noise ratios of 0, +5, and +10 dB. Children performed more poorly than adults in all conditions, for both word and sentence recognition. With training, vocoded speech recognition improvement rates did not differ significantly between children and adults, suggesting that learning to process speech cues degraded by vocoding shows no developmental differences across these age groups or types of speech materials. Furthermore, this result confirms that the acutely measured age difference in vocoded speech recognition persists after extended training.
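The eight-channel noise vocoding described above follows a standard recipe: band-pass filter the speech into channels, extract each channel's amplitude envelope, and use the envelopes to modulate band-limited noise carriers. A minimal sketch, assuming log-spaced band edges from 100 Hz to 7 kHz, fourth-order Butterworth filters, and Hilbert-envelope extraction (the study's exact filter parameters are not given here):

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Noise-vocode `signal`: split into log-spaced bands, extract each
    band's Hilbert envelope, and use it to modulate band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)                  # analysis band
        env = np.abs(hilbert(band))                  # amplitude envelope
        carrier = sosfilt(sos, rng.standard_normal(len(signal)))
        out += env * carrier                         # envelope-modulated noise
    return out
```

The result preserves the temporal envelope in each band while discarding fine spectral structure, which is what makes it a useful CI simulation.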


Subject(s)
Acoustic Stimulation/methods , Speech Discrimination Tests/methods , Adolescent , Adult , Child , Female , Humans , Male , Perceptual Masking , Recognition, Psychology , Speech Perception , Young Adult
3.
J Acoust Soc Am ; 147(1): 446, 2020 01.
Article in English | MEDLINE | ID: mdl-32006956

ABSTRACT

For single-sided deafness cochlear-implant (SSD-CI) listeners, different peripheral representations for electric versus acoustic stimulation, combined with interaural frequency mismatch, might limit the ability to perceive bilaterally presented speech as a single voice. The assessment of binaural fusion often relies on subjective report, which requires listeners to have some understanding of the perceptual phenomenon of object formation. Two experiments explored whether binaural fusion could instead be assessed using judgments of the number of voices in a mixture. In an SSD-CI simulation, normal-hearing listeners were presented with one or two "diotic" voices (i.e., unprocessed in one ear and noise-vocoded in the other) in a mixture with additional monaural voices. In experiment 1, listeners reported how many voices they heard. Listeners generally counted the diotic speech as two separate voices, regardless of interaural frequency mismatch. In experiment 2, listeners identified which of two mixtures contained diotic speech. Listeners performed significantly better with interaurally frequency-matched than with frequency-mismatched stimuli. These contrasting results suggest that listeners experienced partial fusion: not enough to count the diotic speech as one voice, but enough to detect its presence. The diotic-speech detection task (experiment 2) might provide a tool to evaluate fusion and optimize frequency mapping for SSD-CI patients.


Subject(s)
Discrimination, Psychological , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Cochlear Implants , Humans , Psychoacoustics , Signal Processing, Computer-Assisted , Speech Discrimination Tests , Young Adult
4.
J Speech Lang Hear Res ; 63(1): 334-344, 2020 01 22.
Article in English | MEDLINE | ID: mdl-31940258

ABSTRACT

Purpose In a previous paper (Souza, Wright, Blackburn, Tatman, & Gallun, 2015), we explored the extent to which individuals with sensorineural hearing loss used different cues for speech identification when multiple cues were available. Specifically, some listeners placed the greatest weight on spectral cues (spectral shape and/or formant transition), whereas others relied on the temporal envelope. In the current study, we aimed to determine whether listeners who relied on temporal envelope did so because they were unable to discriminate the formant information at a level sufficient to use it for identification and the extent to which a brief discrimination test could predict cue weighting patterns. Method Participants were 30 older adults with bilateral sensorineural hearing loss. The first task was to label synthetic speech tokens based on the combined percept of temporal envelope rise time and formant transitions. An individual profile was derived from linear discriminant analysis of the identification responses. The second task was to discriminate differences in either temporal envelope rise time or formant transitions. The third task was to discriminate spectrotemporal modulation in a nonspeech stimulus. Results All listeners were able to discriminate temporal envelope rise time at levels sufficient for the identification task. There was wide variability in the ability to discriminate formant transitions, and that ability predicted approximately one third of the variance in the identification task. There was no relationship between performance in the identification task and either amount of hearing loss or ability to discriminate nonspeech spectrotemporal modulation. Conclusions The data suggest that listeners who rely to a greater extent on temporal cues lack the ability to discriminate fine-grained spectral information. 
The fact that the amount of hearing loss was not associated with the cue profile underscores the need to characterize individual abilities in a more nuanced way than can be captured by the pure-tone audiogram.


Subject(s)
Auditory Threshold , Cues , Hearing Loss, Bilateral/psychology , Hearing Loss, Sensorineural/psychology , Speech Perception , Acoustic Stimulation/methods , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Female , Hearing Aids , Hearing Tests , Humans , Male , Middle Aged , Speech Discrimination Tests
5.
J Am Acad Audiol ; 31(6): 412-441, 2020 06.
Article in English | MEDLINE | ID: mdl-31968207

ABSTRACT

BACKGROUND: In the 1950s, with monitored live voice testing, the vu meter time constant and the short durations and amplitude modulation characteristics of monosyllabic words necessitated the use of the carrier phrase amplitude to monitor (indirectly) the presentation level of the words. This practice continues with recorded materials. To relieve the carrier phrase of this function, first the influence that the carrier phrase has on word recognition performance needs clarification, which is the topic of this study. PURPOSE: Recordings of Northwestern University Auditory Test No. 6 by two female speakers were used to compare word recognition performances with and without the carrier phrases when the carrier phrase and test word were (1) in the same utterance stream with the words excised digitally from the carrier (VA-1 speaker) and (2) independent of one another (VA-2 speaker). The 50-msec segment of the vowel in the target word with the largest root mean square amplitude was used to equate the target word amplitudes. RESEARCH DESIGN: A quasi-experimental, repeated measures design was used. STUDY SAMPLE: Twenty-four young normal-hearing adults (YNH; M = 23.5 years; pure-tone average [PTA] = 1.3-dB HL) and 48 older hearing loss listeners (OHL; M = 71.4 years; PTA = 21.8-dB HL) participated in two, one-hour sessions. DATA COLLECTION AND ANALYSES: Each listener had 16 listening conditions (2 speakers × 2 carrier phrase conditions × 4 presentation levels) with 100 randomized words, 50 different words by each speaker. Each word was presented 8 times (2 carrier phrase conditions × 4 presentation levels [YNH, 0- to 24-dB SL; OHL, 6- to 30-dB SL]). The 200 recorded words for each condition were randomized as 8, 25-word tracks. In both test sessions, one practice track was followed by 16 tracks alternated between speakers and randomized by blocks of the four conditions. Central tendency and repeated measures analyses of variance statistics were used. 
RESULTS: With the VA-1 speaker, the overall mean recognition performances were 6.0% (YNH) and 8.3% (OHL) significantly better with the carrier phrase than without the carrier phrase. These differences were in part attributed to the distortion of some words caused by the excision of the words from the carrier phrases. With the VA-2 speaker, recognition performances on the with and without carrier phrase conditions by both listener groups were not significantly different, except for one condition (YNH listeners at 8-dB SL). The slopes of the mean functions were steeper for the YNH listeners (3.9%/dB to 4.8%/dB) than for the OHL listeners (2.4%/dB to 3.4%/dB) and were <1%/dB steeper for the VA-1 speaker than for the VA-2 speaker. Although the mean results were clear, the variability in performance differences between the two carrier phrase conditions for the individual participants and for the individual words was striking and was considered in detail. CONCLUSION: The current data indicate that word recognition performances with and without the carrier phrase (1) were different when the carrier phrase and target word were produced in the same utterance with poorer performances when the target words were excised from their respective carrier phrases (VA-1 speaker), and (2) were the same when the carrier phrase and target word were produced as independent utterances (VA-2 speaker).
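The amplitude-equating step described in the Purpose section above (using the 50-msec vowel segment with the largest root mean square amplitude to equate target-word levels) can be sketched as follows; the reference RMS value and the window hop size are illustrative assumptions, not values from the study:

```python
import numpy as np

def max_rms_segment(x, fs, win_s=0.050):
    """Largest RMS over 50-ms windows of x (a short hop approximates
    a sample-by-sample search at much lower cost)."""
    n = int(win_s * fs)
    hop = max(1, n // 10)
    rms_vals = [np.sqrt(np.mean(x[i:i + n] ** 2))
                for i in range(0, len(x) - n + 1, hop)]
    return max(rms_vals)

def equate_level(x, fs, ref_rms=0.05):
    """Scale x so its loudest 50-ms segment has RMS equal to ref_rms."""
    return x * (ref_rms / max_rms_segment(x, fs))
```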


Subject(s)
Hearing Loss , Speech Acoustics , Speech Perception , Acoustic Stimulation , Adult , Age Factors , Aged , Aged, 80 and over , Audiometry , Female , Healthy Volunteers , Hearing Loss/physiopathology , Humans , Male , Middle Aged , Speech Discrimination Tests
6.
J Speech Lang Hear Res ; 62(10): 3763-3770, 2019 10 25.
Article in English | MEDLINE | ID: mdl-31589541

ABSTRACT

Purpose This study explores the role of overt and covert contrasts in speech perception by children with speech sound disorder (SSD). Method Three groups of preschool-aged children (typically developing speech and language [TD], SSD with /s/~/ʃ/ contrast [SSD-contrast], and SSD with /s/~/ʃ/ collapse [SSD-collapse]) completed an identification task targeting /s/~/ʃ/ minimal pairs. The stimuli were produced by 3 sets of talkers: children with TD, children with SSD, and the participant himself/herself. We conducted a univariate general linear model to investigate differences in perception of tokens produced by different speakers and differences in perception between the groups of listeners. Results The TD and SSD-contrast groups performed similarly when perceiving tokens produced by themselves or other children. The SSD-collapse group perceived all speakers more poorly than the other 2 groups of children, performing at chance for perception of their own speech. Children who produced a covert contrast did not perceive their own speech more accurately than children who produced no identifiable acoustic contrast. Conclusion Preschool-aged children have not yet developed adultlike phonological representations. Collapsing phoneme production, even with a covert contrast, may indicate poor perception of the collapsed phonemes.


Subject(s)
Child Language , Phonetics , Speech Perception , Speech Sound Disorder/psychology , Acoustic Stimulation/methods , Acoustic Stimulation/psychology , Child, Preschool , Female , Humans , Linear Models , Male , Speech , Speech Discrimination Tests , Speech Sound Disorder/physiopathology , Task Performance and Analysis
7.
Hear Res ; 371: 11-18, 2019 01.
Article in English | MEDLINE | ID: mdl-30439570

ABSTRACT

The understanding of speech in noise relies (at least partially) on spectrotemporal modulation sensitivity. This sensitivity can be measured by spectral ripple tests, which can be administered at different presentation levels. However, it is not known how presentation level affects spectrotemporal modulation thresholds. In this work, we present behavioral data for normal-hearing adults which show that at higher ripple densities (2 and 4 ripples/oct), increasing presentation level led to worse discrimination thresholds. Results of a computational model suggested that the higher thresholds could be explained by a worsening of the spectrotemporal representation in the auditory nerve due to broadening of cochlear filters and neural activity saturation. Our results demonstrate the importance of taking presentation level into account when administering spectrotemporal modulation detection tests.
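Spectral-ripple stimuli of the kind used in such discrimination tests are commonly built as a sum of many random-phase tones whose levels follow a sinusoid in log frequency, with the density parameter giving ripples per octave. A sketch, with all parameter values (tone count, ripple depth, frequency range) chosen for illustration rather than taken from this study:

```python
import numpy as np

def ripple_noise(fs=16000, dur=0.5, density=2.0, depth_db=20.0,
                 f_lo=200.0, f_hi=8000.0, n_tones=200, phase=0.0):
    """Spectral-ripple stimulus: random-phase tones whose levels follow
    a sinusoid in log2(frequency) with `density` ripples per octave."""
    t = np.arange(int(fs * dur)) / fs
    rng = np.random.default_rng(1)
    freqs = np.geomspace(f_lo, f_hi, n_tones)
    octs = np.log2(freqs / f_lo)                       # octaves above f_lo
    amp_db = (depth_db / 2) * np.sin(2 * np.pi * density * octs + phase)
    amps = 10 ** (amp_db / 20)                         # dB -> linear gain
    phases = rng.uniform(0, 2 * np.pi, n_tones)
    return sum(a * np.sin(2 * np.pi * f * t + p)
               for a, f, p in zip(amps, freqs, phases))
```

Shifting `phase` moves the ripple peaks along the spectrum, which is the contrast a listener must discriminate.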


Subject(s)
Speech Perception/physiology , Acoustic Stimulation , Adult , Auditory Threshold/physiology , Cochlear Nerve/physiology , Female , Humans , Male , Models, Neurological , Models, Psychological , Speech Acoustics , Speech Discrimination Tests/methods , Speech Discrimination Tests/statistics & numerical data , Young Adult
8.
Res Dev Disabil ; 83: 57-68, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30142574

ABSTRACT

AIMS: We employed a discrimination-choice procedure, embedded in a custom-made videogame, to evaluate whether youth with Autism Spectrum Disorder (ASD), including nonverbal individuals, distinguish sentences on the basis of emotional tone-of-voice and generalize linguistic information across speaker gender. METHODS AND PROCEDURES: Thirteen youth with ASD (7-21 years) and 13 age-matched typical controls heard pairs of pre-recorded sentences varying in lexical content and prosody (e.g., enthusiastic "Dave rode a bike'' vs. grouchy "Mark held a key''). After training to select a target sentence, participants heard test probes comprising re-combinations of the content and prosodic features of the sentences. Interspersed generalization trials used a voice opposite in gender to the voice used in training. OUTCOMES AND RESULTS: Youth with ASD were less accurate than controls in discriminating sentences based on emotional tone-of-voice. Nonverbal and verbal youth did not differ in this regard. The ASD group showed only slight decrements in generalizing to the opposite-gender voice. CONCLUSIONS AND IMPLICATIONS: The finding of intact generalization of linguistic information across male/female speakers contrasts with the widely held view that autism is characterized by deficits in generalization. This suggests the need to test generalization under varying task demands to identify limits on performance.


Subject(s)
Autism Spectrum Disorder , Emotions , Speech Discrimination Tests , Acoustic Stimulation/methods , Adolescent , Autism Spectrum Disorder/diagnosis , Autism Spectrum Disorder/psychology , Child , Female , Humans , Language , Male , Speech Discrimination Tests/instrumentation , Speech Discrimination Tests/methods , Speech Perception , Video Games , Voice Quality , Young Adult
9.
J Acoust Soc Am ; 143(4): 2527, 2018 04.
Article in English | MEDLINE | ID: mdl-29716288

ABSTRACT

The degrading influence of noise on various critical bands of speech was assessed. A modified version of the compound method [Apoux and Healy (2012) J. Acoust. Soc. Am. 132, 1078-1087] was employed to establish this noise susceptibility for each speech band. Noise was added to the target speech band at various signal-to-noise ratios to determine the amount of noise required to reduce the contribution of that band by 50%. It was found that noise susceptibility is not equal across the speech spectrum, as is commonly assumed and incorporated into modern indexes. Instead, the signal-to-noise ratio required to equivalently impact various speech bands differed by as much as 13 dB. This noise susceptibility formed an irregular pattern across frequency, despite the use of multi-talker speech materials designed to reduce the potential influence of a particular talker's voice. But basic trends in the pattern of noise susceptibility across the spectrum emerged. Further, no systematic relationship was observed between noise susceptibility and speech band importance. It is argued here that susceptibility to noise and band importance are different phenomena, and that this distinction may be underappreciated in previous works.
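Adding noise to a target signal at a prescribed signal-to-noise ratio, as in the band-masking procedure above, reduces to scaling the noise so the power ratio matches the target SNR; a minimal broadband sketch (the study applied this per speech band):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals snr_db,
    then return the mixture."""
    p_s = np.mean(speech ** 2)
    p_n = np.mean(noise ** 2)
    scale = np.sqrt(p_s / (p_n * 10 ** (snr_db / 10)))
    return speech + scale * noise
```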


Subject(s)
Acoustic Stimulation/methods , Auditory Perception/physiology , Auditory Threshold/physiology , Hearing/physiology , Noise , Speech Perception/physiology , Adult , Female , Humans , Male , Signal-To-Noise Ratio , Speech Discrimination Tests , Young Adult
10.
J Acoust Soc Am ; 143(4): 2195, 2018 04.
Article in English | MEDLINE | ID: mdl-29716302

ABSTRACT

Better-ear glimpsing (BEG) is an auditory phenomenon that aids the understanding of speech in noise by utilizing interaural level differences (ILDs). The benefit provided by BEG is limited in hearing-impaired (HI) listeners by reduced audibility at high frequencies. Rana and Buchholz [(2016). J. Acoust. Soc. Am. 140(2), 1192-1205] have shown that artificially enhancing ILDs at low and mid frequencies can help HI listeners understand speech in noise, but the achieved benefit is smaller than in normal-hearing (NH) listeners. To determine to what extent this difference is explained by differences in audibility, audibility was carefully controlled here in ten NH and ten HI listeners, and speech reception thresholds (SRTs) in noise were measured in a spatially separated and a co-located condition as a function of frequency and sensation level. Maskers were realized by noise-vocoded speech, and signals were spatialized using artificially generated broadband ILDs. The spatial benefit provided by BEG and the SRTs improved consistently with increasing sensation level but were limited in the HI listeners by loudness discomfort. Further, the HI listeners performed similarly to NH listeners when differences in audibility were compensated. The results help to determine the hearing aid gain required to maximize the spatial benefit provided by ILDs as a function of frequency.
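Spatializing a signal with an artificial broadband ILD, as done above, amounts to applying complementary gains to the two ears; a sketch in which the ILD is split symmetrically between ears (an assumption for illustration; the study's exact implementation may differ):

```python
import numpy as np

def apply_ild(mono, ild_db):
    """Spatialize a mono signal with a broadband interaural level
    difference: half the ILD (in dB) goes to each ear."""
    g = 10 ** (ild_db / 40)          # amplitude gain for half the ILD
    left, right = mono * g, mono / g
    return np.stack([left, right])   # shape (2, n_samples)
```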


Subject(s)
Auditory Perception/physiology , Auditory Threshold/physiology , Hearing Loss, Sensorineural/physiopathology , Hearing/physiology , Speech Intelligibility , Speech Perception/physiology , Acoustic Stimulation , Adult , Aged , Case-Control Studies , Female , Humans , Speech Discrimination Tests , Young Adult
11.
Hear Res ; 350: 91-99, 2017 07.
Article in English | MEDLINE | ID: mdl-28460253

ABSTRACT

Results of studies concerning cortical changes in idiopathic sudden sensorineural hearing loss (ISSNHL) are not homogeneous, in particular because of the different neuroimaging techniques implemented and the diverse stages of ISSNHL studied. Considering recent advances in state-of-the-art positron emission tomography (PET) cameras, the aim of this study was to gain more insight into the neuroanatomical differences associated with the earliest stages of unilateral ISSNHL and with changes in clinical-perceptual performance. After an audiological examination including the mean auditory threshold (mean AT), mean speech discrimination score (mean SDS), and Tinnitus Handicap Inventory (THI), 14 right-handed ISSNHL patients underwent brain [18F]fluorodeoxyglucose (FDG)-PET within 72 h of the onset of symptoms. When compared to a homogeneous group of 35 healthy subjects by means of statistical parametric mapping, ISSNHL patients showed a relative increase in FDG uptake in the right superior and medial frontal gyri as well as in the right anterior cingulate cortex. Conversely, the same group showed a significant relative decrease in FDG uptake in the right middle temporal, precentral, and postcentral gyri; in the left posterior cingulate cortex; in the left lingual, superior and middle temporal, and middle frontal gyri; and in the left insula. Regression analysis showed a positive correlation between mean THI and glucose consumption in the right anterior cingulate cortex, and a positive correlation between mean SDS and glucose consumption in the left precentral gyrus. The relative changes in FDG uptake found in these brain regions, and their positive correlation with mean SDS and THI scores in ISSNHL, could highlight new aspects of cerebral rearrangement, helping to further explain changes in the functions that support speech recognition during sudden impairment of unilateral auditory input.


Subject(s)
Auditory Cortex/metabolism , Energy Metabolism , Hearing Loss, Sensorineural/metabolism , Hearing Loss, Sudden/metabolism , Acoustic Stimulation , Adult , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiopathology , Auditory Threshold , Brain Mapping/methods , Case-Control Studies , Disability Evaluation , Female , Fluorodeoxyglucose F18/administration & dosage , Hearing , Hearing Loss, Sensorineural/diagnostic imaging , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/psychology , Hearing Loss, Sudden/diagnostic imaging , Hearing Loss, Sudden/physiopathology , Hearing Loss, Sudden/psychology , Humans , Male , Middle Aged , Positron Emission Tomography Computed Tomography , Radiopharmaceuticals/administration & dosage , Speech Discrimination Tests , Speech Perception
12.
Clin Linguist Phon ; 31(7-9): 598-611, 2017.
Article in English | MEDLINE | ID: mdl-28362227

ABSTRACT

Studies of speech production in French-speaking cochlear-implanted (CI) children are very scarce. Yet, difficulties in speech production have been shown to impact the intelligibility of these children. The goal of this study is to understand the effect of long-term use of cochlear implant on speech production, and more precisely on the coordination of laryngeal-oral gestures in stop production. The participants were all monolingual French children: 13 6;6- to 10;7-year-old CI children and 20 age-matched normally hearing (NH) children. We compared /p/, /t/, /k/, /b/, /d/ and /g/ in word-initial consonant-vowel sequences, produced in isolation in two different tasks, and we studied the effects of CI use, vowel context, task and age factors (i.e. chronological age, age at implantation and duration of implant use). Statistical analyses show a difference in voicing production between groups for voiceless consonants (shorter Voice Onset Times for CI children), with significance reached only for /k/, but no difference for voiced consonants. Our study indicates that in the long run, use of CI seems to have limited effects on the acquisition of oro-laryngeal coordination needed to produce voicing, except for specific difficulties located on velars. In a follow-up study, further acoustic analyses on vowel and fricative production by the same children reveal more difficulties, which suggest that cochlear implantation impacts frequency-based features (second formant of vowels and spectral moments of fricatives) more than durational cues (voicing).


Subject(s)
Acoustic Stimulation , Cochlear Implants , Speech Discrimination Tests , Voice , Child , Cochlear Implantation , Female , France , Humans , Language , Male , Phonetics
13.
Codas ; 29(1): e20160082, 2017 Mar 09.
Article in Portuguese, English | MEDLINE | ID: mdl-28300960

ABSTRACT

INTRODUCTION: The ability to recognize the sounds of speech enables efficient communication and must be considered when communication disorders are evaluated. In this study, sentences from the Hearing in Noise Test (HINT), originally developed in English and adapted to Brazilian Portuguese, were used to evaluate speech recognition in silence and in the presence of noise. Although this test can be an important clinical tool, it has not been used in audiological clinical practice in Brazil. One possible reason is the lack of standardization of some aspects of the test, including the criteria adopted to judge the patient's answers. PURPOSE: To analyze different criteria for judging individuals' answers during the measurement of sentence recognition thresholds using the HINT in Brazilian Portuguese. METHODS: The study was conducted with 30 young adults (three groups of 10), between 18 and 25 years old, of both genders, with normal hearing. HINT sentences adapted to Brazilian Portuguese were used, and sentence recognition thresholds were determined in the presence of noise using three judgment criteria published in the Brazilian literature. A one-way analysis of variance was performed to compare the mean thresholds among the three groups, with a 5% significance level for rejecting the null hypothesis. RESULTS: The means and standard deviations of the thresholds were 59.90 ± 1.43 dB SPL, 59.60 ± 0.53 dB SPL, and 59.95 ± 0.60 dB SPL, respectively. There was no statistically significant difference between the means (F = 0.398; p > 0.05). CONCLUSION: Regardless of the judgment criterion used, the results obtained in all groups were equivalent.
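The group comparison reported above is a one-way ANOVA across the three judgment-criterion groups at a 5% significance level; a sketch using synthetic data drawn to match the reported group means and standard deviations (the actual thresholds are not available here, so the resulting F and p values are illustrative only):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
# Hypothetical sentence recognition thresholds in noise (dB SPL) for three
# judgment criteria, 10 listeners per group, matching the reported
# means and standard deviations.
g1 = rng.normal(59.90, 1.43, 10)
g2 = rng.normal(59.60, 0.53, 10)
g3 = rng.normal(59.95, 0.60, 10)

f_stat, p_val = f_oneway(g1, g2, g3)   # one-way ANOVA across the 3 groups
alpha = 0.05                           # 5% significance level
print(f"F = {f_stat:.3f}, p = {p_val:.3f}, reject H0: {p_val < alpha}")
```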


Subject(s)
Acoustic Stimulation/methods , Auditory Threshold/physiology , Language , Speech Perception/physiology , Adolescent , Adult , Audiometry, Speech , Female , Hearing Tests , Humans , Male , Reproducibility of Results , Signal-To-Noise Ratio , Speech Discrimination Tests , Young Adult
14.
CoDAS ; 29(1): e20160082, 2017. tab, graf
Article in Portuguese | LILACS | ID: biblio-840099

ABSTRACT

INTRODUCTION: The ability to recognize the sounds of speech enables efficient communication and must be considered when communication disorders are evaluated. In this study, sentences from the Hearing in Noise Test (HINT), originally developed in English and adapted to Brazilian Portuguese, were used to evaluate speech recognition in silence and in the presence of noise. Although this test can be an important clinical tool, it has not been used in audiological clinical practice in Brazil. One possible reason is the lack of standardization of some aspects of the test, including the criteria adopted to judge the patient's answers. PURPOSE: To analyze different criteria for judging individuals' answers during the measurement of sentence recognition thresholds using the HINT in Brazilian Portuguese. METHODS: The study was conducted with 30 young adults (three groups of 10), between 18 and 25 years old, of both genders, with normal hearing. HINT sentences adapted to Brazilian Portuguese were used, and sentence recognition thresholds were determined in the presence of noise using three judgment criteria published in the Brazilian literature. A one-way analysis of variance was performed to compare the mean thresholds among the three groups, with a 5% significance level for rejecting the null hypothesis. RESULTS: The means and standard deviations of the thresholds were 59.90 ± 1.43 dB SPL, 59.60 ± 0.53 dB SPL, and 59.95 ± 0.60 dB SPL, respectively. There was no statistically significant difference between the means (F = 0.398; p > 0.05). CONCLUSION: Regardless of the judgment criterion used, the results obtained in all groups were equivalent.


Subject(s)
Humans , Male , Female , Adolescent , Adult , Young Adult , Auditory Threshold/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Language , Audiometry, Speech , Speech Discrimination Tests , Reproducibility of Results , Signal-To-Noise Ratio , Hearing Tests
15.
J Am Acad Audiol ; 27(9): 701-713, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27718347

ABSTRACT

BACKGROUND: Although most cochlear implant (CI) users achieve improvements in speech perception, there is still a wide variability in speech perception outcomes. There is a growing body of literature that supports the relationship between individual differences in temporal processing and speech perception performance in CI users. Previous psychophysical studies have emphasized the importance of temporal acuity for overall speech perception performance. Measurement of gap detection thresholds (GDTs) is the most common measure currently used to assess temporal resolution. However, most GDT studies completed with CI participants used direct electrical stimulation not acoustic stimulation and they used psychoacoustic research paradigms that are not easy to administer clinically. Therefore, it is necessary to determine if the variance in GDTs assessed with clinical measures of temporal processing such as the Randomized Gap Detection Test (RGDT) can be used to explain the variability in speech perception performance. PURPOSE: The primary goal of this study was to investigate the relationship between temporal processing and speech perception performance in CI users. RESEARCH DESIGN: A correlational study investigating the relationship between behavioral GDTs (assessed with the RGDT or the Expanded Randomized Gap Detection Test) and commonly used speech perception measures (assessed with the Speech Recognition Test [SRT], Central Institute for the Deaf W-22 Word Recognition Test [W-22], Consonant-Nucleus-Consonant Test [CNC], Arizona Biomedical Sentence Recognition Test [AzBio], Bamford-Kowal-Bench Speech-in-Noise Test [BKB-SIN]). STUDY SAMPLE: Twelve postlingually deafened adult CI users (24-83 yr) and ten normal-hearing (NH; 22-30 yr) adults participated in the study. DATA COLLECTION AND ANALYSIS: The data were collected in a sound-attenuated test booth. After measuring pure-tone thresholds, GDTs and speech perception performance were measured. 
The difference in performance between the participant groups on the aforementioned tests, as well as the correlation between GDTs and speech perception performance, was examined. The correlations between participants' biographic factors, performance on the RGDT, and speech perception measures were also explored. RESULTS: Although some CI participants performed as well as the NH listeners, the majority of the CI participants displayed temporal processing impairments (GDTs > 20 msec) and poorer speech perception performance than NH participants. A statistically significant difference was found between the NH and CI test groups in GDTs and some speech tests (SRT, W-22, and BKB-SIN). For the CI group, there were significant correlations between GDTs and some measures of speech perception (CNC Phoneme, AzBio, BKB-SIN); however, no significant correlations were found between biographic factors and GDTs or speech perception performance. CONCLUSIONS: Results support the theory that the variability in temporal acuity in CI users contributes to the variability in speech performance. Results also indicate that it is reasonable to use the clinically available RGDT to identify CI users with temporal processing impairments for further appropriate rehabilitation.
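The correlational analysis this abstract describes (GDTs vs. speech scores across participants) can be illustrated with a minimal sketch. This is not the study's code, and the threshold and score values below are invented placeholders chosen only to show the expected direction of the relationship:

```python
import math

# Hypothetical gap detection thresholds (msec) and hypothetical
# CNC phoneme scores (% correct) for eight imaginary CI users.
gdt_msec = [5, 8, 12, 20, 25, 40, 60, 80]
speech_pct = [92, 88, 85, 70, 65, 50, 40, 35]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(gdt_msec, speech_pct)
# A strongly negative r: longer gap detection thresholds (poorer
# temporal acuity) go with lower speech perception scores.
print(f"r = {r:.2f}")
```

A real analysis would also report a p-value and handle the small-sample caveats the study's twelve participants imply; this sketch only shows the computation at the core of such a correlational design.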


Subject(s)
Cochlear Implants , Speech Perception , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Case-Control Studies , Cochlear Implantation , Deafness/physiopathology , Deafness/therapy , Female , Humans , Male , Middle Aged , Noise , Speech Discrimination Tests , Time Factors , Young Adult
16.
Braz. j. otorhinolaryngol. (Impr.) ; 82(3): 334-340, tab, graf
Article in English | LILACS | ID: lil-785818

ABSTRACT

ABSTRACT INTRODUCTION: Hearing loss can negatively influence the communication performance of individuals, who should be evaluated with suitable material and in listening situations close to those found in everyday life. OBJECTIVE: To analyze and compare the performance of patients with mild-to-moderate sensorineural hearing loss in speech recognition tests carried out in silence and with noise, according to the variables ear (right and left) and type of stimulus presentation. METHODS: The study included 19 right-handed individuals with mild-to-moderate symmetrical bilateral sensorineural hearing loss, submitted to the speech recognition test with words in different modalities and speech test with white noise and pictures. RESULTS: There was no significant difference between right and left ears in any of the tests. The mean percentage of correct responses in the speech recognition test with pictures, live voice, and recorded monosyllables was 97.1%, 85.9%, and 76.1%, respectively, whereas after the introduction of noise, the performance decreased to 72.6% accuracy. CONCLUSIONS: The best performances in the Speech Recognition Percentage Index were obtained using monosyllabic stimuli, represented by pictures presented in silence, with no significant differences between the right and left ears. After the introduction of competitive noise, there was a decrease in individuals' performance.


Subject(s)
Humans , Male , Female , Adolescent , Adult , Middle Aged , Young Adult , Speech Discrimination Tests/methods , Speech Perception/physiology , Hearing Loss, Bilateral/physiopathology , Hearing Loss, Sensorineural/physiopathology , Sound Localization , Acoustic Stimulation , Severity of Illness Index
17.
Neuropsychologia ; 87: 169-181, 2016 07 01.
Article in English | MEDLINE | ID: mdl-27212057

ABSTRACT

There is a high degree of variability in speech intelligibility outcomes across cochlear-implant (CI) users. To better understand how auditory cognition affects speech intelligibility with the CI, we performed an electroencephalography study in which we examined the relationship between central auditory processing, cognitive abilities, and speech intelligibility. Postlingually deafened CI users (N=13) and matched normal-hearing (NH) listeners (N=13) performed an oddball task with words presented in different background conditions (quiet, stationary noise, modulated noise). Participants had to categorize words as living (targets) or non-living entities (standards). We also assessed participants' working memory (WM) capacity and verbal abilities. For the oddball task, we found lower hit rates and prolonged response times in CI users when compared with NH listeners. Noise-related prolongation of the N1 amplitude was found for all participants. Further, we observed group-specific modulation effects of event-related potentials (ERPs) as a function of background noise. While NH listeners showed stronger noise-related modulation of the N1 latency, CI users revealed enhanced modulation effects of the N2/N4 latency. In general, higher-order processing (N2/N4, P3) was prolonged in CI users in all background conditions when compared with NH listeners. Longer N2/N4 latency in CI users suggests that these individuals have difficulty mapping acoustic-phonetic features onto lexical representations. These difficulties seem to be increased for speech-in-noise conditions when compared with speech in a quiet background. Correlation analyses showed that shorter ERP latencies were related to enhanced speech intelligibility (N1, N2/N4), better lexical fluency (N1), and lower ratings of listening effort (N2/N4) in CI users. 
In sum, our findings suggest that CI users and NH listeners differ with regard to both the sensory and the higher-order processing of speech in quiet as well as in noisy background conditions. Our results also revealed that verbal abilities are related to speech processing and speech intelligibility in CI users, confirming the view that auditory cognition plays an important role in CI outcome. We conclude that differences in auditory-cognitive processing contribute to the variability in speech performance outcomes observed in CI users.


Subject(s)
Brain/physiopathology , Cochlear Implants , Cognition/physiology , Hearing Loss/physiopathology , Hearing Loss/rehabilitation , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Aged , Electroencephalography , Evoked Potentials , Female , Humans , Language , Language Tests , Male , Memory, Short-Term , Middle Aged , Neuropsychological Tests , Speech Discrimination Tests
18.
J Am Acad Audiol ; 27(2): 85-102, 2016 02.
Article in English | MEDLINE | ID: mdl-26905529

ABSTRACT

BACKGROUND: Cochlear implants (CIs) have been shown to improve children's speech recognition over traditional amplification when severe-to-profound sensorineural hearing loss is present. Despite improvements, understanding speech at low-level intensities or in the presence of background noise remains difficult. In an effort to improve speech understanding in challenging environments, Cochlear Ltd. offers preprocessing strategies that apply various algorithms before mapping the signal to the internal array. Two of these strategies include Autosensitivity Control™ (ASC) and Adaptive Dynamic Range Optimization (ADRO(®)). Based on previous research, the manufacturer's default preprocessing strategy for pediatric users' everyday programs combines ASC + ADRO(®). PURPOSE: The purpose of this study is to compare pediatric speech perception performance across various preprocessing strategies while applying a specific programming protocol using increased threshold levels to ensure access to very low-level sounds. RESEARCH DESIGN: This was a prospective, cross-sectional, observational study. Participants completed speech perception tasks in four preprocessing conditions: no preprocessing, ADRO(®), ASC, and ASC + ADRO(®). STUDY SAMPLE: Eleven pediatric Cochlear Ltd. CI users were recruited: six bilateral, one unilateral, and four bimodal. INTERVENTION: Four programs, with the participants' everyday map, were loaded into the processor with different preprocessing strategies applied in each of the four programs: no preprocessing, ADRO(®), ASC, and ASC + ADRO(®). DATA COLLECTION AND ANALYSIS: Participants repeated consonant-nucleus-consonant (CNC) words presented at 50 and 70 dB SPL in quiet and Hearing in Noise Test (HINT) sentences presented adaptively with competing R-Space(TM) noise at 60 and 70 dB SPL. Each measure was completed as participants listened with each of the four preprocessing strategies listed above. Test order and conditions were randomized. 
A repeated-measures analysis of variance was used to compare each preprocessing strategy for the group. Critical differences were used to determine significant score differences between each preprocessing strategy for individual participants. RESULTS: For CNC words presented at 50 dB SPL, the group data revealed significantly better scores using ASC + ADRO(®) compared to all other preprocessing conditions, while ASC resulted in poorer scores compared to ADRO(®) and ASC + ADRO(®). Group data for HINT sentences presented in 70 dB SPL of R-Space(TM) noise revealed significantly improved scores using ASC and ASC + ADRO(®) compared to no preprocessing, with ASC + ADRO(®) scores being better than ADRO(®)-alone scores. Group data for CNC words presented at 70 dB SPL and adaptive HINT sentences presented in 60 dB SPL of R-Space(TM) noise showed no significant difference among conditions. Individual data showed that the preprocessing strategy yielding the best scores varied across measures and participants. CONCLUSIONS: Group data reveal an advantage with ASC + ADRO(®) for speech perception presented at lower levels and in higher levels of background noise. Individual data revealed that the optimal preprocessing strategy varied among participants, indicating that a variety of preprocessing strategies should be explored for each CI user considering his or her performance in challenging listening environments.
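The within-subject comparison of the four preprocessing conditions described above can be sketched schematically. The scores below are hypothetical placeholders, not the study's data; the sketch only shows how group means per condition would be tabulated and the best-scoring strategy identified, mirroring the group-level comparison the abstract reports:

```python
# Hypothetical CNC word scores (% correct) at 50 dB SPL for three
# imaginary participants, one list per preprocessing condition.
scores = {
    "none":     [44, 52, 38],
    "ADRO":     [50, 58, 45],
    "ASC":      [40, 48, 36],
    "ASC+ADRO": [58, 66, 52],
}

# Group mean per condition; in the study, ASC + ADRO gave the best
# group scores for soft speech (50 dB SPL).
means = {cond: sum(v) / len(v) for cond, v in scores.items()}
best = max(means, key=means.get)
print(best, round(means[best], 1))
```

The actual study tested significance with a repeated-measures ANOVA and per-participant critical differences, which this illustration deliberately omits.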


Subject(s)
Cochlear Implants , Deafness/rehabilitation , Acoustic Stimulation/methods , Adolescent , Analysis of Variance , Auditory Threshold/physiology , Child , Cross-Sectional Studies , Female , Humans , Male , Mental Processes/physiology , Noise , Perceptual Masking/physiology , Speech Discrimination Tests , Speech Perception/physiology
19.
PLoS One ; 11(1): e0145610, 2016.
Article in English | MEDLINE | ID: mdl-26730702

ABSTRACT

Vowel identification in noise using consonant-vowel-consonant (CVC) logatomes was used to investigate a possible interplay of speech information from different frequency regions. It was hypothesized that the periodicity conveyed by the temporal envelope of a high frequency stimulus can enhance the use of the information carried by auditory channels in the low-frequency region that share the same periodicity. It was further hypothesized that this acts as a strobe-like mechanism and would increase the signal-to-noise ratio for the voiced parts of the CVCs. In a first experiment, different high-frequency cues were provided to test this hypothesis, whereas a second experiment examined more closely the role of amplitude modulations and intact phase information within the high-frequency region (4-8 kHz). CVCs were either natural or vocoded speech (both limited to a low-pass cutoff-frequency of 2.5 kHz) and were presented in stationary 3-kHz low-pass filtered masking noise. The experimental results did not support the hypothesized use of periodicity information for aiding low-frequency perception.


Subject(s)
Noise , Pitch Perception/physiology , Speech Acoustics , Speech Intelligibility/physiology , Acoustic Stimulation , Adult , Cues , Female , Humans , Male , Perceptual Masking/physiology , Phonetics , Random Allocation , Speech Discrimination Tests , Voice
20.
Auris Nasus Larynx ; 43(5): 495-500, 2016 Oct.
Article in English | MEDLINE | ID: mdl-26739945

ABSTRACT

OBJECTIVE: To assess possible delayed recovery of the maximum speech discrimination score (SDS) after the audiometric threshold ceases to change. METHODS: We retrospectively examined 20 patients with idiopathic sudden sensorineural hearing loss (ISSNHL) (gender: 9 males and 11 females, age: 24-71 years). The findings of pure-tone average (PTA), maximum SDS, auditory brainstem responses (ABRs), and tinnitus handicap inventory (THI) were compared among the three periods of 1-3 months, 6-8 months, and 11-13 months after ISSNHL onset. RESULTS: No significant differences were noted in PTA, whereas an increase of greater than or equal to 10% in maximum SDS was recognized in 9 patients (45%) from the period of 1-3 months to the period of 11-13 months. Four of the 9 patients showed 20% or more recovery of maximum SDS. No significant differences were observed in the interpeak latency difference between waves I and V and the interaural latency difference of wave V in ABRs, whereas an improvement in the THI grade was recognized in 11 patients (55%) from the period of 1-3 months to the period of 11-13 months. CONCLUSION: The present study suggests that maximum SDS can continue to recover for more than 1 year after ISSNHL onset. These findings may reflect auditory plasticity via the central auditory pathway.


Subject(s)
Evoked Potentials, Auditory, Brain Stem , Hearing Loss, Sudden/physiopathology , Recovery of Function , Speech Perception , Tinnitus/physiopathology , Adenosine Triphosphate/therapeutic use , Adrenal Cortex Hormones/therapeutic use , Adult , Aged , Audiometry, Pure-Tone , Dexamethasone/therapeutic use , Female , Hearing Loss, Sudden/therapy , Humans , Hyperbaric Oxygenation , Injection, Intratympanic , Male , Middle Aged , Retrospective Studies , Speech Discrimination Tests , Surveys and Questionnaires , Time Factors , Vitamin B 12/therapeutic use , Vitamin B Complex/therapeutic use , Young Adult