1.
J Speech Lang Hear Res ; 67(1): 282-295, 2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38092067

ABSTRACT

PURPOSE: The aim of this study was to determine the effects of residual hearing slopes and cochlear implant frequency map settings on bimodal and electric-acoustic stimulation (EAS) benefits in speech perception. METHOD: Adults with normal hearing were recruited for simulated bimodal and EAS hearing. Sentence perception was measured unilaterally and bilaterally. For the acoustic stimulation, three slopes of high-frequency hearing loss were created using low-pass filters with a cutoff frequency of 500 Hz: steep (96 dB/octave), medium (48 dB/octave), and shallow (24 dB/octave). For the electric stimulation, an eight-channel sinewave vocoder with a fixed output frequency range (1000-7938 Hz) was used with three input frequency ranges, creating overlap (188-7938 Hz), meet (500-7938 Hz), and gap (750-7938 Hz) frequency maps relative to the cutoff frequency of the acoustic stimulation. RESULTS: The largest bimodal/EAS benefit occurred with the shallow slope, and the smallest occurred with the steep slope. The effects of the slopes on bimodal/EAS benefit were greatest with the meet or gap map and least with the overlap map. EAS benefit was greater than bimodal benefit at higher signal-to-noise ratios regardless of frequency map. CONCLUSIONS: The results indicate that the correlation between bimodal/EAS benefit and residual hearing could improve if slopes were considered. The optimal frequency map differed across slopes, suggesting that the slope of residual hearing should be carefully considered when fitting bimodal and EAS hearing. EAS hearing provided greater benefit than bimodal hearing, suggesting that spectrotemporal integration is better within one ear (i.e., EAS) than across ears (i.e., bimodal).
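A minimal Python sketch of the two simulated pathways described above, assuming a mono signal x at a sample rate fs well above 16 kHz. The Butterworth filter orders (roughly 6 dB/octave per order), log-spaced band edges, and 50-Hz envelope cutoff are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def acoustic_sim(x, fs, slope_db_per_oct, fc=500.0):
    """Simulated residual acoustic hearing: a low-pass Butterworth filter whose
    order is chosen so the roll-off approximates 24, 48, or 96 dB/octave."""
    order = int(round(slope_db_per_oct / 6))          # ~6 dB/octave per filter order
    sos = butter(order, fc, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def sine_vocoder(x, fs, n_ch=8, f_in=(188.0, 7938.0), f_out=(1000.0, 7938.0)):
    """Simulated electric hearing: analysis bands span f_in (the overlap, meet,
    or gap map); sinusoidal carriers sit at the centers of bands spanning f_out."""
    in_edges = np.geomspace(f_in[0], f_in[1], n_ch + 1)
    out_edges = np.geomspace(f_out[0], f_out[1], n_ch + 1)
    t = np.arange(len(x)) / fs
    y = np.zeros(len(x))
    env_sos = butter(2, 50.0, btype="low", fs=fs, output="sos")
    for k in range(n_ch):
        sos = butter(4, [in_edges[k], in_edges[k + 1]], btype="band", fs=fs, output="sos")
        env = sosfiltfilt(env_sos, np.abs(hilbert(sosfiltfilt(sos, x))))  # smoothed band envelope
        fc_out = np.sqrt(out_edges[k] * out_edges[k + 1])                 # carrier at band center
        y += env * np.sin(2 * np.pi * fc_out * t)
    return y / (np.max(np.abs(y)) + 1e-12)
```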


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Humans, Acoustic Stimulation/methods, Hearing/physiology, Speech Perception/physiology, Electric Stimulation/methods
2.
Am J Audiol ; 32(1): 170-181, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36580493

ABSTRACT

PURPOSE: The purpose of this study was to determine the feasibility of online testing in a quiet room for three auditory perception experiments in normal-hearing listeners: speech, music, and binaural cues. METHOD: In Experiment 1, sentence perception was measured using fixed signal-to-noise ratios (SNRs: +10 dB, 0 dB, and -10 dB) and using adaptive speech reception threshold (SRT) procedures. The correct scores were compared between quiet room and soundproof booth listening environments. Experiment 2 was designed to compare melodic contour identification between the two listening environments. Melodic contour identification was assessed with 1-, 2-, and 4-semitone spacings. In Experiment 3, interaural level differences (ILDs) and interaural time differences (ITDs) were measured as a function of carrier frequency. For both measures, two modulated tones (400-ms duration and 100-Hz modulation rate) were sequentially presented through headphones to both ears, and subjects were asked to indicate whether the sound moved to the left or right ear. The measured ITDs and ILDs were then compared between the two listening environments. RESULTS: There were no significant differences in any outcome measure (SNR- and SRT-based speech perception, melodic contour identification, and ITD/ILD) between the two listening environments. CONCLUSIONS: These results suggest that normal-hearing listeners may not require a controlled listening environment for any of the three auditory assessments. As comparable data can be obtained via online testing, the use of online auditory experiments is recommended.
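A hedged sketch of the kind of stimulus used in Experiment 3: a 400-ms, 100-Hz sinusoidally amplitude-modulated tone, lateralized by an interaural level or time offset. The function names, carrier frequency, and the way intervals are sequenced within a trial are assumptions rather than the authors' code.

```python
import numpy as np

def modulated_tone(fc, fs=44100, dur=0.4, fmod=100.0, depth=1.0):
    """400-ms tone at carrier fc with 100-Hz sinusoidal amplitude modulation."""
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * fc * t) * (1 + depth * np.sin(2 * np.pi * fmod * t)) / 2

def apply_ild_itd(sig, fs=44100, ild_db=0.0, itd_us=0.0):
    """Stereo (left, right) pair with the requested interaural level difference
    (dB, positive = right louder) and time difference (us, positive = right leading)."""
    shift = int(round(abs(itd_us) * 1e-6 * fs))
    left, right = sig.copy(), sig.copy()
    if itd_us > 0:                                   # right ear leads, so delay the left ear
        left = np.r_[np.zeros(shift), left[:-shift or None]]
    elif itd_us < 0:
        right = np.r_[np.zeros(shift), right[:-shift or None]]
    left *= 10 ** (-ild_db / 40)                     # split the level difference across ears
    right *= 10 ** (ild_db / 40)
    return np.stack([left, right])

stereo = apply_ild_itd(modulated_tone(fc=4000.0), ild_db=4.0)   # e.g., a 4-dB ILD stimulus
```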


Subjects
Music, Speech Perception, Humans, Cues, Speech, Hearing, Auditory Threshold
3.
Front Psychol ; 13: 1009463, 2022.
Article in English | MEDLINE | ID: mdl-36337493

ABSTRACT

Dichotic spectral integration range, or DSIR, was measured for consonant recognition in normal-hearing listeners. DSIR is defined as the frequency range within the 0-8,000 Hz band needed in one ear for consonant recognition when low-frequency information of the same consonant is presented to the opposite ear. DSIR was measured under three signal processing conditions: (1) unprocessed, (2) target: the target spectro-temporal regions responsible for consonant recognition intensified by 6 dB, and (3) target minus conflicting: the intensified target regions minus the spectro-temporal regions that increase confusion. Each consonant was low-pass filtered with a cutoff frequency of 250, 500, 750, or 1,000 Hz and then presented to the left, or low-frequency (LF), ear. To create dichotic listening, the same consonant was simultaneously presented to the right, or high-frequency (HF), ear. This signal was high-pass filtered with an initial cutoff frequency of 7,000 Hz, which was adjusted using an adaptive procedure to find the maximum high-pass cutoff for 99.99% correct consonant recognition. Mean DSIRs spanned from 3,198-8,000 Hz to 4,668-8,000 Hz (i.e., mid-to-high frequencies were unnecessary), depending on the low-frequency information in the LF ear. DSIRs narrowed (i.e., required less frequency information) with increasing low-frequency information in the LF ear. However, the mean DSIRs were not significantly affected by the signal processing except at the low-pass cutoff frequency of 250 Hz. The individual consonant analyses revealed that /ta/, /da/, /sa/, and /za/ required the smallest DSIRs, while /ka/, /ga/, /fa/, and /va/ required the largest DSIRs. DSIRs also narrowed with increasing low-frequency information for the two signal processing conditions, except for 250 vs. 1,000 Hz under the target-minus-conflicting condition. The results suggest that consonant recognition is possible with large amounts of spectral information missing if complementary spectral information is integrated across ears. DSIR is consonant-specific and relatively consistent regardless of signal processing. The results will help determine the minimum spectral range needed in one ear for consonant recognition when limited low-frequency information is available in the opposite ear.
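A minimal sketch of the dichotic stimulus construction described above: the LF ear receives a low-pass-filtered consonant and the HF ear receives the same token high-pass filtered at a cutoff that the adaptive procedure would then adjust. The filter order is an assumption, and the adaptive rule itself is not reproduced.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def dichotic_pair(x, fs, lp_cutoff=500.0, hp_cutoff=7000.0, order=8):
    """Return a 2-channel array: the low-passed token for the LF ear and the same
    token high-passed for the HF ear (hp_cutoff is what the adaptive track varies)."""
    lf_ear = sosfiltfilt(butter(order, lp_cutoff, btype="low", fs=fs, output="sos"), x)
    hf_ear = sosfiltfilt(butter(order, hp_cutoff, btype="high", fs=fs, output="sos"), x)
    return np.stack([lf_ear, hf_ear])
```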

4.
Front Psychol ; 13: 918914, 2022.
Article in English | MEDLINE | ID: mdl-36051201

ABSTRACT

A previous study demonstrated that consonant recognition improved significantly in normal-hearing listeners when useful frequency and time ranges were intensified by 6 dB. The goal of this study was to determine whether bilateral cochlear implant (BCI) and bilateral hearing aid (BHA) users experience similar enhancement of consonant recognition with these intensified spectral and temporal cues in noise. In total, 10 BCI and 10 BHA users participated in a recognition test using 14 consonants. For each consonant, we used the frequency and time ranges that are critical for its recognition (the "target frequency and time ranges"), identified from normal-hearing listeners. A signal processing tool called the articulation-index gram (AI-Gram) was then used to add a 6 dB gain to the target frequency and time ranges. Consonant recognition was measured monaurally and binaurally under two signal processing conditions (unprocessed and intensified target frequency and time ranges) at +5 and +10 dB signal-to-noise ratios and in quiet. We focused on three comparisons between the BCI and BHA groups: (1) AI-Gram benefits (i.e., before vs. after intensifying the target ranges by 6 dB), (2) enhancement of binaural benefits (better performance with bilateral devices than with the better ear alone) via AI-Gram processing, and (3) reduction of binaural interference (poorer performance with bilateral devices than with the better ear alone) via AI-Gram processing. The results showed that the mean AI-Gram benefit was significant for both the BCI (max 5.9%) and BHA (max 5.2%) groups. However, the mean binaural benefit was not improved after AI-Gram processing. Individual data showed wide ranges of AI-Gram benefit (max -1 to 23%) and binaural benefit (max -7.6 to 13%) for both groups. Individual data also showed a decrease in binaural interference in both groups after AI-Gram processing. These results suggest that the frequency and time ranges intensified by AI-Gram processing contribute to consonant enhancement for monaural and binaural listening with both BCI and BHA technologies. The intensified frequency and time ranges helped to reduce binaural interference but contributed less to a synergistic binaural benefit in consonant recognition for both groups.
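A hedged STFT-based sketch of the kind of spectro-temporal intensification described above: a 6-dB gain applied to one rectangular target time-frequency region. The actual AI-Gram tool and the per-consonant target ranges are not reproduced; the frame length and the rectangular region shape are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def boost_region(x, fs, t_range, f_range, gain_db=6.0, nperseg=512):
    """Apply gain_db to STFT cells inside t_range (s) x f_range (Hz), then resynthesize."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mask = np.outer((f >= f_range[0]) & (f <= f_range[1]),
                    (t >= t_range[0]) & (t <= t_range[1]))
    Z = np.where(mask, Z * 10 ** (gain_db / 20), Z)
    _, y = istft(Z, fs=fs, nperseg=nperseg)
    return y[:len(x)]
```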

5.
J Am Acad Audiol ; 32(8): 521-527, 2021 09.
Article in English | MEDLINE | ID: mdl-34965598

ABSTRACT

BACKGROUND: Cochlear implant technology allows acoustic and electric stimulation to be combined across ears (bimodal) and within the same ear (electric-acoustic stimulation [EAS]). The mechanisms used to integrate speech acoustics may differ between bimodal and EAS hearing, and the configuration of hearing loss might be an important factor in that integration. Thus, it is important to differentiate the effects of different configurations of hearing loss on bimodal or EAS benefit in speech perception (the difference in performance between combined acoustic and electric stimulation and the better stimulation alone). PURPOSE: Using acoustic simulation, we determined how consonant recognition was affected by different configurations of hearing loss in bimodal and EAS hearing. RESEARCH DESIGN: A mixed design was used with one between-subjects variable (simulated bimodal group vs. simulated EAS group) and one within-subjects variable (acoustic stimulation alone, electric stimulation alone, and combined acoustic and electric stimulation). STUDY SAMPLE: Twenty adult subjects (10 per group) with normal hearing were recruited. DATA COLLECTION AND ANALYSIS: Consonant perception was measured unilaterally or bilaterally in quiet. For the acoustic stimulation, four different simulations of hearing loss were created by band-pass filtering consonants with a fixed lower cutoff frequency of 100 Hz and one of four upper cutoff frequencies: 250, 500, 750, or 1,000 Hz. For the electric stimulation, an eight-channel noise vocoder was used to generate a typical spectral mismatch, with fixed input (200-7,000 Hz) and output (1,000-7,000 Hz) frequency ranges. The effects of simulated hearing loss on consonant recognition were compared between the two groups. RESULTS: Significant bimodal and EAS benefits occurred regardless of the configuration of hearing loss and hearing technology (bimodal vs. EAS). Place information was better transmitted in EAS hearing than in bimodal hearing. CONCLUSION: These results suggest that the configuration of hearing loss is not a significant factor in integrating consonant information between acoustic and electric stimulation. The results also suggest that the mechanisms used to integrate consonant information may be similar between bimodal and EAS hearing.
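A minimal sketch of the simulated electric stimulation described above: an eight-channel noise vocoder that analyzes a fixed 200-7,000 Hz input range and resynthesizes onto a 1,000-7,000 Hz output range. Log-spaced band edges, filter orders, the 50-Hz envelope cutoff, and a sample rate of at least 16 kHz are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(x, fs, n_ch=8, f_in=(200.0, 7000.0), f_out=(1000.0, 7000.0), seed=0):
    """Extract band envelopes over f_in and use them to modulate band-limited noise over f_out."""
    rng = np.random.default_rng(seed)
    in_edges = np.geomspace(f_in[0], f_in[1], n_ch + 1)
    out_edges = np.geomspace(f_out[0], f_out[1], n_ch + 1)
    env_sos = butter(2, 50.0, btype="low", fs=fs, output="sos")
    y = np.zeros(len(x))
    for k in range(n_ch):
        sos_in = butter(4, [in_edges[k], in_edges[k + 1]], btype="band", fs=fs, output="sos")
        env = sosfiltfilt(env_sos, np.abs(hilbert(sosfiltfilt(sos_in, x))))   # smoothed envelope
        sos_out = butter(4, [out_edges[k], out_edges[k + 1]], btype="band", fs=fs, output="sos")
        y += sosfiltfilt(sos_out, env * rng.standard_normal(len(x)))          # modulated, band-limited noise
    return y / (np.max(np.abs(y)) + 1e-12)
```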


Subjects
Cochlear Implantation, Cochlear Implants, Hearing Loss, Speech Perception, Acoustic Stimulation, Acoustics, Adult, Electric Stimulation, Hearing, Humans
6.
Front Psychol ; 12: 733100, 2021.
Article in English | MEDLINE | ID: mdl-34867614

ABSTRACT

In this paper, the effects of intensifying useful frequency and time regions (target frequency and time ranges) and of removing detrimental frequency and time regions (conflicting frequency and time ranges) on consonant enhancement were determined. Thirteen normal-hearing (NH) listeners participated in two experiments. In the first experiment, the target and conflicting frequency and time ranges for each consonant were identified under a quiet, dichotic listening condition by analyzing consonant confusion matrices. The target frequency range was defined as the frequency range that provided the highest performance, allowing up to a 40% decrease from peak performance, under both high-pass filtering (HPF) and low-pass filtering (LPF) schemes. The conflicting frequency range was defined as the frequency range that yielded the peak errors of the most confused consonants, allowing up to a 20% decrease from the peak error, under both filtering schemes. The target time range was defined as the consonant segment that provided the highest performance, allowing up to a 40% decrease from that peak performance, when the duration of the consonant was systematically truncated from the onset. The conflicting time ranges were defined as coinciding with the target time range because, if they temporally coincide, the conflicting frequency ranges would be the most detrimental factor affecting the target frequency ranges. In the second experiment, consonant recognition was measured binaurally in noise under three signal processing conditions: unprocessed, target ranges intensified by a 6-dB gain (target), and intensified target ranges combined with removed conflicting ranges (target-conflicting). The results showed that consonant recognition improved significantly with the target condition but deteriorated greatly with the target-conflicting condition. The target condition helped transmit voicing and manner cues, while the target-conflicting condition limited the transmission of these cues. Confusion analyses showed that the effect of the signal processing on consonant improvement was consonant-specific: the unprocessed condition was best for /da, pa, ma, sa/; the target condition was best for /ga, fa, va, za, ʒa/; and the target-conflicting condition was best for /na, ʃa/. Perception of /ba, ta, ka/ was independent of the signal processing. The results suggest that enhancing the target ranges is an efficient way to improve consonant recognition, while the removal of conflicting ranges negatively impacts consonant recognition.

7.
Am J Audiol ; 30(2): 266-274, 2021 Jun 14.
Article in English | MEDLINE | ID: mdl-33769845

ABSTRACT

Purpose We compared frequency difference limens (FDLs) in normal-hearing listeners under two listening conditions: sequential and simultaneous. Method Eighteen adult listeners participated in three experiments. FDL was measured using a method of limits applied to the comparison frequency. In the sequential listening condition, the tones were presented with a half-second interval in between, whereas in the simultaneous listening condition the tones were presented at the same time. For the first experiment, one of four reference tones (125, 250, 500, or 750 Hz), presented to the left ear, was paired with one of four starting comparison tones (250, 500, 750, or 1000 Hz), presented to the right ear. The second and third experiments had the same testing conditions as the first experiment except that the comparison stimuli were two- and three-tone complexes. The subjects were asked whether the tones sounded the same or different. When a subject chose "different," the comparison frequency decreased by 10% of the frequency difference between the reference and comparison tones. FDLs were determined when the subjects chose "same" 3 times in a row. Results FDLs were significantly broader (worse) with simultaneous listening than with sequential listening for the two- and three-tone complex conditions but not for the single-tone condition. The FDLs were narrowest (best) with the three-tone complex under both listening conditions. FDLs broadened as the testing frequencies increased for the single tone and the two-tone complex. The FDLs did not broaden at frequencies > 250 Hz for the three-tone complex. Conclusion The results suggest that sequential and simultaneous frequency discrimination are mediated by different processes at different stages of the auditory pathway for complex tones, but not for pure tones.
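A small sketch of the descending track described above: the comparison frequency steps toward the reference by 10% of the current difference after each "different" response, and the FDL is taken once "same" is reported three times in a row. The simulated listener at the end is only a stand-in for a real response, and the names are hypothetical.

```python
def fdl_track(reference_hz, start_comparison_hz, respond_same):
    """respond_same(ref_hz, comp_hz) -> bool. Returns the frequency difference limen in Hz."""
    comp = start_comparison_hz
    same_streak = 0
    while same_streak < 3:
        if respond_same(reference_hz, comp):
            same_streak += 1                         # need three "same" responses in a row
        else:
            same_streak = 0
            comp -= 0.10 * (comp - reference_hz)     # shrink the difference by 10%
    return comp - reference_hz

# Toy listener that reports "same" once the tones are within 2% of the reference.
print(fdl_track(250.0, 500.0, lambda ref, comp: abs(comp - ref) / ref < 0.02))
```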


Subjects
Auditory Perception, Hearing Tests, Auditory Pathways, Differential Threshold, Hearing, Humans
8.
Am J Audiol ; 30(1): 160-169, 2021 Mar 10.
Article in English | MEDLINE | ID: mdl-33621127

ABSTRACT

Purpose To measure the effect of testing condition (soundproof booth vs. quiet room), test order, and number of test sessions on spectral and temporal processing in normal-hearing (NH) listeners. Method Thirty-two adult NH listeners participated in three experiments. For all three experiments, the stimuli were presented to the left ear at the subjects' most comfortable level through headphones. All tests were administered in an adaptive three-alternative forced-choice paradigm. Experiment 1 was designed to compare the effect of the soundproof booth and quiet room test conditions on the amplitude modulation detection threshold and the modulation frequency discrimination threshold at each of five modulation frequencies. Experiment 2 was designed to compare the effect of two test orders on frequency discrimination thresholds under the quiet room test condition. The thresholds were first measured in ascending and descending order of four pure tones, and then in a counterbalanced order. For Experiment 3, the amplitude discrimination threshold under the quiet room test condition was assessed 3 times to determine the effect of the number of test sessions, and the thresholds were compared across sessions. Results Results showed no significant effect of test environment. Test order was an important variable for frequency discrimination, particularly between piano tones and pure tones. Results also showed no significant difference across test sessions. Conclusions These results suggest that a controlled test environment may not be required for spectral and temporal assessment in NH listeners. Under a quiet test environment, a single outcome measure is sufficient, but test orders should be counterbalanced.
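An illustrative sketch of one adaptive three-alternative forced-choice (3AFC) amplitude modulation detection trial: one of three intervals carries the modulation. The adaptive rule is not specified in the abstract; a 2-down/1-up staircase is assumed here, and all parameter values are placeholders.

```python
import numpy as np

def am_tone(fc, fs, dur, fmod, depth):
    """Tone at fc with sinusoidal amplitude modulation of the given rate and depth."""
    t = np.arange(int(dur * fs)) / fs
    return (1 + depth * np.sin(2 * np.pi * fmod * t)) * np.sin(2 * np.pi * fc * t)

def three_afc_trial(depth, fc=1000.0, fs=44100, dur=0.5, fmod=8.0, rng=None):
    """Return the three intervals and the index of the one carrying the modulation."""
    rng = np.random.default_rng() if rng is None else rng
    target = int(rng.integers(3))
    intervals = [am_tone(fc, fs, dur, fmod, depth if i == target else 0.0) for i in range(3)]
    return intervals, target

# An assumed 2-down/1-up rule would lower `depth` after two correct picks and raise it after an error.
```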


Subjects
Time Perception, Adult, Auditory Threshold, Hearing, Humans
9.
Ann Otol Rhinol Laryngol ; 128(6_suppl): 139S-145S, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31092038

ABSTRACT

OBJECTIVES: The present study investigated the effects of 3-dimensional deep search (3DDS) signal processing on the enhancement of consonant perception in bimodal and normal-hearing listeners. METHODS: Using an articulation-index gram and 3DDS signal processing, consonant segments that greatly affected performance were identified and intensified with a 6-dB gain. Consonant recognition was then measured unilaterally and bilaterally, before and after 3DDS processing, both in quiet and in noise. RESULTS: The 3DDS signal processing provided a benefit to both groups, with greater benefit occurring in noise than in quiet. The benefit rendered by 3DDS was greatest in the binaural listening condition. The ability to integrate acoustic features across ears was also enhanced with 3DDS processing. In listeners with normal hearing, manner and place of articulation were improved in the binaural listening condition. In bimodal listeners, voicing and manner and place of articulation were also improved in the bimodal and hearing-aid-ear-alone conditions. CONCLUSIONS: Consonant recognition was improved with 3DDS in both groups. This observed benefit suggests that 3DDS can be used as an auditory training tool for improved integration and for bimodal users who receive little or no benefit from their current bimodal hearing.


Subjects
Cochlear Implants, Hearing Aids, Hearing Loss/physiopathology, Hearing Loss/rehabilitation, Computer-Assisted Signal Processing, Speech Perception, Adult, Aged, Case-Control Studies, Female, Humans, Male, Middle Aged
10.
Cochlear Implants Int ; 20(3): 106-115, 2019 05.
Article in English | MEDLINE | ID: mdl-30694120

ABSTRACT

OBJECTIVES: To optimize patients' maps in electric-acoustic stimulation (EAS) users based on the degree of postoperative aided hearing thresholds. METHODS: Twenty-one adult EAS patients participated in this study. Patients were subdivided into three groups based on their unaided hearing thresholds: (1) electric complementary (EC, n = 6), patients with ≤30 dB HL at 125-500 Hz and severe to profound hearing loss at higher frequencies who use only electric stimulation; (2) EAS (n = 8), patients with 30-70 dB HL from 125 to 250 Hz and profound hearing loss at high frequencies who use combined EAS; and (3) Marginal EAS (M-EAS, n = 7), patients with 70-95 dB HL at frequencies ≤250 Hz who use combined EAS. Sentence perception in noise, melodic contour identification, and subjective preference were measured using Full Overlap, Narrow Overlap, Gap, and Meet maps. RESULTS: Of the 21 patients who participated, 12 were classified as having complete hearing preservation and 9 as having partial hearing preservation. The highest-performing maps for sentence-in-noise perception and melodic contour identification were Gap, Meet, and Full Overlap for the EC, EAS, and M-EAS groups, respectively. These results were consistent across different test materials and aligned with subjective preference as well. CONCLUSION: These results suggest that clinical fitting in EAS listening should be individually tailored. EAS performance can be enhanced by optimizing maps between acoustic and electric stimulation based on the degree of aided hearing thresholds.


Subjects
Acoustic Stimulation/methods, Auditory Threshold, Deafness/physiopathology, Electric Stimulation/methods, Speech Perception, Adult, Pure-Tone Audiometry, Cochlear Implantation, Deafness/therapy, Female, Humans, Male, Middle Aged, Noise, Postoperative Period, Treatment Outcome, Young Adult
12.
Cochlear Implants Int ; 16(3): 159-67, 2015 May.
Article in English | MEDLINE | ID: mdl-25329752

ABSTRACT

Objectives The present study characterizes the relationship between bimodal benefit and hearing aid (HA) performance, cochlear implant (CI) performance, and the difference in the performances of the two devices. Methods Fourteen adult bimodal listeners participated in the study. Consonant, vowel, and sentence recognition were measured in quiet and in noise (at +5 and +10 dB signal-to-noise ratios (SNRs)) with the HA alone, the CI alone, and the combined use of the HA and CI in each listener. Speech and noise were presented from directly in front of the listener. Results The correlation analyses showed that bimodal benefit was significantly associated with the difference in performance between the CI and the HA for all testing materials, with HA-alone performance for vowel recognition, and with CI-alone performance for sentence recognition. However, regression analyses showed that the independent contribution of the difference in performance across ears to bimodal benefit was significant, irrespective of the testing material or the SNR: the smaller the difference, the greater the benefit. Further, the independent contributions of HA-alone and CI-alone performance were not significant factors in predicting bimodal benefit across testing materials and SNRs once the effect of the difference between CI and HA performance was removed from the model. Conclusion The results suggest that bimodal benefit is limited by how effectively the modalities integrate, rather than by HA-alone or CI-alone performance, and that this integration is facilitated when the performances of the modalities are similar.
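A toy sketch of the relationship tested above: regressing bimodal benefit (bimodal score minus the better single-device score) on the absolute CI-HA performance difference. The function and variable names are hypothetical and no data from the study are reproduced.

```python
import numpy as np

def benefit_vs_difference(ci_scores, ha_scores, bimodal_scores):
    """Least-squares slope and intercept of bimodal benefit on the |CI - HA| difference."""
    ci = np.asarray(ci_scores, dtype=float)
    ha = np.asarray(ha_scores, dtype=float)
    diff = np.abs(ci - ha)                                   # across-ear performance difference
    benefit = np.asarray(bimodal_scores, dtype=float) - np.maximum(ci, ha)
    slope, intercept = np.polyfit(diff, benefit, 1)
    return slope, intercept
```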


Subjects
Cochlear Implants, Correction of Hearing Impairment/instrumentation, Deafness/rehabilitation, Hearing Aids, Speech Perception, Adult, Aged, Aged 80 and over, Speech Audiometry, Combined Modality Therapy, Female, Humans, Male, Middle Aged, Noise, Signal-to-Noise Ratio, Speech
13.
Ear Hear ; 34(3): 273-9, 2013.
Article in English | MEDLINE | ID: mdl-22968427

ABSTRACT

OBJECTIVES: This study investigated whether a spectral mismatch across ears influences the benefits of redundancy, squelch, and head shadow differently in speech perception, using acoustic simulation of bilateral cochlear implant (CI) processing. DESIGN: Ten normal-hearing subjects participated in the study, and acoustic simulations of CIs were used to test these subjects. Sentence recognition, presented unilaterally and bilaterally, was measured at +5 dB and +10 dB signal-to-noise ratios (SNRs) under bilaterally matched and mismatched conditions. Unilateral and bilateral CIs were simulated using 8-channel sine wave vocoders. Binaural spectral mismatch was introduced by changing the relative simulated insertion depths across ears. Subjects were tested while listening with headphones; head-related transfer functions were applied before the vocoder processing to preserve natural interaural level and time differences. RESULTS: For both SNRs, greater and more consistent binaural benefits of squelch and redundancy occurred for the matched condition, whereas binaural interference of squelch and redundancy occurred for the mismatched condition. However, a significant binaural benefit of head shadow existed irrespective of spectral mismatch and SNR. CONCLUSIONS: The results suggest that bilateral spectral mismatch may have a negative impact on the binaural benefits of squelch and redundancy for bilateral CI users. The results also suggest that clinical mapping should be carefully administered for bilateral CI users to minimize the difference in spectral patterns between the two CIs.
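A hedged sketch of how a change in simulated insertion depth can be expressed as a shift of the vocoder carrier bands along the cochlea, using the Greenwood place-frequency map for humans, F = 165.4 * (10^(2.1x) - 0.88), with x the relative distance from the apex. The 35-mm cochlear length and the band edges are standard textbook assumptions, not values taken from this study.

```python
import numpy as np

COCHLEA_MM = 35.0   # assumed basilar-membrane length

def greenwood_freq(x):
    """Greenwood map: relative place x (0 = apex, 1 = base) -> frequency in Hz."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

def greenwood_place(f_hz):
    """Inverse Greenwood map: frequency in Hz -> relative place."""
    return np.log10(np.asarray(f_hz) / 165.4 + 0.88) / 2.1

def shift_band_edges(edges_hz, shift_mm):
    """Shift band edges toward the base by shift_mm (a shallower simulated insertion)
    and return the corresponding carrier-band edges in Hz."""
    return greenwood_freq(greenwood_place(edges_hz) + shift_mm / COCHLEA_MM)

edges = np.geomspace(200.0, 7000.0, 9)          # 8-channel analysis band edges (assumed)
print(shift_band_edges(edges, shift_mm=1.0))    # carrier edges for a 1-mm basalward shift
```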


Subjects
Acoustic Stimulation/methods, Cochlear Implants, Signal-to-Noise Ratio, Speech Perception/physiology, Adult, Analysis of Variance, Female, Humans, Male, Middle Aged
14.
Otol Neurotol ; 33(7): 1161-8, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22772002

ABSTRACT

OBJECTIVE: To compare the speech perception benefit provided by a contralateral hearing aid (HA) with that provided by a second cochlear implant (CI). STUDY DESIGN: Repeated measures. PATIENTS: A total of 25 adult subjects participated in the study, including 12 bilateral (10 female and 2 male) and 13 bimodal (6 female and 7 male) users. All bilateral users were sequentially implanted. The bimodal users were separated into a poor group (n = 5, aided pure-tone average (PTA) of 55 dB HL or greater at audiometric frequencies of 1 kHz or lower) and a good group (n = 8, aided PTA < 55 dB HL). MAIN OUTCOME MEASURES: Consonant, vowel, and sentence recognition was measured in quiet and in noise at +5 dB and +10 dB signal-to-noise ratios (SNRs). Speech recognition performance was evaluated under three listening conditions: CI alone, HA alone, and CI+HA for bimodal users; first CI alone, second CI alone, and first CI + second CI for bilateral users, with speech and noise presented from the front. RESULTS: There was no significant difference in binaural benefit between the good bimodal and bilateral groups for vowel and sentence recognition. However, the binaural benefit was significantly greater in the bilateral group than in the poor bimodal group for all three speech measures. CONCLUSION: These results suggest that the aided pure-tone average at audiometric frequencies of 1 kHz or lower may serve as one of the clinical criteria for recommending whether bimodal patients should consider a second cochlear implant to maximize their binaural listening ability.
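A small sketch of the grouping rule described above: an aided pure-tone average over audiometric frequencies at or below 1 kHz, with 55 dB HL as the good/poor boundary. The specific frequency set (250, 500, and 1000 Hz) is an assumption, since the abstract only states "1 kHz or lower".

```python
import numpy as np

def aided_pta_low(thresholds_db_hl, freqs_hz=(250, 500, 1000)):
    """Aided PTA (dB HL) over low audiometric frequencies; the frequency set is assumed."""
    return float(np.mean([thresholds_db_hl[f] for f in freqs_hz]))

aided = {250: 40, 500: 50, 1000: 65}            # hypothetical aided thresholds in dB HL
group = "good" if aided_pta_low(aided) < 55 else "poor"
print(group)                                    # -> "good" (mean of 40, 50, 65 is ~51.7 dB HL)
```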


Subjects
Cochlear Implantation, Cochlear Implants, Sensorineural Hearing Loss/therapy, Speech Perception/physiology, Adult, Aged, Aged 80 and over, Female, Humans, Male, Middle Aged, Patient Selection, Treatment Outcome
15.
J Speech Lang Hear Res ; 55(1): 105-24, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22199183

ABSTRACT

PURPOSE: In this study, the authors aimed to identify speech information processed by a hearing aid (HA) that is additive to information processed by a cochlear implant (CI) as a function of signal-to-noise ratio (SNR). METHOD: Speech recognition was measured with the CI alone, the HA alone, and CI + HA. Ten participants were separated into two groups, good (aided pure-tone average [PTA] < 55 dB) and poor (aided PTA ≥ 55 dB), based on the aided PTA at audiometric frequencies ≤ 1 kHz in the HA ear. RESULTS: Results showed that the good aided-PTA group derived a clear bimodal benefit (performance difference between CI + HA and CI alone) for vowel and sentence recognition in noise, whereas the poor aided-PTA group received little benefit across speech tests and SNRs. Results also showed that a better aided PTA helped in processing cues embedded in both low and high frequencies; none of these cues was significantly perceived by the poor aided-PTA group. CONCLUSIONS: The aided PTA is an important indicator of bimodal advantage in speech perception. The lack of bimodal benefit in the poor group may be attributed to nonoptimal HA fitting. Bimodal listening provides a synergistic effect for cues in both low- and high-frequency components of speech.


Subjects
Cochlear Implants, Hearing Aids, Hearing Loss/rehabilitation, Signal-to-Noise Ratio, Speech Perception, Acoustic Stimulation, Adult, Aged, Analysis of Variance, Speech Audiometry, Differential Threshold, Electric Stimulation/instrumentation, Female, Humans, Male, Middle Aged, Sound Spectrography, Speech Discrimination Tests
16.
J Speech Lang Hear Res ; 55(2): 460-73, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22199184

ABSTRACT

PURPOSE: Although poorer understanding of speech in noise by listeners who are hearing impaired (HI) is known not to be directly related to the audiometric hearing threshold, HT(f), grouping HI listeners by HT(f) is widely practiced. In this article, the relationship between consonant recognition and HT(f) is considered over a range of signal-to-noise ratios (SNRs). METHOD: Confusion matrices (CMs) from 25 HI ears were generated in response to 16 consonant-vowel syllables presented at 6 different SNRs. Individual differences scaling (INDSCAL) was applied to both feature-based matrices and CMs in order to evaluate the relationship between HT(f) and consonant recognition among HI listeners. RESULTS: The results showed no predictive relationship between the percent error scores (Pe) and HT(f) across SNRs. The multiple regression models showed that HT(f) accounted for 39% of the total variance in the slopes of Pe. Feature-based INDSCAL analysis showed consistent grouping of listeners across SNRs, but not in terms of HT(f). A systematic relationship between the measures was also not identified by CM-based INDSCAL analysis across SNRs. CONCLUSIONS: HT(f) did not account for the majority of the variance (39%) in consonant recognition in noise when the complete body of the CM was considered.


Subjects
Auditory Threshold/physiology, Sensorineural Hearing Loss/physiopathology, Noise, Phonetics, Speech Perception/physiology, Adolescent, Adult, Pure-Tone Audiometry, Speech Audiometry, Female, Humans, Male, Middle Aged, Regression Analysis, Signal-to-Noise Ratio, Young Adult
17.
J Acoust Soc Am ; 130(2): EL94-100, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21877777

ABSTRACT

The present study investigated the effects of binaural spectral mismatch on binaural benefits in the context of bilateral cochlear implants using acoustic simulations. Binaural spectral mismatch was systematically manipulated by simulating changes in the relative insertion depths across ears. Sentence recognition, presented unilaterally and bilaterally, was measured in normal-hearing listeners in quiet and in noise at a +5 dB signal-to-noise ratio. Significant binaural benefits were observed when the interaural difference in insertion depth was 1 mm or less. This result suggests a dependence of the binaural benefit on redundant speech information, rather than on similarity in performance across ears.


Subjects
Cochlear Implantation/instrumentation, Noise/adverse effects, Perceptual Masking, Recognition (Psychology), Speech Perception, Acoustic Stimulation, Adult, Auditory Threshold, Female, Humans, Male, Middle Aged, Computer-Assisted Signal Processing, Speech Reception Threshold Test, Young Adult
18.
Int J Audiol ; 50(8): 554-65, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21696329

ABSTRACT

OBJECTIVE: The full benefit of bilateral cochlear implants may depend on the unilateral performance with each device, the speech materials, processing ability of the user, and/or the listening environment. In this study, bilateral and unilateral speech performances were evaluated in terms of recognition of phonemes and sentences presented in quiet or in noise. DESIGN: Speech recognition was measured for unilateral left, unilateral right, and bilateral listening conditions; speech and noise were presented at 0° azimuth. The 'binaural benefit' was defined as the difference between bilateral performance and unilateral performance with the better ear. STUDY SAMPLE: Nine adults with bilateral cochlear implants participated. RESULTS: On average, results showed a greater binaural benefit in noise than in quiet for all speech tests. More importantly, the binaural benefit was greater when unilateral performance was similar across ears. As the difference in unilateral performance between ears increased, the binaural advantage decreased; this functional relationship was observed across the different speech materials and noise levels even though there was substantial intra- and inter-subject variability. CONCLUSIONS: The results indicate that subjects who show symmetry in speech recognition performance between implanted ears in general show a large binaural benefit.


Subjects
Cochlear Implantation/instrumentation, Cochlear Implants, Correction of Hearing Impairment/psychology, Hearing Loss/rehabilitation, Persons With Hearing Impairments/rehabilitation, Recognition (Psychology), Speech Acoustics, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Aged, Aged 80 and over, Female, Functional Laterality, Hearing Loss/psychology, Humans, Male, Middle Aged, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/psychology, Prosthesis Design, Speech Discrimination Tests, Speech Reception Threshold Test, Young Adult
19.
J Acoust Soc Am ; 127(3): EL87-92, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20329812

ABSTRACT

While considerable evidence suggests that bilateral cochlear implant (CI) users' sound localization abilities rely primarily on interaural level difference (ILD) cues, and only secondarily, if at all, on interaural time difference (ITD) cues, this evidence has largely been indirect. This study used head-related transfer functions (HRTFs) to independently manipulate ITD and ILD cues and directly measure their contribution to bilateral CI users' localization abilities. The results revealed a strong reliance on ILD cues, but some CI users also made use of ITD cues. The results also suggest a complex interaction between ITD and ILD cues.


Subjects
Cochlear Implants, Cues, Loudness Perception/physiology, Sound Localization/physiology, Time Perception/physiology, Acoustic Stimulation, Humans
20.
Ear Hear ; 31(3): 401-6, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20090531

ABSTRACT

OBJECTIVES: Stratified sampling plans can increase the accuracy and facilitate the interpretation of a dataset characterizing a large population. However, such sampling plans have found minimal use in hearing aid (HA) research, in part because of a paucity of quantitative data on the characteristics of HA users. The goal of this study was to devise a quantitatively derived stratified sampling plan for HA research, so that such studies will be more representative and generalizable and so that results obtained using this method can be more easily reinterpreted as the population changes. DESIGN: Pure-tone average (PTA) and age information were collected for 84,200 HAs acquired in 2006 and 2007. The distribution of PTA and age was quantified for each HA type and for a composite of all HA users. RESULTS: Based on their respective distributions, PTA and age were each divided into three groups, the combination of which defined the stratification plan. The most populous PTA and age group was also subdivided, allowing greater homogeneity within strata. Finally, the percentage of users in each stratum was calculated. CONCLUSIONS: This article provides a stratified sampling plan for HA research based on a quantitative analysis of the distribution of PTA and age among HA users. Adopting such a sampling plan will make HA research results more representative and generalizable. In addition, data acquired using such plans can be reinterpreted as the HA population changes.
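A hedged sketch of drawing a proportionally allocated stratified sample from a table of HA users, once each record has been assigned to a PTA group and an age group. The column names (pta_group, age_group) and the proportional-allocation rule are assumptions for illustration; the article derives its strata cut points from the distribution of the 84,200 records.

```python
import numpy as np
import pandas as pd

def stratified_sample(df, n_total, seed=0):
    """Sample from each (pta_group, age_group) cell in proportion to its population share."""
    rng = np.random.default_rng(seed)
    picks = []
    for _, cell in df.groupby(["pta_group", "age_group"]):
        n_cell = int(round(n_total * len(cell) / len(df)))          # proportional allocation
        picks.append(cell.sample(n=min(n_cell, len(cell)),
                                 random_state=int(rng.integers(2**31 - 1))))
    return pd.concat(picks)
```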


Subjects
Pure-Tone Audiometry/statistics & numerical data, Hearing Aids/statistics & numerical data, Hearing Loss/epidemiology, Hearing Loss/therapy, Statistical Models, Adult, Age Distribution, Aged, Aged 80 and over, Auditory Threshold, Factual Databases, Humans, Middle Aged, Sampling Studies, United States/epidemiology