Results 1 - 20 of 25
1.
J Acoust Soc Am; 154(5): 3168-3172, 2023 Nov 1.
Article in English | MEDLINE | ID: mdl-37966331

ABSTRACT

The frequency range audible to humans can extend from 20 Hz to 20 kHz, but only a portion of this range (the lower end, up to 8 kHz) has been systematically explored because extended high-frequency (EHF) information above this low range has been considered unnecessary for speech comprehension. This special issue presents a collection of research studies exploring the presence of EHF information in the acoustic signal and its perceptual utility. The papers address the role of EHF hearing in auditory perception, the impact of EHF hearing loss on speech perception in specific populations and occupational settings, the importance of EHF in speech recognition and in providing speaker-related information, the utility of acoustic EHF energy in fricative sounds, and ultrasonic vocalizations in mice in relation to human hearing. Collectively, the research findings offer new insights and converge in showing that not only is EHF energy present in the speech spectrum, but listeners can utilize EHF cues in speech processing and recognition, and EHF hearing loss has detrimental effects on perception of speech and non-speech sounds. Together, this collection challenges the conventional notion that EHF information has minimal functional significance.


Subjects
Sensorineural Hearing Loss, Speech Perception, Humans, Animals, Mice, Hearing, Auditory Perception, Noise, Sound, Auditory Threshold
2.
Semin Hear; 42(3): 175-185, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34594083

ABSTRACT

Understanding speech in noise is difficult for individuals with normal hearing and is even more so for individuals with hearing loss. Difficulty understanding speech in noise is one of the primary reasons people seek hearing assistance. Despite amplification, many hearing aid users still struggle to understand speech in noise. In response to this persistent problem, hearing aid manufacturers have invested significantly in developing new solutions. Any solution is not without its tradeoffs, and decisions must be made when optimizing and implementing them. Much of this happens behind the scenes, and casual observers fail to appreciate the nuances of developing new hearing aid technologies. The difficulty of communicating this information to clinicians may hinder the use or the fine-tuning of the various technologies available today. The purpose of this issue of Seminars in Hearing is to educate professionals and students in audiology, hearing science, and engineering about different approaches to combat problems related to environmental and wind noise using technologies that include classification, directional microphones, binaural signal processing, beamformers, motion sensors, and machine learning. To accomplish this purpose, some of the top researchers and engineers from the world's largest hearing aid manufacturers agreed to share their unique insights.

3.
J Acoust Soc Am; 150(3): 1635, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34598609

ABSTRACT

Hearing aids are commonly fit with ear canals partially or fully open, a condition that increases the risk of acoustic feedback. Feedback limits the audiometric fitting range of devices by limiting usable gain. To guide clinical decision making and device selection, we developed the Peak Height Insertion Gain (PHIG) method to detect feedback spikes in the short-term insertion gain derived from audio recordings. Using a manikin, 145 audio recordings of a speech signal were obtained from seven hearing aids. Each hearing aid was programmed for a moderate high-frequency hearing loss with systematic variations in frequency response, gain, and feedback suppression; this created audio recordings that varied the presence and strength of feedback. Using subjective ratings from 13 expert judges, the presence of feedback was determined and then classified according to its temporal and tonal qualities. These classifications were used to optimize parameters for two versions of the PHIG method based on global and local analyses. When specificity was fixed at 0.95, the sensitivity of the global analysis was 0.86 and increased to 0.95 when combined with the local analysis. Without compromising performance, a clinically expedient version of the PHIG method can be obtained using only a single measurement.
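
The abstract does not spell out how feedback spikes are detected, so the following is only a rough Python sketch of the general idea: scan a short-term insertion-gain matrix for narrowband peaks that stand out above a smoothed spectral baseline. The function name, the median-filter baseline, and the 10 dB and 1 kHz criteria are illustrative assumptions, not the published PHIG parameters.

```python
import numpy as np
from scipy.ndimage import median_filter

def feedback_spike_candidates(gain_db, freqs_hz, spike_db=10.0, min_hz=1000.0):
    """Flag narrowband peaks in a short-term insertion-gain matrix.

    gain_db  : (n_frames, n_bins) short-term insertion gain in dB
    freqs_hz : (n_bins,) centre frequencies of the analysis bins
    Returns a boolean mask of candidate feedback spikes (hypothetical criterion:
    a bin exceeding a smoothed spectral baseline by `spike_db` at frequencies
    above `min_hz`, where feedback whistles typically occur).
    """
    baseline = median_filter(gain_db, size=(1, 7), mode="nearest")  # smooth across frequency
    peak_height = gain_db - baseline                                # local peak height per bin
    return (peak_height > spike_db) & (freqs_hz >= min_hz)

# Example with synthetic data: 50 frames x 30 bins, one injected "whistle"
rng = np.random.default_rng(0)
gains = rng.normal(0.0, 1.0, size=(50, 30))
freqs = np.geomspace(250, 8000, 30)
gains[20:25, 22] += 15.0
mask = feedback_spike_candidates(gains, freqs)
print(mask.any(axis=0).nonzero()[0])   # bins flagged in any frame
```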


Subjects
Hearing Aids, Sensorineural Hearing Loss, Speech Perception, Acoustics, Audiometry, Feedback, Humans
4.
J Acoust Soc Am; 149(5): 3449, 2021 May.
Article in English | MEDLINE | ID: mdl-34241110

ABSTRACT

Active mechanisms that regulate cochlear gain are hypothesized to influence speech-in-noise perception. However, evidence of a relationship between the amount of cochlear gain reduction and speech-in-noise recognition is mixed. Findings may conflict across studies because different signal-to-noise ratios (SNRs) were used to evaluate speech-in-noise recognition. Also, there is evidence that ipsilateral elicitation of cochlear gain reduction may be stronger than contralateral elicitation, yet most studies have investigated the contralateral descending pathway. The hypothesis that the relationship between ipsilateral cochlear gain reduction and speech-in-noise recognition depends on the SNR was tested. A forward masking technique was used to quantify the ipsilateral cochlear gain reduction in 24 young adult listeners with normal hearing. Speech-in-noise recognition was measured with the PRESTO-R sentence test using speech-shaped noise presented at -3, 0, and +3 dB SNR. Interestingly, greater cochlear gain reduction was associated with lower speech-in-noise recognition, and the strength of this correlation increased as the SNR became more adverse. These findings support the hypothesis that the SNR influences the relationship between ipsilateral cochlear gain reduction and speech-in-noise recognition. Future studies investigating the relationship between cochlear gain reduction and speech-in-noise recognition should consider the SNR and both descending pathways.


Subjects
Cochlear Implants, Speech Perception, Hearing, Humans, Noise/adverse effects, Signal-to-Noise Ratio, Speech, Young Adult
5.
Int J Audiol; 58(10): 661-669, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31140345

ABSTRACT

Objective: Adaptive compression methods in hearing aids have been developed to maximise audibility while preserving temporal envelope modulations. Increasing the number of channels may improve listening comfort for loud sounds. However, the effects of this on speech recognition in different environmental conditions are unknown. This study evaluated the effects of different channel architectures and adaptive compression properties on speech recognition in noise and reverberation. Design: Sentences were mixed with steady or modulated noise at three signal-to-noise ratios (SNRs). These were processed with and without reverberation and amplified with four proprietary adaptive compression methods or linear amplification. Study sample: 36 listeners with mild to moderately-severe hearing loss. Results: Adaptive compression improved speech recognition over linear amplification to a small extent, with no significant differences among methods using 4 or 24 channels or a combination thereof. These effects remained across the different background noise and reverberation conditions. Conclusions: Increasing the number of channels does not negatively affect speech recognition in noise and reverberation when adaptive compression is used. If future research shows that increasing the number of channels improves listening comfort for loud sounds, these results indicate that adaptive compression methods with as many as 24 channels are viable options for hearing aids.


Subjects
Hearing Aids, Hearing Loss/rehabilitation, Noise, Speech Perception, Adult, Aged, Aged 80 and over, Female, Humans, Male, Middle Aged
6.
J Speech Lang Hear Res; 62(5): 1486-1505, 2019 May 21.
Article in English | MEDLINE | ID: mdl-31063023

ABSTRACT

Purpose: Frequency lowering in hearing aids can cause listeners to perceive [s] as [ʃ]. The S-SH Confusion Test, which consists of 66 minimal word pairs spoken by 6 female talkers, was designed to help clinicians and researchers document these negative side effects. This study's purpose was to use this new test to evaluate the hypothesis that these confusions will increase to the extent that low frequencies are altered. Method: Twenty-one listeners with normal hearing were each tested on 7 conditions. Three were control conditions that were low-pass filtered at 3.3, 5.0, and 9.1 kHz. Four conditions were processed with nonlinear frequency compression (NFC): 2 had a 3.3-kHz maximum audible output frequency (MAOF), with a start frequency (SF) of 1.6 or 2.2 kHz; 2 had a 5.0-kHz MAOF, with an SF of 1.6 or 4.0 kHz. Listeners' responses were analyzed using concepts from signal detection theory. Response times were also collected as a measure of cognitive processing. Results: Overall, [s] for [ʃ] confusions were minimal. As predicted, [ʃ] for [s] confusions increased for NFC conditions with a lower versus higher MAOF and with a lower versus higher SF. Response times for trials with correct [s] responses were shortest for the 9.1-kHz control and increased for the 5.0- and 3.3-kHz controls. NFC response times were also significantly longer as MAOF and SF decreased. The NFC condition with the highest MAOF and SF had statistically shorter response times than its control condition, indicating that, under some circumstances, NFC may ease cognitive processing. Conclusions: Large differences in the S-SH Confusion Test across frequency-lowering conditions show that it can be used to document a major negative side effect associated with frequency lowering. Smaller but significant differences in response times for correct [s] trials indicate that NFC can help or hinder cognitive processing, depending on its settings.
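
The abstract states that responses were analyzed using concepts from signal detection theory. One standard way to do that for an /s/ versus /ʃ/ word-pair task is to treat responding "s" on an /s/ trial as a hit and responding "s" on an /ʃ/ trial as a false alarm, then compute d' and criterion. The sketch below assumes that framing; the counts and the correction rule are illustrative, not taken from the study.

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) for a two-response identification task.

    Here a 'hit' could be responding "s" on an /s/ trial and a 'false alarm'
    responding "s" on an /sh/ trial. A simple correction keeps rates away
    from 0 and 1 (an assumption; the study's exact correction is not stated).
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

# Example: 60/66 correct "s" on /s/ trials, 12/66 "s" responses on /sh/ trials
print(dprime(hits=60, misses=6, false_alarms=12, correct_rejections=54))
```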


Subjects
Hearing Aids, Speech Perception, Adult, Female, Hearing Tests, Humans, Male, Sound, Young Adult
7.
J Am Acad Audiol; 28(9): 823-837, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28972471

ABSTRACT

BACKGROUND: Nonlinear frequency compression (NFC) can improve the audibility of high-frequency sounds by lowering them to a frequency where audibility is better; however, this lowering results in spectral distortion. Consequently, performance is a combination of the effects of increased access to high-frequency sounds and the detrimental effects of spectral distortion. Previous work has demonstrated positive benefits of NFC on speech recognition when NFC is set to improve audibility while minimizing distortion. However, the extent to which NFC impacts listening effort is not well understood, especially for children with sensorineural hearing loss (SNHL). PURPOSE: To examine the impact of NFC on recognition and listening effort for speech in adults and children with SNHL. RESEARCH DESIGN: Within-subject, quasi-experimental study. Participants listened to amplified nonsense words that were (1) frequency-lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the restricted bandwidth (RBW) of conventional hearing aid processing, or (3) low-pass filtered at 10 kHz to simulate extended bandwidth (EBW) amplification. STUDY SAMPLE: Fourteen children (8-16 yr) and 14 adults (19-65 yr) with mild-to-severe SNHL. INTERVENTION: Participants listened to speech processed by a hearing aid simulator that amplified input signals to fit a prescriptive target fitting procedure. DATA COLLECTION AND ANALYSIS: Participants were blinded to the type of processing. Participants' responses to each nonsense word were analyzed for accuracy and verbal-response time (VRT; listening effort). A multivariate analysis of variance and linear mixed model were used to determine the effect of hearing-aid signal processing on nonsense word recognition and VRT. RESULTS: Both children and adults identified the nonsense words and initial consonants better with EBW and NFC than with RBW. The type of processing did not affect the identification of the vowels or final consonants. There was no effect of age on recognition of the nonsense words, initial consonants, medial vowels, or final consonants. VRT did not change significantly with the type of processing or age. CONCLUSION: Both adults and children demonstrated improved speech recognition with access to the high-frequency sounds in speech. Listening effort as measured by VRT was not affected by access to high-frequency sounds.


Subjects
Hearing Aids, Sensorineural Hearing Loss/physiopathology, Speech Acoustics, Speech Perception/physiology, Adolescent, Adult, Aged, Child, Hearing/physiology, Sensorineural Hearing Loss/psychology, Sensorineural Hearing Loss/rehabilitation, Humans, Middle Aged, Signal-to-Noise Ratio
8.
J Acoust Soc Am; 142(2): 908, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28863610

ABSTRACT

This study investigated how six different amplification methods influence acoustic properties, and subsequently perception, of high-frequency cues in fricatives that have been processed with conventional full bandwidth amplification or nonlinear frequency compression (NFC), for 12 conditions in total. Amplification methods included linear gain, fast/slow-acting wide dynamic range compression crossed with fixed/individualized compression parameters, and a method with adaptive time constants. Twenty-one hearing-impaired listeners identified seven fricatives in nonsense syllables produced by female talkers. For NFC stimuli, frequency-compressed filters that precisely aligned 1/3-octave bands between input and output were used to quantify effective compression ratio, audibility, and temporal envelope modulation relative to the input. Results indicated significant relationships between these acoustic properties, each of which contributed significantly to fricative recognition across the entire corpus of stimuli. Recognition was significantly better for NFC stimuli compared with full bandwidth stimuli, regardless of the amplification method, which had complementary effects on audibility and envelope modulation. Furthermore, while there were significant differences in recognition across the amplification methods, they were not consistent across phonemes. Therefore, neither recognition nor acoustic data overwhelmingly suggest that one amplification method should be used over another for transmission of high-frequency cues in isolated syllables. Longer duration stimuli and more realistic listening conditions should be examined.
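
As a rough illustration of the kind of 1/3-octave band analysis the abstract describes (aligning bands between input and output to estimate effective compression ratio and audibility), here is a generic band-level routine in Python. The Butterworth filter bank, band edges, and the slope-based reading of effective compression ratio in the closing comment are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def third_octave_levels(x, fs, f_low=200.0, f_high=8000.0):
    """Approximate 1/3-octave band RMS levels (in dB) of a signal.

    Generic band analysis with 2**(1/3)-spaced centre frequencies and
    4th-order Butterworth bandpass filters; not the study's filter bank.
    """
    centers, levels = [], []
    fc = f_low
    while fc <= f_high:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)
        levels.append(10 * np.log10(np.mean(band ** 2) + 1e-20))
        centers.append(fc)
        fc *= 2 ** (1 / 3)
    return np.array(centers), np.array(levels)

# Example on white noise; an "effective compression ratio" could then be read
# off as how a change in input band level maps onto the matched output band level.
fs = 44100
noise = np.random.default_rng(1).standard_normal(fs)
freqs, lev = third_octave_levels(noise, fs)
print(np.round(freqs), np.round(lev, 1))
```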


Subjects
Acoustic Stimulation/methods, Acoustics, Cues (Psychology), Sensorineural Hearing Loss/psychology, Persons With Hearing Impairments/psychology, Recognition (Psychology), Speech Acoustics, Speech Perception, Voice Quality, Adult, Aged, Aged 80 and over, Speech Audiometry, Female, Hearing, Sensorineural Hearing Loss/diagnosis, Sensorineural Hearing Loss/physiopathology, Humans, Male, Middle Aged, Noise/adverse effects, Perceptual Masking, Severity of Illness Index, Computer-Assisted Signal Processing, Sound Spectrography, Speech Intelligibility
9.
J Acoust Soc Am; 141(2): EL127, 2017 Feb.
Article in English | MEDLINE | ID: mdl-28253693

ABSTRACT

While all languages differentiate speech sounds by manner of articulation, none of the acoustic correlates proposed to date seem to account for how these contrasts are encoded in the speech signal. The present study describes power spectral entropy (PSE), which quantifies the amount of potential information conveyed in the power spectrum of a given sound. Results of acoustic analyses of speech samples extracted from the Texas Instruments-Massachusetts Institute of Technology database reveal a statistically significant correspondence between PSE and American English major classes of manner of articulation. Thus, PSE accurately captures an acoustic correlate of manner of articulation in American English.
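
The abstract defines power spectral entropy only conceptually, as the amount of potential information conveyed in a sound's power spectrum. A minimal sketch consistent with that description is to normalize the power spectrum to a probability distribution and take its Shannon entropy; the windowing, FFT length, and normalization below are assumptions rather than the published computation.

```python
import numpy as np

def power_spectral_entropy(x, nfft=1024):
    """Shannon entropy (in bits) of the normalized power spectrum.

    One plausible reading of the abstract's PSE measure: treat the power
    spectrum as a probability distribution over frequency bins and compute
    its entropy. Window, bin range, and normalization are assumptions.
    """
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x)), n=nfft)) ** 2
    p = spectrum / np.sum(spectrum)
    p = p[p > 0]                      # avoid log(0)
    return -np.sum(p * np.log2(p))

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)                        # concentrated spectrum -> low entropy
noise = np.random.default_rng(0).standard_normal(t.size)   # flat spectrum -> high entropy
print(power_spectral_entropy(tone), power_spectral_entropy(noise))
```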

10.
J Am Acad Audiol; 27(8): 647-60, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27564442

ABSTRACT

BACKGROUND: Listening in challenging situations requires explicit cognitive resources to decode and process speech. Traditional speech recognition tests are limited in documenting this cognitive effort, which may differ greatly between individuals or listening conditions despite similar scores. A sequential sentence paradigm was designed to be more sensitive to individual differences in demands on verbal processing during speech recognition. PURPOSE: The purpose of this study was to establish the feasibility, validity, and equivalency of test materials in the sequential sentence paradigm as well as to evaluate the effects of masker type, signal-to-noise ratio (SNR), and working memory (WM) capacity on performance in the task. RESEARCH DESIGN: Listeners heard a pair of sentences and repeated aloud the second sentence (immediate recall) and then wrote down the first sentence (delayed recall). Sentence lists were from the Perceptually Robust English Sentence Test Open-set (PRESTO) test. In experiment I, listeners completed a traditional speech recognition task. In experiment II, listeners completed the sequential sentence task at one SNR. In experiment III, the masker type (steady noise versus multitalker babble) and SNR were varied to demonstrate the effects of WM as the speech material increased in difficulty. STUDY SAMPLE: Young, normal-hearing adults (total n = 53) from the Purdue University community completed one of the three experiments. DATA COLLECTION AND ANALYSIS: Keyword scoring of the PRESTO lists was completed for both the immediate- and delayed-recall sentences. The Verbal Letter Monitoring task, a test of WM, was used to separate listeners into a low-WM or high-WM group. RESULTS: Experiment I indicated that mean recognition on the single-sentence task was highly variable between the original PRESTO lists. Modest rearrangement of the sentences yielded 18 statistically equivalent lists (mean recognition = 65.0%, range = 64.4-65.7%), which were used in the sequential sentence task in experiment II. In the new test paradigm, recognition of the immediate-recall sentences was not statistically different from the single-sentence task, indicating that there were no cognitive load effects from the delayed-recall sentences. Finally, experiment III indicated that multitalker babble was equally detrimental compared to steady-state noise for immediate recall of sentences for both low- and high-WM groups. On the other hand, delayed recall of sentences in multitalker babble was disproportionately more difficult for the low-WM group compared with the high-WM group. CONCLUSIONS: The sequential sentence paradigm is a feasible test format with mostly equivalent lists. Future studies using this paradigm may need to consider individual differences in WM to see the full range of effects across different conditions. Possible applications include testing the efficacy of various signal-processing techniques in clinical populations.


Subjects
Hearing, Speech Discrimination Tests, Speech Perception, Adolescent, Adult, Female, Humans, Male, Noise, Signal-to-Noise Ratio, Young Adult
11.
Am J Audiol; 25(3): 232-45, 2016 Sep 1.
Article in English | MEDLINE | ID: mdl-27367972

ABSTRACT

PURPOSE: A representative sample of the literature on minimal hearing loss (MHL) was reviewed to provide evidence of challenges faced by children with MHL and to establish the need for evidence-based options for early intervention. METHOD: Research articles published from 1950 to 2013 were searched in the Medline database using the keywords minimal hearing loss, unilateral hearing loss, and mild hearing loss. References cited in retrieved articles were also reviewed. RESULTS: In total, 69 articles contained relevant information about pediatric outcomes and/or intervention for unilateral hearing loss, 50 for mild hearing loss, and 6 for high-frequency hearing loss. Six challenges associated with MHL emerged, and 6 interventions were indicated. Evidence indicates that although some individuals may appear to have no observable speech-language or academic difficulties, others experience considerable difficulties. It also indicates that even though children with MHL may appear to catch up in some areas, difficulties in select domains continue into adulthood. CONCLUSIONS: Evidence indicates significant risks associated with untreated MHL. Evidence also demonstrates the need for early intervention and identifies several appropriate intervention strategies; however, no single protocol is appropriate for all children. Therefore, families should be educated about the impact of MHL and about available interventions so that informed decisions can be made.


Subjects
Achievement, Evidence-Based Practice, Unilateral Hearing Loss/physiopathology, Language Development, Sound Localization, Speech Perception, Child, Preschool Child, Early Medical Intervention, Emotions, Unilateral Hearing Loss/complications, Unilateral Hearing Loss/psychology, Unilateral Hearing Loss/rehabilitation, Humans, Infant, Language Development Disorders/etiology, Schools, Severity of Illness Index
12.
J Acoust Soc Am; 139(2): 938-57, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26936574

ABSTRACT

By varying parameters that control nonlinear frequency compression (NFC), this study examined how different ways of compressing inaudible mid- and/or high-frequency information at lower frequencies influence perception of consonants and vowels. Twenty-eight listeners with mild to moderately severe hearing loss identified consonants and vowels from nonsense syllables in noise following amplification via a hearing aid simulator. Low-pass filtering and the selection of NFC parameters fixed the output bandwidth at a frequency representing a moderately severe (3.3 kHz, group MS) or a mild-to-moderate (5.0 kHz, group MM) high-frequency loss. For each group (n = 14), effects of six combinations of NFC start frequency (SF) and input bandwidth [by varying the compression ratio (CR)] were examined. For both groups, the 1.6 kHz SF significantly reduced vowel and consonant recognition, especially as CR increased, whereas recognition was generally unaffected if SF increased at the expense of a higher CR. Vowel recognition detriments for group MS were moderately correlated with the size of the second formant frequency shift following NFC. For both groups, significant improvement (33%-50%) with NFC was confined to final /s/ and /z/ and to some VCV tokens, perhaps because of listeners' limited exposure to each setting. No set of parameters simultaneously maximized recognition across all tokens.
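
For readers unfamiliar with the start frequency (SF) and compression ratio (CR) parameters, the sketch below shows a commonly used textbook form of the NFC input-to-output frequency mapping: frequencies below SF are unchanged, and frequencies above SF are compressed on a log axis by CR. This illustrates the parameter roles only; it is not necessarily the exact rule implemented in the study's hearing aid simulator.

```python
import numpy as np

def nfc_map(f_in, start_hz=1600.0, cr=2.0):
    """Map input frequencies to output frequencies under nonlinear frequency compression.

    Below the start frequency the mapping is the identity; above it, input
    frequencies are compressed on a log-frequency axis by the compression
    ratio, i.e. f_out = SF * (f_in / SF) ** (1 / CR). This log-frequency form
    is a common description of NFC, assumed here for illustration.
    """
    f_in = np.asarray(f_in, dtype=float)
    return np.where(f_in <= start_hz, f_in, start_hz * (f_in / start_hz) ** (1.0 / cr))

# With SF = 1.6 kHz and CR = 2, a 10 kHz input lands at 4 kHz:
print(nfc_map([1000, 2000, 5000, 10000], start_hz=1600, cr=2).round())
```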


Subjects
Hearing Loss/psychology, Pitch Perception, Recognition (Psychology), Speech Acoustics, Speech Intelligibility, Speech Perception, Voice Quality, Acoustic Stimulation, Acoustics, Adult, Aged, Aged 80 and over, Speech Audiometry, Auditory Threshold, Cues (Psychology), Female, Hearing Loss/diagnosis, Humans, Male, Middle Aged, Noise/adverse effects, Nonlinear Dynamics, Perceptual Masking, Severity of Illness Index, Sound Spectrography
13.
J Acoust Soc Am; 138(5): 3061-72, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26627780

ABSTRACT

The Neural-Scaled Entropy (NSE) model quantifies information in the speech signal that has been altered beyond simple gain adjustments by sensorineural hearing loss (SNHL) and various signal processing. An extension of Cochlear-Scaled Entropy (CSE) [Stilp, Kiefte, Alexander, and Kluender (2010). J. Acoust. Soc. Am. 128(4), 2112-2126], NSE quantifies information as the change in 1-ms neural firing patterns across frequency. To evaluate the model, data from a study that examined nonlinear frequency compression (NFC) in listeners with SNHL were used because NFC can recode the same input information in multiple ways in the output, resulting in different outcomes for different speech classes. Overall, predictions were more accurate for NSE than CSE. The NSE model accurately described the observed degradation in recognition, and lack thereof, for consonants in a vowel-consonant-vowel context that had been processed in different ways by NFC. While NSE accurately predicted recognition of vowel stimuli processed with NFC, it underestimated them relative to a low-pass control condition without NFC. In addition, without modifications, it could not predict the observed improvement in recognition for word final /s/ and /z/. Findings suggest that model modifications that include information from slower modulations might improve predictions across a wider variety of conditions.


Subjects
Entropy, Sensorineural Hearing Loss/psychology, Neurological Models, Phonetics, Speech Perception/physiology, Acoustic Stimulation, Aged, Aged 80 and over, Algorithms, Auditory Pathways/physiology, Female, Humans, Male, Middle Aged, Nonlinear Dynamics, Computer-Assisted Signal Processing
14.
Ear Hear; 36(2): e35-49, 2015.
Article in English | MEDLINE | ID: mdl-25470368

ABSTRACT

OBJECTIVES: The purpose of this study was to investigate the joint effects that wide dynamic range compression (WDRC) release time (RT) and number of channels have on recognition of sentences in the presence of steady and modulated maskers at different signal-to-noise ratios (SNRs). How the different combinations of WDRC parameters affect output SNR and the role this plays in the observed findings were also investigated. DESIGN: Twenty-four listeners with mild to moderate sensorineural hearing loss identified sentences mixed with steady or modulated maskers at three SNRs (-5, 0, and +5 dB) that had been processed using a hearing aid simulator with six combinations of RT (40 and 640 msec) and number of channels (4, 8, and 16). Compression parameters were set using the Desired Sensation Level v5.0a prescriptive fitting method. For each condition, amplified speech and masker levels and the resultant long-term output SNR were measured. RESULTS: Speech recognition with WDRC depended on the combination of RT and number of channels, with the greatest effects observed at 0 dB input SNR, in which mean speech recognition scores varied by 10 to 12% across WDRC manipulations. Overall, effect sizes were generally small. Across both masker types and the three SNRs tested, the best speech recognition was obtained with eight channels, regardless of RT. Increased speech levels, which favor audibility, were associated with the short RT and with an increase in the number of channels. These same conditions also increased masker levels by an even greater amount, for a net decrease in the long-term output SNR. Changes in long-term SNR across WDRC conditions were found to be strongly associated with changes in the temporal envelope shape as quantified by the Envelope Difference Index; however, neither of these factors fully explained the observed differences in speech recognition. CONCLUSIONS: A primary finding of this study was that the number of channels had a modest effect when analyzed at each level of RT, with results suggesting that selecting eight channels for a given RT might be the safest choice. Effects were smaller for RT, with results suggesting that short RT was slightly better when only 4 channels were used and that long RT was better when 16 channels were used. Individual differences in how listeners were influenced by audibility, output SNR, temporal distortion, and spectral distortion may have contributed to the size of the effects found in this study. Because only general suppositions could be made for how each of these factors may have influenced the overall results of this study, future research would benefit from exploring the predictive value of these and other factors in selecting the processing parameters that maximize speech recognition for individuals.
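
The Envelope Difference Index (EDI) mentioned in the abstract is commonly computed as the mean absolute difference between two mean-normalized temporal envelopes, divided by 2. The sketch below assumes that standard formulation; the Hilbert-plus-smoothing envelope extraction is an illustrative choice, not necessarily the study's exact procedure.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_difference_index(x, y, fs, lp_hz=50.0):
    """Envelope Difference Index (EDI) between two equal-length signals.

    Follows the commonly cited formulation (envelopes normalized to their own
    means, mean absolute difference divided by 2), assumed here to match the
    study's usage. EDI is 0 for identical envelopes and approaches 1 for
    maximally different ones.
    """
    def envelope(sig):
        env = np.abs(hilbert(sig))
        win = int(fs / lp_hz)                               # crude smoothing window
        env = np.convolve(env, np.ones(win) / win, mode="same")
        return env / np.mean(env)

    e1, e2 = envelope(x), envelope(y)
    return np.mean(np.abs(e1 - e2)) / 2.0

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
a = np.sin(2 * np.pi * 4 * t) ** 2 * np.random.default_rng(0).standard_normal(t.size)
b = a * 0.5            # a pure gain change leaves the normalized envelope unchanged
print(envelope_difference_index(a, b, fs))   # ~0
```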


Subjects
Hearing Aids, Sensorineural Hearing Loss/rehabilitation, Signal-to-Noise Ratio, Speech Perception/physiology, Adult, Aged, Aged 80 and over, Algorithms, Female, Sensorineural Hearing Loss/physiopathology, Humans, Male, Middle Aged, Severity of Illness Index, Software
15.
Ear Hear; 35(5): 519-32, 2014.
Article in English | MEDLINE | ID: mdl-24699702

ABSTRACT

OBJECTIVES: The authors have demonstrated that the limited bandwidth associated with conventional hearing aid amplification prevents useful high-frequency speech information from being transmitted. The purpose of this study was to examine the efficacy of two popular frequency-lowering algorithms and one novel algorithm (spectral envelope decimation) in adults with mild to moderate sensorineural hearing loss and in normal-hearing controls. DESIGN: Participants listened monaurally through headphones to recordings of nine fricatives and affricates spoken by three women in a vowel-consonant context. Stimuli were mixed with speech-shaped noise at 10 dB SNR and recorded through a Widex Inteo IN-9 and a Phonak Naída UP V behind-the-ear (BTE) hearing aid. Frequency transposition (FT) is used in the Inteo and nonlinear frequency compression (NFC) is used in the Naída. Both devices were programmed to lower frequencies above 4 kHz, but neither device could lower frequencies above 6 to 7 kHz. Each device was tested under four conditions: frequency lowering deactivated (FT-off and NFC-off), frequency lowering activated (FT and NFC), wideband (WB), and a fourth condition unique to each hearing aid. The WB condition was constructed by mixing recordings from the first condition with high-pass filtered versions of the source stimuli. For the Inteo, the fourth condition consisted of recordings made with the same settings as the first, but with the noise-reduction feature activated (FT-off). For the Naída, the fourth condition was the same as the first condition except that source stimuli were preprocessed by a novel frequency compression algorithm, spectral envelope decimation (SED), designed in MATLAB, which allowed for a more complete lowering of the 4 to 10 kHz input band. A follow-up experiment with NFC used Phonak's Naída SP V BTE, which could also lower a greater range of input frequencies. RESULTS: For normal-hearing and hearing-impaired listeners, performance with FT was significantly worse compared with that in the other conditions. Consistent with previous findings, performance for the hearing-impaired listeners in the WB condition was significantly better than in the FT-off condition. In addition, performance in both the SED and WB conditions was significantly better than in the NFC-off condition and the NFC condition with 6 kHz input bandwidth. There were no significant differences between SED and WB, indicating that improvements in fricative identification obtained by increasing bandwidth can also be obtained using this form of frequency compression. Significant differences between most conditions could be largely attributed to an increase or decrease in confusions for the phonemes /s/ and /z/. In the follow-up experiment, performance in the NFC condition with 10 kHz input bandwidth was significantly better than NFC-off, replicating the results obtained with SED. Furthermore, listeners who performed poorly with NFC-off tended to show the most improvement with NFC. CONCLUSIONS: Improvements in the identification of stimuli chosen to be sensitive to the effects of frequency lowering have been demonstrated using two forms of frequency compression (NFC and SED) in individuals with mild to moderate high-frequency sensorineural hearing loss. However, negative results caution against using FT for this population.
Results also indicate that the advantage of an extended bandwidth as reported here and elsewhere applies to the input bandwidth for frequency compression (NFC/SED) when the start frequency is ≥4 kHz.
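
The abstract describes building the wideband (WB) condition by mixing hearing aid recordings with high-pass filtered versions of the source stimuli. A minimal sketch of that step is shown below; the 4 kHz crossover, filter order, and unity mixing gain are assumptions, and in practice the two signals would also need time and level alignment.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def make_wideband(aided_recording, source, fs, crossover_hz=4000.0, order=8):
    """Restore high-frequency content to a band-limited hearing aid recording.

    Mixes the aided recording with a high-pass filtered copy of the original
    source, mimicking the WB condition described in the abstract. Crossover
    frequency, filter order, and unity mixing gain are illustrative choices.
    """
    sos = butter(order, crossover_hz, btype="highpass", fs=fs, output="sos")
    highband = sosfiltfilt(sos, source)
    n = min(len(aided_recording), len(source))
    return aided_recording[:n] + highband[:n]

# Usage (with pre-aligned, equal-level signals):
# wb_stimulus = make_wideband(aided, source, fs=44100)
```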


Subjects
Algorithms, Hearing Aids, Sensorineural Hearing Loss/physiopathology, Speech Perception/physiology, Adolescent, Adult, Aged, Auditory Perception/physiology, Case-Control Studies, Equipment Design, Female, Sensorineural Hearing Loss/rehabilitation, Humans, Male, Middle Aged, Sound Spectrography, Young Adult
16.
PLoS One; 6(9): e24630, 2011.
Article in English | MEDLINE | ID: mdl-21949736

ABSTRACT

An algorithm that operates in real-time to enhance the salient features of speech is described and its efficacy is evaluated. The Contrast Enhancement (CE) algorithm implements dynamic compressive gain and lateral inhibitory sidebands across channels in a modified winner-take-all circuit, which together produce a form of suppression that sharpens the dynamic spectrum. Normal-hearing listeners identified spectrally smeared consonants (VCVs) and vowels (hVds) in quiet and in noise. Consonant and vowel identification, especially in noise, were improved by the processing. The amount of improvement did not depend on the degree of spectral smearing or talker characteristics. For consonants, when results were analyzed according to phonetic feature, the most consistent improvement was for place of articulation. This is encouraging for hearing aid applications because confusions between consonants differing in place are a persistent problem for listeners with sensorineural hearing loss.
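
The abstract describes the CE algorithm only at a high level (compressive gain plus lateral inhibitory sidebands in a winner-take-all-like circuit). The toy sketch below shows the general flavor of spectral sharpening by lateral inhibition across filterbank channels; the kernel weights and compression exponent are invented for illustration, and this is not the published real-time algorithm.

```python
import numpy as np

def sharpen_channels(channel_energies, inhibition=0.4, exponent=0.6):
    """Toy spectral-contrast enhancement across filterbank channels.

    Applies compressive (power-law) gain and then subtracts a fraction of the
    neighboring channels' energy (lateral inhibitory sidebands), which boosts
    spectral peaks relative to their flanks. All weights are illustrative.
    """
    e = np.asarray(channel_energies, dtype=float) ** exponent     # compressive gain
    kernel = np.array([-inhibition, 1.0 + 2 * inhibition, -inhibition])
    sharpened = np.convolve(e, kernel, mode="same")               # center minus sidebands
    return np.clip(sharpened, 0.0, None)                          # half-wave rectify

spectrum = np.array([1, 1, 2, 6, 2, 1, 1], dtype=float)           # a single formant-like peak
print(sharpen_channels(spectrum).round(2))
```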


Subjects
Algorithms, Speech Perception/physiology, Acoustic Stimulation, Analysis of Variance, Computer Simulation, Female, Humans, Language, Male, Noise, Signal-to-Noise Ratio, Time Factors
17.
J Acoust Soc Am; 128(4): 2112-26, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20968382

ABSTRACT

Some evidence, mostly drawn from experiments using only a single moderate rate of speech, suggests that low-frequency amplitude modulations may be particularly important for intelligibility. Here, two experiments investigated intelligibility of temporally distorted sentences across a wide range of simulated speaking rates, and two metrics were used to predict results. Sentence intelligibility was assessed when successive segments of fixed duration were temporally reversed (exp. 1), and when sentences were processed through four third-octave-band filters, the outputs of which were desynchronized (exp. 2). For both experiments, intelligibility decreased with increasing distortion. However, in exp. 2, intelligibility recovered modestly with longer desynchronization. Across conditions, performances measured as a function of proportion of utterance distorted converged to a common function. Estimates of intelligibility derived from modulation transfer functions predict a substantial proportion of the variance in listeners' responses in exp. 1, but fail to predict performance in exp. 2. By contrast, a metric of potential information, quantified as relative dissimilarity (change) between successive cochlear-scaled spectra, is introduced. This metric reliably predicts listeners' intelligibility across the full range of speaking rates in both experiments. Results support an information-theoretic approach to speech perception and the significance of spectral change rather than physical units of time.
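
As a rough illustration of the cochlear-scaled spectral-change metric the abstract introduces, the sketch below measures the Euclidean distance between band-energy vectors of successive frames on a log-frequency grid. The frame length, band spacing, and distance measure are stand-in assumptions, not the published cochlear-scaled entropy procedure.

```python
import numpy as np

def spectral_change(x, fs, frame_ms=16, n_bands=33, f_lo=80.0, f_hi=8000.0):
    """Frame-to-frame spectral change, a rough stand-in for cochlear-scaled entropy.

    Computes band energies on a log-frequency grid (a crude approximation of a
    cochlear scale) for consecutive non-overlapping frames, then returns the
    Euclidean distance between successive band-energy vectors.
    """
    frame_len = int(fs * frame_ms / 1000)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    freqs = np.fft.rfftfreq(frame_len, 1 / fs)
    spectra = []
    for start in range(0, len(x) - frame_len + 1, frame_len):
        frame = x[start:start + frame_len] * np.hanning(frame_len)
        power = np.abs(np.fft.rfft(frame)) ** 2
        bands = [power[(freqs >= lo) & (freqs < hi)].sum()
                 for lo, hi in zip(edges[:-1], edges[1:])]
        spectra.append(10 * np.log10(np.array(bands) + 1e-12))
    spectra = np.array(spectra)
    return np.linalg.norm(np.diff(spectra, axis=0), axis=1)   # one value per frame boundary

# Usage: change = spectral_change(sentence_waveform, fs=16000)
```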


Subjects
Cochlea/physiology, Phonetics, Speech Acoustics, Speech Intelligibility, Acoustic Stimulation, Audiometry, Entropy, Humans, Male, Theoretical Models, Sound Spectrography, Time Factors
18.
Atten Percept Psychophys; 72(2): 470-80, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20139460

ABSTRACT

Brief experience with reliable spectral characteristics of a listening context can markedly alter perception of subsequent speech sounds, and parallels have been drawn between auditory compensation for listening context and visual color constancy. In order to better evaluate such an analogy, the generality of acoustic context effects for sounds with spectral-temporal compositions distinct from speech was investigated. Listeners identified nonspeech sounds (extensively edited samples produced by a French horn and a tenor saxophone) following either resynthesized speech or a short passage of music. Preceding contexts were "colored" by spectral envelope difference filters, which were created to emphasize differences between French horn and saxophone spectra. Listeners were more likely to report hearing a saxophone when the stimulus followed a context filtered to emphasize spectral characteristics of the French horn, and vice versa. Despite clear changes in apparent acoustic source, the auditory system calibrated to relatively predictable spectral characteristics of filtered context, differentially affecting perception of subsequent target nonspeech sounds. This calibration to listening context and relative indifference to acoustic sources operates much like visual color constancy, for which reliable properties of the spectrum of illumination are factored out of perception of color.
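
The abstract mentions spectral envelope difference filters built to emphasize differences between the French horn and saxophone spectra. One plausible way to construct such a filter is sketched below: estimate each instrument's long-term average spectrum, take a scaled dB difference, and design an FIR filter with that gain contour. The spectrum estimation, scaling factor, and filter length are assumptions, not the study's procedure.

```python
import numpy as np
from scipy.signal import firwin2

def difference_filter_taps(sound_a, sound_b, fs, n_taps=257, scale=0.5, n_fft=2048):
    """FIR taps that tilt a signal toward sound_a's spectrum and away from sound_b's.

    Long-term average spectra of the two sounds are computed, their scaled dB
    difference becomes the target gain contour, and firwin2 designs the filter.
    All design choices here are illustrative assumptions.
    """
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)

    def lta_db(x):
        starts = range(0, len(x) - n_fft + 1, n_fft // 2)
        frames = np.stack([x[i:i + n_fft] * np.hanning(n_fft) for i in starts])
        power = np.mean(np.abs(np.fft.rfft(frames, axis=1)) ** 2, axis=0)
        return 10 * np.log10(power + 1e-12)

    diff_db = scale * (lta_db(sound_a) - lta_db(sound_b))
    return firwin2(n_taps, freqs, 10 ** (diff_db / 20), fs=fs)

# A context could then be "colored" with scipy.signal.lfilter(taps, [1.0], context).
```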


Subjects
Association, Auditory Perception, Color Perception, Music, Phonetics, Sound Spectrography, Speech Perception, Adolescent, Female, Humans, Male, Practice (Psychology), Psychoacoustics, Young Adult
19.
J Acoust Soc Am; 128(6): 3597-13, 2010 Dec.
Article in English | MEDLINE | ID: mdl-21218892

ABSTRACT

The auditory system calibrates to reliable properties of a listening environment in ways that enhance sensitivity to less predictable (more informative) aspects of sounds. These reliable properties may be spectrally local (e.g., peaks) or global (e.g., gross tilt), but the time course over which the auditory system registers and calibrates to these properties is unknown. Understanding temporal properties of this perceptual calibration is essential for revealing underlying mechanisms that serve to increase sensitivity to changing and informative properties of sounds. Relative influence of the second formant (F2) and spectral tilt was measured for identification of /u/ and /i/ following precursor contexts that were harmonic complexes with frequency-modulated resonances. Precursors filtered to match F2 or tilt of following vowels induced perceptual calibration (diminished influence) to F2 and tilt, respectively. Calibration to F2 was greatest for shorter duration precursors (250 ms), which implicates physiologic and/or perceptual mechanisms that are sensitive to onsets. In contrast, calibration to tilt was greatest for precursors with longer durations and higher repetition rates because greater opportunities to sample the spectrum result in more stable estimates of long-term global spectral properties. Possible mechanisms that promote sensitivity to change are discussed.


Subjects
Auditory Pathways/physiology, Cues (Psychology), Speech Acoustics, Speech Perception, Acoustic Stimulation, Speech Audiometry, Humans, Perceptual Masking, Sound Spectrography, Time Factors
20.
J Speech Lang Hear Res; 52(3): 653-70, 2009 Jun.
Article in English | MEDLINE | ID: mdl-18952854

ABSTRACT

PURPOSE: To evaluate how perceptual importance of spectral tilt is altered when formant information is degraded by sensorineural hearing loss. METHOD: Eighteen listeners with mild to moderate hearing impairment (HI listeners) and 20-23 listeners with normal hearing (NH listeners) identified synthesized stimuli that varied in second formant (F2) frequency and spectral tilt. Experiments 1 and 2 examined utterance-initial stops (/ba/ and /da/), and Experiments 3 and 4 examined medial stops (/aba/ and /ada/). Spectral tilt was manipulated at either consonant onset (Experiments 1 and 3), vowels (Experiments 2 and 4), or both (Experiment 5). RESULTS: Regression analyses revealed that HI listeners weighted F2 substantially less than NH listeners. There was no difference in absolute tilt weights between groups. However, HI listeners emphasized tilt as much as F2 for medial stops. NH listeners weighted tilt primarily when F2 was ambiguous, whereas HI listeners weighted tilt significantly more than NH listeners on unambiguous F2 endpoints. CONCLUSIONS: Attenuating changes in spectral tilt can be as deleterious as taking away F2 information for HI listeners. Recordings through a wide dynamic range compression hearing aid show compromised changes in spectral tilt, compressed in range by up to 50%.
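
The abstract reports regression-based cue weights for F2 and spectral tilt. A common way to obtain such weights is to fit a logistic regression predicting each categorical response from the standardized cue values and compare coefficient magnitudes; the sketch below does this on simulated trials, since the real data, model details, and standardization are not given in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical trial data: each row is (F2 in Hz, spectral tilt in dB/octave),
# each label is True for a "da" response and False for "ba".
rng = np.random.default_rng(0)
f2 = rng.uniform(900, 1800, size=200)
tilt = rng.uniform(-12, 0, size=200)
p_da = 1 / (1 + np.exp(-(0.01 * (f2 - 1350) + 0.1 * (tilt + 6))))   # simulated listener
responses = rng.random(200) < p_da

# Standardize cues so coefficient magnitudes are comparable, then fit.
X = np.column_stack([(f2 - f2.mean()) / f2.std(), (tilt - tilt.mean()) / tilt.std()])
model = LogisticRegression().fit(X, responses)
w_f2, w_tilt = model.coef_[0]
print(f"relative F2 weight: {abs(w_f2) / (abs(w_f2) + abs(w_tilt)):.2f}")
```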


Subjects
Sensorineural Hearing Loss/psychology, Phonetics, Speech Perception, Adult, Aged, Aged 80 and over, Female, Hearing Aids, Hearing Tests, Humans, Likelihood Functions, Linear Models, Logistic Models, Male, Middle Aged, Task Performance and Analysis