Results 1 - 20 of 62
1.
Z Gerontol Geriatr ; 56(4): 283-289, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37103645

ABSTRACT

BACKGROUND: Hearing aid technology has proven to be successful in the rehabilitation of hearing loss, but its performance is still limited in difficult everyday conditions characterized by noise and reverberation. OBJECTIVE: Introduction to the current state of hearing aid technology and presentation of the current state of research and future developments. METHODS: The current literature was analyzed and several specific new developments are presented. RESULTS: Both objective and subjective data from empirical studies show the limitations of the current technology. Examples of current research show the potential of machine learning-based algorithms and multimodal signal processing for improving speech processing and perception, of using virtual reality for improving hearing device fitting and of mobile health technology for improving hearing health services. CONCLUSION: Hearing device technology will remain a key factor in the rehabilitation of hearing impairments. New technology, such as machine learning and multimodal signal processing, virtual reality and mobile health technology, will improve speech enhancement, individual fitting and communication training, thus providing better support for all hearing-impaired patients, including older patients with disabilities or declining cognitive skills.


Subject(s)
Hearing Aids, Hearing Loss, Speech Perception, Humans, Hearing Aids/psychology, Hearing Loss/diagnosis, Noise
2.
J Acoust Soc Am ; 151(2): 712, 2022 02.
Article in English | MEDLINE | ID: mdl-35232067

ABSTRACT

Humans are able to follow a speaker even in challenging acoustic conditions. The perceptual mechanisms underlying this ability remain unclear. A computational model of attentive voice tracking, consisting of four computational blocks: (1) sparse periodicity-based auditory features (sPAF) extraction, (2) foreground-background segregation, (3) state estimation, and (4) top-down knowledge, is presented. The model connects the theories about auditory glimpses, foreground-background segregation, and Bayesian inference. It is implemented with the sPAF, sequential Monte Carlo sampling, and probabilistic voice models. The model is evaluated by comparing it with the human data obtained in the study by Woods and McDermott [Curr. Biol. 25(17), 2238-2246 (2015)], which measured the ability to track one of two competing voices with time-varying parameters [fundamental frequency (F0) and formants (F1, F2)]. Three model versions were tested, which differ in the type of information used for the segregation: version (a) uses the oracle F0, version (b) uses the estimated F0, and version (c) uses the spectral shape derived from the estimated F0 and oracle F1 and F2. Version (a) simulates the optimal human performance in conditions with the largest separation between the voices, version (b) simulates the conditions in which the separation is not sufficient to follow the voices, and version (c) is closest to the human performance for moderate voice separation.


Subject(s)
Speech Perception, Voice, Acoustics, Bayes Theorem, Computer Simulation, Humans, Periodicity, Speech Acoustics
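
To make the state-estimation stage of the model above concrete, the following is a minimal sketch of sequential Monte Carlo (particle-filter) tracking of a single F0 trajectory. It stands in for the idea only: noisy F0 values replace the sPAF front end, and the particle count, drift model, and noise parameters are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of sequential Monte Carlo (particle-filter) voice tracking.
# The sPAF front end is NOT reproduced; noisy F0 "observations" stand in for
# the real auditory features, and all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def track_f0(observations, n_particles=500, drift_std=5.0, obs_std=10.0):
    """Track a slowly varying F0 trajectory (Hz) with a bootstrap particle filter."""
    particles = rng.uniform(100.0, 300.0, n_particles)   # initial F0 hypotheses
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in observations:
        # 1) Prediction: random-walk voice model (F0 changes slowly over time).
        particles = particles + rng.normal(0.0, drift_std, n_particles)
        # 2) Update: weight particles by the likelihood of the observed feature.
        weights *= np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))     # posterior-mean F0
        # 3) Resample when the effective particle count collapses.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# Toy usage: a gliding F0 observed through noise.
true_f0 = np.linspace(180, 220, 100)
obs = true_f0 + rng.normal(0, 10, true_f0.size)
print(track_f0(obs)[-5:])
```
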
3.
Int J Audiol ; : 1-10, 2022 Dec 13.
Article in English | MEDLINE | ID: mdl-36512479

ABSTRACT

Objective: Distorted loudness perception is one of the main complaints of hearing aid users. Measuring loudness perception in the clinic as experienced in everyday listening situations is important for loudness-based hearing aid fitting. Little research has been done comparing loudness perception in the field and in the laboratory. Design: Participants rated the loudness of 36 driving actions in the field and in the laboratory. The field measurements were recorded with a 360° camera and a tetrahedral microphone. The recorded stimuli, which are openly accessible, were presented in three conditions in the laboratory: 360° video recordings with a head-mounted display, video recordings with a desktop monitor and audio-only. Study sample: Thirteen normal-hearing participants and 18 hearing-impaired participants with hearing aids. Results: The driving actions were rated as louder in the laboratory than in the field for the condition with a desktop monitor and for the audio-only condition. The less realistic a laboratory condition was, the more likely it was for a participant to rate a driving action as louder. The field-laboratory loudness differences were larger for louder sounds. Conclusion: The results of this experiment indicate the importance of increasing realism and immersion when measuring loudness in the clinic.

4.
Int J Audiol ; 61(4): 311-321, 2022 04.
Article in English | MEDLINE | ID: mdl-34109902

ABSTRACT

OBJECTIVE: The aim was to create and validate an audiovisual version of the German matrix sentence test (MST), which uses the existing audio-only speech material. DESIGN: Video recordings were made and dubbed with the audio of the existing German MST. The current study evaluates the MST in conditions including audio and visual modalities, speech in quiet and noise, and open and closed-set response formats. SAMPLE: One female talker recorded repetitions of the German MST sentences. Twenty-eight young normal-hearing participants completed the evaluation study. RESULTS: The audiovisual benefit in quiet was 7.0 dB in sound pressure level (SPL). In noise, the audiovisual benefit was 4.9 dB in signal-to-noise ratio (SNR). Speechreading scores ranged from 0% to 84% speech reception in visual-only sentences (mean = 50%). Audiovisual speech reception thresholds (SRTs) had a larger standard deviation than audio-only SRTs. Audiovisual SRTs improved successively with an increasing number of lists performed. The final video recordings are openly available. CONCLUSIONS: The video material achieved gross speech intelligibility results similar to those reported in the literature, despite the inherent asynchronies of dubbing. Due to ceiling effects, adaptive procedures targeting 80% intelligibility should be used. At least one or two training lists should be performed.


Subject(s)
Speech Perception, Female, Humans, Noise/adverse effects, Speech Intelligibility, Speech Reception Threshold Test/methods, Video Recording
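
The recommendation above to use adaptive procedures targeting 80% intelligibility can be illustrated with a generic adaptive SNR track. This is a schematic sketch only: it is not the matrix test's actual adaptive rule, and the simulated listener (logistic psychometric function), step rule, and parameter values are assumptions.

```python
# Schematic adaptive SNR track converging on a target intelligibility (80%).
# NOT the German matrix test's published adaptive procedure; the simulated
# listener and the step rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def simulated_listener(snr_db, srt80=-4.0, slope=0.15, n_words=5):
    """Return the proportion of words repeated correctly for one sentence."""
    p = 1.0 / (1.0 + np.exp(-slope * 4.0 * (snr_db - srt80)))  # logistic psychometric fn
    return rng.binomial(n_words, p) / n_words

def adaptive_track(target=0.8, start_snr=0.0, step_db=2.0, n_sentences=30):
    snr = start_snr
    history = []
    for _ in range(n_sentences):
        score = simulated_listener(snr)
        # Lower the SNR when the listener beats the target, raise it otherwise;
        # the score-weighted step keeps the track hovering around `target`.
        snr += step_db * (target - score)
        history.append(snr)
    return np.mean(history[-10:])   # crude SRT-80 estimate from the last sentences

print(f"Estimated SNR for ~80% intelligibility: {adaptive_track():+.1f} dB")
```
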
5.
Eur J Neurosci ; 51(5): 1353-1363, 2020 03.
Article in English | MEDLINE | ID: mdl-29855099

ABSTRACT

Human listeners robustly decode speech information from a talker of interest that is embedded in a mixture of spatially distributed interferers. A relevant question is which time-frequency segments of the speech are predominantly used by a listener to solve such a complex auditory scene analysis task. A recent psychoacoustic study investigated the relevance of low signal-to-noise ratio (SNR) components of a target signal on speech intelligibility in a spatial multitalker situation. For this, a three-talker stimulus was manipulated in the spectro-temporal domain such that target speech time-frequency units below a variable SNR threshold (SNRcrit) were discarded while keeping the interferers unchanged. The psychoacoustic data indicate that only target components at and above a local SNR of about 0 dB contribute to intelligibility. This study applies an auditory scene analysis "glimpsing" model to the same manipulated stimuli. Model data are found to be similar to the human data, supporting the notion of "glimpsing," that is, that salient speech-related information is predominantly used by the auditory system to decode speech embedded in a mixture of sounds, at least for the tested conditions of three overlapping speech signals. This implies that perceptually relevant auditory information is sparse and may be processed with low computational effort, which is relevant for neurophysiological research of scene analysis and novelty processing in the auditory system.


Subject(s)
Speech Perception, Acoustic Stimulation, Auditory Threshold, Humans, Perceptual Masking, Psychoacoustics, Signal-to-Noise Ratio, Sound, Speech Intelligibility
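
The stimulus manipulation described above (discarding target time-frequency units below a local SNR criterion while leaving the interferers untouched) amounts to a binary mask on the target spectrogram. The sketch below shows that idea in its simplest form; the STFT dimensions and the random toy "spectrograms" are assumptions for illustration.

```python
# Minimal sketch of the "glimpsing" manipulation: target time-frequency units
# whose local SNR falls below a criterion (SNRcrit) are discarded, while the
# interferers are left unchanged. Toy magnitude spectrograms stand in for real STFTs.
import numpy as np

def glimpse_mask(target_tf, interferer_tf, snr_crit_db=0.0):
    """Binary mask keeping target T-F units at or above the local SNR criterion."""
    eps = 1e-12
    local_snr_db = 10.0 * np.log10((np.abs(target_tf) ** 2 + eps) /
                                   (np.abs(interferer_tf) ** 2 + eps))
    return local_snr_db >= snr_crit_db

# Toy usage with random "spectrograms" (frequency bins x time frames).
rng = np.random.default_rng(0)
T = rng.rayleigh(1.0, (257, 100))      # target magnitude spectrogram
I = rng.rayleigh(1.0, (257, 100))      # summed interferer magnitude spectrogram
mask = glimpse_mask(T, I, snr_crit_db=0.0)
sparse_target = T * mask               # only the "glimpsed" target components remain
print(f"{mask.mean():.0%} of target T-F units survive at SNRcrit = 0 dB")
```
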
6.
Ear Hear ; 41 Suppl 1: 48S-55S, 2020.
Article in English | MEDLINE | ID: mdl-33105259

ABSTRACT

The benefit from directional hearing devices predicted in the lab often differs from reported user experience, suggesting that laboratory findings lack ecological validity. This difference may be partly caused by differences in self-motion between the lab and real-life environments. This literature review aims to provide an overview of the methods used to measure and quantify self-motion, the test environments, and the measurement paradigms. Self-motion is the rotation and translation of the head and torso and movement of the eyes. Studies were considered which explicitly assessed or controlled self-motion within the scope of hearing and hearing device research. The methods and outcomes of the reviewed studies are compared and discussed in relation to ecological validity. The reviewed studies demonstrate interactions between hearing device benefit and self-motion, such as a decreased benefit from directional microphones due to a more natural head movement when the test environment and task include realistic complexity. Identified factors associated with these interactions include the presence of audiovisual cues in the environment, interaction with conversation partners, and the nature of the tasks being performed. This review indicates that although some aspects of the interactions between self-motion and hearing device benefit have been shown and many methods for assessment and analysis of self-motion are available, it is still unclear to what extent individual factors affect the ecological validity of the findings. Further research is required to relate lab-based measures of self-motion to the individual's real-life hearing ability.


Subject(s)
Hearing Aids, Speech Perception, Cues, Hearing, Humans, Motion
7.
Ear Hear ; 41 Suppl 1: 31S-38S, 2020.
Article in English | MEDLINE | ID: mdl-33105257

ABSTRACT

To assess perception with, and performance of, modern and future hearing devices with advanced adaptive signal processing capabilities, novel evaluation methods are required that go beyond already established methods. These novel methods will simulate to a certain extent the complexity and variability of acoustic conditions and acoustic communication styles in real life. This article discusses the current state and the perspectives of virtual reality technology use in the lab for designing complex audiovisual communication environments for hearing assessment and hearing device design and evaluation. In an effort to increase the ecological validity of lab experiments, that is, to increase the degree to which lab data reflect real-life hearing-related function, and to support the development of improved hearing-related procedures and interventions, this virtual reality lab marks a transition from conventional (audio-only) lab experiments to the field. The first part of the article introduces and discusses the notion of the communication loop as a theoretical basis for understanding the factors that are relevant for acoustic communication in real life. From this, requirements are derived that allow an assessment of the extent to which a virtual reality lab reflects these factors, and which may be used as a proxy for ecological validity. The most important factor of real-life communication identified is a closed communication loop among the actively behaving participants. The second part of the article gives an overview of the current developments towards a virtual reality lab at Oldenburg University that aims at interactive and reproducible testing of subjects with and without hearing devices in challenging communication conditions. The extent to which the virtual reality lab in its current state meets the requirements defined in the first part is discussed, along with its limitations and potential further developments. Finally, data are presented from a qualitative study that compared subject behavior and performance in two audiovisual environments presented in the virtual reality lab (a street and a cafeteria) with the corresponding field environments. The results show similarities and differences in subject behavior and performance between the lab and the field, indicating that the virtual reality lab in its current state marks a step towards more ecological validity in lab-based hearing and hearing device research, but requires further development towards higher levels of ecological validity.


Subject(s)
Hearing Tests, User-Computer Interface, Virtual Reality, Acoustics, Comprehension, Humans, Sound
8.
Ear Hear ; 41 Suppl 1: 5S-19S, 2020.
Article in English | MEDLINE | ID: mdl-33105255

ABSTRACT

Ecological validity is a relatively new concept in hearing science. It has been cited as relevant with increasing frequency in publications over the past 20 years, but without any formal conceptual basis or clear motive. The sixth Eriksholm Workshop was convened to develop a deeper understanding of the concept for the purpose of applying it in hearing research in a consistent and productive manner. Inspired by relevant debate within the field of psychology, and taking into account the World Health Organization's International Classification of Functioning, Disability, and Health framework, the attendees at the workshop reached a consensus on the following definition: "In hearing science, ecological validity refers to the degree to which research findings reflect real-life hearing-related function, activity, or participation." Four broad purposes for striving for greater ecological validity in hearing research were determined: A (Understanding) better understanding the role of hearing in everyday life; B (Development) supporting the development of improved procedures and interventions; C (Assessment) facilitating improved methods for assessing and predicting ability to accomplish real-world tasks; and D (Integration and Individualization) enabling more integrated and individualized care. Discussions considered the effects of variables and phenomena commonly present in hearing-related research on the level of ecological validity of outcomes, supported by examples from a few selected outcome domains and for different types of studies. Illustrated with examples, potential strategies were offered for promoting a high level of ecological validity in a study and for how to evaluate the level of ecological validity of a study. Areas in particular that could benefit from more research to advance ecological validity in hearing science include: (1) understanding the processes of hearing and communication in everyday listening situations, and specifically the factors that make listening difficult in everyday situations; (2) developing new test paradigms that include more than one person (e.g., to encompass the interactive nature of everyday communication) and that are integrative of other factors that interact with hearing in real-life function; (3) integrating new and emerging technologies (e.g., virtual reality) with established test methods; and (4) identifying the key variables and phenomena affecting the level of ecological validity to develop verifiable ways to increase ecological validity and derive a set of benchmarks to strive for.


Subject(s)
Hearing Aids, Hearing, Auditory Perception, Comprehension, Humans, Research Design
9.
Ear Hear ; 39(4): 664-678, 2018.
Article in English | MEDLINE | ID: mdl-29210810

ABSTRACT

OBJECTIVES: Normalizing perceived loudness is an important rationale for gain adjustments in hearing aids. It has been demonstrated that gains required for restoring normal loudness perception for monaural narrowband signals can lead to higher-than-normal loudness in listeners with hearing loss, particularly for binaural broadband presentation. The present study presents a binaural bandwidth-adaptive dynamic compressor (BBDC) that can apply different gains for narrow- and broadband signals. It was hypothesized that normal perceived loudness for a broad variety of signals could be restored for listeners with mild to moderate high-frequency hearing loss by applying individual signal-dependent gain corrections. DESIGN: Gains to normalize perceived loudness for narrowband stimuli were assessed in 15 listeners with mild to moderate high-frequency hearing loss using categorical loudness scaling. Gains for narrowband loudness compensation were calculated and applied in a standard compressor. Aided loudness functions for signals with different bandwidths were assessed. The deviation from the average normal-hearing loudness functions was used for gain correction in the BBDC. Aided loudness functions for narrow- and broadband signals with BBDC were then assessed. Gains for a 65 dB SPL speech-shaped noise of BBDC were compared with gains based on National Acoustic Laboratories' nonlinear fitting procedure version 2 (NAL-NL2). The perceived loudness for 20 real signals was compared to the average normal-hearing rating. RESULTS: The suggested BBDC showed close-to-normal loudness functions for binaural narrow- and broadband signals for the listeners with hearing loss. Normal loudness ratings were observed for the real-world test signals. The proposed gain reduction method resulted on average in similar gains as prescribed by NAL-NL2. However, substantial gain variations compared to NAL-NL2 were observed in the data for individual listeners. Gain corrections after narrowband loudness compensation showed large interindividual differences for binaural broadband signals. Some listeners required no further gain reduction for broadband signals; for others, gains in decibels were more than halved for binaural broadband signals. CONCLUSION: The interindividual differences of the binaural broadband gain corrections indicate that relevant information for normalizing perceived loudness of binaural broadband signals cannot be inferred from monaural narrowband loudness functions. Over-amplification can be avoided if binaural broadband measurements are included in the fitting procedure. For listeners with a high binaural broadband gain correction factor, loudness compensation for narrowband and broadband stimuli cannot be achieved by compression algorithms that disregard the bandwidth of the input signals. The suggested BBDC includes individual binaural broadband corrections in a more appropriate way than threshold-based procedures.


Subject(s)
Equipment Design, Hearing Aids, Sensorineural Hearing Loss/rehabilitation, Loudness Perception, Aged, Aged 80 and over, Female, Sensorineural Hearing Loss/physiopathology, Humans, Male, Middle Aged
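
The core idea above, reducing narrowband loudness-compensation gains by an individual correction when the input becomes broadband, can be sketched with a simple bandwidth-dependent gain interpolation. This is not the published BBDC algorithm: the bandwidth estimate, the interpolation rule, and all numbers (including the 10 dB individual correction) are assumptions for illustration only.

```python
# Hedged sketch of a bandwidth-adaptive gain correction: narrowband
# loudness-compensation gains are reduced by an individual, bandwidth-dependent
# correction so broadband signals are not over-amplified. NOT the published
# BBDC algorithm; bandwidth estimate and interpolation rule are illustrative.
import numpy as np

def effective_bandwidth_octaves(spectrum_db, freqs_hz, threshold_db=20.0):
    """Crude bandwidth estimate: octave span of bins within `threshold_db` of the peak."""
    active = freqs_hz[spectrum_db >= spectrum_db.max() - threshold_db]
    return np.log2(active.max() / active.min()) if active.size > 1 else 0.0

def bandwidth_adaptive_gain(narrowband_gain_db, bandwidth_oct,
                            broadband_correction_db, full_bandwidth_oct=6.0):
    """Interpolate between the narrowband gain and the corrected broadband gain."""
    w = np.clip(bandwidth_oct / full_bandwidth_oct, 0.0, 1.0)
    return narrowband_gain_db - w * broadband_correction_db

# Toy usage: an individual whose broadband gains must be ~10 dB lower than
# the gains prescribed from narrowband loudness scaling.
freqs = np.geomspace(125, 8000, 31)
speech_like = -10.0 * np.log2(freqs / 500.0) ** 2 / 8.0   # broad, speech-shaped spectrum
bw = effective_bandwidth_octaves(speech_like, freqs)
print(f"bandwidth ~{bw:.1f} oct ->",
      f"{bandwidth_adaptive_gain(25.0, bw, broadband_correction_db=10.0):.1f} dB gain")
```
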
10.
Int J Audiol ; 57(sup3): S112-S117, 2018 06.
Article in English | MEDLINE | ID: mdl-27813439

ABSTRACT

OBJECTIVE: Create virtual acoustic environments (VAEs) with interactive dynamic rendering for applications in audiology. DESIGN: A toolbox for creation and rendering of dynamic virtual acoustic environments (TASCAR) that allows direct user interaction was developed for application in hearing aid research and audiology. The software architecture and the simulation methods used to produce VAEs are outlined. Example environments are described and analysed. CONCLUSION: With the proposed software, a tool for simulation of VAEs is available. A set of VAEs rendered with the proposed software was described.


Subject(s)
Acoustics, Auditory Perception, Correction of Hearing Impairment/instrumentation, Controlled Environment, Hearing Aids, Hearing Loss/rehabilitation, Hearing, Persons With Hearing Impairments/rehabilitation, Virtual Reality, Acoustic Stimulation, Computer Simulation, Equipment Design, Hearing Loss/diagnosis, Hearing Loss/physiopathology, Hearing Loss/psychology, Hearing Tests, Humans, Materials Testing, Theoretical Models, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/psychology, Psychoacoustics, Software
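
As a generic illustration of what a virtual-acoustic-environment renderer such as the one described above has to compute for each source, the sketch below applies 1/r distance attenuation and the corresponding propagation delay to a point source. It shows the underlying physics only; it is not TASCAR's API or rendering algorithm, and the sample rate and geometry are assumptions.

```python
# Generic point-source rendering sketch for a virtual acoustic environment:
# 1/r distance attenuation plus propagation delay. NOT TASCAR's API or
# algorithm; sample rate and geometry are illustrative assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def render_point_source(signal, fs, src_pos, rcv_pos, ref_dist=1.0):
    """Delay and attenuate `signal` for a static source/receiver pair."""
    distance = np.linalg.norm(np.asarray(src_pos) - np.asarray(rcv_pos))
    gain = ref_dist / max(distance, ref_dist)          # 1/r law, clipped near the source
    delay_samples = int(round(distance / SPEED_OF_SOUND * fs))
    out = np.zeros(signal.size + delay_samples)
    out[delay_samples:] = gain * signal                 # delayed, attenuated copy
    return out

# Toy usage: a 1 kHz tone emitted 5 m in front of the receiver.
fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
rendered = render_point_source(tone, fs, src_pos=(5.0, 0.0, 1.7), rcv_pos=(0.0, 0.0, 1.7))
print(f"delay = {np.argmax(np.abs(rendered) > 0) / fs * 1000:.1f} ms, peak = {rendered.max():.2f}")
```
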
11.
Int J Audiol ; 57(sup3): S43-S54, 2018 06.
Article in English | MEDLINE | ID: mdl-28355947

ABSTRACT

OBJECTIVE: Single-channel noise reduction (SCNR) and dynamic range compression (DRC) are important elements in hearing aids. Only relatively few studies have addressed their interaction effects, and these typically used real hearing aids with limited knowledge about the integrated algorithms. Here, the potential benefit of different combinations and integration of SCNR and DRC was systematically assessed. DESIGN: Ten different systems combining SCNR and DRC were implemented, including five serial arrangements, a parallel and two multiplicative approaches. In an instrumental evaluation, signal-to-noise ratio (SNR) improvement and spectral contrast enhancement (SCE) were assessed. Quality ratings at 0 and +6 dB SNR, and speech reception thresholds (SRTs) in noise were measured using stationary and babble noise. STUDY SAMPLE: Thirteen young normal-hearing (NH) listeners and 12 hearing-impaired (HI) listeners participated. RESULTS: In line with an increased segmental SNR and spectral contrast compared to a serial concatenation, the parallel approach significantly reduced the perceived noise annoyance for both subject groups. The proposed multiplicative approaches could partly counteract increased speech distortions introduced by DRC and achieved the best overall quality for the HI listeners. CONCLUSIONS: For high SNRs well above the individual SRT, the specific combination of SCNR and DRC is perceptually relevant, and the integrative approaches were preferred.


Subject(s)
Correction of Hearing Impairment/instrumentation, Hearing Aids, Sensorineural Hearing Loss/rehabilitation, Hearing, Noise/prevention & control, Persons With Hearing Impairments/rehabilitation, Computer-Assisted Signal Processing, Speech Perception, Acoustic Stimulation, Adult, Aged, Case-Control Studies, Equipment Design, Female, Sensorineural Hearing Loss/diagnosis, Sensorineural Hearing Loss/physiopathology, Sensorineural Hearing Loss/psychology, Humans, Male, Middle Aged, Theoretical Models, Noise/adverse effects, Patient Preference, Perceptual Masking, Persons With Hearing Impairments/psychology, Psychoacoustics, Speech Intelligibility, Speech Reception Threshold Test, Young Adult
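
Why the ordering of SCNR and DRC matters can be shown with per-band gains in decibels: the compressor prescribes a different gain depending on whether it reacts to the noisy level or to the already noise-reduced level. The sketch below is a schematic illustration of that interaction only, not a reimplementation of any of the ten systems tested above; the compression threshold, ratio, and band levels are assumptions.

```python
# Schematic interaction between per-band noise-reduction (NR) gain and
# dynamic range compression (DRC). Not the systems tested in the study above;
# compression parameters and band levels are illustrative assumptions.
import numpy as np

def drc_gain_db(level_db, threshold_db=50.0, ratio=3.0):
    """Static compression curve: above threshold, output level grows at 1/ratio."""
    over = np.maximum(level_db - threshold_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)

def drc_then_nr(band_level_db, nr_gain_db):
    """Compressor reacts to the noisy band level; NR is applied afterwards."""
    return drc_gain_db(band_level_db) + nr_gain_db

def nr_then_drc(band_level_db, nr_gain_db):
    """Compressor reacts to the already noise-reduced band level."""
    return nr_gain_db + drc_gain_db(band_level_db + nr_gain_db)

# In a speech pause (noise-only band at 70 dB, NR attenuating by 12 dB) the two
# orderings prescribe different total gains, i.e., the compressor partly
# "gives back" the attenuation when it sees the reduced level first.
for name, f in [("DRC->NR", drc_then_nr), ("NR->DRC", nr_then_drc)]:
    print(name, f"{f(70.0, -12.0):+.1f} dB total gain")
```
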
13.
J Acoust Soc Am ; 142(1): 35, 2017 07.
Article in English | MEDLINE | ID: mdl-28764452

ABSTRACT

This study introduces a model for solving three different auditory tasks in a multi-talker setting: target localization, target identification, and word recognition. The model was used to simulate psychoacoustic data from a call-sign-based listening test involving multiple spatially separated talkers [Brungart and Simpson (2007). Percept. Psychophys. 69(1), 79-91]. The main characteristics of the model are (i) the extraction of salient auditory features ("glimpses") from the multi-talker signal and (ii) the use of a classification method that finds the best target hypothesis by comparing feature templates from clean target signals to the glimpses derived from the multi-talker mixture. The four features used were periodicity, periodic energy, and periodicity-based interaural time and level differences. The model results were well above chance level for all subtasks and conditions and generally agreed closely with the subject data. This indicates that, despite their sparsity, glimpses provide sufficient information about a complex auditory scene. This also suggests that complex source superposition models may not be needed for auditory scene analysis. Instead, simple models of clean speech may be sufficient to decode even complex multi-talker scenes.
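
The template-matching step described above can be reduced to a few lines: glimpsed feature vectors from the mixture are scored against clean-template features of each candidate hypothesis, and the hypothesis with the smallest total distance wins. The feature vectors, the distance metric, and the hypothesis labels below are stand-ins, not the periodicity/ITD/ILD features of the actual model.

```python
# Bare-bones sketch of glimpse-based template matching: glimpsed feature
# vectors from the mixture are scored against clean candidate templates and
# the best-scoring hypothesis wins. Features and labels are illustrative.
import numpy as np

def classify_from_glimpses(glimpses, templates):
    """
    glimpses : (n_glimpses, n_features) features taken from the mixture
    templates: dict mapping hypothesis name -> (n_frames, n_features) clean features
    Returns the hypothesis whose template explains the glimpses best.
    """
    scores = {}
    for name, tmpl in templates.items():
        # For each glimpse, distance to the closest clean-template frame;
        # a small total distance means the hypothesis matches the glimpses well.
        d = np.linalg.norm(glimpses[:, None, :] - tmpl[None, :, :], axis=-1)
        scores[name] = d.min(axis=1).sum()
    return min(scores, key=scores.get), scores

# Toy usage with random features for two candidate targets.
rng = np.random.default_rng(3)
templates = {"target_a": rng.normal(0, 1, (40, 8)), "target_b": rng.normal(3, 1, (40, 8))}
glimpses = templates["target_b"][::4] + rng.normal(0, 0.2, (10, 8))  # noisy glimpses of target_b
best, _ = classify_from_glimpses(glimpses, templates)
print("best hypothesis:", best)
```
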

14.
J Acoust Soc Am ; 139(5): 2911, 2016 05.
Article in English | MEDLINE | ID: mdl-27250183

ABSTRACT

A recent study showed that human listeners are able to localize a short speech target simultaneously masked by four speech tokens in reverberation [Kopco, Best, and Carlile (2010). J. Acoust. Soc. Am. 127, 1450-1457]. Here, an auditory model for solving this task is introduced. The model has three processing stages: (1) extraction of the instantaneous interaural time difference (ITD) information, (2) selection of target-related ITD information ("glimpses") using a template-matching procedure based on periodicity, spectral energy, or both, and (3) target location estimation. The model performance was compared to the human data, and to the performance of a modified model using an ideal binary mask (IBM) at stage (2). The IBM-based model performed similarly to the subjects, indicating that the binaural model is able to accurately estimate source locations. Template matching using spectral energy and using a combination of spectral energy and periodicity achieved good results, while using periodicity alone led to poor results. Particularly, the glimpses extracted from the initial portion of the signal were critical for good performance. Simulation data show that the auditory features investigated here are sufficient to explain human performance in this challenging listening condition and thus may be used in models of auditory scene analysis.


Subject(s)
Cues, Noise/adverse effects, Perceptual Masking, Periodicity, Sound Localization, Speech Acoustics, Speech Perception, Acoustic Stimulation, Acoustics, Auditory Pathways/physiology, Computer Simulation, Female, Humans, Male, Psychological Models, Time Factors
15.
J Acoust Soc Am ; 137(2): EL137-43, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25698041

ABSTRACT

Klein-Hennig et al. [J. Acoust. Soc. Am. 129, 3856-3872 (2011)] introduced a class of high-frequency stimuli for which the envelope shape can be altered by independently varying the attack, hold, decay, and pause durations. These stimuli, originally employed for testing the shape dependence of human listeners' sensitivity to interaural temporal differences (ITDs) in the ongoing envelope, were used to measure the lateralization produced by fixed interaural disparities. Consistent with the threshold ITD data, a steep attack and a non-zero pause facilitate strong ITD-based lateralization. In contrast, those conditions resulted in the smallest interaural level-based lateralization.


Subject(s)
Acoustic Stimulation/methods, Functional Laterality, Sound Localization, Adult, Pure-Tone Audiometry, Auditory Threshold, Female, Humans, Male, Motion, Psychoacoustics, Sound, Time Factors, Young Adult
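
The stimulus family referenced above is defined by an envelope built from independent attack, hold, decay, and pause segments imposed on a high-frequency carrier. The sketch below generates such an envelope; the raised-cosine ramp shape, segment durations, carrier frequency, and sample rate are illustrative assumptions, not necessarily those of the original studies.

```python
# Sketch of an attack-hold-decay-pause (ADHP) envelope on a high-frequency
# carrier. Ramp shape and all durations are illustrative assumptions.
import numpy as np

def adhp_envelope(fs, attack_ms, hold_ms, decay_ms, pause_ms, n_cycles=4):
    """One envelope cycle = attack ramp + hold + decay ramp + pause, repeated."""
    def ramp(n, rising):
        ph = np.linspace(0, np.pi, n, endpoint=False)
        r = 0.5 * (1 - np.cos(ph))          # raised-cosine ramp from 0 to 1
        return r if rising else r[::-1]
    seg = lambda ms: int(round(ms * 1e-3 * fs))
    cycle = np.concatenate([
        ramp(seg(attack_ms), rising=True),
        np.ones(seg(hold_ms)),
        ramp(seg(decay_ms), rising=False),
        np.zeros(seg(pause_ms)),
    ])
    return np.tile(cycle, n_cycles)

fs = 48000
env = adhp_envelope(fs, attack_ms=2.5, hold_ms=5.0, decay_ms=10.0, pause_ms=5.0)
t = np.arange(env.size) / fs
stimulus = env * np.sin(2 * np.pi * 4000 * t)   # 4 kHz carrier with the shaped envelope
print(f"{env.size / fs * 1000:.1f} ms of stimulus, {env.size} samples")
```
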
16.
J Acoust Soc Am ; 138(5): 2635-48, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26627742

ABSTRACT

Robust sound source localization is performed by the human auditory system even in challenging acoustic conditions and in previously unencountered, complex scenarios. Here a computational binaural localization model is proposed that possesses mechanisms for handling of corrupted or unreliable localization cues and generalization across different acoustic situations. Central to the model is the use of interaural coherence, measured as interaural vector strength (IVS), to dynamically weight the importance of observed interaural phase (IPD) and level (ILD) differences in frequency bands up to 1.4 kHz. This is accomplished through formulation of a probabilistic model in which the ILD and IPD distributions pertaining to a specific source location are dependent on observed interaural coherence. Bayesian computation of the direction-of-arrival probability map naturally leads to coherence-weighted integration of location cues across frequency and time. Results confirm the model's validity through statistical analyses of interaural parameter values. Simulated localization experiments show that even data points with low reliability (i.e., low IVS) can be exploited to enhance localization performance. A temporal integration length of at least 200 ms is required to gain a benefit; this is in accordance with previous psychoacoustic findings on temporal integration of spatial cues in the human auditory system.


Subject(s)
Auditory Perception/physiology, Sound Localization/physiology, Acoustic Stimulation, Algorithms, Bayes Theorem, Computer Simulation, Cues, Humans, Neurological Models, Statistical Models
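
The coherence-weighting idea above can be illustrated with a toy direction-of-arrival estimator: per-frame interaural phase differences (IPDs) vote for candidate azimuths, and each vote is weighted by the interaural vector strength (IVS) so that low-coherence frames contribute little. This is not the published probabilistic model; the Woodworth-style IPD-to-azimuth mapping, the von-Mises-like scoring, and the single-band treatment are crude assumptions for illustration.

```python
# Hedged sketch of coherence-weighted localization: IPD votes per frame,
# weighted by interaural vector strength (IVS). Not the published model;
# the IPD-to-azimuth mapping and scoring function are rough assumptions.
import numpy as np

def interaural_vector_strength(ipd, win=10):
    """IVS per frame: magnitude of the unit IPD phasor averaged over `win` frames."""
    phasor = np.exp(1j * ipd)
    kernel = np.ones(win) / win
    return np.abs(np.convolve(phasor, kernel, mode="same"))

def doa_map(ipd, ivs, freq_hz=500.0, head_radius=0.09, c=343.0, kappa=8.0):
    az = np.deg2rad(np.arange(-90, 91))                      # candidate azimuths
    itd = 3.0 * head_radius / c * np.sin(az)                 # spherical-head ITD model
    expected_ipd = 2 * np.pi * freq_hz * itd
    # von-Mises-like score of each frame's IPD under each candidate direction,
    # scaled by the frame's coherence before accumulation across time.
    score_per_frame = kappa * np.cos(ipd[:, None] - expected_ipd[None, :])
    weighted = (ivs[:, None] * score_per_frame).sum(axis=0)
    return np.rad2deg(az), weighted

# Toy usage: a source at ~30 deg observed through IPD jitter of varying reliability.
rng = np.random.default_rng(7)
true_ipd = 2 * np.pi * 500 * (3 * 0.09 / 343) * np.sin(np.deg2rad(30))
ipd = true_ipd + rng.normal(0, 0.4, 200)
azimuths, score = doa_map(ipd, interaural_vector_strength(ipd))
print("estimated azimuth:", azimuths[np.argmax(score)], "deg")
```
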
17.
Ear Hear ; 35(5): e213-27, 2014.
Article in English | MEDLINE | ID: mdl-25010636

ABSTRACT

OBJECTIVES: A previous study investigated whether pure-tone average (PTA) hearing loss and working memory capacity (WMC) modulate benefit from different binaural noise reduction (NR) settings. Results showed that listeners with smaller WMC preferred strong over moderate NR even at the expense of poorer speech recognition due to greater speech distortion (SD), whereas listeners with larger WMC did not. To enable a better understanding of these findings, the main aims of the present study were (1) to explore the perceptual consequences of changes to the signal mixture, target speech, and background noise caused by binaural NR, and (2) to determine whether response to these changes varies with WMC and PTA. DESIGN: As in the previous study, four age-matched groups of elderly listeners (with N = 10 per group) characterized by either mild or moderate PTAs and either better or worse performance on a visual measure of WMC participated. Five processing conditions were tested, which were based on the previously used (binaural coherence-based) NR scheme designed to attenuate diffuse signal components at mid to high frequencies. The five conditions differed in terms of the type of processing that was applied (no NR, strong NR, or strong NR with restoration of the long-term stimulus spectrum) and in terms of whether the target speech and background noise were processed in the same manner or whether one signal was left unprocessed while the other signal was processed with the gains computed for the signal mixture. Comparison across these conditions allowed assessing the effects of changes in high-frequency audibility (HFA), SD, and noise attenuation and distortion (NAD). Outcome measures included a dual-task paradigm combining speech recognition with a visual reaction time (VRT) task as well as ratings of perceived effort and overall preference. All measurements were carried out using headphone simulations of a frontal target speaker in a busy cafeteria. RESULTS: Relative to no NR, strong NR was found to impair speech recognition and VRT performance slightly and to improve perceived effort and overall preference markedly. Relative to strong NR, strong NR with restoration of the long-term stimulus spectrum and thus HFA did not affect speech recognition, restored VRT performance to that achievable with no NR, and increased perceived effort and reduced overall preference markedly. SD had negative effects on speech recognition and perceived effort, particularly when both speech and noise were processed with the gains computed for the signal mixture. NAD had positive effects on speech recognition, perceived effort, and overall preference, particularly when the target speech was left unprocessed. VRT performance was unaffected by SD and NAD. None of the datasets exhibited any clear signs that response to the different signal changes varies with PTA or WMC. CONCLUSIONS: For the outcome measures and stimuli applied here, the present study provides little evidence that PTA or WMC affect response to changes in HFA, SD, and NAD caused by binaural NR. However, statistical power restrictions suggest further research is needed. This research should also investigate whether partial HFA restoration combined with some pre-processing that reduces co-modulation distortion results in a more favorable balance of the effects of binaural NR across outcome dimensions and whether NR strength has any influence on these results.


Subject(s)
Hearing Aids, Sensorineural Hearing Loss/physiopathology, Short-Term Memory/physiology, Speech Perception/physiology, Aged, Aged 80 and over, Pure-Tone Audiometry, Sensorineural Hearing Loss/rehabilitation, Humans, Middle Aged, Signal Detection (Psychological), Signal-to-Noise Ratio
18.
Ear Hear ; 35(3): e52-62, 2014.
Article in English | MEDLINE | ID: mdl-24351610

ABSTRACT

OBJECTIVES: Although previous research indicates that cognitive skills influence benefit from different types of hearing aid algorithms, comparatively little is known about the role of, and potential interaction with, hearing loss. This holds true especially for noise reduction (NR) processing. The purpose of the present study was thus to explore whether degree of hearing loss and cognitive function modulate benefit from different binaural NR settings based on measures of speech intelligibility, listening effort, and overall preference. DESIGN: Forty elderly listeners with symmetrical sensorineural hearing losses in the mild to severe range participated. They were stratified into four age-matched groups (with n = 10 per group) based on their pure-tone average hearing losses and their performance on a visual measure of working memory (WM) capacity. The algorithm under consideration was a binaural coherence-based NR scheme that suppressed reverberant signal components as well as diffuse background noise at mid to high frequencies. The strength of the applied processing was varied from inactive to strong, and testing was carried out across a range of fixed signal-to-noise ratios (SNRs). Potential benefit was assessed using a dual-task paradigm combining speech recognition with a visual reaction time (VRT) task indexing listening effort. Pairwise preference judgments were also collected. All measurements were made using headphone simulations of a frontal speech target in a busy cafeteria. Test-retest data were gathered for all outcome measures. RESULTS: Analysis of the test-retest data showed all data sets to be reliable. Analysis of the speech scores showed that, for all groups, speech recognition was unaffected by moderate NR processing, whereas strong NR processing reduced intelligibility by about 5%. Analysis of the VRT scores revealed a similar data pattern. That is, while moderate NR did not affect VRT performance, strong NR impaired the performance of all groups slightly. Analysis of the preference scores collapsed across SNR showed that all groups preferred some over no NR processing. Furthermore, the two groups with smaller WM capacity preferred strong over moderate NR processing; for the two groups with larger WM capacity, preference did not differ significantly between the moderate and strong settings. CONCLUSIONS: The present study demonstrates that, for the algorithm and the measures of speech recognition and listening effort used here, the effects of different NR settings interact with neither degree of hearing loss nor WM capacity. However, preferred NR strength was found to be associated with smaller WM capacity, suggesting that hearing aid users with poorer cognitive function may prefer greater noise attenuation even at the expense of poorer speech intelligibility. Further research is required to enable a more detailed (SNR-dependent) analysis of this effect and to test its wider applicability.


Subject(s)
Algorithms, Cognition, Hearing Aids, Sensorineural Hearing Loss/rehabilitation, Short-Term Memory, Aged, Aged 80 and over, Pure-Tone Audiometry, Female, Sensorineural Hearing Loss/psychology, Humans, Male, Middle Aged, Physiological Pattern Recognition, Reaction Time, Signal-to-Noise Ratio, Speech Perception, Treatment Outcome
19.
J Acoust Soc Am ; 133(1): 1-4, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23297875

ABSTRACT

Recently two studies [Klein-Hennig et al., J. Acoust. Soc. Am. 129, 3856-3872 (2011); Laback et al., J. Acoust. Soc. Am. 130, 1515-1529 (2011)] independently investigated the isolated effect of pause duration on sensitivity to interaural time differences (ITD) in the ongoing stimulus envelope. The steepness of the threshold-ITD-versus-pause-duration functions differed considerably across the two studies. The present study, using matched carrier and modulation frequencies, directly compared threshold ITDs for the two envelope flank shapes from those studies. The results agree well when the metric of pause duration is defined based on modulation depth sensitivity.


Subject(s)
Acoustic Stimulation/methods, Auditory Perception, Auditory Threshold, Cues, Ear/physiology, Adult, Audiometry, Functional Laterality, Humans, Psychoacoustics, Time Factors, Young Adult
20.
J Acoust Soc Am ; 133(4): EL314-9, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23556697

ABSTRACT

Data are presented on the relation between loudness measured in categorical units (CUs) using a standardized loudness scaling method (ISO 16832, 2006) and loudness expressed as the classical standardized measures phon and sone. Based on loudness scaling of narrowband noise signals by 31 normal-hearing subjects, sound pressure levels eliciting the same categorical loudness were derived for various center frequencies. The results were comparable to the standardized equal-loudness level contours. A comparison between the loudness function in CUs at 1000 Hz and the standardized loudness function in sones indicates a cubic relation between the two loudness measures.


Subject(s)
Acoustics, Loudness Perception, Acoustic Stimulation, Audiometry, Auditory Threshold, Humans, Theoretical Models, Pressure, Computer-Assisted Signal Processing, Sound
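
For reference, the classical phon-to-sone relation used above can be written alongside a schematic reading of the reported cubic relation between loudness in sone and loudness in categorical units (CU). The first equation is the standard definition for loudness levels of roughly 40 phon and above; in the second, the proportionality constant k and the direction of the cubic dependence are assumptions made here for illustration, since the abstract only states that the relation is cubic.

```latex
% Standard phon-to-sone relation (valid for L_N >= ~40 phon) and a schematic
% cubic CU-sone relation; k and the direction N ~ CU^3 are assumptions.
\[
  N \;=\; 2^{(L_N - 40)/10}\ \mathrm{sone}
  \qquad (L_N \ge 40\ \mathrm{phon}),
  \qquad\quad
  N \;\approx\; k \cdot \mathrm{CU}^{3}.
\]
```
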