Results 1 - 20 of 29
1.
J Assoc Res Otolaryngol ; 24(4): 429-439, 2023 08.
Article in English | MEDLINE | ID: mdl-37438572

ABSTRACT

PURPOSE: Speech is characterized by dynamic acoustic cues that must be encoded by the auditory periphery, auditory nerve, and brainstem before they can be represented in the auditory cortex. The fidelity of these cues in the brainstem can be assessed with the frequency-following response (FFR). Data obtained from older adults (with normal or impaired hearing) were compared with previous results obtained from normal-hearing younger adults to evaluate the effects of age and hearing loss on the fidelity of FFRs to tone glides. METHOD: A signal detection approach was used to model a threshold criterion to distinguish the FFR from baseline neural activity. The response strength and temporal coherence of the FFR to tone glides varying in direction (rising or falling) and extent (1/3, 2/3, or 1 octave) were assessed by signal-to-noise ratio (SNR) and stimulus-response correlation coefficient (SRCC) in older adults with normal hearing and with hearing loss. RESULTS: Significant group mean differences in both SNR and SRCC were noted, with poorer responses more frequently observed with increased age and hearing loss, but with considerable response variability among individuals within each group and substantial overlap among group distributions. CONCLUSION: The overall distribution of FFRs across listeners and stimulus conditions suggests that observed group differences associated with age and hearing loss are influenced by a decreased likelihood of older and hearing-impaired individuals having a detectable FFR response and by lower average FFR fidelity among those older and hearing-impaired individuals who do have a detectable response.
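As a rough illustration, the response strength (SNR) and temporal coherence (SRCC) metrics described in the method can be sketched as below; the function names, baseline windowing, and lag search are assumptions for illustration, not the authors' exact analysis pipeline.

```python
import numpy as np

def ffr_snr_db(response, baseline):
    """Response strength: RMS of the FFR analysis window relative to the RMS
    of baseline (pre-stimulus) neural activity, in dB. Hypothetical
    implementation; the study's exact windowing may differ."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(response) / rms(baseline))

def srcc(stimulus, response, max_lag):
    """Temporal coherence: maximum normalized cross-correlation between the
    stimulus waveform and the response over a range of assumed neural lags."""
    s = (stimulus - stimulus.mean()) / stimulus.std()
    best = 0.0
    for lag in range(max_lag + 1):
        seg = response[lag:lag + len(s)]
        if len(seg) < len(s):
            break  # response too short to evaluate this lag
        seg = (seg - seg.mean()) / seg.std()
        best = max(best, abs(np.mean(s * seg)))
    return best
```

A response that is a delayed copy of the stimulus yields an SRCC near 1, while uncorrelated noise yields a value near 0.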


Subject(s)
Deafness, Sensorineural Hearing Loss, Hearing Loss, Speech Perception, Humans, Aged, Speech Perception/physiology, Acoustic Stimulation/methods, Hearing/physiology
2.
Hear Res ; 434: 108771, 2023 07.
Article in English | MEDLINE | ID: mdl-37119674

ABSTRACT

Difficulty understanding speech in fluctuating backgrounds is common among older adults. Whereas younger adults are adept at interpreting speech based on brief moments when the signal-to-noise ratio is favorable, older adults use these glimpses of speech less effectively. Age-related declines in auditory brainstem function may degrade the fidelity of speech cues in fluctuating noise for older adults, such that brief glimpses of speech interrupted by noise segments are not faithfully represented in the neural code that reaches the cortex. This hypothesis was tested using electrophysiological recordings of the envelope following response (EFR) elicited by glimpses of speech-like stimuli varying in duration (42, 70, 210 ms) and interrupted by silence or intervening noise. Responses from adults aged 23-73 years indicated that both age and hearing sensitivity were associated with EFR temporal coherence and response magnitude. Age was better than hearing sensitivity for predicting temporal coherence, whereas hearing sensitivity was better than age for predicting response magnitude. Poorer-fidelity EFRs were observed with shorter glimpses and with the addition of intervening noise. However, losses of fidelity with glimpse duration and noise were not associated with participant age or hearing sensitivity. These results suggest that the EFR is sensitive to factors commonly associated with glimpsing but that these factors do not entirely account for age-related changes in speech recognition in fluctuating backgrounds.


Subject(s)
Speech Perception, Speech, Humans, Aged, Speech Perception/physiology, Noise/adverse effects, Hearing/physiology, Brain Stem, Acoustic Stimulation/methods
3.
J Acoust Soc Am ; 152(2): 807, 2022 08.
Article in English | MEDLINE | ID: mdl-36050190

ABSTRACT

Remote testing of auditory function can be transformative to both basic research and hearing healthcare; however, historically, many obstacles have limited remote collection of reliable and valid auditory psychometric data. Here, we report performance on a battery of auditory processing tests using a remotely administered system, Portable Automatic Rapid Testing. We compare a previously reported dataset collected in a laboratory setting with the same measures collected using uncalibrated, participant-owned devices in remote settings (experiment 1, n = 40); remotely with and without calibrated hardware (experiment 2, n = 36); and in the laboratory with and without calibrated hardware (experiment 3, n = 58). Results were well-matched across datasets and had similar reliability, but overall performance was slightly worse than published norms. Analyses of potential nuisance factors such as environmental noise, distraction, or lack of calibration failed to provide reliable evidence that these factors contributed to the observed variance in performance. These data indicate the feasibility of remote testing of suprathreshold auditory processing using participants' own devices. Although the current investigation was limited to young participants without hearing difficulties, its outcomes demonstrate the potential for large-scale, remote hearing testing of more hearing-diverse populations both to advance basic science and to establish the clinical viability of remote auditory testing.


Subject(s)
Hearing Loss, Hearing Tests, Auditory Perception, Hearing, Humans, Reproducibility of Results
4.
JASA Express Lett ; 2(9): 094401, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36097604

ABSTRACT

This study investigated how level differences affect the fusion and identification of dichotically and monaurally presented concurrent vowel pairs in which the vowels differed in level by 0, 4, 8, or 12 dB. With dichotic presentation, there was minimal variation in fusion and identification: vowels were nearly always fused and were identified consistently across level differences. Conversely, with monaural presentation, fusion and identification varied systematically across level differences, with the more intense vowel dominating fused percepts. The dissimilar effect of level difference for dichotic versus monaural presentation may arise from differences in energetic masking and/or divergent mechanisms underlying sound segregation and integration.

5.
Brain Sci ; 12(6)2022 May 27.
Article in English | MEDLINE | ID: mdl-35741581

ABSTRACT

(1) Background: Difficulty hearing in noise is exacerbated in older adults. Older adults are more likely to have audiometric hearing loss, although some individuals with normal pure-tone audiograms also have difficulty perceiving speech in noise. Additional variables also likely account for speech understanding in noise. It has been suggested that one important class of variables is the ability to process auditory information once it has been detected. Here, we tested a set of these "suprathreshold" auditory processing abilities and related them to performance on a two-part test of speech understanding in competition with and without spatial separation of the target and masking speech. Testing was administered in the Portable Automated Rapid Testing (PART) application developed by our team; PART facilitates psychoacoustic assessments of auditory processing. (2) Methods: Forty-one individuals (average age 51 years) completed assessments of sensitivity to temporal fine structure (TFS) and spectrotemporal modulation (STM) detection via an iPad running the PART application. Statistical models were used to evaluate the strength of associations between performance on the auditory processing tasks and speech understanding in competition. Age and pure-tone average (PTA) were also included as potential predictors. (3) Results: The model providing the best fit also included age and a measure of diotic frequency modulation (FM) detection but none of the other potential predictors. However, even the best-fitting models accounted for 31% or less of the variance, supporting work suggesting that other variables (e.g., cognitive processing abilities) also contribute significantly to speech understanding in noise. (4) Conclusions: The results of the current study do not provide strong support for previous suggestions that suprathreshold processing abilities alone can be used to explain difficulties in speech understanding in competition among older adults. This discrepancy could be due to the speech tests used, the listeners tested, or the suprathreshold tests chosen. Future work with larger numbers of participants is warranted, including a range of cognitive tests and additional assessments of suprathreshold auditory processing abilities.

6.
J Speech Lang Hear Res ; 65(7): 2709-2719, 2022 07 18.
Article in English | MEDLINE | ID: mdl-35728021

ABSTRACT

PURPOSE: The effect of onset asynchrony on dichotic vowel segregation and identification in normal-hearing (NH) and hearing-impaired (HI) listeners was examined. We hypothesized that fusion would decrease and identification performance would improve with increasing onset asynchrony. Additionally, we hypothesized that HI listeners would gain more benefit from onset asynchrony. METHOD: A total of 18 adult subjects (nine NH, nine HI) participated. Testing included dichotic presentation of synthetic vowels, /i/, /u/, /a/, and /ae/. Vowel pairs were presented with the same or different fundamental frequency (f0; f0 = 106.9, 151.2, or 201.8 Hz) across the two ears and one onset asynchrony of 0, 1, 2, 4, 10, or 20 ms throughout a block (one block = 80 runs). Subjects identified the one or two vowels that they perceived on a touchscreen. Subjects were not informed that two vowels were always presented or that there was onset asynchrony. RESULTS: The effect of onset asynchrony on fusion and vowel identification was greatest in both groups when Δf0 = 0 Hz. Mean fusion scores across increasing onset asynchronies differed significantly between the two groups, with HI listeners exhibiting less fusion across pooled Δf0. There was no significant difference in identification performance. CONCLUSIONS: As onset asynchrony increased, dichotic vowel fusion decreased and identification performance improved. Onset asynchrony exerted a greater effect on fusion and identification of vowels when Δf0 = 0, especially in HI listeners. Therefore, this temporal cue promotes segregation in both groups of listeners, especially in HI listeners when the f0 cue was unavailable.


Subject(s)
Cues (Psychology), Hearing Loss, Hearing, Speech Perception, Adult, Hearing/physiology, Hearing Loss/physiopathology, Humans, Speech Perception/physiology
7.
J Cogn Enhanc ; 6(1): 47-66, 2022.
Article in English | MEDLINE | ID: mdl-34568741

ABSTRACT

Understanding speech in the presence of acoustical competition is a major complaint of those with hearing difficulties. Here, a novel perceptual learning game was tested for its effectiveness in reducing difficulties with hearing speech in competition. The game was designed to train a mixture of auditory processing skills thought to underlie speech in competition, such as spectral-temporal processing, sound localization, and auditory working memory. Training on these skills occurred both in quiet and in competition with noise. Thirty college-aged participants without any known hearing difficulties were assigned either to this mixed-training condition or to an active control consisting of frequency discrimination training within the same gamified setting. To assess training effectiveness, tests of speech in competition (primary outcome), as well as basic suprathreshold auditory processing and cognitive processing abilities (secondary outcomes), were administered before and after training. Results suggest modest improvements on speech-in-competition tests in the mixed-training condition compared to the frequency-discrimination control condition (Cohen's d = 0.68). While the sample was small and consisted of normally hearing individuals, these data suggest promise for future study in populations with hearing difficulties. Supplementary Information: The online version contains supplementary material available at 10.1007/s41465-021-00224-5.

8.
Am J Audiol ; 30(4): 1023-1036, 2021 Dec 09.
Article in English | MEDLINE | ID: mdl-34633838

ABSTRACT

PURPOSE: Type 2 diabetes mellitus (DM2) is associated with impaired hearing. However, the evidence is less clear as to whether DM2 can lead to difficulty understanding speech in complex acoustic environments, independently of age and hearing loss effects. The purpose of this study was to estimate the magnitude of DM2-related effects on speech understanding in the presence of competing speech after adjusting for age and hearing. METHOD: A cross-sectional study design was used to investigate the relationship between DM2 and speech understanding in 190 Veterans (M age = 47 years, range: 25-76). Participants were classified as having no diabetes (n = 74), prediabetes (n = 19), or DM2 that was well controlled (n = 24) or poorly controlled (n = 73). A test of spatial release from masking (SRM) was presented in a virtual acoustical simulation over insert earphones with multiple talkers using sentences from the coordinate response measure corpus to determine the target-to-masker ratio (TMR) required for 50% correct identification of target speech. A linear mixed model of the TMR results was used to estimate SRM and the separate effects of diabetes group, age, low-frequency pure-tone average (PTA-low), and high-frequency pure-tone average. A separate model estimated the effects of DM2 on PTA-low. RESULTS: After adjusting for hearing and age, diabetes-related effects remained among those whose DM2 was well controlled, showing an SRM loss of approximately 0.5 dB. Results also showed effects of hearing loss and age, consistent with the literature on people without DM2. Low-frequency hearing loss was greater among those with DM2. CONCLUSIONS: In a large cohort of Veterans, low-frequency hearing loss and older age negatively impact speech understanding. Compared with nondiabetics, individuals with controlled DM2 have additional auditory deficits beyond those associated with hearing loss or aging. These results provide a potential explanation for why individuals who have diabetes and/or are older often report difficulty understanding speech in real-world listening environments. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.16746475.
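Since the SRM measure above is simply the difference between two TMR thresholds (target and maskers co-located vs. spatially separated), it can be sketched as follows; the interpolation helper is purely illustrative, as the study estimated the 50% point with a tracking procedure rather than from fixed-level percent-correct data.

```python
import numpy as np

def tmr_at_50pct(tmrs_db, pct_correct):
    """Interpolate the target-to-masker ratio giving 50% correct target
    identification. Illustrative only; pct_correct must increase with TMR."""
    return float(np.interp(50.0, pct_correct, tmrs_db))

def spatial_release_db(tmr_colocated_db, tmr_separated_db):
    """SRM: the benefit (in dB) of spatially separating target and maskers.
    Positive values mean separation helped."""
    return tmr_colocated_db - tmr_separated_db
```

For example, a listener needing +2 dB TMR when co-located but tolerating -4 dB TMR when separated shows 6 dB of spatial release.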


Subject(s)
Type 2 Diabetes Mellitus, Hearing Loss, Speech Perception, Veterans, Aged, Aging, Auditory Threshold, Cross-Sectional Studies, Type 2 Diabetes Mellitus/epidemiology, Hearing Loss/diagnosis, Hearing Loss/epidemiology, Humans, Middle Aged, Perceptual Masking, Speech
10.
J Assoc Res Otolaryngol ; 22(4): 443-461, 2021 07.
Article in English | MEDLINE | ID: mdl-33877470

ABSTRACT

Normal-hearing (NH) listeners use frequency cues, such as fundamental frequency (voice pitch), to segregate sounds into discrete auditory streams. However, many hearing-impaired (HI) individuals have abnormally broad binaural pitch fusion, which leads to the original monaural pitches being fused and averaged into a single stream instead of segregated into two streams (Oh and Reiss, 2017), and may similarly lead to fusion and averaging of speech streams across ears. In this study, using dichotic speech stimuli, we examined the relationship between speech fusion and vowel identification. Dichotic vowel perception was measured in NH and HI listeners, with across-ear fundamental frequency differences varied. Synthetic vowels /i/, /u/, /a/, and /ae/ were generated with three fundamental frequencies (F0) of 106.9, 151.2, and 201.8 Hz and presented dichotically through headphones. For HI listeners, stimuli were shaped according to NAL-NL2 prescriptive targets. Although the dichotic vowels presented were always different across ears, listeners were not informed that there were no single-vowel trials and could identify one vowel or two different vowels on each trial. When there was no F0 difference between the ears, both NH and HI listeners were more likely to fuse the vowels and identify only one vowel. As ΔF0 increased, NH listeners increased the percentage of two-vowel responses, but HI listeners were more likely to continue to fuse vowels even with large ΔF0. Binaural tone fusion range was significantly correlated with vowel fusion rates in both NH and HI listeners. Confusion patterns with dichotic vowels differed from those seen with concurrent monaural vowels, suggesting different mechanisms behind the errors. Together, the findings suggest that broad fusion leads to spectral blending across ears, even for different ΔF0, and may hinder stream segregation and the understanding of speech in the presence of competing talkers.


Subject(s)
Hearing Loss, Phonetics, Pitch Perception, Speech, Dichotic Listening Tests, Hearing Tests, Humans
11.
J Acoust Soc Am ; 147(2): EL201, 2020 02.
Article in English | MEDLINE | ID: mdl-32113282

ABSTRACT

Measures of signal-in-noise neural encoding may improve understanding of the hearing-in-noise difficulties experienced by many individuals in everyday life. Usually noise results in weaker envelope following responses (EFRs); however, some studies demonstrate EFR enhancements. This experiment tested whether noise-induced enhancements in EFRs are demonstrated with simple 500- and 1000-Hz pure tones amplitude modulated at 110 Hz. Most of the 12 young normal-hearing participants demonstrated enhanced encoding of the 110-Hz fundamental in a noise background compared to quiet; in contrast, responses at the harmonics were decreased in noise relative to quiet conditions. Possible mechanisms of such an enhancement are discussed.
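A stimulus of the kind described (a 500- or 1000-Hz pure tone sinusoidally amplitude-modulated at 110 Hz) can be generated as below; the sampling rate, duration, modulation depth, and peak normalization are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, dur_s, fs=44100, depth=1.0):
    """Sinusoidally amplitude-modulated pure tone, peak-normalized to +/-1.
    Defaults are illustrative, not taken from the study."""
    t = np.arange(int(round(dur_s * fs))) / fs
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * mod_hz * t)
    carrier = np.sin(2.0 * np.pi * carrier_hz * t)
    return envelope * carrier / (1.0 + depth)  # keep samples within [-1, 1]
```

The 110-Hz envelope of such a tone is what the EFR is assumed to track, so the same generator can serve both carrier frequencies used in the experiment.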


Subject(s)
Auditory Evoked Potentials, Noise, Acoustic Stimulation, Adult, Hearing, Humans, Noise/adverse effects
12.
Hear Res ; 375: 25-33, 2019 04.
Article in English | MEDLINE | ID: mdl-30772133

ABSTRACT

The spectral (frequency) and amplitude cues in speech change rapidly over time. Study of the neural encoding of these dynamic features may help to improve diagnosis and treatment of speech-perception difficulties. This study uses tone glides as a simple approximation of dynamic speech sounds to better our understanding of the underlying neural representation of speech. The frequency following response (FFR) was recorded from 10 young normal-hearing adults using six signals varying in glide direction (rising and falling) and extent of frequency change (1/3, 2/3, and 1 octave). In addition, the FFR was simultaneously recorded using two different electrode montages (vertical and horizontal). These factors were analyzed across three time windows using a measure of response strength (signal-to-noise ratio) and a measure of temporal coherence (stimulus-to-response correlation coefficient). Results demonstrated effects of extent, montage, and a montage-by-window interaction. SNR and stimulus-to-response correlation measures differed in their sensitivity to these factors. These results suggest that the FFR reflects dynamic acoustic characteristics of simple tonal stimuli very well. Additional research is needed to determine how neural encoding may differ for more natural dynamic speech signals and populations with impaired auditory processing.


Subject(s)
Acoustic Stimulation/methods, Speech Perception/physiology, Adult, Electrodes, Electroencephalography/instrumentation, Electroencephalography/statistics & numerical data, Auditory Evoked Potentials/physiology, Female, Humans, Male, Phonetics, Psychoacoustics, Signal-to-Noise Ratio, Young Adult
13.
J Acoust Soc Am ; 143(1): 378, 2018 01.
Article in English | MEDLINE | ID: mdl-29390743

ABSTRACT

Individuals with hearing loss are thought to be less sensitive to the often subtle variations of acoustic information that support auditory stream segregation. Perceptual segregation can be influenced by differences in both the spectral and temporal characteristics of interleaved stimuli. The purpose of this study was to determine what stimulus characteristics support sequential stream segregation by normal-hearing and hearing-impaired listeners. Iterated rippled noises (IRNs) were used to assess the effects of tonality, spectral resolvability, and hearing loss on the perception of auditory streams in two pitch regions, corresponding to 250 and 1000 Hz. Overall, listeners with hearing loss were significantly less likely to segregate alternating IRNs into two auditory streams than were normal-hearing listeners. Low-pitched IRNs were generally less likely to segregate into two streams than were higher-pitched IRNs. High-pass filtering was a strong contributor to reduced segregation for both groups. The tonality, or pitch strength, of the IRNs had a significant effect on streaming, but the effect was similar for both groups of subjects. These data demonstrate that stream segregation is influenced by many factors including pitch differences, pitch region, spectral resolution, and degree of stimulus tonality, in addition to the loss of auditory sensitivity.


Subject(s)
Auditory Perception, Cues (Psychology), Hearing Loss/psychology, Persons With Hearing Impairments/psychology, Acoustic Stimulation, Adult, Aged, Auditory Threshold, Case-Control Studies, Female, Hearing, Hearing Loss/diagnosis, Hearing Loss/physiopathology, Humans, Male, Middle Aged, Pitch Perception, Time Factors
14.
Proc Meet Acoust ; 33(1)2018 May 07.
Article in English | MEDLINE | ID: mdl-30627315

ABSTRACT

The current state of consumer-grade electronics means that researchers, clinicians, students, and members of the general public across the globe can create high-quality auditory stimuli using tablet computers, built-in sound hardware, and calibrated consumer-grade headphones. Our laboratories have created a free application that supports this work: PART (Portable Automated Rapid Testing). PART has implemented a range of psychoacoustical tasks including: spatial release from speech-on-speech masking, binaural sensitivity, gap discrimination, temporal modulation, spectral modulation, and spectrotemporal modulation (STM). Here, data from the spatial release and STM tasks are presented. Data were collected across the globe on tablet computers using applications available for free download, built-in sound hardware, and calibrated consumer-grade headphones. Spatial release results were as good as or better than those obtained with standard laboratory methods. Spectrotemporal modulation thresholds were obtained rapidly and, for younger normal-hearing listeners, were also as good as or better than those in the literature. For older hearing-impaired listeners, rapid testing resulted in similar thresholds to those reported in the literature. Listeners at five different testing sites produced very similar STM thresholds, despite a variety of testing conditions and calibration routines. Download Spatial Release, PART, and Listen: An Auditory Training Experience for free at https://bgc.ucr.edu/games/.

15.
J Speech Lang Hear Res ; 58(2): 481-96, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25815688

ABSTRACT

PURPOSE: Aging is known to influence temporal processing, but its relationship to speech perception has not been clearly defined. To examine listeners' use of contextual and phonetic information, the Revised Speech Perception in Noise test (R-SPIN) was used to develop a time-gated word (TGW) task. METHOD: In Experiment 1, R-SPIN sentence lists were matched on context, target-word length, and median word segment length necessary for target recognition. In Experiment 2, TGW recognition was assessed in quiet and in noise among adults of various ages with normal hearing to moderate hearing loss. Linear regression models of the minimum word duration necessary for correct identification and identification failure rates were developed. Age and hearing thresholds were modeled as continuous predictors with corrections for correlations among multiple measurements of the same participants. RESULTS: While aging and hearing loss both had significant impacts on task performance in the most adverse listening condition (low context, in noise), for most conditions, performance was limited primarily by hearing loss. CONCLUSION: Whereas hearing loss was strongly related to target-word recognition, the effect of aging was only weakly related to task performance. These results have implications for the design and evaluation of studies of hearing and aging.


Subject(s)
Aging/physiology, Hearing/physiology, Language, Recognition (Psychology), Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Aged, Auditory Threshold, Female, Hearing Loss/psychology, Hearing Tests, Humans, Logistic Models, Male, Middle Aged, Noise, Phonetics, Time Factors, Young Adult
16.
Front Neurosci ; 8: 172, 2014.
Article in English | MEDLINE | ID: mdl-25009458

ABSTRACT

Older listeners are more likely than younger listeners to have difficulties in making temporal discriminations among auditory stimuli presented to one or both ears. In addition, the performance of older listeners is often observed to be more variable than that of younger listeners. The aim of this work was to relate age and hearing loss to temporal processing ability in a group of younger and older listeners with a range of hearing thresholds. Seventy-eight listeners were tested on a set of three temporal discrimination tasks (monaural gap discrimination, bilateral gap discrimination, and binaural discrimination of interaural differences in time). To examine the role of temporal fine structure in these tasks, four types of brief stimuli were used: tone bursts, broad-frequency chirps with rising or falling frequency contours, and random-phase noise bursts. Between-subject group analyses conducted separately for each task revealed substantial increases in temporal thresholds for the older listeners across all three tasks, regardless of stimulus type, as well as significant correlations among the performance of individual listeners across most combinations of tasks and stimuli. Differences in performance were associated with the stimuli in the monaural and binaural tasks, but not the bilateral task. Temporal fine structure differences among the stimuli had the greatest impact on monaural thresholds. Threshold estimate values across all tasks and stimuli did not show any greater variability for the older listeners as compared to the younger listeners. A linear mixed model applied to the data suggested that age and hearing loss contribute independently to temporal processing ability, supporting the increasingly accepted hypothesis that temporal processing can be impaired in older listeners relative to younger listeners with similar hearing thresholds.

17.
J Assoc Res Otolaryngol ; 14(1): 125-37, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23007720

ABSTRACT

Vowel identification is largely dependent on listeners' access to the frequency of two or three peaks in the amplitude spectrum. Earlier work has demonstrated that, whereas normal-hearing listeners can identify harmonic complexes with vowel-like spectral shapes even with very little amplitude contrast between "formant" components and remaining harmonic components, listeners with hearing loss require greater amplitude differences. This is likely the result of the poor frequency resolution that often accompanies hearing loss. Here, we describe an additional acoustic dimension for emphasizing formant versus non-formant harmonics that may supplement amplitude contrast information. The purpose of this study was to determine whether listeners were able to identify "vowel-like" sounds using temporal (component phase) contrast, which may be less affected by cochlear loss than spectral cues, and whether overall identification improves when congruent temporal and spectral information are provided together. Five normal-hearing and five hearing-impaired listeners identified three vowels over many presentations. Harmonics representing formant peaks were varied in amplitude, phase, or a combination of both. In addition to requiring less amplitude contrast, normal-hearing listeners could accurately identify the sounds with less phase contrast than required by people with hearing loss. However, both normal-hearing and hearing-impaired groups demonstrated the ability to identify vowel-like sounds based solely on component phase shifts, with no amplitude contrast information, and they also showed improved performance when congruent phase and amplitude cues were combined. For nearly all listeners, the combination of spectral and temporal information improved identification in comparison to either dimension alone.


Subject(s)
Hearing Loss/physiopathology, Phonetics, Speech Perception/physiology, Acoustic Stimulation, Adult, Auditory Threshold/physiology, Humans, Middle Aged, Speech Discrimination Tests/methods
18.
Ear Hear ; 33(2): 231-8, 2012.
Article in English | MEDLINE | ID: mdl-22367094

ABSTRACT

OBJECTIVE: To investigate the contributions of energetic and informational masking to neural encoding and perception in noise, using oddball discrimination and sentence recognition tasks. DESIGN: P3 auditory evoked potential, behavioral discrimination, and sentence recognition data were recorded in response to speech and tonal signals presented to nine normal-hearing adults. Stimuli were presented at a signal-to-noise ratio of -3 dB in four background conditions: quiet, continuous noise, intermittent noise, and four-talker babble. RESULTS: Responses to tonal signals were not significantly different for the three maskers. However, responses to speech signals in the four-talker babble resulted in longer P3 latencies, smaller P3 amplitudes, poorer discrimination accuracy, and longer reaction times than in any of the other conditions. Results also demonstrate significant correlations between physiological and behavioral data. As latency of the P3 increased, reaction times also increased and sentence recognition scores decreased. CONCLUSION: The data confirm a differential effect of masker type on the P3 and behavioral responses and present evidence that an informational masker interferes with speech understanding at the level of the cortex. Results also validate the P3 as a useful measure for demonstrating physiological correlates of informational masking.


Subject(s)
Auditory Evoked Potentials, Perceptual Masking/physiology, Phonetics, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Discrimination (Psychology)/physiology, P300 Event-Related Potentials/physiology, Female, Humans, Male, Noise, Physiological Pattern Recognition/physiology, Psychomotor Performance, Reaction Time/physiology, Signal-to-Noise Ratio, Young Adult
19.
J Speech Lang Hear Res ; 54(4): 1211-23, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21297168

ABSTRACT

PURPOSE: This study examined the influence of presentation level and mild-to-moderate hearing loss on the identification of a set of vowel tokens systematically varying in the frequency locations of their second and third formants. METHOD: Five listeners with normal hearing (NH listeners) and five listeners with hearing impairment (HI listeners) identified synthesized vowels that represented both highly identifiable and ambiguous examples of /i/, /[Please see symbol]/, and /[Please see symbol]/. RESULTS: Response patterns of NH listeners showed significant changes with an increase in presentation level from 75 dB SPL to 95 dB SPL, including increased category overlap. HI listeners, listening only at the higher level, showed greater category overlap than normal and overall identification patterns that differed significantly from those of NH listeners. Excitation patterns based on estimates of auditory filters suggested smoothing of the internal representations, resulting in impaired formant resolution. CONCLUSIONS: Both increased presentation level for NH listeners and the presence of hearing loss produced a significant change in vowel identification for this stimulus set. Major differences were observed between NH listeners and HI listeners in vowel category overlap and in the sharpness of boundaries between vowel tokens. It is likely that these findings reflect imprecise internal spectral representations due to reduced frequency selectivity.


Subject(s)
Auditory Perception, Auditory Threshold, Sensorineural Hearing Loss/physiopathology, Phonetics, Speech Perception, Acoustic Impedance Tests, Adult, Aged, Aged 80 and Over, Analysis of Variance, Case-Control Studies, Female, Humans, Male, Middle Aged, Biological Models, Psychoacoustics, Reference Values, Sound Spectrography
20.
Ear Hear ; 32(1): 53-60, 2011 Feb.
Article in English | MEDLINE | ID: mdl-20890206

ABSTRACT

OBJECTIVES: Perception-in-noise deficits have been demonstrated across many populations and listening conditions. Many factors contribute to successful perception of auditory stimuli in noise, including neural encoding in the central auditory system. Physiological measures such as cortical auditory-evoked potentials (CAEPs) can provide a view of neural encoding at the level of the cortex that may inform our understanding of listeners' abilities to perceive signals in the presence of background noise. To understand signal-in-noise neural encoding better, we set out to determine the effect of signal type, noise type, and evoking paradigm on the P1-N1-P2 complex. DESIGN: Tones and speech stimuli were presented to nine individuals in quiet and in three background noise types: continuous speech spectrum noise, interrupted speech spectrum noise, and four-talker babble at a signal-to-noise ratio of -3 dB. In separate sessions, CAEPs were evoked by a passive homogenous paradigm (single repeating stimulus) and an active oddball paradigm. RESULTS: The results for the N1 component indicated significant effects of signal type, noise type, and evoking paradigm. Although components P1 and P2 also had significant main effects of these variables, only P2 demonstrated significant interactions among these variables. CONCLUSIONS: Signal type, noise type, and evoking paradigm all must be carefully considered when interpreting signal-in-noise evoked potentials. Furthermore, these data confirm the possible usefulness of CAEPs as an aid to understand perception-in-noise deficits.


Subject(s)
Acoustic Stimulation/methods, Auditory Evoked Potentials/physiology, Perceptual Masking/physiology, Speech Perception/physiology, Adult, Electroencephalography, Female, Humans, Male, Computer-Assisted Signal Processing, Sound Spectrography, Young Adult