Results 1 - 12 of 12
1.
Alzheimers Dement ; 18(6): 1085-1099, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34569690

ABSTRACT

Speech comprehension in noisy environments depends on central auditory functions, which are vulnerable in Alzheimer's disease (AD). Binaural processing exploits the sounds at the two ears to optimally process degraded sound information; its characteristics are poorly understood in AD. We studied behavioral and electrophysiological alterations in binaural processing among 121 participants (AD = 27; amnestic mild cognitive impairment [aMCI] = 33; subjective cognitive decline [SCD] = 30; cognitively normal [CN] = 31). We observed impairment of binaural processing in AD and aMCI, and detected a U-shaped change in phase synchrony (declining from CN to SCD and to aMCI, but increasing from aMCI to AD). This improvement in phase synchrony accompanying more severe cognitive stages could reflect neural adaptation for binaural processing. Moreover, increased phase synchrony was associated with worse memory during the stages when neural adaptation apparently occurs. These findings support the hypothesis that neural adaptation to the binaural processing deficit may exacerbate cognitive impairment, which could help identify biomarkers and therapeutic targets in AD.
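Phase synchrony between EEG channels of the kind referred to above is commonly quantified with a phase-locking value (PLV); the abstract does not specify the measure used, so the sketch below is only a generic, hypothetical illustration of such an index, with all signal parameters assumed.

```python
# Generic phase-locking value (PLV) between two signals, shown only to illustrate
# the kind of phase-synchrony index the abstract refers to; the study's actual
# measure, frequency band, and parameters are not specified here.
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two equally long 1-D signals (0 = none, 1 = perfect)."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Toy example: two noisy 10 Hz oscillations sampled at 250 Hz (assumed values).
fs, f = 250, 10
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * f * t + 0.8) + 0.5 * rng.standard_normal(t.size)
print(round(plv(x, y), 3))
```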


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Alzheimer Disease/psychology, Biomarkers, Cognitive Dysfunction/psychology, Humans, Memory Disorders, Neuropsychological Tests
2.
Ear Hear ; 34(3): 280-7, 2013.
Article in English | MEDLINE | ID: mdl-23132528

ABSTRACT

OBJECTIVES: Previous studies have shown that both younger adults and older adults with clinically normal hearing are able to detect a break in correlation (BIC) between interaurally correlated sounds presented over headphones. This ability to detect a BIC improved when the correlated sounds were presented over left and right loudspeakers rather than over left and right headphones, suggesting that additional spectral cues provided by comb filtering (caused by interference between the two channels) facilitate detection of the BIC. However, older adults receive significantly less benefit than younger adults from a switch to loudspeaker presentation. It is hypothesized that this is a result of an age-related reduction in sensitivity to the monaural spectral cues provided by comb filtering. DESIGN: Two experiments were conducted in this study. Correlated white noises with a BIC in the temporal middle were presented from two spatially separated loudspeakers (positioned at ±45-degree azimuth) and recorded at the right ear of a Knowles Electronics Manikin for Acoustic Research (KEMAR). In Experiment 1, the waveforms recorded at the KEMAR's right ear were presented to the participant's right ear via headphone in 14 younger adults and 24 older adults with clinically normal hearing. Eight of the 14 younger participants also took part in Experiment 2. Under the monaural-cueing condition, the waveforms recorded at the KEMAR's right ear were presented to the participant's right ear as in Experiment 1. Under the binaural-cueing condition, waveforms delivered from the left loudspeaker and those from the right loudspeaker were recorded at the KEMAR's left and right ears, respectively, thereby eliminating the spectral ripple cue, and were presented to the participant's left and right ears, respectively. For each of the two experiments, the break duration threshold for detecting the BIC was examined when the inter-loudspeaker interval (ILI) was 0, 1, 2, or 4 msec (left loudspeaker leading). RESULTS: In Experiment 1, both younger participants and older participants detected the BIC in the waveforms recorded at the right ear of the KEMAR, but older participants had higher detection thresholds than younger participants when the ILI was 0, 2, or 4 msec, with no effect of shifting the SPL between 59 and 71 dB. In Experiment 2, each of the eight younger participants was able to detect the occurrence of the BIC under either the monaural-cueing or the binaural-cueing condition. In addition, the detection threshold under the monaural-cueing condition was substantially the same as that under the binaural-cueing condition at each of the four ILIs. CONCLUSIONS: Younger adults and older adults with clinically normal hearing are able to detect the monaural spectral changes arising from comb filtering when a sudden drop in intersound correlation is introduced. However, younger adults are more sensitive than older adults at detecting the BIC. The findings suggest that older adults are less able than younger adults to detect a periodic ripple in the sound spectrum. This age-related reduction in ability may contribute to older adults' difficulties in hearing under noisy, reverberant conditions.
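The comb-filtering cue described in this abstract can be illustrated with a short simulation: summing a noise with a copy of itself delayed by the inter-loudspeaker interval produces a rippled magnitude spectrum whose peaks are spaced 1/ILI apart in frequency. The sketch below is a minimal, hypothetical illustration with assumed parameters, not the authors' stimulus-generation code.

```python
# Minimal sketch (assumed parameters): spectral ripple produced by comb filtering
# when a noise is summed with a delayed copy of itself, as at a single ear
# receiving signals from two loudspeakers. Ripple spacing in Hz equals 1 / delay.
import numpy as np

fs = 44100                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
noise = rng.standard_normal(int(fs * 1.0))   # 1 s of white noise

for ili_ms in (1.0, 2.0, 4.0):               # inter-loudspeaker intervals used in the study
    d = int(round(fs * ili_ms / 1000.0))     # delay in samples
    delayed = np.concatenate([np.zeros(d), noise[:-d]])
    mixed = noise + delayed                  # comb-filtered waveform at one ear
    spectrum = np.abs(np.fft.rfft(mixed))    # plotting spectrum vs. freqs shows the ripple
    freqs = np.fft.rfftfreq(mixed.size, 1.0 / fs)
    print(f"ILI = {ili_ms} ms -> expected ripple spacing = {1000.0 / ili_ms:.0f} Hz")
```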


Subject(s)
Acoustic Stimulation/methods, Aging/physiology, Auditory Threshold/physiology, Hearing/physiology, Noise/adverse effects, Speech Perception/physiology, Adult, Age Factors, Aged, Analysis of Variance, Audiometry, Pure-Tone, Female, Humans, Middle Aged
3.
Adv Exp Med Biol ; 787: 239-46, 2013.
Article in English | MEDLINE | ID: mdl-23716229

ABSTRACT

Different models of the binaural system make different predictions for the just-detectable interaural time difference (ITD) for sine tones. To test these models, ITD thresholds were measured for human listeners focusing on high- and low-frequency regions. The measured thresholds exhibited a minimum between 700 and 1,000 Hz. As the frequency increased above 1,000 Hz, thresholds rose faster than exponentially. Although finite thresholds could be measured at 1,400 Hz, experiments did not converge at 1,450 Hz and higher. A centroid computation along the interaural delay axis, within the context of the Jeffress model, can successfully simulate the minimum and the high-frequency dependence. In the limit of medium-low frequencies f, where f · ITD << 1, mathematical approximations predict low-frequency slopes for the centroid model and for a rate-code model. It was found that measured thresholds were approximately inversely proportional to the frequency (slope = −1), in agreement with a rate-code model. However, the centroid model is capable of a wide range of predictions (slopes from 0 to −2).
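Read as a power law on log-log axes, the slopes reported above correspond to the relations written out below; this is only a paraphrase of the abstract's statement, with the proportionality constant left unspecified.

```latex
% Threshold ITD as a power law of tone frequency f (valid where f \cdot \mathrm{ITD} \ll 1).
% The exponent s is the log-log slope reported in the abstract.
\mathrm{ITD}_{\mathrm{thr}}(f) \;\propto\; f^{\,s},
\qquad s \approx -1 \ \text{(measured; consistent with a rate-code model)},
\qquad -2 \le s \le 0 \ \text{(range attainable by the centroid model)}.
```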


Subject(s)
Auditory Perception/physiology, Auditory Threshold/physiology, Models, Neurological, Pitch Perception/physiology, Sound Localization/physiology, Acoustic Stimulation/methods, Humans
4.
Article in English | MEDLINE | ID: mdl-34078214

ABSTRACT

Aging impairs visual associative memory. To date, little is known about whether aging also impairs auditory associative memory. Using head-related transfer functions (HRTFs) to induce perceived spatial locations of auditory phonemes, this study used an audiospatial paired-associates-learning (PAL) paradigm to assess auditory associative memory for phoneme-location pairs in both younger and older adults. Both age groups completed the PAL task at various levels of difficulty, which were defined by the number of items to be remembered. The results showed that, compared with younger participants, older participants passed fewer stages and had a lower auditory associative memory capacity. For maintaining a single audiospatial pair, no significant behavioral differences were found between the two age groups. However, when multiple sound-location pairs had to be remembered, older adults made more errors and demonstrated a lower working memory capacity than younger adults. Our study indicates that aging impairs audiospatial associative learning and memory.
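Virtual source locations of the kind used in this paradigm are typically rendered by convolving a sound with the left- and right-ear head-related impulse responses (HRIRs) for the desired direction. The sketch below assumes HRIR arrays are already available (for example, from a public HRTF database); the placeholder impulse responses are purely illustrative and this is not the authors' stimulus code.

```python
# Minimal sketch of HRTF-based spatialization (assumed inputs): convolve a mono
# phoneme recording with the left/right head-related impulse responses (HRIRs)
# for the desired azimuth to create the perceived source location.
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Return a 2-column (left, right) signal carrying the HRIR's spatial cues."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=1)

# Toy placeholder HRIRs (real ones would come from a measured HRTF set).
rng = np.random.default_rng(0)
mono = rng.standard_normal(4800)
hl = np.r_[1.0, np.zeros(31)]                 # direct path to the near ear
hr = np.r_[np.zeros(8), 0.6, np.zeros(23)]    # delayed, attenuated far-ear path
binaural = spatialize(mono, hl, hr)
print(binaural.shape)
```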


Subject(s)
Association Learning, Memory, Short-Term, Aged, Aging/psychology, Humans, Mental Recall
5.
J Cogn Neurosci ; 23(4): 1003-14, 2011 Apr.
Article in English | MEDLINE | ID: mdl-20350060

ABSTRACT

To discriminate and to recognize sound sources in a noisy, reverberant environment, listeners need to perceptually integrate the direct wave with the reflections of each sound source. It has been confirmed that perceptual fusion between direct and reflected waves of a speech sound helps listeners recognize this speech sound in a simulated reverberant environment with disrupting sound sources. When the delay between a direct sound wave and its reflected wave is sufficiently short, the two waves are perceptually fused into a single sound image as coming from the source location. Interestingly, compared with nonspeech sounds such as clicks and noise bursts, speech sounds have a much larger perceptual fusion tendency. This study investigated why the fusion tendency for speech sounds is so large. Here we show that when the temporal amplitude fluctuation of speech was artificially time reversed, a large perceptual fusion tendency of speech sounds disappeared, regardless of whether the speech acoustic carrier was in normal or reversed temporal order. Moreover, perceptual fusion of normal-order speech, but not that of time-reversed speech, was accompanied by increased coactivation of the attention-control-related, spatial-processing-related, and speech-processing-related cortical areas. Thus, speech-like acoustic carriers modulated by speech amplitude fluctuation selectively activate a cortical network for top-down modulations of speech processing, leading to an enhancement of perceptual fusion of speech sounds. This mechanism represents a perceptual-grouping strategy for unmasking speech under adverse conditions.


Subject(s)
Brain/physiology, Phonetics, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Analysis of Variance, Auditory Threshold, Brain/blood supply, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging/methods, Male, Oxygen/blood, Statistics as Topic, Time Factors, Young Adult
6.
Front Psychol ; 12: 692785, 2021.
Article in English | MEDLINE | ID: mdl-34220654

ABSTRACT

This study investigated whether human listeners are able to detect a binaurally uncorrelated arbitrary-noise fragment embedded in binaurally identical arbitrary-noise markers [a break in interaural correlation (BIAC)] in either frequency-constant (frequency-steady) or frequency-varied (unidirectionally frequency-gliding) noise. Ten participants with normal hearing were tested in Experiment 1 with up-gliding, down-gliding, and frequency-steady noises. Twenty-one participants with normal hearing were tested in Experiment 2a with both up-gliding and frequency-steady noises. Another nineteen participants with normal hearing were tested in Experiment 2b with both down-gliding and frequency-steady noises. Listeners were able to detect a BIAC in the frequency-steady noise (center frequency = 400 Hz) and in the two types of frequency-gliding noises (center frequency: between 100 and 1,600 Hz). The duration threshold for detecting the BIAC in frequency-gliding noises was significantly longer than that in the frequency-steady noise (Experiment 1), and the longest interaural delay at which a duration-fixed BIAC (200 ms) in frequency-gliding noises could be detected was significantly shorter than that in the frequency-steady noise (Experiment 2). Although human listeners can detect a BIAC in frequency-gliding noises, their sensitivity to a BIAC in frequency-gliding noises is much lower than in frequency-steady noise.

7.
Hear Res ; 244(1-2): 51-65, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18694813

ABSTRACT

This study evaluated unmasking functions of perceptual integration of target speech and simulated target-speech reflection, which were presented by two spatially separated loudspeakers. In both younger-adult listeners with normal hearing and older-adult listeners in the early stages of presbycusis, reducing the time interval between target speech and the target-reflection simulation (inter-target interval, ITI) from 64 to 0 ms not only progressively enhanced perceptual integration of target-speech signals, but also progressively released target speech from either speech masking or noise masking. When the signal-to-noise ratio was low, the release from speech masking was significantly larger than the release from noise masking in both younger and older listeners, but the longest ITI at which a significant release from speech masking occurred was significantly shorter in older listeners than in younger listeners. These results suggest that in reverberant environments with multi-talker speech, perceptual integration between the direct sound wave and correlated reflections, which facilitates perceptual segregation of the various sources, is critical for unmasking attended speech. The age-related reduction of the ITI range for releasing speech from speech masking may be one of the causes of the speech-recognition difficulties experienced by older listeners in such adverse environments.


Subject(s)
Perception, Perceptual Masking/physiology, Adolescent, Adult, Age Factors, Aged, Audiometry, Speech, Female, Humans, Male, Middle Aged, Speech, Speech Acoustics, Speech Perception/physiology, Speech Reception Threshold Test
8.
Atten Percept Psychophys ; 80(4): 871-883, 2018 May.
Article in English | MEDLINE | ID: mdl-29473143

ABSTRACT

Under a noisy "cocktail-party" listening condition with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of the target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a speech voice, by means of certain acoustical features. However, it was unclear whether emotionally conditioning a target-speech voice that has none of the typical acoustical features of emotions (i.e., an emotionally neutral voice) can be used by listeners to enhance target-speech recognition under speech-on-speech masking conditions. In this study we examined the recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female screaming sound that has a marked negative emotional valence. The results showed that recognition of the target speech (especially the first keyword in a target sentence) was significantly improved by emotionally conditioning the target speaker's voice. Moreover, the emotional unmasking effect was independent of the unmasking effect of the perceived spatial separation between the target speech and the masker. Also, electrodermal (skin conductance) responses became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting an increase in listening effort when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker's voice does not change the acoustical parameters of the target-speech stimuli, but the emotionally conditioned vocal features can be used as cues for unmasking target speech.


Subject(s)
Cues, Emotions, Perceptual Masking/physiology, Recognition, Psychology/physiology, Speech Perception/physiology, Adult, Female, Humans, Male, Noise, Voice, Young Adult
9.
Front Hum Neurosci ; 11: 34, 2017.
Article in English | MEDLINE | ID: mdl-28239344

ABSTRACT

Human listeners are able to selectively attend to target speech in a noisy environment with multiple people talking. Using recordings of scalp electroencephalogram (EEG), this study investigated how selective attention facilitates the cortical representation of target speech under a simulated "cocktail-party" listening condition with speech-on-speech masking. The results show that the cortical representation of target-speech signals under the multiple-people-talking condition was specifically improved by selective attention relative to the non-selective-attention listening condition, and the beta-band activity was most strongly modulated by selective attention. Moreover, measured with the Granger causality value, selective attention to the single target speech in the mixed-speech complex enhanced the following four causal connectivities for the beta-band oscillation: (1) from site FT7 to the right motor area, (2) from the left frontal area to the right motor area, (3) from the central frontal area to the right motor area, and (4) from the central frontal area to the right frontal area. However, only the selective-attention-induced change in beta-band causal connectivity from the central frontal area to the right motor area, and no other beta-band causal connectivity, was significantly correlated with the selective-attention-induced change in the cortical beta-band representation of target speech. These findings suggest that under the "cocktail-party" listening condition, the beta-band oscillation in EEGs to target speech is specifically facilitated by selective attention to the target speech embedded in the mixed-speech complex. The selective-attention-induced unmasking of target speech may be associated with the improved beta-band functional connectivity from the central frontal area to the right motor area, suggesting a top-down attentional modulation of the speech-motor process.
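Beta-band causal connectivity of the kind described here is often estimated by band-pass filtering the EEG and then fitting pairwise Granger-causality tests. The sketch below uses standard SciPy/statsmodels calls on synthetic data; it is only a generic illustration under assumed parameters, not the study's analysis pipeline.

```python
# Generic illustration (not the study's pipeline): band-pass two EEG channels to
# the beta band (13-30 Hz) and run a pairwise Granger-causality test.
import numpy as np
from scipy.signal import butter, filtfilt
from statsmodels.tsa.stattools import grangercausalitytests

fs = 250                                           # sampling rate in Hz (assumed)
b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")

rng = np.random.default_rng(2)
frontal = rng.standard_normal(5000)                # synthetic "frontal" channel
motor = 0.5 * np.roll(frontal, 3) + rng.standard_normal(5000)   # lagged coupling

beta = np.column_stack([filtfilt(b, a, motor),     # column 0: "effect" channel
                        filtfilt(b, a, frontal)])  # column 1: "cause" channel
# Tests whether column 1 Granger-causes column 0 at lags up to 10 samples.
results = grangercausalitytests(beta, maxlag=10)
```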

10.
PLoS One ; 10(6): e0126342, 2015.
Article in English | MEDLINE | ID: mdl-26125970

ABSTRACT

The subjective representation of the sounds delivered to the two ears of a human listener is closely associated with the interaural delay and correlation of these two-ear sounds. When the two-ear sounds, e.g., arbitrary noises, arrive simultaneously, the single auditory image of the binaurally identical noises becomes increasingly diffuse, and eventually separates into two auditory images, as the interaural correlation decreases. When the interaural delay increases from zero to several milliseconds, the auditory image of the binaurally identical noises also changes from a single image to two distinct images. However, the effects of these two factors have not been measured in the same group of participants. This study examined the impacts of interaural correlation and delay on detecting a binaurally uncorrelated fragment (interaural correlation = 0) embedded in binaurally correlated noises (i.e., a binaural gap, or break in interaural correlation). We found that the minimum duration of the binaural gap required for its detection (i.e., the duration threshold) increased exponentially as the interaural delay between the binaurally identical noises increased linearly from 0 to 8 ms. When no interaural delay was introduced, the duration threshold also increased exponentially as the interaural correlation of the binaurally correlated noises decreased linearly from 1 to 0.4. A linear relationship between the effect of interaural delay and that of interaural correlation was described for the listeners in this study: a 1 ms increase in interaural delay appeared to correspond to a 0.07 decrease in interaural correlation with respect to raising the duration threshold. Our results imply that a tradeoff may exist between the impacts of interaural correlation and interaural delay on the subjective representation of sounds delivered to the two human ears.
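The exponential growth of the duration threshold and the reported 1 ms ≈ 0.07 tradeoff can be written compactly, as in the sketch below. This is an illustrative model with placeholder coefficients (T0 and k are assumptions), not the authors' fitted analysis.

```python
# Illustrative (hypothetical) model of the reported findings: the binaural-gap
# duration threshold grows exponentially with interaural delay (0-8 ms), and a
# 1 ms increase in delay trades off against a 0.07 drop in interaural
# correlation with respect to raising the threshold.
import numpy as np

T0 = 20.0   # baseline duration threshold in ms (placeholder value)
k = 0.3     # growth rate per ms of interaural delay (placeholder value)

def threshold_from_delay(delay_ms):
    """Exponential rise of the duration threshold with interaural delay."""
    return T0 * np.exp(k * delay_ms)

def equivalent_correlation_drop(delay_ms, rate=0.07):
    """Correlation decrease that raises the threshold as much as the given delay."""
    return rate * delay_ms

for d in (0, 2, 4, 8):
    print(f"delay {d} ms -> threshold ~{threshold_from_delay(d):.1f} ms, "
          f"equivalent correlation ~{1.0 - equivalent_correlation_drop(d):.2f}")
```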


Subject(s)
Auditory Perception/physiology, Sound Localization/physiology, Acoustic Stimulation, Adult, Auditory Threshold/physiology, Female, Humans, Male, Models, Psychological, Noise, Time Factors, Young Adult
11.
Behav Brain Res ; 269: 87-94, 2014 Aug 01.
Article in English | MEDLINE | ID: mdl-24780867

ABSTRACT

Prepulse inhibition (PPI) is the suppression of the startle reflex when the startling stimulus is shortly preceded by a non-startling stimulus (the prepulse). Previous studies have shown that both fear conditioning of a prepulse and precedence-effect-induced perceptual separation between the conditioned prepulse and a noise masker facilitate selective attention to the prepulse and consequently enhance PPI with a remarkable prepulse-feature specificity. This study investigated whether these two types of attentional enhancement of PPI in rats also exhibit prepulse-location specificity. The results showed that when a prepulse was delivered by each of two spatially separated loudspeakers, fear conditioning of the prepulse at a particular perceived location (left or right of the tested rat) enhanced PPI without exhibiting any perceived-location specificity. However, when a noise masker was presented, the precedence-effect-induced perceptual separation between the conditioned prepulse and the noise masker further enhanced PPI when the prepulse was perceived as coming from the conditioned location but not from the unconditioned location. Moreover, both the conditioning-induced and the perceptual-separation-induced PPI enhancements were eliminated by extinction learning, whose effect could be blocked by systemic injection of the selective antagonist of metabotropic glutamate receptor subtype 5 (mGluR5), 2-methyl-6-(phenylethynyl)-pyridine (MPEP). Thus, fear conditioning of a prepulse perceived at a particular location not only facilitates selective attention to the conditioned prepulse but also induces a learning-based spatial gating effect on the spatial unmasking of the conditioned prepulse, so that the perceptual-separation-induced PPI enhancement becomes perceived-location specific.


Subject(s)
Attention/physiology, Auditory Perception/physiology, Conditioning, Psychological/physiology, Fear/physiology, Prepulse Inhibition/physiology, Space Perception/physiology, Acoustic Stimulation, Animals, Attention/drug effects, Auditory Perception/drug effects, Conditioning, Psychological/drug effects, Electroshock, Excitatory Amino Acid Antagonists/pharmacology, Extinction, Psychological/drug effects, Extinction, Psychological/physiology, Fear/drug effects, Male, Prepulse Inhibition/drug effects, Pyridines/pharmacology, Random Allocation, Rats, Sprague-Dawley, Receptor, Metabotropic Glutamate 5/antagonists & inhibitors, Receptor, Metabotropic Glutamate 5/metabolism, Sound Localization/drug effects, Sound Localization/physiology, Space Perception/drug effects
12.
Psych J ; 3(2): 113-20, 2014 Jun.
Article in English | MEDLINE | ID: mdl-26271763

ABSTRACT

In noisy, multi-talker environments such as a cocktail party, listeners can use various perceptual and/or cognitive cues to improve recognition of target speech against masking, particularly informational masking. Previous studies have shown that temporally pre-presented voice cues (voice primes) improve recognition of target speech against speech masking but not noise masking. This study investigated whether static face-image primes that have become target-voice associated (i.e., facial images linked through associative learning with voices reciting the target speech) can be used by listeners to unmask speech. The results showed that in 32 normal-hearing younger adults, temporally pre-presenting a voice-priming sentence in the same voice reciting the target sentence significantly improved recognition of target speech masked by irrelevant two-talker speech. When a person's face photograph became associated with the voice reciting the target speech through learning, temporally pre-presenting the target-voice-associated face image significantly improved recognition of target speech against speech masking, particularly for the last two keywords in the target sentence. Moreover, speech-recognition performance under the voice-priming condition was significantly correlated with that under the face-priming condition. The results suggest that learned facial information on talker identity plays an important role in identifying the target talker's voice and facilitating selective attention to the target-speech stream against the masking-speech stream.
