Results 1 - 7 of 7

1.
J Acoust Soc Am; 152(3): 1300, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36182279

ABSTRACT

Perception of word stress is an important aspect of recognizing speech, guiding the listener toward candidate words based on the perceived stress pattern. Cochlear implant (CI) signal processing is likely to disrupt some of the available cues for word stress, particularly vowel quality and pitch contour changes. In this study, we used a cue weighting paradigm to investigate differences in stress cue weighting patterns between participants listening with CIs and those with normal hearing (NH). We found that participants with CIs gave less weight to frequency-based pitch and vowel quality cues than NH listeners but compensated by upweighting vowel duration and intensity cues. Nonetheless, CI listeners' stress judgments were also significantly influenced by vowel quality and pitch, and they modulated their usage of these cues depending on the specific word pair in a manner similar to NH participants. In a series of separate online experiments with NH listeners, we simulated aspects of bimodal hearing by combining low-pass filtered speech with a vocoded signal. In these conditions, participants upweighted pitch and vowel quality cues relative to a fully vocoded control condition, suggesting that bimodal listening holds promise for restoring the stress cue weighting patterns exhibited by listeners with NH.
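
The bimodal simulation described above (low-pass filtered speech alongside a vocoded copy of the same signal) can be sketched in a few lines of standard signal processing. This is a generic illustration, not the authors' stimulus code; the channel count, filter orders, and 500 Hz cutoff are assumptions.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=8, lo=100.0, hi=8000.0):
    """Crude noise vocoder: split speech into log-spaced bands, extract
    each band's envelope, and use it to modulate band-limited noise."""
    speech = np.asarray(speech, dtype=float)
    edges = np.geomspace(lo, hi, n_channels + 1)
    out = np.zeros_like(speech)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))  # band envelope
        carrier = sosfiltfilt(sos, np.random.randn(len(speech)))
        out += env * carrier
    return out

def bimodal_pair(speech, fs, cutoff=500.0):
    """Low-pass speech (simulating the acoustic-hearing ear) plus a
    vocoded copy (simulating the CI ear); fs must exceed 16 kHz here."""
    sos = butter(4, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, speech), noise_vocode(speech, fs)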


Subjects
Cochlear Implantation; Cochlear Implants; Speech Perception; Acoustic Stimulation/methods; Acoustics; Cues; Hearing; Humans
2.
J Acoust Soc Am; 150(4): 3085, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34717460

ABSTRACT

The ability to see a talker's face improves speech intelligibility in noise, provided that the auditory and visual speech signals are approximately aligned in time. However, the importance of spatial alignment between corresponding faces and voices remains unresolved, particularly in multi-talker environments. In a series of online experiments, we investigated this using a task that required participants to selectively attend a target talker in noise while ignoring a distractor talker. In experiment 1, we found improved task performance when the talkers' faces were visible, but only when corresponding faces and voices were presented in the same hemifield (spatially aligned). In experiment 2, we tested for possible influences of eye position on this result. In auditory-only conditions, directing gaze toward the distractor voice reduced performance, but this effect could not fully explain the cost of audio-visual (AV) spatial misalignment. Lowering the signal-to-noise ratio (SNR) of the speech from +4 to -4 dB increased the magnitude of the AV spatial alignment effect (experiment 3), but accurate closed-set lipreading caused a floor effect that influenced results at lower SNRs (experiment 4). Taken together, these results demonstrate that spatial alignment between faces and voices contributes to the ability to selectively attend AV speech.
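
The SNR manipulation in experiment 3 reduces to scaling a masker against fixed-level speech before mixing. A minimal sketch of that step, assuming equal-length float signals (the function name is ours, not the paper's):

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the speech-to-noise power ratio
    equals snr_db, then return the mixture."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise

# e.g., the +4 dB and -4 dB conditions:
# easy = mix_at_snr(target, masker, +4.0)
# hard = mix_at_snr(target, masker, -4.0)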


Assuntos
Percepção da Fala , Voz , Humanos , Leitura Labial , Ruído/efeitos adversos , Inteligibilidade da Fala
3.
Neuropsychologia; 146: 107530, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32574616

ABSTRACT

To parse the world around us, we must constantly determine which sensory inputs arise from the same physical source and should therefore be perceptually integrated. Temporal coherence between auditory and visual stimuli drives audio-visual (AV) integration, but the role played by AV spatial alignment is less well understood. Here, we manipulated AV spatial alignment and collected electroencephalography (EEG) data while human subjects performed a free-field variant of the "pip and pop" AV search task. In this paradigm, visual search is aided by a spatially uninformative auditory tone, the onsets of which are synchronized to changes in the visual target. In Experiment 1, tones were either spatially aligned or spatially misaligned with the visual display. Regardless of AV spatial alignment, we replicated the key pip and pop result of improved AV search times. Mirroring the behavioral results, we found an enhancement of early event-related potentials (ERPs), particularly the auditory N1 component, in both AV conditions. We demonstrate that both top-down and bottom-up attention contribute to these N1 enhancements. In Experiment 2, we tested whether spatial alignment influences AV integration in a more challenging context with competing multisensory stimuli. An AV foil was added that visually resembled the target and was synchronized to its own stream of tones. The visual components of the AV target and AV foil occurred in opposite hemifields; the two auditory components were also in opposite hemifields and were either spatially aligned or spatially misaligned with the visual components to which they were synchronized. Search was fastest when the auditory and visual components of the AV target (and the foil) were spatially aligned. Attention modulated ERPs in both spatial conditions, but importantly, the scalp topography of early evoked responses shifted only when stimulus components were spatially aligned, signaling the recruitment of different neural generators likely related to multisensory integration. These results suggest that AV integration depends on AV spatial alignment when stimuli in both modalities compete for selective integration, a common scenario in real-world perception.
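
The ERP analysis rests on a standard pipeline: epoch the EEG around stimulus onsets, subtract the pre-stimulus baseline, average, and read out the N1 as the most negative deflection in an early post-onset window. A single-channel NumPy sketch under those generic assumptions (the 80-120 ms window is a conventional choice, not necessarily the paper's exact analysis window):

import numpy as np

def erp_n1(eeg, onsets, fs, tmin=-0.1, tmax=0.4):
    """Average event-locked, baseline-corrected epochs of one EEG
    channel; onsets are sample indices of stimulus events."""
    i0, i1 = int(tmin * fs), int(tmax * fs)
    epochs = np.stack([eeg[e + i0:e + i1] for e in onsets
                       if e + i0 >= 0 and e + i1 <= len(eeg)])
    baseline = epochs[:, :-i0].mean(axis=1, keepdims=True)  # pre-stimulus mean
    erp = (epochs - baseline).mean(axis=0)
    t = np.arange(i0, i1) / fs
    n1_window = (t >= 0.08) & (t <= 0.12)  # conventional auditory N1 latency
    return t, erp, erp[n1_window].min()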


Assuntos
Percepção Auditiva , Percepção Visual , Estimulação Acústica , Eletroencefalografia , Potenciais Evocados , Humanos , Estimulação Luminosa
4.
PLoS One; 13(8): e0200930, 2018.
Article in English | MEDLINE | ID: mdl-30067790

ABSTRACT

The ventriloquism aftereffect (VAE) refers to a shift in auditory spatial perception following exposure to a spatial disparity between auditory and visual stimuli. The VAE has been previously measured on two distinct time scales. Hundreds or thousands of exposures to an audio-visual spatial disparity produce an enduring VAE that persists after exposure ceases. Exposure to a single audio-visual spatial disparity produces an immediate VAE that decays over seconds. To determine if these phenomena are two extremes of a continuum or represent distinct processes, we conducted an experiment with normal-hearing listeners that measured VAE in response to a repeated, constant audio-visual disparity sequence, both immediately after exposure to each audio-visual disparity and after the end of the sequence. In each experimental session, subjects were exposed to sequences of auditory and visual targets that were constantly offset by +8° or -8° in azimuth from one another, then localized auditory targets presented in isolation following each sequence. Eye position was controlled throughout the experiment to avoid the effects of gaze on auditory localization. In contrast to other studies that did not control eye position, we found both a large shift in auditory perception that decayed rapidly after each AV disparity exposure and a gradual shift in auditory perception that grew over time and persisted after exposure to the AV disparity ceased. We modeled the temporal and spatial properties of the measured auditory shifts using grey-box nonlinear system identification and found that two models could explain the data equally well. In the power model, the temporal decay of the ventriloquism aftereffect was modeled with a power-law relationship, producing an initial rapid drop in auditory shift followed by a long tail that accumulates with repeated exposure to audio-visual disparity. In the double-exponential model, two separate processes were required to explain the data: one that accumulated and decayed exponentially, and another that slowly integrated over time. Both models fit the data best when the spatial spread of the ventriloquism aftereffect was limited to a window around the location of the audio-visual disparity. We directly compare the predictions made by each model and suggest additional measurements that could help distinguish which model best describes the mechanisms underlying the VAE.
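
The two model families can be caricatured as competing decay laws fitted to shift-versus-time data. A sketch using scipy.optimize.curve_fit on synthetic data; these closed-form parameterizations are illustrative stand-ins, not the paper's actual grey-box state-space models:

import numpy as np
from scipy.optimize import curve_fit

def power_decay(t, a, t0, alpha):
    # Heavy-tailed power-law decay of the aftereffect
    return a * (t + t0) ** (-alpha)

def double_exp(t, a_fast, tau_fast, a_slow, tau_slow):
    # Fast-decaying process plus a slowly integrating one
    return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

t = np.linspace(0.5, 30.0, 60)  # seconds since the last AV exposure
shift = 4.0 * (t + 1.0) ** -0.5 + 0.2 * np.random.randn(t.size)  # fake data

p_pow, _ = curve_fit(power_decay, t, shift, p0=[4, 1, 0.5], bounds=(0, np.inf))
p_exp, _ = curve_fit(double_exp, t, shift, p0=[3, 1, 1, 20], bounds=(0, np.inf))
# Compare fits via residual sums of squares, AIC, or held-out predictions.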


Subjects
Sound Localization; Adaptation, Psychological; Adult; Eye Movements; Female; Humans; Male; Models, Biological; Psychophysics; Space Perception; Time Factors; Visual Perception; Young Adult
5.
Exp Brain Res; 235(2): 585-595, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27837258

ABSTRACT

Visual capture and the ventriloquism aftereffect resolve spatial disparities of incongruent audio-visual (AV) objects by shifting auditory spatial perception to align with vision. Here, we demonstrated the distinct temporal characteristics of visual capture and the ventriloquism aftereffect in response to brief AV disparities. In a set of experiments, subjects localized either the auditory component of AV targets (A within AV) or a second sound presented at varying delays (1-20 s) after AV exposure (A2 after AV). AV targets were trains of brief presentations (1 or 20), covering a ±30° azimuthal range, and with ±8° (R or L) disparity. We found that the magnitude of visual capture generally reached its peak within a single AV pair and did not dissipate with time, while the ventriloquism aftereffect accumulated with repetitions of AV pairs and dissipated with time. Additionally, the magnitude of the auditory shift induced by each phenomenon was uncorrelated across listeners, and visual capture was unaffected by subsequent auditory targets, indicating that visual capture and the ventriloquism aftereffect are separate mechanisms with distinct effects on auditory spatial perception. Our results indicate that visual capture is a 'sample-and-hold' process that binds related objects and stores the combined percept in memory, whereas the ventriloquism aftereffect is a 'leaky integrator' process that accumulates with experience and decays with time to compensate for cross-modal disparities.
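
The closing distinction can be made concrete with a toy simulation: visual capture is 'sample-and-hold' (an immediate, held fraction of the current disparity), while the aftereffect is a leaky integrator that accumulates across AV pairs and decays between them. All constants below are illustrative assumptions, not fitted values from the paper:

import numpy as np

def simulate_shifts(disparities, dt, tau=60.0, gain=0.05, capture_w=0.6):
    """Return (capture, aftereffect) auditory shifts in degrees after
    each AV exposure; disparities in degrees, dt in seconds."""
    vae = 0.0
    trace = []
    for d in disparities:
        vae *= np.exp(-dt / tau)   # leaky integrator: decays between pairs...
        vae += gain * (d - vae)    # ...and accumulates with each exposure
        capture = capture_w * d    # sample-and-hold: full size immediately,
                                   # held until the next AV pair
        trace.append((capture, vae))
    return np.array(trace)

# Twenty repetitions of an 8-degree disparity, one per second:
trace = simulate_shifts(np.full(20, 8.0), dt=1.0)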


Subjects
Sound Localization/physiology; Vision Disparity/physiology; Visual Perception/physiology; Acoustic Stimulation; Adolescent; Adult; Analysis of Variance; Female; Humans; Male; Memory/physiology; Photic Stimulation; Young Adult
6.
Biol Cybern; 110(6): 455-471, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27815630

ABSTRACT

Vision typically has better spatial accuracy and precision than audition and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small, visual capture is likely to occur, and when disparity is large, visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audiovisual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner.
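
The Bayesian inference model invoked here is usually formulated as causal inference over a binary common-cause variable: the posterior probability that the auditory and visual cues share one source falls off with their disparity, and the prior on a common cause is precisely the quantity the abstract says subjects adjusted between tasks. A sketch of the standard formulation (in the style of Körding et al., 2007); the sigma values are assumptions, and the paper's model may differ in detail:

import numpy as np

def p_common(x_a, x_v, sigma_a=6.0, sigma_v=2.0, sigma_p=15.0, prior=0.5):
    """Posterior probability that auditory cue x_a and visual cue x_v
    (degrees azimuth) arose from a single source, given sensory noise
    sigma_a/sigma_v, a zero-mean spatial prior sigma_p, and a prior
    probability of a common cause."""
    va, vv, vp = sigma_a ** 2, sigma_v ** 2, sigma_p ** 2
    # Likelihood of the cue pair under one source (location integrated out)
    v1 = va * vv + va * vp + vv * vp
    like1 = np.exp(-0.5 * ((x_a - x_v) ** 2 * vp + x_a ** 2 * vv
                           + x_v ** 2 * va) / v1) / (2 * np.pi * np.sqrt(v1))
    # Likelihood under two independent sources
    like2 = (np.exp(-0.5 * (x_a ** 2 / (va + vp) + x_v ** 2 / (vv + vp)))
             / (2 * np.pi * np.sqrt((va + vp) * (vv + vp))))
    return like1 * prior / (like1 * prior + like2 * (1 - prior))

# A smaller common-cause prior narrows the disparity range over which
# capture is predicted, as in the auditory localization task:
# p_common(8.0, 0.0, prior=0.3) < p_common(8.0, 0.0, prior=0.7)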


Subjects
Auditory Perception; Models, Biological; Animals; Bayes Theorem; Humans; Judgment; Sound Localization; Space Perception; Visual Perception
7.
Atten Percept Psychophys; 78(5): 1392-1404, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27029482

ABSTRACT

What role do general-purpose, experience-sensitive perceptual mechanisms play in producing characteristic features of face perception? We previously demonstrated that different-colored, misaligned framing backgrounds, designed to disrupt perceptual grouping of face parts appearing upon them, disrupt holistic face perception. In the current experiments, participants performed a similar part-judgment task with composite faces: face parts appeared in either misaligned, different-colored rectangles or aligned, same-colored rectangles. To investigate whether experience can shape the impact of perceptual grouping on holistic face perception, a pre-task fostered the perception of either (a) the misaligned, different-colored rectangle frames as parts of a single, multicolored polygon or (b) the aligned, same-colored rectangle frames as a single square shape. Faces appearing in the misaligned, different-colored rectangles were processed more holistically by the polygon-pre-task group than by the square-pre-task group. Holistic effects for faces appearing in aligned, same-colored rectangles showed the opposite pattern. Experiment 2, which included a pre-task condition fostering the perception of the aligned, same-colored frames as pairs of independent rectangles, provided converging evidence that experience can modulate the impact of perceptual grouping on holistic face perception. These results are surprising given the proposed impenetrability of holistic face perception and provide insights into the elusive mechanisms underlying holistic perception.


Subjects
Facial Recognition; Task Performance and Analysis; Adult; Color; Female; Holistic Health; Humans; Judgment; Male; Perceptual Masking; Photic Stimulation/methods; Young Adult