1.
Trends Hear; 26: 23312165221097789, 2022.
Article in English | MEDLINE | ID: mdl-35477340

ABSTRACT

To optimally improve signal-to-noise ratio in noisy environments, a hearing assistance device must correctly identify what is signal and what is noise. Many of the biosignal-based approaches to this problem are themselves subject to noise, but head angle is an overt behavior that may be possible to capture in practical devices in the real world. Previous orientation studies have demonstrated that head angle is systematically related to the listening target; our study aimed to examine whether this relationship is sufficiently reliable to be used in group conversations, where participants may be seated in different layouts and the listener is free to turn their body as well as their head. In addition to this simple head-steering method, we developed a source-selection algorithm based on a hidden Markov model (HMM) trained on listeners' head movement. The performance of this model and the simple head-steering method was evaluated using publicly available behavioral data. Head angle during group conversation was predictive of the active talker, exhibiting an undershoot with a slope consistent with that found in simple orientation studies, but the intercept of the linear relationship differed across talker layouts, suggesting it would be problematic to rely exclusively on this information to predict the location of auditory attention. Provided the locations of all target talkers are known, however, the HMM source-selection model implemented here showed significantly lower error in identifying listeners' auditory attention than the linear head-steering method.
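
The abstract does not give implementation details, but a talker-selection HMM of this kind is simple to sketch. The following is a minimal illustration, assuming known talker azimuths, a Gaussian emission model around an undershot head yaw, and made-up parameters throughout; it is not the paper's implementation.

```python
# Sketch: HMM-based selection of the attended talker from head yaw.
# Everything here is illustrative: talker azimuths, the undershoot model
# (slope/intercept), emission noise, and transition probabilities are
# assumptions, not the paper's fitted values.
import numpy as np

talker_az = np.array([-60.0, 0.0, 60.0])  # known talker azimuths (deg)
slope, intercept = 0.7, 0.0               # assumed yaw undershoot model
sigma = 10.0                              # emission noise (deg)
stay = 0.95                               # P(keep attending the same talker)

n = len(talker_az)
trans = np.full((n, n), (1.0 - stay) / (n - 1))
np.fill_diagonal(trans, stay)

def viterbi(yaw):
    """Decode the most likely attended-talker sequence from head yaw (deg)."""
    mu = intercept + slope * talker_az                   # expected yaw per talker
    log_e = -0.5 * ((yaw[:, None] - mu) / sigma) ** 2    # log emissions
    log_t = np.log(trans)
    delta = log_e[0] - np.log(n)                         # uniform prior
    back = np.zeros((len(yaw), n), dtype=int)
    for t in range(1, len(yaw)):
        scores = delta[:, None] + log_t                  # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_e[t]
    path = [int(delta.argmax())]
    for t in range(len(yaw) - 1, 0, -1):
        path.append(back[t, path[-1]])
    return np.array(path[::-1])

yaw_trace = np.array([2.0, 5, 40, 45, 42, -38, -41, 1])  # toy yaw samples
print(viterbi(yaw_trace))   # attended-talker index per sample
```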


Subjects
Hearing Aids, Speech Perception, Auditory Perception, Head Movements, Humans, Noise
2.
PLoS One; 16(7): e0254119, 2021.
Article in English | MEDLINE | ID: mdl-34197551

ABSTRACT

Those experiencing hearing loss face severe challenges in perceiving speech in noisy situations such as a busy restaurant or cafe. Many factors contribute to this deficit, including decreased audibility, reduced frequency resolution, and a decline in temporal synchrony across the auditory system. Some hearing assistive devices implement beamforming, in which multiple microphones are used in combination to attenuate surrounding noise while the target speaker is left unattenuated. In increasingly challenging auditory environments, more complex beamforming algorithms are required, which increases the processing time needed to provide a useful signal-to-noise ratio for the target speech. This study investigated whether the benefits of signal enhancement from beamforming are outweighed by the negative perceptual impact of an increase in latency between the direct acoustic signal and the digitally enhanced signal. The hypothesis was that an increase in latency between the two identical speech signals would decrease intelligibility of the speech signal. Using three gain/latency pairs from a beamforming simulation previously completed in the lab, perceptual SNR thresholds for a simulated use case were obtained from normal-hearing participants. No significant differences were detected among the three conditions. When two copies of the same speech signal are presented at varying gain/latency pairs in a noisy environment, any negative intelligibility effects of latency are masked by the noise. These results allow for more lenient restrictions on processing delays in hearing assistive devices.
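
As an illustration of the stimulus construction described above, the sketch below mixes a direct signal with a delayed, gain-adjusted copy of itself. The sample rate and the gain/latency pairs are placeholders, not the study's values.

```python
# Sketch: the latency manipulation, mixing the direct acoustic path with a
# delayed, gain-adjusted "enhanced" copy of the same speech.
import numpy as np

FS = 16000  # sample rate (Hz), illustrative

def mix_direct_and_processed(speech, gain_db, latency_ms):
    """Direct signal plus an identical copy arriving latency_ms later at gain_db."""
    delay = int(round(latency_ms * 1e-3 * FS))
    gain = 10.0 ** (gain_db / 20.0)
    direct = np.concatenate([speech, np.zeros(delay)])
    processed = np.concatenate([np.zeros(delay), speech]) * gain
    return direct + processed

speech = np.random.default_rng(0).standard_normal(FS)  # 1 s speech stand-in
for gain_db, latency_ms in [(3, 2), (6, 6), (9, 12)]:  # hypothetical pairs
    stimulus = mix_direct_and_processed(speech, gain_db, latency_ms)
    # 'stimulus' would then be embedded in noise for an SRT measurement
```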


Subjects
Noise, Speech Perception, Adult, Humans, Male, Perceptual Masking, Young Adult
3.
JASA Express Lett; 1(4): 044401, 2021 Apr.
Article in English | MEDLINE | ID: mdl-36154203

ABSTRACT

Linear comparisons can fail to describe perceptual differences between head-related transfer functions (HRTFs), reducing their utility for perceptual tests, HRTF selection methods, and prediction algorithms. This work introduces a machine learning framework for constructing a perceptual error metric that is aligned with performance in human sound localization. A neural network is first trained to predict measurement locations from a large database of HRTFs and then fine-tuned with perceptual data. The learned metric demonstrates robust performance compared with a standard spectral difference error metric. A statistical test is employed to quantify the information gain from the perceptual observations as a function of space.
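
For context, a common form of the spectral difference error metric used as the baseline here can be sketched as follows; the FFT length and the RMS-in-dB formulation are assumptions, not necessarily the paper's exact definition.

```python
# Sketch: a baseline spectral difference error metric, computed as an RMS
# log-magnitude difference between two HRIRs measured for the same direction.
import numpy as np

def spectral_difference(hrir_a, hrir_b, n_fft=512):
    """RMS difference (dB) between the magnitude spectra of two HRIRs."""
    mag_a = np.abs(np.fft.rfft(hrir_a, n_fft)) + 1e-12
    mag_b = np.abs(np.fft.rfft(hrir_b, n_fft)) + 1e-12
    return np.sqrt(np.mean((20.0 * np.log10(mag_a / mag_b)) ** 2))
```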


Subjects
Auditory Perception, Sound Localization, Algorithms, Benchmarking, Humans
4.
JASA Express Lett; 1(3): 034401, 2021 Mar.
Article in English | MEDLINE | ID: mdl-36154562

ABSTRACT

Speech intelligibility (SI) is known to be affected by the relative spatial position of target and interferers. The benefit of spatial separation is, along with other factors, related to the head-related transfer function (HRTF). HRTFs differ across individuals, and thus the cues that affect SI may also differ. In the current study, an auditory model was employed to predict SI with various HRTFs and at different angles on the horizontal plane. The predicted SI threshold differed substantially across HRTFs. Thus, individual listeners may have different access to SI cues, depending on their HRTF.
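
One concrete way an HRTF shapes a spatial SI cue can be illustrated with a broadband better-ear SNR computed from HRIR energies. This toy stand-in is far simpler than the auditory model actually used, and toy_hrtf is a hypothetical placeholder for measured HRTFs.

```python
# Sketch: one simple SI cue that varies with the HRTF, the broadband
# better-ear SNR. The crude level-only head shadow is an assumption.
import numpy as np

def toy_hrtf(az_deg):
    """Hypothetical HRIR pair with a +/-6 dB sinusoidal head shadow."""
    ir = np.zeros(32)
    ir[0] = 1.0
    left = ir * 10.0 ** (-6.0 * np.sin(np.radians(az_deg)) / 20.0)
    right = ir * 10.0 ** (6.0 * np.sin(np.radians(az_deg)) / 20.0)
    return left, right

def better_ear_snr(hrtf, target_az, masker_az):
    """Broadband better-ear SNR (dB) for a unit-level target and masker."""
    snrs = []
    for ear in (0, 1):
        t = hrtf(target_az)[ear]
        m = hrtf(masker_az)[ear]
        snrs.append(10.0 * np.log10(np.sum(t ** 2) / np.sum(m ** 2)))
    return max(snrs)

print(better_ear_snr(toy_hrtf, 0.0, 90.0))  # ~6 dB at the shadowed left ear
```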

5.
Psychon Bull Rev; 28(2): 632-640, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33051825

ABSTRACT

Many conversations in our day-to-day lives are held in noisy environments, which impedes comprehension, and in groups, which taxes auditory attention-switching processes. These situations are particularly challenging for older adults in cognitive and sensory decline. In noisy environments, a variety of extra-linguistic strategies are available to speakers and listeners to facilitate communication, but while models of language account for the impact of context on word choice, there has been little consideration of the impact of context on extra-linguistic behaviour. To address this issue, we investigate how the complexity of the acoustic environment and of the interaction situation impacts the extra-linguistic conversation behaviour of older adults during face-to-face conversations. Specifically, we test whether the use of intelligibility-optimising strategies increases with the complexity of the background noise (from quiet to loud, and in speech-shaped vs. babble noise) and with the complexity of the conversing group (dyad vs. triad). While some communication strategies are enhanced in more complex background noise, with listeners orienting to talkers more optimally and moving closer to their partner in babble than in speech-shaped noise, this is not the case for all strategies, as we find greater vocal level increases in the less complex speech-shaped noise condition. Other behaviours are enhanced in the more complex interaction situation, with listeners using more optimal head orientations and taking longer turns when gaining the floor in triads compared with dyads. This study elucidates how different features of the conversation context impact individuals' communication strategies, which is necessary both to develop a comprehensive cognitive model of multimodal conversation behaviour and to effectively support individuals who struggle to converse.


Subjects
Communication, Group Processes, Orientation, Spatial/physiology, Speech Intelligibility/physiology, Speech Perception/physiology, Aged, Female, Humans, Male
6.
Sci Rep; 9(1): 10451, 2019 Jul 18.
Article in English | MEDLINE | ID: mdl-31320658

ABSTRACT

How do people have conversations in noise and make themselves understood? While many previous studies have investigated speaking and listening in isolation, this study focuses on the behaviour of pairs of individuals in an ecologically valid context. Specifically, we report the fine-grained dynamics of natural conversation between interlocutors of varying hearing ability (n = 30), addressing how different levels of background noise affect speech, movement, and gaze behaviours. We found that as noise increased, people spoke louder and moved closer together, although these behaviours provided relatively small acoustic benefit (0.32 dB speech level increase per 1 dB noise increase). We also found that increased noise led to shorter utterances and increased gaze to the speaker's mouth. Surprisingly, interlocutors did not make use of potentially beneficial head orientations. While participants were able to sustain conversation in noise of up to 72 dB, changes in conversation structure suggested increased difficulty at 78 dB, with a significant decrease in turn-taking success. Understanding these natural conversation behaviours could inform broader models of interpersonal communication, and be applied to the development of new communication technologies. Furthermore, comparing these findings with those from isolation paradigms demonstrates the importance of investigating social processes in ecologically valid multi-person situations.


Subjects
Auditory Perception/physiology, Communication, Fixation, Ocular/physiology, Movement, Noise, Speech Perception/physiology, Female, Humans, Male, Middle Aged, Signal-To-Noise Ratio
7.
Trends Hear; 22: 2331216518775568, 2018.
Article in English | MEDLINE | ID: mdl-29764312

ABSTRACT

By moving sounds around the head and asking listeners to report which ones moved more, it was found that sound sources at the side of a listener must move at least twice as much as ones in front to be judged as moving the same amount. A relative expansion of space in the front and compression at the side has consequences for spatial perception of moving sounds by both static and moving listeners. An accompanying prediction that the apparent location of static sound sources ought to also be distorted agrees with previous work and suggests that this is a general perceptual phenomenon that is not limited to moving signals. A mathematical model that mimics the measured expansion of space can be used to successfully capture several previous findings in spatial auditory perception. The inverse of this function could be used alongside individualized head-related transfer functions and motion tracking to produce hyperstable virtual acoustic environments.
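
A toy version of such a warping model, built only around the reported 2:1 front-to-side ratio, might look like this; the specific functional form is an assumption chosen purely for illustration.

```python
# Sketch: a toy azimuth-warping model with a 2:1 front-to-side expansion,
# using a cosine magnification integrated over azimuth.
import numpy as np

def perceived_azimuth(theta_deg):
    """Map physical azimuth (deg, 0 = straight ahead) to perceived azimuth."""
    th = np.radians(theta_deg)
    # local magnification 1 + cos(2*th)/3: twice as large at 0 deg as at 90 deg
    return np.degrees(th + np.sin(2.0 * th) / 6.0)

# a 10 deg physical shift appears about twice as large in front as at the side:
print(perceived_azimuth(10) - perceived_azimuth(0))   # ~13.3 deg
print(perceived_azimuth(95) - perceived_azimuth(85))  # ~6.7 deg
```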


Subjects
Audiometry, Pure-Tone, Auditory Perception/physiology, Hearing/physiology, Adult, Auditory Threshold/physiology, Humans, Middle Aged, Models, Theoretical, Motion, Scotland, Sound Localization/physiology, Space Perception
8.
Proc Natl Acad Sci U S A; 115(16): 4264-4269, 2018 Apr 17.
Article in English | MEDLINE | ID: mdl-29531082

ABSTRACT

Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.


Subjects
Depth Perception/physiology, Proprioception/physiology, Sound Localization/physiology, Vestibule, Labyrinth/physiology, Acoustic Stimulation, Adult, Cues, Female, Head Movements/physiology, Humans, Motion, Psychoacoustics, Young Adult
9.
PLoS One; 13(1): e0190420, 2018.
Article in English | MEDLINE | ID: mdl-29304120

ABSTRACT

This paper proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, the algorithm calculates absolute eye gaze angle via statistical analysis of detected saccades. The eye-position estimates of the new algorithm were still noisy, but performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for lightweight, portable horizontal eye gaze angle estimation suitable for a broad range of applications, such as steering the directivity of hearing-aid microphones in the direction of the user's eye gaze.
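
A minimal sketch of the saccade-based idea follows, assuming a velocity-threshold saccade detector, a fixed EOG scale factor, and median re-referencing as the "statistical analysis"; the real algorithm's internals are not given in the abstract, so all parameters here are illustrative.

```python
# Sketch: detect saccades with a velocity threshold, accumulate their
# amplitudes, and re-reference the running angle by the median (assuming
# gaze is straight ahead most of the time).
import numpy as np

def gaze_from_eog(eog_uv, fs=250, uv_per_deg=15.0, vel_thresh_dps=50.0):
    """Single-channel EOG (microvolts) to horizontal gaze angle (deg)."""
    vel = np.gradient(eog_uv) * fs                         # uV/s
    in_saccade = np.abs(vel) > vel_thresh_dps * uv_per_deg
    step = np.where(in_saccade, np.gradient(eog_uv), 0.0)  # saccadic change only
    angle = np.cumsum(step) / uv_per_deg                   # relative angle (deg)
    return angle - np.median(angle)                        # statistical re-reference

eog = np.zeros(1000)
eog[400:] += 150.0                   # one simulated 10 deg saccade (at 15 uV/deg)
print(gaze_from_eog(eog)[[0, -1]])   # about [-10, 0] after re-referencing
```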


Subjects
Electrooculography/methods, Saccades, Algorithms, Humans
10.
Hear Res; 357: 64-72, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29223929

ABSTRACT

The signal-to-noise ratio (SNR) benefit of hearing aid directional microphones is dependent on the angle of the listener relative to the target, something that can change drastically and dynamically in a typical group conversation. When a new target signal is significantly off-axis, directional microphones lead to slower target orientation, more complex movements, and more reversals. This raises the question of whether there is an optimal design for directional microphones. In principle, an ideal microphone would provide the user with sufficient directionality to help with speech understanding, but not attenuate off-axis signals so strongly that orienting to new signals becomes difficult or impossible. We investigated the latter part of this question. To measure the minimal monitoring SNR for reliable orientation to off-axis signals, we measured head-orienting behaviour towards targets of varying SNRs and locations for listeners with mild to moderate bilateral symmetrical hearing loss. Listeners were required to turn and face a female talker in background noise, and movements were tracked using a head-mounted crown and an infrared system that recorded yaw within a ring of loudspeakers. The target appeared randomly at ±45, 90, or 135° from the start point. The results showed that as the target SNR decreased from 0 dB to -18 dB, movement duration and initial misorientation count increased first, then fixation error, and finally reversals. Increasing the target angle increased movement duration at all SNRs, decreased reversals (above -12 dB target SNR), and had little to no effect on initial misorientations. These results suggest that listeners experience some difficulty orienting towards sources as the target SNR drops below -6 dB, and that if one intends to make a directional microphone that is usable in a moving conversation, then off-axis attenuation should be no more than 12 dB.
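
The orienting measures reported here can be computed from a yaw trace along the following lines; the movement threshold and scoring rules are illustrative, not the study's exact definitions.

```python
# Sketch: movement duration, fixation error, and reversal count from a
# head-yaw trajectory.
import numpy as np

def orienting_metrics(t, yaw, target_deg, move_thresh_dps=5.0):
    """Return movement duration (s), fixation error (deg), reversal count."""
    moving = np.abs(np.gradient(yaw, t)) > move_thresh_dps
    duration = t[moving][-1] - t[moving][0] if moving.any() else 0.0
    fixation_error = abs(yaw[-1] - target_deg)
    direction = np.sign(np.diff(yaw))
    direction = direction[direction != 0]
    reversals = int(np.sum(np.diff(direction) != 0))
    return duration, fixation_error, reversals

t = np.linspace(0.0, 2.0, 201)
yaw = np.where(t < 0.3, -20.0 * t, 60.0 * (t - 0.3) - 6.0)  # initial wrong turn
yaw = np.clip(yaw, None, 88.0)                              # settles near target
print(orienting_metrics(t, yaw, target_deg=90.0))  # (~1.9 s, 2.0 deg, 1 reversal)
```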


Subjects
Hearing Aids, Signal Processing, Computer-Assisted, Sound Localization, Speech Acoustics, Speech Perception, Acoustic Stimulation, Comprehension, Equipment Design, Female, Head Movements, Humans, Male, Noise/adverse effects, Perceptual Masking, Signal-To-Noise Ratio, Speech Intelligibility
11.
PLoS Biol; 15(6): e2001878, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28617796

ABSTRACT

A key function of the brain is to provide a stable representation of an object's location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position.


Subjects
Auditory Cortex/physiology, Models, Neurological, Models, Psychological, Neurons/physiology, Sound Localization, Spatial Processing, Acoustic Stimulation, Animals, Auditory Cortex/cytology, Auditory Cortex/radiation effects, Behavior, Animal/radiation effects, Electric Stimulation, Electrodes, Implanted, Evoked Potentials, Auditory/radiation effects, Exploratory Behavior/radiation effects, Female, Ferrets, Head Movements/radiation effects, Locomotion/radiation effects, Neurons/cytology, Neurons/radiation effects, Sound, Sound Localization/radiation effects, Spatial Behavior/radiation effects, Spatial Processing/radiation effects, Video Recording
12.
J Exp Psychol Hum Percept Perform; 43(2): 371-380, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27841453

ABSTRACT

Hearing is confronted by a similar problem to vision when the observer moves. The image motion that is created remains ambiguous until the observer knows the velocity of eye and/or head. One way the visual system solves this problem is to use motor commands, proprioception, and vestibular information. These "extraretinal signals" compensate for self-movement, converting image motion into head-centered coordinates, although not always perfectly. We investigated whether the auditory system also transforms coordinates by examining the degree of compensation for head rotation when judging a moving sound. Real-time recordings of head motion were used to change the "movement gain" relating head movement to source movement across a loudspeaker array. We then determined psychophysically the gain that corresponded to a perceptually stationary source. Experiment 1 showed that the gain was small and positive for a wide range of trained head speeds. Hence, listeners perceived a stationary source as moving slightly opposite to the head rotation, in much the same way that observers see stationary visual objects move against a smooth pursuit eye movement. Experiment 2 showed the degree of compensation remained the same for sounds presented at different azimuths, although the precision of performance declined when the sound was eccentric. We discuss two possible explanations for incomplete compensation, one based on differences in the accuracy of signals encoding image motion and self-movement and one concerning statistical optimization that sacrifices accuracy for precision. We then consider the degree to which such explanations can be applied to auditory motion perception in moving listeners.
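
The "movement gain" manipulation reduces to a one-line mapping from head rotation to source position, sketched below; all numeric values are illustrative, not the study's.

```python
# Sketch: the "movement gain" manipulation. The source's world azimuth is
# updated as a function of the listener's own head rotation; gain 0 keeps
# the source physically stationary, and the experiment estimates the small
# positive gain at which it is *perceived* as stationary.
def source_azimuth(source_start, head_start, head_now, gain):
    """World azimuth (deg) of the source for a given head angle and gain."""
    return source_start + gain * (head_now - head_start)

# gain 0.0: physically stationary, yet perceived to drift against the turn;
# a slightly positive gain (hypothetically ~0.05) would null that percept.
print(source_azimuth(30.0, 0.0, 20.0, gain=0.05))  # 31.0 deg after a 20 deg turn
```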


Subjects
Head Movements/physiology, Sound Localization/physiology, Adult, Humans, Psychophysics, Rotation
13.
J Am Acad Audiol; 27(7): 588-600, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27406664

ABSTRACT

BACKGROUND: There are two cues that listeners use to disambiguate the front/back location of a sound source: high-frequency spectral cues associated with the head and pinnae, and self-motion-related binaural cues. The use of these cues can be compromised in listeners with hearing impairment and users of hearing aids. PURPOSE: To determine how age, hearing impairment, and the use of hearing aids affect a listener's ability to determine front from back based on both self-motion and spectral cues. RESEARCH DESIGN: We used a previously published front/back illusion: signals whose physical source location is rotated around the head at twice the angular rate of the listener's head movements are perceptually located in the opposite hemifield from where they physically are. In normal-hearing listeners, the strength of this illusion decreases as a function of low-pass filter cutoff frequency; this is the result of a conflict between spectral cues and dynamic binaural cues for sound source location. The illusion was used as an assay of self-motion processing in listeners with hearing impairment and users of hearing aids. STUDY SAMPLE: We recruited 40 hearing-impaired participants, with an average age of 62 yr. The data for three listeners were discarded because they did not move their heads enough during the experiment. DATA COLLECTION AND ANALYSIS: Listeners sat at the center of a ring of 24 loudspeakers, turned their heads back and forth, and used a wireless keypad to report the front/back location of statically presented signals and of dynamically moving signals with illusory locations. Front/back accuracy for static signals, the strength of front/back illusions, and minimum audible movement angle were measured for each listener in each condition. All measurements were made in each listener both aided and unaided. RESULTS: Hearing-impaired listeners were less accurate at front/back discrimination in both static and illusory conditions. Neither static nor illusory conditions were affected by high-frequency content. Hearing aids had heterogeneous effects from listener to listener, but independent of other factors, on average, listeners wearing aids exhibited a spectrally dependent increase in "front" responses: the more high-frequency energy in the signal, the more likely they were to report it as coming from the front. CONCLUSIONS: Hearing impairment was associated with a decrease in the accuracy of self-motion processing for both static and moving signals. Hearing aids may not always reproduce dynamic self-motion-related cues with sufficient fidelity to allow reliable front/back discrimination.


Subjects
Age Factors, Head Movements, Hearing Aids, Hearing Loss/physiopathology, Sound Localization, Adult, Aged, Aged, 80 and over, Cues, Humans, Middle Aged, Speech Perception
14.
J Acoust Soc Am; 137(5): EL360-6, 2015 May.
Article in English | MEDLINE | ID: mdl-25994734

ABSTRACT

Sound sources at the same angle in front or behind a two-microphone array (e.g., bilateral hearing aids) produce the same time delay and two estimates for the direction of arrival: A front-back confusion. The auditory system can resolve this issue using head movements. To resolve front-back confusion for hearing-aid algorithms, head movement was measured using an inertial sensor. Successive time-delay estimates between the microphones are shifted clockwise and counterclockwise by the head movement between estimates and aggregated in two histograms. The histogram with the largest peak after multiple estimates predicted the correct hemifield for the source, eliminating the front-back confusions.
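A sketch of the described histogram method follows, assuming a free-field two-microphone geometry; the array spacing, bin width, and toy data are illustrative.

```python
# Sketch: each time-delay estimate gives a front candidate and a mirrored
# back candidate; both are rotated into a world frame using the measured
# head yaw and accumulated in separate histograms. The true hemifield's
# candidates pile up in one bin while the others smear as the head moves.
import numpy as np

MIC_DIST = 0.15  # microphone spacing (m), illustrative
C = 343.0        # speed of sound (m/s)

def front_back_hemifield(tdoas, head_yaws, nbins=72):
    """TDOAs (s) and head yaws (deg) per frame -> 'front' or 'back'."""
    hists = {"front": np.zeros(nbins), "back": np.zeros(nbins)}
    for tau, yaw in zip(tdoas, head_yaws):
        az = np.degrees(np.arcsin(np.clip(tau * C / MIC_DIST, -1.0, 1.0)))
        for name, cand in (("front", az), ("back", 180.0 - az)):
            world = (cand + yaw + 180.0) % 360.0 - 180.0   # head -> world frame
            hists[name][int((world + 180.0) // (360.0 / nbins)) % nbins] += 1
    return max(hists, key=lambda k: hists[k].max())

yaws = np.linspace(-30.0, 30.0, 61)                      # head scanning motion
tdoas = MIC_DIST / C * np.sin(np.radians(30.0 - yaws))   # source at +30 deg front
print(front_back_hemifield(tdoas, yaws))                 # -> 'front'
```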


Subjects
Biomimetics, Correction of Hearing Impairment/instrumentation, Hearing Aids, Persons With Hearing Impairments/rehabilitation, Sound Localization, Acoustic Stimulation, Algorithms, Equipment Design, Fourier Analysis, Head Movements, Humans, Models, Theoretical, Motion, Persons With Hearing Impairments/psychology, Sound, Time Factors
15.
Front Neurosci; 8: 273, 2014.
Article in English | MEDLINE | ID: mdl-25228856

ABSTRACT

We are rarely perfectly still: our heads rotate in three axes and move in three dimensions, constantly varying the spectral and binaural cues at the ear drums. In spite of this motion, static sound sources in the world are typically perceived as stable objects. This argues that the auditory system, in a manner not unlike the vestibulo-ocular reflex, works to compensate for self motion and stabilize our sensory representation of the world. We tested a prediction arising from this postulate: that self motion should be processed more accurately than source motion. We used an infrared motion tracking system to measure head angle, and real-time interpolation of head related impulse responses to create "head-stabilized" signals that appeared to remain fixed in space as the head turned. After being presented with pairs of simultaneous signals consisting of a man and a woman speaking a snippet of speech, normal and hearing impaired listeners were asked to report whether the female voice was to the left or the right of the male voice. In this way we measured the moving minimum audible angle (MMAA). This measurement was made while listeners were asked to turn their heads back and forth between ±15° and the signals were stabilized in space. After this "self-motion" condition we measured MMAA in a second "source-motion" condition in which listeners remained still and the virtual locations of the signals were moved using the trajectories from the first condition. For both normal and hearing impaired listeners, we found that the MMAA for signals moving relative to the head was ~1-2° smaller when the movement was the result of self motion than when it was the result of source motion, even though the motion with respect to the head was identical. These results, as well as the results of past experiments, suggest that spatial processing involves an ongoing and highly accurate comparison of spatial acoustic cues with self-motion cues.
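
A minimal sketch of the head-stabilized rendering idea: re-select the HRIR for the head-relative angle on every audio block. The hrir_for lookup (and the toy_hrir below) are hypothetical stand-ins for the study's real-time HRIR interpolation.

```python
# Sketch: each block is filtered with the HRIR for the source's
# head-relative angle, so the virtual source stays fixed in the world
# as the head turns.
import numpy as np

def render_stabilized(block, world_az, head_az, hrir_for):
    """Binaurally render one block with the source fixed at world_az (deg)."""
    rel_az = (world_az - head_az + 180.0) % 360.0 - 180.0  # head-relative angle
    left_ir, right_ir = hrir_for(rel_az)
    return (np.convolve(block, left_ir, mode="same"),
            np.convolve(block, right_ir, mode="same"))

def toy_hrir(rel_az):
    """Hypothetical 2-tap HRIR pair carrying only a crude level difference."""
    g = 10.0 ** (-6.0 * np.sin(np.radians(rel_az)) / 20.0)
    return np.array([g, 0.0]), np.array([1.0 / g, 0.0])

block = np.random.default_rng(2).standard_normal(256)
left, right = render_stabilized(block, world_az=30.0, head_az=20.0,
                                hrir_for=toy_hrir)
```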

16.
Ear Hear; 35(5): e204-12, 2014.
Article in English | MEDLINE | ID: mdl-25148290

ABSTRACT

OBJECTIVES: Although directional microphones on a hearing aid provide a signal-to-noise ratio benefit in a noisy background, the amount of benefit is dependent on how close the signal of interest is to the front of the user. It is assumed that when the signal of interest is off-axis, users can reorient themselves to the signal to make use of the directional microphones to improve signal-to-noise ratio. The present study tested this assumption by measuring the head-orienting behavior of bilaterally fit hearing-impaired individuals with their microphones set to omnidirectional and directional modes. The authors hypothesized that listeners using directional microphones would have greater difficulty in rapidly and accurately orienting to off-axis signals than they would when using omnidirectional microphones. DESIGN: The authors instructed hearing-impaired individuals to turn and face a female talker in simultaneous surrounding male-talker babble. Participants pressed a button when they felt they were accurately oriented in the direction of the female talker. Participants completed three blocks of trials with their hearing aids in omnidirectional mode and three blocks in directional mode, with mode order randomized. Using a Vicon motion tracking system, the authors measured head position and computed fixation error, fixation latency, trajectory complexity, and proportion of misorientations. RESULTS: Results showed that for larger off-axis target angles, listeners using directional microphones took longer to reach their targets than they did when using omnidirectional microphones, although they were just as accurate. They also used more complex movements and frequently made initial turns in the wrong direction. For smaller off-axis target angles, this pattern was reversed, and listeners using directional microphones oriented more quickly and smoothly to the targets than when using omnidirectional microphones. CONCLUSIONS: The authors argue that an increase in movement complexity indicates a switch from a simple orienting movement to a search behavior. For the most off-axis target angles, listeners using directional microphones appear to not know which direction to turn, so they pick a direction at random and simply rotate their heads until the signal becomes more audible. The changes in fixation latency and head orientation trajectories suggest that the decrease in off-axis audibility is a primary concern in the use of directional microphones, and listeners could experience a loss of initial target speech while turning toward a new signal of interest. If hearing-aid users are to receive maximum directional benefit in noisy environments, both adaptive directionality in hearing aids and clinical advice on using directional microphones should take head movement and orientation behavior into account.


Subjects
Equipment Design, Hearing Aids, Sound Localization/physiology, Appetitive Behavior, Auditory Perception, Head, Humans, Movement, Signal-To-Noise Ratio
17.
PLoS One; 8(12): e83068, 2013.
Article in English | MEDLINE | ID: mdl-24312677

ABSTRACT

BACKGROUND: When stimuli are presented over headphones, they are typically perceived as internalized; i.e., they appear to emanate from inside the head. Sounds presented in the free-field tend to be externalized, i.e., perceived to be emanating from a source in the world. This phenomenon is frequently attributed to reverberation and to the spectral characteristics of the sounds: those sounds whose spectrum and reverberation matches that of free-field signals arriving at the ear canal tend to be more frequently externalized. Another factor, however, is that the virtual location of signals presented over headphones moves in perfect concert with any movements of the head, whereas the location of free-field signals moves in opposition to head movements. The effects of head movement have not been systematically disentangled from reverberation and/or spectral cues, so we measured the degree to which movements contribute to externalization. METHODOLOGY/PRINCIPAL FINDINGS: We performed two experiments: 1) Using motion tracking and free-field loudspeaker presentation, we presented signals that moved in their spatial location to match listeners' head movements. 2) Using motion tracking and binaural room impulse responses, we presented filtered signals over headphones that appeared to remain static relative to the world. The results from experiment 1 showed that free-field signals from the front that move with the head are less likely to be externalized (23%) than those that remain fixed (63%). Experiment 2 showed that virtual signals whose position was fixed relative to the world are more likely to be externalized (65%) than those fixed relative to the head (20%), regardless of the fidelity of the individual impulse responses. CONCLUSIONS/SIGNIFICANCE: Head movements play a significant role in the externalization of sound sources. These findings imply tight integration between binaural cues and self motion cues and underscore the importance of self motion for spatial auditory perception.


Subjects
Auditory Perception/physiology, Head Movements/physiology, Adult, Female, Humans, Male, Middle Aged, Sound, Sound Localization
18.
J Acoust Soc Am; 133(2): EL118-22, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23363191

ABSTRACT

Listeners presented with noise were asked to press a key whenever they heard the vowels [a] or [i:]. The noise had a random spectrum, with levels in 60 frequency bins changing every 0.5 s. Reverse correlation was used to average the spectrum of the noise prior to each key press, thus estimating the features of the vowels for which the participants were listening. The formant frequencies of these reverse-correlated vowels were similar to those of their respective whispered vowels. The success of this response-triggered technique suggests that it may prove useful for estimating other internal representations, including perceptual phenomena like tinnitus.
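
The reverse-correlation step itself is compact enough to sketch. The two-frame pre-press window is an assumption, and the grand-mean subtraction is one common way to expose the triggering features; the toy "listener" below presses whenever bin 10 is loud.

```python
# Sketch: response-triggered averaging. Collect the random noise spectra
# in the frames preceding each key press, average them, and subtract the
# overall mean spectrum to reveal the listened-for template.
import numpy as np

def reverse_correlate(noise_spectra, press_frames, n_back=2):
    """noise_spectra: (n_frames, 60) bin levels; returns a 60-bin template."""
    triggered = [noise_spectra[f - n_back:f].mean(axis=0)
                 for f in press_frames if f >= n_back]
    return np.mean(triggered, axis=0) - noise_spectra.mean(axis=0)

rng = np.random.default_rng(3)
spectra = rng.normal(0.0, 1.0, (1000, 60))
presses = [f for f in range(2, 1000) if spectra[f - 2:f, 10].mean() > 1.0]
template = reverse_correlate(spectra, presses)
print(template.argmax())  # 10: the feature the toy listener was listening for
```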


Subjects
Psychoacoustics, Signal Detection, Psychological, Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Adolescent, Audiometry, Speech, Female, Humans, Male, Noise/adverse effects, Perceptual Masking, Sound Spectrography, Time Factors, Young Adult
19.
Iperception; 3(3): 179-82, 2012.
Article in English | MEDLINE | ID: mdl-23145279

ABSTRACT

We used a dynamic auditory spatial illusion to investigate the role of self-motion and acoustics in shaping our spatial percept of the environment. Using motion capture, we smoothly moved a sound source around listeners as a function of their own head movements. A lowpass-filtered sound behind a listener that moved in the direction it would have moved had it been located in front was perceived as statically located in front. The converse effect occurred if the sound was in front but moved as if it were behind. The illusion was strongest for sounds lowpass filtered at 500 Hz and weakened as a function of increasing lowpass cut-off frequency. The signals with the most high-frequency energy were often associated with an unstable location percept that flickered from front to back as self-motion cues and spectral cues for location came into conflict with one another.

20.
Hear Res; 283(1-2): 162-8, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22079774

ABSTRACT

It has long been understood that the level of a sound at the ear is dependent on head orientation, but the way in which listeners move their heads during listening has remained largely unstudied. Given the task of understanding a speech signal in the presence of a simultaneous noise, listeners could potentially use head orientation to either maximize the level of the signal in their better ear, or to maximize the signal-to-noise ratio in their better ear. To establish what head orientation strategy listeners use in a speech comprehension task, we used an infrared motion-tracking system to measure the head movements of 36 listeners with large (>16 dB) differences in hearing threshold between their left and right ears. We engaged listeners in a difficult task of understanding sentences presented at the same time as a spatially separated background noise. We found that they tended to orient their heads so as to maximize the level of the target sentence in their better ear, irrespective of the position of the background noise. This is not ideal orientation behavior from the perspective of maximizing the signal-to-noise ratio (SNR) at the ear, but is a simple, easily implemented strategy that is often effective in an environment where the spatial position of multiple noise sources may be difficult or impossible to determine.
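
The two candidate strategies can be contrasted with a crude cosine head-shadow model; the 12 dB shadow span, the ear geometry, and the source angles below are assumptions for illustration, not the study's analysis.

```python
# Sketch: "maximize target level at the better ear" versus "maximize SNR
# at the better ear" under a toy head-shadow model; with these numbers the
# two strategies prescribe different head orientations.
import numpy as np

def ear_level(source_az, head_az, ear_az=90.0):
    """Relative level (dB) of a source at the better ear, toy head shadow."""
    return 6.0 * np.cos(np.radians(source_az - head_az - ear_az))

head = np.linspace(-180.0, 180.0, 361)
target_az, noise_az = 0.0, 120.0
best_level = head[np.argmax(ear_level(target_az, head))]
best_snr = head[np.argmax(ear_level(target_az, head) - ear_level(noise_az, head))]
print(best_level, best_snr)  # -90 vs -120 deg: the two strategies diverge
```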


Subjects
Functional Laterality, Head Movements, Hearing Disorders/physiopathology, Hearing Disorders/psychology, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/psychology, Speech Perception, Acoustic Stimulation, Analysis of Variance, Audiometry, Pure-Tone, Audiometry, Speech, Auditory Threshold, Humans, Middle Aged, Signal Detection, Psychological