Results 1 - 20 of 33
1.
J Neurophysiol ; 125(2): 556-567, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33378250

ABSTRACT

To program a goal-directed response in the presence of acoustic reflections, the audio-motor system should suppress the detection of time-delayed sources. We examined the effects of spatial separation and interstimulus delay on the ability of human listeners to localize a pair of broadband sounds in the horizontal plane. Participants indicated how many sounds were heard and where these were perceived by making one or two head-orienting localization responses. Results suggest that perceptual fusion of the two sounds depends on delay and spatial separation. Leading and lagging stimuli in close spatial proximity required longer stimulus delays to be perceptually separated than those further apart. Whenever participants heard one sound, their localization responses for synchronous sounds were oriented to a weighted average of both source locations. For short delays, responses were directed toward the leading stimulus location. Increasing spatial separation enhanced this effect. For longer delays, responses were again directed toward a weighted average. When participants perceived two sounds, the first and second responses were directed to either of the leading and lagging source locations. Perceived locations were often interchanged in their temporal order (in ∼40% of trials). We show that the percept of two sounds requires sufficient spatiotemporal separation, after which localization can be performed with high accuracy. We propose that the percept of the temporal order of two concurrent sounds results from a different process than localization, and discuss how dynamic lateral excitatory-inhibitory interactions within a spatial sensorimotor map could explain the findings.

NEW & NOTEWORTHY: Sound localization requires spectral and temporal processing of implicit acoustic cues, and is seriously challenged when multiple sources coincide closely in space and time. We systematically varied spatial-temporal disparities for two sounds and instructed listeners to generate goal-directed head movements. We found that even when the auditory system has accurate representations of both sources, it still has trouble deciding whether the scene contained one or two sounds, and in which order they appeared.


Subject(s)
Sound Localization, Spatial Behavior, Adult, Brain/physiology, Cues, Female, Head Movements, Humans, Male
2.
J Acoust Soc Am ; 142(5): 3094, 2017 11.
Article in English | MEDLINE | ID: mdl-29195479

ABSTRACT

To program a goal-directed response in the presence of multiple sounds, the audiomotor system should separate the sound sources. The authors examined whether the brain can segregate synchronous broadband sounds in the midsagittal plane, using amplitude modulations as an acoustic discrimination cue. To succeed in this task, the brain has to use pinna-induced spectral-shape cues and temporal envelope information. The authors tested spatial segregation performance in the midsagittal plane in two paradigms in which human listeners were required to localize, or distinguish, a target amplitude-modulated broadband sound when a non-modulated broadband distractor was played simultaneously at another location. The level difference between the amplitude-modulated and distractor stimuli was systematically varied, as well as the modulation frequency of the target sound. The authors found that participants were unable to segregate, or localize, the synchronous sounds. Instead, they invariably responded toward a level-weighted average of both sound locations, irrespective of the modulation frequency. An increased variance in the response distributions for double sounds of equal level was also observed, which cannot be accounted for by a segregation model, or by a probabilistic averaging model.
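Illustrative sketch (not the authors' analysis code) of the level-weighted averaging account that the responses pointed to: the predicted response direction is the average of the two source locations, weighted here, as an assumption, by the sources' linear intensities.

```python
import numpy as np

def level_weighted_average(elev_target_deg, elev_distractor_deg,
                           level_target_db, level_distractor_db):
    """Predicted single-response elevation for two synchronous sources.

    Assumption (for illustration only): each source contributes a weight
    proportional to its linear intensity, so a 0-dB level difference yields
    the midpoint and a large difference pulls the response toward the
    louder source.
    """
    w_target = 10 ** (level_target_db / 10.0)
    w_distractor = 10 ** (level_distractor_db / 10.0)
    w_sum = w_target + w_distractor
    return (w_target * elev_target_deg + w_distractor * elev_distractor_deg) / w_sum

# Example: equal-level sources at +30 and -30 deg -> predicted response ~0 deg.
print(level_weighted_average(30.0, -30.0, 60.0, 60.0))
```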

3.
Eur J Neurosci ; 39(9): 1538-50, 2014 May.
Article in English | MEDLINE | ID: mdl-24649904

ABSTRACT

We characterised task-related top-down signals in monkey auditory cortex cells by comparing single-unit activity during passive sound exposure with neuronal activity during a predictable and unpredictable reaction-time task for a variety of spectral-temporally modulated broadband sounds. Although animals were not trained to attend to particular spectral or temporal sound modulations, their reaction times demonstrated clear acoustic spectral-temporal sensitivity for unpredictable modulation onsets. Interestingly, this sensitivity was absent for predictable trials with fast manual responses, but re-emerged for the slower reactions in these trials. Our analysis of neural activity patterns revealed a task-related dynamic modulation of auditory cortex neurons that was locked to the animal's reaction time, but invariant to the spectral and temporal acoustic modulations. This finding suggests dissociation between acoustic and behavioral signals at the single-unit level. We further demonstrated that single-unit activity during task execution can be described by a multiplicative gain modulation of acoustic-evoked activity and a task-related top-down signal, rather than by linear summation of these signals.
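The contrast drawn in the final sentence can be made concrete with a toy sketch; the rate variable and the gain/offset signals below are illustrative placeholders, not the authors' fitted model.

```python
import numpy as np

def multiplicative_model(r_acoustic, task_gain):
    """Task-related top-down signal scales the acoustic-evoked rate."""
    return task_gain * r_acoustic

def additive_model(r_acoustic, task_offset):
    """Alternative: task signal adds linearly to the acoustic-evoked rate."""
    return r_acoustic + task_offset

# Toy stimulus-locked response burst.
t = np.linspace(0, 1, 200)
r_acoustic = np.exp(-((t - 0.3) / 0.05) ** 2)

r_mult = multiplicative_model(r_acoustic, 1.8)
r_add = additive_model(r_acoustic, 0.8)
# Multiplicative modulation rescales the whole profile (baseline stays ~0);
# additive modulation lifts the baseline without changing the profile shape.
print(round(r_mult.min(), 2), round(r_mult.max(), 2))   # ~0.0, 1.8
print(round(r_add.min(), 2), round(r_add.max(), 2))     # ~0.8, 1.8
```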


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Neurons/physiology, Acoustic Stimulation, Animals, Discrimination, Psychological/physiology, Macaca mulatta, Male
4.
Eur J Neurosci ; 37(9): 1501-10, 2013 May.
Article in English | MEDLINE | ID: mdl-23463919

ABSTRACT

Orienting responses to audiovisual events have shorter reaction times and better accuracy and precision when images and sounds in the environment are aligned in space and time. How the brain constructs an integrated audiovisual percept is a computational puzzle, because the auditory and visual senses are represented in different reference frames: the retina encodes visual locations with respect to the eyes, whereas sound-localisation cues are referenced to the head. In the well-known ventriloquist effect, the auditory spatial percept of the ventriloquist's voice is attracted toward the synchronous visual image of the dummy, but does this visual bias on sound localisation operate in a common reference frame by correctly taking into account eye and head position? Here we studied this question by independently varying initial eye and head orientations and the amount of audiovisual spatial mismatch. Human subjects pointed head and/or gaze to auditory targets in elevation, and were instructed to ignore co-occurring visual distracters. Results demonstrate that different initial head and eye orientations are accurately and appropriately incorporated into an audiovisual response. Effectively, sounds and images are perceptually fused according to their physical locations in space, independent of an observer's point of view. Implications for neurophysiological findings and modelling efforts that aim to reconcile sensory and motor signals for goal-directed behaviour are discussed.


Subject(s)
Eye Movements/physiology, Head Movements/physiology, Sound Localization/physiology, Acoustic Stimulation, Adult, Female, Humans, Male, Middle Aged, Photic Stimulation, Psychomotor Performance, Retina/physiology, Space Perception/physiology
5.
Eur J Neurosci ; 37(11): 1830-42, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23510187

ABSTRACT

It is unclear whether top-down processing in the auditory cortex (AC) interferes with its bottom-up analysis of sound. Recent studies indicated non-acoustic modulations of AC responses, and that attention changes a neuron's spectrotemporal tuning. As a result, the AC would seem ill-suited to represent a stable acoustic environment, which is deemed crucial for auditory perception. To assess whether top-down signals influence acoustic tuning in tasks without directed attention, we compared monkey single-unit AC responses to dynamic spectrotemporal sounds under different behavioral conditions. Recordings were mostly made from neurons located in primary fields (primary AC and area R of the AC) that were well tuned to pure tones, with short onset latencies. We demonstrated that responses in the AC were substantially modulated during an auditory detection task and that these modulations were systematically related to top-down processes. Importantly, despite these significant modulations, the spectrotemporal receptive fields of all neurons remained remarkably stable. Our results suggest multiplexed encoding of bottom-up acoustic and top-down task-related signals at single AC neurons. This mechanism preserves a stable representation of the acoustic environment despite strong non-acoustic modulations.


Subject(s)
Auditory Cortex/physiology, Auditory Perception, Acoustic Stimulation, Animals, Attention, Auditory Cortex/cytology, Macaca mulatta, Male, Neurons/physiology, Reaction Time
6.
Trends Hear ; 27: 23312165221143907, 2023.
Article in English | MEDLINE | ID: mdl-36605011

ABSTRACT

Many cochlear implant users with binaural residual (acoustic) hearing benefit from combining electric and acoustic stimulation (EAS) in the implanted ear with acoustic amplification in the other. These bimodal EAS listeners can potentially use low-frequency binaural cues to localize sounds. However, their hearing is generally asymmetric for mid- and high-frequency sounds, perturbing or even abolishing binaural cues. Here, we investigated the effect of a frequency-dependent binaural asymmetry in hearing thresholds on sound localization by seven bimodal EAS listeners. Frequency dependence was probed by presenting sounds with power in low-, mid-, high-, or mid-to-high-frequency bands. Frequency-dependent hearing asymmetry was present in the bimodal EAS listening condition (when using both devices), but was also induced by independently switching devices on or off. Using both devices, hearing was near symmetric for low frequencies, asymmetric for mid frequencies with better hearing thresholds in the implanted ear, and monaural for high frequencies with no hearing in the non-implanted ear. Results show that sound-localization performance was poor in general. Typically, localization was strongly biased toward the better-hearing ear. We observed that hearing asymmetry was a good predictor of these biases. Notably, even when hearing was symmetric, a preferential bias toward the ear using the hearing aid was revealed. We discuss how the frequency dependence of any hearing asymmetry may lead to binaural cues that are spatially inconsistent as the spectrum of a sound changes. We speculate that this inconsistency may prevent accurate sound localization even after long-term exposure to the hearing asymmetry.


Subject(s)
Cochlear Implantation, Cochlear Implants, Hearing Aids, Sound Localization, Speech Perception, Humans, Speech Perception/physiology, Cochlear Implantation/methods, Hearing, Sound Localization/physiology, Acoustic Stimulation/methods
7.
J Neurosci ; 31(48): 17496-504, 2011 Nov 30.
Article in English | MEDLINE | ID: mdl-22131411

ABSTRACT

The auditory system represents sound-source directions initially in head-centered coordinates. To program eye-head gaze shifts to sounds, the orientation of eyes and head should be incorporated to specify the target relative to the eyes. Here we test (1) whether this transformation involves a stage in which sounds are represented in a world- or a head-centered reference frame, and (2) whether acoustic spatial updating occurs at a topographically organized motor level representing gaze shifts, or within the tonotopically organized auditory system. Human listeners generated head-unrestrained gaze shifts from a large range of initial eye and head positions toward brief broadband sound bursts, and to tones at different center frequencies, presented in the midsagittal plane. Tones were heard at a fixed illusory elevation, regardless of their actual location, that depended in an idiosyncratic way on initial head and eye position, as well as on the tone's frequency. Gaze shifts to broadband sounds were accurate, fully incorporating initial eye and head positions. The results support the hypothesis that the auditory system represents sounds in a supramodal reference frame, and that signals about eye and head orientation are incorporated at a tonotopic stage.
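As a first-order illustration of the coordinate bookkeeping tested here (one-dimensional, ignoring the non-commutativity of full 3-D rotations), the required transformations reduce to additions and subtractions of orientation signals; the function names and example values are illustrative.

```python
def sound_re_eyes(sound_re_head_deg, eye_in_head_deg):
    """Target direction relative to the eyes (needed to program a gaze shift):
    the head-centred acoustic target minus the current eye-in-head orientation."""
    return sound_re_head_deg - eye_in_head_deg

def sound_re_world(sound_re_head_deg, head_in_world_deg):
    """Target direction in world coordinates: the head-centred target plus the
    current head-in-world orientation."""
    return sound_re_head_deg + head_in_world_deg

# Example: a sound 10 deg above the head axis, eyes rotated 15 deg up in the
# head, head pitched 5 deg down in space.
print(sound_re_eyes(10.0, 15.0))    # -5 deg relative to the eyes
print(sound_re_world(10.0, -5.0))   #  5 deg in world coordinates
```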


Subject(s)
Eye Movements/physiology, Head Movements/physiology, Orientation/physiology, Sound Localization/physiology, Acoustic Stimulation, Adult, Auditory Perception/physiology, Female, Humans, Male, Middle Aged
8.
Trends Hear ; 26: 23312165221127589, 2022.
Article in English | MEDLINE | ID: mdl-36172759

ABSTRACT

We tested whether sensitivity to acoustic spectrotemporal modulations can be observed from reaction times for normal-hearing and impaired-hearing conditions. In a manual reaction-time task, normal-hearing listeners had to detect the onset of a ripple (with densities between 0 and 8 cycles/octave and a fixed modulation depth of 50%) that moved up or down the log-frequency axis at constant velocity (between 0 and 64 Hz) in an otherwise unmodulated broadband white noise. Spectral and temporal modulations elicited band-pass filtered sensitivity characteristics, with fastest detection rates around 1 cycle/octave and 32 Hz for normal-hearing conditions. These results closely resemble data from other studies that typically used the modulation-depth threshold as a sensitivity criterion. To simulate hearing impairment, stimuli were processed with a 6-channel cochlear-implant vocoder, and with a hearing-aid simulation that introduced separate spectral smearing and low-pass filtering. Reaction times were always much slower than for normal hearing, especially for the highest spectral densities. Binaural performance was predicted well by the benchmark race model of binaural independence, which models statistical facilitation of independent monaural channels. For the impaired-hearing simulations this implied a "best-of-both-worlds" principle in which the listeners relied on the hearing-aid ear to detect spectral modulations, and on the cochlear-implant ear for temporal-modulation detection. Although singular-value decomposition indicated that the joint spectrotemporal sensitivity matrix could be largely reconstructed from independent temporal and spectral sensitivity functions, in line with time-spectrum separability, a substantial inseparable spectral-temporal interaction was present in all hearing conditions. These results suggest that the reaction-time task yields a valid and effective objective measure of acoustic spectrotemporal-modulation sensitivity.
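The benchmark race model of binaural independence referred to above can be written directly on the monaural reaction-time distributions; the sketch below uses hypothetical toy distributions and is not the study's analysis code.

```python
import numpy as np

def race_model_cdf(cdf_left, cdf_right):
    """Race model under statistical independence: the binaural response is
    triggered by whichever monaural channel finishes first, so
    P(RT <= t) = P_L(t) + P_R(t) - P_L(t) * P_R(t)."""
    cdf_left = np.asarray(cdf_left)
    cdf_right = np.asarray(cdf_right)
    return cdf_left + cdf_right - cdf_left * cdf_right

# Toy monaural reaction-time CDFs evaluated on a common time axis (seconds).
t = np.linspace(0.1, 1.0, 10)
cdf_l = np.clip((t - 0.25) / 0.5, 0, 1)   # hypothetical left-ear CDF
cdf_r = np.clip((t - 0.35) / 0.5, 0, 1)   # hypothetical right-ear CDF
print(race_model_cdf(cdf_l, cdf_r))       # predicted binaural CDF
```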


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Acoustic Stimulation, Auditory Threshold, Hearing, Humans, Reaction Time
9.
J Neurosci ; 30(1): 194-204, 2010 Jan 06.
Article in English | MEDLINE | ID: mdl-20053901

ABSTRACT

To program a goal-directed orienting response toward a sound source embedded in an acoustic scene, the audiomotor system should detect and select the target against a background. Here, we focus on whether the system can segregate synchronous sounds in the midsagittal plane (elevation), a task requiring the auditory system to dissociate the pinna-induced spectral localization cues. Human listeners made rapid head-orienting responses toward either a single sound source (broadband buzzer or Gaussian noise) or toward two simultaneously presented sounds (buzzer and noise) at a wide variety of locations in the midsagittal plane. In the latter case, listeners had to orient to the buzzer (target) and ignore the noise (nontarget). In the single-sound condition, localization was accurate. However, in the double-sound condition, response endpoints depended on relative sound level and spatial disparity. The loudest sound dominated the responses, regardless of whether it was the target or the nontarget. When the sounds had about equal intensities and their spatial disparity was sufficiently small, endpoint distributions were well described by weighted averaging. However, when spatial disparities exceeded approximately 45 degrees, response endpoint distributions became bimodal. Similar response behavior has been reported for visuomotor experiments, for which averaging and bimodal endpoint distributions are thought to arise from neural interactions within retinotopically organized visuomotor maps. We show, however, that the auditory-evoked responses can be well explained by the idiosyncratic acoustics of the pinnae. Hence basic principles of target representation and selection for audition and vision appear to differ profoundly.


Subject(s)
Acoustic Stimulation/methods, Cues, Ear Auricle/physiology, Orientation/physiology, Reaction Time/physiology, Sound Localization/physiology, Adult, Auditory Perception/physiology, Female, Humans, Male
10.
Front Neurosci ; 15: 683804, 2021.
Article in English | MEDLINE | ID: mdl-34393707

ABSTRACT

The cochlear implant (CI) allows profoundly deaf individuals to partially recover hearing. Still, due to the coarse acoustic information provided by the implant, CI users have considerable difficulties in recognizing speech, especially in noisy environments. CI users therefore rely heavily on visual cues to augment speech recognition, more so than normal-hearing individuals. However, it is unknown how attention to one (focused) or both (divided) modalities plays a role in multisensory speech recognition. Here we show that unisensory speech listening and reading were negatively impacted in divided-attention tasks for CI users, but not for normal-hearing individuals. Our psychophysical experiments revealed that, as expected, listening thresholds were consistently better for the normal-hearing listeners, while lipreading thresholds were largely similar for the two groups. Moreover, audiovisual speech recognition for normal-hearing individuals could be described well by probabilistic summation of auditory and visual speech recognition, while CI users were better integrators than expected from statistical facilitation alone. Our results suggest that this benefit in integration comes at a cost. Unisensory speech recognition is degraded for CI users when attention needs to be divided across modalities. We conjecture that CI users exhibit an integration-attention trade-off. They focus solely on a single modality during focused-attention tasks, but need to divide their limited attentional resources in situations with uncertainty about the upcoming stimulus modality. We argue that in order to determine the benefit of a CI for speech recognition, situational factors need to be discounted by presenting speech in realistic or complex audiovisual environments.
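The probabilistic-summation benchmark against which the CI users' integration was judged has a one-line form; a minimal sketch with illustrative numbers (not data from the study):

```python
def probability_summation(p_auditory, p_visual):
    """Predicted audiovisual word-recognition probability if listening and
    lipreading act as independent channels: a word is missed only when both
    channels miss it."""
    return 1.0 - (1.0 - p_auditory) * (1.0 - p_visual)

# Example: 40% auditory-only and 30% lipreading-only recognition
# -> 58% predicted audiovisual recognition under independence.
print(probability_summation(0.40, 0.30))
```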

11.
eNeuro ; 8(3)2021.
Article in English | MEDLINE | ID: mdl-33875456

ABSTRACT

Although moving sound-sources abound in natural auditory scenes, it is not clear how the human brain processes auditory motion. Previous studies have indicated that, although ocular localization responses to stationary sounds are quite accurate, ocular smooth pursuit of moving sounds is very poor. We here demonstrate that human subjects faithfully track a sound's unpredictable movements in the horizontal plane with smooth-pursuit responses of the head. Our analysis revealed that the stimulus-response relation was well described by an under-damped passive, second-order low-pass filter in series with an idiosyncratic, fixed, pure delay. The model contained only two free parameters: the system's damping coefficient, and its central (resonance) frequency. We found that the latter remained constant at ∼0.6 Hz throughout the experiment for all subjects. Interestingly, the damping coefficient systematically increased with trial number, suggesting the presence of an adaptive mechanism in the auditory pursuit system (APS). This mechanism functions even for unpredictable sound-motion trajectories endowed with fixed, but covert, frequency characteristics in open-loop tracking conditions. We conjecture that the APS optimizes a trade-off between response speed and effort. Taken together, our data support the existence of a pursuit system for auditory head-tracking, which would suggest the presence of a neural representation of a spatial auditory fovea (AF).
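A minimal simulation sketch of the model class described above: an under-damped second-order low-pass filter in series with a fixed pure delay. Apart from the ~0.6 Hz resonance frequency reported here, the parameter values and the sinusoidal test trajectory are illustrative assumptions, not fitted values from the study.

```python
import numpy as np
from scipy.signal import lti, lsim

def auditory_pursuit_response(t, target_azimuth, f0_hz=0.6, zeta=0.3,
                              delay_s=0.25):
    """Simulate head-pursuit of a moving sound as an under-damped second-order
    low-pass filter (unity DC gain) in series with a fixed pure delay."""
    w0 = 2 * np.pi * f0_hz
    system = lti([w0 ** 2], [1.0, 2 * zeta * w0, w0 ** 2])
    # Apply the pure delay by shifting the input signal in time.
    delayed = np.interp(t - delay_s, t, target_azimuth, left=target_azimuth[0])
    _, head_azimuth, _ = lsim(system, U=delayed, T=t)
    return head_azimuth

# Example: track a 0.2-Hz sinusoidal sound trajectory (degrees) for 20 s; the
# simulated head trajectory lags the target by the delay plus the filter's
# phase lag.
t = np.linspace(0, 20, 2001)
target = 30 * np.sin(2 * np.pi * 0.2 * t)
head = auditory_pursuit_response(t, target)
```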


Subject(s)
Motion Perception, Pursuit, Smooth, Humans, Movement, Psychomotor Performance, Reaction Time, Sound
12.
J Speech Lang Hear Res ; 64(12): 5000-5013, 2021 12 13.
Article in English | MEDLINE | ID: mdl-34714704

ABSTRACT

PURPOSE: Speech understanding in noise and horizontal sound localization are poor in most cochlear implant (CI) users with a hearing aid (bimodal stimulation). This study investigated the effect of static and less-extreme adaptive frequency compression in hearing aids on spatial hearing. By means of frequency compression, we aimed to restore high-frequency audibility, and thus improve sound localization and spatial speech recognition. METHOD: Sound-detection thresholds, sound localization, and spatial speech recognition were measured in eight bimodal CI users, with and without frequency compression. We tested two compression algorithms: a static algorithm, which compressed frequencies beyond the compression knee point (160 or 480 Hz), and an adaptive algorithm, which aimed to compress only consonants, leaving vowels unaffected (adaptive knee-point frequencies from 736 to 2946 Hz). RESULTS: Compression yielded a strong audibility benefit (high-frequency thresholds improved by 40 and 24 dB for static and adaptive compression, respectively), no meaningful improvement in localization performance (errors remained >30 deg), and improved spatial speech recognition across all participants. Localization biases without compression (toward the hearing-aid and implant side for low- and high-frequency sounds, respectively) disappeared or reversed with compression. The audibility benefits provided to each bimodal user partially explained any individual improvements in localization performance; shifts in bias; and, for six out of eight participants, benefits in spatial speech recognition. CONCLUSIONS: We speculate that limiting factors such as a persistent hearing asymmetry and a mismatch in spectral overlap prevent compression in bimodal users from improving sound localization. Therefore, the benefit in spatial release from masking by compression is likely due to a shift of attention to the ear with the better signal-to-noise ratio facilitated by compression, rather than to improved spatial selectivity. Supplemental Material: https://doi.org/10.23641/asha.16869485
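For orientation, the principle behind static frequency compression beyond a knee point can be sketched as a simple log-frequency mapping; commercial hearing-aid algorithms implement the mapping differently, so the function below is only an illustration of the idea.

```python
def compress_frequency(f_in_hz, knee_hz=480.0, ratio=2.0):
    """Schematic static frequency compression: frequencies below the knee
    point pass unchanged; frequencies above it are compressed on a
    log-frequency scale by the given ratio (illustrative, not a hearing-aid
    manufacturer's algorithm)."""
    if f_in_hz <= knee_hz:
        return f_in_hz
    return knee_hz * (f_in_hz / knee_hz) ** (1.0 / ratio)

# Example: a 4000-Hz component maps to ~1386 Hz with a 480-Hz knee point and a
# 2:1 ratio, moving high-frequency energy into the listener's audible range.
print(compress_frequency(4000.0))
```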


Subject(s)
Cochlear Implantation, Cochlear Implants, Hearing Aids, Sound Localization, Speech Perception, Hearing, Humans, Speech Perception/physiology
13.
Eur J Neurosci ; 31(10): 1763-71, 2010 May.
Article in English | MEDLINE | ID: mdl-20584180

ABSTRACT

Orienting responses to audiovisual events in the environment can benefit markedly from the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials repeatedly favours aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks that also contained pseudorandomly presented spatially disparate stimuli. The results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.
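One simple way to realize dynamic updating of the prior expectation of audiovisual congruency is a leaky trial-by-trial integrator of binary alignment evidence; the sketch below illustrates the idea and is not the specific estimation model discussed in the paper.

```python
def update_alignment_prior(prior, observed_aligned, learning_rate=0.2):
    """Exponentially weighted running estimate of the probability that the
    next audiovisual event is spatially aligned (leaky integration of binary
    evidence). Illustrative sketch only."""
    evidence = 1.0 if observed_aligned else 0.0
    return (1.0 - learning_rate) * prior + learning_rate * evidence

# Example: a run of aligned trials pushes the prior up; a disparate trial
# pulls it back down, which would predict slower reactions on the next trial.
p = 0.5
for aligned in [True, True, True, False, True]:
    p = update_alignment_prior(p, aligned)
    print(round(p, 3))
```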


Subject(s)
Auditory Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Attention/physiology, Calibration, Cues, Electroencephalography, Female, Humans, Male, Photic Stimulation, Reaction Time/physiology, Space Perception/physiology, Young Adult
14.
Biol Cybern ; 103(6): 415-32, 2010 Dec.
Article in English | MEDLINE | ID: mdl-21082199

ABSTRACT

The double magnetic induction (DMI) method has successfully been used to record head-unrestrained gaze shifts in human subjects (Bremen et al., J Neurosci Methods 160:75-84, 2007a; J Neurophysiol 98:3759-3769, 2007b). This method employs a small golden ring placed on the eye that, when positioned within oscillating magnetic fields, induces orientation-dependent voltages in a pickup coil in front of the eye. Here we develop and test a streamlined calibration routine for use with experimental animals, in particular monkeys. The calibration routine requires only that the animal accurately follow visual targets presented at random locations in the visual field. Animals can readily learn this task. In addition, we exploit the fact that the pickup coil can be fixed rigidly and reproducibly on implants on the animal's skull, so that accumulation of calibration data leads to increasing accuracy. As a first step, we simulated gaze shifts and the resulting DMI signals. Our simulations showed that the complex DMI signals can be effectively calibrated with random target sequences, which elicit substantial decoupling of eye and head orientations in a natural way. Subsequently, we tested our paradigm on three macaque monkeys. Our results show that the data for a successful calibration can be collected in a single recording session, in which the monkey makes about 1,500-2,000 goal-directed saccades. We obtained a resolution of 30 arc minutes (measurement range [-60, +60]°). This resolution is comparable to the fixation resolution of the monkey's oculomotor system and to that of the standard scleral search-coil method.
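The core of the calibration idea, fitting a mapping from (pickup-coil voltage, head orientation) to known target directions collected during random-target fixations, can be sketched as a least-squares fit on polynomial features. The model class, feature choice, and toy data below are illustrative assumptions; the actual DMI calibration procedure may differ.

```python
import numpy as np

def fit_dmi_calibration(pickup_voltage, head_orientation, target_direction,
                        degree=3):
    """Least-squares fit of a polynomial mapping from (DMI pickup voltage,
    head orientation) to known target directions gathered during
    random-target fixations. Illustrative sketch only."""
    v = np.asarray(pickup_voltage)
    h = np.asarray(head_orientation)
    # Polynomial feature matrix in v and h up to the given total degree.
    cols = [v ** i * h ** j
            for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    X = np.column_stack(cols)
    coefs, *_ = np.linalg.lstsq(X, np.asarray(target_direction), rcond=None)
    return coefs

# Toy usage: synthetic fixations of random targets within +/-60 deg.
rng = np.random.default_rng(0)
targets = rng.uniform(-60, 60, 500)           # known target directions (deg)
heads = rng.uniform(-30, 30, 500)             # measured head orientations (deg)
volts = np.sin(np.radians(targets - heads))   # toy ring signal
coefs = fit_dmi_calibration(volts, heads, targets)
```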


Subject(s)
Eye Movements, Magnetics, Animals, Calibration, Models, Theoretical
15.
Exp Brain Res ; 198(2-3): 425-37, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19415249

ABSTRACT

In a previous study we quantified the effect of multisensory integration on the latency and accuracy of saccadic eye movements toward spatially aligned audiovisual (AV) stimuli within a rich AV-background (Corneil et al. in J Neurophysiol 88:438-454, 2002). In those experiments both stimulus modalities belonged to the same object, and subjects were instructed to foveate that source, irrespective of modality. Under natural conditions, however, subjects have no prior knowledge as to whether visual and auditory events originated from the same, or from different objects in space and time. In the present experiments we included these possibilities by introducing various spatial and temporal disparities between the visual and auditory events within the AV-background. Subjects had to orient fast and accurately to the visual target, thereby ignoring the auditory distractor. We show that this task belies a dichotomy, as it was quite difficult to produce fast responses (<250 ms) that were not aurally driven. Subjects therefore made many erroneous saccades. Interestingly, for the spatially aligned events the inability to ignore auditory stimuli produced shorter reaction times, but also more accurate responses than for the unisensory target conditions. These findings, which demonstrate effective multisensory integration, are similar to those of the previous study, and the same multisensory integration rules apply (Corneil et al. in J Neurophysiol 88:438-454, 2002). In contrast, with increasing spatial disparity, integration gradually broke down, as the subjects' responses became bistable: saccades were directed either to the auditory (fast responses), or to the visual stimulus (late responses). Interestingly, also in this case responses were faster and more accurate than those to the respective unisensory stimuli.


Subject(s)
Attention, Psychomotor Performance, Saccades, Signal Detection, Psychological, Acoustic Stimulation, Adult, Humans, Male, Photic Stimulation, Psychophysics, Reaction Time, Regression (Psychology), Sound Localization, Task Performance and Analysis, Time Factors, Young Adult
16.
eNeuro ; 6(2)2019.
Article in English | MEDLINE | ID: mdl-30963103

ABSTRACT

The auditory system relies on binaural differences and spectral pinna cues to localize sounds in azimuth and elevation. However, the acoustic input can be unreliable, due to uncertainty about the environment, and neural noise. A possible strategy to reduce sound-location uncertainty is to integrate the sensory observations with sensorimotor information from previous experience, to infer where sounds are more likely to occur. We investigated whether and how human sound localization performance is affected by the spatial distribution of target sounds, and changes thereof. We tested three different open-loop paradigms, in which we varied the spatial range of sounds in different ways. For the narrowest ranges, target-response gains were highly idiosyncratic and deviated from an optimal gain predicted by error-minimization; in the horizontal plane the deviation typically consisted of a response overshoot. Moreover, participants adjusted their behavior by rapidly adapting their gain to the target range, both in elevation and in azimuth, yielding behavior closer to optimal for larger target ranges. Notably, gain changes occurred without any exogenous feedback about performance. We discuss how the findings can be explained by a sub-optimal model in which the motor-control system reduces its response error across trials to within an acceptable range, rather than strictly minimizing the error.
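Target-response gain and bias in this kind of open-loop paradigm are conventionally obtained from a linear fit of responses on targets; a minimal sketch on synthetic data (not data from the study), showing a response overshoot (gain > 1) for a narrow target range:

```python
import numpy as np

def stimulus_response_fit(target_deg, response_deg):
    """Fit the linear stimulus-response relation response = gain * target + bias
    by ordinary least squares; the gain quantifies how strongly listeners
    scale their localization responses to the presented target range."""
    X = np.column_stack([np.asarray(target_deg), np.ones(len(target_deg))])
    (gain, bias), *_ = np.linalg.lstsq(X, np.asarray(response_deg), rcond=None)
    return gain, bias

# Toy example: responses that overshoot a narrow +/-10 deg target range.
rng = np.random.default_rng(1)
targets = rng.uniform(-10, 10, 200)
responses = 1.6 * targets + rng.normal(0, 5, 200)   # gain > 1: overshoot
print(stimulus_response_fit(targets, responses))
```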


Subject(s)
Sound Localization/physiology, Acoustic Stimulation, Adult, Cues, Female, Humans, Male, Young Adult
17.
Front Neurol ; 10: 637, 2019.
Article in English | MEDLINE | ID: mdl-31293495

ABSTRACT

This study describes the sound localization and speech-recognition-in-noise abilities of a cochlear-implant user with electro-acoustic stimulation (EAS) in one ear, and a hearing aid in the contralateral ear. This listener had low-frequency (up to 250 Hz) residual hearing within the normal range in both ears. The objective was to determine how hearing devices affect spatial hearing for an individual with substantial unaided low-frequency residual hearing. Sound-localization performance was assessed for three sounds with different bandpass characteristics: low center frequency (100-400 Hz), mid center frequency (500-1,500 Hz), and high-frequency broadband (500-20,000 Hz) noise. Speech recognition was assessed with the Dutch Matrix sentence test presented in noise. Tests were performed while the listener used several on-off combinations of the devices. The listener localized low-center-frequency sounds well in all hearing conditions, but mid-center-frequency and high-frequency broadband sounds were localized well almost exclusively in the completely unaided condition (mid-center-frequency sounds were also localized well with the EAS device alone). Speech recognition was best in the fully aided condition with speech presented from the front and noise presented at either side. Furthermore, there was no significant improvement in speech recognition with all devices on, compared to when the listener used her cochlear implant only. The hearing aids and cochlear implant impair high-frequency spatial hearing due to improper weighting of interaural time and level difference cues. The results reinforce the notion that hearing symmetry is important for sound localization; this symmetry is perturbed by the hearing devices for higher frequencies. Speech recognition depends mainly on hearing through the cochlear implant and is not significantly improved by the added information from the hearing aids. A contralateral hearing aid provides a benefit when the noise is spatially separated from the speech. However, this benefit is explained by the head shadow at that ear, rather than by an ability to spatially segregate noise from speech, as sound localization was perturbed with all devices in use.

18.
Trends Hear ; 23: 2331216519847332, 2019.
Article in English | MEDLINE | ID: mdl-31088265

ABSTRACT

Bilateral cochlear-implant (CI) users and single-sided deaf listeners with a CI are less effective at localizing sounds than normal-hearing (NH) listeners. This performance gap is due to the degradation of binaural and monaural sound localization cues, caused by a combination of device-related and patient-related issues. In this study, we targeted the device-related issues by measuring sound localization performance of 11 NH listeners, listening to free-field stimuli processed by a real-time CI vocoder. The use of a real-time vocoder is a new approach, which enables testing in a free-field environment. For the NH listening condition, all listeners accurately and precisely localized sounds according to a linear stimulus-response relationship with an optimal gain and a minimal bias both in the azimuth and in the elevation directions. In contrast, when listening with bilateral real-time vocoders, listeners tended to orient either to the left or to the right in azimuth and were unable to determine sound source elevation. When listening with an NH ear and a unilateral vocoder, localization was impoverished on the vocoder side but improved toward the NH side. Localization performance was also reflected by systematic variations in reaction times across listening conditions. We conclude that perturbation of interaural temporal cues, reduction of interaural level cues, and removal of spectral pinna cues by the vocoder impairs sound localization. Listeners seem to ignore cues that were made unreliable by the vocoder, leading to acute reweighting of available localization cues. We discuss how current CI processors prevent CI users from localizing sounds in everyday environments.


Subject(s)
Auditory Perception, Cochlear Implants, Sound Localization, Acoustic Stimulation, Adult, Auditory Perception/physiology, Humans, Male, Sound Localization/physiology
19.
Front Hum Neurosci ; 13: 335, 2019.
Article in English | MEDLINE | ID: mdl-31611780

ABSTRACT

We assessed how synchronous speech listening and lipreading affect speech recognition in acoustic noise. In simple audiovisual perceptual tasks, inverse effectiveness is often observed: the weaker the unimodal stimuli, or the poorer their signal-to-noise ratio, the stronger the audiovisual benefit. So far, however, inverse effectiveness has not been demonstrated for complex audiovisual speech stimuli. Here we assess whether this multisensory integration effect can also be observed for the recognizability of spoken words. To that end, we presented audiovisual sentences to 18 native-Dutch normal-hearing participants, who had to identify the spoken words from a finite list. Speech-recognition performance was determined for auditory-only, visual-only (lipreading), and auditory-visual conditions. To modulate acoustic task difficulty, we systematically varied the auditory signal-to-noise ratio. In line with a commonly observed multisensory enhancement of speech recognition, audiovisual words were more easily recognized than auditory-only words (recognition thresholds of -15 and -12 dB, respectively). We show that the difficulty of recognizing a particular word, either acoustically or visually, determines the occurrence of inverse effectiveness in audiovisual word integration. Thus, words that are better heard or recognized through lipreading benefit less from bimodal presentation. Audiovisual performance at the lowest acoustic signal-to-noise ratios (45%) fell below the visual recognition rates (60%), reflecting an actual deterioration of lipreading in the presence of excessive acoustic noise. This suggests that the brain may adopt a strategy in which attention has to be divided between listening and lipreading.

20.
Sci Rep ; 8(1): 8670, 2018 06 06.
Article in English | MEDLINE | ID: mdl-29875363

ABSTRACT

Two synchronous sounds at different locations in the midsagittal plane induce a fused percept at a weighted-average position, with weights depending on the relative sound intensities. In the horizontal plane, sound fusion (stereophony) disappears with a small onset asynchrony of 1-4 ms; the leading sound then fully determines the spatial percept (the precedence effect). Given that accurate localisation in the median plane requires an analysis of pinna-related spectral-shape cues, which takes ~25-30 ms of sound input to complete, we wondered at what time scale a precedence effect for elevation would manifest. Listeners localised the first of two sounds, with spatial disparities between 10 and 80 deg, and inter-stimulus delays between 0 and 320 ms. We demonstrate full fusion (averaging), and the largest response variability, for onset asynchronies up to at least 40 ms for all spatial disparities. Weighted averaging persisted, and gradually decayed, for delays >160 ms, suggesting considerable backward masking. Moreover, response variability decreased with increasing delays. These results demonstrate that localisation in the median plane undergoes substantial spatial blurring by lagging sounds. Thus, the human auditory system, despite its high temporal resolution, is unable to spatially dissociate sounds in the midsagittal plane that co-occur within a time window of at least 160 ms.
