Results 1 - 19 of 19
1.
Eur J Neurosci ; 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39085952

ABSTRACT

Sound-source localization is based on spatial cues arising due to interactions of sound waves with the torso, head and ears. Here, we evaluated neural responses to free-field sound sources in the central nucleus of the inferior colliculus (CIC), the medial geniculate body (MGB) and the primary auditory cortex (A1) of Mongolian gerbils. Using silicon probes, we recorded from anaesthetized gerbils positioned in the centre of a sound-attenuating, anechoic chamber. We measured rate-azimuth functions (RAFs) with broad-band noise of varying levels presented from loudspeakers spanning 210° in azimuth and characterized RAFs by calculating spatial centroids, Equivalent Rectangular Receptive Fields (ERRFs), steepest slope locations and spatial-separation thresholds. To compare neuronal responses with behavioural discrimination thresholds from the literature, we performed a neurometric analysis based on signal-detection theory. All structures demonstrated heterogeneous spatial tuning with a clear dominance of contralateral tuning. However, the relative amount of contralateral tuning decreased from the CIC to A1. In all three structures, spatial tuning broadened with increasing sound level. This effect was strongest in the CIC and weakest in A1. Neurometric spatial-separation thresholds compared well with behavioural discrimination thresholds for locations directly in front of the animal. Our findings contrast with those reported for another rodent, the rat, which exhibits homogeneous and sharply delimited contralateral spatial tuning. Spatial tuning in gerbils resembles more closely the tuning reported in A1 of cats, ferrets and non-human primates. Interestingly, gerbils, in contrast to rats, share good low-frequency hearing with carnivores and non-human primates, which may account for the observed spatial tuning properties.
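As a rough illustration of the RAF measures named in this abstract (spatial centroid, ERRF width, steepest-slope location), the sketch below computes them for a toy tuning curve. The function name, the rate-weighted vector-average definition of the centroid, and the example RAF are our own assumptions, not taken from the paper.

```python
import numpy as np

def characterize_raf(azimuth_deg, rate):
    """Summarize a rate-azimuth function (RAF) sampled at discrete speaker azimuths."""
    azimuth_deg = np.asarray(azimuth_deg, float)
    rate = np.asarray(rate, float)

    # Spatial centroid: rate-weighted vector average of the speaker directions
    # (one common convention; the paper may define the centroid differently).
    az_rad = np.deg2rad(azimuth_deg)
    centroid = np.rad2deg(np.arctan2(np.sum(rate * np.sin(az_rad)),
                                     np.sum(rate * np.cos(az_rad))))

    # Equivalent Rectangular Receptive Field: width of a rectangle with the same
    # area as the RAF and a height equal to the peak firing rate.
    area = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(azimuth_deg))
    errf_width = area / rate.max()

    # Steepest-slope location: azimuth where |d(rate)/d(azimuth)| is largest.
    slope = np.gradient(rate, azimuth_deg)
    steepest = azimuth_deg[np.argmax(np.abs(slope))]

    return centroid, errf_width, steepest

# Toy contralaterally tuned RAF spanning 210 deg of azimuth
az = np.linspace(-105, 105, 15)
rate = 40.0 / (1.0 + np.exp(-(az - 20.0) / 15.0))   # sigmoid favoring positive azimuths
print(characterize_raf(az, rate))
```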

2.
Sci Rep ; 8(1): 8670, 2018 06 06.
Article in English | MEDLINE | ID: mdl-29875363

ABSTRACT

Two synchronous sounds at different locations in the midsagittal plane induce a fused percept at a weighted-average position, with weights depending on relative sound intensities. In the horizontal plane, sound fusion (stereophony) disappears with a small onset asynchrony of 1-4 ms. The leading sound then fully determines the spatial percept (the precedence effect). Given that accurate localisation in the median plane requires an analysis of pinna-related spectral-shape cues, which takes ~25-30 ms of sound input to complete, we wondered at what time scale a precedence effect for elevation would manifest. Listeners localised the first of two sounds, with spatial disparities between 10-80 deg, and inter-stimulus delays between 0-320 ms. We demonstrate full fusion (averaging), and largest response variability, for onset asynchronies up to at least 40 ms for all spatial disparities. Weighted averaging persisted, and gradually decayed, for delays >160 ms, suggesting considerable backward masking. Moreover, response variability decreased with increasing delays. These results demonstrate that localisation undergoes substantial spatial blurring in the median plane by lagging sounds. Thus, the human auditory system, despite its high temporal resolution, is unable to spatially dissociate sounds in the midsagittal plane that co-occur within a time window of at least 160 ms.
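The weighted-average fusion described in this abstract can be written down directly. The minimal sketch below assumes intensity-proportional weights, which is only an illustrative choice; the study measures the weighting empirically.

```python
def fused_elevation(elev1_deg, elev2_deg, level1_db, level2_db):
    """Predicted fused percept for two synchronous sources in the median plane.

    Uses intensity-proportional weights; the actual weighting function is an
    empirical quantity, so this particular choice is only illustrative.
    """
    i1, i2 = 10.0 ** (level1_db / 10.0), 10.0 ** (level2_db / 10.0)
    w = i1 / (i1 + i2)
    return w * elev1_deg + (1.0 - w) * elev2_deg

# Equal levels give the midpoint; a 10-dB louder source pulls the percept toward it.
print(fused_elevation(-20.0, 20.0, 60.0, 60.0))   # 0.0
print(fused_elevation(-20.0, 20.0, 70.0, 60.0))   # about -16.4
```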

3.
Front Syst Neurosci ; 11: 89, 2017.
Article in English | MEDLINE | ID: mdl-29238295

ABSTRACT

The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.

4.
J Neurosci ; 35(49): 16199-212, 2015 Dec 09.
Article in English | MEDLINE | ID: mdl-26658870

ABSTRACT

Stream segregation enables a listener to disentangle multiple competing sequences of sounds. A recent study from our laboratory demonstrated that cortical neurons in anesthetized cats exhibit spatial stream segregation (SSS) by synchronizing preferentially to one of two sequences of noise bursts that alternate between two source locations. Here, we examine the emergence of SSS along the ascending auditory pathway. Extracellular recordings were made in anesthetized rats from the inferior colliculus (IC), the nucleus of the brachium of the IC (BIN), the medial geniculate body (MGB), and the primary auditory cortex (A1). Stimuli consisted of interleaved sequences of broadband noise bursts that alternated between two source locations. At stimulus presentation rates of 5 and 10 bursts per second, at which human listeners report robust SSS, neural SSS is weak in the central nucleus of the IC (ICC), appears in the BIN and in approximately two-thirds of neurons in the ventral MGB (MGBv), and is prominent throughout A1. The enhancement of SSS at the cortical level reflects both increased spatial sensitivity and increased forward suppression. We demonstrate that forward suppression in A1 does not result from synaptic inhibition at the cortical level. Instead, forward suppression might reflect synaptic depression in the thalamocortical projection. Together, our findings indicate that auditory streams are increasingly segregated along the ascending auditory pathway as distinct, mutually synchronized neural populations. SIGNIFICANCE STATEMENT: Listeners are capable of disentangling multiple competing sequences of sounds that originate from distinct sources. This stream segregation is aided by differences in spatial location between the sources. A possible substrate of spatial stream segregation (SSS) has been described in the auditory cortex, but the mechanisms leading to those cortical responses are unknown. Here, we investigated SSS at three levels of the ascending auditory pathway with extracellular unit recordings in anesthetized rats. We found that neural SSS emerges within the ascending auditory pathway as a consequence of sharpening of spatial sensitivity and increasing forward suppression. Our results highlight brainstem mechanisms that culminate in SSS at the level of the auditory cortex.
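A minimal way to quantify the preferential synchronization described in this abstract is a signed preference index over spike counts locked to the two burst sequences. The function, window length, and toy spike train below are illustrative assumptions, not the authors' analysis.

```python
import numpy as np

def stream_preference(spike_times, a_onsets, b_onsets, window=0.05):
    """Index of how strongly a unit synchronizes to the A-location stream versus the
    B-location stream: (nA - nB) / (nA + nB), where nA and nB are spike counts within
    a fixed window after each burst onset. The 50-ms window is an illustrative choice.
    """
    spikes = np.asarray(spike_times, float)

    def count(onsets):
        return int(sum(np.sum((spikes >= t) & (spikes < t + window)) for t in onsets))

    n_a, n_b = count(a_onsets), count(b_onsets)
    return (n_a - n_b) / max(n_a + n_b, 1)

# Interleaved 5-Hz sequences: A bursts every 200 ms, B bursts offset by 100 ms.
a_onsets = np.arange(0.0, 2.0, 0.2)
b_onsets = a_onsets + 0.1
spikes = np.concatenate([a_onsets + 0.015, b_onsets[:3] + 0.02])   # unit mostly follows A
print(stream_preference(spikes, a_onsets, b_onsets))               # positive: prefers the A stream
```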


Subjects
Auditory Cortex/cytology , Auditory Cortex/physiology , Auditory Pathways/physiology , Neurons/physiology , Sound Localization/physiology , Acoustic Stimulation , Action Potentials/drug effects , Action Potentials/physiology , Animals , Discrimination, Psychological , GABA Antagonists/pharmacology , Geniculate Bodies/cytology , Geniculate Bodies/physiology , Inferior Colliculi/cytology , Inferior Colliculi/physiology , Male , Neurons/drug effects , Rats , Rats, Sprague-Dawley , Sensory Thresholds , Statistics, Nonparametric
5.
PLoS One ; 10(7): e0132423, 2015.
Article in English | MEDLINE | ID: mdl-26176553

ABSTRACT

Cochlear implant (CI) listeners have difficulty understanding speech in complex listening environments. This deficit is thought to be largely due to peripheral encoding problems arising from current spread, which results in wide peripheral filters. In normal hearing (NH) listeners, central processing contributes to segregation of speech from competing sounds. We tested the hypothesis that basic central processing abilities are retained in post-lingually deaf CI listeners, but processing is hampered by degraded input from the periphery. In eight CI listeners, we measured auditory nerve compound action potentials to characterize peripheral filters. Then, we measured psychophysical detection thresholds in the presence of multi-electrode maskers placed either inside (peripheral masking) or outside (central masking) the peripheral filter. This was intended to distinguish peripheral from central contributions to signal detection. Introduction of temporal asynchrony between the signal and masker improved signal detection in both peripheral and central masking conditions for all CI listeners. Randomly varying components of the masker created spectral-variance cues, which seemed to benefit only two out of eight CI listeners. Contrastingly, the spectral-variance cues improved signal detection in all five NH listeners who listened to our CI simulation. Together these results indicate that widened peripheral filters significantly hamper central processing of spectral-variance cues but not of temporal cues in post-lingually deaf CI listeners. As indicated by two CI listeners in our study, however, post-lingually deaf CI listeners may retain some central processing abilities similar to NH listeners.


Subjects
Auditory Perception/physiology , Cochlear Implants , Cues , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Demography , Female , Humans , Male , Middle Aged , Noise , Perceptual Masking , Signal Processing, Computer-Assisted , Time Factors
6.
J Neurophysiol ; 113(9): 3098-111, 2015 May 01.
Article in English | MEDLINE | ID: mdl-25744891

ABSTRACT

Locations of sounds are computed in the central auditory pathway based primarily on differences in sound level and timing at the two ears. In rats, the results of that computation appear in the primary auditory cortex (A1) as exclusively contralateral hemifield spatial sensitivity, with strong responses to sounds contralateral to the recording site, sharp cutoffs across the midline, and weak, sound-level-tolerant responses to ipsilateral sounds. We surveyed the auditory pathway in anesthetized rats to identify the brain level(s) at which level-tolerant spatial sensitivity arises. Noise-burst stimuli were varied in horizontal sound location and in sound level. Neurons in the central nucleus of the inferior colliculus (ICc) displayed contralateral tuning at low sound levels, but tuning was degraded at successively higher sound levels. In contrast, neurons in the nucleus of the brachium of the inferior colliculus (BIN) showed sharp, level-tolerant spatial sensitivity. The ventral division of the medial geniculate body (MGBv) contained two discrete neural populations, one showing broad sensitivity like the ICc and one showing sharp sensitivity like A1. Dorsal, medial, and shell regions of the MGB showed fairly sharp spatial sensitivity, likely reflecting inputs from A1 and/or the BIN. The results demonstrate two parallel brainstem pathways for spatial hearing. The tectal pathway, in which sharp, level-tolerant spatial sensitivity arises between ICc and BIN, projects to the superior colliculus and could support reflexive orientation to sounds. The lemniscal pathway, in which such sensitivity arises between ICc and the MGBv, projects to the forebrain to support perception of sound location.


Subjects
Action Potentials/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Geniculate Bodies/physiology , Inferior Colliculi/physiology , Sensory Receptor Cells/physiology , Acoustic Stimulation , Animals , Brain Mapping , Male , ROC Curve , Rats
7.
Article in English | MEDLINE | ID: mdl-24822037

ABSTRACT

Coincidence detection by binaural neurons in the medial superior olive underlies sensitivity to interaural time difference (ITD) and interaural correlation (ρ). It is unclear whether this process is akin to a counting of individual coinciding spikes, or rather to a correlation of membrane potential waveforms resulting from converging inputs from each side. We analyzed spike trains of axons of the cat trapezoid body (TB) and auditory nerve (AN) in a binaural coincidence scheme. ITD was studied by delaying "ipsi-" vs. "contralateral" inputs; ρ was studied by using responses to different noises. We varied the number of inputs, the monaural and binaural thresholds, and the coincidence-window duration. We examined the physiological plausibility of output "spike trains" by comparing their rate and tuning to ITD and ρ to those of binaural cells. We found that multiple inputs are required to obtain a plausible output spike rate. In contrast to previous suggestions, the monaural threshold almost invariably needed to exceed the binaural threshold. Elevation of the binaural threshold to values larger than 2 spikes caused a drastic decrease in rate for a short coincidence window. Longer coincidence windows allowed a lower number of inputs and higher binaural thresholds, but decreased the depth of modulation. Compared to AN fibers, TB fibers allowed higher output spike rates for a low number of inputs, but also generated more monaural coincidences. We conclude that, within the parameter space explored, the temporal patterns of monaural fibers require convergence of multiple inputs to achieve physiological binaural spike rates; that monaural coincidences have to be suppressed relative to binaural ones; and that the neuron has to be sensitive to single binaural coincidences of spikes, for a number of excitatory inputs per side of 10 or less. These findings suggest that the fundamental operation in the mammalian binaural circuit is coincidence counting of single binaural input spikes.
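This abstract describes a coincidence-counting scheme with separate monaural and binaural thresholds and a variable coincidence window. The sketch below is a schematic version of such a scheme applied to simulated input spike trains; the thresholds, window, and Poisson-like inputs are placeholder choices, not the authors' parameters.

```python
import numpy as np

def coincidence_output(ipsi_trains, contra_trains, window=0.0005,
                       binaural_thresh=2, monaural_thresh=4, duration=1.0):
    """Schematic binaural coincidence counter (not the authors' exact model).

    Spike times from several ipsi- and contralateral fibers are binned into
    non-overlapping coincidence windows; the model unit emits one output spike in
    every window where the summed binaural count reaches `binaural_thresh`, or
    where one side alone reaches `monaural_thresh` (a monaural coincidence).
    Returns the output spike rate in spikes/s.
    """
    edges = np.arange(0.0, duration + window, window)
    ipsi = np.histogram(np.concatenate(ipsi_trains), edges)[0]
    contra = np.histogram(np.concatenate(contra_trains), edges)[0]

    binaural = (ipsi + contra) >= binaural_thresh
    monaural = (ipsi >= monaural_thresh) | (contra >= monaural_thresh)
    return np.sum(binaural | monaural) / duration

# Ten independent, Poisson-like fibers per side, ~100 spikes/s each
rng = np.random.default_rng(0)
ipsi = [np.sort(rng.uniform(0, 1, 100)) for _ in range(10)]
contra = [np.sort(rng.uniform(0, 1, 100)) for _ in range(10)]
print(coincidence_output(ipsi, contra))
```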


Subjects
Action Potentials/physiology , Auditory Pathways/physiology , Cochlear Nerve/physiology , Neurons/physiology , Olivary Nucleus/physiology , Acoustic Stimulation , Animals , Cats , Sound Localization/physiology
8.
J Neurosci ; 33(44): 17506-18, 2013 Oct 30.
Article in English | MEDLINE | ID: mdl-24174683

ABSTRACT

Interaural time differences (ITDs) are a major cue for localizing low-frequency (<1.5 kHz) sounds. Sensitivity to this cue first occurs in the medial superior olive (MSO), which is thought to perform a coincidence analysis on its monaural inputs. Extracellular single-neuron recordings in MSO are difficult to obtain because (1) MSO action potentials are small and (2) a large field potential locked to the stimulus waveform hampers spike isolation. Consequently, only a limited number of studies report MSO data, and even in these studies data are limited in the variety of stimuli used, in the number of neurons studied, and in spike isolation. More high-quality data are needed to better understand the mechanisms underlying neuronal ITD-sensitivity. We circumvented these difficulties by recording from the axons of MSO neurons in the lateral lemniscus (LL) of the chinchilla, a species with pronounced low-frequency sensitivity. Employing sharp glass electrodes we successfully recorded from neurons with ITD sensitivity: the location, response properties, latency, and spike shape were consistent with an MSO axonal origin. The main difficulty encountered was mechanical stability. We obtained responses to binaural beats and dichotic noise bursts to characterize the best delay versus characteristic frequency distribution, and compared the data to recordings we obtained in the inferior colliculus (IC). In contrast to most reports in other rodents, many best delays were close to zero ITD, both in MSO and IC, with a majority of the neurons recorded in the LL firing maximally within the presumed ethological ITD range.


Subjects
Auditory Pathways/physiology , Axons/physiology , Evoked Potentials, Auditory/physiology , Neurons/physiology , Olivary Nucleus/physiology , Acoustic Stimulation/methods , Animals , Auditory Pathways/cytology , Brain Stem/cytology , Brain Stem/physiology , Chinchilla , Cues , Female , Male , Olivary Nucleus/cytology
9.
J Neurophysiol ; 110(9): 2140-51, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23945782

ABSTRACT

The rat is a widely used species for study of the auditory system. Psychophysical results from rats have shown an inability to discriminate sound source locations within a lateral hemifield, despite showing fairly sharp near-midline acuity. We tested the hypothesis that those characteristics of the rat's sound localization psychophysics are evident in the characteristics of spatial sensitivity of its cortical neurons. In addition, we sought quantitative descriptions of in vivo spatial sensitivity of cortical neurons that would support development of an in vitro experimental model to study cortical mechanisms of spatial hearing. We assessed the spatial sensitivity of single- and multiple-neuron responses in the primary auditory cortex (A1) of urethane-anesthetized rats. Free-field noise bursts were varied throughout 360° of azimuth in the horizontal plane at sound levels from 10 to 40 dB above neural thresholds. All neurons encountered in A1 displayed contralateral-hemifield spatial tuning in that they responded strongly to contralateral sound source locations, their responses cut off sharply for locations near the frontal midline, and they showed weak or no responses to ipsilateral sources. Spatial tuning was quite stable across a 30-dB range of sound levels. Consistent with rat psychophysical results, a linear discriminator analysis of spike counts exhibited high spatial acuity for near-midline sounds and poor discrimination for off-midline locations. Hemifield spatial tuning is the most common pattern across all mammals tested previously. The homogeneous population of neurons in rat area A1 will make an excellent system for study of the mechanisms underlying that pattern.
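The linear-discriminator analysis mentioned in this abstract can be approximated by an ROC-based ideal observer operating on single-trial spike counts. The sketch below uses that simpler stand-in with simulated Poisson counts; the specific rates are invented for illustration.

```python
import numpy as np

def discrimination_pc(counts_a, counts_b):
    """Percent correct of an ideal observer separating two source azimuths from
    single-trial spike counts (area under the ROC, computed by pairwise comparison).
    A simple stand-in for the linear-discriminator analysis described in the abstract.
    """
    a = np.asarray(counts_a, float)[:, None]
    b = np.asarray(counts_b, float)[None, :]
    # Ties count as half-correct, as in a standard ROC-area calculation.
    return np.mean((a > b) + 0.5 * (a == b))

rng = np.random.default_rng(1)
near_midline = discrimination_pc(rng.poisson(20, 50), rng.poisson(12, 50))
off_midline = discrimination_pc(rng.poisson(25, 50), rng.poisson(24, 50))
print(near_midline, off_midline)   # high acuity near the midline, near chance off-midline
```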


Subjects
Action Potentials , Auditory Cortex/physiology , Sound Localization , Animals , Auditory Cortex/cytology , Male , Neurons/physiology , Rats , Rats, Sprague-Dawley
10.
J Neurosci ; 33(27): 10986-1001, 2013 Jul 03.
Article in English | MEDLINE | ID: mdl-23825404

ABSTRACT

In a complex auditory scene, a "cocktail party" for example, listeners can disentangle multiple competing sequences of sounds. A recent psychophysical study in our laboratory demonstrated a robust spatial component of stream segregation showing ∼8° acuity. Here, we recorded single- and multiple-neuron responses from the primary auditory cortex of anesthetized cats while presenting interleaved sound sequences that human listeners would experience as segregated streams. Sequences of broadband sounds alternated between pairs of locations. Neurons synchronized preferentially to sounds from one or the other location, thereby segregating competing sound sequences. Neurons favoring one source location or the other tended to aggregate within the cortex, suggestive of modular organization. The spatial acuity of stream segregation was as narrow as ∼10°, markedly sharper than the broad spatial tuning for single sources that is well known in the literature. Spatial sensitivity was sharpest among neurons having high characteristic frequencies. Neural stream segregation was predicted well by a parameter-free model that incorporated single-source spatial sensitivity and a measured forward-suppression term. We found that the forward suppression was not due to post-discharge adaptation in the cortex and, therefore, must have arisen in the subcortical pathway or at the level of thalamocortical synapses. A linear-classifier analysis of single-neuron responses to rhythmic stimuli like those used in our psychophysical study yielded thresholds overlapping those of human listeners. Overall, the results indicate that the ascending auditory system does the work of segregating auditory streams, bringing them to discrete modules in the cortex for selection by top-down processes.
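The prediction from single-source spatial tuning plus forward suppression can be sketched as follows; the multiplicative suppression rule and the example rates are our simplification, and the paper's exact formulation may differ.

```python
import numpy as np

def predict_interleaved_responses(single_source_rate, sequence_locs, suppression=0.6):
    """Schematic forward-suppression prediction for an interleaved A/B burst sequence.

    `single_source_rate` maps location -> response to that location presented alone;
    each burst's predicted response is the single-source response scaled down in
    proportion to the response evoked by the immediately preceding burst.
    """
    peak = max(single_source_rate.values())
    pred, prev = [], 0.0
    for loc in sequence_locs:
        r = single_source_rate[loc] * (1.0 - suppression * prev / peak)
        pred.append(r)
        prev = r
    return np.array(pred)

# A unit tuned to location A: competing bursts from B barely suppress it,
# so the predicted responses keep following the A stream.
rates = {"A": 30.0, "B": 5.0}
print(predict_interleaved_responses(rates, ["A", "B"] * 4))
```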


Subjects
Acoustic Stimulation/methods , Action Potentials/physiology , Auditory Cortex/physiology , Auditory Perception/physiology , Neurons/physiology , Sound Localization/physiology , Animals , Auditory Pathways/physiology , Cats , Humans , Male
11.
PLoS One ; 8(3): e59815, 2013.
Article in English | MEDLINE | ID: mdl-23527271

ABSTRACT

The auditory system creates a neuronal representation of the acoustic world based on spectral and temporal cues present at the listener's ears, including cues that potentially signal the locations of sounds. Discrimination of concurrent sounds from multiple sources is especially challenging. The current study is part of an effort to better understand the neuronal mechanisms governing this process, which has been termed "auditory scene analysis". In particular, we are interested in spatial release from masking by which spatial cues can segregate signals from other competing sounds, thereby overcoming the tendency of overlapping spectra and/or common temporal envelopes to fuse signals with maskers. We studied detection of pulsed tones in free-field conditions in the presence of concurrent multi-tone non-speech maskers. In "energetic" masking conditions, in which the frequencies of maskers fell within the ± 1/3-octave band containing the signal, spatial release from masking at low frequencies (~600 Hz) was found to be about 10 dB. In contrast, negligible spatial release from energetic masking was seen at high frequencies (~4000 Hz). We observed robust spatial release from masking in broadband "informational" masking conditions, in which listeners could confuse signal with masker even though there was no spectral overlap. Substantial spatial release was observed in conditions in which the onsets of the signal and all masker components were synchronized, and spatial release was even greater under asynchronous conditions. Spatial cues limited to high frequencies (>1500 Hz), which could have included interaural level differences and the better-ear effect, produced only limited improvement in signal detection. Substantially greater improvement was seen for low-frequency sounds, for which interaural time differences are the dominant spatial cue.
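Spatial release from masking is simply the drop in masked detection threshold when signal and masker are separated in space. A worked example with invented threshold values, in the spirit of the roughly 10-dB low-frequency release reported here:

```python
def spatial_release(colocated_threshold_db, separated_threshold_db):
    """Spatial release from masking: the reduction in masked detection threshold (dB)
    when signal and masker are moved apart, relative to the colocated condition."""
    return colocated_threshold_db - separated_threshold_db

# Illustrative values only (not the measured data):
# ~10 dB of release for a 600-Hz signal, negligible release at 4 kHz.
print(spatial_release(55.0, 45.0))   # 10.0 dB
print(spatial_release(55.0, 54.0))   # 1.0 dB
```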


Subjects
Auditory Perception/physiology , Cues , Discrimination, Psychological , Perceptual Masking/physiology , Sound Localization/physiology , Acoustic Stimulation , Adult , Female , Humans , Linear Models , Male , Psychoacoustics , Signal Detection, Psychological/physiology , Statistics, Nonparametric
12.
Biol Cybern ; 103(6): 415-32, 2010 Dec.
Article in English | MEDLINE | ID: mdl-21082199

ABSTRACT

The double magnetic induction (DMI) method has successfully been used to record head-unrestrained gaze shifts in human subjects (Bremen et al., J Neurosci Methods 160:75-84, 2007a, J Neurophysiol, 98:3759-3769, 2007b). This method employs a small golden ring placed on the eye that, when positioned within oscillating magnetic fields, induces orientation-dependent voltages in a pickup coil in front of the eye. Here we develop and test a streamlined calibration routine for use with experimental animals, in particular, with monkeys. The calibration routine requires the animal solely to accurately follow visual targets presented at random locations in the visual field. Animals can readily learn this task. In addition, we use the fact that the pickup coil can be fixed rigidly and reproducibly on implants on the animal's skull. Therefore, accumulation of calibration data leads to increasing accuracy. As a first step, we simulated gaze shifts and the resulting DMI signals. Our simulations showed that the complex DMI signals can be effectively calibrated with the use of random target sequences, which elicit substantial decoupling of eye- and head orientations in a natural way. Subsequently, we tested our paradigm on three macaque monkeys. Our results show that the data for a successful calibration can be collected in a single recording session, in which the monkey makes about 1,500-2,000 goal-directed saccades. We obtained a resolution of 30 arc minutes (measurement range [-60,+60]°). This resolution compares to the fixation resolution of the monkey's oculomotor system, and to the standard scleral search-coil method.


Subjects
Eye Movements , Magnetics , Animals , Calibration , Models, Theoretical
13.
Eur J Neurosci ; 31(10): 1763-71, 2010 May.
Article in English | MEDLINE | ID: mdl-20584180

ABSTRACT

Orienting responses to audiovisual events in the environment can benefit markedly from the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials repeatedly favoured aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. The results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.
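A generic way to formalize the prior-probability-estimation idea is a Beta-Bernoulli update of the expected alignment probability across trials. The sketch below is such a generic estimator, not necessarily the specific model the authors discuss.

```python
def alignment_prior(history, a0=1.0, b0=1.0):
    """Running estimate of the probability that the next audiovisual trial is
    spatially aligned, from a Beta-Bernoulli update over the trial history
    (1 = aligned, 0 = disparate). a0 and b0 are the initial pseudo-counts.
    """
    a, b = a0, b0
    estimates = []
    for aligned in history:
        a += aligned
        b += 1 - aligned
        estimates.append(a / (a + b))
    return estimates

# A block of aligned-only trials drives the expected alignment toward 1,
# which the paper links to faster audiovisual orienting responses.
print(alignment_prior([1, 1, 1, 1, 1])[-1])      # ~0.86
print(alignment_prior([1, 0, 1, 0, 1, 0])[-1])   # 0.5
```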


Subjects
Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Attention/physiology , Calibration , Cues , Electroencephalography , Female , Humans , Male , Photic Stimulation , Reaction Time/physiology , Space Perception/physiology , Young Adult
14.
Article in English | MEDLINE | ID: mdl-20140676

ABSTRACT

We studied the influence of frequency on sound localization in free-flying barn owls by quantifying aspects of their target-approaching behavior to a distant sound source during ongoing auditory stimulation. In the baseline condition with a stimulus covering most of the owls' hearing range (1-10 kHz), all owls landed within a radius of 20 cm from the loudspeaker in more than 80% of the cases, and localization along the azimuth was more accurate than localization in elevation. When the stimulus contained only high frequencies (>5 kHz), no changes in striking behavior were observed. But when only frequencies from 1 to 5 kHz were presented, localization accuracy and precision decreased. In a second step, we tested whether a further border exists at 2.5 kHz, as suggested by optimality models. When we compared striking behavior for a stimulus having energy from 2.5 to 5 kHz with a stimulus having energy between 1 and 2.5 kHz, no consistent differences in striking behavior were observed. It was further found that pre-takeoff latency was longer for the latter stimulus than for baseline and that center frequency was a better predictor of landing precision than stimulus bandwidth. These data fit well with what is known from head-turning studies and from neurophysiology.


Subjects
Conditioning, Operant/physiology , Flight, Animal/physiology , Predatory Behavior/physiology , Sound Localization/physiology , Strigiformes/physiology , Acoustic Stimulation/methods , Acoustics , Animals , Behavior, Animal/physiology , Female , Male , Orientation/physiology , Reaction Time/physiology , Reinforcement, Psychology
15.
J Neurosci ; 30(1): 194-204, 2010 Jan 06.
Article in English | MEDLINE | ID: mdl-20053901

ABSTRACT

To program a goal-directed orienting response toward a sound source embedded in an acoustic scene, the audiomotor system should detect and select the target against a background. Here, we focus on whether the system can segregate synchronous sounds in the midsagittal plane (elevation), a task requiring the auditory system to dissociate the pinna-induced spectral localization cues. Human listeners made rapid head-orienting responses toward either a single sound source (broadband buzzer or Gaussian noise) or toward two simultaneously presented sounds (buzzer and noise) at a wide variety of locations in the midsagittal plane. In the latter case, listeners had to orient to the buzzer (target) and ignore the noise (nontarget). In the single-sound condition, localization was accurate. However, in the double-sound condition, response endpoints depended on relative sound level and spatial disparity. The loudest sound dominated the responses, regardless of whether it was the target or the nontarget. When the sounds had about equal intensities and their spatial disparity was sufficiently small, endpoint distributions were well described by weighted averaging. However, when spatial disparities exceeded approximately 45 degrees, response endpoint distributions became bimodal. Similar response behavior has been reported for visuomotor experiments, for which averaging and bimodal endpoint distributions are thought to arise from neural interactions within retinotopically organized visuomotor maps. We show, however, that the auditory-evoked responses can be well explained by the idiosyncratic acoustics of the pinnae. Hence basic principles of target representation and selection for audition and vision appear to differ profoundly.


Subjects
Acoustic Stimulation/methods , Cues , Ear Auricle/physiology , Orientation/physiology , Reaction Time/physiology , Sound Localization/physiology , Adult , Auditory Perception/physiology , Female , Humans , Male
16.
J Neurophysiol ; 98(6): 3759-69, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17898139

ABSTRACT

This study compares a newly developed gaze (eye-in-space) measurement technique, based on double magnetic induction (DMI) with a custom-made gold-plated copper ring on the eye, with the classical scleral search coil (SSC) technique for recording two-dimensional (2D) head-unrestrained gaze shifts. We tested both systems simultaneously during head-free saccades toward light-emitting diodes (LEDs) within the entire oculomotor range (+/-35 deg). Because the DMI method has no lead wires on the eye, recordings are less prone to failure (no coil breakage) and the subject's eye is less irritated, which allows longer and more comfortable measurement sessions. Correlations between DMI and SSC signals for horizontal and vertical eye position, velocity, and acceleration were close to 1.0. The difference between the SSC signal and the DMI signal remained within a few degrees. In our current setup, the resolution was about 0.3 deg for the DMI method versus 0.2 deg for the SSC technique. The DMI method is an especially good alternative for gaze-control studies in patients and laboratory animals, where breakage of the SSC lead wires is particularly cumbersome.


Subjects
Eye Movements/physiology , Fixation, Ocular/physiology , Neurophysiology/instrumentation , Adult , Calibration , Data Interpretation, Statistical , Electromagnetic Phenomena , Electronics , Head Movements/physiology , Humans , Male , Saccades/physiology
17.
J Neurosci ; 27(15): 4191-200, 2007 Apr 11.
Article in English | MEDLINE | ID: mdl-17428997

ABSTRACT

Interaural time differences are an important cue for azimuthal sound localization. It is still unclear whether the same neuronal mechanisms underlie the representation in the brain of interaural time difference in different vertebrates and whether these mechanisms are driven by common constraints, such as optimal coding. Current sound localization models may be discriminated by studying the spectral distribution of response peaks in tuning curves that measure the sensitivity to interaural time difference. The sound localization system of the barn owl has been studied intensively, but data that would allow discrimination between currently discussed models are missing from this animal. We have therefore obtained extracellular recordings from the time-sensitive subnuclei of the barn owl's inferior colliculus. Response peaks were broadly scattered over the physiological range of interaural time differences. A change in the representation of the interaural phase differences with frequency was not observed. In some neurons, response peaks fell outside the physiological range of interaural time differences. For a considerable number of neurons, the peak closest to zero interaural time difference was not the behaviorally relevant peak. The data are in best accordance with models suggesting that a place code underlies the representation of interaural time difference. The data from the high-frequency range, but not from the low-frequency range, are consistent with predictions of optimal coding. We speculate that the deviation of the representation of interaural time difference from optimal-coding models in the low-frequency range is attributable to the diminished importance of low frequencies for catching prey in this species.
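The analysis in this abstract hinges on locating response peaks (best delays) in ITD tuning curves and asking whether they fall inside the physiological ITD range. A minimal sketch, with a toy periodic tuning curve and a placeholder physiological range:

```python
import numpy as np

def itd_peaks(itd_us, rate, physio_range_us=250.0):
    """Locate response peaks of an ITD tuning curve and flag whether the best
    (highest-rate) peak lies inside an assumed physiological ITD range. The
    250-us range is a placeholder, not a value taken from the paper.
    """
    itd_us = np.asarray(itd_us, float)
    rate = np.asarray(rate, float)
    interior = (rate[1:-1] > rate[:-2]) & (rate[1:-1] > rate[2:])
    peak_idx = np.flatnonzero(interior) + 1
    peak_itds = itd_us[peak_idx]
    best_itd = peak_itds[np.argmax(rate[peak_idx])]
    return peak_itds, best_itd, abs(best_itd) <= physio_range_us

# Toy periodic tuning of a ~2.5-kHz neuron (period 400 us), best delay near +80 us,
# with a shallow envelope so the central peak is the largest.
itd = np.arange(-600.0, 601.0, 10.0)
rate = (20.0 + 15.0 * np.cos(2.0 * np.pi * (itd - 80.0) / 400.0)) * np.exp(-(itd / 800.0) ** 2)
print(itd_peaks(itd, rate))
```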


Subjects
Acoustic Stimulation/methods , Inferior Colliculi/physiology , Pitch Discrimination/physiology , Strigiformes/physiology , Action Potentials/physiology , Animals , Female , Male , Neurons/physiology , Time Factors
18.
J Neurosci Methods ; 160(1): 75-84, 2007 Feb 15.
Article in English | MEDLINE | ID: mdl-16997380

ABSTRACT

So far, the double-magnetic induction (DMI) method has been successfully applied to record eye movements from head-restrained humans, monkeys and cats. An advantage of the DMI method, compared to the more widely used scleral search coil technique, is the absence of vulnerable lead wires on the eye. A disadvantage, however, is that the relationship between the eye-in-head orientation and the secondary induction signal is highly non-linear and non-monotonic. This limits the effective measuring range to maximum eye orientations of about +/-30 degrees. Here, we analyze and test two extensions required to record the full eye-head orienting range, well exceeding 90 degrees from straight-ahead in all directions: (1) the use of mutually perpendicular magnetic fields, which allows disambiguation of the non-monotonic ring signal, and (2) the application of an artificial neural network for offline calibration of the signals. The theoretical predictions are tested for horizontal rotations with a gimbal system. Our results show that the method is a promising alternative to the search coil technique.
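To see why two mutually perpendicular fields resolve the non-monotonic ring signal, and how a calibration can be learned purely from fixations of known targets, consider the toy simulation below. The signal shapes are invented, and a nearest-neighbour lookup stands in for the paper's neural-network calibration.

```python
import numpy as np

rng = np.random.default_rng(2)

def dmi_signals(theta_deg, noise=0.01):
    """Toy ring signals for a horizontal eye rotation in two mutually perpendicular
    fields. Each signal alone is non-monotonic over the oculomotor range; the real
    signal shapes depend on ring and pickup-coil geometry, so these are illustrative.
    """
    th = np.deg2rad(np.asarray(theta_deg, float))
    s1 = np.sin(2 * th) + noise * rng.standard_normal(th.shape)
    s2 = np.cos(2 * th) + noise * rng.standard_normal(th.shape)
    return np.column_stack([s1, s2])

# "Calibration session": ~1500 fixations of visual targets at known random locations.
train_theta = rng.uniform(-60, 60, 1500)
train_sig = dmi_signals(train_theta)

def calibrate(signals, train_sig=train_sig, train_theta=train_theta):
    """Nearest-neighbour lookup in the calibration set, a simple stand-in for the
    offline neural-network calibration described in the abstract."""
    d = np.linalg.norm(signals[:, None, :] - train_sig[None, :, :], axis=2)
    return train_theta[np.argmin(d, axis=1)]

test_theta = np.linspace(-60, 60, 13)
pred = calibrate(dmi_signals(test_theta))
print(np.round(pred - test_theta, 2))    # residual error in degrees
```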


Subjects
Eye Movements/physiology , Head Movements/physiology , Magnetics , Neural Networks, Computer , Psychomotor Performance , Animals , Humans , Orientation , Reproducibility of Results
19.
Article in English | MEDLINE | ID: mdl-17021830

ABSTRACT

Standard electrophysiology and virtual auditory stimuli were used to investigate the influence of interaural time difference on the azimuthal tuning of neurons in the core and the lateral shell of the central nucleus of the inferior colliculus of the barn owl. The responses of the neurons to virtual azimuthal stimuli depended in a periodic way on azimuth. Fixation of the interaural time difference, while leaving all other spatial cues unchanged, caused a loss of periodicity and a broadening of azimuthal tuning. This effect was studied in more detail in neurons of the core. The azimuthal range tested and the frequency selectivity of the neurons were additional parameters influencing the changes induced by fixating the interaural time difference. The addition of an interaural time difference to the virtual stimuli resulted in a shift of the tuning curves that correlated with the interaural time difference added. In this condition, tuning strength did not change. These results suggest that interaural time difference is an important determinant of azimuthal tuning in all neurons of the core and lateral shell of the central nucleus of the inferior colliculus, and is the only determinant in many of the neurons from the core.


Subjects
Functional Laterality/physiology , Inferior Colliculi/physiology , Reaction Time/physiology , Sound Localization/physiology , Space Perception/physiology , Strigiformes/physiology , Acoustic Stimulation , Animals , Evoked Potentials, Auditory/physiology , Orientation/physiology , Time Factors