Results 1 - 20 of 79
1.
J Neurophysiol ; 131(1): 38-63, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37965933

ABSTRACT

Human speech and vocalizations in animals are rich in joint spectrotemporal (S-T) modulations, wherein acoustic changes in both frequency and time are functionally related. In principle, the primate auditory system could process these complex dynamic sounds based on either an inseparable representation of S-T features or, alternatively, a separable representation. The separability hypothesis implies independent processing of spectral and temporal modulations. We collected comparative data on S-T hearing sensitivity in humans and macaque monkeys for a wide range of broadband dynamic spectrotemporal ripple stimuli, employing a yes-no signal-detection task. Ripples were systematically varied in density (spectral modulation frequency), velocity (temporal modulation frequency), and modulation depth to cover each listener's full S-T modulation sensitivity, derived from a total of 87 psychometric ripple-detection curves. Audiograms were measured to control for normal hearing. We determined hearing thresholds, reaction-time distributions, and S-T modulation transfer functions (MTFs), both at the ripple-detection thresholds and at suprathreshold modulation depths. Our psychophysically derived MTFs are consistent with the hypothesis that both monkeys and humans employ analogous perceptual strategies: S-T acoustic information is primarily processed separably. Singular value decomposition (SVD), however, revealed a small, but consistent, inseparable spectral-temporal interaction. Finally, SVD analysis of the known visual spatiotemporal contrast sensitivity function (CSF) highlights that human vision is space-time inseparable to a much larger extent than is the case for S-T sensitivity in hearing.
Thus, the specificity with which the primate brain encodes natural sounds appears to be less strict than is required to adequately deal with natural images. NEW & NOTEWORTHY: We provide comparative data on primate audition of naturalistic sounds, comprising hearing thresholds, reaction-time distributions, and spectral-temporal modulation transfer functions. Our psychophysical experiments demonstrate that auditory information is primarily processed in a spectrally and temporally independent manner by both monkeys and humans. Singular value decomposition of the known visual spatiotemporal contrast sensitivity, compared with our auditory spectral-temporal sensitivity, revealed a striking contrast in how the brain encodes natural sounds as opposed to natural images: vision appears to be space-time inseparable.
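The separability analysis described above can be sketched numerically: take the SVD of an S-T sensitivity matrix and use the fraction of variance captured by the first singular value as a separability index. The sensitivity profiles and the interaction term below are invented for illustration; they are not the measured data:

```python
import numpy as np

def separability_index(mtf):
    """Fraction of variance captured by the best rank-1 (separable)
    approximation of an S-T matrix: sigma_1^2 / sum_i sigma_i^2."""
    s = np.linalg.svd(mtf, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

# Toy S-T sensitivity matrix: an outer product of a spectral and a
# temporal profile (perfectly separable), plus a small inseparable
# spectral-temporal interaction of the kind the SVD analysis revealed.
density = np.linspace(0.0, 4.0, 40)    # spectral modulation (cyc/oct)
velocity = np.linspace(0.0, 64.0, 64)  # temporal modulation (Hz)
spectral = np.exp(-density / 1.5)
temporal = np.exp(-velocity / 20.0)
mtf_separable = np.outer(spectral, temporal)
mtf_mixed = mtf_separable + 0.05 * np.outer(np.sin(density),
                                            np.cos(velocity / 10.0))

print(round(separability_index(mtf_separable), 3))  # rank-1 matrix: 1.0
print(separability_index(mtf_mixed) < 1.0)          # interaction lowers it
```

An index of 1 means fully separable processing; the small interaction term pushes it just below 1, mirroring the small but consistent inseparable component reported above.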


Subjects
Speech Perception, Time Perception, Animals, Humans, Haplorhini, Auditory Perception, Hearing, Acoustic Stimulation/methods
2.
PLoS Comput Biol ; 17(5): e1008975, 2021 05.
Article in English | MEDLINE | ID: mdl-34029310

ABSTRACT

An interesting challenge for the human saccadic eye-movement system is the degrees-of-freedom problem: the six extra-ocular muscles provide three rotational degrees of freedom, while only two are needed to point gaze in any direction. Measurements show that 3D eye orientations during head-fixed saccades in far-viewing conditions lie in Listing's plane (LP), in which the eye's cyclotorsion is zero (Listing's law, LL). Moreover, while saccades are executed as single-axis rotations around a stable eye-angular velocity axis, they follow straight trajectories in LP. Another distinctive saccade property is their nonlinear main-sequence dynamics: the affine relationship between saccade size and movement duration, and the saturation of peak velocity with amplitude. To explain all these properties, we developed a computational model, based on a simplified and upscaled robotic prototype of an eye with 3 degrees of freedom, driven by three independent motor commands, coupled to three antagonistic elastic muscle pairs. As the robotic prototype was not intended to faithfully mimic the detailed biomechanics of the human eye, we did not impose specific prior mechanical constraints on the ocular plant that could, by themselves, generate Listing's law and the main sequence. Instead, our goal was to study how these properties can emerge from the application of optimal control principles to simplified eye models. We performed a numerical linearization of the nonlinear system dynamics around the origin using system identification techniques, and developed open-loop controllers for 3D saccade generation. Applying optimal control to the simulated model reproduced both Listing's law and the main sequence.
We verified the contribution of different terms in the cost optimization functional to realistic 3D saccade behavior, and identified four essential terms: total energy expenditure by the motors, movement duration, gaze accuracy, and the total static force exerted by the muscles during fixation. Our findings suggest that Listing's law, as well as the saccade dynamics and their trajectories, may all emerge from the same common mechanism that aims to optimize speed-accuracy trade-off for saccades, while minimizing the total muscle force during eccentric fixation.
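The four essential cost terms identified above can be collected into a single functional. The notation below is illustrative (the paper's exact symbols and weights may differ): motor commands $\mathbf{u}(t)$, movement duration $T$, final gaze $\mathbf{g}(T)$, gaze goal $\mathbf{g}^{*}$, static fixation force $\mathbf{f}_{\text{static}}$, and weights $\alpha,\beta,\gamma,\delta$:

```latex
J = \underbrace{\alpha \int_0^T \lVert \mathbf{u}(t) \rVert^2 \, dt}_{\text{motor energy}}
  \;+\; \underbrace{\beta \, T}_{\text{duration}}
  \;+\; \underbrace{\gamma \, \lVert \mathbf{g}(T) - \mathbf{g}^{*} \rVert^2}_{\text{gaze accuracy}}
  \;+\; \underbrace{\delta \, \lVert \mathbf{f}_{\text{static}} \rVert^2}_{\text{fixation force}}
```

The duration and accuracy terms together implement the speed-accuracy trade-off; the static-force term penalizes muscle effort during eccentric fixation.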


Subjects
Biological Models, Saccades, Biomechanical Phenomena, Humans, Orientation, Ocular Vision
3.
J Neurophysiol ; 125(2): 556-567, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33378250

ABSTRACT

To program a goal-directed response in the presence of acoustic reflections, the audio-motor system should suppress the detection of time-delayed sources. We examined the effects of spatial separation and interstimulus delay on the ability of human listeners to localize a pair of broadband sounds in the horizontal plane. Participants indicated how many sounds were heard and where these were perceived by making one or two head-orienting localization responses. Results suggest that perceptual fusion of the two sounds depends on delay and spatial separation. Leading and lagging stimuli in close spatial proximity required longer stimulus delays to be perceptually separated than those further apart. Whenever participants heard one sound, their localization responses for synchronous sounds were oriented to a weighted average of both source locations. For short delays, responses were directed toward the leading stimulus location. Increasing spatial separation enhanced this effect. For longer delays, responses were again directed toward a weighted average. When participants perceived two sounds, the first and the second response were directed to either of the leading and lagging source locations. The perceived locations were often interchanged in their temporal order (∼40% of trials). We show that perceiving two distinct sounds requires sufficient spatiotemporal separation, after which localization can be performed with high accuracy. We propose that the percept of temporal order of two concurrent sounds results from a different process than localization, and discuss how dynamic lateral excitatory-inhibitory interactions within a spatial sensorimotor map could explain the findings. NEW & NOTEWORTHY: Sound localization requires spectral and temporal processing of implicit acoustic cues, and is seriously challenged when multiple sources coincide closely in space and time.
We systematically varied spatial-temporal disparities for two sounds and instructed listeners to generate goal-directed head movements. We found that even when the auditory system has accurate representations of both sources, it still has trouble deciding whether the scene contained one or two sounds, and in which order they appeared.


Subjects
Sound Localization, Spatial Behavior, Adult, Brain/physiology, Cues (Psychology), Female, Head Movements, Humans, Male
4.
PLoS Comput Biol ; 15(4): e1006522, 2019 04.
Article in English | MEDLINE | ID: mdl-30978180

ABSTRACT

The midbrain superior colliculus (SC) generates a rapid saccadic eye movement to a sensory stimulus by recruiting a population of cells in its topographically organized motor map. Supra-threshold electrical microstimulation in the SC reveals that the site of stimulation produces a normometric saccade vector with little effect of the stimulation parameters. Moreover, electrically evoked saccades (E-saccades) have kinematic properties that strongly resemble natural, visual-evoked saccades (V-saccades). These findings support models in which the saccade vector is determined by a center-of-gravity computation of activated neurons, while its trajectory and kinematics arise from downstream feedback circuits in the brainstem. Recent single-unit recordings, however, have indicated that the SC population also specifies instantaneous kinematics. These results support an alternative model, in which the desired saccade trajectory, including its kinematics, follows from instantaneous summation of movement effects of all SC spike trains. But how can this model be reconciled with the microstimulation results? Although it is thought that microstimulation activates a large population of SC neurons, the mechanism through which this large-scale recruitment arises is unknown. We developed a spiking neural network model of the SC, in which microstimulation directly activates a relatively small set of neurons around the electrode tip, which subsequently sets up a large population response through lateral synaptic interactions. We show that through this mechanism the population drives an E-saccade with near-normal kinematics that are largely independent of the stimulation parameters. Only at very low stimulus intensities does the network recruit a population with low firing rates, resulting in abnormally slow saccades.
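The proposed recruitment mechanism can be caricatured in a few lines: a small seed of directly activated neurons around the electrode, a center-surround (Mexican-hat) lateral kernel that spreads the activity, and a center-of-gravity readout of the resulting population. All parameters below are invented for illustration and are not the paper's spiking model:

```python
import numpy as np

# 1-D caricature of a collicular motor map with lateral interactions.
n = 200
x = np.arange(n)

def mexican_hat(d, sig_e=6.0, sig_i=18.0):
    """Short-range excitation minus broader, weaker inhibition."""
    return np.exp(-d**2 / (2 * sig_e**2)) - 0.5 * np.exp(-d**2 / (2 * sig_i**2))

W = mexican_hat(x[:, None] - x[None, :])   # lateral connectivity matrix

def population_vector(site, current, steps=30):
    """Seed a narrow blob at the electrode site, let lateral interactions
    build up a saturating population, and read out its center of gravity."""
    a = current * np.exp(-(x - site) ** 2 / (2 * 2.0 ** 2))
    for _ in range(steps):
        a = np.clip(a + 0.1 * (W @ a) - 0.05 * a, 0.0, 5.0)  # bounded rates
    return np.sum(x * a) / np.sum(a)

# The readout depends on the electrode site, not the stimulation strength.
print(round(population_vector(80, 1.0), 1))
print(round(population_vector(80, 0.2), 1))
```

Because the lateral kernel is symmetric and the rates saturate, the center of gravity, and hence the evoked saccade vector, stays anchored to the stimulation site over a wide range of "currents", echoing the parameter-independence described above.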


Subjects
Neurological Models, Neural Networks (Computer), Superior Colliculi/physiology, Action Potentials/physiology, Animals, Computational Biology, Electric Stimulation, Haplorhini, Humans, Nerve Net/physiology, Neurons/physiology, Saccades/physiology
5.
J Neurophysiol ; 119(5): 1795-1808, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29384452

ABSTRACT

In dynamic visual or auditory gaze double-steps, a brief target flash or sound burst is presented in midflight of an ongoing eye-head gaze shift. Behavioral experiments in humans and monkeys have indicated that the subsequent eye and head movements to the target are goal-directed, regardless of stimulus timing, first gaze shift characteristics, and initial conditions. This remarkable behavior requires that the gaze-control system 1) has continuous access to accurate signals about eye-in-head position and ongoing eye-head movements, 2) that it accounts for different internal signal delays, and 3) that it is able to update the retinal ( TE) and head-centric ( TH) target coordinates into appropriate eye-centered and head-centered motor commands on millisecond time scales. As predictive, feedforward remapping of targets cannot account for this behavior, we propose that targets are transformed and stored into a stable reference frame as soon as their sensory information becomes available. We present a computational model, in which recruited cells in the midbrain superior colliculus drive eyes and head to the stored target location through a common dynamic oculocentric gaze-velocity command, which is continuously updated from the stable goal and transformed into appropriate oculocentric and craniocentric motor commands. We describe two equivalent, yet conceptually different, implementations that both account for the complex, but accurate, kinematic behaviors and trajectories of eye-head gaze shifts under a variety of challenging multisensory conditions, such as in dynamic visual-auditory multisteps.
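The common dynamic gaze-velocity command can be sketched as a simple feedback loop: the stored goal is compared with current gaze, and the resulting motor error drives eye and head in parallel. The gains and the eye/head split below are arbitrary illustrative values, not the model's fitted parameters:

```python
# Illustrative 1-D dynamic gaze-feedback loop (not the paper's model):
# a goal stored in a stable reference frame drives a gaze-velocity
# command proportional to the remaining motor error.
dt = 0.001                        # 1 ms time-step
gaze, eye, head = 0.0, 0.0, 0.0   # deg
goal = 30.0                       # deg, stable stored target
k, eye_share = 60.0, 0.7          # hypothetical gain and eye/head split

for _ in range(200):              # 200 ms of simulated time
    err = goal - gaze             # dynamic motor error
    v = k * err                   # common gaze-velocity command
    eye += eye_share * v * dt     # oculocentric component
    head += (1 - eye_share) * v * dt   # craniocentric component
    gaze = eye + head

print(round(gaze, 1))             # -> 30.0
```

Because the error is recomputed at every step from the stored goal, the loop lands on target regardless of where the eye and head start or how they are perturbed mid-flight, which is the essence of the updating behavior described above.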


Subjects
Eye Movements/physiology, Head Movements/physiology, Theoretical Models, Space Perception/physiology, Superior Colliculi/physiology, Visual Perception/physiology, Animals, Humans
6.
Neurocomputing (Amst) ; 302: 55-65, 2018 May 02.
Article in English | MEDLINE | ID: mdl-30245550

ABSTRACT

Graphics processing units (GPUs) can significantly accelerate spiking neural network (SNN) simulations by exploiting parallelism for independent computations. Both the membrane-potential updates at each time-step and the spiking-threshold checks for each neuron can be calculated independently. However, because synaptic transmission requires communication between many different neurons, efficient parallel processing may be hindered, either by data transfers between GPU and CPU at each time-step or, alternatively, by running many parallel computations for neurons that do not elicit any spikes. This, in turn, would lower the effective throughput of the simulations. Traditionally, a central processing unit (CPU, host) administers the execution of parallel processes on the GPU (device), such as memory initialization on the device, data transfer between host and device, and starting and synchronizing parallel processes. The parallel computing platform CUDA 5.0 introduced dynamic parallelism, which allows the initiation of new parallel applications within an ongoing parallel kernel. Here, we apply dynamic parallelism for synaptic updating in SNN simulations on a GPU. Our algorithm eliminates the need to start many parallel applications at each time-step, and the associated lags of data transfer between CPU and GPU memories. We report a significant speed-up of SNN simulations, when compared to former accelerated parallelization strategies for SNNs on a GPU.
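The bottleneck the algorithm addresses can be illustrated in a language-agnostic way (sketched here in Python rather than CUDA): with sparse spiking, updating synapses only for the neurons that actually fired does far less work than a dense per-neuron update, while producing the same result:

```python
import numpy as np

# Sketch of the synaptic-update bottleneck (illustrative, not CUDA code):
# at each time-step only the neurons that spiked need to propagate
# current to their targets, so launching work per *spike* rather than
# per *neuron* avoids many idle computations.
rng = np.random.default_rng(0)
n = 1000
weights = rng.normal(0.0, 0.1, size=(n, n))   # synaptic weight matrix
spiked = rng.random(n) < 0.02                 # sparse spiking (~2% per step)

# Dense update: touches every presynaptic neuron, spiking or not.
dense = weights @ spiked.astype(float)

# Event-driven update: only the columns of neurons that spiked.
idx = np.flatnonzero(spiked)
sparse = weights[:, idx].sum(axis=1)

print(np.allclose(dense, sparse))   # same input currents, far less work
```

Dynamic parallelism lets the GPU itself launch exactly this per-spike work from within the running kernel, instead of the host launching per-neuron kernels and transferring spike lists at every time-step.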

7.
Biol Cybern ; 111(3-4): 249-268, 2017 08.
Article in English | MEDLINE | ID: mdl-28528360

ABSTRACT

Single-unit recordings suggest that the midbrain superior colliculus (SC) acts as an optimal controller for saccadic gaze shifts. The SC is proposed to be the site within the visuomotor system where the nonlinear spatial-to-temporal transformation is carried out: the population encodes the intended saccade vector by its location in the motor map (spatial), and its trajectory and velocity by the distribution of firing rates (temporal). The neurons' burst profiles vary systematically with their anatomical positions and intended saccade vectors, to account for the nonlinear main-sequence kinematics of saccades. Yet, the underlying collicular mechanisms that could result in these firing patterns are inaccessible to current neurobiological techniques. Here, we propose a simple spiking neural network model that reproduces the spike trains of saccade-related cells in the intermediate and deep SC layers during saccades. The model assumes that SC neurons have distinct biophysical properties for spike generation that depend on their anatomical position in combination with a center-surround lateral connectivity. Both factors are needed to account for the observed firing patterns. Our model offers a basis for neuronal algorithms for spatiotemporal transformations and bio-inspired optimal controllers.


Subjects
Action Potentials, Nerve Net/physiology, Neural Pathways, Saccades/physiology, Superior Colliculi/cytology, Superior Colliculi/physiology, Algorithms, Nerve Net/cytology
8.
J Acoust Soc Am ; 142(5): 3094, 2017 11.
Article in English | MEDLINE | ID: mdl-29195479

ABSTRACT

To program a goal-directed response in the presence of multiple sounds, the audiomotor system should separate the sound sources. The authors examined whether the brain can segregate synchronous broadband sounds in the midsagittal plane, using amplitude modulations as an acoustic discrimination cue. To succeed in this task, the brain has to use pinna-induced spectral-shape cues and temporal envelope information. The authors tested spatial segregation performance in the midsagittal plane in two paradigms in which human listeners were required to localize, or distinguish, a target amplitude-modulated broadband sound when a non-modulated broadband distractor was played simultaneously at another location. The level difference between the amplitude-modulated and distractor stimuli was systematically varied, as well as the modulation frequency of the target sound. The authors found that participants were unable to segregate, or localize, the synchronous sounds. Instead, they invariably responded toward a level-weighted average of both sound locations, irrespective of the modulation frequency. An increased variance in the response distributions for double sounds of equal level was also observed, which cannot be accounted for by a segregation model, or by a probabilistic averaging model.
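The level-weighted averaging response pattern reported above can be written down in a few lines. The function and parameter names are illustrative, not from the study:

```python
# Level-weighted averaging model for double-sound localization
# (a sketch of the response pattern described above): the response
# elevation is the average of the two source elevations, weighted by
# their relative sound levels.
def weighted_average_response(elev_target, elev_distractor, delta_level_db):
    """delta_level_db: target level minus distractor level, in dB."""
    w = 10 ** (delta_level_db / 20)          # linear amplitude ratio
    return (w * elev_target + elev_distractor) / (w + 1)

print(weighted_average_response(30.0, -30.0, 0.0))   # equal levels -> 0.0
print(weighted_average_response(30.0, -30.0, 20.0))  # louder target pulls response
```

As the level difference grows, the predicted response slides smoothly from the midpoint toward the louder source, which is exactly what a segregation model would not predict.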

9.
Proc Natl Acad Sci U S A ; 110(38): 15225-30, 2013 Sep 17.
Article in English | MEDLINE | ID: mdl-24003112

ABSTRACT

After hearing a tone, the human auditory system becomes more sensitive to similar tones than to other tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone. Intriguingly, this "octave effect" not only occurs for physically presented tones, but even persists for the missing fundamental in complex tones, and for imagined tones. Our results suggest neural interactions combining octave-related frequencies, likely located in nonprimary cortical regions. We speculate that this connectivity scheme evolved from exposure to natural vibrations containing octave-related spectral peaks, e.g., as produced by vocal cords.


Subjects
Attention/physiology, Auditory Perception/physiology, Hearing/physiology, Biological Models, Psychoacoustics, Acoustic Stimulation, Humans
10.
Eur J Neurosci ; 39(9): 1538-50, 2014 May.
Article in English | MEDLINE | ID: mdl-24649904

ABSTRACT

We characterised task-related top-down signals in monkey auditory cortex cells by comparing single-unit activity during passive sound exposure with neuronal activity during a predictable and unpredictable reaction-time task for a variety of spectral-temporally modulated broadband sounds. Although animals were not trained to attend to particular spectral or temporal sound modulations, their reaction times demonstrated clear acoustic spectral-temporal sensitivity for unpredictable modulation onsets. Interestingly, this sensitivity was absent for predictable trials with fast manual responses, but re-emerged for the slower reactions in these trials. Our analysis of neural activity patterns revealed a task-related dynamic modulation of auditory cortex neurons that was locked to the animal's reaction time, but invariant to the spectral and temporal acoustic modulations. This finding suggests dissociation between acoustic and behavioral signals at the single-unit level. We further demonstrated that single-unit activity during task execution can be described by a multiplicative gain modulation of acoustic-evoked activity and a task-related top-down signal, rather than by linear summation of these signals.
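The distinction between multiplicative gain modulation and linear summation can be made concrete with a toy firing-rate trace. All signals below are invented for illustration:

```python
import numpy as np

# Multiplicative gain model (illustrative): task-related activity as a
# gain applied to the sensory-evoked response, r(t) = g(t) * a(t),
# contrasted with additive summation, r(t) = a(t) + b(t).
t = np.linspace(0.0, 1.0, 200)
acoustic = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)       # stimulus-driven rate
gain = 1.0 + 2.0 * np.exp(-((t - 0.8) ** 2) / 0.01)    # peaks near reaction time

multiplicative = gain * acoustic
additive = acoustic + (gain - 1.0)

# A multiplicative interaction scales the stimulus-driven modulations up
# with the gain; an additive signal leaves their absolute size unchanged.
mod_mult = multiplicative.max() - multiplicative.min()
mod_add = additive.max() - additive.min()
print(mod_mult > mod_add)
```

This scaling signature (stimulus modulations that grow with the task signal) is the kind of evidence that distinguishes a gain model from linear summation at the single-unit level.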


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Neurons/physiology, Acoustic Stimulation, Animals, Discrimination (Psychology)/physiology, Macaca mulatta, Male
11.
Front Robot AI ; 11: 1393637, 2024.
Article in English | MEDLINE | ID: mdl-38835930

ABSTRACT

We recently developed a biomimetic robotic eye with six independent tendons, each controlled by their own rotatory motor, and with insertions on the eye ball that faithfully mimic the biomechanics of the human eye. We constructed an accurate physical computational model of this system, and learned to control its nonlinear dynamics by optimising a cost that penalised saccade inaccuracy, movement duration, and total energy expenditure of the motors. To speed up the calculations, the physical simulator was approximated by a recurrent neural network (NARX). We showed that the system can produce realistic eye movements that closely resemble human saccades in all directions: their nonlinear main-sequence dynamics (amplitude-peak eye velocity and duration relationships), cross-coupling of the horizontal and vertical movement components leading to approximately straight saccade trajectories, and the 3D kinematics that restrict 3D eye orientations to a plane (Listing's law). Interestingly, the control algorithm had organised the motors into appropriate agonist-antagonist muscle pairs, and the motor signals for the eye resembled the well-known pulse-step characteristics that have been reported for monkey motoneuronal activity. We here fully analyse the eye-movement properties produced by the computational model across the entire oculomotor range and the underlying control signals. We argue that our system may shed new light on the neural control signals and their couplings within the final neural pathways of the primate oculomotor system, and that an optimal control principle may account for a wide variety of oculomotor behaviours. The generated data are publicly available at https://data.ru.nl/collections/di/dcn/DSC_626870_0003_600.

12.
Invest Ophthalmol Vis Sci ; 65(5): 39, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38787546

ABSTRACT

Purpose: Post-saccadic oscillations (PSOs) reflect movements of gaze that result from motion of the pupil and lens relative to the eyeball rather than eyeball rotations. Here, we analyzed the characteristics of PSOs in subjects with age-related macular degeneration (AMD), retinitis pigmentosa (RP), and normal vision (NV). Our aim was to assess the differences in PSOs between people with vision loss and healthy controls because PSOs affect retinal image stability after each saccade. Methods: Participants completed a horizontal saccade task and their gaze was measured using a pupil-based eye tracker. Oscillations occurring in the 80 to 200 ms post-saccadic period were described with a damped oscillation model. We compared the amplitude, decay time constant, and frequency of the PSOs for the three different groups. We also examined the correlation between these PSO parameters and the amplitude, peak velocity, and final deceleration of the preceding saccades. Results: Subjects with vision loss (AMD, n = 6, and RP, n = 5) had larger oscillation amplitudes, longer decay constants, and lower frequencies than subjects with NV (n = 7). The oscillation amplitudes increased with increases in saccade deceleration in all three groups. The other PSO parameters, however, did not show consistent correlations with either saccade amplitude or peak velocity. Conclusions: Post-saccadic fixation stability in AMD and RP is reduced due to abnormal PSOs. The differences with respect to NV are not due to differences in saccade kinematics, suggesting that anatomic and neuronal variations affect the suspension of the iris and the lens in the patients' eyes.
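The damped oscillation model used to describe the PSOs can be sketched as follows; the parameter values are illustrative, not fitted to the patients' data:

```python
import numpy as np

# Damped-oscillation model of post-saccadic oscillations (PSOs):
# amplitude A, decay time constant tau, frequency f, phase phi.
def pso_model(t, A, tau, f, phi):
    return A * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi)

t = np.linspace(0.0, 0.12, 500)   # 120 ms post-saccadic window (s)
trace = pso_model(t, A=1.2, tau=0.02, f=20.0, phi=0.0)   # deg, s, Hz

# The envelope shrinks by 1/e every tau seconds, so later peaks are
# much smaller than early ones.
early = np.abs(trace[t < 0.04]).max()
late = np.abs(trace[t > 0.08]).max()
print(early > late)
```

In this framework, the group differences reported above amount to larger A, longer tau, and smaller f in the AMD and RP subjects than in the normally sighted controls.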


Subjects
Ocular Fixation, Macular Degeneration, Pupil, Retinitis Pigmentosa, Saccades, Humans, Saccades/physiology, Retinitis Pigmentosa/physiopathology, Female, Male, Ocular Fixation/physiology, Middle Aged, Macular Degeneration/physiopathology, Aged, Pupil/physiology, Crystalline Lens/physiopathology, Adult, Visual Acuity/physiology
13.
Eur J Neurosci ; 37(11): 1830-42, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23510187

ABSTRACT

It is unclear whether top-down processing in the auditory cortex (AC) interferes with its bottom-up analysis of sound. Recent studies indicated non-acoustic modulations of AC responses, and that attention changes a neuron's spectrotemporal tuning. As a result, the AC would seem ill-suited to represent a stable acoustic environment, which is deemed crucial for auditory perception. To assess whether top-down signals influence acoustic tuning in tasks without directed attention, we compared monkey single-unit AC responses to dynamic spectrotemporal sounds under different behavioral conditions. Recordings were mostly made from neurons located in primary fields (primary AC and area R of the AC) that were well tuned to pure tones, with short onset latencies. We demonstrated that responses in the AC were substantially modulated during an auditory detection task and that these modulations were systematically related to top-down processes. Importantly, despite these significant modulations, the spectrotemporal receptive fields of all neurons remained remarkably stable. Our results suggest multiplexed encoding of bottom-up acoustic and top-down task-related signals at single AC neurons. This mechanism preserves a stable representation of the acoustic environment despite strong non-acoustic modulations.


Subjects
Auditory Cortex/physiology, Auditory Perception, Acoustic Stimulation, Animals, Attention, Auditory Cortex/cytology, Macaca mulatta, Male, Neurons/physiology, Reaction Time
14.
J Psychiatry Neurosci ; 38(6): 398-406, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24148845

ABSTRACT

BACKGROUND: Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. METHODS: We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). RESULTS: Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. LIMITATIONS: The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. CONCLUSION: Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.


Subjects
Pervasive Child Development Disorders/psychology, Discrimination (Psychology), Sound Localization, Acoustic Stimulation, Adolescent, Adult, Case-Control Studies, Female, Humans, Male
15.
Commun Biol ; 6(1): 927, 2023 09 09.
Article in English | MEDLINE | ID: mdl-37689726

ABSTRACT

The midbrain superior colliculus is a crucial sensorimotor stage for programming and generating saccadic eye-head gaze shifts. Although it is well established that superior colliculus cells encode a neural command that specifies the amplitude and direction of the upcoming gaze-shift vector, there is controversy about the role of the firing-rate dynamics of these neurons during saccades. In our earlier work, we proposed a simple quantitative model that explains how the recruited superior colliculus population may specify the detailed kinematics (trajectories and velocity profiles) of head-restrained saccadic eye movements. We here show that the same principles may apply to a wide range of saccadic eye-head gaze shifts with strongly varying kinematics, despite the substantial nonlinearities and redundancy involved in programming and executing rapid goal-directed eye-head gaze shifts to peripheral targets. Our findings could provide additional evidence for an important role of the superior colliculus in the optimal control of saccades.


Subjects
Neurons, Superior Colliculi, Biomechanical Phenomena, Ocular Fixation, Saccades
16.
Front Neurosci ; 17: 1183126, 2023.
Article in English | MEDLINE | ID: mdl-37521701

ABSTRACT

A cochlear implant (CI) is a neurotechnological device that restores hearing after total sensorineural hearing loss. It contains a sophisticated speech processor that analyzes and transforms the acoustic input, and distributes the time-enveloped spectral content of selected frequency channels to the auditory nerve as electrical pulse trains, via a multi-contact electrode that is surgically inserted in the cochlear duct. This remarkable brain interface enables the deaf to regain hearing and understand speech. However, tuning the large (>50) number of parameters of the speech processor, so-called "device fitting," is a tedious and complex process, which is mainly carried out in the clinic through 'one-size-fits-all' procedures. Current fitting typically relies on limited and often subjective data that must be collected in limited time. Despite the success of the CI as a hearing-restoration device, variability in speech-recognition scores among users is still very large, and mostly unexplained. The major factors underlying this variability span three levels: (i) variability in auditory-system malfunction of CI users, (ii) variability in the selectivity of electrode-to-auditory-nerve (EL-AN) activation, and (iii) lack of objective perceptual measures to optimize the fitting. We argue that variability in speech recognition can only be alleviated by using objective patient-specific data for an individualized fitting procedure, which incorporates knowledge from all three levels. In this paper, we propose a series of experiments, aimed at collecting a large amount of objective (i.e., quantitative, reproducible, and reliable) data that characterize the three processing levels of the user's auditory system.
Machine-learning algorithms that process these data will eventually enable the clinician to derive reliable and personalized characteristics of the user's auditory system, the quality of EL-AN signal transfer, and predictions of the perceptual effects of changes in the current fitting.

17.
Trends Hear ; 27: 23312165221143907, 2023.
Article in English | MEDLINE | ID: mdl-36605011

ABSTRACT

Many cochlear implant users with binaural residual (acoustic) hearing benefit from combining electric and acoustic stimulation (EAS) in the implanted ear with acoustic amplification in the other. These bimodal EAS listeners can potentially use low-frequency binaural cues to localize sounds. However, their hearing is generally asymmetric for mid- and high-frequency sounds, perturbing or even abolishing binaural cues. Here, we investigated the effect of a frequency-dependent binaural asymmetry in hearing thresholds on sound localization by seven bimodal EAS listeners. Frequency dependence was probed by presenting sounds with power in low-, mid-, high-, or mid-to-high-frequency bands. Frequency-dependent hearing asymmetry was present in the bimodal EAS listening condition (when using both devices) but was also induced by independently switching devices on or off. Using both devices, hearing was near symmetric for low frequencies, asymmetric for mid frequencies with better hearing thresholds in the implanted ear, and monaural for high frequencies with no hearing in the non-implanted ear. Results show that sound-localization performance was poor in general. Typically, localization was strongly biased toward the better hearing ear. We observed that hearing asymmetry was a good predictor for these biases. Notably, even when hearing was symmetric a preferential bias toward the ear using the hearing aid was revealed. We discuss how frequency dependence of any hearing asymmetry may lead to binaural cues that are spatially inconsistent as the spectrum of a sound changes. We speculate that this inconsistency may prevent accurate sound-localization even after long-term exposure to the hearing asymmetry.


Subjects
Cochlear Implantation , Cochlear Implants , Hearing Aids , Sound Localization , Speech Perception , Humans , Speech Perception/physiology , Cochlear Implantation/methods , Hearing , Sound Localization/physiology , Acoustic Stimulation/methods
18.
J Neurosci ; 31(48): 17496-504, 2011 Nov 30.
Article in English | MEDLINE | ID: mdl-22131411

ABSTRACT

The auditory system represents sound-source directions initially in head-centered coordinates. To program eye-head gaze shifts to sounds, the orientation of eyes and head should be incorporated to specify the target relative to the eyes. Here we test (1) whether this transformation involves a stage in which sounds are represented in a world- or a head-centered reference frame, and (2) whether acoustic spatial updating occurs at a topographically organized motor level representing gaze shifts, or within the tonotopically organized auditory system. Human listeners generated head-unrestrained gaze shifts from a large range of initial eye and head positions toward brief broadband sound bursts, and to tones at different center frequencies, presented in the midsagittal plane. Tones were heard at a fixed illusory elevation, regardless of their actual location, that depended in an idiosyncratic way on initial head and eye position, as well as on the tone's frequency. Gaze shifts to broadband sounds were accurate, fully incorporating initial eye and head positions. The results support the hypothesis that the auditory system represents sounds in a supramodal reference frame, and that signals about eye and head orientation are incorporated at a tonotopic stage.
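The reference-frame bookkeeping these hypotheses rest on can be sketched in one dimension (elevation). The functions, angle values, and sign conventions below are our own illustration, not the paper's model: an eye-centered target is the head-centered sound location minus the eye-in-head orientation, while a world-centered representation would add the head-in-world orientation.

```python
# Minimal 1-D (elevation) sketch of the candidate reference-frame
# transformations; all names and conventions are ours. Angles in degrees,
# positive = up.

def sound_re_eyes(sound_re_head, eye_in_head):
    """Eye-centered target: head-centered sound location minus the
    current eye-in-head orientation (the required gaze-shift amplitude)."""
    return sound_re_head - eye_in_head

def sound_re_world(sound_re_head, head_in_world):
    """World-centered target: head-centered sound location plus the
    current head-in-world orientation."""
    return sound_re_head + head_in_world

# A sound 10 deg above the head-centered midline, with the eyes rotated
# 15 deg up in the head and the head pitched 5 deg down in the world:
print(sound_re_eyes(10, 15))   # -5  -> gaze must shift 5 deg down
print(sound_re_world(10, -5))  #  5  -> the source sits 5 deg up in the world
```

Accurate gaze shifts from arbitrary initial eye and head positions, as reported for broadband sounds, require exactly this kind of subtraction of eye and head orientation from the head-centered acoustic estimate.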


Subjects
Eye Movements/physiology , Head Movements/physiology , Orientation/physiology , Sound Localization/physiology , Acoustic Stimulation , Adult , Auditory Perception/physiology , Female , Humans , Male , Middle Aged
19.
J Neurosci ; 31(29): 10558-68, 2011 Jul 20.
Article in English | MEDLINE | ID: mdl-21775600

ABSTRACT

How does the visuomotor system decide whether a target is moving or stationary in space, or whether it moves relative to the eyes or head? A visual flash during a rapid eye-head gaze shift produces a brief visual streak on the retina that could provide information about target motion when appropriately combined with eye and head self-motion signals. Indeed, double-step experiments have demonstrated that the visuomotor system incorporates actively generated intervening gaze shifts into the final localization response. Saccades to brief head-fixed flashes during passive whole-body rotation also compensate for vestibular-induced ocular nystagmus. However, both the amount of retinal motion needed to invoke spatial updating and the default strategy in the absence of detectable retinal motion remain unclear. To address these questions, we determined the contributions of retinal motion and the vestibular canals to spatial updating of visual flashes during passive whole-body rotation. Head- and body-restrained humans made saccades toward very brief (0.5 and 4 ms) and long (100 ms) visual flashes during sinusoidal rotation around the vertical body axis in total darkness. Stimuli were either attached to the chair (head-fixed) or stationary in space, and were always well localizable. Surprisingly, spatial updating occurred only when retinal stimulus motion provided sufficient information: long-duration stimuli were always appropriately localized, adequately compensating for vestibular nystagmus and the passive head movement during the saccade reaction time. For the shortest stimuli, however, the target was kept in retinocentric coordinates, thus ignoring intervening nystagmus and passive head displacement, regardless of whether the target moved with the head or not.
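The two strategies contrasted in this study can be written down as a toy 1-D computation. This is our own formulation with illustrative numbers, not the authors' analysis: an updated saccade subtracts the intervening gaze displacement from the remembered retinal error, while a retinocentric saccade ignores it.

```python
# Toy 1-D sketch (our formulation) of the two localization strategies for
# a flash followed by an intervening passive gaze displacement.
# Angles in degrees, positive = rightward.

def updated_saccade(retinal_error, intervening_gaze_shift):
    """Spatial updating: compensate for the eye/head displacement that
    occurred between the flash and the saccade."""
    return retinal_error - intervening_gaze_shift

def retinocentric_saccade(retinal_error, intervening_gaze_shift):
    """No updating: aim at the remembered retinal location, ignoring the
    intervening displacement (argument kept only for symmetry)."""
    return retinal_error

# The flash lands 8 deg right on the retina; vestibular nystagmus then
# carries gaze 3 deg right before the saccade is launched:
print(updated_saccade(8, 3))        # 5 -> spatially accurate (long flashes)
print(retinocentric_saccade(8, 3))  # 8 -> overshoots (shortest flashes)
```

In these terms, the finding is that long flashes drove responses matching `updated_saccade`, whereas the briefest flashes drove responses matching `retinocentric_saccade`.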


Subjects
Motion Perception/physiology , Psychomotor Performance/physiology , Saccades/physiology , Space Perception/physiology , Vestibule, Labyrinth/physiology , Female , Head Movements , Humans , Male , Models, Biological , Nonlinear Dynamics , Photic Stimulation/methods , Reaction Time/physiology , Reflex, Vestibulo-Ocular/physiology , Regression Analysis , Retina/physiology , Visual Pathways/physiology
20.
Neuroimage ; 62(1): 67-76, 2012 Aug 01.
Article in English | MEDLINE | ID: mdl-22521477

ABSTRACT

Non-invasive measuring methods such as EEG/MEG, fMRI and DTI are increasingly utilised to extract quantitative information on functional and anatomical connectivity in the human brain. These methods typically register their data in Euclidean space, so that one can refer to a particular activity pattern by specifying its spatial coordinates. Since each of these methods has limited resolution in either the time or spatial domain, incorporating additional data, such as those obtained from invasive animal studies, would be highly beneficial to link structure and function. Here we describe an approach to spatially register all cortical brain regions from the macaque structural connectivity database CoCoMac, which contains the combined tracing study results from 459 publications (http://cocomac.g-node.org). Brain regions from 9 different brain maps were directly mapped to a standard macaque cortex using the tool Caret (Van Essen and Dierker, 2007). The remaining regions in the CoCoMac database were semantically linked to these 9 maps using previously developed algebraic and machine-learning techniques (Bezgin et al., 2008; Stephan et al., 2000). We analysed neural connectivity using several graph-theoretical measures to capture global properties of the derived network, and found that Markov Centrality provides the most direct link between structure and function. With this registration approach, users can query the CoCoMac database by specifying spatial coordinates. Availability of deformation tools and homology evidence then allow one to directly attribute detailed anatomical animal data to human experimental results.
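The graph-theoretical side of this work can be illustrated with a small sketch. A caveat up front: CoCoMac's Markov Centrality is defined via mean first-passage times of a random walk; the code below computes the simpler, related stationary visit probability of a random walk on a made-up toy graph (the node names and edges are ours, not CoCoMac data), which likewise ranks nodes by how often the walk passes through them.

```python
# Hedged sketch: rank nodes of a toy directed "connectivity" graph by the
# stationary visit probability of a random walk (a Markov-chain measure
# related to, but simpler than, CoCoMac's mean-first-passage-time-based
# Markov Centrality). Graph is invented for illustration.

adjacency = {                 # directed edges: source -> targets
    "V1":  ["V2", "MT"],
    "V2":  ["V1", "MT", "FEF"],
    "MT":  ["FEF"],
    "FEF": ["V1"],
}

def stationary(adj, iters=2000):
    """Power-iterate p <- p P, where P is the walk's transition matrix
    (each node spreads its probability evenly over its out-edges)."""
    p = {n: 1.0 / len(adj) for n in adj}
    for _ in range(iters):
        q = {n: 0.0 for n in adj}
        for src, targets in adj.items():
            share = p[src] / len(targets)
            for t in targets:
                q[t] += share
        p = q
    return p

pi = stationary(adjacency)
assert abs(sum(pi.values()) - 1.0) < 1e-9   # probabilities stay normalized
# Nodes that many walks pass through get high stationary probability:
print(max(pi, key=pi.get))  # "V1" in this toy graph
```

On the real CoCoMac network the same idea applies at scale: a centrality derived from the random-walk structure of the connectivity graph picks out regions that mediate much of the network's traffic, which is why such measures can link anatomical structure to function.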


Subjects
Brain/anatomy & histology , Databases, Factual/standards , Macaca/anatomy & histology , Models, Anatomic , Models, Neurological , Nerve Net/anatomy & histology , Subtraction Technique , Animals , Computer Simulation , Image Interpretation, Computer-Assisted/methods , Reference Values , Software