Results 1 - 20 of 25
1.
PLoS Comput Biol; 20(4): e1011183, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38557984

ABSTRACT

One of the key problems the brain faces is inferring the state of the world from a sequence of dynamically changing stimuli, and it is not yet clear how the sensory system achieves this task. A well-established computational framework for describing perceptual processes in the brain is provided by the theory of predictive coding. Although the original proposals of predictive coding have discussed temporal prediction, later work developing this theory mostly focused on static stimuli, and key questions on neural implementation and computational properties of temporal predictive coding networks remain open. Here, we address these questions and present a formulation of the temporal predictive coding model that can be naturally implemented in recurrent networks, in which activity dynamics rely only on local inputs to the neurons, and learning only utilises local Hebbian plasticity. Additionally, we show that temporal predictive coding networks can approximate the performance of the Kalman filter in predicting behaviour of linear systems, and behave as a variant of a Kalman filter which does not track its own subjective posterior variance. Importantly, temporal predictive coding networks can achieve similar accuracy as the Kalman filter without performing complex mathematical operations, but just employing simple computations that can be implemented by biological networks. Moreover, when trained with natural dynamic inputs, we found that temporal predictive coding can produce Gabor-like, motion-sensitive receptive fields resembling those observed in real neurons in visual areas. In addition, we demonstrate how the model can be effectively generalized to nonlinear systems. Overall, models presented in this paper show how biologically plausible circuits can predict future stimuli and may guide research on understanding specific neural circuits in brain areas involved in temporal prediction.
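For reference, the Kalman filter that the temporal predictive coding network is compared against can be stated compactly. The sketch below is a minimal, generic predict/update recursion for a linear-Gaussian system; variable names, dimensions, and noise settings are illustrative and not taken from the paper.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update step of a standard Kalman filter.

    x, P : posterior mean and covariance of the hidden state at time t-1
    y    : observation at time t
    A, C : state-transition and observation matrices
    Q, R : process and observation noise covariances
    """
    # Predict: propagate the previous posterior through the linear dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q

    # Update: correct the prediction using the new observation.
    S = C @ P_pred @ C.T + R                # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

The abstract's claim is that a recurrent network with local Hebbian learning can approximate this recursion without explicitly maintaining the posterior covariance P.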


Subjects
Brain; Models, Neurological; Brain/physiology; Learning; Neurons/physiology
2.
J Neurosci; 44(10), 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38267259

ABSTRACT

Sound texture perception takes advantage of a hierarchy of time-averaged statistical features of acoustic stimuli, but much remains unclear about how these statistical features are processed along the auditory pathway. Here, we compared the neural representation of sound textures in the inferior colliculus (IC) and auditory cortex (AC) of anesthetized female rats. We recorded responses to texture morph stimuli that gradually add statistical features of increasingly higher complexity. For each texture, several different exemplars were synthesized using different random seeds. An analysis of transient and ongoing multiunit responses showed that the IC units were sensitive to every type of statistical feature, albeit to a varying extent. In contrast, only a small proportion of AC units were overtly sensitive to any statistical features. Differences in texture types explained more of the variance of IC neural responses than did differences in exemplars, indicating a degree of "texture type tuning" in the IC, but the same was, perhaps surprisingly, not the case for AC responses. We also evaluated the accuracy of texture type classification from single-trial population activity and found that IC responses became more informative as more summary statistics were included in the texture morphs, while for AC population responses, classification performance remained consistently very low. These results argue against the idea that AC neurons encode sound type via an overt sensitivity in neural firing rate to fine-grain spectral and temporal statistical features.
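As an illustration only, the single-trial texture-type classification could in outline look like the leave-one-out nearest-centroid decoder below, applied to population response vectors; the decoder and cross-validation scheme actually used in the study may differ.

```python
import numpy as np

def loo_nearest_centroid(responses, labels):
    """Leave-one-out texture-type classification from population activity.

    responses : (n_trials, n_units) single-trial population response vectors
    labels    : (n_trials,) integer texture-type label for each trial
    Returns the fraction of held-out trials assigned to the correct type.
    """
    responses = np.asarray(responses, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        train = np.ones(len(labels), dtype=bool)
        train[i] = False
        # Mean population response (centroid) per texture type, trial i held out.
        classes = np.unique(labels[train])
        centroids = np.stack([responses[train & (labels == c)].mean(axis=0)
                              for c in classes])
        dists = np.linalg.norm(centroids - responses[i], axis=1)
        correct += classes[np.argmin(dists)] == labels[i]
    return correct / len(labels)
```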


Subjects
Auditory Cortex; Inferior Colliculi; Female; Rats; Animals; Auditory Pathways/physiology; Inferior Colliculi/physiology; Mesencephalon/physiology; Sound; Auditory Cortex/physiology; Acoustic Stimulation/methods; Auditory Perception/physiology
3.
Elife; 12, 2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37844199

ABSTRACT

Visual neurons respond selectively to features that become increasingly complex from the eyes to the cortex. Retinal neurons prefer flashing spots of light, primary visual cortical (V1) neurons prefer moving bars, and those in higher cortical areas favor complex features like moving textures. Previously, we showed that V1 simple cell tuning can be accounted for by a basic model implementing temporal prediction - representing features that predict future sensory input from past input (Singer et al., 2018). Here, we show that hierarchical application of temporal prediction can capture how tuning properties change across at least two levels of the visual system. This suggests that the brain does not efficiently represent all incoming information; instead, it selectively represents sensory inputs that help in predicting the future. When applied hierarchically, temporal prediction extracts time-varying features that depend on increasingly high-level statistics of the sensory input.


Subjects
Motion Perception; Visual Pathways; Visual Pathways/physiology; Motion Perception/physiology; Photic Stimulation; Neurons/physiology; Brain; Visual Perception/physiology
4.
Elife; 11, 2022 May 26.
Article in English | MEDLINE | ID: mdl-35617119

ABSTRACT

In almost every natural environment, sounds are reflected by nearby objects, producing many delayed and distorted copies of the original sound, known as reverberation. Our brains usually cope well with reverberation, allowing us to recognize sound sources regardless of their environments. In contrast, reverberation can cause severe difficulties for speech recognition algorithms and hearing-impaired people. The present study examines how the auditory system copes with reverberation. We trained a linear model to recover a rich set of natural, anechoic sounds from their simulated reverberant counterparts. The model neurons achieved this by extending the inhibitory component of their receptive filters for more reverberant spaces, and did so in a frequency-dependent manner. These predicted effects were observed in the responses of auditory cortical neurons of ferrets in the same simulated reverberant environments. Together, these results suggest that auditory cortical neurons adapt to reverberation by adjusting their filtering properties in a manner consistent with dereverberation.
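A linear dereverberation model of the general kind described can be sketched as ridge regression from a window of reverberant cochleagram frames onto the simultaneous anechoic frame. The number of lags, the regularisation strength, and the array layout below are placeholders, not the study's settings.

```python
import numpy as np

def fit_dereverb_filters(reverb, anechoic, n_lags=30, lam=1.0):
    """Fit linear filters mapping reverberant cochleagram history to the
    anechoic cochleagram (ridge regression, one filter per output channel).

    reverb, anechoic : (n_freqs, n_times) cochleagrams of the same sound
    n_lags           : number of past time bins used as predictors
    lam              : ridge regularisation strength
    Returns weights of shape (n_freqs, n_freqs * n_lags).
    """
    n_freqs, n_times = reverb.shape
    # Design matrix of lagged reverberant frames.
    X = np.zeros((n_times - n_lags, n_freqs * n_lags))
    for t in range(n_lags, n_times):
        X[t - n_lags] = reverb[:, t - n_lags:t].ravel()
    Y = anechoic[:, n_lags:].T                      # targets: anechoic frames
    # Ridge solution, transposed to (output channels, lagged inputs).
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y).T
    return W
```

Inspecting the fitted weights per frequency channel is then what would reveal frequency-dependent changes in the inhibitory filter components across rooms.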


Subjects
Auditory Cortex; Speech Perception; Acoustic Stimulation; Adaptation, Physiological; Animals; Auditory Cortex/physiology; Ferrets; Humans; Sound; Speech Perception/physiology
5.
Hear Res; 412: 108357, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34739889

ABSTRACT

Previous psychophysical studies have identified a hierarchy of time-averaged statistics which determine the identity of natural sound textures. However, it is unclear whether neurons in the inferior colliculus (IC) are sensitive to each of these statistical features of natural sound textures. We used 13 representative sound textures spanning the space of 3 statistics extracted from over 200 natural textures. The synthetic textures were generated by incorporating the statistical features in a step-by-step manner, in which a particular statistical feature was changed while the other statistical features remained unchanged. Extracellular activity in response to the synthetic texture stimuli was recorded in the IC of anesthetized rats. Analysis of the transient and sustained multiunit activity after each transition of statistical feature showed that IC units were sensitive to changes in all types of statistics, although to a varying extent. For example, more neurons were sensitive to changes in variance than to changes in the modulation correlations. Our results suggest that sensitivity to these statistical features at subcortical levels contributes to the identification and discrimination of natural sound textures.


Subjects
Inferior Colliculi; Acoustic Stimulation; Animals; Inferior Colliculi/physiology; Neurons/physiology; Rats; Sound
6.
PLoS One; 16(6): e0238960, 2021.
Article in English | MEDLINE | ID: mdl-34161323

ABSTRACT

Sounds like "running water" and "buzzing bees" are classes of sounds which are a collective result of many similar acoustic events and are known as "sound textures". A recent psychoacoustic study using sound textures has reported that natural-sounding textures can be synthesized from white noise by imposing statistical features such as marginals and correlations computed from the outputs of cochlear models responding to the textures; these outputs are the envelopes of bandpass filter responses, the 'cochlear envelopes'. This suggests that the perceptual qualities of many natural sounds derive directly from such statistical features, and raises the question of how these statistical features are distributed in the acoustic environment. To address this question, we collected a corpus of 200 sound textures from public online sources and analyzed the distributions of the textures' marginal statistics (mean, variance, skew, and kurtosis), cross-frequency correlations and modulation power statistics. A principal component analysis of these parameters revealed a great deal of redundancy in the texture parameters. For example, just two marginal principal components, which can be thought of as measuring the sparseness or burstiness of a texture, capture as much as 64% of the variance of the 128-dimensional marginal parameter space, while the first two principal components of cochlear correlations capture as much as 88% of the variance in the 496 correlation parameters. Knowledge of the statistical distributions documented here may help guide the choice of acoustic stimuli with high ecological validity in future research.
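The variance-explained figures come from a standard principal component analysis of the per-texture parameter vectors. A minimal version of that computation is sketched below; the random data and array shapes are placeholders that merely mirror the dimensionalities quoted in the abstract.

```python
import numpy as np

def pca_variance_explained(params):
    """PCA on a matrix of texture parameters (n_textures, n_params).

    Returns the fraction of total variance captured by each principal
    component, so explained[:2].sum() gives the share captured by the
    first two components.
    """
    X = params - params.mean(axis=0)              # centre each parameter
    # Singular values relate to component variances: var_i = s_i^2 / (n - 1).
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / (X.shape[0] - 1)
    return var / var.sum()

# Illustrative call with 200 textures and 128 marginal parameters (random data):
marginals = np.random.randn(200, 128)
explained = pca_variance_explained(marginals)
print(explained[:2].sum())                        # variance share of first 2 PCs
```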


Subjects
Auditory Perception/physiology; Sound; Acoustic Stimulation/methods; Acoustics; Cochlea/physiology; Databases, Factual; Humans; Models, Statistical; Noise; Principal Component Analysis/methods; Psychoacoustics
7.
Proc Natl Acad Sci U S A; 117(45): 28442-28451, 2020 Nov 10.
Article in English | MEDLINE | ID: mdl-33097665

ABSTRACT

Sounds are processed by the ear and central auditory pathway. These processing steps are biologically complex, and many aspects of the transformation from sound waveforms to cortical response remain unclear. To understand this transformation, we combined models of the auditory periphery with various encoding models to predict auditory cortical responses to natural sounds. The cochlear models ranged from detailed biophysical simulations of the cochlea and auditory nerve to simple spectrogram-like approximations of the information processing in these structures. For three different stimulus sets, we tested the capacity of these models to predict the time course of single-unit neural responses recorded in ferret primary auditory cortex. We found that simple models based on a log-spaced spectrogram with approximately logarithmic compression perform similarly to the best-performing biophysically detailed models of the auditory periphery, and more consistently well over diverse natural and synthetic sounds. Furthermore, we demonstrated that including approximations of the three categories of auditory nerve fiber in these simple models can substantially improve prediction, particularly when combined with a network encoding model. Our findings imply that the properties of the auditory periphery and central pathway may together result in a simpler than expected functional transformation from ear to cortex. Thus, much of the detailed biological complexity seen in the auditory periphery does not appear to be important for understanding the cortical representation of sound.
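A "simple model" of the periphery in this spirit is a log-spaced spectrogram with roughly logarithmic amplitude compression. The sketch below pools STFT power into log-spaced frequency bands; the band count, frequency range, and window sizes are illustrative choices, not those used in the study.

```python
import numpy as np
from scipy.signal import stft

def log_spaced_spectrogram(x, fs, n_bands=32, fmin=500.0, fmax=20000.0,
                           nperseg=512, noverlap=256):
    """Crude cochleagram: STFT power pooled into log-spaced frequency bands,
    followed by approximately logarithmic amplitude compression."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    power = np.abs(Z) ** 2
    edges = np.geomspace(fmin, fmax, n_bands + 1)   # log-spaced band edges
    bands = np.zeros((n_bands, power.shape[1]))
    for i in range(n_bands):
        sel = (f >= edges[i]) & (f < edges[i + 1])
        if sel.any():
            bands[i] = power[sel].mean(axis=0)
    return np.log(bands + 1e-8)                     # log compression

# Usage on one second of noise at 44.1 kHz:
cochleagram = log_spaced_spectrogram(np.random.randn(44100), fs=44100)
```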


Subjects
Auditory Cortex/physiology; Auditory Pathways/physiology; Sound; Acoustic Stimulation; Animals; Auditory Perception/physiology; Cochlea; Cochlear Nerve/physiology; Ferrets; Humans; Models, Neurological; Neurons/physiology; Speech
8.
R Soc Open Sci; 7(3): 191194, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32269783

ABSTRACT

Previous research has shown that musical beat perception is a surprisingly complex phenomenon involving widespread neural coordination across higher-order sensory, motor and cognitive areas. However, the question of how low-level auditory processing must necessarily shape these dynamics, and therefore perception, is not well understood. Here, we present evidence that the auditory cortical representation of music, even in the absence of motor or top-down activations, already favours the beat that will be perceived. Extracellular firing rates in the rat auditory cortex were recorded in response to 20 musical excerpts diverse in tempo and genre, for which musical beat perception had been characterized by the tapping behaviour of 40 human listeners. We found that firing rates in the rat auditory cortex were on average higher on the beat than off the beat. This 'neural emphasis' distinguished the beat that was perceived from other possible interpretations of the beat, was predictive of the degree of tapping consensus across human listeners, and was accounted for by a spectrotemporal receptive field model. These findings strongly suggest that the 'bottom-up' processing of music performed by the auditory system predisposes the timing and clarity of the perceived musical beat.

9.
PLoS Comput Biol; 15(5): e1006618, 2019 May.
Article in English | MEDLINE | ID: mdl-31059503

ABSTRACT

Auditory neurons encode stimulus history, which is often modelled using a span of time-delays in a spectro-temporal receptive field (STRF). We propose an alternative model for the encoding of stimulus history, which we apply to extracellular recordings of neurons in the primary auditory cortex of anaesthetized ferrets. For a linear-non-linear STRF model (LN model) to achieve a high level of performance in predicting single unit neural responses to natural sounds in the primary auditory cortex, we found that it is necessary to include time delays going back at least 200 ms in the past. This is an unrealistic time span for biological delay lines. We therefore asked how much of this dependence on stimulus history can instead be explained by dynamical aspects of neurons. We constructed a neural-network model whose output is the weighted sum of units whose responses are determined by a dynamic firing-rate equation. The dynamic aspect performs low-pass filtering on each unit's response, providing an exponentially decaying memory whose time constant is individual to each unit. We find that this dynamic network (DNet) model, when fitted to the neural data using STRFs of only 25 ms duration, can achieve prediction performance on a held-out dataset comparable to the best performing LN model with STRFs of 200 ms duration. These findings suggest that integration due to the membrane time constants or other exponentially-decaying memory processes may underlie linear temporal receptive fields of neurons beyond 25 ms.
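The core of the DNet idea as described is a unit whose response is a low-pass-filtered version of its drive, with a unit-specific time constant, giving an exponentially decaying memory. A minimal discrete-time version is shown below; the variable names, time step, and the origin of the drive are illustrative assumptions.

```python
import numpy as np

def dnet_responses(drive, taus, dt=5.0):
    """Responses of dynamic units governed by tau_i * dr_i/dt = -r_i + drive_i.

    drive : (n_units, n_times) input drive to each unit (e.g. from a short,
            25 ms spectro-temporal kernel applied to the stimulus)
    taus  : (n_units,) time constants in ms, one per unit
    dt    : time step in ms
    Each unit low-pass filters its drive, so its response carries an
    exponentially decaying trace of the stimulus history.
    """
    n_units, n_times = drive.shape
    r = np.zeros((n_units, n_times))
    for t in range(1, n_times):
        r[:, t] = r[:, t - 1] + (dt / taus) * (drive[:, t] - r[:, t - 1])
    return r

def dnet_output(drive, taus, weights, dt=5.0):
    """Model output: a weighted sum of the dynamic units' responses."""
    return weights @ dnet_responses(drive, taus, dt)
```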


Subjects
Auditory Cortex/physiology; Auditory Perception/physiology; Acoustic Stimulation/methods; Action Potentials/physiology; Animals; Evoked Potentials, Auditory/physiology; Ferrets; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Nonlinear Dynamics
10.
PLoS Comput Biol; 15(1): e1006595, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30653497

ABSTRACT

We investigate how the neural processing in auditory cortex is shaped by the statistics of natural sounds. Hypothesising that auditory cortex (A1) represents the structural primitives out of which sounds are composed, we employ a statistical model to extract such components. The inputs to the model are cochleagrams, which approximate the non-linear transformations a sound undergoes from the outer ear, through the cochlea to the auditory nerve. Cochleagram components do not superimpose linearly, but rather according to a rule which can be approximated using the max function. This is a consequence of the compression inherent in the cochleagram and the sparsity of natural sounds. Furthermore, cochleagrams do not have negative values. Cochleagrams are therefore not matched well by the assumptions of standard linear approaches such as sparse coding or ICA. We therefore consider a new encoding approach for natural sounds, which combines a model of early auditory processing with maximal causes analysis (MCA), a sparse coding model which captures both the non-linear combination rule and non-negativity of the data. An efficient truncated EM algorithm is used to fit the MCA model to cochleagram data. We characterize the generative fields (GFs) inferred by MCA with respect to in vivo neural responses in A1 by applying reverse correlation to estimate spectro-temporal receptive fields (STRFs) implied by the learned GFs. Despite the GFs being non-negative, the STRF estimates are found to contain both positive and negative subfields, where the negative subfields can be attributed to explaining-away effects as captured by the applied inference method. A direct comparison with ferret A1 shows many similar forms, and the spectral and temporal modulation tuning of both ferret and model STRFs show similar ranges over the population. In summary, our model represents an alternative to linear approaches for biological auditory encoding, capturing salient data properties and linking inhibitory subfields to explaining-away effects.
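The non-linear combination rule at the heart of MCA replaces the linear sum of weighted generative fields with a pointwise maximum. The forward (generative) step can be sketched as below; the truncated EM inference, which is the technically involved part of the method, is omitted, and the array shapes are illustrative.

```python
import numpy as np

def mca_generate(weights, gen_fields):
    """Approximate a cochleagram as the pointwise maximum of weighted
    generative fields, rather than their sum.

    weights    : (n_components,) non-negative activations
    gen_fields : (n_components, n_freqs, n_times) non-negative generative fields
    Returns an (n_freqs, n_times) cochleagram approximation.
    """
    scaled = weights[:, None, None] * gen_fields      # weight each component
    return scaled.max(axis=0)                         # max, not sum

# Contrast with the linear (sparse coding / ICA style) combination rule:
def linear_generate(weights, gen_fields):
    return (weights[:, None, None] * gen_fields).sum(axis=0)
```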


Subjects
Auditory Cortex/physiology; Cochlea/physiology; Models, Neurological; Models, Statistical; Signal Processing, Computer-Assisted; Acoustic Stimulation; Algorithms; Animals; Female; Ferrets; Hearing Tests; Humans; Male
11.
Elife; 7, 2018 Jun 18.
Article in English | MEDLINE | ID: mdl-29911971

ABSTRACT

Neurons in sensory cortex are tuned to diverse features in natural scenes. But what determines which features neurons become selective to? Here we explore the idea that neuronal selectivity is optimized to represent features in the recent sensory past that best predict immediate future inputs. We tested this hypothesis using simple feedforward neural networks, which were trained to predict the next few moments of video or audio in clips of natural scenes. The networks developed receptive fields that closely matched those of real cortical neurons in different mammalian species, including the oriented spatial tuning of primary visual cortex, the frequency selectivity of primary auditory cortex and, most notably, their temporal tuning properties. Furthermore, the better a network predicted future inputs the more closely its receptive fields resembled those in the brain. This suggests that sensory processing is optimized to extract those features with the most capacity to predict future input.
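A toy version of the training setup described: a small feedforward network maps a window of past frames onto the next few frames and is trained with a squared-error loss. The architecture, layer sizes, frame counts, and optimiser settings below are placeholders, not those used in the paper.

```python
import torch
from torch import nn

past, future, n_pix = 20, 5, 256        # history frames, frames to predict, pixels per frame
net = nn.Sequential(                    # one hidden layer of nonlinear units
    nn.Linear(past * n_pix, 400),
    nn.Tanh(),
    nn.Linear(400, future * n_pix),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(clips):
    """clips: (batch, past + future, n_pix) consecutive frames from natural clips.
    The network sees the past frames and is penalised for mispredicting the rest."""
    x = clips[:, :past].reshape(len(clips), -1)
    y = clips[:, past:].reshape(len(clips), -1)
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()
    return loss.item()

# After training, rows of net[0].weight reshaped to (past, n_pix) act as
# spatio-temporal receptive fields that can be compared with V1 or A1 tuning.
```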


Subjects
Anticipation, Psychological; Auditory Cortex/physiology; Neural Networks, Computer; Sensory Receptor Cells/physiology; Visual Cortex/physiology; Acoustic Stimulation; Animals; Auditory Cortex/anatomy & histology; Computer Simulation; Mammals/anatomy & histology; Mammals/physiology; Photic Stimulation; Reaction Time/physiology; Sensory Receptor Cells/cytology; Video Recording; Visual Cortex/anatomy & histology
12.
Proc Biol Sci; 284(1866), 2017 Nov 15.
Article in English | MEDLINE | ID: mdl-29118141

ABSTRACT

The ability to spontaneously feel a beat in music is a phenomenon widely believed to be unique to humans. Though beat perception involves the coordinated engagement of sensory, motor and cognitive processes in humans, the contribution of low-level auditory processing to the activation of these networks in a beat-specific manner is poorly understood. Here, we present evidence from a rodent model that midbrain preprocessing of sounds may already be shaping where the beat is ultimately felt. For the tested set of musical rhythms, on-beat sounds on average evoked higher firing rates than off-beat sounds, and this difference was a defining feature of the set of beat interpretations most commonly perceived by human listeners over others. Basic firing rate adaptation provided a sufficient explanation for these results. Our findings suggest that midbrain adaptation, by encoding the temporal context of sounds, creates points of neural emphasis that may influence the perceptual emergence of a beat.
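"Basic firing rate adaptation" can be captured by a response that is suppressed in proportion to a running average of recent drive, so that sounds preceded by silence evoke larger responses than sounds preceded by other sounds. The minimal sketch below uses an arbitrary time constant and gain and is not the study's fitted model.

```python
import numpy as np

def adapted_rate(drive, tau=200.0, dt=1.0, gain=1.0):
    """Firing rate with simple subtractive adaptation.

    drive : (n_times,) stimulus-driven input (e.g. a sound envelope), in ms bins
    tau   : adaptation time constant in ms
    The adaptation state tracks recent drive, so on-beat events that follow
    gaps tend to evoke larger responses than off-beat events that follow
    other sounds closely in time.
    """
    a = 0.0
    rate = np.zeros_like(drive, dtype=float)
    for t, d in enumerate(drive):
        rate[t] = max(d - gain * a, 0.0)        # suppressed, rectified response
        a += (dt / tau) * (d - a)               # adaptation follows recent drive
    return rate
```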


Subjects
Auditory Perception/physiology; Gerbillinae/physiology; Inferior Colliculi/physiology; Music; Psychomotor Performance; Acoustic Stimulation; Adult; Animals; Female; Humans; Male; Middle Aged; Young Adult
13.
Nat Commun; 7: 13442, 2016 Nov 24.
Article in English | MEDLINE | ID: mdl-27883088

ABSTRACT

Neural adaptation is central to sensation. Neurons in auditory midbrain, for example, rapidly adapt their firing rates to enhance coding precision of common sound intensities. However, it remains unknown whether this adaptation is fixed, or dynamic and dependent on experience. Here, using guinea pigs as animal models, we report that adaptation accelerates when an environment is re-encountered: in response to a sound environment that repeatedly switches between quiet and loud, midbrain neurons accrue experience to find an efficient code more rapidly. This phenomenon, which we term meta-adaptation, suggests a top-down influence on the midbrain. To test this, we inactivate auditory cortex and find acceleration of adaptation with experience is attenuated, indicating a role for cortex, and its little-understood projections to the midbrain, in modulating meta-adaptation. Given the prevalence of adaptation across organisms and senses, meta-adaptation might be similarly common, with extensive implications for understanding how neurons encode the rapidly changing environments of the real world.


Subjects
Adaptation, Physiological; Auditory Pathways/physiology; Auditory Perception/physiology; Mesencephalon/physiology; Neurons/physiology; Acoustic Stimulation; Animals; Auditory Cortex/physiology; Female; Guinea Pigs; Hypothermia, Induced; Male; Mesencephalon/cytology; Models, Animal
14.
PLoS Comput Biol; 12(11): e1005113, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27835647

ABSTRACT

Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex, neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
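Schematically, a network receptive field of the kind described passes the stimulus through a handful of linear sub-receptive fields, applies a static nonlinearity to each, and combines them through a second nonlinear output stage. The sketch below uses sigmoids purely for illustration; the nonlinearities and parameterisation in the paper may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def network_receptive_field(stimulus, sub_rfs, sub_bias, out_w, out_bias):
    """Predict a firing rate from a spectro-temporal stimulus patch.

    stimulus : (n_freqs, n_lags) sound spectrogram segment
    sub_rfs  : (n_sub, n_freqs, n_lags) linear sub-receptive fields (e.g. 1-7)
    out_w    : (n_sub,) weights combining the sub-units; positive and negative
               weights play the role of excitatory and inhibitory sub-fields
    """
    # Each sub-unit: linear filter followed by a static nonlinearity.
    drive = np.tensordot(sub_rfs, stimulus, axes=([1, 2], [0, 1])) + sub_bias
    hidden = sigmoid(drive)
    # Output nonlinearity applied to the weighted sum of sub-unit responses.
    return sigmoid(out_w @ hidden + out_bias)
```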


Subjects
Auditory Cortex/physiology; Auditory Perception/physiology; Models, Neurological; Nerve Net/physiology; Neural Inhibition/physiology; Sensory Receptor Cells/physiology; Acoustic Stimulation/methods; Animals; Computer Simulation; Humans; Nonlinear Dynamics; Systems Integration
15.
Front Comput Neurosci; 10: 10, 2016.
Article in English | MEDLINE | ID: mdl-26903851

ABSTRACT

Good metrics of the performance of a statistical or computational model are essential for model comparison and selection. Here, we address the design of performance metrics for models that aim to predict neural responses to sensory inputs. This is particularly difficult because the responses of sensory neurons are inherently variable, even in response to repeated presentations of identical stimuli. In this situation, standard metrics (such as the correlation coefficient) fail because they do not distinguish between explainable variance (the part of the neural response that is systematically dependent on the stimulus) and response variability (the part of the neural response that is not systematically dependent on the stimulus, and cannot be explained by modeling the stimulus-response relationship). As a result, models which perfectly describe the systematic stimulus-response relationship may appear to perform poorly. Two metrics have previously been proposed which account for this inherent variability: Signal Power Explained (SPE; Sahani and Linden, 2003) and the normalized correlation coefficient (CC_norm; Hsu et al., 2004). Here, we analyze these metrics, and show that they are intimately related. However, SPE has no lower bound, and we show that, even for good models, SPE can yield negative values that are difficult to interpret. CC_norm is better behaved in that it is effectively bounded between -1 and 1, and values below zero are very rare in practice and easy to interpret. However, it was hitherto not possible to calculate CC_norm directly; instead, it was estimated using imprecise and laborious resampling techniques. Here, we identify a new approach that can calculate CC_norm quickly and accurately. As a result, we argue that it is now a better choice of metric than SPE to accurately evaluate the performance of neural models.
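For concreteness, one standard way to compute CC_norm from repeated trials bounds the achievable correlation using a Sahani-and-Linden-style signal-power estimate. The sketch below follows that recipe; the exact estimator recommended in the paper should be checked against the paper itself before use.

```python
import numpy as np

def cc_norm(prediction, trials):
    """Normalized correlation between a model prediction and the
    trial-averaged neural response.

    prediction : (n_times,) model output
    trials     : (n_trials, n_times) responses to repeats of the same stimulus
    """
    n = trials.shape[0]
    mean_resp = trials.mean(axis=0)
    # Signal power: variance of the mean response with the contribution of
    # trial-to-trial noise removed (Sahani & Linden style estimate).
    sp = (n * mean_resp.var() - trials.var(axis=1).mean()) / (n - 1)
    cc_abs = np.corrcoef(prediction, mean_resp)[0, 1]
    cc_max = np.sqrt(sp / mean_resp.var())   # best correlation any model could reach
    return cc_abs / cc_max
```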

16.
Front Neurosci; 10: 9, 2016.
Article in English | MEDLINE | ID: mdl-26858589

ABSTRACT

This study investigates the influence of temporal regularity on human listeners' ability to detect a repeating noise pattern embedded in statistically identical non-repeating noise. Human listeners were presented with white noise stimuli that either contained a frozen segment of noise that repeated in a temporally regular or irregular manner, or did not contain any repetition at all. Subjects were instructed to respond as soon as they detected any repetition in the stimulus. Pattern detection performance was best when repeated targets occurred in a temporally regular manner, suggesting that temporal regularity plays a facilitative role in pattern detection. A modulation filterbank model could account for these results.

17.
J Neurosci; 36(2): 280-9, 2016 Jan 13.
Article in English | MEDLINE | ID: mdl-26758822

ABSTRACT

Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear-nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT: An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too.
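The adaptive preprocessing stage described (high-pass filtering each frequency channel of the spectrogram with its own time constant, then half-wave rectification) can be sketched as follows. The mapping from frequency to time constant and the time step are placeholders, not the values fitted in the study.

```python
import numpy as np

def ic_adaptation_stage(spectrogram, freqs, dt=5.0):
    """Adaptation-to-mean-level preprocessing applied to a sound spectrogram
    before a standard linear-nonlinear model.

    spectrogram : (n_freqs, n_times) log-amplitude sound spectrogram
    freqs       : (n_freqs,) channel centre frequencies in Hz
    dt          : time bin width in ms
    Each channel is high-pass filtered by subtracting a running (low-pass)
    estimate of its mean level, then half-wave rectified.
    """
    # Illustrative assumption: slower adaptation at lower frequencies.
    taus = 500.0 * (freqs.max() / freqs) ** 0.5          # ms, per channel
    mean_level = np.zeros(spectrogram.shape[0])
    out = np.zeros_like(spectrogram)
    for t in range(spectrogram.shape[1]):
        mean_level += (dt / taus) * (spectrogram[:, t] - mean_level)
        out[:, t] = np.maximum(spectrogram[:, t] - mean_level, 0.0)
    return out
```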


Subjects
Adaptation, Physiological/physiology; Auditory Cortex/physiology; Auditory Perception/physiology; Mesencephalon/physiology; Sound; Acoustic Stimulation; Animals; Auditory Pathways/physiology; Female; Ferrets; Linear Models; Male; Models, Neurological; Sound Spectrography
18.
IEEE Trans Biomed Eng; 62(6): 1526-1534, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25608301

ABSTRACT

OBJECTIVE: We present a device that combines principles of ultrasonic echolocation and spatial hearing to provide human users with environmental cues that are 1) not otherwise available to the human auditory system, and 2) richer in object and spatial information than the more heavily processed sonar cues of other assistive devices. The device consists of a wearable headset with an ultrasonic emitter and stereo microphones with affixed artificial pinnae. The goal of this study is to describe the device and evaluate the utility of the echoic information it provides. METHODS: The echoes of ultrasonic pulses were recorded and time stretched to lower their frequencies into the human auditory range, then played back to the user. We tested performance among naive and experienced sighted volunteers using a set of localization experiments, in which the locations of echo-reflective surfaces were judged using these time-stretched echoes. RESULTS: Naive subjects were able to make laterality and distance judgments, suggesting that the echoes provide innately useful information without prior training. Naive subjects were generally unable to make elevation judgments from recorded echoes. However, trained subjects demonstrated an ability to judge elevation as well. CONCLUSION: This suggests that the device can be used effectively to examine the environment and that the human auditory system can rapidly adapt to these artificial echolocation cues. SIGNIFICANCE: Interpreting and interacting with the external world constitutes a major challenge for persons who are blind or visually impaired. This device has the potential to aid blind people in interacting with their environment.
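The frequency-lowering step amounts to replaying the recorded ultrasonic echo more slowly, which divides all frequencies by the stretch factor. A minimal resampling-based sketch is shown below; the stretch factor and signal chain are illustrative, not the device's actual specification.

```python
import numpy as np

def time_stretch_echo(echo, stretch=20):
    """Lower an ultrasonic echo into the audible range by playing it back
    'stretch' times slower (linear-interpolation resampling).

    echo    : (n_samples,) recorded echo at the original sample rate
    stretch : factor by which duration grows and frequencies shrink,
              e.g. a 60 kHz component becomes 3 kHz for stretch=20
    """
    n_out = len(echo) * stretch
    old_t = np.arange(len(echo))
    new_t = np.linspace(0, len(echo) - 1, n_out)
    return np.interp(new_t, old_t, echo)   # play back at the original rate
```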


Subjects
Echolocation/physiology; Assistive Technology; Signal Processing, Computer-Assisted/instrumentation; Ultrasound/instrumentation; Ultrasound/methods; Adult; Animals; Ear Auricle; Equipment Design; Female; Humans; Male; Models, Biological; Visually Impaired Persons; Young Adult
19.
PLoS One; 9(11): e108154, 2014.
Article in English | MEDLINE | ID: mdl-25372405

ABSTRACT

A major cue to the location of a sound source is the interaural time difference (ITD), the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron's maximal response) is unimodal and largely within the range of ITDs permitted by head size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species.


Subjects
Evoked Potentials, Auditory; Models, Neurological; Sound Localization; Animals; Cats; Cues (Psychology); Macaca; Neurons/physiology; Rodents; Sound
20.
J Acoust Soc Am; 135(6): EL357-63, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24907846

ABSTRACT

Periodic stimuli are common in natural environments and are ecologically relevant, for example, footsteps and vocalizations. This study reports a detectability enhancement for temporally cued, periodic sequences. Target noise bursts (embedded in background noise) arriving at the time points which followed on from an introductory, periodic "cue" sequence were more easily detected (by ∼1.5 dB SNR) than identical noise bursts which randomly deviated from the cued temporal pattern. Temporal predictability and corresponding neuronal "entrainment" have been widely theorized to underlie important processes in auditory scene analysis and to confer perceptual advantage. This is the first study in the auditory domain to clearly demonstrate a perceptual enhancement of temporally predictable, near-threshold stimuli.


Subjects
Auditory Perception; Cues (Psychology); Signal Detection, Psychological; Time Perception; Acoustic Stimulation; Adult; Audiometry; Auditory Threshold; Female; Humans; Male; Motion (Physics); Psychoacoustics; Sound; Time Factors; Young Adult