Results 1 - 20 of 25
1.
J Neurosci ; 44(10)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38267259

ABSTRACT

Sound texture perception takes advantage of a hierarchy of time-averaged statistical features of acoustic stimuli, but much remains unclear about how these statistical features are processed along the auditory pathway. Here, we compared the neural representation of sound textures in the inferior colliculus (IC) and auditory cortex (AC) of anesthetized female rats. We recorded responses to texture morph stimuli that gradually add statistical features of increasingly higher complexity. For each texture, several different exemplars were synthesized using different random seeds. An analysis of transient and ongoing multiunit responses showed that the IC units were sensitive to every type of statistical feature, albeit to a varying extent. In contrast, only a small proportion of AC units were overtly sensitive to any statistical features. Differences in texture types explained more of the variance of IC neural responses than did differences in exemplars, indicating a degree of "texture type tuning" in the IC, but the same was, perhaps surprisingly, not the case for AC responses. We also evaluated the accuracy of texture type classification from single-trial population activity and found that IC responses became more informative as more summary statistics were included in the texture morphs, while for AC population responses, classification performance remained consistently very low. These results argue against the idea that AC neurons encode sound type via an overt sensitivity in neural firing rate to fine-grain spectral and temporal statistical features.
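
For concreteness, here is a minimal sketch of the kind of single-trial population decoding described above, using a nearest-centroid classifier with leave-one-out cross-validation. All names and the simulated Poisson responses are illustrative stand-ins for the recorded IC/AC data, not the study's actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the recorded data: a per-texture, per-unit mean rate profile,
# with Poisson single-trial responses (the study used IC and AC populations).
n_textures, n_trials_per, n_units = 13, 20, 50
tuning = rng.uniform(1.0, 10.0, size=(n_textures, n_units))
labels = np.repeat(np.arange(n_textures), n_trials_per)
responses = rng.poisson(tuning[labels]).astype(float)          # trials x units

def nearest_centroid_loo(X, y):
    """Leave-one-out nearest-centroid classification accuracy."""
    correct = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        centroids = np.stack([X[keep & (y == c)].mean(axis=0) for c in np.unique(y)])
        correct += np.argmin(np.linalg.norm(centroids - X[i], axis=1)) == y[i]
    return correct / len(y)

print(f"single-trial texture-type decoding accuracy: {nearest_centroid_loo(responses, labels):.2f}")
```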


Subject(s)
Auditory Cortex , Inferior Colliculi , Female , Rats , Animals , Auditory Pathways/physiology , Inferior Colliculi/physiology , Mesencephalon/physiology , Sound , Auditory Cortex/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology
2.
PLoS Comput Biol ; 20(4): e1011183, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38557984

ABSTRACT

One of the key problems the brain faces is inferring the state of the world from a sequence of dynamically changing stimuli, and it is not yet clear how the sensory system achieves this task. A well-established computational framework for describing perceptual processes in the brain is provided by the theory of predictive coding. Although the original proposals of predictive coding have discussed temporal prediction, later work developing this theory mostly focused on static stimuli, and key questions on neural implementation and computational properties of temporal predictive coding networks remain open. Here, we address these questions and present a formulation of the temporal predictive coding model that can be naturally implemented in recurrent networks, in which activity dynamics rely only on local inputs to the neurons, and learning only utilises local Hebbian plasticity. Additionally, we show that temporal predictive coding networks can approximate the performance of the Kalman filter in predicting the behaviour of linear systems, and behave as a variant of a Kalman filter which does not track its own subjective posterior variance. Importantly, temporal predictive coding networks can achieve accuracy similar to that of the Kalman filter without performing complex mathematical operations, employing only simple computations that can be implemented by biological networks. Moreover, when trained with natural dynamic inputs, we found that temporal predictive coding can produce Gabor-like, motion-sensitive receptive fields resembling those observed in real neurons in visual areas. In addition, we demonstrate how the model can be effectively generalized to nonlinear systems. Overall, the models presented in this paper show how biologically plausible circuits can predict future stimuli and may guide research on understanding specific neural circuits in brain areas involved in temporal prediction.
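
A minimal sketch of a temporal predictive coding update of the kind described above, assuming linear latent dynamics and a linear observation model: the latent estimate is refined by gradient descent on sensory and temporal prediction errors, and the weights are updated with local, Hebbian-style (error times presynaptic activity) rules. Variable names, step sizes and the toy generative system are illustrative, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_latent = 8, 4

# Illustrative ground-truth linear system used to generate observations.
A_true = 0.9 * np.linalg.qr(rng.standard_normal((n_latent, n_latent)))[0]
C_true = rng.standard_normal((n_obs, n_latent))

A = rng.standard_normal((n_latent, n_latent)) * 0.1   # learned dynamics weights
C = rng.standard_normal((n_obs, n_latent)) * 0.1      # learned observation weights
lr_x, lr_w, n_inf = 0.1, 0.01, 20

x_prev, z = np.zeros(n_latent), np.zeros(n_latent)
for t in range(2000):
    z = A_true @ z + 0.1 * rng.standard_normal(n_latent)
    y = C_true @ z + 0.05 * rng.standard_normal(n_obs)

    x = A @ x_prev                       # prior prediction of the latent state
    for _ in range(n_inf):               # inference: descend on both error terms
        e_y = y - C @ x                  # sensory prediction error
        e_x = x - A @ x_prev             # temporal prediction error
        x += lr_x * (C.T @ e_y - e_x)

    # Local, Hebbian-style learning: weight change = error x presynaptic activity.
    C += lr_w * np.outer(y - C @ x, x)
    A += lr_w * np.outer(x - A @ x_prev, x_prev)
    x_prev = x
```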


Subject(s)
Brain , Models, Neurological , Brain/physiology , Learning , Neurons/physiology
3.
Proc Natl Acad Sci U S A ; 117(45): 28442-28451, 2020 11 10.
Article in English | MEDLINE | ID: mdl-33097665

ABSTRACT

Sounds are processed by the ear and central auditory pathway. These processing steps are biologically complex, and many aspects of the transformation from sound waveforms to cortical response remain unclear. To understand this transformation, we combined models of the auditory periphery with various encoding models to predict auditory cortical responses to natural sounds. The cochlear models ranged from detailed biophysical simulations of the cochlea and auditory nerve to simple spectrogram-like approximations of the information processing in these structures. For three different stimulus sets, we tested the capacity of these models to predict the time course of single-unit neural responses recorded in ferret primary auditory cortex. We found that simple models based on a log-spaced spectrogram with approximately logarithmic compression perform similarly to the best-performing biophysically detailed models of the auditory periphery, and more consistently well over diverse natural and synthetic sounds. Furthermore, we demonstrated that including approximations of the three categories of auditory nerve fiber in these simple models can substantially improve prediction, particularly when combined with a network encoding model. Our findings imply that the properties of the auditory periphery and central pathway may together result in a simpler than expected functional transformation from ear to cortex. Thus, much of the detailed biological complexity seen in the auditory periphery does not appear to be important for understanding the cortical representation of sound.
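
As a rough illustration of the simple cochlear front end described above, the sketch below pools an STFT power spectrogram into log-spaced frequency bands and applies approximately logarithmic compression. Band counts, frequency limits and other parameters are illustrative assumptions, not the models actually compared in the paper.

```python
import numpy as np

def log_spaced_spectrogram(wave, fs, n_fft=512, hop=128,
                           n_bands=32, fmin=200.0, fmax=8000.0):
    """Power spectrogram pooled into log-spaced bands, then log-compressed."""
    n_frames = 1 + (len(wave) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([wave[i*hop:i*hop+n_fft] * window for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2        # frames x freq bins
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)

    edges = np.geomspace(fmin, fmax, n_bands + 1)            # log-spaced band edges
    bands = np.stack([power[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
                      for lo, hi in zip(edges[:-1], edges[1:])], axis=1)
    return np.log(bands + 1e-6)                              # ~logarithmic compression

fs = 16000
t = np.arange(fs) / fs
chirp = np.sin(2 * np.pi * (500 + 2000 * t) * t)             # toy input sound
print(log_spaced_spectrogram(chirp, fs).shape)               # frames x bands
```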


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Sound , Acoustic Stimulation , Animals , Auditory Perception/physiology , Cochlea , Cochlear Nerve/physiology , Ferrets , Humans , Models, Neurological , Neurons/physiology , Speech
4.
PLoS Comput Biol ; 15(5): e1006618, 2019 05.
Article in English | MEDLINE | ID: mdl-31059503

ABSTRACT

Auditory neurons encode stimulus history, which is often modelled using a span of time-delays in a spectro-temporal receptive field (STRF). We propose an alternative model for the encoding of stimulus history, which we apply to extracellular recordings of neurons in the primary auditory cortex of anaesthetized ferrets. For a linear-non-linear STRF model (LN model) to achieve a high level of performance in predicting single unit neural responses to natural sounds in the primary auditory cortex, we found that it is necessary to include time delays going back at least 200 ms in the past. This is an unrealistic time span for biological delay lines. We therefore asked how much of this dependence on stimulus history can instead be explained by dynamical aspects of neurons. We constructed a neural-network model whose output is the weighted sum of units whose responses are determined by a dynamic firing-rate equation. The dynamic aspect performs low-pass filtering on each unit's response, providing an exponentially decaying memory whose time constant is individual to each unit. We find that this dynamic network (DNet) model, when fitted to the neural data using STRFs of only 25 ms duration, can achieve prediction performance on a held-out dataset comparable to the best performing LN model with STRFs of 200 ms duration. These findings suggest that integration due to the membrane time constants or other exponentially-decaying memory processes may underlie linear temporal receptive fields of neurons beyond 25 ms.
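
A minimal sketch of the idea behind the DNet model: each unit combines a short (here 5-frame) STRF-like drive with leaky integration whose time constant is individual to the unit, giving an exponentially decaying memory, and the output is a weighted sum of the unit rates. Sizes, weights and time constants are illustrative placeholders, and no fitting is shown.

```python
import numpy as np

rng = np.random.default_rng(2)
n_freq, n_lags, n_units, n_t = 16, 5, 10, 400     # 5 lags ~ a short (e.g. 25 ms) STRF

cochleagram = rng.standard_normal((n_freq, n_t))  # stand-in for the sound input
W_in  = rng.standard_normal((n_units, n_freq, n_lags)) * 0.1  # short per-unit STRFs
w_out = rng.standard_normal(n_units) * 0.5
taus  = rng.uniform(2.0, 40.0, n_units)           # per-unit memory time constants (frames)
alpha = np.exp(-1.0 / taus)                       # leak factor of the low-pass filter

rate, prediction = np.zeros(n_units), np.zeros(n_t)
for t in range(n_lags, n_t):
    drive = np.einsum('ufl,fl->u', W_in, cochleagram[:, t-n_lags:t])
    drive = np.maximum(drive, 0.0)                # rectified instantaneous drive
    rate = alpha * rate + (1 - alpha) * drive     # exponentially decaying memory
    prediction[t] = w_out @ rate                  # output: weighted sum of unit rates
```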


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Acoustic Stimulation/methods , Action Potentials/physiology , Animals , Evoked Potentials, Auditory/physiology , Ferrets , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Nonlinear Dynamics
5.
PLoS Comput Biol ; 15(1): e1006595, 2019 01.
Article in English | MEDLINE | ID: mdl-30653497

ABSTRACT

We investigate how the neural processing in auditory cortex is shaped by the statistics of natural sounds. Hypothesising that auditory cortex (A1) represents the structural primitives out of which sounds are composed, we employ a statistical model to extract such components. The inputs to the model are cochleagrams, which approximate the non-linear transformations a sound undergoes from the outer ear, through the cochlea, to the auditory nerve. Cochleagram components do not superimpose linearly, but rather according to a rule which can be approximated using the max function. This is a consequence of the compression inherent in the cochleagram and the sparsity of natural sounds. Furthermore, cochleagrams do not have negative values. Cochleagrams are therefore not matched well by the assumptions of standard linear approaches such as sparse coding or ICA. We therefore consider a new encoding approach for natural sounds, which combines a model of early auditory processing with maximal causes analysis (MCA), a sparse coding model which captures both the non-linear combination rule and the non-negativity of the data. An efficient truncated EM algorithm is used to fit the MCA model to cochleagram data. We characterize the generative fields (GFs) inferred by MCA with respect to in vivo neural responses in A1 by applying reverse correlation to estimate the spectro-temporal receptive fields (STRFs) implied by the learned GFs. Despite the GFs being non-negative, the STRF estimates are found to contain both positive and negative subfields, where the negative subfields can be attributed to explaining-away effects as captured by the applied inference method. A direct comparison with ferret A1 shows many similar forms, and the spectral and temporal modulation tuning of both ferret and model STRFs show similar ranges over the population. In summary, our model represents an alternative to linear approaches for biological auditory encoding, capturing salient data properties and linking inhibitory subfields to explaining-away effects.
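
A toy illustration of the non-linear combination rule mentioned above: a cochleagram patch generated as an elementwise max over weighted, non-negative generative fields, contrasted with the linear superposition assumed by sparse coding or ICA. This shows only the generative step; the truncated EM inference used to fit MCA is not sketched here, and all shapes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_components, patch_shape = 8, (16, 10)            # freq x time generative fields (GFs)

# Non-negative generative fields (toy stand-ins for learned GFs).
gfs = np.maximum(rng.standard_normal((n_components,) + patch_shape), 0.0)
activations = rng.random(n_components) * (rng.random(n_components) < 0.3)  # sparse, >= 0

# Linear (sparse-coding style) combination vs. the max combination assumed by MCA.
linear_patch = np.tensordot(activations, gfs, axes=1)
max_patch = np.max(activations[:, None, None] * gfs, axis=0)

print(np.all(max_patch <= linear_patch + 1e-12))   # the max never exceeds the sum
```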


Subject(s)
Auditory Cortex/physiology , Cochlea/physiology , Models, Neurological , Models, Statistical , Signal Processing, Computer-Assisted , Acoustic Stimulation , Algorithms , Animals , Female , Ferrets , Hearing Tests , Humans , Male
6.
J Neurosci ; 36(2): 280-9, 2016 Jan 13.
Article in English | MEDLINE | ID: mdl-26758822

ABSTRACT

Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear-nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT: An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too.
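
A minimal sketch of the preprocessing stage described above, assuming each spectrogram channel is high-pass filtered by subtracting a leaky running mean with its own (frequency-dependent) time constant and then half-wave rectified before being passed to a standard LN stage. The channel count, frame rate and time constants below are illustrative, not the values used in the paper.

```python
import numpy as np

def ic_adaptation_stage(spectrogram, frame_rate, tau_s):
    """Per-channel high-pass filtering (subtract a leaky running mean),
    then half-wave rectification. tau_s: one time constant (s) per channel."""
    n_ch, n_t = spectrogram.shape
    alpha = np.exp(-1.0 / (np.asarray(tau_s) * frame_rate))   # per-channel leak
    mean_track = np.zeros(n_ch)
    out = np.zeros_like(spectrogram)
    for t in range(n_t):
        mean_track = alpha * mean_track + (1 - alpha) * spectrogram[:, t]
        out[:, t] = np.maximum(spectrogram[:, t] - mean_track, 0.0)  # half-wave rectify
    return out

rng = np.random.default_rng(4)
spec = rng.random((24, 500))                 # stand-in cochleagram, 24 channels
taus = np.linspace(0.35, 0.05, 24)           # illustrative frequency-dependent constants
adapted = ic_adaptation_stage(spec, frame_rate=100.0, tau_s=taus)
```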


Subject(s)
Adaptation, Physiological/physiology , Auditory Cortex/physiology , Auditory Perception/physiology , Mesencephalon/physiology , Sound , Acoustic Stimulation , Animals , Auditory Pathways/physiology , Female , Ferrets , Linear Models , Male , Models, Neurological , Sound Spectrography
7.
Proc Biol Sci ; 284(1866)2017 Nov 15.
Article in English | MEDLINE | ID: mdl-29118141

ABSTRACT

The ability to spontaneously feel a beat in music is a phenomenon widely believed to be unique to humans. Though beat perception involves the coordinated engagement of sensory, motor and cognitive processes in humans, the contribution of low-level auditory processing to the activation of these networks in a beat-specific manner is poorly understood. Here, we present evidence from a rodent model that midbrain preprocessing of sounds may already be shaping where the beat is ultimately felt. For the tested set of musical rhythms, on-beat sounds on average evoked higher firing rates than off-beat sounds, and this difference was a defining feature of the set of beat interpretations most commonly perceived by human listeners over others. Basic firing rate adaptation provided a sufficient explanation for these results. Our findings suggest that midbrain adaptation, by encoding the temporal context of sounds, creates points of neural emphasis that may influence the perceptual emergence of a beat.


Subject(s)
Auditory Perception/physiology , Gerbillinae/physiology , Inferior Colliculi/physiology , Music , Psychomotor Performance , Acoustic Stimulation , Adult , Animals , Female , Humans , Male , Middle Aged , Young Adult
8.
PLoS Comput Biol ; 12(11): e1005113, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27835647

ABSTRACT

Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex, neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
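
A minimal sketch of the forward pass of such a network receptive field, assuming a handful of hidden units, each with its own spectro-temporal filter and sigmoid nonlinearity, combined through output weights and a second sigmoid. Fitting to neural data is omitted, and all weights below are random placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nrf_forward(cochleagram, hidden_strfs, hidden_bias, w_out, b_out):
    """Forward pass of a small 'network receptive field': each hidden unit applies
    its own STRF and sigmoid, and the output combines them through a second sigmoid."""
    n_hidden, n_freq, n_lags = hidden_strfs.shape
    n_t = cochleagram.shape[1]
    rate = np.zeros(n_t)
    for t in range(n_lags, n_t):
        patch = cochleagram[:, t-n_lags:t]
        hidden = sigmoid(np.einsum('hfl,fl->h', hidden_strfs, patch) + hidden_bias)
        rate[t] = sigmoid(w_out @ hidden + b_out)
    return rate

rng = np.random.default_rng(5)
coch = rng.standard_normal((20, 300))
strfs = rng.standard_normal((4, 20, 15)) * 0.05     # e.g. four sub-receptive fields
rate = nrf_forward(coch, strfs, np.zeros(4), rng.standard_normal(4), -1.0)
```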


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Models, Neurological , Nerve Net/physiology , Neural Inhibition/physiology , Sensory Receptor Cells/physiology , Acoustic Stimulation/methods , Animals , Computer Simulation , Humans , Nonlinear Dynamics , Systems Integration
9.
J Neurosci ; 34(5): 1963-9, 2014 Jan 29.
Article in English | MEDLINE | ID: mdl-24478375

ABSTRACT

Adaptation to both common and rare sounds has been independently reported in neurophysiological studies using probabilistic stimulus paradigms in small mammals. However, the apparent sensitivity of the mammalian auditory system to the statistics of incoming sound has not yet been generalized to task-related human auditory perception. Here, we show that human listeners selectively adapt to novel sounds within scenes unfolding over minutes. Listeners' performance in an auditory discrimination task remains steady for the most common elements within the scene but, after the first minute, performance improves for distinct and rare (oddball) sound elements, at the expense of rare sounds that are relatively less distinct. Our data provide the first evidence of enhanced coding of oddball sounds in a human auditory discrimination task and suggest the existence of an adaptive mechanism that tracks the long-term statistics of sounds and deploys coding resources accordingly.


Subject(s)
Adaptation, Physiological/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Discrimination, Psychological/physiology , Sound , Acoustic Stimulation , Humans , Probability , Psychoacoustics , Statistics as Topic , Time Factors
10.
J Acoust Soc Am ; 135(6): EL357-63, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24907846

ABSTRACT

Periodic stimuli are common in natural environments and are ecologically relevant, for example, footsteps and vocalizations. This study reports a detectability enhancement for temporally cued, periodic sequences. Target noise bursts (embedded in background noise) arriving at the time points which followed on from an introductory, periodic "cue" sequence were more easily detected (by ∼1.5 dB SNR) than identical noise bursts which randomly deviated from the cued temporal pattern. Temporal predictability and corresponding neuronal "entrainment" have been widely theorized to underlie important processes in auditory scene analysis and to confer perceptual advantage. This is the first study in the auditory domain to clearly demonstrate a perceptual enhancement of temporally predictable, near-threshold stimuli.


Subject(s)
Auditory Perception , Cues , Signal Detection, Psychological , Time Perception , Acoustic Stimulation , Adult , Audiometry , Auditory Threshold , Female , Humans , Male , Motion , Psychoacoustics , Sound , Time Factors , Young Adult
11.
J Acoust Soc Am ; 134(1): EL98-104, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23862914

ABSTRACT

This study reports a role of temporal regularity on the perception of auditory streams. Listeners were presented with two-tone sequences in an A-B-A-B rhythm that was either regular or had a controlled amount of temporal jitter added independently to each of the B tones. Subjects were asked to report whether they perceived one or two streams. The percentage of trials in which two streams were reported substantially and significantly increased with increasing amounts of temporal jitter. This suggests that temporal predictability may serve as a binding cue during auditory scene analysis.


Subject(s)
Attention , Cues , Illusions , Pitch Discrimination , Sound Spectrography , Time Perception , Humans , Psychoacoustics
12.
Elife ; 12, 2023 10 16.
Article in English | MEDLINE | ID: mdl-37844199

ABSTRACT

Visual neurons respond selectively to features that become increasingly complex from the eyes to the cortex. Retinal neurons prefer flashing spots of light, primary visual cortical (V1) neurons prefer moving bars, and those in higher cortical areas favor complex features like moving textures. Previously, we showed that V1 simple cell tuning can be accounted for by a basic model implementing temporal prediction - representing features that predict future sensory input from past input (Singer et al., 2018). Here, we show that hierarchical application of temporal prediction can capture how tuning properties change across at least two levels of the visual system. This suggests that the brain does not efficiently represent all incoming information; instead, it selectively represents sensory inputs that help in predicting the future. When applied hierarchically, temporal prediction extracts time-varying features that depend on increasingly high-level statistics of the sensory input.


Subject(s)
Motion Perception , Visual Pathways , Visual Pathways/physiology , Motion Perception/physiology , Photic Stimulation , Neurons/physiology , Brain , Visual Perception/physiology
13.
Elife ; 11, 2022 05 26.
Article in English | MEDLINE | ID: mdl-35617119

ABSTRACT

In almost every natural environment, sounds are reflected by nearby objects, producing many delayed and distorted copies of the original sound, known as reverberation. Our brains usually cope well with reverberation, allowing us to recognize sound sources regardless of their environments. In contrast, reverberation can cause severe difficulties for speech recognition algorithms and hearing-impaired people. The present study examines how the auditory system copes with reverberation. We trained a linear model to recover a rich set of natural, anechoic sounds from their simulated reverberant counterparts. The model neurons achieved this by extending the inhibitory component of their receptive filters for more reverberant spaces, and did so in a frequency-dependent manner. These predicted effects were observed in the responses of auditory cortical neurons of ferrets in the same simulated reverberant environments. Together, these results suggest that auditory cortical neurons adapt to reverberation by adjusting their filtering properties in a manner consistent with dereverberation.
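
A minimal sketch of the kind of linear dereverberation model described above: ridge regression mapping a lagged window of a reverberant cochleagram onto the anechoic cochleagram, with one filter per frequency channel. The synthetic "reverberation" and all parameters are illustrative stand-ins for the simulated rooms used in the study.

```python
import numpy as np

rng = np.random.default_rng(6)
n_freq, n_lags, n_t = 16, 20, 2000

anechoic = rng.standard_normal((n_freq, n_t))                 # stand-in anechoic cochleagram
kernel = np.exp(-np.arange(50) / 15.0)                        # toy reverberant tail
reverberant = np.stack([np.convolve(ch, kernel)[:n_t] for ch in anechoic])

# Regression design: predict each anechoic channel from a lagged window of all channels.
rows = [reverberant[:, t-n_lags:t].ravel() for t in range(n_lags, n_t)]
X = np.asarray(rows)                                          # samples x (freq * lags)
lam = 1.0                                                     # ridge penalty
XtX = X.T @ X + lam * np.eye(X.shape[1])

filters = {}
for f in range(n_freq):
    y = anechoic[f, n_lags:n_t]
    w = np.linalg.solve(XtX, X.T @ y)                         # ridge solution
    filters[f] = w.reshape(n_freq, n_lags)                    # dereverberation filter
```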


Subject(s)
Auditory Cortex , Speech Perception , Acoustic Stimulation , Adaptation, Physiological , Animals , Auditory Cortex/physiology , Ferrets , Humans , Sound , Speech Perception/physiology
14.
PLoS One ; 16(6): e0238960, 2021.
Article in English | MEDLINE | ID: mdl-34161323

ABSTRACT

Sounds like "running water" and "buzzing bees" are classes of sounds which are a collective result of many similar acoustic events and are known as "sound textures". A recent psychoacoustic study using sound textures has reported that natural sounding textures can be synthesized from white noise by imposing statistical features such as marginals and correlations computed from the outputs of cochlear models responding to the textures. The outputs being the envelopes of bandpass filter responses, the 'cochlear envelope'. This suggests that the perceptual qualities of many natural sounds derive directly from such statistical features, and raises the question of how these statistical features are distributed in the acoustic environment. To address this question, we collected a corpus of 200 sound textures from public online sources and analyzed the distributions of the textures' marginal statistics (mean, variance, skew, and kurtosis), cross-frequency correlations and modulation power statistics. A principal component analysis of these parameters revealed a great deal of redundancy in the texture parameters. For example, just two marginal principal components, which can be thought of as measuring the sparseness or burstiness of a texture, capture as much as 64% of the variance of the 128 dimensional marginal parameter space, while the first two principal components of cochlear correlations capture as much as 88% of the variance in the 496 correlation parameters. Knowledge of the statistical distributions documented here may help guide the choice of acoustic stimuli with high ecological validity in future research.


Subject(s)
Auditory Perception/physiology , Sound , Acoustic Stimulation/methods , Acoustics , Cochlea/physiology , Databases, Factual , Humans , Models, Statistical , Noise , Principal Component Analysis/methods , Psychoacoustics
15.
Hear Res ; 412: 108357, 2021 12.
Article in English | MEDLINE | ID: mdl-34739889

ABSTRACT

Previous psychophysical studies have identified a hierarchy of time-averaged statistics which determine the identity of natural sound textures. However, it is unclear whether neurons in the inferior colliculus (IC) are sensitive to each of these statistical features of natural sound textures. We used 13 representative sound textures spanning the space of 3 statistics extracted from over 200 natural textures. The synthetic textures were generated by incorporating the statistical features in a step-by-step manner, in which a particular statistical feature was changed while the other statistical features remained unchanged. The extracellular activity in response to the synthetic texture stimuli was recorded in the IC of anesthetized rats. Analysis of the transient and sustained multiunit activity after each transition of statistical feature showed that the IC units were sensitive to changes in all types of statistics, although to a varying extent. For example, we found that more neurons were sensitive to changes in variance than to changes in the modulation correlations. Our results suggest that sensitivity to these statistical features at subcortical levels contributes to the identification and discrimination of natural sound textures.


Subject(s)
Inferior Colliculi , Acoustic Stimulation , Animals , Inferior Colliculi/physiology , Neurons/physiology , Rats , Sound
16.
Nature ; 430(7000): 682-6, 2004 Aug 05.
Article in English | MEDLINE | ID: mdl-15295602

ABSTRACT

A sound, depending on the position of its source, can take more time to reach one ear than the other. This interaural (between the ears) time difference (ITD) provides a major cue for determining the source location. Many auditory neurons are sensitive to ITDs, but the means by which such neurons represent ITD is a contentious issue. Recent studies question whether the classical general model (the Jeffress model) applies across species. Here we show that ITD coding strategies of different species can be explained by a unifying principle: that the ITDs an animal naturally encounters should be coded with maximal accuracy. Using statistical techniques and a stochastic neural model, we demonstrate that the optimal coding strategy for ITD depends critically on head size and sound frequency. For small head sizes and/or low-frequency sounds, the optimal coding strategy tends towards two distinct sub-populations tuned to ITDs outside the range created by the head. This is consistent with recent observations in small mammals. For large head sizes and/or high frequencies, the optimal strategy is a homogeneous distribution of ITD tunings within the range created by the head. This is consistent with observations in the barn owl. For humans, the optimal strategy to code ITDs from an acoustically measured distribution depends on frequency; above 400 Hz a homogeneous distribution is optimal, and below 400 Hz distinct sub-populations are optimal.
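
As a worked illustration of the acoustic constraint discussed above (not taken from the paper), a Woodworth-style spherical-head approximation shows how the largest naturally occurring ITD grows with head size, and hence how small it is relative to the period of a low-frequency tone for small-headed animals. The head radii below are rough illustrative values only.

```python
import numpy as np

def max_itd_us(head_radius_m, c=343.0):
    """Woodworth spherical-head approximation of the largest interaural time
    difference (source at 90 degrees): ITD = r * (theta + sin(theta)) / c."""
    theta = np.pi / 2
    return 1e6 * head_radius_m * (theta + np.sin(theta)) / c

# Illustrative head radii (metres), not measured values.
for species, radius in [("gerbil", 0.007), ("barn owl", 0.025), ("human", 0.0875)]:
    itd = max_itd_us(radius)
    print(f"{species:8s} max ITD ~ {itd:5.0f} us; "
          f"fraction of a 500 Hz period: {itd / (1e6 / 500):.2f}")
```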


Subject(s)
Auditory Perception/physiology , Head/anatomy & histology , Head/physiology , Models, Neurological , Neurons/physiology , Acoustic Stimulation , Action Potentials , Animals , Cats , Cues , Electrophysiology , Female , Gerbillinae/physiology , Humans , Male , Probability , Species Specificity , Stochastic Processes , Strigiformes/physiology
17.
R Soc Open Sci ; 7(3): 191194, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32269783

ABSTRACT

Previous research has shown that musical beat perception is a surprisingly complex phenomenon involving widespread neural coordination across higher-order sensory, motor and cognitive areas. However, the question of how low-level auditory processing must necessarily shape these dynamics, and therefore perception, is not well understood. Here, we present evidence that the auditory cortical representation of music, even in the absence of motor or top-down activations, already favours the beat that will be perceived. Extracellular firing rates in the rat auditory cortex were recorded in response to 20 musical excerpts diverse in tempo and genre, for which musical beat perception had been characterized by the tapping behaviour of 40 human listeners. We found that firing rates in the rat auditory cortex were on average higher on the beat than off the beat. This 'neural emphasis' distinguished the beat that was perceived from other possible interpretations of the beat, was predictive of the degree of tapping consensus across human listeners, and was accounted for by a spectrotemporal receptive field model. These findings strongly suggest that the 'bottom-up' processing of music performed by the auditory system predisposes the timing and clarity of the perceived musical beat.

18.
J Neurosci ; 28(25): 6430-8, 2008 Jun 18.
Article in English | MEDLINE | ID: mdl-18562614

ABSTRACT

Auditory neurons must represent accurately a wide range of sound levels using firing rates that vary over a far narrower range of levels. Recently, we demonstrated that this "dynamic range problem" is lessened by neural adaptation, whereby neurons adjust their input-output functions for sound level according to the prevailing distribution of levels. These adjustments in input-output functions increase the accuracy with which levels around those occurring most commonly are coded by the neural population. Here, we examine how quickly this adaptation occurs. We recorded from single neurons in the auditory midbrain during a stimulus that switched repeatedly between two distributions of sound levels differing in mean level. The high-resolution analysis afforded by this stimulus showed that a prominent component of the adaptation occurs rapidly, with an average time constant across neurons of 160 ms after an increase in mean level, much faster than our previous experiments were able to assess. This time course appears to be independent of both the timescale over which sound levels varied and that over which sound level distributions varied, but is related to neural characteristic frequency. We find that adaptation to an increase in mean level occurs more rapidly than to a decrease. Finally, we observe an additional, slow adaptation in some neurons, which occurs over a timescale of tens of seconds. Our findings provide constraints in the search for mechanisms underlying adaptation to sound level. They also have functional implications for the role of adaptation in the representation of natural sounds.
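
A minimal sketch of how such an adaptation time constant can be estimated: fit a single exponential to the trial-averaged firing rate after a switch to the higher-mean distribution. The data here are simulated with a 160 ms constant purely for illustration; the actual analysis and response shapes in the study may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)
t = np.arange(0, 2.0, 0.01)                          # seconds after the switch

def exp_decay(t, r_inf, r_delta, tau):
    """Firing rate relaxing exponentially to a steady-state value r_inf."""
    return r_inf + r_delta * np.exp(-t / tau)

# Simulated trial-averaged rate: jumps high at the switch, adapts with tau = 160 ms.
rate = exp_decay(t, 40.0, 25.0, 0.16) + rng.normal(0, 1.0, t.size)

popt, _ = curve_fit(exp_decay, t, rate, p0=(30.0, 20.0, 0.1))
print(f"fitted adaptation time constant: {popt[2] * 1000:.0f} ms")
```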


Subject(s)
Acoustic Stimulation/methods , Adaptation, Physiological/physiology , Sound , Action Potentials/physiology , Animals , Auditory Perception/physiology , Guinea Pigs , Time Factors
19.
Nat Neurosci ; 8(12): 1684-9, 2005 Dec.
Article in English | MEDLINE | ID: mdl-16286934

ABSTRACT

Mammals can hear sounds extending over a vast range of sound levels with remarkable accuracy. How auditory neurons code sound level over such a range is unclear; firing rates of individual neurons increase with sound level over only a very limited portion of the full range of hearing. We show that neurons in the auditory midbrain of the guinea pig adjust their responses to the mean, variance and more complex statistics of sound level distributions. We demonstrate that these adjustments improve the accuracy of the neural population code close to the region of most commonly occurring sound levels. This extends the range of sound levels that can be accurately encoded, fine-tuning hearing to the local acoustic environment.


Subject(s)
Action Potentials/physiology , Auditory Perception/physiology , Auditory Threshold/physiology , Inferior Colliculi/physiology , Neurons/physiology , Synaptic Transmission/physiology , Acoustic Stimulation , Animals , Auditory Pathways/physiology , Guinea Pigs , Loudness Perception/physiology , Pitch Perception/physiology , Reaction Time/physiology , Sound Localization/physiology , Time Factors
20.
Elife ; 7, 2018 06 18.
Article in English | MEDLINE | ID: mdl-29911971

ABSTRACT

Neurons in sensory cortex are tuned to diverse features in natural scenes. But what determines which features neurons become selective to? Here we explore the idea that neuronal selectivity is optimized to represent features in the recent sensory past that best predict immediate future inputs. We tested this hypothesis using simple feedforward neural networks, which were trained to predict the next few moments of video or audio in clips of natural scenes. The networks developed receptive fields that closely matched those of real cortical neurons in different mammalian species, including the oriented spatial tuning of primary visual cortex, the frequency selectivity of primary auditory cortex and, most notably, their temporal tuning properties. Furthermore, the better a network predicted future inputs the more closely its receptive fields resembled those in the brain. This suggests that sensory processing is optimized to extract those features with the most capacity to predict future input.
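
A minimal sketch of the temporal prediction setup described above, assuming a one-hidden-layer feedforward network trained by gradient descent to predict the next few samples of a signal from a window of its recent past, with an L1 penalty encouraging localized input weights. The signal, architecture and hyperparameters are illustrative; the columns of the learned input weight matrix play the role of the temporal receptive fields examined in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
n_past, n_future, n_hidden = 40, 8, 20
lr, l1 = 0.01, 1e-4

# Stand-in for clips of natural input: a smoothed random 1-D signal.
signal = np.convolve(rng.standard_normal(20000), np.ones(20) / 20, mode='same')
starts = rng.integers(0, signal.size - n_past - n_future, size=5000)
X = np.stack([signal[s:s + n_past] for s in starts])                     # recent past
Y = np.stack([signal[s + n_past:s + n_past + n_future] for s in starts]) # immediate future

W1 = rng.standard_normal((n_past, n_hidden)) * 0.1
W2 = rng.standard_normal((n_hidden, n_future)) * 0.1

for epoch in range(200):
    H = np.tanh(X @ W1)                       # hidden "receptive field" responses
    pred = H @ W2
    err = pred - Y
    # Gradients of mean-squared prediction error plus an L1 sparsity penalty on W1.
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH / len(X) + l1 * np.sign(W1)
    W1 -= lr * dW1
    W2 -= lr * dW2
# Columns of W1 play the role of the learned temporal receptive fields.
```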


Subject(s)
Anticipation, Psychological , Auditory Cortex/physiology , Neural Networks, Computer , Sensory Receptor Cells/physiology , Visual Cortex/physiology , Acoustic Stimulation , Animals , Auditory Cortex/anatomy & histology , Computer Simulation , Mammals/anatomy & histology , Mammals/physiology , Photic Stimulation , Reaction Time/physiology , Sensory Receptor Cells/cytology , Video Recording , Visual Cortex/anatomy & histology