Results 1 - 20 of 51
1.
PLoS Biol ; 20(9): e3001771, 2022 09.
Article in English | MEDLINE | ID: mdl-36074782

ABSTRACT

Despite increasing representation in graduate training programs, a disproportionate number of women leave academic research without obtaining an independent position that enables them to train the next generation of academic researchers. To understand factors underlying this trend, we analyzed formal PhD and postdoctoral mentoring relationships in the life sciences during the years 2000 to 2020. Student and mentor gender are both associated with differences in rates of student's continuation to positions that allow formal academic mentorship. Although trainees of women mentors are less likely to take on positions as academic mentors than trainees of men mentors, this effect is reduced substantially after controlling for several measurements of mentor status. Thus, the effect of mentor gender can be explained at least partially by gender disparities in social and financial resources available to mentors. Because trainees and mentors tend to be of the same gender, this association between mentor gender and academic continuation disproportionately impacts women trainees. On average, gender homophily in graduate training is unrelated to mentor status. A notable exception to this trend is the special case of scientists having been granted an outstanding distinction, evidenced by membership in the National Academy of Sciences, being a grantee of the Howard Hughes Medical Institute, or having been awarded the Nobel Prize. This group of mentors trains men graduate students at higher rates than their most successful colleagues. These results suggest that, in addition to other factors that limit career choices for women trainees, gender inequities in mentors' access to resources and prestige contribute to women's attrition from independent research positions.
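The status adjustment described above can be illustrated with a toy stratified comparison. All records, status bins, and rates below are hypothetical stand-ins, not the study's data or method: the point is only that a raw gap in continuation rates between mentor genders can shrink once rates are compared within strata of equal mentor status.

```python
from collections import defaultdict

# Hypothetical records: (mentor_gender, status_bin, trainee_continued).
# Women mentors are concentrated in the low-status stratum here by design.
records = [
    ("woman", "high", True),
    ("woman", "low", False), ("woman", "low", False), ("woman", "low", False),
    ("man", "high", True), ("man", "high", True), ("man", "high", True),
    ("man", "low", False),
]

def continuation_rate(rows):
    return sum(r[2] for r in rows) / len(rows)

# Raw continuation rates by mentor gender (no adjustment).
by_gender = defaultdict(list)
for rec in records:
    by_gender[rec[0]].append(rec)
raw = {g: continuation_rate(rows) for g, rows in by_gender.items()}

# Status-adjusted rates: average the within-stratum rates, so both
# genders are compared at the same status level.
strata = defaultdict(lambda: defaultdict(list))
for rec in records:
    strata[rec[1]][rec[0]].append(rec)
adjusted = {
    g: sum(continuation_rate(strata[s][g]) for s in strata) / len(strata)
    for g in by_gender
}
```

In this toy example the raw gap (0.75 vs 0.25) disappears entirely after stratifying on status; in the study the effect of mentor gender was reduced substantially, not eliminated.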


Subject(s)
Biological Science Disciplines, Mentoring, Biological Science Disciplines/education, Female, Humans, Male, Mentors, Research Personnel/education, Surveys and Questionnaires
2.
PLoS Biol ; 19(6): e3001299, 2021 06.
Article in English | MEDLINE | ID: mdl-34133413

ABSTRACT

Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how nonselective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from nonselective inputs, whether feature selectivity emerges gradually over the processing hierarchy, and how stimulus information is represented in nonselective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs (GPs), a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in 3 auditory processing stages: the thalamus (ventral medial geniculate body, vMGB) and the thalamorecipient (L4) and superficial (L2/3) layers of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity, with about a third of neurons responding to only 1 or 2 call types. These A1 L2/3 neurons responded only to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call-feature selectivity. Information-theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike. These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that the observed cortical specializations for call processing emerge in A1 and set the stage for further mechanistic studies.
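The information-theoretic quantities above (stimulus information, information per spike) can be sketched as follows. The call names, spike counts, and equiprobable-stimulus assumption are all hypothetical stand-ins, not the study's analysis code.

```python
import math
from collections import Counter

# Hypothetical spike counts per stimulus (one list of trials per call);
# a "feature-selective" unit fires mainly for one call type.
responses = {
    "call_A": [3, 4, 3, 4],
    "call_B": [0, 0, 1, 0],
}

def mutual_information_bits(responses):
    """I(stimulus; spike count) in bits, assuming equiprobable stimuli."""
    p_s = 1.0 / len(responses)
    all_counts = [c for trials in responses.values() for c in trials]
    p_r = {r: n / len(all_counts) for r, n in Counter(all_counts).items()}
    info = 0.0
    for trials in responses.values():
        for r, n in Counter(trials).items():
            p_r_given_s = n / len(trials)
            info += p_s * p_r_given_s * math.log2(p_r_given_s / p_r[r])
    return info

info = mutual_information_bits(responses)
n_trials = sum(len(t) for t in responses.values())
mean_count = sum(c for t in responses.values() for c in t) / n_trials
# "Information per spike": bits of stimulus information per emitted spike.
bits_per_spike = info / mean_count
```

Because the two count distributions do not overlap, this toy neuron carries the full 1 bit needed to distinguish the two calls, at roughly 0.53 bits per spike.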


Subject(s)
Auditory Cortex/physiology, Neurons/physiology, Vocalization, Animal/physiology, Acoustic Stimulation, Anesthesia, Animals, Female, Male, Models, Biological, Time Factors
3.
PLoS Comput Biol ; 19(5): e1011110, 2023 05.
Article in English | MEDLINE | ID: mdl-37146065

ABSTRACT

Convolutional neural networks (CNNs) can provide powerful and flexible models of neural sensory processing. However, the utility of CNNs in studying the auditory system has been limited by their requirement for large datasets and the complex response properties of single auditory neurons. To address these limitations, we developed a population encoding model: a CNN that simultaneously predicts activity of several hundred neurons recorded during presentation of a large set of natural sounds. This approach defines a shared spectro-temporal space and pools statistical power across neurons. Population models of varying architecture performed consistently and substantially better than traditional linear-nonlinear models on data from primary and non-primary auditory cortex. Moreover, population models were highly generalizable. The output layer of a model pre-trained on one population of neurons could be fit to data from novel single units, achieving performance equivalent to that of neurons in the original fit data. This ability to generalize suggests that population encoding models capture a complete representational space across neurons in an auditory cortical field.
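A minimal sketch of the population-model idea, assuming a toy shared filter bank with per-neuron linear readouts. The real model is a deep CNN; every shape, name, and parameter here is invented for illustration. The key property shown is that fitting a "novel" unit reduces to re-estimating only its readout weights against the shared feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 18 spectral channels x 200 time bins, 4 shared filters.
n_chan, T, n_filt, n_lag = 18, 200, 4, 10
spectrogram = rng.standard_normal((n_chan, T))
filters = rng.standard_normal((n_filt, n_chan, n_lag)) * 0.1

def shared_features(spec, filters):
    """Shared stage: causal spectro-temporal filtering + rectification."""
    n_filt, _, n_lag = filters.shape
    T = spec.shape[1]
    out = np.zeros((n_filt, T))
    for f in range(n_filt):
        acc = np.zeros(T)
        for lag in range(n_lag):
            proj = filters[f, :, lag] @ spec              # (T,)
            acc[lag:] += proj[:T - lag] if lag else proj  # causal shift
        out[f] = np.maximum(acc, 0.0)                     # ReLU
    return out

feats = shared_features(spectrogram, filters)

# Simulate a "novel" neuron driven by the shared space plus noise.
true_w = np.array([1.0, -0.5, 0.0, 2.0])
y = true_w @ feats + 0.01 * rng.standard_normal(T)

# Fitting the new unit = re-estimating only its readout weights.
w_hat, *_ = np.linalg.lstsq(feats.T, y, rcond=None)
```

If the shared feature space spans the response, the readout weights are recovered accurately from the novel unit's data alone, mirroring the generalization result described in the abstract.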


Subject(s)
Auditory Cortex, Auditory Cortex/physiology, Acoustic Stimulation, Auditory Perception/physiology, Models, Neurological, Neural Networks, Computer
4.
J Neurosci ; 41(2): 284-297, 2021 01 13.
Article in English | MEDLINE | ID: mdl-33208469

ABSTRACT

While task-dependent changes have been demonstrated in auditory cortex for a number of behavioral paradigms and mammalian species, less is known about how behavioral state can influence neural coding in the midbrain areas that provide auditory information to cortex. We measured single-unit activity in the inferior colliculus (IC) of common marmosets of both sexes while they performed a tone-in-noise detection task and during passive presentation of identical task stimuli. In contrast to our previous study in the ferret IC, task engagement had little effect on sound-evoked activity in the central (lemniscal) IC of the marmoset. However, activity was significantly modulated in noncentral fields, where responses were selectively enhanced for the target tone relative to the distractor noise. This led to an increase in neural discriminability between targets and distractors. The results confirm that task engagement can modulate sound coding in the auditory midbrain and support the hypothesis that subcortical pathways can mediate highly trained auditory behaviors.

SIGNIFICANCE STATEMENT While the cerebral cortex is widely viewed as playing an essential role in the learning and performance of complex auditory behaviors, relatively little attention has been paid to the role of brainstem and midbrain areas that process sound information before it reaches cortex. This study demonstrates that the auditory midbrain is also modulated during behavior. These modulations amplify task-relevant sensory information, a process that is traditionally attributed to cortex.
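Neural discriminability between target and distractor responses, as mentioned above, is commonly quantified with d-prime: the separation of the two response distributions in units of their pooled standard deviation. A toy sketch with hypothetical spike counts:

```python
import math

# Hypothetical trial-by-trial spike counts for one unit; numbers are
# illustrative only, not data from the study.
target_counts = [12, 15, 11, 14, 13]      # tone-in-noise trials
distractor_counts = [6, 8, 7, 5, 9]       # noise-alone trials

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def d_prime(a, b):
    """Separation of two response distributions in pooled-SD units."""
    return (mean(a) - mean(b)) / math.sqrt(0.5 * (var(a) + var(b)))

dp = d_prime(target_counts, distractor_counts)
```

Selective enhancement of target responses increases the numerator without necessarily changing the variance, which is how the enhancement reported above translates into improved discriminability.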


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Discrimination, Psychological/physiology, Acoustic Stimulation, Animals, Behavior, Animal, Callithrix, Female, Ferrets, Inferior Colliculi/physiology, Male, Neuronal Plasticity/physiology, Noise, Psychomotor Performance/physiology
5.
J Neurosci ; 40(19): 3783-3798, 2020 05 06.
Article in English | MEDLINE | ID: mdl-32273487

ABSTRACT

Statistical regularities in natural sounds facilitate the perceptual segregation of auditory sources, or streams. Repetition is one cue that drives stream segregation in humans, but the neural basis of this perceptual phenomenon remains unknown. We demonstrated a similar perceptual ability in animals by training ferrets of both sexes to detect a stream of repeating noise samples (foreground) embedded in a stream of random samples (background). During passive listening, we recorded neural activity in primary auditory cortex (A1) and secondary auditory cortex (posterior ectosylvian gyrus, PEG). We used two context-dependent encoding models to test for evidence of streaming of the repeating stimulus: the first based on average evoked activity per noise sample and the second on the spectro-temporal receptive field. Both approaches tested whether differences in neural responses to repeating versus random stimuli were better modeled by scaling the response to both streams equally (global gain) or by separately scaling the response to the foreground versus the background stream (stream-specific gain). Consistent with previous observations of adaptation, we found an overall reduction in global gain when the stimulus began to repeat. However, when we measured stream-specific changes in gain, responses to the foreground were enhanced relative to the background. This enhancement was stronger in PEG than in A1. In A1, enhancement was strongest in units with low sparseness (i.e., broad sensory tuning) and with tuning selective for the repeated sample. Enhancement of responses to the foreground relative to the background provides evidence for stream segregation that emerges in A1 and is refined in PEG.

SIGNIFICANCE STATEMENT To interact with the world successfully, the brain must parse behaviorally important information from a complex sensory environment. Complex mixtures of sounds often arrive at the ears simultaneously or in close succession, yet they are effortlessly segregated into distinct perceptual sources. This process breaks down in hearing-impaired individuals and in speech recognition devices. By identifying the neural mechanisms that facilitate perceptual segregation, we can develop strategies for ameliorating hearing loss and improving speech recognition in the presence of background noise. Here, we present evidence for a hierarchical process, present in primary auditory cortex and refined in secondary auditory cortex, in which sound repetition facilitates segregation.
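The global-gain versus stream-specific-gain comparison can be sketched as two nested least-squares models. The regressors and gains below are simulated and purely illustrative, not the paper's actual model code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-sample response drives for the foreground (repeating)
# and background (random) streams.
T = 500
fg = np.abs(rng.standard_normal(T))
bg = np.abs(rng.standard_normal(T))

# Simulate a unit whose foreground response is selectively enhanced.
resp = 1.4 * fg + 0.8 * bg + 0.05 * rng.standard_normal(T)

# Global-gain model: one shared scale factor for both streams.
X_global = (fg + bg)[:, None]
g, *_ = np.linalg.lstsq(X_global, resp, rcond=None)

# Stream-specific model: separate foreground/background gains.
X_stream = np.stack([fg, bg], axis=1)
g_fg_bg, *_ = np.linalg.lstsq(X_stream, resp, rcond=None)

mse_global = np.mean((resp - X_global @ g) ** 2)
mse_stream = np.mean((resp - X_stream @ g_fg_bg) ** 2)
```

Because the global-gain model is a special case of the stream-specific model (equal gains), the stream-specific fit can only improve the error; a foreground gain reliably exceeding the background gain is the signature of stream-specific enhancement.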


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Noise, Animals, Female, Ferrets, Male, Neurons/physiology
6.
J Neurophysiol ; 123(1): 191-208, 2020 01 01.
Article in English | MEDLINE | ID: mdl-31721652

ABSTRACT

Recent research in mice indicates that luminance-independent fluctuations in pupil size predict variability in the spontaneous and evoked activity of single neurons in auditory and visual cortex. These findings suggest that pupil size is an indicator of large-scale changes in arousal state that affect sensory processing. However, it is not known whether pupil-related state also influences the selectivity of auditory neurons. We recorded pupil size and single-unit spiking activity in the primary auditory cortex (A1) of nonanesthetized male and female ferrets during presentation of natural vocalizations and of tone stimuli that allow measurement of frequency and level tuning. Neurons showed a systematic increase in both spontaneous and sound-evoked activity when the pupil was large, as well as desynchronization and a decrease in trial-to-trial variability. The relationship between pupil size and firing rate was nonmonotonic in some cells. In most neurons, several measures of tuning, including acoustic threshold, spectral bandwidth, and best frequency, remained stable across large changes in pupil size. Across the population, however, there was a small but significant decrease in acoustic threshold when the pupil was dilated. In some recordings, we observed rapid, saccade-like eye movements during sustained pupil constriction, which may indicate sleep. Including the presence of this state as a separate variable in a regression model of neural variability accounted for some, but not all, of the variability and nonmonotonicity associated with changes in pupil size.

NEW & NOTEWORTHY Cortical neurons vary in their responses to repeated stimuli, and some portion of this variability is due to fluctuations in network state. By simultaneously recording pupil size and single-neuron activity in the auditory cortex of ferrets, we provide new evidence that network state affects the excitability of auditory neurons but not their sensory selectivity. In addition, we report the occurrence of possible sleep states, adding to evidence that pupil size provides an index of both sleep and physiological arousal.
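A regression of the kind described above, with a quadratic pupil term (to capture nonmonotonicity) plus a binary state indicator, can be sketched on simulated data. All coefficients and the state definition are hypothetical, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-trial pupil sizes (normalized 0-1) and a binary
# "constricted pupil + saccade-like eye movements" state flag.
n = 400
pupil = rng.uniform(0, 1, n)
sleep_like = (pupil < 0.2) & (rng.uniform(0, 1, n) < 0.5)

# Simulated firing rate: rises then sags with pupil (nonmonotonic),
# with extra suppression in the sleep-like state.
rate = (5 + 8 * pupil - 4 * pupil**2
        - 2 * sleep_like + 0.2 * rng.standard_normal(n))

# Design matrix: intercept, linear and quadratic pupil, state indicator.
X = np.stack([np.ones(n), pupil, pupil**2, sleep_like.astype(float)], axis=1)
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
```

A negative quadratic coefficient captures the nonmonotonic pupil-rate relationship, and the state coefficient absorbs variability specific to the putative sleep state, mirroring the partial accounting reported above.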


Subject(s)
Arousal/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Cortical Synchronization/physiology, Evoked Potentials, Auditory/physiology, Pupil/physiology, Sleep/physiology, Animals, Female, Ferrets, Male, Neurons/physiology, Vocalization, Animal/physiology
7.
PLoS Comput Biol ; 15(10): e1007430, 2019 10.
Article in English | MEDLINE | ID: mdl-31626624

ABSTRACT

Perception of vocalizations and other behaviorally relevant sounds requires integrating acoustic information over hundreds of milliseconds. Sound-evoked activity in auditory cortex typically has much shorter latency, but the acoustic context, i.e., sound history, can modulate sound-evoked activity over longer periods. Contextual effects are attributed to modulatory phenomena, such as stimulus-specific adaptation and contrast gain control. However, an encoding model that links context to natural sound processing has yet to be established. We tested whether a model in which spectrally tuned inputs undergo adaptation mimicking short-term synaptic plasticity (STP) can account for contextual effects during natural sound processing. Single-unit activity was recorded from primary auditory cortex of awake ferrets during presentation of noise with natural temporal dynamics and fully natural sounds. Encoding properties were characterized by a standard linear-nonlinear spectro-temporal receptive field (LN) model and by variants that incorporated STP-like adaptation. In the adapting models, STP was applied either globally across all input spectral channels or locally to subsets of channels. For most neurons, models incorporating local STP predicted neural activity as well as or better than the LN and global STP models. The strength of nonlinear adaptation varied across neurons. Within neurons, adaptation was generally stronger for spectral channels with excitatory gain than for those with inhibitory gain. Neurons showing improved STP model performance also tended to undergo stimulus-specific adaptation, suggesting a common mechanism for these phenomena. When STP models were compared between passive and active behavior conditions, response gain often changed, but average STP parameters were stable. Thus, spectrally and temporally heterogeneous adaptation, subserved by a mechanism with STP-like dynamics, may support representation of the complex spectro-temporal patterns that comprise natural sounds across wide-ranging sensory contexts.
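A minimal sketch of STP-like synaptic depression applied to one input channel, loosely in the spirit of Tsodyks-Markram dynamics. The update rule and all parameters below are simplified and illustrative, not the paper's fitted model.

```python
# Depress an input channel by a "resource" variable d in [0, 1]:
# sustained input depletes d; it recovers with time constant tau.

def depress(inputs, u=0.5, tau=10.0, dt=1.0):
    """Return inputs scaled by available synaptic resources."""
    d = 1.0
    out = []
    for x in inputs:
        out.append(d * x)
        # Depletion proportional to transmitted signal, then recovery.
        d += dt * ((1.0 - d) / tau) - u * d * x * dt
        d = min(max(d, 0.0), 1.0)
    return out

# A sustained step input: the adapted output peaks at step onset and
# then sags toward a depressed steady state, while the input is constant.
step = [0.0] * 5 + [0.5] * 20
adapted = depress(step)
```

Applying independent copies of this operator to different spectral channels ("local STP") lets each channel adapt to its own recent history, which is the key difference from a single global adaptation stage.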


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Acoustic Stimulation, Action Potentials/physiology, Adaptation, Physiological/physiology, Animals, Female, Ferrets, Male, Models, Neurological, Spatial Interaction Models, Neuronal Plasticity, Neurons/physiology, Noise, Sound
8.
J Neurosci ; 38(46): 9955-9966, 2018 11 14.
Article in English | MEDLINE | ID: mdl-30266740

ABSTRACT

Responses of auditory cortical neurons encode sound features of incoming acoustic stimuli and are also shaped by stimulus context and history. Previous studies of mammalian auditory cortex have reported a variable time course for such contextual effects, ranging from milliseconds to minutes. In secondary auditory forebrain areas of songbirds, however, long-term stimulus-specific neuronal habituation to acoustic stimuli can persist for much longer periods, ranging from hours to days. Such long-term habituation in the songbird is a form of long-term auditory memory that requires gene expression. Although long-term habituation has been demonstrated in the avian auditory forebrain, this phenomenon has not previously been described in the mammalian auditory system. Using a similar version of the avian habituation paradigm, we explored whether such long-term effects of stimulus history also occur in the auditory cortex of a mammalian auditory generalist, the ferret. Following repetitive presentation of novel complex sounds, we observed significant response habituation in secondary, but not primary, auditory cortex. This long-term habituation appeared to be independent for each novel stimulus and often lasted for at least 20 min. The effects could not be explained by simple neuronal fatigue in the auditory pathway, because time-reversed sounds induced undiminished responses similar to those elicited by completely novel sounds. A parallel set of pupillometric measurements in the ferret revealed long-term habituation effects similar to the observed long-term neural habituation, supporting the hypothesis that habituation to passively presented stimuli is correlated with implicit learning and long-term recognition of familiar sounds.

SIGNIFICANCE STATEMENT Long-term habituation in higher areas of the songbird auditory forebrain is associated with gene expression and is correlated with recognition memory. Similar long-term auditory habituation in mammals has not been previously described. We studied such habituation in single neurons in the auditory cortex of awake ferrets that were passively listening to repeated presentations of various complex sounds. Responses exhibited long-lasting habituation (at least 20 min) in secondary, but not primary, auditory cortex. Habituation ceased when stimuli were played backward, despite their having identical spectral content to the original sounds. This long-term neural habituation correlated with similar habituation of ferret pupillary responses to repeated presentations of the same stimuli, suggesting that stimulus habituation is retained as a long-term behavioral memory.
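A toy habituation index, with a time-reversed stimulus serving as the fatigue control described above. All response values are hypothetical; the index definition is one common convention, not necessarily the paper's.

```python
# Fractional decrease of response from early to late repeats.

def habituation_index(responses):
    """(early - late) / early, over repeated presentations."""
    early = sum(responses[:3]) / 3
    late = sum(responses[-3:]) / 3
    return (early - late) / early

# Secondary-cortex-like unit: response declines over repeats...
familiar = [10.0, 9.0, 8.0, 6.5, 5.5, 5.0, 4.8, 4.6]
# ...but the time-reversed version of the same sound responds like a
# novel sound, arguing against simple neural fatigue.
reversed_version = [10.2, 9.8, 9.6, 9.5, 9.7, 9.4, 9.6, 9.5]

hi_familiar = habituation_index(familiar)
hi_reversed = habituation_index(reversed_version)
```

A large index for the repeated sound alongside a near-zero index for its time-reversed counterpart is the dissociation that rules out fatigue, since both stimuli drive the pathway with identical spectral content.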


Subject(s)
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Habituation, Psychophysiologic/physiology, Memory/physiology, Animals, Auditory Pathways/physiology, Female, Ferrets
9.
Cereb Cortex ; 28(1): 323-339, 2018 01 01.
Article in English | MEDLINE | ID: mdl-29136104

ABSTRACT

Auditory selective attention is required for parsing crowded acoustic environments, but cortical systems mediating the influence of behavioral state on auditory perception are not well characterized. Previous neurophysiological studies suggest that attention produces a general enhancement of neural responses to important target sounds versus irrelevant distractors. However, behavioral studies suggest that in the presence of masking noise, attention provides a focal suppression of distractors that compete with targets. Here, we compared effects of attention on cortical responses to masking versus non-masking distractors, controlling for effects of listening effort and general task engagement. We recorded single-unit activity from primary auditory cortex (A1) of ferrets during behavior and found that selective attention decreased responses to distractors masking targets in the same spectral band, compared with spectrally distinct distractors. This suppression enhanced neural target detection thresholds, suggesting that limited attention resources serve to focally suppress responses to distractors that interfere with target detection. Changing effort by manipulating target salience consistently modulated spontaneous but not evoked activity. Task engagement and changing effort tended to affect the same neurons, while attention affected an independent population, suggesting that distinct feedback circuits mediate effects of attention and effort in A1.


Subject(s)
Attention/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Neurons/physiology, Acoustic Stimulation, Action Potentials, Animals, Ferrets, Male, Microelectrodes, Neuropsychological Tests
10.
Nature ; 548(7665): 35-36, 2017 08 03.
Article in English | MEDLINE | ID: mdl-28723899

Subject(s)
Cognition, Neurons, Humans
11.
Proc Natl Acad Sci U S A ; 111(18): 6792-7, 2014 May 06.
Article in English | MEDLINE | ID: mdl-24753585

ABSTRACT

Humans and animals can reliably perceive behaviorally relevant sounds in noisy and reverberant environments, yet the neural mechanisms behind this phenomenon are largely unknown. To understand how neural circuits represent degraded auditory stimuli with additive and reverberant distortions, we compared single-neuron responses in ferret primary auditory cortex to speech and vocalizations in four conditions: clean, additive white and pink (1/f) noise, and reverberation. Despite substantial distortion, responses of neurons to the vocalization signal remained stable, maintaining the same statistical distribution in all conditions. Stimulus spectrograms reconstructed from population responses to the distorted stimuli resembled more the original clean than the distorted signals. To explore mechanisms contributing to this robustness, we simulated neural responses using several spectrotemporal receptive field models that incorporated either a static nonlinearity or subtractive synaptic depression and multiplicative gain normalization. The static model failed to suppress the distortions. A dynamic model incorporating feed-forward synaptic depression could account for the reduction of additive noise, but only the combined model with feedback gain normalization was able to predict the effects across both additive and reverberant conditions. Thus, both mechanisms can contribute to the abilities of humans and animals to extract relevant sounds in diverse noisy environments.
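The two nonlinear mechanisms compared above can be caricatured in a few lines: a subtractive, depression-like stage that removes a slowly adapting baseline (suppressing additive stationary noise), and a divisive gain-normalization stage that compresses level changes (as produced by reverberant energy). Parameters and signals are invented; this is a sketch of the mechanisms' character, not the paper's model.

```python
def depress_subtractive(x_seq, tau=20.0):
    """Subtract a slowly adapting estimate of the sustained baseline."""
    baseline, out = 0.0, []
    for x in x_seq:
        baseline += (x - baseline) / tau
        out.append(max(x - baseline, 0.0))
    return out

def normalize_gain(x_seq, sigma=1.0, tau=20.0):
    """Divide by a running estimate of overall level (feedback gain)."""
    level, out = 0.0, []
    for x in x_seq:
        level += (abs(x) - level) / tau
        out.append(x / (sigma + level))
    return out

# A transient vocalization-like pulse riding on a constant noise floor:
signal = [2.0] * 50
signal[25] = 10.0
cleaned = depress_subtractive(signal)

# The same tone at two overall levels: normalization compresses the
# 4x level difference at the output.
soft = normalize_gain([1.0] * 100)
loud = normalize_gain([4.0] * 100)
```

After the subtractive stage, the transient pulse stands out strongly against the adapted noise floor; after the divisive stage, loud and soft versions of a sound produce much more similar outputs than inputs.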


Subject(s)
Auditory Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation, Animals, Female, Ferrets/physiology, Humans, Models, Neurological, Neurons/physiology, Noise, Nonlinear Dynamics, Vocalization, Animal
12.
Proc Natl Acad Sci U S A ; 111(52): 18745-50, 2014 Dec 30.
Article in English | MEDLINE | ID: mdl-25512496

ABSTRACT

Noninvasive functional imaging holds great promise for serving as a translational bridge between human and animal models of various neurological and psychiatric disorders. However, despite a depth of knowledge of the cellular and molecular underpinnings of atypical processes in mouse models, little is known about the large-scale functional architecture measured by functional brain imaging, limiting translation to human conditions. Here, we provide a robust processing pipeline to generate high-resolution, whole-brain resting-state functional connectivity MRI (rs-fcMRI) images in the mouse. Using a mesoscale structural connectome (i.e., an anterograde tracer mapping of axonal projections across the mouse CNS), we show that rs-fcMRI in the mouse has strong structural underpinnings, validating our procedures. We next directly show that large-scale network properties previously identified in primates are present in rodents, although they differ in several ways. Last, we examine the existence of the so-called default mode network (DMN)--a distributed functional brain system identified in primates as being highly important for social cognition and overall brain function and atypically functionally connected across a multitude of disorders. We show the presence of a potential DMN in the mouse brain both structurally and functionally. Together, these studies confirm the presence of basic network properties and functional networks of high translational importance in structural and functional systems in the mouse brain. This work clears the way for an important bridge measurement between human and rodent models, enabling us to make stronger conclusions about how regionally specific cellular and molecular manipulations in mice relate back to humans.
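At its core, resting-state functional connectivity is the correlation structure of regional time series. A toy stand-in (synthetic signals, not real rs-fcMRI data) in which two "regions" share a slow fluctuation, as a DMN-like pair would, while a third is independent:

```python
import math
import random

random.seed(0)

# Hypothetical regional time series, 200 time points each.
n = 200
shared = [random.gauss(0, 1) for _ in range(n)]
region_a = [s + 0.3 * random.gauss(0, 1) for s in shared]
region_b = [s + 0.3 * random.gauss(0, 1) for s in shared]
region_c = [random.gauss(0, 1) for _ in range(n)]

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# "Functional connectivity" = correlation of regional time series.
fc_ab = pearson(region_a, region_b)
fc_ac = pearson(region_a, region_c)
```

The structural-underpinnings result above amounts to the observation that region pairs with strong axonal connectivity tend to show high values of exactly this kind of correlation.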


Subject(s)
Axons/pathology, Connectome, Magnetic Resonance Imaging, Nerve Net, Nervous System Diseases, Psychotic Disorders, Animals, Disease Models, Animal, Humans, Male, Mice, Nerve Net/pathology, Nerve Net/physiopathology, Nervous System Diseases/pathology, Nervous System Diseases/physiopathology, Psychotic Disorders/pathology, Psychotic Disorders/physiopathology
13.
J Neurosci ; 35(38): 13090-102, 2015 Sep 23.
Article in English | MEDLINE | ID: mdl-26400939

ABSTRACT

Previous research has demonstrated that auditory cortical neurons can modify their receptive fields when animals engage in auditory detection tasks. We tested for this form of task-related plasticity in the inferior colliculus (IC) of ferrets trained to detect a pure-tone target in a sequence of noise distractors that did not overlap in time. During behavior, responses were suppressed at the target tone frequency in approximately half of IC neurons relative to the passive state. This suppression often resulted from a combination of a local tuning change and a global change in overall excitability. Local and global suppression were stronger when the target frequency was aligned to the neuronal best frequency. Local suppression in the IC was indistinguishable from that described previously in auditory cortex, while global suppression was unique to the IC. The results demonstrate that engaging in an auditory task can change selectivity for task-relevant features in the midbrain, an area where these effects have not been reported previously.

SIGNIFICANCE STATEMENT Previous studies have demonstrated that the receptive fields of cortical neurons are modified when animals engage in auditory behaviors, a process that is hypothesized to provide the basis for segregating sound sources in an auditory scene. This study demonstrates for the first time that receptive fields of neurons in the midbrain inferior colliculus are also modified during behavior. The magnitude of the tuning changes is similar to that of previous reports in cortex. These results support a hierarchical model of behaviorally driven sound segregation that begins in the subcortical auditory network.


Subject(s)
Auditory Perception/physiology, Inferior Colliculi/cytology, Neuronal Plasticity/physiology, Neurons/physiology, Signal Detection, Psychological/physiology, Acoustic Stimulation, Acoustics, Action Potentials/physiology, Animals, Ferrets, Male, Models, Biological
14.
J Neurophysiol ; 115(5): 2389-98, 2016 06 01.
Article in English | MEDLINE | ID: mdl-26912594

ABSTRACT

Neural encoding of sensory stimuli is typically studied by averaging neural signals across repetitions of the same stimulus. However, recent work has suggested that the variance of neural activity across repeated trials can also depend on sensory inputs. Here we characterize how intertrial variance of the local field potential (LFP) in primary auditory cortex of awake ferrets is affected by continuous natural sound stimuli. We find that natural sounds often suppress the intertrial variance of low-frequency LFP (<16 Hz). However, the amount of the variance reduction is not significantly correlated with the amplitude of the mean response at the same recording site. Moreover, the variance changes occur with longer latency than the mean response. Although the dynamics of the mean response and intertrial variance differ, spectro-temporal receptive field analysis reveals that changes in LFP variance have frequency tuning similar to multiunit activity at the same recording site, suggesting a local origin for changes in LFP variance. In summary, the spectral tuning of LFP intertrial variance and the absence of a correlation with the amplitude of the mean evoked LFP suggest substantial heterogeneity in the interaction between spontaneous and stimulus-driven activity across local neural populations in auditory cortex.
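The separation between the mean evoked response and intertrial variance can be illustrated with a toy trials-by-time matrix: variance is computed across trials at each time bin, independently of the across-trial mean at that bin. All values below are hypothetical.

```python
def column_stats(trials):
    """Across-trial mean and sample variance for each time bin."""
    n = len(trials)
    stats = []
    for t in range(len(trials[0])):
        col = [trial[t] for trial in trials]
        m = sum(col) / n
        v = sum((x - m) ** 2 for x in col) / (n - 1)
        stats.append((m, v))
    return stats

# Hypothetical low-frequency LFP, 3 trials x 2 time bins:
# bin 0 = silence (large trial-to-trial variability, no mean response),
# bin 1 = during a natural sound (variance suppressed, mean evoked).
trials = [
    [1.0, 1.0],
    [-1.0, 1.1],
    [0.0, 0.9],
]
(mean_pre, var_pre), (mean_snd, var_snd) = column_stats(trials)
```

The key point from the abstract is that the two columns of this analysis can be decoupled: a bin can show strong variance suppression with a weak mean response, and the two measures can follow different latencies.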


Subject(s)
Auditory Cortex/physiology, Auditory Perception, Evoked Potentials, Auditory, Animals, Auditory Cortex/cytology, Ferrets, Neurons/physiology, Reaction Time, Sound
15.
PLoS Comput Biol ; 11(12): e1004628, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26683490

ABSTRACT

Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of > 1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture lead to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models.
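The factorization constraint described above can be sketched with a rank-1 STRF: a full FIR STRF has channels x lags free parameters, while the factorized version needs only one spectral filter plus one temporal filter. The example STRF below is synthetic and exactly rank-1 by construction, for illustration.

```python
import numpy as np

# Full FIR STRF: n_chan x n_lag free parameters.
n_chan, n_lag = 18, 15
spectral = np.exp(-0.5 * ((np.arange(n_chan) - 8) / 2.0) ** 2)  # tuning bump
temporal = np.exp(-np.arange(n_lag) / 4.0) * np.sin(np.arange(n_lag) / 2.0)
strf_full = np.outer(spectral, temporal)       # 18 * 15 = 270 parameters

# Recover the factors via SVD: the best rank-1 approximation.
U, s, Vt = np.linalg.svd(strf_full)
strf_rank1 = s[0] * np.outer(U[:, 0], Vt[0])   # 18 + 15 + 1 parameters

n_params_full = n_chan * n_lag
n_params_rank1 = n_chan + n_lag + 1
```

Parameterizing the spectral and temporal factors themselves (e.g., as Gaussian bumps and damped oscillators, as in this synthetic example) reduces the count further, which is how the paper's best models reach fewer than 30 parameters, roughly 10% of the FIR count.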


Subject(s)
Action Potentials/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Models, Neurological, Nerve Net/physiology, Sensory Receptor Cells/physiology, Acoustic Stimulation/methods, Algorithms, Animals, Computer Simulation, Ferrets, Signal Processing, Computer-Assisted
16.
PLoS Biol ; 10(1): e1001251, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22303281

ABSTRACT

How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
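The linear reconstruction approach can be sketched as least-squares decoding on simulated data: fit a linear map from population activity back to the spectrogram on one segment, then reconstruct a held-out segment from neural activity alone. Electrode counts, band counts, and noise level below are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical population: 30 electrodes responding (with noise) to a
# 10-band "auditory spectrogram" over 600 time bins.
n_elec, n_band, T = 30, 10, 600
mixing = rng.standard_normal((n_elec, n_band))        # unknown encoding
spectrogram = np.abs(rng.standard_normal((n_band, T)))
neural = mixing @ spectrogram + 0.1 * rng.standard_normal((n_elec, T))

# Fit the linear decoder on a training segment...
train_idx, test_idx = slice(0, 400), slice(400, 600)
decoder, *_ = np.linalg.lstsq(neural[:, train_idx].T,
                              spectrogram[:, train_idx].T, rcond=None)

# ...then reconstruct the held-out segment from neural activity alone.
recon = (neural[:, test_idx].T @ decoder).T

corr = np.corrcoef(recon.ravel(), spectrogram[:, test_idx].ravel())[0, 1]
```

In this linear toy case the reconstruction is near-perfect; the paper's finding is that a purely spectrogram-based linear decoder of this kind suffices for slow fluctuations, while fast fluctuations require a nonlinear modulation-energy representation.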


Subject(s)
Auditory Cortex/physiology , Brain Mapping , Speech Acoustics , Algorithms , Computer Simulation , Electrodes, Implanted , Electroencephalography , Female , Humans , Linear Models , Male , Models, Biological
17.
Proc Natl Acad Sci U S A ; 109(6): 2144-9, 2012 Feb 07.
Article in English | MEDLINE | ID: mdl-22308415

ABSTRACT

As sensory stimuli and behavioral demands change, the attentive brain quickly identifies task-relevant stimuli and associates them with appropriate motor responses. The effects of attention on sensory processing vary across task paradigms, suggesting that the brain may use multiple strategies and mechanisms to highlight attended stimuli and link them to motor action. To better understand factors that contribute to these variable effects, we studied sensory representations in primary auditory cortex (A1) during two instrumental tasks that shared the same auditory discrimination but required different behavioral responses, either approach or avoidance. In the approach task, ferrets were rewarded for licking a spout when they heard a target tone amid a sequence of reference noise sounds. In the avoidance task, they were punished unless they inhibited licking to the target. To explore how these changes in task reward structure influenced attention-driven rapid plasticity in A1, we measured changes in sensory neural responses during behavior. Responses to the target changed selectively during both tasks but did so with opposite sign. Despite the differences in sign, both effects were consistent with a general neural coding strategy that maximizes discriminability between sound classes. The dependence of the direction of plasticity on task suggests that representations in A1 change not only to sharpen representations of task-relevant stimuli but also to amplify responses to stimuli that signal aversive outcomes and lead to behavioral inhibition. Thus, top-down control of sensory processing can be shaped by task reward structure in addition to the required sensory discrimination.
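The coding strategy invoked above, maximizing discriminability between target and reference sound classes, is often quantified with a d-prime statistic on single-neuron responses. The function below is an illustrative version of that metric, not the paper's exact analysis.

```python
import numpy as np

def dprime(target_resp, reference_resp):
    """Neural discriminability (d') between trial-by-trial responses to
    target and reference sounds: difference of means divided by the
    pooled standard deviation. Positive values indicate enhanced
    target responses; negative values indicate suppression."""
    mu_t, mu_r = np.mean(target_resp), np.mean(reference_resp)
    var_t = np.var(target_resp, ddof=1)
    var_r = np.var(reference_resp, ddof=1)
    return (mu_t - mu_r) / np.sqrt(0.5 * (var_t + var_r))
```

Under this measure, the approach and avoidance tasks can both increase |d'| even though the sign of the target-response change is opposite.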


Subject(s)
Auditory Cortex/physiology , Neuronal Plasticity/physiology , Reward , Task Performance and Analysis , Acoustic Stimulation , Animals , Avoidance Learning/physiology , Behavior, Animal/physiology , Ferrets , Time Factors
18.
J Neurosci ; 33(49): 19154-66, 2013 Dec 04.
Article in English | MEDLINE | ID: mdl-24305812

ABSTRACT

Speech and other natural vocalizations are characterized by large modulations in their sound envelope. The timing of these modulations contains critical information for discrimination of important features, such as phonemes. We studied how depression of synaptic inputs, a mechanism frequently reported in cortex, can contribute to the encoding of envelope dynamics. Using a nonlinear stimulus-response model that accounted for synaptic depression, we predicted responses of neurons in ferret primary auditory cortex (A1) to stimuli with natural temporal modulations. The depression model consistently performed better than linear and second-order models previously used to characterize A1 neurons, and it produced more biologically plausible fits. To test how synaptic depression can contribute to temporal stimulus integration, we used nonparametric maximum a posteriori decoding to compare the ability of neurons showing and not showing depression to reconstruct the stimulus envelope. Neurons showing evidence for depression reconstructed stimuli over a longer range of latencies. These findings suggest that variation in depression across the cortical population supports a rich code for representing the temporal dynamics of natural sounds.
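The synaptic depression mechanism described above can be sketched with a resource-depletion model in the spirit of Tsodyks-Markram dynamics: each unit of stimulus drive consumes a fraction of available synaptic resources, which then recover with a time constant. The functional form and all parameter values below are illustrative, not the fitted model from the paper.

```python
import numpy as np

def depressed_response(stim, u=0.5, tau_rec=0.1, dt=0.01):
    """Apply a synaptic-depression nonlinearity to a stimulus envelope.
    stim: nonnegative input drive per time bin.
    u: fraction of resources consumed per unit drive (illustrative).
    tau_rec: recovery time constant in seconds (illustrative).
    Returns the effective (depressed) drive at each time bin."""
    d = 1.0  # available synaptic resources, in [0, 1]
    out = np.zeros_like(stim)
    for i, s in enumerate(stim):
        out[i] = d * s  # effective drive after depression
        # resources recover toward 1 and are depleted by the input
        d += dt * (1.0 - d) / tau_rec - u * s * d * dt
        d = np.clip(d, 0.0, 1.0)
    return out
```

Because the depleted resource variable carries a memory of recent input, a neuron with strong depression responds to the stimulus envelope over a longer effective latency range, consistent with the decoding result reported above.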


Subject(s)
Auditory Cortex/physiology , Ferrets/physiology , Acoustic Stimulation , Algorithms , Animals , Auditory Perception/physiology , Craniotomy , Data Interpretation, Statistical , Electrophysiological Phenomena , Female , Models, Neurological , Neurons/physiology , Nonlinear Dynamics , Synapses/physiology , Vocalization, Animal/physiology
19.
Curr Res Neurobiol ; 6: 100118, 2024.
Article in English | MEDLINE | ID: mdl-38152461

ABSTRACT

Accurate sound perception can require integrating information over hundreds of milliseconds or even seconds. Spectro-temporal models of sound coding by single neurons in auditory cortex indicate that the majority of sound-evoked activity can be attributed to stimulus features occurring within the preceding few tens of milliseconds. It remains uncertain how the auditory system integrates information about sensory context on a longer timescale. Here we characterized long-lasting contextual effects in auditory cortex (AC) using a diverse set of natural sound stimuli. We measured context effects as the difference in a neuron's response to a single probe sound following two different context sounds. Many AC neurons showed context effects lasting longer than the temporal window of a traditional spectro-temporal receptive field. The duration and magnitude of context effects varied substantially across neurons and stimuli. This diversity of context effects formed a sparse code across the neural population that encoded a wider range of contexts than any constituent neuron. Encoding model analysis indicates that context effects can be explained by activity in the local neural population, suggesting that recurrent local circuits support a long-lasting representation of sensory context in auditory cortex.
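The context-effect measure described above, the difference in a neuron's probe response following two different context sounds, and its duration can be sketched as follows. The thresholding used here to define duration is a placeholder for a proper statistical (e.g. shuffle-based) test, and all names are illustrative.

```python
import numpy as np

def context_effect(probe_after_a, probe_after_b):
    """Time course of the context effect: the difference between a
    neuron's PSTH to the same probe sound following context A and
    following context B (arrays of per-bin firing rates)."""
    return probe_after_a - probe_after_b

def effect_duration(diff, threshold, dt):
    """Duration of the context effect: the time of the last bin in
    which the absolute response difference exceeds a threshold.
    dt is the bin width in seconds; returns 0.0 if no bin exceeds."""
    above = np.flatnonzero(np.abs(diff) > threshold)
    return (above[-1] + 1) * dt if above.size else 0.0
```

A context effect outlasting a conventional STRF would show up here as a duration well beyond the STRF's temporal window (a few tens of milliseconds).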

20.
PLoS Comput Biol ; 8(11): e1002775, 2012.
Article in English | MEDLINE | ID: mdl-23166484

ABSTRACT

How interactions between neurons relate to tuned neural responses is a longstanding question in systems neuroscience. Here we use statistical modeling and simultaneous multi-electrode recordings to explore the relationship between these interactions and tuning curves in six different brain areas. We find that, in most cases, functional interactions between neurons provide an explanation of spiking that complements and, in some cases, surpasses the influence of canonical tuning curves. Modeling functional interactions improves both encoding and decoding accuracy by accounting for noise correlations and features of the external world that tuning curves fail to capture. In cortex, modeling coupling alone allows spikes to be predicted more accurately than tuning curve models based on external variables. These results suggest that statistical models of functional interactions between even relatively small numbers of neurons may provide a useful framework for examining neural coding.
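The modeling framework described above, a point-process GLM whose spiking drive combines tuning to external variables with coupling to other neurons' recent activity, can be sketched in its simplest form. The exponential link and the variable names below are a generic sketch of the coupled-GLM approach, not the authors' exact specification.

```python
import numpy as np

def glm_rate(x_tuning, spikes_others, w_tuning, w_coupling, bias):
    """Conditional intensity of a Poisson GLM.
    x_tuning: external covariates (e.g. stimulus or movement features),
              shape (n_bins, n_tuning_features).
    spikes_others: filtered recent spiking of the other recorded
              neurons, shape (n_bins, n_coupled).
    Returns the expected spike count per time bin: the tuning term and
    the coupling term sum in the log-rate and pass through exp()."""
    drive = x_tuning @ w_tuning + spikes_others @ w_coupling + bias
    return np.exp(drive)
```

A coupling-only model (zeroing `w_tuning`) versus a tuning-only model (zeroing `w_coupling`) gives the comparison referenced above: in cortex, the coupling term alone can predict spikes better than the tuning-curve term alone.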


Subject(s)
Models, Neurological , Models, Statistical , Neurons/physiology , Action Potentials/physiology , Animals , Brain/physiology , Computational Biology , Computer Simulation , Databases, Factual , Electrodes , Electrophysiology , Macaca , Nerve Net/physiology