ABSTRACT
To increase computational flexibility, the processing of sensory inputs changes with behavioural context. In the visual system, active behavioural states characterized by motor activity and pupil dilation [1,2] enhance sensory responses, but typically leave the preferred stimuli of neurons unchanged [2-9]. Here we find that behavioural state also modulates stimulus selectivity in the mouse visual cortex in the context of coloured natural scenes. Using population imaging in behaving mice, pharmacology and deep neural network modelling, we identified a rapid shift in colour selectivity towards ultraviolet stimuli during an active behavioural state. This was exclusively caused by state-dependent pupil dilation, which resulted in a dynamic switch from rod to cone photoreceptors, thereby extending their role beyond night and day vision. The change in tuning facilitated the decoding of ethological stimuli, such as aerial predators against the twilight sky [10]. For decades, studies in neuroscience and cognitive science have used pupil dilation as an indirect measure of brain state. Our data suggest that, in addition, state-dependent pupil dilation itself tunes visual representations to behavioural demands by differentially recruiting rods and cones on fast timescales.
Subjects
Color, Pupil, Pupillary Reflex, Vision, Ocular, Visual Cortex, Animals, Darkness, Deep Learning, Mice, Photic Stimulation, Pupil/physiology, Pupil/radiation effects, Pupillary Reflex/physiology, Retinal Cone Photoreceptor Cells/drug effects, Retinal Cone Photoreceptor Cells/physiology, Retinal Rod Photoreceptor Cells/drug effects, Retinal Rod Photoreceptor Cells/physiology, Time Factors, Ultraviolet Rays, Vision, Ocular/physiology, Visual Cortex/physiology
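A back-of-envelope illustration of the mechanism summarized above; all numbers are assumptions for illustration, not values from the paper. Retinal illuminance scales linearly with pupil area (trolands = luminance in cd/m² times pupil area in mm²), so a state-dependent change in pupil diameter alone can shift the retina across the mesopic range where rods hand off to cones.

```python
# Sketch: retinal illuminance as a function of pupil diameter.
import math

def retinal_illuminance_td(luminance_cd_m2: float, pupil_diameter_mm: float) -> float:
    """Trolands: T = L * A, with A the pupil area in mm^2."""
    area_mm2 = math.pi * (pupil_diameter_mm / 2) ** 2
    return luminance_cd_m2 * area_mm2

L = 1.0  # cd/m^2; an assumed twilight-like luminance
for d in (0.5, 1.0, 2.0):  # assumed constricted -> dilated pupil diameters
    print(f"pupil {d} mm -> {retinal_illuminance_td(L, d):.2f} Td")
```

Dilating from 0.5 mm to 2.0 mm raises retinal illuminance sixteen-fold at fixed scene luminance, which is the lever the abstract attributes to behavioural state.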
ABSTRACT
Color is an important visual feature that informs behavior, and the retinal basis for color vision has been studied across various vertebrate species. While many studies have investigated how color information is processed in visual brain areas of primate species, we have limited understanding of how it is organized beyond the retina in other species, including most dichromatic mammals. In this study, we systematically characterized how color is represented in the primary visual cortex (V1) of mice. Using large-scale neuronal recordings and a luminance and color noise stimulus, we found that more than a third of neurons in mouse V1 are color-opponent in their receptive field center, while the receptive field surround predominantly captures luminance contrast. Furthermore, we found that color-opponency is especially pronounced in posterior V1 that encodes the sky, matching the statistics of natural scenes experienced by mice. Using unsupervised clustering, we demonstrate that the asymmetry in color representations across cortex can be explained by an uneven distribution of green-On/UV-Off color-opponent response types that are represented in the upper visual field. Finally, a simple model with natural scene-inspired parametric stimuli shows that green-On/UV-Off color-opponent response types may enhance the detection of 'predatory'-like dark UV-objects in noisy daylight scenes. The results from this study highlight the relevance of color processing in the mouse visual system and contribute to our understanding of how color information is organized in the visual hierarchy across species.
Subjects
Color Vision, Visual Cortex, Animals, Mice, Color Vision/physiology, Visual Cortex/physiology, Color Perception/physiology, Photic Stimulation, Mice, Inbred C57BL, Neurons/physiology, Primary Visual Cortex/physiology, Male
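A heavily simplified sketch of how center color-opponency can be read out from responses to a two-channel (UV/green) noise stimulus; this is an assumed reverse-correlation pipeline with a simulated neuron, not the authors' code.

```python
# Sketch: spike-triggered average (STA) on UV/green noise, then an opponency test.
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 20000, 10, 10
stim = rng.choice([-1.0, 1.0], size=(T, 2, H, W))        # channels: (UV, green)

# Toy green-On/UV-Off neuron: opposite-sign center weights in the two channels.
rf = np.zeros((2, H, W))
rf[0, 4:6, 4:6] = -1.0                                   # UV-Off center
rf[1, 4:6, 4:6] = +1.0                                   # green-On center
spikes = rng.poisson(np.maximum((stim * rf).sum(axis=(1, 2, 3)), 0))

sta = (spikes[:, None, None, None] * stim).sum(0) / spikes.sum()
uv_center = sta[0, 4:6, 4:6].mean()
green_center = sta[1, 4:6, 4:6].mean()
print("center is color-opponent:", uv_center * green_center < 0)
```

Applied to real recordings, the same sign test on the fitted center weights classifies neurons as color-opponent or non-opponent.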
ABSTRACT
A key feature of neurons in the primary visual cortex (V1) of primates is their orientation selectivity. Recent studies using deep neural network models showed that the most exciting inputs (MEIs) for mouse V1 neurons exhibit complex spatial structures that predict non-uniform orientation selectivity across the receptive field (RF), in contrast to the classical Gabor filter model. Using local patches of drifting gratings, we identified heterogeneous orientation tuning in mouse V1 that varied by up to 90° across sub-regions of the RF. This heterogeneity correlated with deviations from optimal Gabor filters and was consistent across cortical layers and recording modalities (calcium vs. spikes). In contrast, model-synthesized MEIs for macaque V1 neurons were predominantly Gabor-like, consistent with previous studies. These findings suggest that complex spatial feature selectivity emerges earlier in the visual pathway in mice than in primates. This may provide a faster, though less general, method of extracting task-relevant information.
Subjects
Primary Visual Cortex, Animals, Mice, Primary Visual Cortex/physiology, Orientation/physiology, Mice, Inbred C57BL, Neurons/physiology, Photic Stimulation, Male, Visual Fields/physiology, Visual Cortex/physiology, Visual Pathways/physiology, Primates
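To make "heterogeneous orientation tuning across the RF" operational, here is a toy sketch; the filters, patch sizes, and probing scheme are assumptions, not the paper's analysis. A Gabor RF yields one preferred orientation everywhere, while an RF stitched from two differently oriented halves yields local preferences that differ by 90°.

```python
# Sketch: probe RF sub-regions with small oriented Gabors and compare preferences.
import numpy as np

def gabor(size, theta, freq=0.2, sigma=3.0, phase=0.0):
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr + phase)

def local_pref_deg(rf, row, col, patch=8):
    """Preferred orientation of one RF sub-region (quadrature energy over probes)."""
    sub = rf[row:row + patch, col:col + patch]
    thetas = np.linspace(0, np.pi, 12, endpoint=False)
    energy = [np.hypot((sub * gabor(patch, t)).sum(),
                       (sub * gabor(patch, t, phase=np.pi / 2)).sum())
              for t in thetas]
    return np.degrees(thetas[int(np.argmax(energy))])

rf = np.zeros((32, 32))
rf[:, :16] = gabor(32, 0.0, sigma=8.0)[:, :16]           # left half: vertical carrier
rf[:, 16:] = gabor(32, np.pi / 2, sigma=8.0)[:, 16:]     # right half: horizontal carrier
print(local_pref_deg(rf, 12, 6), local_pref_deg(rf, 12, 18))  # ~0.0 and ~90.0
```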
ABSTRACT
Understanding how biological visual systems process information is challenging because of the nonlinear relationship between visual input and neuronal responses. Artificial neural networks allow computational neuroscientists to create predictive models that connect biological and machine vision. Machine learning has benefited tremendously from benchmarks that compare different models on the same task under standardized conditions. However, there was no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we established the SENSORIUM 2023 Benchmark Competition with dynamic input, featuring a new large-scale dataset from the primary visual cortex of ten mice. This dataset includes responses from 78,853 neurons to 2 hours of dynamic stimuli per neuron, together with behavioral measurements such as running speed, pupil dilation, and eye movements. The competition ranked models in two tracks based on predictive performance for neuronal responses on a held-out test set: one focusing on predicting in-domain natural stimuli and another on out-of-distribution (OOD) stimuli to assess model generalization. As part of the NeurIPS 2023 competition track, we received more than 160 model submissions from 22 teams. Several new architectures for predictive models were proposed, and the winning teams improved on the previous state-of-the-art model by 50%. Access to the dataset as well as the benchmarking infrastructure will remain online at www.sensorium-competition.net.
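For readers unfamiliar with how such benchmarks rank submissions, a minimal sketch of the standard per-neuron test metric follows; the exact SENSORIUM 2023 scoring may differ in detail.

```python
# Sketch: Pearson correlation per neuron on held-out trials, then averaged.
import numpy as np

def per_neuron_correlation(pred: np.ndarray, obs: np.ndarray) -> np.ndarray:
    """pred, obs: (time, neurons) arrays of predicted / observed responses."""
    pred = pred - pred.mean(0)
    obs = obs - obs.mean(0)
    denom = np.sqrt((pred**2).sum(0) * (obs**2).sum(0))
    return (pred * obs).sum(0) / np.maximum(denom, 1e-12)

rng = np.random.default_rng(1)
obs = rng.poisson(2.0, size=(1000, 50)).astype(float)    # fake held-out responses
pred = obs + rng.normal(0, 1.0, obs.shape)               # an imperfect model
print("mean test correlation:", per_neuron_correlation(pred, obs).mean())
```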
ABSTRACT
Color is an important visual feature that informs behavior, and the retinal basis for color vision has been studied across various vertebrate species. While we know how color information is processed in visual brain areas of primates, we have limited understanding of how it is organized beyond the retina in other species, including most dichromatic mammals. In this study, we systematically characterized how color is represented in the primary visual cortex (V1) of mice. Using large-scale neuronal recordings and a luminance and color noise stimulus, we found that more than a third of neurons in mouse V1 are color-opponent in their receptive field center, while the receptive field surround predominantly captures luminance contrast. Furthermore, we found that color-opponency is especially pronounced in posterior V1 that encodes the sky, matching the statistics of mouse natural scenes. Using unsupervised clustering, we demonstrate that the asymmetry in color representations across cortex can be explained by an uneven distribution of green-On/UV-Off color-opponent response types that are represented in the upper visual field. This type of color-opponency in the receptive field center was not present at the level of the retinal output and, therefore, is likely computed in the cortex by integrating upstream visual signals. Finally, a simple model with natural scene-inspired parametric stimuli shows that green-On/UV-Off color-opponent response types may enhance the detection of "predatory"-like dark UV-objects in noisy daylight scenes. The results from this study highlight the relevance of color processing in the mouse visual system and contribute to our understanding of how color information is organized in the visual hierarchy across species. More broadly, they support the hypothesis that visual cortex combines upstream information towards computing neuronal selectivity to behaviorally relevant sensory features.
ABSTRACT
Understanding how biological visual systems process information is challenging due to the complex nonlinear relationship between neuronal responses and high-dimensional visual input. Artificial neural networks have already improved our understanding of this system by allowing computational neuroscientists to create predictive models and bridge biological and machine vision. During the Sensorium 2022 competition, we introduced benchmarks for vision models with static input (i.e. images). However, animals operate and excel in dynamic environments, making it crucial to study and understand how the brain functions under these conditions. Moreover, many biological theories, such as predictive coding, suggest that previous input is crucial for processing current input. Currently, there is no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we propose the Sensorium 2023 Benchmark Competition with dynamic input (https://www.sensorium-competition.net/). This competition includes the collection of a new large-scale dataset from the primary visual cortex of five mice, containing responses from over 38,000 neurons to over 2 hours of dynamic stimuli per neuron. Participants in the main benchmark track will compete to identify the best predictive models of neuronal responses for dynamic input (i.e. video). We will also host a bonus track in which submission performance will be evaluated on out-of-domain input, using withheld neuronal responses to dynamic input stimuli whose statistics differ from the training set. Both tracks will offer behavioral data along with video stimuli. As before, we will provide code, tutorials, and strong pre-trained baseline models to encourage participation. We hope this competition will continue to strengthen the accompanying Sensorium benchmarks collection as a standard tool to measure progress in large-scale neural system identification models of the entire mouse visual hierarchy and beyond.
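A minimal sketch of the kind of dynamic predictive model the competition targets: a shared convolutional core over video with behavioral signals appended as input channels, plus a per-neuron readout. The architecture, sizes, and behavior handling are illustrative assumptions, not the competition baseline.

```python
# Sketch: 3D-conv core + per-neuron readout for video -> response prediction.
import torch
import torch.nn as nn

class VideoCoreReadout(nn.Module):
    def __init__(self, n_neurons: int, channels: int = 16):
        super().__init__()
        self.core = nn.Sequential(                       # shared across all neurons
            nn.Conv3d(1 + 2, channels, kernel_size=(5, 7, 7), padding=(2, 3, 3)),
            nn.ELU(),
            nn.Conv3d(channels, channels, kernel_size=(5, 7, 7), padding=(2, 3, 3)),
            nn.ELU(),
        )
        self.readout = nn.Conv3d(channels, n_neurons, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool3d((None, 1, 1))   # collapse space, keep time

    def forward(self, video, behavior):
        # video: (batch, 1, time, H, W); behavior: (batch, 2, time) -> broadcast
        b = behavior[:, :, :, None, None].expand(-1, -1, -1, *video.shape[-2:])
        feat = self.core(torch.cat([video, b], dim=1))
        rate = self.pool(self.readout(feat)).squeeze(-1).squeeze(-1)
        return nn.functional.softplus(rate)              # (batch, neurons, time)

model = VideoCoreReadout(n_neurons=100)
out = model(torch.rand(2, 1, 32, 36, 64), torch.rand(2, 2, 32))
print(out.shape)  # torch.Size([2, 100, 32])
```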
ABSTRACT
A defining characteristic of intelligent systems, whether natural or artificial, is the ability to generalize and infer behaviorally relevant latent causes from high-dimensional sensory input, despite significant variations in the environment. To understand how brains achieve generalization, it is crucial to identify the features to which neurons respond selectively and invariantly. However, the high-dimensional nature of visual inputs, the non-linearity of information processing in the brain, and limited experimental time make it challenging to systematically characterize neuronal tuning and invariances, especially for natural stimuli. Here, we extended "inception loops" - a paradigm that iterates between large-scale recordings, neural predictive models, and in silico experiments followed by in vivo verification - to systematically characterize single-neuron invariances in the mouse primary visual cortex. Using the predictive model, we synthesized Diverse Exciting Inputs (DEIs), a set of inputs that differ substantially from each other while each driving a target neuron strongly, and verified these DEIs' efficacy in vivo. We discovered a novel bipartite invariance: one portion of the receptive field encoded phase-invariant, texture-like patterns, while the other portion encoded a fixed spatial pattern. Our analysis revealed that the division between the fixed and invariant portions of the receptive fields aligns with object boundaries defined by spatial frequency differences present in highly activating natural images. These findings suggest that bipartite invariance might play a role in segmentation by detecting texture-defined object boundaries, independent of the phase of the texture. We also replicated these bipartite DEIs in the functional connectomics MICrONS dataset, which opens the way towards a circuit-level mechanistic understanding of this novel type of invariance. Our study demonstrates the power of using a data-driven deep learning approach to systematically characterize neuronal invariances. By applying this method across the visual hierarchy, cell types, and sensory modalities, we can decipher how latent variables are robustly extracted from natural scenes, leading to a deeper understanding of generalization.
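A hedged sketch of the in silico synthesis step: gradient ascent on the input maximizes a model neuron's predicted response, while a pairwise-distance term pushes a batch of images apart, yielding diverse inputs that all excite the same neuron. The paper's actual DEI objective and regularizers are richer than this.

```python
# Sketch: synthesize Diverse Exciting Inputs (DEIs) for one model neuron.
import torch

def synthesize_deis(model_neuron, n_images=4, size=36, steps=200,
                    lr=0.05, diversity_weight=0.1):
    x = torch.randn(n_images, 1, size, size, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        activation = model_neuron(x).sum()               # drive the neuron hard
        flat = x.view(n_images, -1)
        diversity = torch.cdist(flat, flat).mean()       # keep images dissimilar
        (-activation - diversity_weight * diversity).backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(-1, 1)                              # keep stimuli bounded
    return x.detach()

# Toy stand-in for a trained predictive model's single-neuron output:
toy_neuron = lambda x: (x[:, :, 10:20, 10:20] ** 2).sum(dim=(1, 2, 3))
deis = synthesize_deis(toy_neuron)
print(deis.shape)  # torch.Size([4, 1, 36, 36])
```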
ABSTRACT
A key role of sensory processing is integrating information across space. Neuronal responses in the visual system are influenced by both local features in the receptive field center and contextual information from the surround. While center-surround interactions have been extensively studied using simple stimuli like gratings, investigating these interactions with more complex, ecologically relevant stimuli is challenging due to the high dimensionality of the stimulus space. We used large-scale neuronal recordings in mouse primary visual cortex to train convolutional neural network (CNN) models that accurately predicted center-surround interactions for natural stimuli. These models enabled us to synthesize surround stimuli that strongly suppressed or enhanced neuronal responses to the optimal center stimulus, as confirmed by in vivo experiments. In contrast to the common notion that congruent center and surround stimuli are suppressive, we found that excitatory surrounds appeared to complete spatial patterns in the center, while inhibitory surrounds disrupted them. We quantified this effect by demonstrating that CNN-optimized excitatory surround images have strong similarity in neuronal response space with surround images generated by extrapolating the statistical properties of the center, and with patches of natural scenes, which are known to exhibit high spatial correlations. Our findings cannot be explained by theories such as redundancy reduction or predictive coding previously linked to contextual modulation in visual cortex. Instead, we demonstrated that a hierarchical probabilistic model that incorporates Bayesian inference and modulates neuronal responses based on prior knowledge of natural scene statistics can explain our empirical results. We replicated these center-surround effects in the multi-area functional connectomics MICrONS dataset using natural movies as visual stimuli, which opens the way towards understanding circuit-level mechanisms, such as the contributions of lateral and feedback recurrent connections. Our data-driven modeling approach provides a new understanding of the role of contextual interactions in sensory processing and can be adapted across brain areas, sensory modalities, and species.
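A simplified sketch of the surround-synthesis idea: hold the optimal center stimulus fixed behind a center mask and optimize only the surround pixels, once to maximize ("excitatory surround") and once to minimize ("inhibitory surround") the model response. Mask geometry, sizes, and the toy "neuron" are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: optimize a surround while the center stimulus stays fixed.
import torch

def optimize_surround(model_neuron, center_img, center_mask, maximize=True,
                      steps=200, lr=0.05):
    surround = torch.randn_like(center_img, requires_grad=True)
    opt = torch.optim.Adam([surround], lr=lr)
    sign = -1.0 if maximize else 1.0                     # Adam minimizes the loss
    for _ in range(steps):
        opt.zero_grad()
        stim = center_mask * center_img + (1 - center_mask) * surround
        (sign * model_neuron(stim).sum()).backward()
        opt.step()
        with torch.no_grad():
            surround.clamp_(-1, 1)                       # keep contrast bounded
    return (center_mask * center_img + (1 - center_mask) * surround).detach()

yy, xx = torch.meshgrid(torch.arange(36), torch.arange(36), indexing="ij")
mask = (((yy - 18) ** 2 + (xx - 18) ** 2) < 8 ** 2).float()[None, None]
center = torch.randn(1, 1, 36, 36)                       # stands in for the optimal center
toy_neuron = lambda s: s.mean(dim=(1, 2, 3))
excitatory = optimize_surround(toy_neuron, center, mask, maximize=True)
inhibitory = optimize_surround(toy_neuron, center, mask, maximize=False)
```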
ABSTRACT
Understanding the brain's perception algorithm is a highly intricate problem, as the inherent complexity of sensory inputs and the brain's nonlinear processing make characterizing sensory representations difficult. Recent studies have shown that functional models, capable of predicting large-scale neuronal activity in response to arbitrary sensory input, can be powerful tools for characterizing neuronal representations by enabling high-throughput in silico experiments. However, accurately modeling responses to dynamic and ecologically relevant inputs like videos remains challenging, particularly when generalizing to new stimulus domains outside the training distribution. Inspired by recent breakthroughs in artificial intelligence, where foundation models, trained on vast quantities of data, have demonstrated remarkable capabilities and generalization, we developed a "foundation model" of the mouse visual cortex: a deep neural network trained on large amounts of neuronal responses to ecological videos from multiple visual cortical areas and mice. The model accurately predicted neuronal responses not only to natural videos but also to various new stimulus domains, such as coherent moving dots and noise patterns, underscoring its generalization abilities. The foundation model could also be adapted to new mice with minimal natural movie training data. We applied the foundation model to the MICrONS dataset: a study of the brain that integrates structure with function at unprecedented scale, containing nanometer-scale morphology, connectivity with >500,000,000 synapses, and the function of >70,000 neurons within a ~1 mm³ volume spanning multiple areas of the mouse visual cortex. This accurate functional model of the MICrONS data opens the possibility for a systematic characterization of the relationship between circuit structure and function. By precisely capturing the response properties of the visual cortex and generalizing to new stimulus domains and mice, foundation models can pave the way for a deeper understanding of visual computation.
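A sketch of the transfer recipe implied by "adapted to new mice with minimal natural movie training data": freeze the shared core and fit only a new per-neuron readout. Module shapes and the training loop are assumptions, not the released codebase.

```python
# Sketch: adapt a frozen "foundation" core to a new mouse via a fresh readout.
import torch
import torch.nn as nn

def adapt_to_new_mouse(core: nn.Module, n_new_neurons: int, core_channels: int):
    for p in core.parameters():
        p.requires_grad = False                          # freeze the shared core
    readout = nn.Conv3d(core_channels, n_new_neurons, kernel_size=1)
    return readout, torch.optim.Adam(readout.parameters(), lr=1e-3)

def training_step(core, readout, optimizer, video, responses):
    # video: (batch, C_in, time, H, W); responses: (batch, neurons, time)
    optimizer.zero_grad()
    feat = core(video).mean(dim=(-2, -1), keepdim=True)  # crude spatial pooling
    pred = nn.functional.softplus(readout(feat)).squeeze(-1).squeeze(-1)
    loss = nn.functional.poisson_nll_loss(pred, responses, log_input=False)
    loss.backward()                                      # gradients reach the readout only
    optimizer.step()
    return loss.item()

core = nn.Conv3d(1, 16, kernel_size=3, padding=1)        # stand-in trained core
readout, opt = adapt_to_new_mouse(core, n_new_neurons=200, core_channels=16)
print(training_step(core, readout, opt,
                    torch.rand(2, 1, 16, 18, 32), torch.rand(2, 200, 16)))
```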
ABSTRACT
To understand how the brain computes, it is important to unravel the relationship between circuit connectivity and function. Previous research has shown that excitatory neurons with similar response properties in layer 2/3 of mouse primary visual cortex are more likely to form connections. However, technical challenges of combining synaptic connectivity and functional measurements have limited these studies to a few, highly local connections. Utilizing the millimeter scale and nanometer resolution of the MICrONS dataset, we studied the connectivity-function relationship in excitatory neurons of the mouse visual cortex across interlaminar and interarea projections, assessing connection selectivity at the coarse level of axon trajectories and the fine level of synapse formation. A digital twin model of this mouse, which accurately predicted responses to arbitrary video stimuli, enabled a comprehensive characterization of neuronal function. We found that neurons with highly correlated responses to natural videos tended to be connected with each other, not only within the same cortical area but also across multiple layers and visual areas, including feedforward and feedback connections; in contrast, we did not find that orientation preference predicted connectivity. The digital twin model separated each neuron's tuning into a feature component (what the neuron responds to) and a spatial component (where the neuron's receptive field is located). We show that the feature component, but not the spatial component, predicted which neurons were connected at the fine synaptic scale. Together, our results demonstrate that the "like-to-like" connectivity rule generalizes to multiple connection types and that the rich MICrONS dataset is suitable for further refining a mechanistic understanding of circuit structure and function.
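An illustrative sketch of the basic like-to-like test on synthetic data (not the MICrONS analysis code): compare the response similarity of synaptically connected neuron pairs against unconnected pairs.

```python
# Sketch: are connected pairs more functionally similar than unconnected pairs?
import numpy as np

rng = np.random.default_rng(2)
n = 200
latents = rng.normal(size=(5, 5000))                     # shared stimulus-driven factors
responses = rng.normal(size=(n, 5)) @ latents + rng.normal(size=(n, 5000))
similarity = np.corrcoef(responses)                      # pairwise response correlation

# Toy connectivity with a like-to-like bias built in.
p_connect = 0.02 * (1 + 3 * np.clip(similarity, 0, None))
adjacency = rng.random((n, n)) < p_connect
np.fill_diagonal(adjacency, False)

iu = np.triu_indices(n, k=1)
connected, sim = adjacency[iu], similarity[iu]
print("mean similarity | connected:  ", sim[connected].mean())
print("mean similarity | unconnected:", sim[~connected].mean())
```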
ABSTRACT
Glutamate (GLU) and γ-aminobutyric acid (GABA) are the major excitatory (E) and inhibitory (I) neurotransmitters in the brain, respectively. Dysregulation of the E/I ratio is associated with numerous neurological disorders. Enzyme-based microelectrode array biosensors offer the potential for improved biocompatibility, localized sample volumes, and much faster sampling rates than existing measurement methods. However, enzymes degrade over time. To overcome this time limitation of permanently implanted microbiosensors, we created a microwire-based biosensor that can be periodically inserted into a permanently implanted cannula. Biosensor coatings were based on our previously developed GLU and reagent-free GABA shank-type biosensor. In addition, the microwire biosensors were arranged in the same geometric plane for improved acquisition of signals in planar tissue, including rodent brain slices, cultured cells, and brain regions with laminar structure. We measured real-time dynamics of GLU and GABA in rat hippocampal slices and observed a significant, nonlinear shift in the E/I ratio from excitatory to inhibitory dominance as electrical stimulation frequency increased from 10 to 140 Hz, suggesting that GABA release is a component of a homeostatic mechanism in the hippocampus to prevent excitotoxic damage. Additionally, we recorded from a freely moving rat over fourteen weeks, inserting fresh biosensors for each session, thus demonstrating that the microwire biosensor overcomes the time limitation of permanently implanted biosensors and that it detects relevant changes in GLU and GABA levels consistent with various behaviors.
Subjects
Biosensing Techniques, Glutamic Acid/chemistry, Microelectrodes, gamma-Aminobutyric Acid/chemistry, Animals, Brain/diagnostic imaging, Electric Stimulation, Homeostasis, Male, Microwaves, Models, Neurological, Nerve Net, Neurons/metabolism, Neurotransmitter Agents, Platinum/chemistry, Rats, Rats, Sprague-Dawley, Surface Properties
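A minimal sketch of how an E/I ratio can be derived from two calibrated biosensor channels; the calibration slopes and signals below are invented for illustration and are not the paper's values.

```python
# Sketch: convert sensor currents to concentrations, then track E/I over time.
import numpy as np

GLU_SENSITIVITY_nA_PER_uM = 0.12    # assumed pre-recording calibration slope
GABA_SENSITIVITY_nA_PER_uM = 0.08   # assumed pre-recording calibration slope

def ei_ratio(i_glu_nA, i_gaba_nA):
    glu_uM = np.asarray(i_glu_nA) / GLU_SENSITIVITY_nA_PER_uM
    gaba_uM = np.asarray(i_gaba_nA) / GABA_SENSITIVITY_nA_PER_uM
    return glu_uM / np.maximum(gaba_uM, 1e-9)            # avoid divide-by-zero

t = np.linspace(0, 10, 1000)                             # seconds
i_glu = 0.5 + 0.2 * np.exp(-((t - 3.0) ** 2))            # toy stimulation transient
i_gaba = 0.3 + 0.3 * np.exp(-((t - 3.5) ** 2))           # delayed inhibitory rise
print("E/I at peak GLU:", ei_ratio(i_glu, i_gaba)[np.argmax(i_glu)])
```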
ABSTRACT
High-resolution, in vivo optical imaging of the mouse brain over time often requires anesthesia, which necessitates maintaining the animal's body temperature and level of anesthesia, as well as securing the head in an optimal, stable position. Controlling each parameter usually requires using multiple systems. Assembling multiple components into the small space on a standard microscope stage can be difficult, and some commercially available parts simply do not fit. Furthermore, it is time-consuming to position an animal identically over multiple imaging sessions for longitudinal studies. This is especially true when using an implanted gradient index (GRIN) lens for deep brain imaging: the multiphoton laser beam must be parallel with the shaft of the lens, because even a slight tilt of the lens can degrade image quality. In response to these challenges, we have designed a compact, integrated in vivo imaging support system that overcomes the problems created by using separate systems during optical imaging in mice. It is a single platform that provides (1) sturdy head fixation, (2) an integrated gas anesthesia mask, and (3) safe warm-water heating. This THREE-IN-ONE (TRIO) Platform has a small footprint and a low profile that positions a mouse's head only 20 mm above the microscope stage, about one half to one third the height of most commercially available immobilization devices. We have successfully employed this system with isoflurane in over 40 imaging sessions averaging 2 h per session, with no leaks or other malfunctions. Due to its smaller size, the TRIO Platform can be used with a wider range of upright microscopes and stages. Most of the components were designed in SOLIDWORKS® and fabricated using a 3D printer. This additive manufacturing approach also readily permits size modifications for creating systems for other small animals.