1 - 20 of 34
1.
PLoS Comput Biol ; 20(5): e1012056, 2024 May.
Article En | MEDLINE | ID: mdl-38781156

Responses to natural stimuli in area V4-a mid-level area of the visual ventral stream-are well predicted by features from convolutional neural networks (CNNs) trained on image classification. This result has been taken as evidence for the functional role of V4 in object classification. However, we currently do not know if and to what extent V4 plays a role in solving other computational objectives. Here, we investigated normative accounts of V4 (and V1 for comparison) by predicting macaque single-neuron responses to natural images from the representations extracted by 23 CNNs trained on different computer vision tasks including semantic, geometric, 2D, and 3D types of tasks. We found that V4 was best predicted by semantic classification features and exhibited high task selectivity, while the choice of task was less consequential to V1 performance. Consistent with traditional characterizations of V4 function that show its high-dimensional tuning to various 2D and 3D stimulus directions, we found that diverse non-semantic tasks explained aspects of V4 function that are not captured by individual semantic tasks. Nevertheless, jointly considering the features of a pair of semantic classification tasks was sufficient to yield one of our top V4 models, solidifying V4's main functional role in semantic processing and suggesting that V4's selectivity to 2D or 3D stimulus properties found by electrophysiologists can result from semantic functional goals.
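The core readout in studies like this one maps fixed network features to neuronal responses with a regularized linear model. A minimal sketch on synthetic data (the shapes, the ridge penalty, and the noise level are all illustrative assumptions, not the paper's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_features, n_neurons = 200, 64, 10

features = rng.normal(size=(n_images, n_features))      # stand-in for CNN activations
ground_truth = rng.normal(size=(n_features, n_neurons))
responses = features @ ground_truth + 0.1 * rng.normal(size=(n_images, n_neurons))

# Ridge readout: W = (X'X + lam*I)^{-1} X'Y
lam = 1.0
W = np.linalg.solve(features.T @ features + lam * np.eye(n_features),
                    features.T @ responses)
predicted = features @ W

def pearson(a, b):
    """Column-wise Pearson correlation between two matrices."""
    a = a - a.mean(0)
    b = b - b.mean(0)
    return (a * b).sum(0) / np.sqrt((a ** 2).sum(0) * (b ** 2).sum(0))

accuracy = pearson(predicted, responses)                # one score per model neuron
```

Comparing such accuracy scores across feature spaces from differently trained networks is what allows task selectivity of an area to be quantified.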


Models, Neurological , Neural Networks, Computer , Semantics , Visual Cortex , Animals , Visual Cortex/physiology , Computational Biology , Photic Stimulation , Neurons/physiology , Macaca mulatta , Macaca
2.
ArXiv ; 2024 Mar 14.
Article En | MEDLINE | ID: mdl-38560735

Identifying cell types and understanding their functional properties is crucial for unraveling the mechanisms underlying perception and cognition. In the retina, functional types can be identified by carefully selected stimuli, but this requires expert domain knowledge and biases the procedure towards previously known cell types. In the visual cortex, it is still unknown what functional types exist and how to identify them. Thus, for unbiased identification of the functional cell types in retina and visual cortex, new approaches are needed. Here we propose an optimization-based clustering approach using deep predictive models to obtain functional clusters of neurons using Most Discriminative Stimuli (MDS). Our approach alternates between stimulus optimization and cluster reassignment, akin to an expectation-maximization algorithm. The algorithm recovers functional clusters in mouse retina, marmoset retina and macaque visual area V4. This demonstrates that our approach can successfully find discriminative stimuli across species, stages of the visual system and recording techniques. The resulting most discriminative stimuli can be used to assign functional cell types quickly and on the fly, without the need to train complex predictive models or show a large natural scene dataset, paving the way for experiments that were previously limited by experimental time. Crucially, MDS are interpretable: they visualize the distinctive stimulus patterns that most unambiguously identify a specific type of neuron.
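The alternating scheme can be illustrated with a toy linear stand-in for the deep predictive model (every dimension, noise level, and closed-form update here is a simplifying assumption, not the published method):

```python
import numpy as np

rng = np.random.default_rng(1)
n_types, per_type, dim = 3, 30, 8

# Ground-truth functional types: each neuron is a noisy copy of its type's tuning
centers = 2.0 * rng.normal(size=(n_types, dim))
tuning = np.repeat(centers, per_type, axis=0)
tuning += 0.1 * rng.normal(size=tuning.shape)

labels = rng.integers(0, n_types, size=len(tuning))          # random initial clustering
for _ in range(10):
    for k in range(n_types):                                 # re-seed any empty cluster
        if not np.any(labels == k):
            labels[rng.integers(len(labels))] = k
    # "M step": for linear neurons (response = tuning @ stimulus), the stimulus
    # maximizing (mean response of cluster k) - (mean response of the rest)
    # under a norm bound is the normalized difference of mean tuning vectors.
    stimuli = np.stack([tuning[labels == k].mean(0) - tuning[labels != k].mean(0)
                        for k in range(n_types)])
    stimuli /= np.linalg.norm(stimuli, axis=1, keepdims=True)
    # "E step": reassign each neuron to the cluster whose stimulus drives it most
    labels = (tuning @ stimuli.T).argmax(1)
```

After convergence, each row of `stimuli` plays the role of an MDS: a single probe whose response assigns a neuron to its cluster on the fly.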

3.
ArXiv ; 2023 May 31.
Article En | MEDLINE | ID: mdl-37396602

Understanding how biological visual systems process information is challenging due to the complex nonlinear relationship between neuronal responses and high-dimensional visual input. Artificial neural networks have already improved our understanding of this system by allowing computational neuroscientists to create predictive models and bridge biological and machine vision. During the Sensorium 2022 competition, we introduced benchmarks for vision models with static input (i.e. images). However, animals operate and excel in dynamic environments, making it crucial to study and understand how the brain functions under these conditions. Moreover, many biological theories, such as predictive coding, suggest that previous input is crucial for current input processing. Currently, there is no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we propose the Sensorium 2023 Benchmark Competition with dynamic input (https://www.sensorium-competition.net/). This competition includes the collection of a new large-scale dataset from the primary visual cortex of five mice, containing responses from over 38,000 neurons to over 2 hours of dynamic stimuli per neuron. Participants in the main benchmark track will compete to identify the best predictive models of neuronal responses for dynamic input (i.e. video). We will also host a bonus track in which submission performance will be evaluated on out-of-domain input, using withheld neuronal responses to dynamic input stimuli whose statistics differ from the training set. Both tracks will offer behavioral data along with video stimuli. As before, we will provide code, tutorials, and strong pre-trained baseline models to encourage participation. 
We hope this competition will continue to strengthen the accompanying Sensorium benchmarks collection as a standard tool to measure progress in large-scale neural system identification models of the entire mouse visual hierarchy and beyond.

4.
bioRxiv ; 2023 Mar 16.
Article En | MEDLINE | ID: mdl-36993218

A defining characteristic of intelligent systems, whether natural or artificial, is the ability to generalize and infer behaviorally relevant latent causes from high-dimensional sensory input, despite significant variations in the environment. To understand how brains achieve generalization, it is crucial to identify the features to which neurons respond selectively and invariantly. However, the high-dimensional nature of visual inputs, the non-linearity of information processing in the brain, and limited experimental time make it challenging to systematically characterize neuronal tuning and invariances, especially for natural stimuli. Here, we extended "inception loops" - a paradigm that iterates between large-scale recordings, neural predictive models, and in silico experiments followed by in vivo verification - to systematically characterize single neuron invariances in the mouse primary visual cortex. Using the predictive model we synthesized Diverse Exciting Inputs (DEIs), a set of inputs that differ substantially from each other while each driving a target neuron strongly, and verified these DEIs' efficacy in vivo. We discovered a novel bipartite invariance: one portion of the receptive field encoded phase-invariant texture-like patterns, while the other portion encoded a fixed spatial pattern. Our analysis revealed that the division between the fixed and invariant portions of the receptive fields aligns with object boundaries defined by spatial frequency differences present in highly activating natural images. These findings suggest that bipartite invariance might play a role in segmentation by detecting texture-defined object boundaries, independent of the phase of the texture. We also replicated these bipartite DEIs in the functional connectomics MICrONs data set, which opens the way towards a circuit-level mechanistic understanding of this novel type of invariance. 
Our study demonstrates the power of using a data-driven deep learning approach to systematically characterize neuronal invariances. By applying this method across the visual hierarchy, cell types, and sensory modalities, we can decipher how latent variables are robustly extracted from natural scenes, leading to a deeper understanding of generalization.

5.
bioRxiv ; 2023 Mar 14.
Article En | MEDLINE | ID: mdl-36993321

A key role of sensory processing is integrating information across space. Neuronal responses in the visual system are influenced by both local features in the receptive field center and contextual information from the surround. While center-surround interactions have been extensively studied using simple stimuli like gratings, investigating these interactions with more complex, ecologically-relevant stimuli is challenging due to the high dimensionality of the stimulus space. We used large-scale neuronal recordings in mouse primary visual cortex to train convolutional neural network (CNN) models that accurately predicted center-surround interactions for natural stimuli. These models enabled us to synthesize surround stimuli that strongly suppressed or enhanced neuronal responses to the optimal center stimulus, as confirmed by in vivo experiments. In contrast to the common notion that congruent center and surround stimuli are suppressive, we found that excitatory surrounds appeared to complete spatial patterns in the center, while inhibitory surrounds disrupted them. We quantified this effect by demonstrating that CNN-optimized excitatory surround images have strong similarity in neuronal response space with surround images generated by extrapolating the statistical properties of the center, and with patches of natural scenes, which are known to exhibit high spatial correlations. Our findings cannot be explained by theories like redundancy reduction or predictive coding previously linked to contextual modulation in visual cortex. Instead, we demonstrated that a hierarchical probabilistic model incorporating Bayesian inference, and modulating neuronal responses based on prior knowledge of natural scene statistics, can explain our empirical results. 
We replicated these center-surround effects in the multi-area functional connectomics MICrONS dataset using natural movies as visual stimuli, which opens the way towards understanding circuit level mechanism, such as the contributions of lateral and feedback recurrent connections. Our data-driven modeling approach provides a new understanding of the role of contextual interactions in sensory processing and can be adapted across brain areas, sensory modalities, and species.

6.
bioRxiv ; 2023 Apr 21.
Article En | MEDLINE | ID: mdl-36993435

Understanding the brain's perception algorithm is a highly intricate problem, as the inherent complexity of sensory inputs and the brain's nonlinear processing make characterizing sensory representations difficult. Recent studies have shown that functional models-capable of predicting large-scale neuronal activity in response to arbitrary sensory input-can be powerful tools for characterizing neuronal representations by enabling high-throughput in silico experiments. However, accurately modeling responses to dynamic and ecologically relevant inputs like videos remains challenging, particularly when generalizing to new stimulus domains outside the training distribution. Inspired by recent breakthroughs in artificial intelligence, where foundation models-trained on vast quantities of data-have demonstrated remarkable capabilities and generalization, we developed a "foundation model" of the mouse visual cortex: a deep neural network trained on large amounts of neuronal responses to ecological videos from multiple visual cortical areas and mice. The model accurately predicted neuronal responses not only to natural videos but also to various new stimulus domains, such as coherent moving dots and noise patterns, underscoring its generalization abilities. The foundation model could also be adapted to new mice with minimal natural movie training data. We applied the foundation model to the MICrONS dataset: a study of the brain that integrates structure with function at unprecedented scale, containing nanometer-scale morphology, connectivity with >500,000,000 synapses, and function of >70,000 neurons within a ~1 mm³ volume spanning multiple areas of the mouse visual cortex. This accurate functional model of the MICrONS data opens the possibility for a systematic characterization of the relationship between circuit structure and function.
By precisely capturing the response properties of the visual cortex and generalizing to new stimulus domains and mice, foundation models can pave the way for a deeper understanding of visual computation.

7.
Nat Commun ; 13(1): 5556, 2022 09 22.
Article En | MEDLINE | ID: mdl-36138007

Retinal ganglion cells extract specific features from natural scenes and send this information to the brain. In particular, they respond to local light increases (ON responses) and/or decreases (OFF). However, it is unclear if this ON-OFF selectivity, characterized with synthetic stimuli, is maintained under natural scene stimulation. Here we recorded ganglion cell responses to natural images slightly perturbed by random noise patterns to determine their selectivity during natural stimulation. The ON-OFF selectivity strongly depended on the specific image. A single ganglion cell can signal luminance increase for one image, and luminance decrease for another. Modeling and experiments showed that this resulted from the non-linear combination of different retinal pathways. Despite the versatility of the ON-OFF selectivity, a systematic analysis demonstrated that contrast was reliably encoded in these responses. Our perturbative approach uncovered the selectivity of retinal ganglion cells to more complex features than initially thought.


Retina , Retinal Ganglion Cells , Photic Stimulation , Retina/physiology , Retinal Ganglion Cells/physiology
8.
PLoS Comput Biol ; 17(6): e1009028, 2021 06.
Article En | MEDLINE | ID: mdl-34097695

Divisive normalization (DN) is a prominent computational building block in the brain that has been proposed as a canonical cortical operation. Numerous experimental studies have verified its importance for capturing nonlinear neural response properties to simple, artificial stimuli, and computational studies suggest that DN is also an important component for processing natural stimuli. However, we lack quantitative models of DN that are directly informed by measurements of spiking responses in the brain and applicable to arbitrary stimuli. Here, we propose a DN model that is applicable to arbitrary input images. We test its ability to predict how neurons in macaque primary visual cortex (V1) respond to natural images, with a focus on nonlinear response properties within the classical receptive field. Our model consists of one layer of subunits followed by learned orientation-specific DN. It outperforms linear-nonlinear and wavelet-based feature representations and makes a significant step towards the performance of state-of-the-art convolutional neural network (CNN) models. Unlike deep CNNs, our compact DN model offers a direct interpretation of the nature of normalization. By inspecting the learned normalization pool of our model, we gained insights into a long-standing question about the tuning properties of DN that update the current textbook description: we found that within the receptive field oriented features were normalized preferentially by features with similar orientation rather than non-specifically as currently assumed.
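The canonical divisive-normalization computation divides each unit's driven response by a weighted sum over a normalization pool. A toy numerical sketch with an orientation-specific pool, in the spirit of the tuned normalization described above (all weights and constants are invented for illustration, not the fitted model):

```python
import numpy as np

def divisive_normalization(drive, pool_weights, sigma=0.1, gamma=1.0):
    """Rectified drive divided by a weighted sum over the normalization pool."""
    drive = np.maximum(drive, 0.0)
    return gamma * drive / (sigma + pool_weights @ drive)

# A ring of 8 orientation-tuned units; similarly tuned units normalize
# each other more strongly (orientation-specific normalization pool).
n = 8
prefs = np.arange(n) * np.pi / n
dtheta = np.abs(prefs[:, None] - prefs[None, :])
dtheta = np.minimum(dtheta, np.pi - dtheta)          # circular orientation distance
pool = np.exp(-(dtheta / 0.4) ** 2)

# A stimulus at unit 2's preferred orientation drives nearby units most
drive = np.exp(-((prefs - prefs[2]) / 0.3) ** 2)
resp = divisive_normalization(drive, pool)
```

In the non-specific textbook variant, `pool` would instead be a constant matrix; the paper's finding corresponds to `pool` concentrating on similar orientations, as sketched here.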


Learning , Visual Cortex/physiology , Animals , Macaca mulatta , Male , Neural Networks, Computer , Neurons/physiology , Photic Stimulation , Visual Cortex/chemistry , Wavelet Analysis
9.
Sci Rep ; 10(1): 4399, 2020 03 10.
Article En | MEDLINE | ID: mdl-32157103

The retina decomposes visual stimuli into parallel channels that encode different features of the visual environment. Central to this computation is the synaptic processing in a dense layer of neuropil, the so-called inner plexiform layer (IPL). Here, different types of bipolar cells stratifying at distinct depths relay the excitatory feedforward drive from photoreceptors to amacrine and ganglion cells. Current experimental techniques for studying processing in the IPL do not allow imaging the entire IPL simultaneously in the intact tissue. Here, we extend a two-photon microscope with an electrically tunable lens allowing us to obtain optical vertical slices of the IPL, which provide a complete picture of the response diversity of bipolar cells at a "single glance". The nature of these axial recordings additionally allowed us to isolate and investigate batch effects, i.e. inter-experimental variations resulting in systematic differences in response speed. As a proof of principle, we developed a simple model that disentangles biological from experimental causes of variability and allowed us to recover the characteristic gradient of response speeds across the IPL with higher precision than before. Our new framework will make it possible to study the computations performed in the central synaptic layer of the retina more efficiently.


Amacrine Cells/ultrastructure , Photoreceptor Cells, Vertebrate/ultrastructure , Retinal Ganglion Cells/ultrastructure , Animals , Female , Male , Mice , Microscopy/instrumentation
10.
Nat Neurosci ; 22(12): 2060-2065, 2019 12.
Article En | MEDLINE | ID: mdl-31686023

Finding sensory stimuli that drive neurons optimally is central to understanding information processing in the brain. However, optimizing sensory input is difficult due to the predominantly nonlinear nature of sensory processing and high dimensionality of the input. We developed 'inception loops', a closed-loop experimental paradigm combining in vivo recordings from thousands of neurons with in silico nonlinear response modeling. Our end-to-end trained, deep-learning-based model predicted thousands of neuronal responses to arbitrary, new natural input with high accuracy and was used to synthesize optimal stimuli-most exciting inputs (MEIs). For mouse primary visual cortex (V1), MEIs exhibited complex spatial features that occurred frequently in natural scenes but deviated strikingly from the common notion that Gabor-like stimuli are optimal for V1. When presented back to the same neurons in vivo, MEIs drove responses significantly better than control stimuli. Inception loops represent a widely applicable technique for dissecting the neural mechanisms of sensation.
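Synthesizing a most exciting input amounts to gradient ascent on a differentiable response model under a norm constraint. A toy sketch with a phase-invariant energy-model "neuron" standing in for the trained deep network (all parameters and the optimization settings are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
size = 16
y, x = np.mgrid[0:size, 0:size] - size // 2
envelope = np.exp(-(x ** 2 + y ** 2) / (2 * 3.0 ** 2))
f_even = envelope * np.cos(0.8 * x)                  # quadrature filter pair
f_odd = envelope * np.sin(0.8 * x)

def response(img):
    """Energy-model response: squared outputs of the quadrature pair."""
    return (f_even * img).sum() ** 2 + (f_odd * img).sum() ** 2

def grad(img):
    """Analytic gradient of the response with respect to the image."""
    return 2 * (f_even * img).sum() * f_even + 2 * (f_odd * img).sum() * f_odd

img = rng.normal(scale=0.1, size=(size, size))       # start from noise
for _ in range(100):                                 # projected gradient ascent
    img = img + 0.1 * grad(img)
    img = img / max(np.linalg.norm(img), 1.0)        # keep image norm bounded

mei_response = response(img)
```

In the actual paradigm the gradient comes from backpropagation through the trained network, and the synthesized image is then shown back to the same neuron in vivo.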


Models, Neurological , Neurons/physiology , Visual Cortex/physiology , Animals , Computer Simulation , Eye Movements/physiology , Female , Male , Mice , Mice, Transgenic , Nonlinear Dynamics , Photic Stimulation/methods , Visual Perception/physiology
11.
Elife ; 8, 2019 04 30.
Article En | MEDLINE | ID: mdl-31038458

We subjectively perceive our visual field with high fidelity, yet peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). Prior work showed that humans could not discriminate images synthesised to match the responses of a mid-level ventral visual stream model when information was averaged in receptive fields with a scaling of about half their retinal eccentricity. This result implicated ventral visual area V2, approximated 'Bouma's Law' of crowding, and has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our perceptual experience. However, this experiment never assessed natural images. We find that humans can easily discriminate real and model-generated images at V2 scaling, requiring scales at least as small as V1 receptive fields to generate metamers. We speculate that explaining why scenes look as they do may require incorporating segmentation and global organisational constraints in addition to local pooling.


Pattern Recognition, Visual/physiology , Visual Fields/physiology , Visual Perception/physiology , Crowding/psychology , Discrimination, Psychological , Fixation, Ocular/physiology , Humans , Perceptual Masking , Photic Stimulation , Space Perception/physiology
12.
PLoS Comput Biol ; 15(4): e1006897, 2019 04.
Article En | MEDLINE | ID: mdl-31013278

Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1. Recently, two approaches based on deep learning have emerged for modeling these nonlinear computations: transfer learning from artificial neural networks trained on object recognition and data-driven convolutional neural network models trained end-to-end on large populations of neurons. Here, we test the ability of both approaches to predict spiking activity in response to natural images in V1 of awake monkeys. We found that the transfer learning approach performed similarly well to the data-driven approach and both outperformed classical linear-nonlinear and wavelet-based feature representations that build on existing theories of V1. Notably, transfer learning using a pre-trained feature space required substantially less experimental time to achieve the same performance. In conclusion, multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1 and deep features learned for object recognition are better explanations for V1 computation than all previous filter bank theories. This finding strengthens the necessity of V1 models that are multiple nonlinearities away from the image domain and it supports the idea of explaining early visual cortex based on high-level functional goals.


Models, Neurological , Neural Networks, Computer , Visual Cortex/physiology , Visual Perception/physiology , Algorithms , Animals , Computational Biology , Macaca mulatta/physiology , Male , Neurons/physiology
13.
J Neurophysiol ; 120(5): 2430-2452, 2018 11 01.
Article En | MEDLINE | ID: mdl-30365390

When the brain has determined the position of a moving object, because of anatomical and processing delays the object will have already moved to a new location. Given the statistical regularities present in natural motion, the brain may have acquired compensatory mechanisms to minimize the mismatch between the perceived and real positions of moving objects. A well-known visual illusion-the flash lag effect-points toward such a possibility. Although many psychophysical models have been suggested to explain this illusion, their predictions have not been tested at the neural level, particularly in a species of animal known to perceive the illusion. To this end, we recorded neural responses to flashed and moving bars from primary visual cortex (V1) of awake, fixating macaque monkeys. We found that the response latency to moving bars of varying speed, motion direction, and luminance was shorter than that to flashes, in a manner that is consistent with psychophysical results. At the level of V1, our results support the differential latency model positing that flashed and moving bars have different latencies. As we found a neural correlate of the illusion in passively fixating monkeys, our results also suggest that judging the instantaneous position of the moving bar at the time of flash-as required by the postdiction/motion-biasing model-may not be necessary for observing a neural correlate of the illusion. Our results also suggest that the brain may have evolved mechanisms to process moving stimuli faster and closer to real time compared with briefly appearing stationary stimuli. NEW & NOTEWORTHY We report several observations in awake macaque V1 that provide support for the differential latency model of the flash lag illusion. We find that the equal latency of flash and moving stimuli as assumed by motion integration/postdiction models does not hold in V1. 
We show that in macaque V1, motion processing latency depends on stimulus luminance, speed and motion direction in a manner consistent with several psychophysical properties of the flash lag illusion.


Illusions , Motion Perception , Visual Cortex/physiology , Animals , Macaca mulatta , Male , Neurons/physiology , Reaction Time , Visual Cortex/cytology , Wakefulness
14.
Nat Commun ; 9(1): 2654, 2018 07 09.
Article En | MEDLINE | ID: mdl-29985411

Variability in neuronal responses to identical stimuli is frequently correlated across a population. Attention is thought to reduce these correlations by suppressing noisy inputs shared by the population. However, even with precise control of the visual stimulus, the subject's attentional state varies across trials. While these state fluctuations are bound to induce some degree of correlated variability, it is currently unknown how strong their effect is, as previous studies generally do not dissociate changes in attentional strength from changes in attentional state variability. We designed a novel paradigm that does so and find both a pronounced effect of attentional fluctuations on correlated variability at long timescales and attention-dependent reductions in correlations at short timescales. These effects predominate in layers 2/3, as expected from a feedback signal such as attention. Thus, significant portions of correlated variability can be attributed to fluctuations in internally generated signals, like attention, rather than noise.


Action Potentials/physiology , Attention/physiology , Neurons/physiology , Visual Cortex/physiology , Visual Pathways/physiology , Animals , Cues , Eye Movements/physiology , Macaca mulatta , Male , Nerve Net/physiology , Photic Stimulation
15.
J Vis ; 17(12): 5, 2017 10 01.
Article En | MEDLINE | ID: mdl-28983571

Our visual environment is full of texture-"stuff" like cloth, bark, or gravel as distinct from "things" like dresses, trees, or paths-and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (convolutional neural network [CNN] model) that uses the features encoded by a deep CNN (VGG-19) with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery ("parafoveal") and when observers were able to make eye movements to all three patches ("inspection"). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.


Eye Movements/physiology , Neural Networks, Computer , Pattern Recognition, Visual/physiology , Visual Perception/physiology , Fovea Centralis/physiology , Humans , Photic Stimulation
16.
Curr Opin Neurobiol ; 46: 178-186, 2017 10.
Article En | MEDLINE | ID: mdl-28926765

Although biological vision research and computer vision approach powerful visual information processing from different angles, the two fields have a long history of informing each other. Recent advances in texture synthesis that were motivated by visual neuroscience have led to a substantial advance in image synthesis and manipulation in computer vision using convolutional neural networks (CNNs). Here, we review these recent advances and discuss how they can in turn inspire new research in visual perception and computational neuroscience.


Machine Learning , Models, Neurological , Neural Networks, Computer , Visual Perception/physiology , Animals , Humans
17.
Science ; 353(6304): 1108, 2016 09 09.
Article En | MEDLINE | ID: mdl-27609883

The critique of Barth et al. centers on three points: (i) the completeness of our study is overstated; (ii) the connectivity matrix we describe is biased by technical limitations of our brain-slicing and multipatching methods; and (iii) our cell classification scheme is arbitrary and we have simply renamed previously identified interneuron types. We address these criticisms in our Response.


Interneurons , Neocortex , Adult , Humans
18.
Nat Neurosci ; 19(4): 634-641, 2016 Apr.
Article En | MEDLINE | ID: mdl-26974951

Developments in microfabrication technology have enabled the production of neural electrode arrays with hundreds of closely spaced recording sites, and electrodes with thousands of sites are under development. These probes in principle allow the simultaneous recording of very large numbers of neurons. However, use of this technology requires the development of techniques for decoding the spike times of the recorded neurons from the raw data captured from the probes. Here we present a set of tools to solve this problem, implemented in a suite of practical, user-friendly, open-source software. We validate these methods on data from the cortex, hippocampus and thalamus of rat, mouse, macaque and marmoset, demonstrating error rates as low as 5%.


Action Potentials/physiology , Cerebral Cortex/physiology , Electrodes, Implanted , Hippocampus/physiology , Signal Processing, Computer-Assisted , Thalamus/physiology , Animals , Callithrix , Macaca mulatta , Male , Mice , Rats , Signal Processing, Computer-Assisted/instrumentation , Species Specificity
19.
J Neurosci ; 36(5): 1775-89, 2016 Feb 03.
Article En | MEDLINE | ID: mdl-26843656

Attention is commonly thought to improve behavioral performance by increasing response gain and suppressing shared variability in neuronal populations. However, both the focus and the strength of attention are likely to vary from one experimental trial to the next, thereby inducing response variability unknown to the experimenter. Here we study analytically how fluctuations in attentional state affect the structure of population responses in a simple model of spatial and feature attention. In our model, attention acts on the neural response exclusively by modulating each neuron's gain. Neurons are conditionally independent given the stimulus and the attentional gain, and correlated activity arises only from trial-to-trial fluctuations of the attentional state, which are unknown to the experimenter. We find that this simple model can readily explain many aspects of neural response modulation under attention, such as increased response gain, reduced individual and shared variability, increased correlations with firing rates, limited range correlations, and differential correlations. We therefore suggest that attention may act primarily by increasing response gain of individual neurons without affecting their correlation structure. The experimentally observed reduction in correlations may instead result from reduced variability of the attentional gain when a stimulus is attended. Moreover, we show that attentional gain fluctuations, even if unknown to a downstream readout, do not impair the readout accuracy despite inducing limited-range correlations, whereas fluctuations of the attended feature can in principle limit behavioral performance. SIGNIFICANCE STATEMENT: Covert attention is one of the most widely studied examples of top-down modulation of neural activity in the visual system. Recent studies argue that attention improves behavioral performance by shaping the noise distribution to suppress shared variability rather than by increasing response gain.
Our work shows, however, that latent, trial-to-trial fluctuations of the focus and strength of attention lead to shared variability that is highly consistent with known experimental observations. Interestingly, fluctuations in the strength of attention do not affect coding performance. As a consequence, the experimentally observed changes in response variability may not be a mechanism of attention, but rather a side effect of attentional allocation strategies in different behavioral contexts.
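The model's central mechanism, a shared gain that fluctuates from trial to trial across otherwise conditionally independent neurons, is easy to reproduce in simulation (the rates and gain statistics below are invented for the demo, not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_neurons = 5000, 4
base_rates = np.array([5.0, 10.0, 15.0, 20.0])        # mean spike counts per trial

def simulate(gain_sd):
    # One shared attentional gain per trial; spiking is conditionally
    # independent Poisson given that gain.
    gains = np.clip(1.0 + gain_sd * rng.standard_normal(n_trials), 0.0, None)
    return rng.poisson(gains[:, None] * base_rates)

def mean_corr(counts):
    """Average pairwise spike-count correlation across the population."""
    c = np.corrcoef(counts.T)
    return c[np.triu_indices(n_neurons, k=1)].mean()

corr_fluctuating = mean_corr(simulate(gain_sd=0.3))   # strongly fluctuating gain
corr_stable = mean_corr(simulate(gain_sd=0.02))       # nearly constant gain
```

Shrinking the gain variability collapses the correlations, mirroring the paper's account of attention-dependent correlation reductions.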


Action Potentials/physiology , Attention/physiology , Neurons/physiology , Visual Cortex/physiology , Humans , Photic Stimulation/methods , Reaction Time/physiology
20.
Science ; 350(6264): aac9462, 2015 Nov 27.
Article En | MEDLINE | ID: mdl-26612957

Since the work of Ramón y Cajal in the late 19th and early 20th centuries, neuroscientists have speculated that a complete understanding of neuronal cell types and their connections is key to explaining complex brain functions. However, a complete census of the constituent cell types and their wiring diagram in mature neocortex remains elusive. By combining octuple whole-cell recordings with an optimized avidin-biotin-peroxidase staining technique, we carried out a morphological and electrophysiological census of neuronal types in layers 1, 2/3, and 5 of mature neocortex and mapped the connectivity between more than 11,000 pairs of identified neurons. We categorized 15 types of interneurons, and each exhibited a characteristic pattern of connectivity with other interneuron types and pyramidal cells. The essential connectivity structure of the neocortical microcircuit could be captured by only a few connectivity motifs.


Interneurons/classification , Neocortex/cytology , Neocortex/physiology , Neural Pathways/cytology , Neural Pathways/physiology , Action Potentials , Animals , Avidin , Biotin , GABAergic Neurons/classification , GABAergic Neurons/cytology , GABAergic Neurons/physiology , Interneurons/cytology , Interneurons/physiology , Mice , Neural Inhibition , Patch-Clamp Techniques , Peroxidase , Pyramidal Cells/cytology , Pyramidal Cells/physiology , Staining and Labeling , Synapses/physiology , Synapses/ultrastructure