Results 1 - 8 of 8
1.
Nature; 610(7930): 128-134, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36171291

ABSTRACT

To increase computational flexibility, the processing of sensory inputs changes with behavioural context. In the visual system, active behavioural states characterized by motor activity and pupil dilation [1,2] enhance sensory responses, but typically leave the preferred stimuli of neurons unchanged [2-9]. Here we find that behavioural state also modulates stimulus selectivity in the mouse visual cortex in the context of coloured natural scenes. Using population imaging in behaving mice, pharmacology and deep neural network modelling, we identified a rapid shift in colour selectivity towards ultraviolet stimuli during an active behavioural state. This was exclusively caused by state-dependent pupil dilation, which resulted in a dynamic switch from rod to cone photoreceptors, thereby extending their role beyond night and day vision. The change in tuning facilitated the decoding of ethological stimuli, such as aerial predators against the twilight sky [10]. For decades, studies in neuroscience and cognitive science have used pupil dilation as an indirect measure of brain state. Our data suggest that, in addition, state-dependent pupil dilation itself tunes visual representations to behavioural demands by differentially recruiting rods and cones on fast timescales.


Subject(s)
Color, Pupil, Pupillary Reflex, Ocular Vision, Visual Cortex, Animals, Darkness, Deep Learning, Mice, Photic Stimulation, Pupil/physiology, Pupil/radiation effects, Pupillary Reflex/physiology, Retinal Cone Photoreceptor Cells/drug effects, Retinal Cone Photoreceptor Cells/physiology, Retinal Rod Photoreceptor Cells/drug effects, Retinal Rod Photoreceptor Cells/physiology, Time Factors, Ultraviolet Rays, Ocular Vision/physiology, Visual Cortex/physiology
2.
PLoS Comput Biol; 20(5): e1012056, 2024 May.
Article in English | MEDLINE | ID: mdl-38781156

ABSTRACT

Responses to natural stimuli in area V4-a mid-level area of the visual ventral stream-are well predicted by features from convolutional neural networks (CNNs) trained on image classification. This result has been taken as evidence for the functional role of V4 in object classification. However, we currently do not know if and to what extent V4 plays a role in solving other computational objectives. Here, we investigated normative accounts of V4 (and V1 for comparison) by predicting macaque single-neuron responses to natural images from the representations extracted by 23 CNNs trained on different computer vision tasks including semantic, geometric, 2D, and 3D types of tasks. We found that V4 was best predicted by semantic classification features and exhibited high task selectivity, while the choice of task was less consequential to V1 performance. Consistent with traditional characterizations of V4 function that show its high-dimensional tuning to various 2D and 3D stimulus directions, we found that diverse non-semantic tasks explained aspects of V4 function that are not captured by individual semantic tasks. Nevertheless, jointly considering the features of a pair of semantic classification tasks was sufficient to yield one of our top V4 models, solidifying V4's main functional role in semantic processing and suggesting that V4's selectivity to 2D or 3D stimulus properties found by electrophysiologists can result from semantic functional goals.
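
The encoding approach described in this abstract — predicting single-neuron responses from the frozen features of task-trained CNNs — is commonly implemented as a regularized linear readout on held-out images. The sketch below illustrates that general recipe under stated assumptions (a torchvision ResNet-50 as the feature extractor, a ridge readout, illustrative function and variable names); it is not the authors' pipeline.

```python
# Hypothetical sketch: predict neuronal responses from frozen CNN features with a
# ridge readout. Network, layer, and variable names are illustrative assumptions.
import numpy as np
import torch
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split


def extract_features(images: torch.Tensor, layer: str = "avgpool") -> np.ndarray:
    """Return flattened activations of one layer of a pretrained ResNet-50."""
    model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
    cache = {}
    hook = dict(model.named_modules())[layer].register_forward_hook(
        lambda module, inp, out: cache.setdefault("acts", out.detach())
    )
    with torch.no_grad():
        model(images)                      # images: (N, 3, 224, 224), normalized
    hook.remove()
    return cache["acts"].flatten(start_dim=1).numpy()


def encoding_score(features: np.ndarray, responses: np.ndarray) -> float:
    """Fit a cross-validated ridge readout per neuron; return held-out R^2."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, responses, test_size=0.2, random_state=0
    )
    readout = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, y_tr)
    return readout.score(X_te, y_te)
```

Comparing such scores across feature extractors trained on different tasks is the kind of analysis the abstract describes for V4 and V1.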


Subject(s)
Neurological Models, Neural Networks (Computer), Semantics, Visual Cortex, Animals, Visual Cortex/physiology, Computational Biology, Photic Stimulation, Neurons/physiology, Macaca mulatta, Macaca
3.
Proc Natl Acad Sci U S A; 119(24): e2121860119, 2022 Jun 14.
Article in English | MEDLINE | ID: mdl-35675430

ABSTRACT

The foveal visual image region provides the human visual system with the highest acuity. However, it is unclear whether such a high-fidelity representational advantage is maintained when foveal image locations are committed to short-term memory. Here, we describe a paradoxically large distortion in foveal target location recall by humans. We briefly presented small but high-contrast points of light at eccentricities ranging from 0.1 to 12°, while subjects maintained their line of sight on a stable target. After a brief memory period, the subjects indicated the remembered target locations via computer-controlled cursors. The biggest localization errors, in terms of both directional deviations and amplitude percentage overshoots or undershoots, occurred for the most foveal targets, and such distortions were still present, albeit with qualitatively different patterns, when subjects shifted their gaze to indicate the remembered target locations. Foveal visual images are thus severely distorted in short-term memory.


Subject(s)
Fovea Centralis, Short-Term Memory, Mental Recall, Fovea Centralis/physiology, Humans, Visual Perception
4.
ArXiv; 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38560735

ABSTRACT

Identifying cell types and understanding their functional properties are crucial for unraveling the mechanisms underlying perception and cognition. In the retina, functional types can be identified by carefully selected stimuli, but this requires expert domain knowledge and biases the procedure towards previously known cell types. In the visual cortex, it is still unknown what functional types exist and how to identify them. Thus, for unbiased identification of the functional cell types in retina and visual cortex, new approaches are needed. Here we propose an optimization-based clustering approach using deep predictive models to obtain functional clusters of neurons using Most Discriminative Stimuli (MDS). Our approach alternates between stimulus optimization and cluster reassignment, akin to an expectation-maximization algorithm. The algorithm recovers functional clusters in mouse retina, marmoset retina and macaque visual area V4. This demonstrates that our approach can successfully find discriminative stimuli across species, stages of the visual system and recording techniques. The resulting most discriminative stimuli can be used to assign functional cell types quickly and on the fly, without the need to train complex predictive models or show a large natural scene dataset, paving the way for experiments that were previously limited by experimental time. Crucially, MDS are interpretable: they visualize the distinctive stimulus patterns that most unambiguously identify a specific type of neuron.
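
A minimal sketch of the alternating scheme described above: optimize one stimulus per cluster to be maximally discriminative under a differentiable predictive model, then reassign each neuron to the cluster whose stimulus drives it most. The `model` interface (image in, predicted responses of all neurons out), the stimulus shape, and the simple contrastive loss are assumptions for illustration, not the authors' implementation.

```python
# Schematic sketch of an MDS-style alternating optimization.
import torch


def most_discriminative_stimuli(model, n_neurons, n_clusters,
                                stim_shape=(1, 36, 64),
                                n_rounds=20, inner_steps=100, lr=0.05):
    assignments = torch.randint(n_clusters, (n_neurons,))
    stimuli = torch.randn(n_clusters, *stim_shape, requires_grad=True)
    optimizer = torch.optim.Adam([stimuli], lr=lr)

    for _ in range(n_rounds):
        # "M-like" step: make each cluster's stimulus maximally discriminative,
        # exciting its member neurons while suppressing all others.
        for _ in range(inner_steps):
            optimizer.zero_grad()
            loss = torch.zeros(())
            for k in range(n_clusters):
                resp = model(stimuli[k:k + 1]).squeeze(0)   # (n_neurons,)
                member = assignments == k
                if member.any():
                    loss = loss - resp[member].mean()       # drive members
                if (~member).any():
                    loss = loss + resp[~member].mean()      # suppress the rest
            loss.backward()
            optimizer.step()

        # "E-like" step: reassign each neuron to its most exciting cluster stimulus.
        with torch.no_grad():
            resp_matrix = torch.stack(
                [model(stimuli[k:k + 1]).squeeze(0) for k in range(n_clusters)]
            )                                               # (n_clusters, n_neurons)
            assignments = resp_matrix.argmax(dim=0)

    return stimuli.detach(), assignments
```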

5.
ArXiv; 2023 May 31.
Article in English | MEDLINE | ID: mdl-37396602

ABSTRACT

Understanding how biological visual systems process information is challenging due to the complex nonlinear relationship between neuronal responses and high-dimensional visual input. Artificial neural networks have already improved our understanding of this system by allowing computational neuroscientists to create predictive models and bridge biological and machine vision. During the Sensorium 2022 competition, we introduced benchmarks for vision models with static input (i.e. images). However, animals operate and excel in dynamic environments, making it crucial to study and understand how the brain functions under these conditions. Moreover, many biological theories, such as predictive coding, suggest that previous input is crucial for current input processing. Currently, there is no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we propose the Sensorium 2023 Benchmark Competition with dynamic input (https://www.sensorium-competition.net/). This competition includes the collection of a new large-scale dataset from the primary visual cortex of five mice, containing responses from over 38,000 neurons to over 2 hours of dynamic stimuli per neuron. Participants in the main benchmark track will compete to identify the best predictive models of neuronal responses for dynamic input (i.e. video). We will also host a bonus track in which submission performance will be evaluated on out-of-domain input, using withheld neuronal responses to dynamic input stimuli whose statistics differ from the training set. Both tracks will offer behavioral data along with video stimuli. As before, we will provide code, tutorials, and strong pre-trained baseline models to encourage participation. We hope this competition will continue to strengthen the accompanying Sensorium benchmarks collection as a standard tool to measure progress in large-scale neural system identification models of the entire mouse visual hierarchy and beyond.
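
Benchmarks of this kind typically rank predictive models by a per-neuron correlation between predicted and recorded responses on held-out stimuli; the exact Sensorium 2023 metrics are defined on the competition website, so the snippet below is only an assumed, minimal version of such a score.

```python
# Assumed, minimal per-neuron correlation score; the official Sensorium 2023
# metrics are specified on the competition website.
import numpy as np


def per_neuron_correlation(predicted: np.ndarray, observed: np.ndarray) -> np.ndarray:
    """Pearson correlation per neuron over time.

    Both arrays have shape (n_timepoints, n_neurons).
    """
    p = predicted - predicted.mean(axis=0, keepdims=True)
    o = observed - observed.mean(axis=0, keepdims=True)
    denom = np.sqrt((p ** 2).sum(axis=0) * (o ** 2).sum(axis=0)) + 1e-12
    return (p * o).sum(axis=0) / denom


# Usage: scores = per_neuron_correlation(model_predictions, recorded_responses)
# A single leaderboard-style number could then be scores.mean().
```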

6.
bioRxiv; 2023 May 20.
Article in English | MEDLINE | ID: mdl-37292670

ABSTRACT

In recent years, most exciting inputs (MEIs) synthesized from encoding models of neuronal activity have become an established method to study tuning properties of biological and artificial visual systems. However, as we move up the visual hierarchy, the complexity of neuronal computations increases. Consequently, it becomes more challenging to model neuronal activity, requiring more complex models. In this study, we introduce a new attention readout for a convolutional data-driven core for neurons in macaque V4 that outperforms the state-of-the-art task-driven ResNet model in predicting neuronal responses. However, as the predictive network becomes deeper and more complex, synthesizing MEIs via straightforward gradient ascent (GA) can struggle to produce qualitatively good results and overfit to idiosyncrasies of a more complex model, potentially decreasing the MEI's model-to-brain transferability. To solve this problem, we propose a diffusion-based method for generating MEIs via Energy Guidance (EGG). We show that for models of macaque V4, EGG generates single-neuron MEIs that generalize better across architectures than the state-of-the-art GA while preserving within-architecture activation and requiring 4.7x less compute time. Furthermore, EGG diffusion can be used to generate other neurally exciting images, like most exciting natural images that are on par with a selection of highly activating natural images, or image reconstructions that generalize better across architectures. Finally, EGG is simple to implement, requires no retraining of the diffusion model, and can easily be generalized to provide other characterizations of the visual system, such as invariances. Thus, EGG provides a general and flexible framework to study coding properties of the visual system in the context of natural images.
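
For context, the gradient-ascent (GA) baseline referred to above can be sketched as direct maximization of one model neuron's predicted response with respect to the input image; EGG instead uses such response gradients to guide a pretrained diffusion model. The sketch below covers only the GA baseline, with an assumed model interface (image in, a (1, n_neurons) response vector out) and an assumed norm constraint.

```python
# Sketch of the gradient-ascent (GA) MEI baseline only, not the EGG method itself.
import torch


def synthesize_mei_ga(model, neuron_idx, image_shape=(1, 1, 100, 100),
                      steps=500, lr=0.01, norm_budget=10.0):
    image = torch.randn(image_shape, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        activation = model(image)[0, neuron_idx]   # predicted response of one neuron
        (-activation).backward()                   # ascend the response
        optimizer.step()
        with torch.no_grad():                      # project back onto the norm budget
            norm = image.norm()
            if norm > norm_budget:
                image.mul_(norm_budget / norm)

    return image.detach()
```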

7.
Nat Commun; 10(1): 3710, 2019 Aug 16.
Article in English | MEDLINE | ID: mdl-31420546

ABSTRACT

Despite strong evidence to the contrary in the literature, microsaccades are overwhelmingly described as involuntary eye movements. Here we show in both human subjects and monkeys that individual microsaccades of any direction can easily be triggered: (1) on demand, based on an arbitrary instruction, (2) without any special training, (3) without visual guidance by a stimulus, and (4) in a spatially and temporally accurate manner. Subjects voluntarily generated instructed "memory-guided" microsaccades readily, and similarly to how they made normal visually-guided ones. In two monkeys, we also observed midbrain superior colliculus neurons that exhibit movement-related activity bursts exclusively for memory-guided microsaccades, but not for similarly-sized visually-guided movements. Our results demonstrate behavioral and neural evidence for voluntary control over individual microsaccades, supporting recently discovered functional contributions of individual microsaccade generation to visual performance alterations and covert visual selection, as well as observations that microsaccades optimize eye position during high acuity visually-guided behavior.


Subject(s)
Neurons/physiology, Saccades/physiology, Spatial Memory/physiology, Superior Colliculi/physiology, Adult, Animals, Female, Humans, Macaca mulatta, Male, Memory, Short-Term Memory, Neural Pathways, Superior Colliculi/cytology, Young Adult
8.
Neuron; 100(5): 1224-1240.e13, 2018 Dec 5.
Article in English | MEDLINE | ID: mdl-30482688

ABSTRACT

Hippocampal ripple oscillations likely support the reactivation of memory traces that manifest as temporally organized spiking of sparse neuronal ensembles. However, the network mechanisms that cooperate to achieve this function are largely unknown. We designed a multi-compartmental model of the CA3-CA1 subfields to generate biophysically realistic ripple dynamics from the cellular level to local field potentials. Simulations broadly parallel in vivo observations and support the view that ripples emerge from CA1 pyramidal spiking paced by recurrent inhibition. In addition to ripple oscillations, key coordination mechanisms involve concomitant aspects of network activity. Recurrent synaptic interactions in CA1 exhibit slow-gamma-band coherence with CA3 input, thus offering a way to coordinate CA1 activity with its CA3 inducers. Moreover, CA1 feedback inhibition controls the content of spontaneous replay during CA1 ripples, forming new mnemonic representations through plasticity. These insights are consistent with slow-gamma interactions and interneuronal circuit plasticity observed in vivo, suggesting a multifaceted ripple-related replay phenomenon.


Subject(s)
Brain Waves, Hippocampus/physiology, Neurological Models, Neurons/physiology, Synapses/physiology, Animals, Interneurons/physiology, Macaca mulatta, Male, Membrane Potentials