Results 1 - 2 of 2
1.
Cell Rep; 43(8): 114639, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39167488

ABSTRACT

A key feature of neurons in the primary visual cortex (V1) of primates is their orientation selectivity. Recent studies using deep neural network models showed that the most exciting inputs (MEIs) for mouse V1 neurons exhibit complex spatial structures that predict non-uniform orientation selectivity across the receptive field (RF), in contrast to the classical Gabor filter model. Using local patches of drifting gratings, we identified heterogeneous orientation tuning in mouse V1 that varied by up to 90° across sub-regions of the RF. This heterogeneity correlated with deviations from optimal Gabor filters and was consistent across cortical layers and recording modalities (calcium vs. spikes). In contrast, model-synthesized MEIs for macaque V1 neurons were predominantly Gabor-like, consistent with previous studies. These findings suggest that complex spatial feature selectivity emerges earlier in the visual pathway in mice than in primates. This may provide a faster, though less general, method of extracting task-relevant information.
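
For readers unfamiliar with the baseline the abstract argues against, the following is a minimal sketch of the classical Gabor receptive-field model and the uniform orientation preference it predicts. The filter sizes, wavelengths, and the linear-rectified response are illustrative assumptions, not values or code from the study.

import numpy as np

def gabor(size, theta, wavelength=8.0, sigma=4.0, phase=0.0):
    """A size x size Gabor filter oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)          # coordinate along the carrier
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength + phase)
    return envelope * carrier

def grating(size, theta, wavelength=8.0, phase=0.0):
    """A sinusoidal grating patch at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    return np.cos(2.0 * np.pi * x_t / wavelength + phase)

# Linear-rectified response of a model neuron with a single Gabor RF.
rf = gabor(size=33, theta=np.pi / 4)
orientations = np.linspace(0.0, np.pi, 12, endpoint=False)
responses = [max((rf * grating(33, th)).sum(), 0.0) for th in orientations]
print("preferred orientation: %.1f deg"
      % np.degrees(orientations[int(np.argmax(responses))]))
# Because this model is a single linear filter, the preferred orientation is the
# same no matter which sub-region of the RF a grating patch covers; the up-to-90°
# tuning heterogeneity reported above is a departure from this prediction.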


Subjects
Primary Visual Cortex , Animals , Mice , Primary Visual Cortex/physiology , Orientation/physiology , Mice, Inbred C57BL , Neurons/physiology , Photic Stimulation , Male , Visual Fields/physiology , Visual Cortex/physiology , Visual Pathways/physiology , Primates
2.
bioRxiv; 2023 May 20.
Article in English | MEDLINE | ID: mdl-37292670

ABSTRACT

In recent years, most exciting inputs (MEIs) synthesized from encoding models of neuronal activity have become an established tool for studying tuning properties of biological and artificial visual systems. However, as we move up the visual hierarchy, the complexity of neuronal computations increases, so modeling neuronal activity becomes more challenging and requires more complex models. In this study, we introduce a new attention readout for a convolutional data-driven core for neurons in macaque V4 that outperforms the state-of-the-art task-driven ResNet model in predicting neuronal responses. However, as the predictive network becomes deeper and more complex, synthesizing MEIs via straightforward gradient ascent (GA) can struggle to produce qualitatively good results and can overfit to idiosyncrasies of a more complex model, potentially decreasing the MEI's model-to-brain transferability. To solve this problem, we propose a diffusion-based method for generating MEIs via Energy Guidance (EGG). We show that for models of macaque V4, EGG generates single-neuron MEIs that generalize better across architectures than the state-of-the-art GA, while preserving within-architecture activation and requiring 4.7x less compute time. Furthermore, EGG diffusion can be used to generate other neurally exciting images, such as most exciting natural images, which are on par with a selection of highly activating natural images, or image reconstructions that generalize better across architectures. Finally, EGG is simple to implement, requires no retraining of the diffusion model, and can easily be generalized to provide other characterizations of the visual system, such as invariances. Thus, EGG provides a general and flexible framework for studying coding properties of the visual system in the context of natural images.
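
As a point of reference for the gradient-ascent baseline mentioned above, here is a minimal sketch, not the authors' released code, of MEI synthesis by straightforward gradient ascent on a trained encoding model. The names encoding_model, neuron_idx, the image shape, and the pixel-norm constraint are hypothetical placeholders. EGG itself instead reuses the neuron's predicted activation as an energy term guiding each denoising step of a pretrained diffusion model; that guidance loop is not reproduced here.

import torch

def synthesize_mei_ga(encoding_model, neuron_idx, image_shape=(1, 1, 64, 64),
                      steps=500, lr=0.05, pixel_norm=10.0):
    """Gradient-ascent MEI: optimize an image to maximize one neuron's predicted response."""
    encoding_model.eval()
    img = torch.randn(image_shape, requires_grad=True)   # start from noise
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Assumes the encoding model maps an image batch to (batch, n_neurons) responses.
        activation = encoding_model(img)[:, neuron_idx].mean()
        (-activation).backward()                         # ascend on the activation
        optimizer.step()
        with torch.no_grad():                            # project back onto a fixed-norm ball
            img.mul_(pixel_norm / img.norm().clamp(min=1e-8))
    return img.detach()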
