Results 1 - 4 of 4
1.
Nature; 629(8013): 861-868, 2024 May.
Article in English | MEDLINE | ID: mdl-38750353

ABSTRACT

A central assumption of neuroscience is that long-term memories are represented by the same brain areas that encode sensory stimuli[1]. Neurons in inferotemporal (IT) cortex represent the sensory percept of visual objects using a distributed axis code[2-4]. Whether and how the same IT neural population represents the long-term memory of visual objects remains unclear. Here we examined how familiar faces are encoded in the IT anterior medial face patch (AM), perirhinal face patch (PR) and temporal pole face patch (TP). In AM and PR we observed that the encoding axis for familiar faces is rotated relative to that for unfamiliar faces at long latency; in TP this memory-related rotation was much weaker. Contrary to previous claims, the relative response magnitude to familiar versus unfamiliar faces was not a stable indicator of familiarity in any patch[5-11]. The mechanism underlying the memory-related axis change is likely intrinsic to IT cortex, because inactivation of PR did not affect axis change dynamics in AM. Overall, our results suggest that memories of familiar faces are represented in AM and perirhinal cortex by a distinct long-latency code, explaining how the same cell population can encode both the percept and memory of faces.
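The "rotation" of an encoding axis described above can be made concrete with a toy calculation: if a population encodes faces by projecting feature values onto a weight vector (an axis), the angle between the axis fitted for unfamiliar faces and the axis fitted for familiar faces measures how far the code has rotated. The vectors and dimensionality below are invented for illustration; they are not data from the paper.

```python
import math

def cosine_angle_deg(u, v):
    # Angle between two encoding axes (weight vectors), in degrees.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (norm_u * norm_v)))

# Hypothetical population axes: same feature space, rotated code.
axis_unfamiliar = [1.0, 0.0, 0.0]
axis_familiar = [0.6, 0.8, 0.0]

angle = cosine_angle_deg(axis_unfamiliar, axis_familiar)
print(round(angle, 1))  # prints 53.1
```

A reported rotation near 0 degrees would mean the memory code reuses the perceptual axis; a large angle, as sketched here, means the familiar-face code occupies a distinct direction in the same population space.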


Subject(s)
Facial Recognition; Memory, Long-Term; Recognition, Psychology; Temporal Lobe; Animals; Face; Facial Recognition/physiology; Macaca mulatta/physiology; Memory, Long-Term/physiology; Neurons/physiology; Perirhinal Cortex/physiology; Perirhinal Cortex/cytology; Photic Stimulation; Recognition, Psychology/physiology; Temporal Lobe/anatomy & histology; Temporal Lobe/cytology; Temporal Lobe/physiology; Rotation
2.
ArXiv; 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38259351

ABSTRACT

Vision is widely understood as an inference problem. However, two contrasting conceptions of the inference process have each been influential in research on biological vision as well as the engineering of machine vision. The first emphasizes bottom-up signal flow, describing vision as a largely feedforward, discriminative inference process that filters and transforms the visual information to remove irrelevant variation and represent behaviorally relevant information in a format suitable for downstream functions of cognition and behavioral control. In this conception, vision is driven by the sensory data, and perception is direct because the processing proceeds from the data to the latent variables of interest. The notion of "inference" in this conception is that of the engineering literature on neural networks, where feedforward convolutional neural networks processing images are said to perform inference. The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes that give rise to it. In this conception, vision inverts a generative model through an interrogation of the sensory evidence, in a process often thought to involve top-down predictions of sensory data to evaluate the likelihood of alternative hypotheses. The authors of this piece include scientists rooted, in roughly equal numbers, in each of the two conceptions, motivated to overcome what might be a false dichotomy between them and to engage the other perspective in the realm of theory and experiment. The primate brain employs an unknown algorithm that may combine the advantages of both conceptions. We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends the dichotomy and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
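The contrast between the two conceptions can be sketched in a few lines. In a toy world where a latent cause ("face" or "object") generates a noisy feature, the Helmholtzian route inverts the generative model with Bayes' rule, while the discriminative route applies a fixed learned mapping from data to label. All probabilities below are made up for illustration.

```python
priors = {"face": 0.5, "object": 0.5}
likelihood = {"face": 0.9, "object": 0.2}  # P(feature present | cause)

def generative_inference(feature_present):
    # Helmholtzian route: invert the generative model via Bayes' rule.
    post = {}
    for cause, prior in priors.items():
        p = likelihood[cause] if feature_present else 1 - likelihood[cause]
        post[cause] = prior * p
    z = sum(post.values())
    return {c: v / z for c, v in post.items()}

def discriminative_inference(feature_present):
    # Feedforward route: a fixed data-to-label mapping, here a lookup
    # that happens to mimic the Bayes-optimal decision.
    return "face" if feature_present else "object"

posterior = generative_inference(True)
print(discriminative_inference(True), round(posterior["face"], 3))
# prints: face 0.818
```

The discriminative mapping is fast and direct but returns only a label; the generative inversion is costlier but yields a graded posterior over hypotheses, which is the kind of quantity top-down predictive accounts appeal to.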

3.
bioRxiv; 2023 Dec 13.
Article in English | MEDLINE | ID: mdl-38106108

ABSTRACT

A fundamental paradigm in neuroscience is the concept of neural coding through tuning functions[1]. According to this idea, neurons encode stimuli through fixed mappings of stimulus features to firing rates. Here, we report that the tuning of visual neurons can rapidly and coherently change across a population to attend to a whole and its parts. We set out to investigate a longstanding debate concerning whether inferotemporal (IT) cortex uses a specialized code for representing specific types of objects or whether it uses a general code that applies to any object. We found that face cells in macaque IT cortex initially adopted a general code optimized for face detection. But following a rapid, concerted population event lasting < 20 ms, the neural code transformed into a face-specific one with two striking properties: (i) response gradients to principal detection-related dimensions reversed direction, and (ii) new tuning developed to multiple higher feature space dimensions supporting fine face discrimination. These dynamics were face specific and did not occur in response to objects. Overall, these results show that, for faces, face cells shift from detection to discrimination by switching from an object-general code to a face-specific code. More broadly, our results suggest a novel mechanism for neural representation: concerted, stimulus-dependent switching of the neural code used by a cortical area.
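The reported code switch can be caricatured as a tuning function whose mapping depends on time since stimulus onset: before the concerted event the cell follows a general detection code, and afterward the detection gradient reverses while tuning to a finer, identity-related dimension appears. The features, weights, and timing below are invented for illustration only.

```python
SWITCH_MS = 20  # latency of the hypothetical concerted population event

def response(faceness, eye_distance, t_ms):
    # Toy "face cell": its tuning function remaps after SWITCH_MS.
    if t_ms < SWITCH_MS:
        # General detection code: fires more for more face-like stimuli.
        return 2.0 * faceness
    # Face-specific code: the detection gradient reverses sign, and new
    # tuning emerges along a discrimination-related dimension.
    return -2.0 * faceness + 3.0 * eye_distance

early = response(faceness=1.0, eye_distance=0.5, t_ms=10)
late = response(faceness=1.0, eye_distance=0.5, t_ms=30)
print(early, late)  # prints: 2.0 -0.5
```

The point of the caricature is that the same cell, probed with the same stimulus, yields opposite gradients along the detection dimension in the two epochs, which is exactly why a single fixed tuning function cannot describe it.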

4.
Sci Am; 320(2): 22, 2019 Feb 01.
Article in English | MEDLINE | ID: mdl-38987964