1.
Annu Rev Vis Sci ; 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38950431

ABSTRACT

Inferences made about objects via vision, such as rapid and accurate categorization, are core to primate cognition despite the algorithmic challenge posed by varying viewpoints and scenes. Until recently, the brain mechanisms that support these capabilities were deeply mysterious. However, over the past decade, this scientific mystery has been illuminated by the discovery and development of brain-inspired, image-computable, artificial neural network (ANN) systems that rival primates in these behavioral feats. Apart from fundamentally changing the landscape of artificial intelligence, modified versions of these ANN systems are the current leading scientific hypotheses of an integrated set of mechanisms in the primate ventral visual stream that support core object recognition. What separates brain-mapped versions of these systems from prior conceptual models is that they are sensory computable, mechanistic, anatomically referenced, and testable (SMART). In this article, we review and provide perspective on the brain mechanisms addressed by the current leading SMART models. We review their empirical brain and behavioral alignment successes and failures, discuss the next frontiers for an even more accurate mechanistic understanding, and outline the likely applications.

2.
Neuron ; 2024 May 08.
Article in English | MEDLINE | ID: mdl-38733985

ABSTRACT

A key feature of cortical systems is functional organization: the arrangement of functionally distinct neurons in characteristic spatial patterns. However, the principles underlying the emergence of functional organization in the cortex are poorly understood. Here, we develop the topographic deep artificial neural network (TDANN), the first model to predict several aspects of the functional organization of multiple cortical areas in the primate visual system. We analyze the factors driving the TDANN's success and find that it balances two objectives: learning a task-general sensory representation and maximizing the spatial smoothness of responses according to a metric that scales with cortical surface area. In turn, the representations learned by the TDANN are more brain-like than in spatially unconstrained models. Finally, we provide evidence that the TDANN's functional organization balances performance with between-area connection length. Our results offer a unified principle for understanding the functional organization of the primate ventral visual system.
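The TDANN's two-term objective can be illustrated with a toy sketch: a task loss plus a penalty that grows when cortically nearby units respond differently. This is an illustrative simplification, not the paper's implementation; the weighting `alpha` and the exact form of the smoothness metric are assumptions.

```python
import math

def spatial_smoothness_loss(responses, positions):
    """Penalize response differences between cortically nearby units.
    A simplified stand-in for the TDANN's smoothness objective
    (the paper's actual metric differs)."""
    loss, pairs = 0.0, 0
    n = len(responses)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(positions[i], positions[j])
            # nearby pairs (small d) are penalized more for differing
            loss += (responses[i] - responses[j]) ** 2 / (d + 1e-6)
            pairs += 1
    return loss / pairs

def combined_objective(task_loss, responses, positions, alpha=0.5):
    # alpha balances task performance vs. spatial smoothness (assumed value)
    return task_loss + alpha * spatial_smoothness_loss(responses, positions)
```

In this toy form, a response map that varies smoothly across simulated cortical positions incurs a lower penalty than a scrambled one, which is the qualitative balance the abstract describes.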

4.
ArXiv ; 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38259351

ABSTRACT

Vision is widely understood as an inference problem. However, two contrasting conceptions of the inference process have each been influential in research on biological vision as well as the engineering of machine vision. The first emphasizes bottom-up signal flow, describing vision as a largely feedforward, discriminative inference process that filters and transforms the visual information to remove irrelevant variation and represent behaviorally relevant information in a format suitable for downstream functions of cognition and behavioral control. In this conception, vision is driven by the sensory data, and perception is direct because the processing proceeds from the data to the latent variables of interest. The notion of "inference" in this conception is that of the engineering literature on neural networks, where feedforward convolutional neural networks processing images are said to perform inference. The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes that give rise to it. In this conception, vision inverts a generative model through an interrogation of the sensory evidence in a process often thought to involve top-down predictions of sensory data to evaluate the likelihood of alternative hypotheses. The authors of this article are rooted, in roughly equal numbers, in each of the two conceptions, and are motivated to overcome what might be a false dichotomy between them and to engage the other perspective in the realm of theory and experiment. The primate brain employs an unknown algorithm that may combine the advantages of both conceptions. We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends the dichotomy and sets the stage for revealing the mysterious hybrid algorithm of primate vision.

5.
PLoS Comput Biol ; 19(12): e1011713, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38079444

ABSTRACT

A core problem in visual object learning is using a finite number of images of a new object to accurately identify that object in future, novel images. One longstanding, conceptual hypothesis asserts that this core problem is solved by adult brains through two connected mechanisms: 1) the re-representation of incoming retinal images as points in a fixed, multidimensional neural space, and 2) the optimization of linear decision boundaries in that space, via simple plasticity rules applied to a single downstream layer. Though this scheme is biologically plausible, the extent to which it explains learning behavior in humans has been unclear, in part because of a historical lack of image-computable models of the putative neural space, and in part because of a lack of measurements of human learning behaviors in difficult, naturalistic settings. Here, we addressed these gaps by 1) drawing from contemporary, image-computable models of the primate ventral visual stream to create a large set of testable learning models (n = 2,408 models), and 2) using online psychophysics to measure human learning trajectories over a varied set of tasks involving novel 3D objects (n = 371,000 trials), which we then used to develop (and publicly release) empirical benchmarks for comparing learning models to humans. We evaluated each learning model on these benchmarks, and found that those based on deep, high-level representations from neural networks were surprisingly aligned with human behavior. While no tested model explained the entirety of replicable human behavior, these results establish that rudimentary plasticity rules, when combined with appropriate visual representations, have high explanatory power in predicting human behavior with respect to this core object learning problem.
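The hypothesized scheme, a fixed upstream representation plus a simple plasticity rule on one downstream layer, can be sketched with a classic perceptron update over frozen features. This is a minimal illustration of the class of learning models the paper evaluates, not any specific one of its 2,408 models; the learning rate and epoch count here are arbitrary.

```python
def train_linear_readout(features, labels, lr=0.1, epochs=20):
    """Perceptron-style plasticity rule applied to a single downstream
    layer over fixed (pre-trained) feature representations."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):   # y in {-1, +1}
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                    # update weights only on errors
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    """Linear decision boundary in the fixed feature space."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

The quality of the learned boundary, and hence the predicted learning trajectory, depends entirely on how well the frozen features separate the novel objects, which is the comparison the benchmarks formalize.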


Subject(s)
Neural Networks, Computer , Pattern Recognition, Visual , Adult , Animals , Humans , Primates , Brain , Spatial Learning , Models, Neurological , Visual Perception
6.
Behav Brain Sci ; 46: e390, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38054303

ABSTRACT

In the target article, Bowers et al. dispute deep artificial neural network (ANN) models as the currently leading models of human vision without producing alternatives. They eschew the use of public benchmarking platforms to compare vision models with the brain and behavior, and they advocate for a fragmented, phenomenon-specific modeling approach. These are unconstructive to scientific progress. We outline how the Brain-Score community is moving forward to add new model-to-human comparisons to its community-transparent suite of benchmarks.


Subject(s)
Brain , Neural Networks, Computer , Humans
7.
bioRxiv ; 2023 May 18.
Article in English | MEDLINE | ID: mdl-37292946

ABSTRACT

A key feature of many cortical systems is functional organization: the arrangement of neurons with specific functional properties in characteristic spatial patterns across the cortical surface. However, the principles underlying the emergence and utility of functional organization are poorly understood. Here we develop the Topographic Deep Artificial Neural Network (TDANN), the first unified model to accurately predict the functional organization of multiple cortical areas in the primate visual system. We analyze the key factors responsible for the TDANN's success and find that it strikes a balance between two specific objectives: achieving a task-general sensory representation that is self-supervised, and maximizing the smoothness of responses across the cortical sheet according to a metric that scales relative to cortical surface area. In turn, the representations learned by the TDANN are lower dimensional and more brain-like than those in models that lack a spatial smoothness constraint. Finally, we provide evidence that the TDANN's functional organization balances performance with inter-area connection length, and use the resulting models for a proof-of-principle optimization of cortical prosthetic design. Our results thus offer a unified principle for understanding functional organization and a novel view of the functional role of the visual system in particular.

8.
Neural Comput ; 34(8): 1652-1675, 2022 07 14.
Article in English | MEDLINE | ID: mdl-35798321

ABSTRACT

The ventral visual stream enables humans and nonhuman primates to effortlessly recognize objects across a multitude of viewing conditions, yet the computational role of its abundant feedback connections is unclear. Prior studies have augmented feedforward convolutional neural networks (CNNs) with recurrent connections to study their role in visual processing; however, these recurrent networks are often optimized directly on neural data, or the comparative metrics used are undefined for standard feedforward networks that lack such connections. In this work, we develop task-optimized convolutional recurrent (ConvRNN) network models that more closely mimic the timing and gross neuroanatomy of the ventral pathway. Properly chosen intermediate-depth ConvRNN circuit architectures, which incorporate mechanisms of feedforward bypassing and recurrent gating, can achieve high performance on a core recognition task, comparable to that of much deeper feedforward networks. We then develop methods that allow us to compare both CNNs and ConvRNNs to fine-grained measurements of primate categorization behavior and neural response trajectories across thousands of stimuli. We find that high-performing ConvRNNs provide a better match to these data than feedforward networks of any depth, predicting the precise timings at which each stimulus is behaviorally decoded from neural activation patterns. Moreover, these ConvRNN circuits consistently produce quantitatively accurate predictions of neural dynamics from V4 and IT across the entire stimulus presentation. In fact, we find that the highest-performing ConvRNNs, which best match neural and behavioral data, also achieve a strong Pareto trade-off between task performance and overall network size. Taken together, our results suggest that the functional purpose of recurrence in the ventral pathway is to fit a high-performing network in cortex, attaining computational power through temporal rather than spatial complexity.
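The recurrent gating mechanism mentioned above can be caricatured with a scalar gated update, in which a gate mixes the feedforward drive with the persisting recurrent state, so that responses evolve over time even for a static input. This is a scalar stand-in for the ConvRNN's convolutional gating; the weights are made-up values, not fitted parameters.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gated_recurrent_step(feedforward, state, w_gate=1.0, w_in=1.0, w_rec=0.5):
    """One time step of a simplified gated recurrent cell: a gate mixes
    the feedforward drive with the persisting recurrent state."""
    gate = sigmoid(w_gate * (feedforward + state))
    return gate * (w_in * feedforward) + (1.0 - gate) * (w_rec * state)

def run_dynamics(feedforward, steps=5):
    """Unroll the cell over time: the response trajectory keeps evolving
    even though the input (a static image's feedforward drive) is fixed."""
    state, trajectory = 0.0, []
    for _ in range(steps):
        state = gated_recurrent_step(feedforward, state)
        trajectory.append(state)
    return trajectory
```

It is this extra temporal unrolling, rather than extra layers, that the abstract credits with matching the deeper feedforward networks' computational power.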


Subject(s)
Task Performance and Analysis , Visual Perception , Animals , Humans , Macaca mulatta/physiology , Neural Networks, Computer , Pattern Recognition, Visual/physiology , Recognition, Psychology/physiology , Visual Pathways/physiology , Visual Perception/physiology
9.
Adv Neural Inf Process Syst ; 35: 22628-22642, 2022.
Article in English | MEDLINE | ID: mdl-38435074

ABSTRACT

Humans learn from visual inputs at multiple timescales, both rapidly and flexibly acquiring visual knowledge over short periods, and robustly accumulating online learning progress over longer periods. Modeling these powerful learning capabilities is an important problem for computational visual cognitive science, and models that could replicate them would be of substantial utility in real-world computer vision settings. In this work, we establish benchmarks for both real-time and life-long continual visual learning. Our real-time learning benchmark measures a model's ability to match the rapid visual behavior changes of real humans over the course of minutes and hours, given a stream of visual inputs. Our life-long learning benchmark evaluates the performance of models in a purely online learning curriculum obtained directly from child visual experience over the course of years of development. We evaluate a spectrum of recent deep self-supervised visual learning algorithms on both benchmarks, finding that none of them perfectly match human performance, though some algorithms perform substantially better than others. Interestingly, algorithms embodying recent trends in self-supervised learning, including BYOL, SwAV, and MAE, are substantially worse on our benchmarks than an earlier generation of self-supervised algorithms such as SimCLR and MoCo-v2. We present analysis indicating that the failure of these newer algorithms is primarily due to their inability to handle the kind of sparse, low-diversity datastreams that naturally arise in the real world, and that actively leveraging memory through negative sampling, a mechanism eschewed by these newer algorithms, appears useful for facilitating learning in such low-diversity environments. We also illustrate a complementarity between the short and long timescales in the two benchmarks, showing how requiring a single learning algorithm to be locally context-sensitive enough to match real-time learning changes, while stable enough to avoid catastrophic forgetting over the long term, induces a trade-off that human-like algorithms may have to straddle. Taken together, our benchmarks establish a quantitative way to directly compare learning between neural network models and human learners, show how choices in the mechanism by which such algorithms handle sample comparison and memory strongly impact their ability to match human learning abilities, and expose an open problem space for identifying more flexible and robust visual self-supervision algorithms.

10.
Nat Commun ; 12(1): 5540, 2021 09 20.
Article in English | MEDLINE | ID: mdl-34545079

ABSTRACT

Cortical regions apparently selective to faces, places, and bodies have provided important evidence for domain-specific theories of human cognition, development, and evolution. But claims of category selectivity are not quantitatively precise and remain vulnerable to empirical refutation. Here we develop artificial neural network-based encoding models that accurately predict the response to novel images in the fusiform face area, parahippocampal place area, and extrastriate body area, outperforming descriptive models and experts. We use these models to subject claims of category selectivity to strong tests, by screening for and synthesizing images predicted to produce high responses. We find that these high-response-predicted images are all unambiguous members of the hypothesized preferred category for each region. These results provide accurate, image-computable encoding models of each category-selective region, strengthen evidence for domain specificity in the brain, and point the way for future research characterizing the functional organization of the brain with unprecedented computational precision.
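The two-step procedure in this abstract, fitting an encoding model that predicts a region's response from model features and then screening candidate images for predicted high responses, can be sketched in miniature with a one-feature linear fit. Real encoding models regress many ANN features onto measured responses, typically with regularization; this single-feature version is only illustrative.

```python
def fit_encoding_model(features, responses):
    """Least-squares fit of a single-feature linear encoding model
    mapping an ANN activation to a region's measured response."""
    n = len(features)
    mx = sum(features) / n
    my = sum(responses) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(features, responses))
    var = sum((x - mx) ** 2 for x in features)
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept

def screen_images(model, candidate_features, top_k=2):
    """Rank candidate images by predicted regional response and keep the
    top k, mirroring the high-response screening step in the abstract."""
    slope, intercept = model
    scored = sorted(candidate_features,
                    key=lambda x: slope * x + intercept, reverse=True)
    return scored[:top_k]
```

The strong test then reduces to inspecting whether the screened high-response-predicted images are all members of the hypothesized preferred category.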


Subject(s)
Brain/anatomy & histology , Brain/diagnostic imaging , Computer Simulation , Algorithms , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Neural Networks, Computer
11.
Nat Methods ; 18(9): 1112-1116, 2021 09.
Article in English | MEDLINE | ID: mdl-34462591

ABSTRACT

Optogenetic methods have been widely used in rodent brains, but remain relatively under-developed for nonhuman primates such as rhesus macaques, an animal model with a large brain expressing sophisticated sensory, motor and cognitive behaviors. To address challenges in behavioral optogenetics in large brains, we developed Opto-Array, a chronically implantable array of light-emitting diodes for high-throughput optogenetic perturbation. We demonstrated that optogenetic silencing in the macaque primary visual cortex with the help of the Opto-Array results in reliable retinotopic visual deficits in a luminance discrimination task. We separately confirmed that Opto-Array illumination results in local neural silencing, and that behavioral effects are not due to tissue heating. These results demonstrate the effectiveness of the Opto-Array for behavioral optogenetic applications in large brains.


Subject(s)
Brain/physiology , Optogenetics/methods , Prostheses and Implants , Animals , Behavior, Animal , Electronics/methods , Fiber Optic Technology , Macaca mulatta , Male , Visual Cortex
12.
Elife ; 10, 2021 06 11.
Article in English | MEDLINE | ID: mdl-34114566

ABSTRACT

Temporal continuity of object identity is a feature of natural visual input and is potentially exploited, in an unsupervised manner, by the ventral visual stream to build the neural representation in inferior temporal (IT) cortex. Here, we investigated whether plasticity of individual IT neurons underlies human core object recognition behavioral changes induced with unsupervised visual experience. We built a single-neuron plasticity model combined with a previously established IT population-to-recognition-behavior-linking model to predict human learning effects. We found that our model, once constrained by neurophysiological data, largely predicted the mean direction, magnitude, and time course of human performance changes. We also found a previously unreported dependency of the observed human performance change on the initial task difficulty. This result adds support to the hypothesis that tolerant core object recognition in human and non-human primates is instructed, at least in part, by naturally occurring unsupervised temporal contiguity experience.


A bear is a bear, regardless of how far away it is or the angle at which we view it. Indeed, the ability to recognize objects in different contexts is an important part of our sense of vision. A brain region called the inferior temporal (IT) cortex plays a critical role in this feat. In primates, the activity of groups of IT cortical nerve cells correlates with recognition of different objects; conversely, suppressing IT cortical activity impairs object recognition behavior. Because these cells remain selective to an item despite changes of size, position, or orientation, the IT cortex is thought to underlie the ability to recognize an object regardless of variations in its visual properties. How does this tolerance arise? A property called 'temporal continuity', the fact that objects do not blink in and out of existence, is thought to be involved. Studies in nonhuman primates have shown that temporal continuity can indeed reshape the activity of nerve cells in the IT cortex, while behavioural experiments with humans suggest that it affects the ability to recognize objects. However, these two sets of studies used different visual tasks, so it is still unknown whether the cellular processes observed in monkey IT actually underpin the behavioural effects shown in humans. Jia et al. therefore set out to examine the link between the two. In the initial experiments, human volunteers were given, in an unsupervised manner, a set of visual tasks designed similarly to the previous tests in nonhuman primates. The participants were presented with continuous views of the same or different objects at various sizes, and then given tests of object recognition. These manipulations resulted in volunteers showing altered size tolerance over time. To test which cellular mechanism underpinned this behavioural effect, Jia et al. built a model that simulated the plasticity of individual IT cells and of IT networks to predict the changes in object recognition observed in the volunteers. The model's high predictive accuracy revealed that plasticity in the IT cortex did indeed account for the behavioral changes in the volunteers. These results shed new light on the role that temporal continuity plays in vision, refining our understanding of the way the IT cortex helps us assess the world around us.


Subject(s)
Neuronal Plasticity/physiology , Neurons/physiology , Pattern Recognition, Visual/physiology , Temporal Lobe/physiology , Animals , Humans , Learning/physiology , Models, Neurological , Photic Stimulation/methods , Recognition, Psychology/physiology , Visual Perception/physiology
13.
Proc Natl Acad Sci U S A ; 118(3), 2021 01 19.
Article in English | MEDLINE | ID: mdl-33431673

ABSTRACT

Deep neural networks currently provide the best quantitative models of the response patterns of neurons throughout the primate ventral visual stream. However, such networks have remained implausible as a model of the development of the ventral stream, in part because they are trained with supervised methods requiring many more labels than are accessible to infants during development. Here, we report that recent rapid progress in unsupervised learning has largely closed this gap. We find that neural network models learned with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today's best supervised methods and that the mapping of these neural network models' hidden layers is neuroanatomically consistent across the ventral stream. Strikingly, we find that these methods produce brain-like representations even when trained solely with real human child developmental data collected from head-mounted cameras, despite the fact that these datasets are noisy and limited. We also find that semisupervised deep contrastive embeddings can leverage small numbers of labeled examples to produce representations with substantially improved error-pattern consistency to human behavior. Taken together, these results illustrate a use of unsupervised learning to provide a quantitative model of a multiarea cortical brain system and present a strong candidate for a biologically plausible computational theory of primate sensory learning.
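The contrastive embedding objective behind these models can be sketched with an InfoNCE-style loss: pull two views of the same image together in embedding space while pushing other images away, with no labels required. The temperature value and toy vectors below are assumptions; production implementations (e.g., SimCLR-family methods) compute this over large batches of learned embeddings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: the positive pair (two views of the same image)
    competes against negatives (other images) in a softmax over
    temperature-scaled similarities."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)                           # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

Minimizing this loss over raw image streams is what lets such models learn ventral-stream-like representations without the label counts that supervised training requires.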


Subject(s)
Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Animals , Child , Datasets as Topic , Humans , Macaca/physiology , Nerve Net/anatomy & histology , Unsupervised Machine Learning , Visual Cortex/anatomy & histology
14.
Neuron ; 109(1): 164-176.e5, 2021 01 06.
Article in English | MEDLINE | ID: mdl-33080226

ABSTRACT

Distributed neural population spiking patterns in macaque inferior temporal (IT) cortex that support core object recognition require additional time to develop for specific, "late-solved" images. This suggests the necessity of recurrent processing in these computations. Which brain circuits are responsible for computing and transmitting these putative recurrent signals to IT? To test whether the ventrolateral prefrontal cortex (vlPFC) is a critical recurrent node in this system, here, we pharmacologically inactivated parts of vlPFC and simultaneously measured IT activity while monkeys performed object discrimination tasks. vlPFC inactivation deteriorated the quality of the late-phase (>150 ms from image onset) IT population code and produced commensurate behavioral deficits for late-solved images. Finally, silencing vlPFC caused the monkeys' IT activity and behavior to become more like those produced by feedforward-only ventral stream models. Together with prior work, these results implicate fast recurrent processing through vlPFC as critical to producing behaviorally sufficient object representations in IT.


Subject(s)
Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Prefrontal Cortex/diagnostic imaging , Prefrontal Cortex/physiology , Animals , Eye Movements/physiology , Macaca mulatta , Male , Visual Perception/physiology
15.
Neuron ; 108(3): 413-423, 2020 11 11.
Article in English | MEDLINE | ID: mdl-32918861

ABSTRACT

A potentially organizing goal of the brain and cognitive sciences is to accurately explain domains of human intelligence as executable, neurally mechanistic models. Years of research have led to models that capture experimental results in individual behavioral tasks and individual brain regions. We here advocate for taking the next step: integrating experimental results from many laboratories into suites of benchmarks that, when considered together, push mechanistic models toward explaining entire domains of intelligence, such as vision, language, and motor control. Given recent successes of neurally mechanistic models and the surging availability of neural, anatomical, and behavioral data, we believe that now is the time to create integrative benchmarking platforms that incentivize ambitious, unified models. This perspective discusses the advantages and the challenges of this approach and proposes specific steps to achieve this goal in the domain of visual intelligence with the case study of an integrative benchmarking platform called Brain-Score.
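The integrative-benchmarking idea can be reduced to a minimal sketch: normalize each benchmark score by its measurement ceiling and aggregate across benchmarks into one summary score per model. Brain-Score's actual ceiling estimation and aggregation are more involved than this; the sketch only conveys the gist of combining many lab-specific results into a single incentive signal.

```python
def aggregate_score(raw_and_ceiling):
    """Average of ceiling-normalized benchmark scores: a simplified
    stand-in for how an integrative platform might summarize a model
    across many neural and behavioral benchmarks."""
    normalized = [raw / ceiling for raw, ceiling in raw_and_ceiling]
    return sum(normalized) / len(normalized)
```

Under this scheme, a model only climbs the leaderboard by improving across the suite, which is the incentive toward unified models that the perspective argues for.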


Subject(s)
Benchmarking/methods , Brain/physiology , Intelligence/physiology , Models, Neurological , Neural Networks, Computer , Humans
16.
Nat Commun ; 11(1): 3886, 2020 08 04.
Article in English | MEDLINE | ID: mdl-32753603

ABSTRACT

The ability to recognize written letter strings is foundational to human reading, but the underlying neuronal mechanisms remain largely unknown. Recent behavioral research in baboons suggests that non-human primates may provide an opportunity to investigate this question. We recorded the activity of hundreds of neurons in V4 and the inferior temporal cortex (IT) while naïve macaque monkeys passively viewed images of letters, English words and non-word strings, and tested the capacity of those neuronal representations to support a battery of orthographic processing tasks. We found that simple linear read-outs of IT (but not V4) population responses achieved high performance on all tested tasks, even matching the performance and error patterns of baboons on word classification. These results show that the IT cortex of untrained primates can serve as a precursor of orthographic processing, suggesting that the acquisition of reading in humans relies on the recycling of a brain network evolved for other visual functions.


Subject(s)
Biological Evolution , Macaca mulatta/physiology , Pattern Recognition, Visual/physiology , Temporal Lobe/physiology , Animals , Brain Mapping , Decision Making , Magnetic Resonance Imaging , Male , Photic Stimulation/methods , Reading , Temporal Lobe/diagnostic imaging
17.
Science ; 364(6439), 2019 05 03.
Article in English | MEDLINE | ID: mdl-31048462

ABSTRACT

Particular deep artificial neural networks (ANNs) are today's most accurate models of the primate brain's ventral visual stream. Using an ANN-driven image synthesis method, we found that luminous power patterns (i.e., images) can be applied to primate retinae to predictably push the spiking activity of targeted V4 neural sites beyond naturally occurring levels. This method, although not yet perfect, achieves unprecedented independent control of the activity state of entire populations of V4 neural sites, even those with overlapping receptive fields. These results show how the knowledge embedded in today's ANN models might be used to noninvasively set desired internal brain states at neuron-level resolution, and suggest that more accurate ANN models would produce even more accurate control.


Subject(s)
Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Visual Cortex/physiology , Visual Fields/physiology , Animals , Macaca
18.
Nat Neurosci ; 22(6): 974-983, 2019 06.
Article in English | MEDLINE | ID: mdl-31036945

ABSTRACT

Non-recurrent deep convolutional neural networks (CNNs) are currently the best at modeling core object recognition, a behavior that is supported by the densely recurrent primate ventral stream, culminating in the inferior temporal (IT) cortex. If recurrence is critical to this behavior, then primates should outperform feedforward-only deep CNNs for images that require additional recurrent processing beyond the feedforward IT response. Here we first used behavioral methods to discover hundreds of these 'challenge' images. Second, using large-scale electrophysiology, we observed that behaviorally sufficient object identity solutions emerged ~30 ms later in the IT cortex for challenge images compared with primate performance-matched 'control' images. Third, these behaviorally critical late-phase IT response patterns were poorly predicted by feedforward deep CNN activations. Notably, very-deep CNNs and shallower recurrent CNNs better predicted these late IT responses, suggesting that there is a functional equivalence between additional nonlinear transformations and recurrence. Beyond arguing that recurrent circuits are critical for rapid object identification, our results provide strong constraints for future recurrent model development.


Subject(s)
Neural Networks, Computer , Recognition, Psychology/physiology , Temporal Lobe/physiology , Visual Perception/physiology , Animals , Humans , Macaca mulatta
19.
Neuron ; 102(2): 493-505.e5, 2019 04 17.
Article in English | MEDLINE | ID: mdl-30878289

ABSTRACT

Extensive research suggests that the inferior temporal (IT) population supports visual object recognition behavior. However, causal evidence for this hypothesis has been equivocal, particularly beyond the specific case of face-selective subregions of IT. Here, we directly tested this hypothesis by pharmacologically inactivating individual, millimeter-scale subregions of IT while monkeys performed several core object recognition subtasks, interleaved trial-by-trial. First, we observed that IT inactivation resulted in reliable contralateral-biased subtask-selective behavioral deficits. Moreover, inactivating different IT subregions resulted in different patterns of subtask deficits, predicted by each subregion's neuronal object discriminability. Finally, the similarity between different inactivation effects was tightly related to the anatomical distance between corresponding inactivation sites. Taken together, these results provide direct evidence that the IT cortex causally supports general core object recognition and that the underlying IT coding dimensions are topographically organized.


Subject(s)
Pattern Recognition, Visual/physiology , Temporal Lobe/physiology , Visual Pathways/physiology , Animals , Behavior, Animal , Brain Mapping , GABA-A Receptor Agonists/pharmacology , Macaca mulatta , Male , Muscimol/pharmacology , Neurons/drug effects , Neurons/physiology , Pattern Recognition, Visual/drug effects , Temporal Lobe/drug effects , Visual Pathways/drug effects
20.
Elife ; 7, 2018 11 28.
Article in English | MEDLINE | ID: mdl-30484773

ABSTRACT

Ventral visual stream neural responses are dynamic, even for static image presentations. However, dynamical neural models of visual cortex are lacking, as most progress has been made modeling static, time-averaged responses. Here, we studied population neural dynamics during face detection across three cortical processing stages. Remarkably, ~30 milliseconds after the initially evoked response, we found that neurons in intermediate-level areas decreased their responses to typical configurations of their preferred face parts relative to their response for atypical configurations, even while neurons in higher areas achieved and maintained a preference for typical configurations. These hierarchical neural dynamics were inconsistent with standard feedforward circuits. Rather, recurrent models computing prediction errors between stages captured the observed temporal signatures. This model of neural dynamics, which simply augments the standard feedforward model of online vision, suggests that neural responses to static images may encode top-down prediction errors in addition to bottom-up feature estimates.


Subject(s)
Neurons/physiology , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Visual Perception/physiology , Animals , Brain Mapping , Face/physiology , Humans , Macaca mulatta/physiology , Models, Neurological , Photic Stimulation , Reaction Time/physiology