Results 1 - 20 of 35
1.
J Neurosci ; 43(37): 6384-6400, 2023 09 13.
Article in English | MEDLINE | ID: mdl-37591738

ABSTRACT

The structure of neural circuitry plays a crucial role in brain function. Previous studies of brain organization generally had to trade off between coarse descriptions at a large scale and fine descriptions on a small scale. Researchers have now reconstructed tens to hundreds of thousands of neurons at synaptic resolution, enabling investigations into the interplay between global, modular organization, and cell type-specific wiring. Analyzing data of this scale, however, presents unique challenges. To address this problem, we applied novel community detection methods to analyze the synapse-level reconstruction of an adult female Drosophila melanogaster brain containing >20,000 neurons and 10 million synapses. Using a machine-learning algorithm, we find the most densely connected communities of neurons by maximizing a generalized modularity density measure. We resolve the community structure at a range of scales, from large (on the order of thousands of neurons) to small (on the order of tens of neurons). We find that the network is organized hierarchically, and larger-scale communities are composed of smaller-scale structures. Our methods identify well-known features of the fly brain, including its sensory pathways. Moreover, focusing on specific brain regions, we are able to identify subnetworks with distinct connectivity types. For example, manual efforts have identified layered structures in the fan-shaped body. Our methods not only automatically recover this layered structure, but also resolve finer connectivity patterns to downstream and upstream areas. We also find a novel modular organization of the superior neuropil, with distinct clusters of upstream and downstream brain regions dividing the neuropil into several pathways. 
These methods show that the fine-scale, local network reconstruction made possible by modern experimental methods is sufficiently detailed to identify the organization of the brain across scales, and enables novel predictions about the structure and function of its parts. SIGNIFICANCE STATEMENT The Hemibrain is a partial connectome of an adult female Drosophila melanogaster brain containing >20,000 neurons and 10 million synapses. Analyzing the structure of a network of this size requires novel and efficient computational tools. We applied a new community detection method to automatically uncover the modular structure in the Hemibrain dataset by maximizing a generalized modularity measure. This allowed us to resolve the community structure of the fly hemibrain at a range of spatial scales, revealing a hierarchical organization of the network in which larger-scale modules are composed of smaller-scale structures. The method also allowed us to identify subnetworks with distinct cell and connectivity structures, such as the layered structures in the fan-shaped body and the modular organization of the superior neuropil. Thus, network analysis methods can be adapted to the connectomes being reconstructed with modern experimental methods to reveal the organization of the brain across scales. This supports the view that such connectomes will allow us to uncover the organizational structure of the brain, which can ultimately lead to a better understanding of its function.
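The paper maximizes a generalized modularity density measure, which is not reproduced here. As a rough illustration of the underlying idea, the following sketch computes the standard Newman modularity that such measures generalize, on a hypothetical toy graph (the graph and all names are illustrative, not taken from the paper):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q for an undirected graph.

    A: symmetric 0/1 adjacency matrix; labels: community id per node.
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
    """
    k = A.sum(axis=1)          # node degrees
    two_m = A.sum()            # 2 * number of edges
    same = np.equal.outer(labels, labels)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles joined by a single bridge edge (nodes 0-2 and 3-5).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1

good = modularity(A, np.array([0, 0, 0, 1, 1, 1]))   # the natural split
trivial = modularity(A, np.zeros(6, dtype=int))      # everything in one module
print(good, trivial)  # natural split scores ~0.357; trivial partition gives Q = 0
```

Community detection methods of the kind the paper applies search over partitions for the one maximizing such an objective; here the "correct" two-triangle split scores strictly higher than the trivial partition.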


Subject(s)
Connectome , Pentaerythritol Tetranitrate , Female , Animals , Drosophila , Drosophila melanogaster , Brain , Neurons
2.
PLoS Comput Biol ; 19(3): e1010932, 2023 03.
Article in English | MEDLINE | ID: mdl-36972288

ABSTRACT

Machine learning models have difficulty generalizing to data outside of the distribution they were trained on. In particular, vision models are usually vulnerable to adversarial attacks or common corruptions, to which the human visual system is robust. Recent studies have found that regularizing machine learning models to favor brain-like representations can improve model robustness, but it is unclear why. We hypothesize that the increased model robustness is partly due to the low spatial frequency preference inherited from the neural representation. We tested this simple hypothesis with several frequency-oriented analyses, including the design and use of hybrid images to probe model frequency sensitivity directly. We also examined many other publicly available robust models that were trained on adversarial images or with data augmentation, and found that all these robust models showed a greater preference to low spatial frequency information. We show that preprocessing by blurring can serve as a defense mechanism against both adversarial attacks and common corruptions, further confirming our hypothesis and demonstrating the utility of low spatial frequency information in robust object recognition.
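The blur-as-defense finding can be sketched with a plain separable Gaussian low-pass filter. This is an illustrative stand-in, not the authors' pipeline: the "attack" below is just additive high-frequency noise, which blurring attenuates far more than the smoother image content.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(image, sigma):
    """Separable Gaussian low-pass filter: suppresses the high spatial
    frequencies where adversarial perturbations tend to concentrate."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

rng = np.random.default_rng(0)
clean = rng.random((32, 32))
attacked = clean + 0.05 * rng.standard_normal((32, 32))  # noise as a stand-in for an attack
raw_gap = np.abs(attacked - clean).mean()
blurred_gap = np.abs(blur(attacked, 2.0) - blur(clean, 2.0)).mean()
print(blurred_gap < raw_gap)  # True: the perturbation shrinks before reaching a classifier
```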


Subject(s)
Deep Learning , Neural Networks, Computer , Humans , Visual Perception , Machine Learning , Head
3.
J Neurosci ; 42(27): 5451-5462, 2022 07 06.
Article in English | MEDLINE | ID: mdl-35641186

ABSTRACT

Sensory evidence accumulation is considered a hallmark of decision-making in noisy environments. Integration of sensory inputs has traditionally been studied using passive stimuli, segregating perception from action. Lessons learned from this approach, however, may not generalize to ethological behaviors like navigation, where there is an active interplay between perception and action. We designed a sensory-based sequential decision task in virtual reality in which humans and monkeys navigated to a memorized location by integrating optic flow generated by their own joystick movements. A major challenge in such closed-loop tasks is that subjects' actions determine future sensory input, causing ambiguity about whether they rely on sensory input rather than on expectations based solely on a learned model of the dynamics. To test whether subjects integrated optic flow over time, we used three independent experimental manipulations: unpredictable optic flow perturbations, which pushed subjects off their trajectory; gain manipulation of the joystick controller, which changed the consequences of actions; and manipulation of the optic flow density, which changed the information borne by sensory evidence. Our results suggest that both macaques (male) and humans (female/male) relied heavily on optic flow, demonstrating a critical role for sensory evidence accumulation during naturalistic action-perception closed-loop tasks. SIGNIFICANCE STATEMENT The temporal integration of evidence is a fundamental component of mammalian intelligence. Yet it has traditionally been studied using experimental paradigms that fail to capture the closed-loop interaction between actions and sensations inherent in real-world continuous behaviors. These conventional paradigms use binary decision tasks and passive stimuli whose statistics remain stationary over time. Instead, we developed a naturalistic visuomotor navigation paradigm that mimics the causal structure of real-world sensorimotor interactions and probed the extent to which participants integrate sensory evidence by adding task manipulations that reveal complementary aspects of the computation.


Subject(s)
Optic Flow , Animals , Female , Humans , Male , Mammals , Movement
4.
PLoS Comput Biol ; 14(9): e1006371, 2018 09.
Article in English | MEDLINE | ID: mdl-30248091

ABSTRACT

Studies of neuron-behaviour correlation and causal manipulation have long been used separately to understand the neural basis of perception. Yet these approaches sometimes lead to drastically conflicting conclusions about the functional role of brain areas. Theories that focus only on choice-related neuronal activity cannot reconcile those findings without additional experiments involving large-scale recordings to measure interneuronal correlations. By expanding current theories of neural coding and incorporating results from inactivation experiments, we demonstrate here that it is possible to infer decoding weights of different brain areas at a coarse scale without precise knowledge of the correlation structure. We apply this technique to neural data collected from two different cortical areas in macaque monkeys trained to perform a heading discrimination task. We identify two opposing decoding schemes, each consistent with data depending on the nature of correlated noise. Our theory makes specific testable predictions to distinguish these scenarios experimentally without requiring measurement of the underlying noise correlations.


Subject(s)
Brain/physiology , Motion Perception/physiology , Neurons/physiology , Algorithms , Animals , Choice Behavior , Computer Simulation , Macaca mulatta , Models, Neurological , Motion , Normal Distribution
5.
J Neurophysiol ; 120(5): 2430-2452, 2018 11 01.
Article in English | MEDLINE | ID: mdl-30365390

ABSTRACT

By the time the brain has determined the position of a moving object, anatomical and processing delays mean that the object will already have moved to a new location. Given the statistical regularities present in natural motion, the brain may have acquired compensatory mechanisms to minimize the mismatch between the perceived and real positions of moving objects. A well-known visual illusion, the flash-lag effect, points toward such a possibility. Although many psychophysical models have been suggested to explain this illusion, their predictions have not been tested at the neural level, particularly in a species of animal known to perceive the illusion. To this end, we recorded neural responses to flashed and moving bars from primary visual cortex (V1) of awake, fixating macaque monkeys. We found that the response latency to moving bars of varying speed, motion direction, and luminance was shorter than that to flashes, in a manner that is consistent with psychophysical results. At the level of V1, our results support the differential latency model positing that flashed and moving bars have different latencies. As we found a neural correlate of the illusion in passively fixating monkeys, our results also suggest that judging the instantaneous position of the moving bar at the time of flash, as required by the postdiction/motion-biasing model, may not be necessary for observing a neural correlate of the illusion. Our results also suggest that the brain may have evolved mechanisms to process moving stimuli faster and closer to real time compared with briefly appearing stationary stimuli. NEW & NOTEWORTHY We report several observations in awake macaque V1 that provide support for the differential latency model of the flash-lag illusion. We find that the equal latency of flash and moving stimuli as assumed by motion integration/postdiction models does not hold in V1. We show that in macaque V1, motion processing latency depends on stimulus luminance, speed, and motion direction in a manner consistent with several psychophysical properties of the flash-lag illusion.


Subject(s)
Illusions , Motion Perception , Visual Cortex/physiology , Animals , Macaca mulatta , Male , Neurons/physiology , Reaction Time , Visual Cortex/cytology , Wakefulness
7.
J Neurophysiol ; 116(3): 1449-67, 2016 09 01.
Article in English | MEDLINE | ID: mdl-27334948

ABSTRACT

Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: "congruent" and "opposite" cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs.


Subject(s)
Models, Neurological , Motion Perception/physiology , Neurons/physiology , Animals , Attention/physiology , Discrimination, Psychological/physiology , Likelihood Functions , Linear Models , Optic Flow/physiology , Orientation/physiology , Proprioception/physiology , Reinforcement, Psychology , Space Perception/physiology
8.
Nat Neurosci ; 27(4): 772-781, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38443701

ABSTRACT

Until now, it has been difficult to examine the neural bases of foraging in naturalistic environments because previous approaches have relied on restrained animals performing trial-based foraging tasks. Here we allowed unrestrained monkeys to freely interact with concurrent reward options while we wirelessly recorded population activity in the dorsolateral prefrontal cortex. The animals decided when and where to forage based on whether their prediction of reward was fulfilled or violated. This prediction was not solely based on a history of reward delivery, but also on the understanding that waiting longer improves the chance of reward. The task variables were continuously represented in a subspace of the high-dimensional population activity, and this compressed representation predicted the animal's subsequent choices better than the true task variables and as well as the raw neural activity. Our results indicate that monkeys' foraging strategies are based on a cortical model of reward dynamics as animals freely explore their environment.


Subject(s)
Prefrontal Cortex , Reward , Animals , Macaca mulatta , Choice Behavior
9.
bioRxiv ; 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38895233

ABSTRACT

In daily life, we must recognize others' emotions so we can respond appropriately. This ability may rely, at least in part, on neural responses similar to those associated with our own emotions. We hypothesized that the insula, a cortical region near the junction of the temporal, parietal, and frontal lobes, may play a key role in this process. We recorded local field potential (LFP) activity in human neurosurgical patients performing two tasks, one focused on identifying their own emotional response and one on identifying facial emotional responses in others. We found matching patterns of gamma- and high-gamma-band activity for the two tasks in the insula. Three other regions (the medial temporal lobe, anterior cingulate cortex, and orbitofrontal cortex) clearly encoded both self- and other-emotions, but used orthogonal activity patterns to do so. These results support the hypothesis that the insula plays a particularly important role in mediating between experienced and observed emotions.

10.
Nat Commun ; 14(1): 1832, 2023 04 01.
Article in English | MEDLINE | ID: mdl-37005470

ABSTRACT

Success in many real-world tasks depends on our ability to dynamically track hidden states of the world. We hypothesized that neural populations estimate these states by processing sensory history through recurrent interactions which reflect the internal model of the world. To test this, we recorded brain activity in posterior parietal cortex (PPC) of monkeys navigating by optic flow to a hidden target location within a virtual environment, without explicit position cues. In addition to sequential neural dynamics and strong interneuronal interactions, we found that the hidden state - monkey's displacement from the goal - was encoded in single neurons, and could be dynamically decoded from population activity. The decoded estimates predicted navigation performance on individual trials. Task manipulations that perturbed the world model induced substantial changes in neural interactions, and modified the neural representation of the hidden state, while representations of sensory and motor variables remained stable. The findings were recapitulated by a task-optimized recurrent neural network model, suggesting that task demands shape the neural interactions in PPC, leading them to embody a world model that consolidates information and tracks task-relevant hidden states.
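The claim that a hidden state "could be dynamically decoded from population activity" can be made concrete with a minimal synthetic sketch: simulate linearly tuned noisy rates and read the latent variable back out with a ridge-regression decoder. The data, tuning model, and regularizer choice below are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 500, 40                                    # timepoints, neurons
latent = np.cumsum(rng.standard_normal(T)) * 0.1  # hidden, slowly drifting state
tuning = rng.standard_normal(N)                   # hypothetical linear tuning weights
rates = np.outer(latent, tuning) + 0.5 * rng.standard_normal((T, N))  # noisy activity

# Ridge-regression decoder: w = (X^T X + lam*I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(rates.T @ rates + lam * np.eye(N), rates.T @ latent)
estimate = rates @ w
r = np.corrcoef(estimate, latent)[0, 1]
print(r > 0.8)  # True: the hidden state is well recovered by a linear readout
```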


Subject(s)
Cues , Neurons , Animals , Male , Neurons/physiology , Macaca mulatta , Parietal Lobe/physiology
11.
bioRxiv ; 2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36993218

ABSTRACT

A defining characteristic of intelligent systems, whether natural or artificial, is the ability to generalize and infer behaviorally relevant latent causes from high-dimensional sensory input, despite significant variations in the environment. To understand how brains achieve generalization, it is crucial to identify the features to which neurons respond selectively and invariantly. However, the high-dimensional nature of visual inputs, the non-linearity of information processing in the brain, and limited experimental time make it challenging to systematically characterize neuronal tuning and invariances, especially for natural stimuli. Here, we extended "inception loops" - a paradigm that iterates between large-scale recordings, neural predictive models, and in silico experiments followed by in vivo verification - to systematically characterize single neuron invariances in the mouse primary visual cortex. Using the predictive model we synthesized Diverse Exciting Inputs (DEIs), a set of inputs that differ substantially from each other while each driving a target neuron strongly, and verified these DEIs' efficacy in vivo. We discovered a novel bipartite invariance: one portion of the receptive field encoded phase-invariant texture-like patterns, while the other portion encoded a fixed spatial pattern. Our analysis revealed that the division between the fixed and invariant portions of the receptive fields aligns with object boundaries defined by spatial frequency differences present in highly activating natural images. These findings suggest that bipartite invariance might play a role in segmentation by detecting texture-defined object boundaries, independent of the phase of the texture. We also replicated these bipartite DEIs in the functional connectomics MICrONs data set, which opens the way towards a circuit-level mechanistic understanding of this novel type of invariance. 
Our study demonstrates the power of using a data-driven deep learning approach to systematically characterize neuronal invariances. By applying this method across the visual hierarchy, cell types, and sensory modalities, we can decipher how latent variables are robustly extracted from natural scenes, leading to a deeper understanding of generalization.

12.
bioRxiv ; 2023 Mar 14.
Article in English | MEDLINE | ID: mdl-36993321

ABSTRACT

A key role of sensory processing is integrating information across space. Neuronal responses in the visual system are influenced by both local features in the receptive field center and contextual information from the surround. While center-surround interactions have been extensively studied using simple stimuli like gratings, investigating these interactions with more complex, ecologically-relevant stimuli is challenging due to the high dimensionality of the stimulus space. We used large-scale neuronal recordings in mouse primary visual cortex to train convolutional neural network (CNN) models that accurately predicted center-surround interactions for natural stimuli. These models enabled us to synthesize surround stimuli that strongly suppressed or enhanced neuronal responses to the optimal center stimulus, as confirmed by in vivo experiments. In contrast to the common notion that congruent center and surround stimuli are suppressive, we found that excitatory surrounds appeared to complete spatial patterns in the center, while inhibitory surrounds disrupted them. We quantified this effect by demonstrating that CNN-optimized excitatory surround images have strong similarity in neuronal response space with surround images generated by extrapolating the statistical properties of the center, and with patches of natural scenes, which are known to exhibit high spatial correlations. Our findings cannot be explained by theories like redundancy reduction or predictive coding previously linked to contextual modulation in visual cortex. Instead, we demonstrated that a hierarchical probabilistic model incorporating Bayesian inference, and modulating neuronal responses based on prior knowledge of natural scene statistics, can explain our empirical results. 
We replicated these center-surround effects in the multi-area functional connectomics MICrONS dataset using natural movies as visual stimuli, which opens the way toward understanding circuit-level mechanisms, such as the contributions of lateral and feedback recurrent connections. Our data-driven modeling approach provides a new understanding of the role of contextual interactions in sensory processing and can be adapted across brain areas, sensory modalities, and species.

13.
bioRxiv ; 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-36993435

ABSTRACT

Understanding the brain's perception algorithm is a highly intricate problem, as the inherent complexity of sensory inputs and the brain's nonlinear processing make characterizing sensory representations difficult. Recent studies have shown that functional models-capable of predicting large-scale neuronal activity in response to arbitrary sensory input-can be powerful tools for characterizing neuronal representations by enabling high-throughput in silico experiments. However, accurately modeling responses to dynamic and ecologically relevant inputs like videos remains challenging, particularly when generalizing to new stimulus domains outside the training distribution. Inspired by recent breakthroughs in artificial intelligence, where foundation models-trained on vast quantities of data-have demonstrated remarkable capabilities and generalization, we developed a "foundation model" of the mouse visual cortex: a deep neural network trained on large amounts of neuronal responses to ecological videos from multiple visual cortical areas and mice. The model accurately predicted neuronal responses not only to natural videos but also to various new stimulus domains, such as coherent moving dots and noise patterns, underscoring its generalization abilities. The foundation model could also be adapted to new mice with minimal natural movie training data. We applied the foundation model to the MICrONS dataset: a study of the brain that integrates structure with function at unprecedented scale, containing nanometer-scale morphology, connectivity with >500,000,000 synapses, and function of >70,000 neurons within a ~1 mm³ volume spanning multiple areas of the mouse visual cortex. This accurate functional model of the MICrONS data opens the possibility for a systematic characterization of the relationship between circuit structure and function. 
By precisely capturing the response properties of the visual cortex and generalizing to new stimulus domains and mice, foundation models can pave the way for a deeper understanding of visual computation.

14.
Biol Psychiatry ; 94(6): 445-453, 2023 09 15.
Article in English | MEDLINE | ID: mdl-36736418

ABSTRACT

BACKGROUND: Disorders of mood and cognition are prevalent, disabling, and notoriously difficult to treat. Fueling this challenge in treatment is a significant gap in our understanding of their neurophysiological basis. METHODS: We recorded high-density neural activity from intracranial electrodes implanted in depression-relevant prefrontal cortical regions in 3 human subjects with severe depression. Neural recordings were labeled with depression severity scores across a wide dynamic range using an adaptive assessment that allowed sampling with a temporal frequency greater than that possible with typical rating scales. We modeled these data using regularized regression techniques with region selection to decode depression severity from the prefrontal recordings. RESULTS: Across prefrontal regions, we found that reduced depression severity is associated with decreased low-frequency neural activity and increased high-frequency activity. When constraining our model to decode using a single region, spectral changes in the anterior cingulate cortex best predicted depression severity in all 3 subjects. Relaxing this constraint revealed unique, individual-specific sets of spatiospectral features predictive of symptom severity, reflecting the heterogeneous nature of depression. CONCLUSIONS: The ability to decode depression severity from neural activity increases our fundamental understanding of how depression manifests in the human brain and provides a target neural signature for personalized neuromodulation therapies.


Subject(s)
Brain , Depression , Humans , Brain/physiology , Prefrontal Cortex , Brain Mapping/methods , Gyrus Cinguli
15.
IEEE Access ; 10: 58071-58080, 2022.
Article in English | MEDLINE | ID: mdl-36339794

ABSTRACT

Neurons in the brain are complex machines with distinct functional compartments that interact nonlinearly. In contrast, neurons in artificial neural networks abstract away this complexity, typically down to a scalar activation function of a weighted sum of inputs. Here we emulate more biologically realistic neurons by learning canonical activation functions with two input arguments, analogous to basal and apical dendrites. We use a network-in-network architecture where each neuron is modeled as a multilayer perceptron with two inputs and a single output. This inner perceptron is shared by all units in the outer network. Remarkably, the resultant nonlinearities often produce soft XOR functions, consistent with recent experimental observations about interactions between inputs in human cortical neurons. When hyperparameters are optimized, networks with these nonlinearities learn faster and perform better than conventional ReLU nonlinearities with matched parameter counts, and they are more robust to natural and adversarial perturbations.
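The soft XOR behavior the learned activations converge to can be illustrated with a closed-form stand-in: a two-argument function that responds strongly when exactly one input is active and is suppressed when both are. The paper learns this shape with a small shared MLP; the formula below is not that MLP, only a hand-picked function with the same qualitative shape.

```python
import numpy as np

def soft_xor(basal, apical):
    """Hand-crafted soft XOR of two dendrite-like inputs: high when exactly
    one input is strongly driven, near zero when both or neither are."""
    ta, tb = np.tanh(basal), np.tanh(apical)
    return ta + tb - 2 * ta * tb

for a, b in [(0, 0), (2, 0), (0, 2), (2, 2)]:
    print(a, b, round(float(soft_xor(a, b)), 3))
# one strong input -> output near 1; both strong -> output near 0
```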

16.
Elife ; 11, 2022 02 18.
Article in English | MEDLINE | ID: mdl-35179488

ABSTRACT

Path integration is a sensorimotor computation that can be used to infer latent dynamical states by integrating self-motion cues. We studied the influence of sensory observation (visual/vestibular) and latent control dynamics (velocity/acceleration) on human path integration using a novel motion-cueing algorithm. Sensory modality and control dynamics were both varied randomly across trials, as participants controlled a joystick to steer to a memorized target location in virtual reality. Visual and vestibular steering cues allowed comparable accuracies only when participants controlled their acceleration, suggesting that vestibular signals, on their own, fail to support accurate path integration in the absence of sustained acceleration. Nevertheless, performance in all conditions reflected a failure to fully adapt to changes in the underlying control dynamics, a result that was well explained by a bias in the dynamics estimation. This work demonstrates how an incorrect internal model of control dynamics affects navigation in volatile environments in spite of continuous sensory feedback.
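The distinction between velocity and acceleration control dynamics can be sketched with a toy one-dimensional integrator. The dynamics below are illustrative (simple Euler integration, hypothetical parameters), not the paper's motion-cueing equations; the point is only that the same joystick trace yields different trajectories under the two regimes.

```python
import numpy as np

def simulate(joystick, mode, dt=0.1):
    """Integrate a joystick trace to position under two control regimes:
    'velocity' (joystick sets velocity) or 'acceleration' (joystick sets dv/dt)."""
    v = np.zeros_like(joystick)
    if mode == "velocity":
        v = joystick
    else:
        for t in range(1, len(joystick)):
            v[t] = v[t - 1] + joystick[t] * dt
    return np.cumsum(v) * dt  # path integration: position = integral of velocity

pulse = np.r_[np.ones(20), np.zeros(20)]  # brief forward push, then release
pos_v = simulate(pulse, "velocity")
pos_a = simulate(pulse, "acceleration")
print(round(float(pos_v[-1]), 2), round(float(pos_a[-1]), 2))
# under velocity control the subject stops when the joystick is released;
# under acceleration control the built-up velocity persists and the subject keeps moving
```

This is why the two regimes differ for vestibular sensing: only acceleration-control trajectories contain the sustained accelerations that vestibular signals can report.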


Subject(s)
Cues , Motion Perception , Space Perception , Vestibule, Labyrinth , Adolescent , Adult , Brain Mapping , Feedback, Sensory , Female , Humans , Male , Virtual Reality , Young Adult
17.
Front Artif Intell ; 5: 890016, 2022.
Article in English | MEDLINE | ID: mdl-35903397

ABSTRACT

Despite the enormous success of artificial neural networks (ANNs) in many disciplines, the characterization of their computations and the origin of key properties such as generalization and robustness remain open questions. Recent literature suggests that robust networks with good generalization properties tend to be biased toward processing low frequencies in images. To explore the frequency bias hypothesis further, we develop an algorithm that allows us to learn modulatory masks highlighting the essential input frequencies needed for preserving a trained network's performance. We achieve this by imposing invariance in the loss with respect to such modulations in the input frequencies. We first use our method to test the low-frequency preference hypothesis of adversarially trained or data-augmented networks. Our results suggest that adversarially robust networks indeed exhibit a low-frequency bias but we find this bias is also dependent on directions in frequency space. However, this is not necessarily true for other types of data augmentation. Our results also indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place. Surprisingly, images seen through these modulatory masks are not recognizable and resemble texture-like patterns.
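The modulatory-mask idea can be sketched in its simplest form: a hard radial low-pass mask applied in the Fourier domain. The paper learns a soft, direction-dependent mask through the loss; the fixed binary mask below is only an illustrative analogue.

```python
import numpy as np

def apply_frequency_mask(image, cutoff):
    """Keep only spatial frequencies within `cutoff` (cycles/image) of DC,
    then invert back to the image domain."""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    mask = np.hypot(yy, xx) <= cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
low = apply_frequency_mask(img, 8)
# The DC component survives the mask, so the mean is preserved,
# while discarded high frequencies reduce the variance.
print(abs(low.mean() - img.mean()) < 1e-6, low.var() < img.var())
```

A learned mask replaces the fixed disk with per-frequency weights optimized so that masked images still support the network's performance.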

18.
Nat Commun ; 12(1): 6557, 2021 11 16.
Article in English | MEDLINE | ID: mdl-34785652

ABSTRACT

Sensory data about most natural task-relevant variables are entangled with task-irrelevant nuisance variables. The neurons that encode these relevant signals typically constitute a nonlinear population code. Here we present a theoretical framework for quantifying how the brain uses or decodes its nonlinear information. Our theory obeys fundamental mathematical limitations on information content inherited from the sensory periphery, describing redundant codes when there are many more cortical neurons than primary sensory neurons. The theory predicts that if the brain uses its nonlinear population codes optimally, then more informative patterns should be more correlated with choices. More specifically, the theory predicts a simple, easily computed quantitative relationship between fluctuating neural activity and behavioral choices that reveals the decoding efficiency. This relationship holds for optimal feedforward networks of modest complexity, when experiments are performed under natural nuisance variation. We analyze recordings from primary visual cortex of monkeys discriminating the distribution from which oriented stimuli were drawn, and find these data are consistent with the hypothesis of near-optimal nonlinear decoding.


Subject(s)
Primary Visual Cortex/metabolism , Algorithms , Animals , Brain/metabolism , Models, Neurological , Models, Theoretical , Neurons/metabolism
19.
PLoS Biol ; 5(12): e331, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18162043

ABSTRACT

Humans can distinguish visual stimuli that differ by features the size of only a few photoreceptors. This is possible despite the incessant image motion due to fixational eye movements, which can be many times larger than the features to be distinguished. To perform well, the brain must identify the retinal firing patterns induced by the stimulus while discounting similar patterns caused by spontaneous retinal activity. This is a challenge since the trajectory of the eye movements, and consequently, the stimulus position, are unknown. We derive a decision rule for using retinal spike trains to discriminate between two stimuli, given that their retinal image moves with an unknown random walk trajectory. This algorithm dynamically estimates the probability of the stimulus at different retinal locations, and uses this to modulate the influence of retinal spikes acquired later. Applied to a simple orientation-discrimination task, the algorithm performance is consistent with human acuity, whereas naive strategies that neglect eye movements perform much worse. We then show how a simple, biologically plausible neural network could implement this algorithm using a local, activity-dependent gain and lateral interactions approximately matched to the statistics of eye movements. Finally, we discuss evidence that such a network could be operating in the primary visual cortex.
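The core of the decision rule, dynamically estimating the probability of the stimulus at different retinal locations while the eye drifts, is structurally a hidden-Markov forward update. The sketch below is an illustrative one-dimensional version with made-up numbers (and a periodic boundary via np.roll), not the paper's full algorithm.

```python
import numpy as np

def forward_update(prior, likelihood, step_prob=0.25):
    """One belief update: diffuse the position probability under the
    eye-movement random walk, then reweight by the spike likelihood."""
    diffused = ((1 - 2 * step_prob) * prior
                + step_prob * np.roll(prior, 1)
                + step_prob * np.roll(prior, -1))
    post = diffused * likelihood
    return post / post.sum()

n, true_pos = 21, 10
belief = np.full(n, 1.0 / n)       # stimulus position initially unknown
for _ in range(30):
    lik = np.full(n, 0.5)
    lik[true_pos] = 1.5            # spikes are more probable at the stimulus location
    belief = forward_update(belief, lik)
print(int(belief.argmax()))        # belief has concentrated on position 10
```

Later spikes are weighted by this evolving belief, which is the mechanism the paper maps onto activity-dependent gain and lateral interactions.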


Subject(s)
Eye Movements/physiology , Nerve Net/physiology , Visual Acuity/physiology , Algorithms , Humans , Markov Chains , Models, Biological , Psychophysics
20.
J Vis ; 10(14)2010 Dec 31.
Article in English | MEDLINE | ID: mdl-21196508

ABSTRACT

To understand the computations of our visual system, it is important to understand also the natural environment it evolved to interpret. Unfortunately, existing models of the visual environment are either unrealistic or too complex for mathematical description. Here we describe a naturalistic image model and present a mathematical solution for the statistical relationships between the image features and model variables. The world described by this model is composed of independent, opaque, textured objects, which occlude each other. This simple structure allows us to calculate the joint probability distribution of image values sampled at multiple arbitrarily located points, without approximation. This result can be converted into probabilistic relationships between observable image features as well as between the unobservable properties that caused these features, including object boundaries and relative depth. We show that the image model is sufficient to explain a wide range of natural scene properties. Finally, we discuss the implications of this description of natural scenes for the study of vision.
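A minimal sample from this family of "dead leaves"-style models can be generated in a few lines: opaque objects dropped in sequence, each occluding whatever lies beneath. The sketch below uses flat gray disks rather than the textured objects of the paper's model, and all parameters are illustrative.

```python
import numpy as np

def dead_leaves(size=64, n_objects=40, seed=0):
    """Sample an image of independent opaque disks with random gray levels;
    disks painted later occlude earlier ones, as in occlusion-based scene models."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    yy, xx = np.mgrid[:size, :size]
    for _ in range(n_objects):
        cy, cx = rng.integers(0, size, 2)
        r = rng.integers(3, size // 4)
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = rng.random()
    return img

img = dead_leaves()
print(img.shape)  # (64, 64); pixel values lie in [0, 1)
```

Because each point's value is determined by the topmost object covering it, joint statistics of pixel pairs (and the induced boundary and depth relationships) can be worked out analytically, which is the paper's contribution.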


Subject(s)
Form Perception/physiology , Models, Theoretical , Photic Stimulation/methods , Probability , Visual Pathways/physiology , Contrast Sensitivity/physiology , Depth Perception/physiology , Humans