Results 1 - 20 of 71
1.
Cell ; 171(7): 1663-1677.e16, 2017 Dec 14.
Article in English | MEDLINE | ID: mdl-29224779

ABSTRACT

Social behaviors are crucial to all mammals. Although the prelimbic cortex (PL, part of medial prefrontal cortex) has been implicated in social behavior, it is not clear which neurons are relevant or how they contribute. We found that PL contains anatomically and molecularly distinct subpopulations that target three downstream regions that have been implicated in social behavior: the nucleus accumbens (NAc), amygdala, and ventral tegmental area. Activation of NAc-projecting PL neurons (PL-NAc), but not the other subpopulations, decreased the preference for a social target. To determine what information PL-NAc neurons convey, we selectively recorded from them and found that individual neurons were active during social investigation, but only in specific spatial locations. Spatially specific manipulation of these neurons bidirectionally regulated the formation of a social-spatial association. Thus, the unexpected combination of social and spatial information within the PL-NAc may contribute to social behavior by supporting social-spatial learning.


Subject(s)
Limbic System , Neurons/cytology , Nucleus Accumbens/cytology , Prefrontal Cortex/cytology , Social Behavior , Spatial Behavior , Amygdala/physiology , Animals , Learning , Mice , Neural Pathways , Neurons/physiology , Nucleus Accumbens/physiology , Prefrontal Cortex/physiology , Ventral Tegmental Area/physiology
2.
Nature ; 629(8014): 1100-1108, 2024 May.
Article in English | MEDLINE | ID: mdl-38778103

ABSTRACT

The rich variety of behaviours observed in animals arises through the interplay between sensory processing and motor control. To understand these sensorimotor transformations, it is useful to build models that predict not only neural responses to sensory input1-5 but also how each neuron causally contributes to behaviour6,7. Here we demonstrate a novel modelling approach to identify a one-to-one mapping between internal units in a deep neural network and real neurons by predicting the behavioural changes that arise from systematic perturbations of more than a dozen neuronal cell types. A key ingredient that we introduce is 'knockout training', which involves perturbing the network during training to match the perturbations of the real neurons during behavioural experiments. We apply this approach to model the sensorimotor transformations of Drosophila melanogaster males during a complex, visually guided social behaviour8-11. The visual projection neurons at the interface between the optic lobe and central brain form a set of discrete channels12, and prior work indicates that each channel encodes a specific visual feature to drive a particular behaviour13,14. Our model reaches a different conclusion: combinations of visual projection neurons, including those involved in non-social behaviours, drive male interactions with the female, forming a rich population code for behaviour. Overall, our framework consolidates behavioural effects elicited from various neural perturbations into a single, unified model, providing a map from stimulus to neuronal cell type to behaviour, and enabling future incorporation of wiring diagrams of the brain15 into the model.


Subject(s)
Brain , Drosophila melanogaster , Models, Neurological , Neurons , Optic Lobe, Nonmammalian , Social Behavior , Visual Perception , Animals , Female , Male , Drosophila melanogaster/physiology , Drosophila melanogaster/cytology , Neurons/classification , Neurons/cytology , Neurons/physiology , Optic Lobe, Nonmammalian/cytology , Optic Lobe, Nonmammalian/physiology , Visual Perception/physiology , Nerve Net/cytology , Nerve Net/physiology , Brain/cytology , Brain/physiology
3.
Nat Methods ; 19(4): 470-478, 2022 04.
Article in English | MEDLINE | ID: mdl-35347320

ABSTRACT

Population recordings of calcium activity are a major source of insight into neural function. Large datasets require automated processing, but this can introduce errors that are difficult to detect. Here we show that popular time course-estimation algorithms often contain substantial misattribution errors affecting 10-20% of transients. Misattribution, in which fluorescence is ascribed to the wrong cell, arises when overlapping cells and processes are imperfectly defined or not identified. To diagnose misattribution, we develop metrics and visualization tools for evaluating large datasets. To correct time courses, we introduce a robust estimator that explicitly accounts for contaminating signals. In one hippocampal dataset, removing contamination reduced the number of place cells by 15%, and 19% of place fields shifted by over 10 cm. Our methods are compatible with other cell-finding techniques, empowering users to diagnose and correct a potentially widespread problem that could alter scientific conclusions.


Subject(s)
Calcium , Neurons , Algorithms , Calcium/metabolism , Calcium Signaling , Hippocampus/metabolism , Neurons/metabolism
4.
Neural Comput ; 36(3): 437-474, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38363661

ABSTRACT

Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by proposing a novel framework for maximum-mutual-information input selection for discrete latent variable regression models. We first apply our method to a class of models known as mixtures of linear regressions (MLR). While it is well known that active learning confers no advantage for linear-gaussian regression models, we use Fisher information to show analytically that active learning can nevertheless achieve large gains for mixtures of such models, and we validate this improvement using both simulations and real-world data. We then consider a powerful class of temporally structured latent variable models given by a hidden Markov model (HMM) with generalized linear model (GLM) observations, which has recently been used to identify discrete states from animal decision-making data. We show that our method substantially reduces the amount of data needed to fit GLM-HMMs and outperforms a variety of approximate methods based on variational and amortized inference. Infomax learning for latent variable models thus offers a powerful approach for characterizing temporally structured latent states, with a wide variety of applications in neuroscience and beyond.
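As a hedged illustration of the infomax principle described in this abstract (not the authors' implementation), the sketch below estimates how much a single response at a candidate input would reveal about the latent component assignment in a two-component mixture of linear regressions. The component weights, noise level, and candidate inputs are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-component mixture of linear regressions (known weights, equal mixing).
w = np.array([1.0, -1.0])   # hypothetical component slopes
sigma = 0.5                 # observation noise std

def gauss_logpdf(y, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (y - mu)**2 / (2 * sigma**2)

def mutual_info(x, n_samp=20000):
    """Monte Carlo estimate of I(y; z | x): how much one response at input x
    tells us about the latent component assignment z."""
    z = rng.integers(0, 2, n_samp)
    y = w[z] * x + sigma * rng.standard_normal(n_samp)
    # marginal entropy H(y|x), estimated as -mean log of the 2-component mixture
    log_mix = np.logaddexp(gauss_logpdf(y, w[0] * x, sigma),
                           gauss_logpdf(y, w[1] * x, sigma)) - np.log(2)
    h_marg = -log_mix.mean()
    # conditional entropy H(y|x,z) is Gaussian, hence closed form
    h_cond = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
    return h_marg - h_cond

i_zero, i_far = mutual_info(0.0), mutual_info(2.0)
```

At x = 0 both components predict the same response, so a trial there carries no information about the assignment; inputs where the component predictions diverge are far more informative, which is the intuition behind active input selection for mixture models.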

5.
Neural Comput ; 36(2): 175-226, 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38101329

ABSTRACT

Neural decoding methods provide a powerful tool for quantifying the information content of neural population codes and the limits imposed by correlations in neural activity. However, standard decoding methods are prone to overfitting and scale poorly to high-dimensional settings. Here, we introduce a novel decoding method to overcome these limitations. Our approach, the gaussian process multiclass decoder (GPMD), is well suited to decoding a continuous low-dimensional variable from high-dimensional population activity and provides a platform for assessing the importance of correlations in neural population codes. The GPMD is a multinomial logistic regression model with a gaussian process prior over the decoding weights. The prior includes hyperparameters that govern the smoothness of each neuron's decoding weights, allowing automatic pruning of uninformative neurons during inference. We provide a variational inference method for fitting the GPMD to data, which scales to hundreds or thousands of neurons and performs well even in data sets with more neurons than trials. We apply the GPMD to recordings from primary visual cortex in three species: monkey, ferret, and mouse. Our decoder achieves state-of-the-art accuracy on all three data sets and substantially outperforms independent Bayesian decoding, showing that knowledge of the correlation structure is essential for optimal decoding in all three species.
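A toy sketch of the model structure described above, under invented sizes and hyperparameters: multinomial logistic regression fit by MAP gradient descent, with a squared-exponential prior that smooths each neuron's decoding weights across ordered stimulus classes. The actual GPMD uses variational inference and learns per-neuron smoothness hyperparameters; nothing here is drawn from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic population: N neurons with smooth tuning over K stimulus classes
# that discretize a continuous 1-D variable (a toy stand-in for orientation).
N, K, T = 40, 8, 400
prefs = rng.uniform(0, K, N)
stim = rng.integers(0, K, T)
tuning = np.exp(-0.5 * ((np.arange(K)[None, :] - prefs[:, None]) / 1.5) ** 2)
X = tuning[:, stim].T + 0.3 * rng.standard_normal((T, N))   # trials x neurons

# Squared-exponential prior across classes encourages each neuron's K decoding
# weights to vary smoothly with class label; jitter keeps the inverse stable.
d = np.arange(K)
Kmat = np.exp(-0.5 * ((d[:, None] - d[None, :]) / 2.0) ** 2) + 1e-2 * np.eye(K)
Kinv = np.linalg.inv(Kmat)

W = np.zeros((N, K))
Y = np.eye(K)[stim]
for _ in range(300):                      # MAP fit by gradient descent
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)
    grad = X.T @ (P - Y) / T + 0.01 * (W @ Kinv)   # likelihood + prior terms
    W -= 0.5 * grad

acc = (np.argmax(X @ W, axis=1) == stim).mean()   # training-set accuracy
```

Chance accuracy here is 1/8, so a well-fit smooth decoder should land far above it.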


Subject(s)
Ferrets , Neurons , Animals , Mice , Bayes Theorem , Neurons/physiology
6.
Neural Comput ; 35(6): 995-1027, 2023 05 12.
Article in English | MEDLINE | ID: mdl-37037043

ABSTRACT

An important problem in systems neuroscience is to characterize how a neuron integrates sensory inputs across space and time. The linear receptive field provides a mathematical characterization of this weighting function and is commonly used to quantify neural response properties and classify cell types. However, estimating receptive fields is difficult in settings with limited data and correlated or high-dimensional stimuli. To overcome these difficulties, we propose a hierarchical model designed to flexibly parameterize low-rank receptive fields. The model includes gaussian process priors over spatial and temporal components of the receptive field, encouraging smoothness in space and time. We also propose a new temporal prior, temporal relevance determination, which imposes a variable degree of smoothness as a function of time lag. We derive a scalable algorithm for variational Bayesian inference for both spatial and temporal receptive field components and hyperparameters. The resulting estimator scales to high-dimensional settings in which full-rank maximum likelihood or maximum a posteriori estimates are intractable. We evaluate our approach on neural data from rat retina and primate cortex and show that it substantially outperforms a variety of existing estimators. Our modeling approach will have useful extensions to a variety of other high-dimensional inference problems with smooth or low-rank structure.
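A minimal sketch of the low-rank receptive-field idea, with alternating least squares standing in for the paper's variational inference and GP priors (the factors, sizes, and noise level are invented): a rank-1 spatiotemporal filter k = u vᵀ is recovered from simulated white-noise responses by alternately solving for the temporal and spatial factors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth rank-1 receptive field: smooth temporal and spatial factors.
L, D, T = 12, 20, 3000
u_true = np.exp(-0.5 * ((np.arange(L) - 4) / 1.5) ** 2) * np.sin(np.arange(L))
v_true = np.exp(-0.5 * ((np.arange(D) - 10) / 2.0) ** 2)
k_true = np.outer(u_true, v_true)

stim = rng.standard_normal((T + L, D))
S = np.stack([stim[L - l : T + L - l] for l in range(L)], axis=1)  # T x L x D
y = np.einsum('tld,ld->t', S, k_true) + 0.5 * rng.standard_normal(T)

# Alternating least squares on the two factors: each subproblem is an
# ordinary least-squares fit because the filter is bilinear in (u, v).
u = rng.standard_normal(L)
v = rng.standard_normal(D)
for _ in range(20):
    A = np.einsum('tld,d->tl', S, v)        # fix v, solve for temporal factor
    u = np.linalg.lstsq(A, y, rcond=None)[0]
    B = np.einsum('tld,l->td', S, u)        # fix u, solve for spatial factor
    v = np.linalg.lstsq(B, y, rcond=None)[0]

k_hat = np.outer(u, v)
corr = np.corrcoef(k_hat.ravel(), k_true.ravel())[0, 1]
```

The rank-1 parameterization needs only L + D numbers instead of L × D, which is why low-rank estimators remain tractable where full-rank ones do not.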


Subject(s)
Neurons , Retina , Animals , Rats , Bayes Theorem , Neurons/physiology , Algorithms
7.
PLoS Comput Biol ; 18(9): e1010421, 2022 09.
Article in English | MEDLINE | ID: mdl-36170268

ABSTRACT

Imaging neural activity in a behaving animal presents unique challenges in part because motion from an animal's movement creates artifacts in fluorescence intensity time-series that are difficult to distinguish from neural signals of interest. One approach to mitigating these artifacts is to image two channels simultaneously: one that captures an activity-dependent fluorophore, such as GCaMP, and another that captures an activity-independent fluorophore such as RFP. Because the activity-independent channel contains the same motion artifacts as the activity-dependent channel, but no neural signals, the two together can be used to identify and remove the artifacts. However, existing approaches for this correction, such as taking the ratio of the two channels, do not account for channel-independent noise in the measured fluorescence. Here, we present Two-channel Motion Artifact Correction (TMAC), a method which seeks to remove artifacts by specifying a generative model of the two channel fluorescence that incorporates motion artifact, neural activity, and noise. We use Bayesian inference to infer latent neural activity under this model, thus reducing the motion artifact present in the measured fluorescence traces. We further present a novel method for evaluating ground-truth performance of motion correction algorithms by comparing the decodability of behavior from two types of neural recordings; a recording that had both an activity-dependent fluorophore and an activity-independent fluorophore (GCaMP and RFP) and a recording where both fluorophores were activity-independent (GFP and RFP). A successful motion correction method should decode behavior from the first type of recording, but not the second. We use this metric to systematically compare five models for removing motion artifacts from fluorescent time traces. Using TMAC-inferred activity, we decode locomotion from a GCaMP-expressing animal 20x more accurately on average than from control recordings, outperforming all other motion-correction methods tested, the best of which were ~8x more accurate than control.
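For orientation, the simplest two-channel correction in this space can be sketched as a regression of the activity-dependent channel on the activity-independent one; all simulation parameters below are invented, and this is a baseline, not TMAC's generative-model inference.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated traces: the green (GCaMP-like) channel mixes neural activity with
# a shared motion artifact; the red (RFP-like) channel sees only the artifact.
T = 5000
activity = np.convolve(rng.poisson(0.05, T), np.exp(-np.arange(50) / 10))[:T]
motion = np.cumsum(rng.standard_normal(T)) * 0.05   # slow wandering artifact
green = activity + 1.0 * motion + 0.1 * rng.standard_normal(T)
red = 0.8 * motion + 0.1 * rng.standard_normal(T)

# Regress green on red and keep the residual as the corrected trace.
A = np.column_stack([red, np.ones(T)])
beta = np.linalg.lstsq(A, green, rcond=None)[0]
corrected = green - A @ beta

r_raw = np.corrcoef(green, activity)[0, 1]        # before correction
r_corr = np.corrcoef(corrected, activity)[0, 1]   # after correction
```

Because the red channel's own measurement noise leaks into the residual, this regression baseline degrades as channel-independent noise grows, which is the shortcoming the abstract's generative model is designed to address.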


Subject(s)
Algorithms , Artifacts , Animals , Bayes Theorem , Motion , Movement
8.
Neural Comput ; 34(9): 1871-1892, 2022 08 16.
Article in English | MEDLINE | ID: mdl-35896161

ABSTRACT

A large body of work has suggested that neural populations exhibit low-dimensional dynamics during behavior. However, there are a variety of different approaches for modeling low-dimensional neural population activity. One approach involves latent linear dynamical system (LDS) models, in which population activity is described by a projection of low-dimensional latent variables with linear dynamics. A second approach involves low-rank recurrent neural networks (RNNs), in which population activity arises directly from a low-dimensional projection of past activity. Although these two modeling approaches have strong similarities, they arise in different contexts and tend to have different domains of application. Here we examine the precise relationship between latent LDS models and linear low-rank RNNs. When can one model class be converted to the other, and vice versa? We show that latent LDS models can only be converted to RNNs in specific limit cases, due to the non-Markovian property of latent LDS models. Conversely, we show that linear RNNs can be mapped onto LDS models, with latent dimensionality at most twice the rank of the RNN. A surprising consequence of our results is that a partially observed RNN is better represented by an LDS model than by an RNN consisting of only observed units.
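The RNN-to-LDS direction can be verified numerically in the noiseless, fully observed special case, where a latent z_t = Vᵀx_t of dimension equal to the rank already suffices (the abstract's bound of twice the rank covers the general, noisy and partially observed case):

```python
import numpy as np

rng = np.random.default_rng(4)

# A linear low-rank RNN: x_{t+1} = U V^T x_t, with N units and rank r.
N, r, T = 50, 3, 20
U = rng.standard_normal((N, r)) / np.sqrt(N)
V = rng.standard_normal((N, r))

x = rng.standard_normal(N)
xs = [x]
for _ in range(T):
    x = U @ (V.T @ x)
    xs.append(x)
xs = np.array(xs)

# Equivalent latent LDS: z_t = V^T x_t obeys z_{t+1} = (V^T U) z_t, and
# x_{t+1} = U z_t, so the N-dimensional trajectory is reproduced from an
# r-dimensional state.
A = V.T @ U
z = V.T @ xs[0]
xs_lds = [xs[0]]
for _ in range(T):
    xs_lds.append(U @ z)
    z = A @ z
xs_lds = np.array(xs_lds)

rel_err = np.max(np.abs(xs - xs_lds)) / np.max(np.abs(xs))
```

The trajectories agree to machine precision, confirming that the low-dimensional latent carries all the information needed to regenerate the full population state.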


Subject(s)
Neural Networks, Computer , Linear Models
9.
Nature ; 535(7611): 285-8, 2016 07 14.
Article in English | MEDLINE | ID: mdl-27376476

ABSTRACT

During decision making, neurons in multiple brain regions exhibit responses that are correlated with decisions. However, it remains uncertain whether or not various forms of decision-related activity are causally related to decision making. Here we address this question by recording and reversibly inactivating the lateral intraparietal (LIP) and middle temporal (MT) areas of rhesus macaques performing a motion direction discrimination task. Neurons in area LIP exhibited firing rate patterns that directly resembled the evidence accumulation process posited to govern decision making, with strong correlations between their response fluctuations and the animal's choices. Neurons in area MT, in contrast, exhibited weak correlations between their response fluctuations and choices, and had firing rate patterns consistent with their sensory role in motion encoding. The behavioural impact of pharmacological inactivation of each area was inversely related to their degree of decision-related activity: while inactivation of neurons in MT profoundly impaired psychophysical performance, inactivation in LIP had no measurable impact on decision-making performance, despite having silenced the very clusters that exhibited strong decision-related activity. Although LIP inactivation did not impair psychophysical behaviour, it did influence spatial selection and oculomotor metrics in a free-choice control task. The absence of an effect on perceptual decision making was stable over trials and sessions and was robust to changes in stimulus type and task geometry, arguing against several forms of compensation. Thus, decision-related signals in LIP do not appear to be critical for computing perceptual decisions, and may instead reflect secondary processes. Our findings highlight a dissociation between decision correlation and causation, showing that strong neuron-decision correlations do not necessarily offer direct access to the neural computations underlying decisions.


Subject(s)
Decision Making/physiology , Macaca mulatta/anatomy & histology , Macaca mulatta/physiology , Models, Neurological , Animals , Choice Behavior/physiology , Discrimination, Psychological , Eye Movements/physiology , Female , Macaca mulatta/psychology , Male , Motion Perception/physiology , Neurons/physiology , Parietal Lobe/cytology , Parietal Lobe/physiology , Photic Stimulation , Psychophysiology , Temporal Lobe/cytology , Temporal Lobe/physiology
10.
Neuroimage ; 245: 118580, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34740792

ABSTRACT

A key problem in functional magnetic resonance imaging (fMRI) is to estimate spatial activity patterns from noisy high-dimensional signals. Spatial smoothing provides one approach to regularizing such estimates. However, standard smoothing methods ignore the fact that correlations in neural activity may fall off at different rates in different brain areas, or exhibit discontinuities across anatomical or functional boundaries. Moreover, such methods do not exploit the fact that widely separated brain regions may exhibit strong correlations due to bilateral symmetry or the network organization of brain regions. To capture this non-stationary spatial correlation structure, we introduce the brain kernel, a continuous covariance function for whole-brain activity patterns. We define the brain kernel in terms of a continuous nonlinear mapping from 3D brain coordinates to a latent embedding space, parametrized with a Gaussian process (GP). The brain kernel specifies the prior covariance between voxels as a function of the distance between their locations in embedding space. The GP mapping warps the brain nonlinearly so that highly correlated voxels are close together in latent space, and uncorrelated voxels are far apart. We estimate the brain kernel using resting-state fMRI data, and we develop an exact, scalable inference method based on block coordinate descent to overcome the challenges of high dimensionality (10-100K voxels). Finally, we illustrate the brain kernel's usefulness with applications to brain decoding and factor analysis with multiple task-based fMRI datasets.
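A hedged sketch of the covariance construction described above, using a fixed hand-written embedding in place of the GP-parametrized map (coordinates and lengthscales are invented): covariance between voxels depends on distance in embedding space, so anatomically distant but functionally related voxels (here, mirror-symmetric ones sharing |x|) come out strongly correlated.

```python
import numpy as np

# Voxel coordinates on a small 3-D grid.
g = np.stack(np.meshgrid(*[np.linspace(-1, 1, 4)] * 3), -1).reshape(-1, 3)

# A fixed nonlinear embedding standing in for the learned GP map: voxels
# that land close together in embedding space get high prior covariance,
# regardless of their anatomical distance.
def embed(c):
    return np.column_stack([np.abs(c[:, 0]), c[:, 1] + 0.3 * c[:, 2] ** 2])

E = embed(g)
d2 = ((E[:, None, :] - E[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2 / 0.5 ** 2)        # "brain kernel"-style covariance

# Mirror-symmetric voxels (x and -x, same y, z) have identical embeddings,
# hence covariance 1 under this toy map.
i = int(np.argmax((g == np.array([-1.0, -1.0, -1.0])).all(axis=1)))
j = int(np.argmax((g == np.array([1.0, -1.0, -1.0])).all(axis=1)))
sym_cov = K[i, j]
eigmin = np.linalg.eigvalsh(K).min()    # kernel construction guarantees PSD
```

Because the covariance is built from a kernel in embedding space, it is positive semidefinite by construction, which is what lets it serve as a valid prior for decoding and factor analysis.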


Subject(s)
Brain Mapping/methods , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Humans , Imaging, Three-Dimensional
11.
PLoS Comput Biol ; 16(11): e1008261, 2020 11.
Article in English | MEDLINE | ID: mdl-33216741

ABSTRACT

An important problem in computational neuroscience is to understand how networks of spiking neurons can carry out various computations underlying behavior. Balanced spiking networks (BSNs) provide a powerful framework for implementing arbitrary linear dynamical systems in networks of integrate-and-fire neurons. However, the classic BSN model requires near-instantaneous transmission of spikes between neurons, which is biologically implausible. Introducing realistic synaptic delays leads to a pathological regime known as "ping-ponging", in which different populations spike maximally in alternating time bins, causing network output to overshoot the target solution. Here we document this phenomenon and provide a novel solution: we show that a network can have realistic synaptic delays while maintaining accuracy and stability if neurons are endowed with conditionally Poisson firing. Formally, we propose two alternate formulations of Poisson balanced spiking networks: (1) a "local" framework, which replaces the hard integrate-and-fire spiking rule within each neuron by a "soft" threshold function, such that firing probability grows as a smooth nonlinear function of membrane potential; and (2) a "population" framework, which reformulates the BSN objective function in terms of expected spike counts over the entire population. We show that both approaches offer improved robustness, allowing for accurate implementation of network dynamics with realistic synaptic delays between neurons. Both Poisson frameworks preserve the coding accuracy and robustness to neuron loss of the original model and, moreover, produce positive correlations between similarly tuned neurons, a feature of real neural populations that is not found in the deterministic BSN. This work unifies balanced spiking networks with Poisson generalized linear models and suggests several promising avenues for future research.
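The "local" framework's soft spiking rule can be sketched in a few lines; the parameter values below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def soft_spike_prob(V, thresh=1.0, beta=10.0, dt=1e-3, rmax=500.0):
    """Conditionally Poisson firing: the instantaneous rate is a smooth
    (sigmoidal) function of membrane potential rather than a hard threshold."""
    rate = rmax / (1.0 + np.exp(-beta * (V - thresh)))
    return 1.0 - np.exp(-rate * dt)        # spike probability in one time bin

# A hard integrate-and-fire rule would spike deterministically at V >= thresh;
# the soft rule interpolates, desynchronizing the populations that otherwise
# ping-pong when delays are introduced.
p_low, p_mid, p_high = (soft_spike_prob(v) for v in (0.5, 1.0, 1.5))
```

As beta grows, the sigmoid sharpens and the deterministic integrate-and-fire rule is recovered as a limit, which is the sense in which the Poisson model generalizes the classic BSN.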


Subject(s)
Action Potentials/physiology , Poisson Distribution , Computer Simulation , Models, Neurological , Neurons/physiology , Probability
12.
Proc Natl Acad Sci U S A ; 115(44): E10486-E10494, 2018 10 30.
Article in English | MEDLINE | ID: mdl-30322919

ABSTRACT

Much study of the visual system has focused on how humans and monkeys integrate moving stimuli over space and time. Such assessments of spatiotemporal integration provide fundamental grounding for the interpretation of neurophysiological data, as well as how the resulting neural signals support perceptual decisions and behavior. However, the insights supported by classical characterizations of integration performed in humans and rhesus monkeys are potentially limited with respect to both generality and detail: Standard tasks require extensive amounts of training, involve abstract stimulus-response mappings, and depend on combining data across many trials and/or sessions. It is thus of concern that the integration observed in classical tasks involves the recruitment of brain circuits that might not normally subsume natural behaviors, and that quantitative analyses have limited power for characterizing single-trial or single-session processes. Here we bridge these gaps by showing that three primate species (humans, macaques, and marmosets) track the focus of expansion of an optic flow field continuously and without substantial training. This flow-tracking behavior was volitional and reflected substantial temporal integration. Most strikingly, gaze patterns exhibited lawful and nuanced dependencies on random perturbations in the stimulus, such that repetitions of identical flow movies elicited remarkably similar eye movements over long and continuous time periods. These results demonstrate the generality of spatiotemporal integration in natural vision, and offer a means for studying integration outside of artificial tasks while maintaining lawful and highly reliable behavior.


Subject(s)
Callithrix/physiology , Eye Movements/physiology , Macaca mulatta/physiology , Motion Perception/physiology , Animals , Humans , Male , Photic Stimulation/methods , Young Adult
13.
J Neurophysiol ; 123(2): 682-694, 2020 02 01.
Article in English | MEDLINE | ID: mdl-31852399

ABSTRACT

Motion discrimination is a well-established model system for investigating how sensory signals are used to form perceptual decisions. Classic studies relating single-neuron activity in the middle temporal area (MT) to perceptual decisions have suggested that a simple linear readout could underlie motion discrimination behavior. A theoretically optimal readout, in contrast, would take into account the correlations between neurons and the sensitivity of individual neurons at each time point. However, it remains unknown how sophisticated the readout needs to be to support actual motion-discrimination behavior or to approach optimal performance. In this study, we evaluated the performance of various neurally plausible decoders, trained to discriminate motion direction from small ensembles of simultaneously recorded MT neurons. We found that decoding the stimulus without knowledge of the interneuronal correlations was sufficient to match an optimal (correlation aware) decoder. Additionally, a decoder could match the psychophysical performance of the animals with flat integration of up to half the stimulus and inherited temporal dynamics from the time-varying MT responses. These results demonstrate that simple, linear decoders operating on small ensembles of neurons can match both psychophysical performance and optimal sensitivity without taking correlations into account and that such simple read-out mechanisms can exhibit complex temporal properties inherited from the sensory dynamics themselves.

NEW & NOTEWORTHY Motion perception depends on the ability to decode the activity of neurons in the middle temporal area. Theoretically optimal decoding requires knowledge of the sensitivity of neurons and interneuronal correlations. We report that a simple correlation-blind decoder performs as well as the optimal decoder for coarse motion discrimination. Additionally, the decoder could match the animals' psychophysical performance with moderate temporal integration and dynamics inherited from sensory responses.


Subject(s)
Discrimination, Psychological/physiology , Electrophysiological Phenomena/physiology , Models, Biological , Motion Perception/physiology , Neurons/physiology , Neurophysiology/methods , Temporal Lobe/physiology , Animals , Behavior, Animal/physiology , Decision Making , Female , Macaca mulatta , Male , Pattern Recognition, Visual/physiology , Space Perception/physiology
14.
Nat Methods ; 14(4): 420-426, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28319111

ABSTRACT

Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large-scale recording of neural activity in vivo. Here, we introduce volumetric two-photon imaging of neurons using stereoscopy (vTwINS), a volumetric calcium imaging method that uses an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced 'image pairs' in the resulting 2D image, and the separation distance between projections is proportional to depth in the volume. To demix the fluorescence time series of individual neurons, we introduce a modified orthogonal matching pursuit algorithm that also infers source locations within the 3D volume. We illustrated vTwINS by imaging neural population activity in the mouse primary visual cortex and hippocampus. Our results demonstrated that vTwINS provides an effective method for volumetric two-photon calcium imaging that increases the number of neurons recorded while maintaining a high frame rate.


Subject(s)
Imaging, Three-Dimensional/methods , Microscopy, Fluorescence, Multiphoton/methods , Neurons/physiology , Visual Cortex/cytology , Algorithms , Animals , Calcium/analysis , Calcium/metabolism , Female , Hippocampus/cytology , Hippocampus/physiology , Male , Mice, Transgenic , Microscopy, Confocal/instrumentation , Microscopy, Confocal/methods , Microscopy, Fluorescence, Multiphoton/instrumentation , Molecular Imaging/methods , Visual Cortex/physiology
15.
PLoS Comput Biol ; 15(5): e1006299, 2019 05.
Article in English | MEDLINE | ID: mdl-31125335

ABSTRACT

The activity of neural populations in the brains of humans and animals can exhibit vastly different spatial patterns when faced with different tasks or environmental stimuli. The degrees of similarity between these neural activity patterns in response to different events are used to characterize the representational structure of cognitive states in a neural population. The dominant methods of investigating this similarity structure first estimate neural activity patterns from noisy neural imaging data using linear regression, and then examine the similarity between the estimated patterns. Here, we show that this approach introduces spurious bias structure in the resulting similarity matrix, in particular when applied to fMRI data. This problem is especially severe when the signal-to-noise ratio is low and in cases where experimental conditions cannot be fully randomized in a task. We propose Bayesian Representational Similarity Analysis (BRSA), an alternative method for computing representational similarity, in which we treat the covariance structure of neural activity patterns as a hyper-parameter in a generative model of the neural data. By marginalizing over the unknown activity patterns, we can directly estimate this covariance structure from imaging data. This method offers significant reductions in bias and allows estimation of neural representational similarity with previously unattained levels of precision at low signal-to-noise ratio, without losing the possibility of deriving an interpretable distance measure from the estimated similarity. The method is closely related to Pattern Component Model (PCM), but instead of modeling the estimated neural patterns as in PCM, BRSA models the imaging data directly and is suited for analyzing data in which the order of task conditions is not fully counterbalanced. The probabilistic framework allows for jointly analyzing data from a group of participants. 
The method can also simultaneously estimate a signal-to-noise ratio map that shows where the learned representational structure is supported more strongly. Both this map and the learned covariance matrix can be used as a structured prior for maximum a posteriori estimation of neural activity patterns, which can be further used for fMRI decoding. Our method therefore paves the way towards a more unified and principled analysis of neural representations underlying fMRI signals. We make our tool freely available in Brain Imaging Analysis Kit (BrainIAK).


Subject(s)
Brain Mapping/methods , Image Processing, Computer-Assisted/methods , Adult , Algorithms , Bayes Theorem , Bias , Brain/physiology , Female , Humans , Linear Models , Magnetic Resonance Imaging , Male , Models, Neurological , Neurons , Photic Stimulation
17.
J Physiol ; 602(9): 1921, 2024 May.
Article in English | MEDLINE | ID: mdl-38628075
18.
Neural Comput ; 30(4): 1012-1045, 2018 04.
Article in English | MEDLINE | ID: mdl-29381442

ABSTRACT

Neurons in many brain areas exhibit high trial-to-trial variability, with spike counts that are overdispersed relative to a Poisson distribution. Recent work (Goris, Movshon, & Simoncelli, 2014) has proposed to explain this variability in terms of a multiplicative interaction between a stochastic gain variable and a stimulus-dependent Poisson firing rate, which produces quadratic relationships between spike count mean and variance. Here we examine this quadratic assumption and propose a more flexible family of models that can account for a more diverse set of mean-variance relationships. Our model contains additive gaussian noise that is transformed nonlinearly to produce a Poisson spike rate. Different choices of the nonlinear function can give rise to qualitatively different mean-variance relationships, ranging from sublinear to linear to quadratic. Intriguingly, a rectified squaring nonlinearity produces a linear mean-variance function, corresponding to responses with a constant Fano factor. We describe a computationally efficient method for fitting this model to data and demonstrate that a majority of neurons in a V1 population are better described by a model with a nonquadratic relationship between mean and variance. Finally, we demonstrate a practical use of our model via an application to Bayesian adaptive stimulus selection in closed-loop neurophysiology experiments, which shows that accounting for overdispersion can lead to dramatic improvements in adaptive tuning curve estimation.
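The claim that a squaring nonlinearity yields a linear mean-variance relationship can be checked with Gaussian moments; this is a worked special case that ignores rectification, which is negligible when the mean input sits well above zero:

```python
import numpy as np

# Spike count model: y | x ~ Poisson(f(x)) with x ~ N(mu, s2). By the law of
# total variance, Var(y) = E[f(x)] + Var(f(x)). For f(x) = x^2, Gaussian
# moments give E[f] = mu^2 + s2 and Var(f) = 4 mu^2 s2 + 2 s2^2, so
# Var(y) = (1 + 4 s2) * E[y] - 2 s2^2: linear in the mean, i.e. an
# asymptotically constant Fano factor of 1 + 4 s2.
s2 = 0.25
mus = np.array([2.0, 4.0, 8.0])
mean_y = mus**2 + s2
var_y = mean_y + 4 * mus**2 * s2 + 2 * s2**2

# Fitted slope of variance against mean should be exactly 1 + 4 s2 = 2.
slope = np.polyfit(mean_y, var_y, 1)[0]
```

The same law-of-total-variance calculation with other nonlinearities (exponential, rectified linear) produces the quadratic and sublinear regimes the abstract mentions.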


Subject(s)
Action Potentials/physiology , Brain/cytology , Models, Neurological , Neurons/physiology , Algorithms , Humans , Normal Distribution , Stochastic Processes
19.
J Vis ; 18(12): 4, 2018 11 01.
Article in English | MEDLINE | ID: mdl-30458512

ABSTRACT

Psychometric functions (PFs) quantify how external stimuli affect behavior, and they play an important role in building models of sensory and cognitive processes. Adaptive stimulus-selection methods seek to select stimuli that are maximally informative about the PF given data observed so far in an experiment and thereby reduce the number of trials required to estimate the PF. Here we develop new adaptive stimulus-selection methods for flexible PF models in tasks with two or more alternatives. We model the PF with a multinomial logistic regression mixture model that incorporates realistic aspects of psychophysical behavior, including lapses and multiple alternatives for the response. We propose an information-theoretic criterion for stimulus selection and develop computationally efficient methods for inference and stimulus selection based on adaptive Markov-chain Monte Carlo sampling. We apply these methods to data from macaque monkeys performing a multi-alternative motion-discrimination task and show in simulated experiments that our method can achieve a substantial speed-up over random designs. These advances will reduce the amount of data needed to build accurate models of multi-alternative PFs and can be extended to high-dimensional PFs that would be infeasible to characterize with standard methods.
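A hedged, minimal version of the infomax selection idea, reduced to a two-alternative logistic psychometric function with a grid posterior over a single threshold parameter (the paper's method handles multinomial responses, lapses, and MCMC inference; all numbers here are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

a_grid = np.linspace(-3, 3, 61)          # candidate threshold values
post = np.ones_like(a_grid) / a_grid.size
stims = np.linspace(-4, 4, 41)           # candidate stimulus intensities
a_true = 1.2                             # simulated observer's true threshold

def p_yes(stim, a):
    return 1.0 / (1.0 + np.exp(-(stim - a)))

def entropy(p):
    p = np.clip(p, 1e-12, None)
    return -(p * np.log(p)).sum()

for _ in range(40):
    # expected posterior entropy if each candidate stimulus were shown next
    exp_H = []
    for s in stims:
        py = p_yes(s, a_grid)
        m = (post * py).sum()                      # predictive P("yes")
        post_yes = post * py
        post_yes /= post_yes.sum()
        post_no = post * (1 - py)
        post_no /= post_no.sum()
        exp_H.append(m * entropy(post_yes) + (1 - m) * entropy(post_no))
    s = stims[int(np.argmin(exp_H))]               # infomax stimulus choice
    y = rng.random() < p_yes(s, a_true)            # simulated response
    post = post * (p_yes(s, a_grid) if y else 1 - p_yes(s, a_grid))
    post /= post.sum()

a_hat = a_grid[np.argmax(post)]
```

Each trial is placed where it most reduces expected posterior entropy, so the posterior concentrates near the true threshold with far fewer trials than a random design would need.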


Subject(s)
Models, Psychological , Motion Perception/physiology , Psychometrics , Algorithms , Animals , Generalization, Stimulus , Logistic Models , Macaca , Markov Chains , Monte Carlo Method , Psychophysics
20.
J Vis ; 18(3): 23, 2018 03 01.
Article in English | MEDLINE | ID: mdl-29677339

ABSTRACT

People make surprising but reliable perceptual errors. Here, we provide a unified explanation for systematic errors in the perception of three-dimensional (3-D) motion. To do so, we characterized the binocular retinal motion signals produced by objects moving through arbitrary locations in 3-D. Next, we developed a Bayesian model, treating 3-D motion perception as optimal inference given sensory noise in the measurement of retinal motion. The model predicts a set of systematic perceptual errors, which depend on stimulus distance, contrast, and eccentricity. We then used a virtual-reality headset as well as a standard 3-D desktop stereoscopic display to test these predictions in a series of perceptual experiments. As predicted, we found evidence that errors in 3-D motion perception depend on the contrast, viewing distance, and eccentricity of a stimulus. These errors include a lateral bias in perceived motion direction and a surprising tendency to misreport approaching motion as receding and vice versa. In sum, we present a Bayesian model that provides a parsimonious account for a range of systematic misperceptions of motion in naturalistic environments.


Subject(s)
Bayes Theorem , Motion Perception/physiology , Retina/physiology , Vision, Binocular/physiology , Adult , Female , Humans , Imaging, Three-Dimensional , Male , Young Adult