Results 1 - 7 of 7
1.
J Neurosci ; 42(50): 9343-9355, 2022 Dec 14.
Article in English | MEDLINE | ID: mdl-36396403

ABSTRACT

The Pearson correlation coefficient squared, r², is an important tool in the analysis of neural data, used to quantify the similarity between neural tuning curves. Yet this metric is biased by trial-to-trial variability: as trial-to-trial variability increases, the measured correlation decreases. Major lines of research are confounded by this bias, including the study of the invariance of neural tuning across conditions and the analysis of the similarity of tuning across neurons. To address this, we extend an estimator that was recently developed for model-to-neuron correlation, in which a noisy signal is compared with a noise-free prediction, to the case of neuron-to-neuron correlation, in which two noisy signals are compared with each other. We compare the performance of our novel estimator to a prior correction developed by Spearman, commonly used in other fields but widely overlooked in neuroscience, and find that our method has less bias. We then apply our estimator to data collected in two prior studies (macaque, both sexes) that examined two different forms of invariance in the neural encoding of visual inputs, translation invariance and fill-outline invariance, and demonstrate how it avoids drastic confounds introduced by trial-to-trial variability. Our results quantify for the first time the gradual falloff with spatial offset of translation-invariant shape selectivity within visual cortical neuronal receptive fields and offer a principled method to compare invariance in noisy biological systems to that in noise-free models.

SIGNIFICANCE STATEMENT Quantifying the similarity between two sets of averaged neural responses is fundamental to the analysis of neural data. A ubiquitous metric of similarity, the correlation coefficient, is attenuated by trial-to-trial variability that arises from many irrelevant factors. Spearman recognized this problem over a century ago and proposed a correction that has since been extended, but we show that this corrected approach can have large asymptotic biases, which our novel estimator overcomes. Despite the frequent use of the correlation coefficient in neuroscience, no consensus has been reached on how to address this fundamental statistical issue. We provide an accurate estimator of the correlation coefficient and apply it to gain insight into visual invariance.
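As background for the comparison above, the sketch below (Python, synthetic data, not the paper's estimator or data) illustrates how trial-to-trial variability attenuates the naive r² between two tuning curves and how a Spearman-style attenuation correction, here a split-half variant with the Spearman-Brown adjustment, counteracts it.

    import numpy as np

    rng = np.random.default_rng(0)
    n_stim, n_trials = 50, 10

    # Two neurons whose true tuning curves are correlated (true r^2 ~ 0.81)
    signal = rng.normal(size=n_stim)
    tuning_a = signal
    tuning_b = 0.9 * signal + np.sqrt(1 - 0.9**2) * rng.normal(size=n_stim)

    # Trial-to-trial noise attenuates the measured correlation
    resp_a = tuning_a[:, None] + rng.normal(scale=1.5, size=(n_stim, n_trials))
    resp_b = tuning_b[:, None] + rng.normal(scale=1.5, size=(n_stim, n_trials))

    def r2(x, y):
        return np.corrcoef(x, y)[0, 1] ** 2

    naive = r2(resp_a.mean(axis=1), resp_b.mean(axis=1))     # biased downward

    def reliability(resp):
        # Split-half reliability, Spearman-Brown adjusted to the full trial count
        r_half = np.corrcoef(resp[:, ::2].mean(axis=1),
                             resp[:, 1::2].mean(axis=1))[0, 1]
        return 2 * r_half / (1 + r_half)

    # Spearman-style attenuation correction applied to r^2
    corrected = naive / (reliability(resp_a) * reliability(resp_b))
    print(f"true r^2 ~ 0.81   naive {naive:.2f}   corrected {corrected:.2f}")

Because the reliabilities in the denominator are themselves noisy, the corrected value can overshoot and even exceed 1 in finite samples, which is the general kind of residual bias discussed in the abstract.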


Subject(s)
Visual Cortex; Male; Female; Animals; Visual Cortex/physiology; Neurons/physiology; Visual Fields; Bias; Models, Neurological
2.
J Neurosci ; 41(26): 5638-5651, 2021 Jun 30.
Article in English | MEDLINE | ID: mdl-34001625

ABSTRACT

Signal correlation (r_s) is commonly defined as the correlation between the tuning curves of two neurons and is widely used as a metric of tuning similarity. It is fundamental to how populations of neurons represent stimuli and has been central to many studies of neural coding. Yet the classic estimate, the Pearson correlation coefficient between the average responses of two neurons to a set of stimuli, suffers from confounding biases: it can be biased downward by trial-to-trial variability and biased upward by trial-to-trial correlation between neurons, and these biases can hide important aspects of neural coding. Here we provide analytic results on the source of these biases and explore them over ranges of parameters that are relevant for electrophysiological experiments. We then provide corrections for these biases that we validate in simulation. Furthermore, we apply these corrected estimators to make the following novel experimental observation in cortical area MT: pairs of nearby neurons that are strongly tuned for motion direction tend to have high signal correlation, and pairs that are weakly tuned tend to have low signal correlation. We rule out a trivial explanation for this and find that an analogous trend holds for orientation tuning in the primary visual cortex. We also consider the potential consequences for encoding, whereby the association between signal correlation and tuning strength naturally regularizes the dimensionality of downstream computations.

SIGNIFICANCE STATEMENT Fundamental to how cortical neurons encode information about the environment is their functional similarity, that is, the redundancy in what they encode and their shared noise. These properties have been studied extensively throughout the nervous system, both theoretically and experimentally, but here we show that a common estimator of functional similarity suffers from confounding biases. We characterize these biases and provide estimators that do not suffer from them. Using our improved estimators, we demonstrate a novel result: there is a positive relationship between tuning-curve similarity and tuning amplitude for nearby neurons in the visual cortical motion area MT. We provide a simple stochastic model explaining this relationship and discuss how it would naturally regularize the dimensionality of neural encoding.
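A minimal simulation (Python, toy numbers, not from the paper) of the two biases described above: independent trial noise drags the naive signal-correlation estimate below its true value, while noise that is correlated between the two neurons pushes the estimate back up.

    import numpy as np

    rng = np.random.default_rng(1)
    n_stim, n_trials, true_rs = 40, 8, 0.5

    # Two tuning curves with a known (true) signal correlation of 0.5
    tuning = rng.multivariate_normal(
        np.zeros(2), [[1.0, true_rs], [true_rs, 1.0]], size=n_stim)

    def naive_signal_corr(noise_sd, noise_corr):
        # Trial-to-trial noise, optionally correlated between the two neurons
        noise_cov = noise_sd**2 * np.array([[1.0, noise_corr],
                                            [noise_corr, 1.0]])
        noise = rng.multivariate_normal(np.zeros(2), noise_cov,
                                        size=(n_stim, n_trials))
        means = (tuning[:, None, :] + noise).mean(axis=1)  # trial-averaged curves
        return np.corrcoef(means[:, 0], means[:, 1])[0, 1]

    print("true signal correlation:              ", true_rs)
    print("independent noise (downward bias):    ", naive_signal_corr(2.0, 0.0))
    print("correlated noise (pushed back upward):", naive_signal_corr(2.0, 0.6))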


Subject(s)
Models, Neurological; Neurons/physiology; Visual Cortex/physiology; Animals; Bias; Humans; Macaca mulatta; Motion Perception/physiology
3.
PLoS Comput Biol ; 17(8): e1009212, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34347786

ABSTRACT

The correlation coefficient squared, r², is commonly used to validate quantitative models on neural data, yet it is biased by trial-to-trial variability: as trial-to-trial variability increases, the measured correlation with a model's predictions decreases. As a result, models that perfectly explain neural tuning can appear to perform poorly. Many solutions to this problem have been proposed, but no consensus has been reached on which is the least biased estimator. Some currently used methods substantially overestimate model fit, and the utility of even the best-performing methods is limited by the lack of confidence intervals and asymptotic analysis. We provide a new estimator that outperforms all prior estimators in our testing, together with confidence intervals and asymptotic guarantees, and we apply it to a variety of neural data to validate its utility. We find that neural noise is often so great that the confidence intervals of the estimator cover the entire possible range of values ([0, 1]), preventing meaningful evaluation of the quality of a model's predictions. This leads us to propose the signal-to-noise ratio (SNR) as a quality metric for making quantitative comparisons across neural recordings. Analyzing a variety of neural data sets, we find that up to ∼40% of some state-of-the-art neural recordings do not pass even a liberal SNR criterion. Moving toward more reliable estimates of correlation, and quantitatively comparing quality across recording modalities and data sets, will be critical to accelerating progress in modeling biological phenomena.
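For illustration, one common way to estimate a per-neuron SNR is as signal variance across stimuli divided by trial-to-trial noise variance, with a small correction for the sampling noise of the trial means. The sketch below is a generic ANOVA-style estimate of this kind; the paper's exact SNR definition and criterion may differ.

    import numpy as np

    def snr_estimate(resp):
        # resp: (n_stim, n_trials) array of responses for one neuron.
        # Noise variance: average within-stimulus (trial-to-trial) variance.
        # Signal variance: variance of trial means across stimuli, minus the
        # part attributable to the sampling noise of those means.
        n_stim, n_trials = resp.shape
        noise_var = resp.var(axis=1, ddof=1).mean()
        var_of_means = resp.mean(axis=1).var(ddof=1)
        signal_var = max(var_of_means - noise_var / n_trials, 0.0)
        return signal_var / noise_var

    # Illustrative synthetic neuron: weak tuning with unit-variance trial noise,
    # so the true SNR is about 0.25.
    rng = np.random.default_rng(2)
    tuning = rng.normal(scale=0.5, size=60)
    resp = tuning[:, None] + rng.normal(scale=1.0, size=(60, 5))
    print(f"estimated SNR: {snr_estimate(resp):.2f}")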


Subject(s)
Models, Neurological; Analysis of Variance; Animals; Bias; Computational Biology; Computer Simulation; Confidence Intervals; Databases, Factual/statistics & numerical data; Humans; Motion Perception/physiology; Neurons/physiology; Signal-To-Noise Ratio; Visual Cortex/physiology; Visual Perception/physiology
4.
bioRxiv ; 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-37961285

ABSTRACT

A long-standing goal of neuroscience is to obtain a causal model of the nervous system, which would allow neuroscientists to explain animal behavior in terms of the dynamic interactions between neurons. The recently reported whole-brain fly connectome [1-7] specifies the synaptic paths by which neurons can affect each other, but not whether, or how, they do affect each other in vivo. To overcome this limitation, we introduce a novel combined experimental and statistical strategy for efficiently learning a causal model of the fly brain, which we refer to as the "effectome". Specifically, we propose an estimator for a dynamical systems model of the fly brain that uses stochastic optogenetic perturbation data to accurately estimate causal effects, with the connectome as a prior to drastically improve estimation efficiency. We then analyze the connectome to propose circuits that have the greatest total effect on the dynamics of the fly nervous system. We find that, fortunately, the dominant circuits involve only relatively small populations of neurons, so that imaging, stimulation, and neuronal identification are feasible. Intriguingly, this approach also rediscovers known circuits and generates testable hypotheses about their dynamics. Overall, our analyses of the connectome provide evidence that the global dynamics of the fly brain are generated by a large collection of small, often anatomically localized circuits operating largely independently of each other. This in turn implies that a causal model of a brain, a principal goal of systems neuroscience, can feasibly be obtained in the fly.
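The toy sketch below (Python, simulated data, not the paper's estimator) conveys the general idea: fit a linear dynamical system from responses to stochastic perturbations, using a connectome-derived sparsity pattern as a prior by penalizing weights for absent synapses far more heavily than weights for known synapses.

    import numpy as np

    rng = np.random.default_rng(3)
    n, T = 20, 2000

    # Toy "connectome": a sparse synaptic weight matrix assumed (here) to share
    # its sparsity pattern with the true causal weights A_true.
    connectome = rng.normal(size=(n, n)) * (rng.random((n, n)) < 0.1)
    A_true = 0.8 * connectome / max(1e-9, np.abs(np.linalg.eigvals(connectome)).max())

    # Simulate linear dynamics driven by random ("optogenetic") perturbations U
    U = rng.normal(size=(T, n))
    X = np.zeros((T + 1, n))
    for t in range(T):
        X[t + 1] = A_true @ X[t] + U[t] + 0.1 * rng.normal(size=n)

    # Regress x_{t+1} (minus the known input) on x_t, shrinking each weight
    # toward zero much more strongly where the connectome reports no synapse.
    Y, Z = X[1:] - U, X[:-1]
    lam_present, lam_absent = 1.0, 100.0
    A_hat = np.zeros((n, n))
    for i in range(n):
        lam = np.where(connectome[i] != 0, lam_present, lam_absent)
        A_hat[i] = np.linalg.solve(Z.T @ Z + np.diag(lam), Z.T @ Y[:, i])

    # Baseline: an ordinary ridge fit that ignores the connectome
    A_flat = np.linalg.solve(Z.T @ Z + lam_present * np.eye(n), Z.T @ Y).T
    print("error with connectome prior:", np.linalg.norm(A_hat - A_true))
    print("error with flat ridge prior:", np.linalg.norm(A_flat - A_true))

In this toy setting both fits converge given enough perturbation data, but the connectome-informed penalty typically reduces error at a fixed amount of data, which is the kind of efficiency gain described above.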

5.
Elife ; 7, 2018 Dec 20.
Article in English | MEDLINE | ID: mdl-30570484

ABSTRACT

Deep networks provide a potentially rich interconnection between neuroscientific and artificial approaches to understanding visual intelligence, but the relationship between artificial and neural representations of complex visual form has not been elucidated at the level of single-unit selectivity. Taking the approach of an electrophysiologist to characterizing single CNN units, we found that many units exhibit translation-invariant boundary-curvature selectivity approaching that of exemplar neurons in the primate mid-level visual area V4. For some V4-like units, particularly in middle layers, the natural images that drove them best were qualitatively consistent with selectivity for object boundaries. Our results identify a novel image-computable model of V4 boundary-curvature selectivity and suggest that such a representation may begin to emerge within an artificial network trained for image categorization, even though boundary information was not provided during training. This raises the possibility that single-unit selectivity in CNNs will become a guide to understanding sensory cortex.
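As a rough illustration of this "electrophysiologist's approach" to a single CNN unit (an assumed recipe, not the paper's code), the sketch below records one convolutional unit's responses to the same stimulus set presented at two positions and correlates the two tuning curves. The layer and unit indices and the random placeholder stimuli are arbitrary; it requires torchvision (>= 0.13) with downloadable AlexNet weights.

    import numpy as np
    import torch
    import torchvision.models as models

    weights = models.AlexNet_Weights.DEFAULT
    model = models.alexnet(weights=weights).eval()
    layer = model.features[8]            # a middle conv layer (arbitrary choice)

    acts = {}
    layer.register_forward_hook(lambda mod, inp, out: acts.update(out=out.detach()))

    def unit_tuning(stimuli, channel, row, col):
        # Responses of a single unit (channel, row, col) to a stack of stimuli
        with torch.no_grad():
            model(stimuli)
        return acts["out"][:, channel, row, col].numpy()

    # Placeholder stimuli: the same 40 "shapes" would be rendered at two spatial
    # offsets; random tensors stand in here (real images should be preprocessed
    # with weights.transforms()).
    stimuli_pos1 = torch.rand(40, 3, 224, 224)
    stimuli_pos2 = torch.rand(40, 3, 224, 224)

    t1 = unit_tuning(stimuli_pos1, channel=10, row=6, col=6)
    t2 = unit_tuning(stimuli_pos2, channel=10, row=6, col=6)
    print("tuning correlation across positions:", np.corrcoef(t1, t2)[0, 1])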


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Animals; Computer Simulation; Macaca mulatta; Neurons/physiology; Photic Stimulation
6.
Psychol Aging ; 32(7): 675-680, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28956940

ABSTRACT

Young and older adults studied a list of words and then took two successive tests of item recognition: an easy test consisting of studied words and unrelated lures, and a hard test pitting studied words against semantically related lures. When the easy test came first, participants in both age groups adopted a more stringent criterion on the harder test; when the hard test came first, no criterion shift was seen. Thus, older adults can assess the consequences for accuracy of maintaining a lenient criterion when discrimination becomes more difficult, and can take appropriate action to control errors under these conditions.
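For readers unfamiliar with the term, the response criterion here is the signal-detection-theory quantity c, computed from hit and false-alarm rates. The sketch below uses made-up rates purely to show what a "more stringent criterion" looks like numerically; these are not the paper's data.

    from scipy.stats import norm

    def sdt_criterion(hit_rate, fa_rate):
        # Signal detection criterion c; larger values indicate a more stringent
        # (responding "old" less often) criterion.
        return -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))

    # Illustrative numbers only: fewer false alarms at a similar hit rate
    # corresponds to a stricter criterion.
    print(sdt_criterion(hit_rate=0.80, fa_rate=0.30))   # lenient,  c ~ -0.16
    print(sdt_criterion(hit_rate=0.75, fa_rate=0.15))   # stricter, c ~  0.18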


Subject(s)
Aging/psychology; Recognition, Psychology; Adolescent; Adult; Aged; Aged, 80 and over; Female; Humans; Middle Aged; Semantics; Young Adult
7.
Curr Biol ; 24(7): 748-752, 2014 Mar 31.
Article in English | MEDLINE | ID: mdl-24631242

ABSTRACT

The present study demonstrates, for the first time, a specific enhancement of auditory spatial cue discrimination due to eye gaze. Whereas the region of sharpest visual acuity, called the fovea, can be directed at will by moving one's eyes, auditory spatial information is derived primarily from head-related acoustic cues. Past auditory studies have found better discrimination in front of the head [1-3] but have not manipulated subjects' gaze, thus overlooking potential oculomotor influences. Electrophysiological studies have shown that the inferior colliculus, a critical auditory midbrain nucleus, shows visual and oculomotor responses [4-6] and modulations of auditory activity [7-9], and that auditory neurons in the superior colliculus show shifting receptive fields [10-13]. How the auditory system leverages this crossmodal information at the behavioral level remains unknown. Here we directed subjects' gaze (with an eccentric dot) or auditory attention (with lateralized noise) while they performed an auditory spatial cue discrimination task. We found that directing gaze toward a sound significantly enhances discrimination of both interaural level and time differences, whereas directing auditory spatial attention does not. These results show that oculomotor information variably enhances auditory spatial resolution even when the head remains stationary, revealing a distinct behavioral benefit possibly arising from auditory-oculomotor interactions at an earlier level of processing than previously demonstrated.
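For a sense of scale, the interaural time differences being discriminated are tiny. The sketch below uses the classic spherical-head (Woodworth) approximation, which is not from this paper, to show the ITDs corresponding to a few source azimuths.

    import numpy as np

    def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
        # Classic spherical-head (Woodworth) approximation of the interaural
        # time difference, in seconds, for a far-field source at this azimuth.
        theta = np.radians(azimuth_deg)
        return (head_radius_m / speed_of_sound) * (theta + np.sin(theta))

    # ITD grows from 0 at the midline to roughly 0.65 ms at 90 degrees, so small
    # frontal offsets correspond to differences of only tens of microseconds.
    for az in (0, 5, 30, 90):
        print(f"{az:3d} deg -> {woodworth_itd(az) * 1e6:6.1f} us")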


Subject(s)
Attention; Auditory Perception/physiology; Eye Movements; Adult; Cues; Female; Humans; Male; Photic Stimulation; Sound Localization