ABSTRACT
Lateral intraparietal (LIP) neurons represent the formation of perceptual decisions involving eye movements. In circuit models for these decisions, neural ensembles that encode actions compete to form decisions. Consequently, representation and readout of the decision variables (DVs) are implemented similarly for decisions with identical competing actions, irrespective of input and task-context differences. Further, DVs are encoded as partially potentiated action plans through the balance of activity across action-selective ensembles. Here, we test those core principles. We show that in a novel face-discrimination task, LIP firing rates decrease with supporting evidence, contrary to conventional motion-discrimination tasks. These opposite response patterns arise from similar mechanisms in which decisions form along curved population-response manifolds misaligned with action representations. These manifolds rotate in state space based on context, indicating distinct optimal readouts for different tasks. We show similar manifolds in lateral and medial prefrontal cortices, suggesting similar representational geometry across decision-making circuits.
Subject(s)
Decision Making , Motion Perception/physiology , Parietal Lobe/physiology , Animals , Behavior, Animal , Judgment , Macaca mulatta , Male , Models, Neurological , Neurons/physiology , Photic Stimulation , Prefrontal Cortex/physiology , Psychophysics , Task Performance and Analysis , Time Factors
ABSTRACT
The curse of dimensionality plagues models of reinforcement learning and decision making. The process of abstraction solves this by constructing variables describing features shared by different instances, reducing dimensionality and enabling generalization in novel situations. Here, we characterized neural representations in monkeys performing a task described by different hidden and explicit variables. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training, which requires a particular geometry of neural representations. Neural ensembles in prefrontal cortex, hippocampus, and simulated neural networks simultaneously represented multiple variables in a geometry reflecting abstraction but that still allowed a linear classifier to decode a large number of other variables (high shattering dimensionality). Furthermore, this geometry changed in relation to task events and performance. These findings elucidate how the brain and artificial systems represent variables in an abstract format while preserving the advantages conferred by high shattering dimensionality.
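The decoder-generalization test described in this abstract lends itself to a compact sketch. The following is a minimal, self-contained illustration on synthetic data, not the study's actual analysis pipeline: two binary task variables are coded along separate population axes, and a linear decoder for one variable, trained on only half the conditions, is tested on the held-out conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 200

# Two binary task variables coded along (near-)orthogonal population axes --
# the kind of geometry that supports abstraction in the study's sense.
axis_a = rng.normal(size=n_neurons)
axis_b = rng.normal(size=n_neurons)

def responses(a, b):
    """Simulated single-trial population responses for condition (a, b)."""
    mean = 3.0 * a * axis_a + 3.0 * b * axis_b
    return mean + rng.normal(size=(n_trials, n_neurons))

# Cross-condition generalization: fit a decoder for variable A using only
# conditions with B = 0, then test it on conditions with B = 1.
train_pos, train_neg = responses(1, 0), responses(0, 0)
test_pos, test_neg = responses(1, 1), responses(0, 1)

w = train_pos.mean(0) - train_neg.mean(0)            # centroid-difference decoder
thresh = (train_pos.mean(0) + train_neg.mean(0)) @ w / 2

acc = np.mean(np.concatenate([test_pos @ w > thresh,
                              test_neg @ w <= thresh]))
print(f"cross-condition generalization accuracy: {acc:.2f}")
```

Because the two variables occupy near-orthogonal axes in this toy geometry, the decoder transfers to conditions it never saw, which is the operational signature of an abstract representation as defined in the abstract above.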
Subject(s)
Hippocampus/anatomy & histology , Prefrontal Cortex/anatomy & histology , Animals , Behavior, Animal , Brain Mapping , Computer Simulation , Hippocampus/physiology , Learning , Macaca mulatta , Male , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Prefrontal Cortex/physiology , Reinforcement, Psychology , Task Performance and Analysis
ABSTRACT
Functional neuroimaging studies indicate that the human brain can represent concepts and their relational structure in memory using coding schemes typical of spatial navigation. However, whether we can read out the internal representational geometries of conceptual spaces solely from human behavior remains unclear. Here, we report that the relational structure between concepts in memory might be reflected in spontaneous eye movements during verbal fluency tasks: When we asked participants to randomly generate numbers, their eye movements correlated with distances along the left-to-right one-dimensional geometry of the number space (mental number line), while they scaled with distance along the ring-like two-dimensional geometry of the color space (color wheel) when they randomly generated color names. Moreover, when participants randomly produced animal names, eye movements correlated with low-dimensional similarity in word frequencies. These results suggest that the representational geometries used to internally organize conceptual spaces might be read out from gaze behavior.
Subject(s)
Eye Movements , Spatial Navigation , Humans , Brain , Movement , Functional Neuroimaging
ABSTRACT
Real-world tasks require coordination of working memory, decision-making, and planning, yet these cognitive functions have disproportionately been studied as independent modular processes in the brain. Here, we propose that contingency representations, defined as mappings for how future behaviors depend on upcoming events, can unify working memory and planning computations. We designed a task capable of disambiguating distinct types of representations. In task-optimized recurrent neural networks, we investigated possible circuit mechanisms for contingency representations and found that these representations can explain neurophysiological observations from the prefrontal cortex during working memory tasks. Our experiments revealed that human behavior is consistent with contingency representations and not with traditional sensory models of working memory. Finally, we generated falsifiable predictions for neural data to identify contingency representations in neural data and to dissociate different models of working memory. Our findings characterize a neural representational strategy that can unify working memory, planning, and context-dependent decision-making.
Subject(s)
Computer Simulation , Memory, Short-Term , Models, Neurological , Neural Networks, Computer , Humans , Prefrontal Cortex/physiology
ABSTRACT
The anterior cingulate cortex (ACC) is believed to be involved in many cognitive processes, including linking goals to actions and tracking decision-relevant contextual information. ACC neurons robustly encode expected outcomes, but how this relates to putative functions of ACC remains unknown. Here, we approach this question from the perspective of population codes by analyzing neural spiking data in the ventral and dorsal banks of the ACC in two male monkeys trained to perform a stimulus-motor mapping task to earn rewards or avoid losses. We found that neural populations favor a low-dimensional representational geometry that emphasizes the valence of potential outcomes while also facilitating the independent, abstract representation of multiple task-relevant variables. Valence encoding persisted throughout the trial, and realized outcomes were primarily encoded in a relative sense, such that cue valence acted as a context for outcome encoding. This suggests that the population coding we observe could be a mechanism that allows feedback to be interpreted in a context-dependent manner. Together, our results point to a prominent role for ACC in context setting and relative interpretation of outcomes, facilitated by abstract, or untangled, representations of task variables.
SIGNIFICANCE STATEMENT
The ability to interpret events in light of the current context is a critical facet of higher-order cognition. The ACC is suggested to be important for tracking contextual information, whereas alternate views hold that its function is more related to the motor system and linking goals to appropriate actions. We evaluated these possibilities by analyzing geometric properties of neural population activity in monkey ACC when contexts were determined by the valence of potential outcomes and found that this information was represented as a dominant, abstract concept.
Ensuing outcomes were then coded relative to these contexts, suggesting an important role for these representations in context-dependent evaluation. Such mechanisms may be critical for the abstract reasoning and generalization characteristic of biological intelligence.
Subject(s)
Gyrus Cinguli , Reward , Animals , Male , Gyrus Cinguli/physiology , Neurons/physiology , Macaca mulatta
ABSTRACT
Fine-grained activity patterns, as measured with functional magnetic resonance imaging (fMRI), are thought to reflect underlying neural representations. Multivariate analysis techniques, such as representational similarity analysis (RSA), can be used to test models of brain representation by quantifying the representational geometry (the collection of pair-wise dissimilarities between activity patterns). One important caveat, however, is that non-linearities in the coupling between neural activity and the fMRI signal may lead to significant distortions in the representational geometry estimated from fMRI activity patterns. Here we tested the stability of representational dissimilarity measures in primary sensory-motor (S1 and M1) and early visual regions (V1/V2) across a large range of activation levels. Participants were visually cued with different letters to perform single finger presses with one of the 5 fingers at a rate of 0.3–2.6 Hz. For each stimulation frequency, we quantified the difference between the 5 activity patterns in M1, S1, and V1/V2. We found that the representational geometry remained relatively stable, even though the average activity increased over a large dynamic range. These results indicate that the representational geometry of fMRI activity patterns can be reliably assessed, largely independent of the average activity in the region. This has important methodological implications for RSA and other multivariate analysis approaches that use the representational geometry to make inferences about brain representations.
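The stability test described above can be sketched in a few lines on synthetic data (pattern sizes, gains, and noise levels here are illustrative assumptions, not the study's measurements): correlation-distance RDMs are computed for the same five "finger" patterns at two overall activation levels and then compared.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox = 100

# Synthetic fMRI patterns for 5 fingers; a gain factor mimics higher
# stimulation frequency scaling overall activation without changing
# the underlying pattern geometry.
base = rng.normal(size=(5, n_vox))

def rdm(patterns):
    """Pairwise correlation distances between condition patterns."""
    z = patterns - patterns.mean(1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    return 1.0 - z @ z.T

low  = 1.0 * base + 0.1 * rng.normal(size=base.shape)   # low activation
high = 3.0 * base + 0.1 * rng.normal(size=base.shape)   # high activation

# If the representational geometry is stable, the off-diagonal
# dissimilarities should correlate strongly across gain levels.
iu = np.triu_indices(5, k=1)
r = np.corrcoef(rdm(low)[iu], rdm(high)[iu])[0, 1]
print(f"RDM correlation across activation levels: {r:.2f}")
```

Correlation distance is invariant to a uniform gain on the patterns, so in this toy case the two RDMs nearly coincide; the empirical question the study addresses is whether real fMRI coupling preserves that invariance.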
Subject(s)
Brain Mapping/methods , Image Processing, Computer-Assisted/methods , Motor Activity/physiology , Motor Cortex/physiology , Pattern Recognition, Automated/methods , Psychomotor Performance/physiology , Somatosensory Cortex/physiology , Visual Cortex/physiology , Visual Perception/physiology , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Motor Cortex/diagnostic imaging , Somatosensory Cortex/diagnostic imaging , Visual Cortex/diagnostic imaging , Young Adult
ABSTRACT
Working memory (WM) provides the stability necessary for high-level cognition. Influential theories typically assume that WM depends on the persistence of stable neural representations, yet increasing evidence suggests that neural states are highly dynamic. Here we apply multivariate pattern analysis to explore the population dynamics in primate lateral prefrontal cortex (PFC) during three variants of the classic memory-guided saccade task (recorded in four animals). We observed the hallmark of dynamic population coding across key phases of a working memory task: sensory processing, memory encoding, and response execution. Throughout both these dynamic epochs and the memory delay period, however, the neural representational geometry remained stable. We identified two characteristics that jointly explain these dynamics: (1) time-varying changes in the subpopulation of neurons coding for task variables (i.e., dynamic subpopulations); and (2) time-varying selectivity within neurons (i.e., dynamic selectivity). These results indicate that even in a very simple memory-guided saccade task, PFC neurons display complex dynamics to support stable representations for WM.
SIGNIFICANCE STATEMENT
Flexible, intelligent behavior requires the maintenance and manipulation of incoming information over various time spans. For short time spans, this faculty is labeled "working memory" (WM). Dominant models propose that WM is maintained by stable, persistent patterns of neural activity in prefrontal cortex (PFC). However, recent evidence suggests that neural activity in PFC is dynamic, even while the contents of WM remain stably represented. Here, we explored the neural dynamics in PFC during a memory-guided saccade task. We found evidence for dynamic population coding in various task epochs, despite striking stability in the neural representational geometry of WM. Furthermore, we identified two distinct cellular mechanisms that contribute to dynamic population coding.
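The notion of "dynamic population coding" above has a standard operational test: a decoder trained at one time point fails to generalize to another. A minimal synthetic sketch (the subpopulation sizes and noise model are our own assumptions, not the recorded data) of the dynamic-subpopulation case:

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons = 60

# Two memoranda coded by different subpopulations early vs. late in the
# trial (dynamic subpopulations), even though each epoch separates the
# two conditions equally well.
axis_early = np.concatenate([rng.normal(size=30), np.zeros(30)])
axis_late  = np.concatenate([np.zeros(30), rng.normal(size=30)])

def trials(axis, sign, n=100):
    """Simulated population activity for one condition at one epoch."""
    return sign * axis + rng.normal(size=(n, n_neurons))

# Train a centroid-difference decoder on early activity.
tr_p, tr_n = trials(axis_early, +1), trials(axis_early, -1)
w = tr_p.mean(0) - tr_n.mean(0)

def acc(pos, neg):
    return np.mean(np.concatenate([pos @ w > 0, neg @ w < 0]))

within = acc(trials(axis_early, +1), trials(axis_early, -1))
across = acc(trials(axis_late, +1), trials(axis_late, -1))
print(f"within-time accuracy: {within:.2f}, cross-time accuracy: {across:.2f}")
```

Within-epoch decoding succeeds while cross-epoch decoding falls to chance, even though the two conditions remain equally separable at both epochs, which is one way a population can show dynamic coding alongside a stable representational geometry.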
Subject(s)
Memory, Short-Term/physiology , Mental Recall/physiology , Models, Neurological , Nerve Net/physiology , Prefrontal Cortex/physiology , Saccades/physiology , Animals , Computer Simulation , Macaca mulatta , Male
ABSTRACT
Perceptual similarity is a cognitive judgment that represents the end-stage of a complex cascade of hierarchical processing throughout visual cortex. Previous studies have shown a correspondence between the similarity of coarse-scale fMRI activation patterns and the perceived similarity of visual stimuli, suggesting that visual objects that appear similar also share similar underlying patterns of neural activation. Here we explore the temporal relationship between the human brain's time-varying representation of visual patterns and behavioral judgments of perceptual similarity. The visual stimuli were abstract patterns constructed from identical perceptual units (oriented Gabor patches) so that each pattern had a unique global form or perceptual 'Gestalt'. The visual stimuli were decodable from evoked neural activation patterns measured with magnetoencephalography (MEG), however, stimuli differed in the similarity of their neural representation as estimated by differences in decodability. Early after stimulus onset (from 50 ms), a model based on retinotopic organization predicted the representational similarity of the visual stimuli. Following the peak correlation between the retinotopic model and neural data at 80 ms, the neural representations quickly evolved so that retinotopy no longer provided a sufficient account of the brain's time-varying representation of the stimuli. Overall, the strongest predictor of the brain's representation was a model based on human judgments of perceptual similarity, which reached the limits of the maximum correlation with the neural data defined by the 'noise ceiling'. Our results show that large-scale brain activation patterns contain a neural signature for the perceptual Gestalt of composite visual features, and demonstrate a strong correspondence between perception and complex patterns of brain activity.
Subject(s)
Brain/physiology , Judgment/physiology , Pattern Recognition, Visual/physiology , Adult , Female , Humans , Magnetoencephalography , Male , Photic Stimulation , Young Adult
ABSTRACT
Our visual system consciously processes only a subset of the incoming information. Selective attention allows us to prioritize relevant inputs, and can be allocated to features, locations, and objects. Recent advances in feature-based attention suggest that several selection principles are shared across these domains and that many differences between the effects of attention on perceptual processing can be explained by differences in the underlying representational structures. Moving forward, it can thus be useful to assess how attention changes the structure of the representational spaces over which it operates, which include the spatial organization, feature maps, and object-based coding in visual cortex. This will ultimately add to our understanding of how attention changes the flow of visual information processing more broadly.
Subject(s)
Attention , Visual Perception , Humans , Attention/physiology , Visual Perception/physiology , Visual Cortex/physiology
ABSTRACT
How do humans and other animals learn new tasks? A wave of brain recording studies has investigated how neural representations change during task learning, with a focus on how tasks can be acquired and coded in ways that minimise mutual interference. We review recent work that has explored the geometry and dimensionality of neural task representations in neocortex, and computational models that have exploited these findings to understand how the brain may partition knowledge between tasks. We discuss how ideas from machine learning, including those that combine supervised and unsupervised learning, are helping neuroscientists understand how natural tasks are learned and coded in biological brains.
Subject(s)
Machine Learning , Neural Networks, Computer , Animals , Humans , Brain
ABSTRACT
Traditional contrastive analysis has been the foundation of consciousness science, but its limitations due to the lack of a reliable method for measuring states of consciousness have prompted the exploration of alternative approaches. Structuralist theories have gained attention as an alternative that focuses on the structural properties of phenomenal experience and seeks to identify their neural encoding via structural similarities between quality spaces and neural state spaces. However, the intertwining of philosophical assumptions about structuralism and structuralist methodology may pose a challenge to those who are skeptical of the former. In this paper, I offer an analysis and defense of structuralism as a methodological approach in consciousness science, which is partly independent of structuralist assumptions on the nature of consciousness. By doing so, I aim to make structuralist methodology more accessible to a broader scientific and philosophical audience. I situate methodological structuralism in the context of questions concerning mental representation, psychophysical measurement, holism, and functional relevance of neural processes. Finally, I analyze the relationship between the structural approach and the distinction between conscious and unconscious states.
ABSTRACT
Humans can navigate flexibly to meet their goals. Here, we asked how the neural representation of allocentric space is distorted by goal-directed behavior. Participants navigated an agent to two successive goal locations in a grid world environment comprising four interlinked rooms, with a contextual cue indicating the conditional dependence of one goal location on another. Examining the neural geometry by which room and context were encoded in fMRI signals, we found that map-like representations of the environment emerged in both hippocampus and neocortex. Cognitive maps in hippocampus and orbitofrontal cortices were compressed so that locations cued as goals were coded together in neural state space, and these distortions predicted successful learning. This effect was captured by a computational model in which current and prospective locations are jointly encoded in a place code, providing a theory of how goals warp the neural representation of space in macroscopic neural signals.
Subject(s)
Neocortex , Spatial Navigation , Humans , Goals , Prospective Studies , Hippocampus , Prefrontal Cortex , Space Perception
ABSTRACT
The neural code of faces has been intensively studied in the macaque face patch system. Although the majority of previous studies used complete faces as stimuli, faces are often seen partially in daily life. Here, we investigated how face-selective cells represent two types of incomplete faces: face fragments and occluded faces, with the location of the fragment/occluder and the facial features systematically varied. Contrary to popular belief, we found that the preferred face regions identified with two stimulus types are dissociated in many face cells. This dissociation can be explained by the nonlinear integration of information from different face parts and is closely related to a curved representation of face completeness in the state space, which allows a clear discrimination between different stimulus types. Furthermore, identity-related facial features are represented in a subspace orthogonal to the nonlinear dimension of face completeness, supporting a condition-general code of facial identity.
Subject(s)
Face , Macaca , Animals , Photic Stimulation/methods , Magnetic Resonance Imaging/methods
ABSTRACT
Human fMRI studies have documented extensively that the content of visual working memory (VWM) can be reliably decoded from fMRI voxel response patterns during the delay period in both the occipito-temporal cortex (OTC), including early visual areas (EVC), and the posterior parietal cortex (PPC).1,2,3,4 Further work has revealed that VWM signal in OTC is largely sustained by feedback from associative areas such as prefrontal cortex (PFC) and PPC.4,5,6,7,8,9 It is unclear, however, if feedback during VWM simply restores sensory representations initially formed in OTC or if it can reshape the representational content of OTC during VWM delay. Taking advantage of a recent finding showing that object representational geometry differs between OTC and PPC in perception,10 here we find that, during VWM delay, the object representational geometry in OTC becomes more aligned with that of PPC during perception than with itself during perception. This finding supports the role of feedback in shaping the content of VWM in OTC, with the VWM content of OTC more determined by information retained in PPC than by the sensory information initially encoded in OTC.
Subject(s)
Brain Mapping , Memory, Short-Term , Humans , Memory, Short-Term/physiology , Temporal Lobe/physiology , Parietal Lobe/physiology , Prefrontal Cortex/diagnostic imaging , Prefrontal Cortex/physiology , Magnetic Resonance Imaging , Visual Perception/physiology
ABSTRACT
The ability to stably maintain visual information over brief delays is central to cognitive functioning. One possible way to achieve robust working memory maintenance is by having multiple concurrent mnemonic representations across multiple cortical loci. For example, early visual cortex might contribute to storage by representing information in a "sensory-like" format, while intraparietal sulcus uses a format transformed away from sensory driven responses. As an explicit test of mnemonic code transformations along the visual hierarchy, we quantitatively modeled the progression of veridical-to-categorical orientation representations in human participants. Participants directly viewed, or held in mind, an oriented grating pattern, and the similarity between fMRI activation patterns for different orientations was calculated throughout retinotopic cortex. During direct perception, similarity was clustered around cardinal orientations, while during working memory the obliques were represented more similarly. We modeled these similarity patterns based on the known distribution of orientation information in the natural world: The "veridical" model uses an efficient coding framework to capture hypothesized representations during visual perception. The "categorical" model assumes that different "psychological distances" between orientations result in orientation categorization relative to cardinal axes. During direct perception, the veridical model explained the data well in early visual areas, while the categorical model did worse. During working memory, the veridical model only explained some of the data, while the categorical model gradually gained explanatory power for increasingly anterior retinotopic regions. These findings suggest that directly viewed images are represented veridically, but once visual information is no longer tethered to the sensory world, there is a gradual progression to more categorical mnemonic formats along the visual hierarchy.
ABSTRACT
Objective. Enable neural control of individual prosthetic fingers for participants with upper-limb paralysis.
Approach. Two tetraplegic participants were each implanted with a 96-channel array in the left posterior parietal cortex (PPC). One of the participants was additionally implanted with a 96-channel array near the hand knob of the left motor cortex (MC). Across tens of sessions, we recorded neural activity while the participants attempted to move individual fingers of the right hand. Offline, we classified attempted finger movements from neural firing rates using linear discriminant analysis with cross-validation. The participants then used the neural classifier online to control individual fingers of a brain-machine interface (BMI). Finally, we characterized the neural representational geometry during individual finger movements of both hands.
Main Results. The two participants achieved 86% and 92% online accuracy during BMI control of the contralateral fingers (chance = 17%). Offline, a linear decoder achieved ten-finger decoding accuracies of 70% and 66% using respective PPC recordings and 75% using MC recordings (chance = 10%). In MC and in one PPC array, a factorized code linked corresponding finger movements of the contralateral and ipsilateral hands.
Significance. This is the first study to decode both contralateral and ipsilateral finger movements from PPC. Online BMI control of contralateral fingers exceeded that of previous finger BMIs. PPC and MC signals can be used to control individual prosthetic fingers, which may contribute to a hand restoration strategy for people with tetraplegia.
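The offline decoding step described above (a cross-validated linear classifier on firing rates) can be illustrated with synthetic data. This sketch substitutes a nearest-centroid classifier for the paper's linear discriminant analysis, and every array size, tuning vector, and noise level is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, n_trials, n_fingers = 96, 40, 5

# Synthetic firing rates: each attempted finger movement gets its own mean
# population rate vector (illustrative tuning, not recorded data).
means = rng.normal(size=(n_fingers, n_units))
X = np.stack([m + rng.normal(scale=1.5, size=(n_trials, n_units))
              for m in means])

# Held-out-split cross-validation with a nearest-centroid decoder
# (a simple stand-in for LDA with a shared covariance).
half = n_trials // 2
centroids = X[:, :half].mean(1)                 # fit on the first half of trials
test = X[:, half:].reshape(-1, n_units)         # evaluate on the second half
labels = np.repeat(np.arange(n_fingers), n_trials - half)

dists = ((test[:, None, :] - centroids[None]) ** 2).sum(-1)
acc = np.mean(dists.argmin(1) == labels)
print(f"cross-validated finger decoding accuracy: {acc:.2f} (chance = 0.20)")
```

With well-separated class means the decoder is near ceiling; the interesting quantity in practice is how accuracy degrades as tuning overlaps, which is what the reported 66-75% ten-finger accuracies reflect.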
Subject(s)
Motor Cortex , Humans , Fingers , Movement , Hand , Parietal Lobe
ABSTRACT
Motor planning plays a critical role in producing fast and accurate movement. Yet, the neural processes that occur in human primary motor and somatosensory cortex during planning, and how they relate to those during movement execution, remain poorly understood. Here, we used 7T functional magnetic resonance imaging and a delayed movement paradigm to study single finger movement planning and execution. The inclusion of no-go trials and variable delays allowed us to separate what are typically overlapping planning and execution brain responses. Although our univariate results show widespread deactivation during finger planning, multivariate pattern analysis revealed finger-specific activity patterns in contralateral primary somatosensory cortex (S1), which predicted the planned finger action. Surprisingly, these activity patterns were as informative as those found in contralateral primary motor cortex (M1). Control analyses ruled out the possibility that the detected information was an artifact of subthreshold movements during the preparatory delay. Furthermore, we observed that finger-specific activity patterns during planning were highly correlated to those during execution. These findings reveal that motor planning activates the specific S1 and M1 circuits that are engaged during the execution of a finger press, while activity in both regions is overall suppressed. We propose that preparatory states in S1 may improve movement control through changes in sensory processing or via direct influence of spinal motor neurons.
Subject(s)
Brain/physiology , Motor Cortex/physiology , Psychomotor Performance/physiology , Somatosensory Cortex/physiology , Adult , Brain Mapping/methods , Female , Fingers/physiology , Humans , Magnetic Resonance Imaging/methods , Male , Movement/physiology , Young Adult
ABSTRACT
How do neural populations code for multiple, potentially conflicting tasks? Here we used computational simulations involving neural networks to define "lazy" and "rich" coding solutions to this context-dependent decision-making problem, which trade off learning speed for robustness. During lazy learning the input dimensionality is expanded by random projections to the network hidden layer, whereas in rich learning hidden units acquire structured representations that privilege relevant over irrelevant features. For context-dependent decision-making, one rich solution is to project task representations onto low-dimensional and orthogonal manifolds. Using behavioral testing and neuroimaging in humans and analysis of neural signals from macaque prefrontal cortex, we report evidence for neural coding patterns in biological brains whose dimensionality and neural geometry are consistent with the rich learning regime.
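The "lazy" regime described above — a fixed random expansion followed by a trained linear readout — can be shown in a few lines. In this toy sketch (layer sizes and the least-squares readout are our own choices, not the paper's networks), a random nonlinear projection makes an XOR-like context-dependent task solvable by a purely linear readout:

```python
import numpy as np

rng = np.random.default_rng(3)

# Context-dependent (XOR-like) task: the correct response depends on the
# conjunction of context and feature. Not linearly separable in 2-D input.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])  # XOR of (context, feature)

# "Lazy" expansion: a fixed random projection to a large hidden layer with
# a nonlinearity; only the linear readout is trained (here by least squares).
W = rng.normal(size=(2, 200))
b = rng.normal(size=200)
H = np.tanh(X @ W + b)

readout, *_ = np.linalg.lstsq(H, 2 * y - 1, rcond=None)
pred = (H @ readout > 0).astype(int)
print("readout predictions:", pred)  # matches y when the expansion succeeds
```

This captures the trade-off named in the abstract: the random expansion learns fast (only the readout is fit) but yields high-dimensional, unstructured hidden representations, whereas the "rich" regime instead sculpts low-dimensional, orthogonal task manifolds in the hidden layer.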
Subject(s)
Neural Networks, Computer , Task Performance and Analysis , Brain , Learning , Prefrontal Cortex
ABSTRACT
Animals adaptively integrate sensation, planning, and action to navigate toward goal locations in ever-changing environments, but the functional organization of cortex supporting these processes remains unclear. We characterized encoding in approximately 90,000 neurons across the mouse posterior cortex during a virtual navigation task with rule switching. The encoding of task and behavioral variables was highly distributed across cortical areas but differed in magnitude, resulting in three spatial gradients for visual cue, spatial position plus dynamics of choice formation, and locomotion, with peaks respectively in visual, retrosplenial, and parietal cortices. Surprisingly, the conjunctive encoding of these variables in single neurons was similar throughout the posterior cortex, creating high-dimensional representations in all areas instead of revealing computations specialized for each area. We propose that, for guiding navigation decisions, the posterior cortex operates in parallel rather than hierarchically, and collectively generates a state representation of the behavior and environment, with each area specialized in handling distinct information modalities.
Subject(s)
Neocortex , Spatial Navigation , Animals , Locomotion/physiology , Mice , Neurons/physiology , Parietal Lobe/physiology , Spatial Navigation/physiology
ABSTRACT
We live our lives surrounded by symbols (e.g., road signs, logos, but especially words and numbers), and throughout our life we use them to evoke, communicate and reflect upon ideas and things that are not currently present to our senses. Symbols are represented in our brains at different levels of complexity: at the first and most simple level, as physical entities, in the corresponding primary and secondary sensory cortices. The crucial property of symbols, however, is that, despite the simplicity of their surface forms, they have the power of evoking higher order multifaceted representations that are implemented in distributed neural networks spanning a large portion of the cortex. The rich internal states that reflect our knowledge of the meaning of symbols are what we call semantic representations. In this review paper, we summarize our current knowledge of both the cognitive and neural substrates of semantic representations, focusing on concrete words (i.e., nouns or verbs referring to concrete objects and actions), which, together with numbers, are the most studied and best-defined classes of symbols. Following a systematic descriptive approach, we will organize this literature review around two key questions: What is the content of semantic representations? And how are semantic representations implemented in the brain, in terms of localization and dynamics? While highlighting the main current opposing perspectives on these topics, we propose that a fruitful way to make substantial progress in this domain would be to adopt a geometrical view of semantic representations as points in high dimensional space, and to operationally partition the space of concrete word meaning into motor-perceptual and conceptual dimensions. By giving concrete examples of the kinds of research that can be done within this perspective, we illustrate how we believe this framework will foster theoretical speculations as well as empirical research.