Results 1 - 20 of 32
1.
Top Cogn Sci ; 16(1): 71-73, 2024 01.
Article in English | MEDLINE | ID: mdl-38205906

ABSTRACT

The International Conference on Cognitive Modelling is dedicated to understanding how the complex processes of the mind can be explained in terms of detailed inner processing. In this issue, we present four representative papers of this field of research from our 20th meeting, ICCM 2022. This was our first hybrid meeting, with a virtual event held July 11-15, 2022, and an in-person event held July 23-27, 2022, in Toronto, Canada. The four papers presented here were the top-ranked papers across both the virtual and in-person events. Three of the papers develop novel computational theories about low-level components within the mind and how those components give rise to high-level phenomena such as motivation, anhedonia, and attention. The final paper demonstrates the use of cognitive modeling to develop novel explanations of a paired-associate learning task, and uses those insights to model and explain human performance in a more complex version of that task.


Subjects
Cognition, Humans, Congresses as Topic
2.
PLoS Comput Biol ; 19(9): e1011427, 2023 09.
Article in English | MEDLINE | ID: mdl-37682986

ABSTRACT

Brain models typically focus either on low-level biological detail or on qualitative behavioral effects. In contrast, we present a biologically plausible spiking-neuron model of associative learning and recognition that accounts for both human behavior and low-level brain activity across the whole task. Based on cognitive theories and insights from machine-learning analyses of M/EEG data, the model proceeds through five processing stages: stimulus encoding, familiarity judgement, associative retrieval, decision making, and motor response. The results matched human response times and source-localized MEG data in occipital, temporal, prefrontal, and precentral brain regions, as well as a classic fMRI effect in prefrontal cortex. This required two main conceptual advances: a basal-ganglia-thalamus action-selection system that relies on brief thalamic pulses to change the functional connectivity of the cortex, and a new unsupervised learning rule that causes very strong pattern separation in the hippocampus. The resulting model shows how low-level brain activity can result in goal-directed cognitive behavior in humans.


Subjects
Brain, Recognition (Psychology), Humans, Neuroimaging, Prefrontal Cortex, Reaction Time
3.
Front Comput Neurosci ; 17: 1148284, 2023.
Article in English | MEDLINE | ID: mdl-37362059

ABSTRACT

A variety of advanced machine learning and deep learning algorithms achieve state-of-the-art performance on various temporal processing tasks. However, these methods are highly energy-inefficient, as they run mainly on power-hungry CPUs and GPUs. Computing with spiking networks, on the other hand, has been shown to be energy-efficient on specialized neuromorphic hardware, e.g., Loihi, TrueNorth, and SpiNNaker. In this work, we present two architectures of spiking models, inspired by the theory of Reservoir Computing and Legendre Memory Units, for the Time Series Classification (TSC) task. Our first spiking architecture is closer to the general Reservoir Computing architecture, and we successfully deploy it on Loihi; the second spiking architecture differs from the first by the inclusion of non-linearity in the readout layer. Our second model (trained with the Surrogate Gradient Descent method) shows that non-linear decoding of the linearly extracted temporal features through spiking neurons not only achieves promising results, but also offers low computational overhead by significantly reducing the number of neurons compared to popular LSM-based models: more than a 40x reduction with respect to the recent spiking model we compare against. We experiment on five TSC datasets and achieve new state-of-the-art spiking results (as much as a 28.607% accuracy improvement on one of the datasets), thereby showing the potential of our models to address TSC tasks in a green, energy-efficient manner. In addition, we perform energy profiling and comparison on Loihi and a CPU to support our claims.
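
The abstract does not spell out the architectures; the following is a minimal, non-spiking NumPy sketch of the underlying recipe, assuming a Legendre-Memory-Unit-style linear memory for temporal feature extraction, a fixed random nonlinearity standing in for the reservoir neurons, and a linear least-squares readout. The function names, toy dataset, and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def lmu_matrices(order, theta):
    """Continuous-time Legendre Delay Network (A, B) matrices (Voelker et al., 2019)."""
    q = np.arange(order)
    r = (2 * q + 1)[:, None] / theta
    i, j = np.meshgrid(q, q, indexing="ij")
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * r
    B = ((-1.0) ** q)[:, None] * r
    return A, B

def memory_state(signal, order=6, theta=1.0, dt=0.01):
    """Integrate the linear memory over a 1-D signal and return its final state."""
    A, B = lmu_matrices(order, theta)
    m = np.zeros(order)
    for u in signal:
        m = m + dt * (A @ m + B[:, 0] * u)  # forward-Euler step
    return m

# Toy two-class TSC problem: slow (2 Hz) vs. fast (5 Hz) sine waves, random phase.
rng = np.random.default_rng(0)
t = np.arange(0, 1, 0.01)
states, labels = [], []
for _ in range(200):
    freq = rng.choice([2.0, 5.0])
    sig = np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
    states.append(memory_state(sig))
    labels.append(0 if freq == 2.0 else 1)
states, labels = np.array(states), np.array(labels)

# Fixed random nonlinear layer (stand-in for the spiking reservoir neurons),
# followed by a linear readout trained with least squares.
W_hid = rng.normal(size=(6, 100))
b_hid = rng.normal(size=100)
H = np.tanh(states @ W_hid + b_hid)
W_out = np.linalg.lstsq(H, np.eye(2)[labels], rcond=None)[0]
accuracy = (np.argmax(H @ W_out, axis=1) == labels).mean()
print("training accuracy:", accuracy)
```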

5.
Brain Sci ; 13(2)2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36831788

ABSTRACT

The Neural Engineering Framework (Eliasmith & Anderson, 2003) is a long-standing method for implementing high-level algorithms constrained by low-level neurobiological details. In recent years, this method has been expanded to incorporate more biological details and applied to new tasks. This paper brings together these ongoing research strands, presenting them in a common framework. We expand on the NEF's core principles of (a) specifying the desired tuning curves of neurons in different parts of the model, (b) defining the computational relationships between the values represented by the neurons in different parts of the model, and (c) finding the synaptic connection weights that will cause those computations and tuning curves. In particular, we show how to extend this to include complex spatiotemporal tuning curves, and then apply this approach to produce functional computational models of grid cells, time cells, path integration, sparse representations, probabilistic representations, and symbolic representations in the brain.
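
As a concrete illustration of principles (a)-(c), here is a minimal rate-based NumPy sketch in the spirit of the NEF: heterogeneous rectified-linear tuning curves stand in for spiking neurons, and a regularized least-squares solve yields the decoders that compute a chosen function (here f(x) = x**2). The neuron model, regularization, and parameter choices are simplifications, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_eval = 50, 200
x = np.linspace(-1, 1, n_eval)

# Principle (a): choose heterogeneous tuning curves (rectified-linear for brevity).
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)
A = np.maximum(0.0, gains * encoders * x[:, None] + biases)  # (n_eval, n_neurons)

# Principle (b): define the function the population should compute, e.g. f(x) = x**2.
target = x ** 2

# Principle (c): solve for linear decoders by regularized least squares.
reg = 0.1 * A.max()
gamma = A.T @ A + reg ** 2 * np.eye(n_neurons)
upsilon = A.T @ target
decoders = np.linalg.solve(gamma, upsilon)

estimate = A @ decoders
print("RMSE of decoded x**2:", np.sqrt(np.mean((estimate - target) ** 2)))

# Full connection weights to a downstream population would then follow as the
# outer product of the next population's scaled encoders with these decoders,
# e.g. W = np.outer(gains_post * encoders_post, decoders) (hypothetical names).
```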

6.
Front Neurosci ; 16: 983950, 2022.
Article in English | MEDLINE | ID: mdl-36340782

ABSTRACT

This study proposes voltage-dependent synaptic plasticity (VDSP), a novel brain-inspired unsupervised local learning rule for the online implementation of Hebb's plasticity mechanism on neuromorphic hardware. The proposed VDSP learning rule updates the synaptic conductance only on spikes of the postsynaptic neuron, which halves the number of updates relative to standard spike-timing-dependent plasticity (STDP). The update depends on the membrane potential of the presynaptic neuron, which is readily available as part of the neuron implementation and hence requires no additional memory for storage. Moreover, the update is regularized by the current synaptic weight, preventing weights from exploding or vanishing under repeated stimulation. A rigorous mathematical analysis draws an equivalence between VDSP and STDP. To validate the system-level performance of VDSP, we train a single-layer spiking neural network (SNN) to recognize handwritten digits. We report 85.01 ± 0.76% (mean ± SD) accuracy for a network of 100 output neurons on the MNIST dataset. The performance improves when scaling up the network (89.93 ± 0.41% for 400 output neurons, 90.56 ± 0.27% for 500 neurons), which supports the applicability of the proposed learning rule to spatial pattern recognition tasks; future work will consider more complex tasks. Interestingly, the learning rule adapts better than STDP to the frequency of the input signal and does not require hand-tuning of hyperparameters.
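
The abstract describes the structure of the rule but not its exact equations; the sketch below, with assumed parameter names and an assumed functional form, only illustrates the three stated properties: updates occur on postsynaptic spikes only, their sign and size depend on the presynaptic membrane potential, and weight-dependent scaling keeps the weights bounded.

```python
import numpy as np

def vdsp_update(w, v_pre, post_spiked, v_rest=-65.0, v_thresh=-50.0,
                lr=0.01, w_min=0.0, w_max=1.0):
    """
    Sketch of a voltage-dependent plasticity update in the spirit of VDSP.
    The exact update used in the paper may differ; this only illustrates:
    (1) updates happen on postsynaptic spikes only, (2) their sign/size depends
    on the presynaptic potential, and (3) they are weight-dependent so weights
    stay bounded in [w_min, w_max].
    """
    if not post_spiked:
        return w  # no update between postsynaptic spikes
    # Presynaptic neurons that are depolarized (recently active) are potentiated;
    # hyperpolarized/quiet ones are depressed.
    drive = (v_pre - v_rest) / (v_thresh - v_rest)  # ~1 near threshold, ~0 at rest
    potentiate = drive * (w_max - w)         # soft bound at w_max
    depress = (1.0 - drive) * (w - w_min)    # soft bound at w_min
    return np.clip(w + lr * (potentiate - depress), w_min, w_max)

# Example: one postsynaptic spike, two presynaptic neurons (one active, one quiet).
w = np.array([0.5, 0.5])
print(vdsp_update(w, v_pre=np.array([-51.0, -64.0]), post_spiked=True))
```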

7.
Neural Comput ; 34(11): 2205-2231, 2022 10 07.
Article in English | MEDLINE | ID: mdl-36112910

ABSTRACT

Many animal behaviors require orientation and steering with respect to the environment. For insects, a key brain area involved in spatial orientation and navigation is the central complex. Activity in this neural circuit has been shown to track the insect's current heading relative to its environment and has also been proposed to be the substrate of path integration. However, it remains unclear how the output of the central complex is integrated into motor commands. Central complex output neurons project to the lateral accessory lobes (LAL), from which descending neurons project to thoracic motor centers. Here, we present a computational model of a simple neural network that has been described anatomically and physiologically in the LALs of male silkworm moths, in the context of odor-mediated steering. We present and analyze two versions of this network, one rate based and one based on spiking neurons. The modeled network consists of an inhibitory local interneuron and a bistable descending neuron (flip-flop) that both receive input in the LAL. The flip-flop neuron projects onto neck motor neurons to induce steering. We show that this simple computational model not only replicates the basic parameters of male silkworm moth behavior in a simulated odor plume but can also take input from a computational model of path integration in the central complex and use it to steer back to a point of origin. Furthermore, we find that increasing the level of detail within the model improves the realism of the model's behavior, leading to the emergence of looping behavior as an orientation strategy. Our results suggest that descending neurons originating in the LALs, such as flip-flop neurons, are sufficient to mediate multiple steering behaviors. This study is therefore a first step to close the gap between orientation circuits in the central complex and downstream motor centers.
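
As a rough illustration of how a bistable descending neuron can mediate steering, here is a rate-based toy sketch: a flip-flop state toggles on each odor encounter and sets the sign of the commanded turn. The toggle-on-pulse abstraction, the parameter values, and the omission of the inhibitory interneuron are simplifications of the model described in the paper.

```python
import numpy as np

def steering_trace(odor_pulses, dt=0.01, turn_rate=1.5):
    """
    Rate-based sketch of LAL flip-flop steering: the descending "flip-flop"
    neuron toggles between a high and a low state on each odor encounter, and
    its state sets the sign of the angular velocity sent to the neck/thoracic
    motor system.
    """
    state = 1.0              # +1: turn left, -1: turn right
    heading = 0.0
    headings = []
    prev = 0.0
    for pulse in odor_pulses:
        if pulse > 0.5 and prev <= 0.5:   # rising edge of an odor encounter
            state = -state                # flip-flop toggles
        heading += dt * turn_rate * state
        headings.append(heading)
        prev = pulse
    return np.array(headings)

# Odor filaments arriving every ~2 s produce the alternating zigzag search pattern.
t = np.arange(0, 10, 0.01)
pulses = (np.sin(2 * np.pi * t / 2.0) > 0.95).astype(float)
print(steering_trace(pulses)[::100])  # heading (rad) sampled once per second
```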


Subjects
Neurons, Smell, Animals, Brain/physiology, Insects/physiology, Male, Neurons/physiology, Space Perception/physiology
8.
Top Cogn Sci ; 14(4): 825-827, 2022 10.
Article in English | MEDLINE | ID: mdl-36162312

ABSTRACT

The International Conference on Cognitive Modeling brings together researchers from around the world whose main goal is to build computational systems that reflect the internal processes of the mind. In this issue, we present the five best representative papers on this work from our 19th meeting, ICCM 2021, which was held virtually from July 3 to July 9, 2021. Three of these papers provide new techniques for refining computational models, giving better methods for taking empirical data and producing accurate computational models of the cognitive systems that produce them. The other two papers focus on explanation: using models to elucidate the underlying processes affecting cognition in such diverse domains as logical reasoning and the effects of caffeine.


Subjects
Caffeine, Cognition, Humans
9.
IEEE Trans Neural Netw Learn Syst ; 33(6): 2575-2585, 2022 06.
Article in English | MEDLINE | ID: mdl-34255637

ABSTRACT

Differentiable neural computers (DNCs) extend artificial neural networks with an explicit memory without interference, thus enabling the model to perform classic computation tasks, such as graph traversal. However, such models are difficult to train, requiring long training times and large datasets. In this work, we achieve some of the computational capabilities of DNCs with a model that can be trained very efficiently, namely, an echo state network with an explicit memory without interference. This extension enables echo state networks to recognize all regular languages, including those that contractive echo state networks provably cannot recognize. Furthermore, we demonstrate experimentally that our model performs comparably to its fully trained deep version on several typical benchmark tasks for DNCs.
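
For reference, a plain echo state network (without the paper's interference-free explicit memory extension) can be sketched in a few lines: a fixed random recurrent reservoir is scaled to a target spectral radius, and only a linear readout is trained by ridge regression. All parameter values and the toy delay task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_res, spectral_radius = 200, 0.9

# Random, fixed reservoir scaled to the target spectral radius (echo-state property).
W = rng.normal(size=(n_res, n_res))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy task: recall u(t-5) from the reservoir state (a short-memory benchmark).
u = rng.uniform(-1, 1, size=1000)
X = run_reservoir(u)[50:]          # drop the initial transient
y = np.roll(u, 5)[50:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("delay-5 recall MSE:", np.mean((X @ W_out - y) ** 2))
```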


Subjects
Memory, Neural Networks (Computer), Computers, Language
10.
Neural Comput ; 33(8): 2033-2067, 2021 07 26.
Article in English | MEDLINE | ID: mdl-34310679

ABSTRACT

While neural networks are highly effective at learning task-relevant representations from data, they typically do not learn representations with the kind of symbolic structure that is hypothesized to support high-level cognitive processes, nor do they naturally model such structures within problem domains that are continuous in space and time. To fill these gaps, this work exploits a method for defining vector representations that bind discrete (symbol-like) entities to points in continuous topological spaces in order to simulate and predict the behavior of a range of dynamical systems. These vector representations are spatial semantic pointers (SSPs), and we demonstrate that they can (1) be used to model dynamical systems involving multiple objects represented in a symbol-like manner and (2) be integrated with deep neural networks to predict the future of physical trajectories. These results help unify what have traditionally appeared to be disparate approaches in machine learning.
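
A minimal NumPy sketch of the SSP idea, assuming the usual construction from the VSA literature: a random unitary axis vector is raised to a real-valued power (fractional binding via the Fourier domain) to encode a continuous position, bound to a symbol-like identity vector by circular convolution, and decoded by scanning similarity over candidate positions. Dimensions, names, and the decode-by-scanning step are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np

def unitary(d, rng):
    """Random unitary base vector: all of its Fourier coefficients lie on the unit circle."""
    f = np.fft.fft(rng.normal(size=d))
    return np.fft.ifft(f / np.abs(f)).real

def power(v, p):
    """Fractional binding: bind v with itself p (real-valued) times, via the Fourier domain."""
    return np.fft.ifft(np.fft.fft(v) ** p).real

def bind(a, b):
    """Binding by circular convolution."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

rng = np.random.default_rng(3)
d = 511                       # odd dimension avoids the Nyquist bin in power()
X = unitary(d, rng)           # spatial axis vector
OBJ = unitary(d, rng)         # symbol-like identity vector (unitary: exact inverse exists)

# Encode "OBJ is at position 2.3" as a single d-dimensional vector.
memory = bind(OBJ, power(X, 2.3))

# Decode: unbind OBJ (convolve with its involution) and scan positions for the best match.
obj_inv = OBJ[np.concatenate(([0], np.arange(d - 1, 0, -1)))]
unbound = bind(memory, obj_inv)
positions = np.linspace(0, 5, 501)
sims = [unbound @ power(X, p) for p in positions]
print("decoded position:", positions[int(np.argmax(sims))])  # ~= 2.3
```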

11.
Top Cogn Sci ; 13(3): 515-533, 2021 07.
Article in English | MEDLINE | ID: mdl-34146453

ABSTRACT

Neurophysiology and neuroanatomy constrain the set of possible computations that can be performed in a brain circuit. While detailed data on brain microcircuits are sometimes available, cognitive modelers are seldom in a position to take these constraints into account. One reason for this is the intrinsic complexity of accounting for biological mechanisms when describing cognitive function. In this paper, we present multiple extensions to the Neural Engineering Framework (NEF) that simplify the integration of low-level constraints, such as Dale's principle and spatially constrained connectivity, into high-level functional models. We focus on a model of eyeblink conditioning in the cerebellum and, in particular, on systematically constructing temporal representations in the recurrent granule-Golgi microcircuit. We analyze how biological constraints impact these representations and demonstrate that our overall model is capable of reproducing key properties of eyeblink conditioning. Furthermore, since our techniques facilitate variation of neurophysiological parameters, we gain insights into why certain neurophysiological parameters may be as observed in nature. While eyeblink conditioning is a somewhat primitive form of learning, we argue that the same methods apply to more cognitively oriented models as well. We implemented our extensions to the NEF in an open-source software library named "NengoBio" and hope that this work inspires similar attempts to bridge low-level biological detail and high-level function.
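
One way to see how a constraint like Dale's principle can enter the weight-solving step is the sign-constrained least-squares sketch below: presynaptic neurons are split into excitatory and inhibitory groups, and non-negative least squares finds weights whose signs respect each group. This is only a toy illustration under assumed tuning curves and targets; NengoBio's actual solvers are more elaborate.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n_exc, n_inh, n_eval = 60, 20, 200
x = np.linspace(-1, 1, n_eval)

def tuning(n):
    """Rectified-linear tuning curves with random gains, biases, and preferred directions."""
    enc = rng.choice([-1.0, 1.0], size=n)
    gain = rng.uniform(0.5, 2.0, size=n)
    bias = rng.uniform(-1.0, 1.0, size=n)
    return np.maximum(0.0, gain * enc * x[:, None] + bias)

A_exc, A_inh = tuning(n_exc), tuning(n_inh)
target = x          # input the postsynaptic population should receive, f(x) = x

# Dale's principle: excitatory weights >= 0, inhibitory weights <= 0.
# Solve a single non-negative least-squares problem over [A_exc, -A_inh].
design = np.hstack([A_exc, -A_inh])
w, residual = nnls(design, target)
w_exc, w_inh = w[:n_exc], -w[n_exc:]          # w_inh is now <= 0

approx = A_exc @ w_exc + A_inh @ w_inh
print("constrained decoding RMSE:", np.sqrt(np.mean((approx - target) ** 2)))
```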


Subjects
Blinking, Cerebellum, Cognition, Humans, Learning, Nerve Net
12.
Top Cogn Sci ; 13(3): 464-466, 2021 07.
Article in English | MEDLINE | ID: mdl-34189843

ABSTRACT

The International Conference on Cognitive Modeling brings together researchers from around the world whose main goal is to build computational systems that reflect the internal processes of the mind. In this issue, we present the four best representative papers on this work from our 18th meeting, ICCM 2020, which was also the first meeting to be held virtually. Two of these papers develop novel techniques for building larger and more complex models using Reinforcement Learning and Learning By Instruction, respectively. The other two show how cognitive models connect to neuroscience, drawing on details of the hippocampus and cerebellum to constrain and explain the cognitive processes involved in memory and conditioning.


Subjects
Cognition, Learning, Neurosciences, Humans
13.
Front Comput Neurosci ; 14: 573554, 2020.
Article in English | MEDLINE | ID: mdl-33262697

ABSTRACT

Our understanding of the neurofunctional mechanisms of speech production and their pathologies is still incomplete. In this paper, a comprehensive model of speech production based on the Neural Engineering Framework (NEF) is presented. This model is able to activate sensorimotor plans based on cognitive-functional processes (i.e., generation of the intention of an utterance, selection of words and syntactic frames, generation of the phonological form and motor plan; the feedforward mechanism). Since the generation of the different states of an utterance is tied to different levels of the speech production hierarchy, it is shown that different forms of speech errors, as well as speech disorders, can arise at, or be linked to, different levels and different modules of the speech production model. In addition, the influence of the inner feedback mechanisms on normal as well as on disordered speech is examined in terms of the model. The model uses a small number of core concepts provided by the NEF, and we show that these are sufficient to create this neurobiologically detailed model of the complex process of speech production in a manner that is, we believe, clear, efficient, and understandable.

14.
Top Cogn Sci ; 12(3): 957-959, 2020 07.
Article in English | MEDLINE | ID: mdl-32716107

ABSTRACT

Cognitive modeling involves the creation of computer simulations that emulate the internal processes of the mind. This set of papers comprises the five best representatives of the papers presented at the 17th International Conference on Cognitive Modeling, ICCM 2019. While they represent a diversity of techniques and tasks, they all share a striking similarity: they make strong statements about the importance of accounting for individual differences.


Assuntos
Ciência Cognitiva , Simulação por Computador , Processos Mentais , Modelos Teóricos , Congressos como Assunto , Humanos , Individualidade
15.
PLoS Comput Biol ; 16(6): e1007936, 2020 06.
Article in English | MEDLINE | ID: mdl-32516337

ABSTRACT

In this paper, we present a functional spiking-neuron model of human working memory (WM). This model combines neural firing for encoding of information with activity-silent maintenance. While it used to be widely assumed that information in WM is maintained through persistent recurrent activity, recent studies have shown that information can be maintained without persistent firing; instead, information can be stored in activity-silent states. A candidate mechanism underlying this type of storage is short-term synaptic plasticity (STSP), by which the strength of connections between neurons rapidly changes to encode new information. To demonstrate that STSP can lead to functional behavior, we integrated STSP by means of calcium-mediated synaptic facilitation in a large-scale spiking-neuron model and added a decision mechanism. The model was used to simulate a recent study that measured behavior and EEG activity of participants in three delayed-response tasks. In these tasks, one or two visual gratings had to be maintained in WM and compared to subsequent probes. The original study demonstrated that the contents of WM and their priority status could be decoded from neural activity elicited by a task-irrelevant stimulus displayed during the activity-silent maintenance period. In support of our model, we show that it can perform these tasks, and that both its behavior and its neural representations are in agreement with the human data. We conclude that information in WM can be effectively maintained in activity-silent states by means of calcium-mediated STSP.
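
The facilitation mechanism can be sketched with the standard calcium-mediated STSP equations (in the style of Mongillo et al., 2008), where a utilization variable u tracks presynaptic calcium and a resource variable x tracks available vesicles; the product u*x sets the efficacy of each spike. The parameter values below are illustrative, and the paper embeds this kind of mechanism in a large-scale spiking model rather than a standalone synapse.

```python
import numpy as np

def stsp_efficacy(spike_times, t_end=3.0, dt=0.001, U=0.2, tau_f=1.5, tau_d=0.2):
    """
    Calcium-mediated short-term synaptic plasticity in the style of
    Mongillo et al. (2008): u tracks presynaptic calcium (facilitation),
    x tracks available resources (depression); efficacy on a spike is u * x.
    """
    u, x = U, 1.0
    spike_steps = set(np.round(np.asarray(spike_times) / dt).astype(int))
    efficacies, u_trace = [], []
    for step in range(int(t_end / dt)):
        u += dt * (U - u) / tau_f          # calcium decays back to baseline
        x += dt * (1.0 - x) / tau_d        # resources recover
        if step in spike_steps:
            u += U * (1.0 - u)             # spike: calcium influx facilitates
            x -= u * x                     # spike: resources are consumed
            efficacies.append(u * x)
        u_trace.append(u)
    return efficacies, np.array(u_trace)

# A burst of spikes loads the synapse; u stays elevated for ~tau_f seconds
# afterwards, holding the memory "activity-silently" (no persistent firing).
burst = np.arange(0.1, 0.5, 0.02)
eff, u_trace = stsp_efficacy(burst)
print("efficacy at first/last spike of the burst:", round(eff[0], 3), round(eff[-1], 3))
print("residual facilitation 1 s after the burst:", round(u_trace[int(1.5 / 0.001)], 3))
```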


Subjects
Action Potentials, Calcium/metabolism, Biological Models, Neuronal Plasticity, Neurons/metabolism, Humans
16.
Front Neurorobot ; 13: 84, 2019.
Article in English | MEDLINE | ID: mdl-31680925

ABSTRACT

Predicting future behavior and positions of other traffic participants from observations is a key problem that needs to be solved by human drivers and automated vehicles alike to safely navigate their environment and to reach their desired goal. In this paper, we expand on previous work on an automotive environment model based on vector symbolic architectures (VSAs). We investigate a vector representation that encapsulates the spatial information of multiple objects based on a convolutive power encoding. Assuming that the future positions of vehicles are influenced not only by their own past positions and dynamics (e.g., velocity and acceleration) but also by the behavior of the other traffic participants in the vehicle's surroundings, our motivation is threefold: we hypothesize that our structured vector representation is able to capture these relations and the mutual influence between multiple traffic participants. Furthermore, the dimension of the encoding vectors remains fixed, independent of the number of other vehicles encoded in addition to the target vehicle. Finally, a VSA-based encoding allows us to combine symbol-like processing with the advantages of neural network learning. In this work, we use our vector representation as input to a long short-term memory (LSTM) network for sequence-to-sequence prediction of vehicle positions. In an extensive evaluation, we compare this approach to other LSTM-based benchmark systems using alternative data encoding schemes, to simple feed-forward neural networks, and to a simple linear prediction model for reference. We analyze the advantages and drawbacks of the presented methods and identify specific driving situations in which our approach performs best. We use characteristics specifying such situations as a foundation for an online-learning mixture-of-experts prototype, which chooses at run time between several available predictors, depending on the current driving situation, to achieve the best possible forecast.

17.
Front Comput Neurosci ; 12: 41, 2018.
Article in English | MEDLINE | ID: mdl-29928197

ABSTRACT

Background: Parkinson's disease affects many motor processes, including speech. Besides drug treatment, deep brain stimulation (DBS) of the subthalamic nucleus (STN) and globus pallidus internus (GPi) has emerged as an effective therapy. Goal: We present a neural model that simulates a syllable repetition task and evaluate its performance when varying the level of dopamine in the striatum and the level of activity reduction in the STN or GPi. Method: The Neural Engineering Framework (NEF) is used to build a model of syllable sequencing through a cortico-basal ganglia-thalamus-cortex circuit. The model is able to simulate a failing substantia nigra pars compacta (SNc), as occurs in Parkinson's patients. We simulate syllable sequencing parameterized by (i) the tonic dopamine level in the striatum and (ii) the average neural activity in the STN or GPi. Results: With decreased dopamine levels, the model produces syllable sequencing errors in the form of skipping and swapping syllables, repeating the same syllable, breaking off and restarting in the middle of a sequence, and cessation ("freezing") of sequences. We also find that reducing (inhibiting) activity in either the STN or the GPi reduces the occurrence of syllable sequencing errors. Conclusion: The model predicts that inhibiting activity in the STN or GPi can reduce syllable sequencing errors in Parkinson's patients. Since DBS also reduces syllable sequencing errors in these patients, we suggest that STN or GPi inhibition is one mechanism through which DBS achieves this effect.

18.
Front Psychol ; 8: 99, 2017.
Article in English | MEDLINE | ID: mdl-28210234

ABSTRACT

Generating associations is important for cognitive tasks including language acquisition and creative problem solving. It remains an open question how the brain represents and processes associations. The Remote Associates Test (RAT), originally used in creativity research, is a task that depends heavily on generating associations while searching for the solutions to individual RAT problems. In this work we present a model that solves the test. Compared to earlier modeling work on the RAT, our hybrid (i.e., non-developmental) model is implemented in a spiking neural network by means of the Neural Engineering Framework (NEF), demonstrating that spiking neurons can be organized to store the employed representations and to manipulate them. In particular, the model shows that distributed representations can support sophisticated linguistic processing. The model was validated against human behavioral data, including the typical length of response sequences and similarity relationships in produced responses. These data suggest two cognitive processes involved in solving the RAT: one process generates potential responses and a second process filters the responses.
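
The two-process account can be illustrated with a toy generate-and-filter sketch: one process samples candidate associates of the cue words in proportion to association strength, and a second process keeps the candidate most strongly associated with all three cues. The miniature association table is invented for the example; the paper derives its representations from free-association norms and implements both processes with spiking neurons.

```python
import numpy as np

assoc = {  # assoc[cue][word]: association strength from cue to word (made-up values)
    "cottage": {"cheese": 0.9, "house": 0.7, "industry": 0.2},
    "swiss":   {"cheese": 0.8, "army": 0.6, "chocolate": 0.5},
    "cake":    {"chocolate": 0.7, "birthday": 0.6, "cheese": 0.4},
}
cues = list(assoc)
rng = np.random.default_rng(5)

def generate(n_samples=20):
    """Process 1: sample candidate responses in proportion to association strength."""
    words, weights = zip(*[(w, s) for cue in cues for w, s in assoc[cue].items()])
    p = np.array(weights) / np.sum(weights)
    return rng.choice(words, size=n_samples, p=p)

def joint_strength(word):
    """Process 2: score a candidate by its weakest link to the three cues."""
    return min(assoc[cue].get(word, 0.0) for cue in cues)

candidates = set(generate())
best = max(candidates, key=joint_strength)
print("candidates considered:", sorted(candidates))
print("filtered answer:", best, "(score %.2f)" % joint_strength(best))
```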

19.
Top Cogn Sci ; 9(1): 117-134, 2017 01.
Article in English | MEDLINE | ID: mdl-28001002

ABSTRACT

We use a spiking neural network model of working memory (WM) capable of performing the spatial delayed response task (DRT) to investigate two drugs that affect WM: guanfacine (GFC) and phenylephrine (PHE). In this model, the loss of information over time results from changes in the spiking neural activity through recurrent connections. We reproduce the standard forgetting curve and then show that this curve changes in the presence of GFC and PHE, whose application is simulated by manipulating functional, neural, and biophysical properties of the model. In particular, applying GFC causes increased activity in neurons that are sensitive to the information currently being remembered, while applying PHE leads to decreased activity in these same neurons. Interestingly, these differential effects emerge from network-level interactions because GFC and PHE affect all neurons equally. We compare our model to both electrophysiological data from neurons in monkey dorsolateral prefrontal cortex and to behavioral evidence from monkeys performing the DRT.


Subjects
Guanfacine/pharmacology, Neurons/drug effects, Phenylephrine/pharmacology, Prefrontal Cortex/drug effects, Animals, Haplorhini, Humans, Short-Term Memory/drug effects, Neurological Models, Neurons/physiology, Prefrontal Cortex/physiology
20.
Proc Biol Sci ; 283(1843)2016 11 30.
Article in English | MEDLINE | ID: mdl-27903878

ABSTRACT

We present a spiking neuron model of the motor cortices and cerebellum of the motor control system. The model consists of anatomically organized spiking neurons encompassing premotor, primary motor, and cerebellar cortices. The model proposes novel neural computations within these areas to control a nonlinear three-link arm model that can adapt to unknown changes in arm dynamics and kinematic structure. We demonstrate the mathematical stability of both forms of adaptation, suggesting that this is a robust approach for common biological problems of changing body size (e.g. during growth), and unexpected dynamic perturbations (e.g. when moving through different media, such as water or mud). To demonstrate the plausibility of the proposed neural mechanisms, we show that the model accounts for data across 19 studies of the motor control system. These data include a mix of behavioural and neural spiking activity, across subjects performing adaptive and static tasks. Given this proposed characterization of the biological processes involved in motor control of the arm, we provide several experimentally testable predictions that distinguish our model from previous work.
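
The adaptive component can be illustrated with a classic nonlinear-adaptive-control sketch: a PD controller drives a one-link arm while a learned combination of basis functions of the joint state compensates online for unmodeled dynamics (here, an unknown payload), with the PD command itself serving as the training signal. The plant, basis, learning rule, and parameter values are illustrative stand-ins for the paper's three-link arm and its spiking implementation.

```python
import numpy as np

dt, g, length = 0.001, 9.81, 1.0
m_known, m_payload = 1.0, 0.5          # the controller only knows m_known
kp, kd, lr = 50.0, 10.0, 0.5

centers = np.linspace(-np.pi, np.pi, 20)
def basis(q, dq):
    """Gaussian basis over the joint angle, plus a velocity-scaled copy."""
    phi_q = np.exp(-(q - centers) ** 2 / 0.5)
    return np.concatenate([phi_q, phi_q * np.tanh(dq)])

w = np.zeros(40)                        # adaptive weights, learned online
q, dq, q_target = 0.0, 0.0, 1.0
for step in range(8000):
    err, derr = q_target - q, -dq
    u = kp * err + kd * derr + w @ basis(q, dq)    # PD command + learned compensation
    m_true = m_known + m_payload                   # plant includes an unmodeled payload
    ddq = (u - m_true * g * length * np.sin(q)) / (m_true * length ** 2)
    dq += dt * ddq
    q += dt * dq
    # adaptation: the PD command acts as the error signal that trains the
    # compensatory term (cf. error-driven learning attributed to the cerebellum)
    w += lr * dt * basis(q, dq) * (kp * err + kd * derr)
    if step % 2000 == 0:
        print(f"t={step * dt:.1f}s  angle error={err:+.3f} rad")
```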


Subjects
Arm/physiology, Cerebellum/physiology, Neurological Models, Motor Cortex/physiology, Humans, Neurons/physiology, Nonlinear Dynamics