Results 1-20 of 55
1.
Eur J Neurosci; 59(11): 3093-3116, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38616566

ABSTRACT

The amygdala (AMY) is widely implicated in fear learning and fear behaviour, but it remains unclear how the many biological components present within AMY interact to achieve these abilities. Building on previous work, we hypothesize that individual AMY nuclei represent different quantities and that fear conditioning arises from error-driven learning on the synapses between AMY nuclei. We present a computational model of AMY that (a) recreates the divisions and connections between AMY nuclei and their constituent pyramidal and inhibitory neurons; (b) accommodates scalable high-dimensional representations of external stimuli; (c) learns to associate complex stimuli with the presence (or absence) of an aversive stimulus; (d) preserves feature information when mapping inputs to salience estimates, such that these estimates generalize to similar stimuli; and (e) induces a diverse profile of neural responses within each nucleus. Our model predicts (1) defensive responses and neural activities in several experimental conditions, (2) the consequence of artificially ablating particular nuclei and (3) the tendency to generalize defensive responses to novel stimuli. We test these predictions by comparing model outputs to neural and behavioural data from animals and humans. Despite the relative simplicity of our model, we find significant overlap between simulated and empirical data, which supports our claim that the model captures many of the neural mechanisms that support fear conditioning. We conclude by comparing our model to other computational models and by characterizing the theoretical relationship between pattern separation and fear generalization in healthy versus anxious individuals.
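As a rough sketch of the error-driven synaptic learning the abstract describes, the following pairs a conditioned stimulus (CS) with an aversive stimulus (US) using Nengo's PES rule, the standard error-driven rule in this modeling framework. The signals, population sizes, and wiring here are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch of error-driven associative learning with Nengo's PES rule.
# The CS/US signals and ensemble sizes are illustrative, not the paper's.
import nengo

with nengo.Network() as model:
    cs = nengo.Node(lambda t: 1.0 if t % 2 < 1 else 0.0)  # conditioned stimulus
    us = nengo.Node(lambda t: 1.0 if t % 2 < 1 else 0.0)  # paired aversive stimulus
    stim = nengo.Ensemble(100, dimensions=1)              # "sensory" population
    salience = nengo.Ensemble(100, dimensions=1)          # "salience" population
    nengo.Connection(cs, stim)

    # Start with a naive (zero) mapping; PES adapts the decoded connection.
    conn = nengo.Connection(stim, salience, function=lambda x: [0],
                            learning_rule_type=nengo.PES(learning_rate=1e-4))

    # Error = prediction - US; PES adjusts weights to drive it toward zero.
    error = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(salience, error)
    nengo.Connection(us, error, transform=-1)
    nengo.Connection(error, conn.learning_rule)

with nengo.Simulator(model) as sim:
    sim.run(10.0)  # salience comes to predict the US whenever the CS is on
```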


Subjects
Amygdala; Extinction, Psychological; Fear; Generalization, Psychological; Models, Neurological; Fear/physiology; Amygdala/physiology; Extinction, Psychological/physiology; Humans; Animals; Generalization, Psychological/physiology; Conditioning, Classical/physiology; Neurons/physiology; Action Potentials/physiology
2.
Front Neurosci; 17: 1190515, 2023.
Article in English | MEDLINE | ID: mdl-37476829

ABSTRACT

To navigate in new environments, an animal must keep track of its position while simultaneously creating and updating an internal map of features in the environment, a problem formulated as simultaneous localization and mapping (SLAM) in the field of robotics. This requires integrating information from different domains, including self-motion cues and sensory and semantic information. Several specialized neuron classes involved in solving SLAM have been identified in the mammalian brain. While biology has inspired a whole class of SLAM algorithms, the use of semantic information has not been explored in such work. We present a novel, biologically plausible SLAM model called SSP-SLAM, a spiking neural network designed using tools for large-scale cognitive modeling. Our model uses a vector representation of continuous spatial maps, which can be encoded via spiking neural activity and bound with other features (continuous and discrete) to create compressed structures containing semantic information from multiple domains (e.g., spatial, temporal, visual, conceptual). We demonstrate that the dynamics of these representations can be implemented with a hybrid oscillatory-interference and continuous-attractor network of head direction cells. The estimated self-position from this network is used to learn an associative memory between semantically encoded landmarks and their positions, i.e., an environment map, which is used for loop closure. Our experiments demonstrate that environment maps can be learned accurately and that their use greatly improves self-position estimation. Furthermore, grid cells, place cells, and object vector cells emerge in this model. We also run our path integrator network on the NengoLoihi neuromorphic emulator to demonstrate the feasibility of a fully neuromorphic implementation for energy-efficient SLAM.
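The "vector representation of continuous spatial maps" here is, on our reading, the spatial semantic pointer (SSP) scheme from the related literature: a continuous coordinate is encoded by fractionally exponentiating a fixed unitary base vector in the Fourier domain. A minimal NumPy sketch of that encoding (an assumption about the underlying scheme, not the paper's implementation):

```python
# Hedged sketch of SSP encoding via fractional circular convolution.
import numpy as np

rng = np.random.default_rng(0)

def make_unitary(d):
    # Random real vector whose Fourier coefficients all lie on the unit circle.
    v = rng.standard_normal(d)
    V = np.fft.fft(v)
    return np.fft.ifft(V / np.abs(V)).real

def ssp_encode(base, x):
    # "Raise the base vector to the power x" in the Fourier domain.
    return np.fft.ifft(np.fft.fft(base) ** x).real

X = make_unitary(512)
# Similarity falls off smoothly with distance, a place-cell-like profile.
print(np.dot(ssp_encode(X, 2.0), ssp_encode(X, 2.0)))  # ~1.0
print(np.dot(ssp_encode(X, 2.0), ssp_encode(X, 5.0)))  # ~0.0
```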

3.
Brain Sci; 13(2), 2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36831788

ABSTRACT

The Neural Engineering Framework (NEF; Eliasmith & Anderson, 2003) is a long-standing method for implementing high-level algorithms constrained by low-level neurobiological details. In recent years, this method has been expanded to incorporate more biological details and applied to new tasks. This paper brings together these ongoing research strands, presenting them in a common framework. We expand on the NEF's core principles of (a) specifying the desired tuning curves of neurons in different parts of the model, (b) defining the computational relationships between the values represented by the neurons in different parts of the model, and (c) finding the synaptic connection weights that will cause those computations and tuning curves. In particular, we show how to extend this to include complex spatiotemporal tuning curves, and then apply this approach to produce functional computational models of grid cells, time cells, path integration, sparse representations, probabilistic representations, and symbolic representations in the brain.
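A minimal Nengo sketch of these three principles, with illustrative choices of signal, function, and population sizes: two ensembles represent values, and the connection between them is specified as a function for which Nengo solves the synaptic weights.

```python
# Minimal NEF sketch in Nengo: represent x, compute x**2 across a connection.
import nengo
import numpy as np

model = nengo.Network(label="NEF square")
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)     # represents x (principle a)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)     # represents x**2
    nengo.Connection(stim, a)
    # Principles (b) and (c): declare the computation as a function; Nengo
    # finds connection weights that implement it over the tuning curves.
    nengo.Connection(a, b, function=lambda x: x ** 2)
    probe = nengo.Probe(b, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)  # probe data in sim.data[probe] tracks sin(2*pi*t)**2
```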

4.
PLoS Comput Biol; 18(9): e1010461, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36074765

ABSTRACT

Improving biological plausibility and functional capacity are two important goals for brain models that connect low-level neural details to high-level behavioral phenomena. We develop a method called "oracle-supervised Neural Engineering Framework" (osNEF) to train biologically detailed spiking neural networks that realize a variety of cognitively relevant dynamical systems. Specifically, we train networks to perform computations that are commonly found in cognitive systems (communication, multiplication, harmonic oscillation, and gated working memory) using four distinct neuron models (leaky integrate-and-fire neurons, Izhikevich neurons, 4-dimensional nonlinear point neurons, and 4-compartment, 6-ion-channel layer-V pyramidal cell reconstructions) connected with various synaptic models (current-based synapses, conductance-based synapses, and voltage-gated synapses). We show that osNEF networks exhibit the target dynamics by accounting for nonlinearities present within the neuron models: performance is comparable across all four dynamical systems and all four neuron models, with variance proportional to task and neuron-model complexity. We also apply osNEF to build a model of working memory that performs a delayed response task using a combination of pyramidal cells and inhibitory interneurons connected with NMDA and GABA synapses. The baseline performance and forgetting rate of the model are consistent with animal data from delayed match-to-sample tasks (DMTST): we observe a baseline performance of 95% and exponential forgetting with time constant τ = 8.5 s, while a recent meta-analysis of DMTST performance across species observed baseline performances of 58-99% and exponential forgetting with time constants of τ = 2.4-71 s. These results demonstrate that osNEF can train functional brain models using biologically detailed components, and they open new avenues for investigating the relationship between biophysical mechanisms and functional capabilities.
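For concreteness, the reported fit can be visualized as exponential decay toward a chance level. Only the 95% baseline and τ = 8.5 s come from the abstract; the decay-toward-chance form and the 50% chance level are assumptions for illustration.

```python
# Illustrative forgetting curve: baseline and tau from the abstract; the
# functional form and chance level are our assumptions, not the paper's.
import numpy as np

def recall_accuracy(t, baseline=0.95, tau=8.5, chance=0.5):
    return chance + (baseline - chance) * np.exp(-t / tau)

print(recall_accuracy(np.array([0.0, 2.0, 8.5])))  # approx. [0.95, 0.86, 0.67]
```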


Subjects
Models, Neurological; Neurons; Action Potentials/physiology; Animals; Neurons/physiology; Pyramidal Cells/physiology; Synapses/physiology
5.
Neural Comput; 33(8): 2033-2067, 2021 Jul 26.
Article in English | MEDLINE | ID: mdl-34310679

ABSTRACT

While neural networks are highly effective at learning task-relevant representations from data, they typically do not learn representations with the kind of symbolic structure that is hypothesized to support high-level cognitive processes, nor do they naturally model such structures within problem domains that are continuous in space and time. To fill these gaps, this work exploits a method for defining vector representations that bind discrete (symbol-like) entities to points in continuous topological spaces in order to simulate and predict the behavior of a range of dynamical systems. These vector representations are spatial semantic pointers (SSPs), and we demonstrate that they can (1) be used to model dynamical systems involving multiple objects represented in a symbol-like manner and (2) be integrated with deep neural networks to predict the future of physical trajectories. These results help unify what have traditionally appeared to be disparate approaches in machine learning.
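A small NumPy sketch of the core operation: binding a discrete, symbol-like vector to a continuous position vector with circular convolution and recovering the position with the approximate inverse. The dimensionality and the random stand-in vectors are illustrative assumptions.

```python
# Hedged sketch of binding a symbol to a position, SSP-style.
import numpy as np

rng = np.random.default_rng(1)
d = 512

def cconv(a, b):
    # Circular convolution (the binding operation) via the FFT.
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inverse(a):
    # Involution: the standard approximate inverse for circular convolution.
    return np.concatenate(([a[0]], a[:0:-1]))

ball = rng.standard_normal(d) / np.sqrt(d)  # symbol-like unit vector
pos = rng.standard_normal(d) / np.sqrt(d)   # stand-in for an SSP position
memory = cconv(ball, pos)                   # "BALL at POS"
recovered = cconv(memory, inverse(ball))    # unbind: approximately pos
print(np.dot(recovered, pos))               # ~1, with noise shrinking as d grows
```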

6.
Top Cogn Sci; 13(3): 515-533, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34146453

ABSTRACT

Neurophysiology and neuroanatomy constrain the set of possible computations that can be performed in a brain circuit. While detailed data on brain microcircuits are sometimes available, cognitive modelers are seldom in a position to take these constraints into account. One reason for this is the intrinsic complexity of accounting for biological mechanisms when describing cognitive function. In this paper, we present multiple extensions to the neural engineering framework (NEF), which simplify the integration of low-level constraints such as Dale's principle and spatially constrained connectivity into high-level, functional models. We focus on a model of eyeblink conditioning in the cerebellum and, in particular, on systematically constructing temporal representations in the recurrent granule-Golgi microcircuit. We analyze how biological constraints impact these representations and demonstrate that our overall model is capable of reproducing key properties of eyeblink conditioning. Furthermore, since our techniques facilitate variation of neurophysiological parameters, we gain insights into why certain neurophysiological parameters may be as observed in nature. While eyeblink conditioning is a somewhat primitive form of learning, we argue that the same methods apply to more cognitive models as well. We implemented our extensions to the NEF in an open-source software library named "NengoBio" and hope that this work inspires similar attempts to bridge low-level biological detail and high-level function.
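As a toy illustration of one such constraint: Dale's principle requires all outgoing weights of an excitatory neuron to share one sign, and a simple way to impose non-negativity when solving for weights is non-negative least squares. This is an illustration of the constraint only; NengoBio's actual solvers may work differently.

```python
# Toy Dale's-principle weight solve: non-negative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((200, 30)))  # firing rates: 200 samples x 30 neurons
y = np.linspace(0.0, 1.0, 200)              # target postsynaptic current

w, residual = nnls(A, y)                    # solves min ||A w - y|| s.t. w >= 0
print(w.min() >= 0, residual)               # all weights non-negative
```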


Subjects
Blinking; Cerebellum; Cognition; Humans; Learning; Nerve Net
7.
Neural Comput; 33(1): 96-128, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33080158

ABSTRACT

Nonlinear interactions in the dendritic tree play a key role in neural computation. Nevertheless, modeling frameworks aimed at the construction of large-scale, functional spiking neural networks, such as the Neural Engineering Framework, tend to assume a linear superposition of postsynaptic currents. In this letter, we present a series of extensions to the Neural Engineering Framework that facilitate the construction of networks incorporating Dale's principle and nonlinear conductance-based synapses. We apply these extensions to a two-compartment LIF neuron that can be seen as a simple model of passive dendritic computation. We show that it is possible to incorporate neuron models with input-dependent nonlinearities into the Neural Engineering Framework without compromising high-level function and that nonlinear postsynaptic currents can be systematically exploited to compute a wide variety of multivariate, band-limited functions, including the Euclidean norm, controlled shunting, and nonnegative multiplication. By avoiding an additional source of spike noise, the function approximation accuracy of a single layer of two-compartment LIF neurons is on a par with or even surpasses that of two-layer spiking neural networks up to a certain target function bandwidth.
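A generic Euler-integration sketch of the passive two-compartment idea with conductance-based synaptic input, in textbook form with arbitrary units; the paper's model, parameters, and spiking mechanism will differ.

```python
# Generic two-compartment sketch: synaptic current depends on the dendritic
# voltage through the driving forces, which is the input-dependent
# nonlinearity discussed above. Units and parameters are arbitrary.
import numpy as np

dt = 0.01                        # time step (arbitrary units)
g_L, g_c = 0.05, 0.10            # leak and soma-dendrite coupling conductances
E_L, E_E, E_I = -65.0, 0.0, -80.0
g_E, g_I = 0.02, 0.005           # dendritic synaptic conductances (held fixed)

v_s = v_d = E_L
for _ in range(20000):
    dv_d = (g_L * (E_L - v_d) + g_c * (v_s - v_d)
            + g_E * (E_E - v_d) + g_I * (E_I - v_d))
    dv_s = g_L * (E_L - v_s) + g_c * (v_d - v_s)
    v_d += dt * dv_d
    v_s += dt * dv_s

print(v_s, v_d)  # steady state between E_L and E_E, set by the conductances
```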


Subjects
Action Potentials; Dendrites; Models, Neurological; Neural Networks, Computer; Nonlinear Dynamics; Action Potentials/physiology; Dendrites/physiology; Humans
8.
Psychol Rev; 128(1): 104-124, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32816508

ABSTRACT

We present the context-unified encoding (CUE) model, a large-scale spiking neural network model of human memory. It integrates activity-based short-term memory (STM) with weight-based long-term memory. The implementation with spiking neurons ensures biological plausibility and allows for predictions at the neural level. At the same time, the model produces behavioral outputs that have been matched to human data from serial and free recall experiments. In particular, well-known results such as primacy, recency, transposition error gradients, and forward recall bias have been reproduced with good quantitative matches. Additionally, the model accounts for the Hebb repetition effect. The CUE model combines and extends the ordinal serial encoding model, a spiking neuron model of STM, and the temporal context model, a mathematical memory model matching free recall data. To implement the modification of the required association matrices, a novel learning rule, the association matrix learning rule, is derived that allows for one-shot learning without catastrophic forgetting. Its biological plausibility is discussed, and it is shown to account for changes in neural firing observed in human recordings from an association learning experiment.
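The flavor of one-shot, interference-limited association can be sketched with a delta-style outer-product update on an association matrix. This is an illustration of the general idea; the paper's association matrix learning rule is derived for spiking networks and differs in detail.

```python
# One-shot association via a delta-rule outer-product update (illustrative).
import numpy as np

rng = np.random.default_rng(2)
d = 64
M = np.zeros((d, d))

def learn(M, cue, target, eta=1.0):
    # Only the unexplained part of the target is written in, which limits
    # interference with earlier associations (near-orthogonal cues barely
    # disturb one another, avoiding catastrophic forgetting).
    error = target - M @ cue
    return M + eta * np.outer(error, cue) / np.dot(cue, cue)

a, b = rng.standard_normal(d), rng.standard_normal(d)
M = learn(M, a, b)
print(np.allclose(M @ a, b))  # True: recalled after a single presentation
```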


Subjects
Action Potentials; Memory, Long-Term; Memory, Short-Term; Models, Neurological; Neurons; Humans; Mental Recall; Nerve Net
9.
Front Neurorobot; 14: 568359, 2020.
Article in English | MEDLINE | ID: mdl-33162886

ABSTRACT

In this paper, we demonstrate how the Nengo neural modeling and simulation libraries enable users to quickly develop robotic perception and action neural networks for simulation on neuromorphic hardware using tools they are already familiar with, such as Keras and Python. We identify four primary challenges in building robust, embedded neurorobotic systems: (1) developing infrastructure for interfacing with the environment and sensors; (2) processing task-specific sensory signals; (3) generating robust, explainable control signals; and (4) compiling neural networks to run on target hardware. Nengo helps to address these challenges by (1) providing the NengoInterfaces library, which defines a simple but powerful API for users to interact with simulations and hardware; (2) providing the NengoDL library, which lets users use the Keras and TensorFlow API to develop Nengo models; (3) implementing the Neural Engineering Framework, which provides white-box methods for implementing known functions and circuits; and (4) providing multiple backend libraries, such as NengoLoihi, that enable users to compile the same model to different hardware. We present two examples of using Nengo to develop neural networks that run on CPUs and GPUs as well as Intel's neuromorphic chip, Loihi, to demonstrate two variations on this workflow. The first example is an implementation of an end-to-end spiking neural network in Nengo that controls a rover simulated in MuJoCo. The network integrates a deep convolutional network, which processes visual input from cameras mounted on the rover to track a target, with a control system implementing steering and drive functions in connection weights to guide the rover to the target. The second example uses Nengo as a smaller component in a system that has addressed some but not all of these challenges; specifically, it is used to augment a force-based operational space controller with neural adaptive control to improve performance during a reaching task using a real-world Kinova Jaco2 robotic arm. The code and implementation details are provided, with the intent of enabling other researchers to build and run their own neurorobotic systems.
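The backend-swap portion of this workflow looks roughly as follows. This is a minimal sketch: the model contents and run times are illustrative, and the NengoLoihi usage is shown commented out since it requires that package (and, for hardware rather than emulation, a Loihi board).

```python
# Build one Nengo model, then choose a simulator backend for it.
import nengo

with nengo.Network() as net:
    stim = nengo.Node(0.5)
    ens = nengo.Ensemble(200, dimensions=1)
    nengo.Connection(stim, ens)
    p = nengo.Probe(ens, synapse=0.01)

with nengo.Simulator(net) as sim:  # reference CPU backend
    sim.run(0.5)

# The same network can target neuromorphic hardware or its emulator, e.g.:
# import nengo_loihi
# with nengo_loihi.Simulator(net) as sim:
#     sim.run(0.5)
```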

10.
Neural Comput; 31(5): 849-869, 2019 May.
Article in English | MEDLINE | ID: mdl-30883282

ABSTRACT

We present a new binding operation, vector-derived transformation binding (VTB), for use in vector symbolic architectures (VSAs). The performance of VTB is compared to circular convolution, used in holographic reduced representations (HRRs), in terms of list and stack encoding capacity. A special focus is given to the possibility of a neural implementation by means of the Neural Engineering Framework (NEF). While the scaling of required neural resources is slightly worse for VTB, it is found to be on par with circular convolution for list encoding and better for encoding stacks. Furthermore, VTB influences the vector length less, which also benefits a neural implementation. Consequently, we argue that VTB is an improvement over HRRs for neurally implemented VSAs.
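For reference, the two binding operations can be sketched in NumPy. The HRR binding is standard circular convolution; the VTB sketch follows our reading of the definition, in which one operand is reshaped into a √d x √d transformation, scaled by d^(1/4), and applied block-wise to the other operand.

```python
# HRR binding vs. a hedged sketch of VTB binding.
import numpy as np

def hrr_bind(x, y):
    # HRR binding: circular convolution via the FFT.
    return np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(y), n=len(x))

def vtb_bind(x, y):
    # VTB binding (our reading): a transformation derived from x acts on
    # each length-m block of y (block-diagonal structure).
    d = len(x)
    m = int(np.sqrt(d))
    assert m * m == d, "VTB needs a square-number dimensionality"
    Vx = d ** 0.25 * x.reshape(m, m)
    return (y.reshape(m, m) @ Vx.T).ravel()

rng = np.random.default_rng(0)
a = rng.standard_normal(256) / 16.0  # approximately unit-length vectors
b = rng.standard_normal(256) / 16.0
print(np.linalg.norm(hrr_bind(a, b)), np.linalg.norm(vtb_bind(a, b)))  # both ~1
```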


Subjects
Neural Networks, Computer
11.
Front Comput Neurosci; 12: 41, 2018.
Article in English | MEDLINE | ID: mdl-29928197

ABSTRACT

Background: Parkinson's disease affects many motor processes, including speech. Besides drug treatment, deep brain stimulation (DBS) of the subthalamic nucleus (STN) and globus pallidus internus (GPi) has emerged as an effective therapy. Goal: We present a neural model that simulates a syllable repetition task and evaluate its performance when varying the level of dopamine in the striatum and the level of activity reduction in the STN or GPi. Method: The Neural Engineering Framework (NEF) is used to build a model of syllable sequencing through a cortico-basal ganglia-thalamus-cortex circuit. The model is able to simulate a failing substantia nigra pars compacta (SNc), as occurs in Parkinson's patients. We simulate syllable sequencing parameterized by (i) the tonic dopamine level in the striatum and (ii) the average neural activity in the STN or GPi. Results: With decreased dopamine levels, the model produces syllable sequencing errors in the form of skipping and swapping syllables, repeating the same syllable, breaking off and restarting in the middle of a sequence, and cessation ("freezing") of sequences. We also find that reducing (inhibiting) activity in either the STN or the GPi reduces the occurrence of syllable sequencing errors. Conclusion: The model predicts that inhibiting activity in the STN or GPi can reduce syllable sequencing errors in Parkinson's patients. Since DBS also reduces such errors, we suggest that STN or GPi inhibition is one mechanism through which DBS acts.

12.
Neural Comput; 30(3): 569-609, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29220306

ABSTRACT

Researchers building spiking neural networks face the challenge of improving the biological plausibility of their model networks while maintaining the ability to quantitatively characterize network behavior. In this work, we extend the theory behind the neural engineering framework (NEF), a method of building spiking dynamical networks, to permit the use of a broad class of synapse models while maintaining prescribed dynamics up to a given order. This theory improves our understanding of how low-level synaptic properties alter the accuracy of high-level computations in spiking dynamical networks. For completeness, we provide characterizations for both continuous-time (i.e., analog) and discrete-time (i.e., digital) simulations. We demonstrate the utility of these extensions by mapping an optimal delay line onto various spiking dynamical networks using higher-order models of the synapse. We show that these networks nonlinearly encode rolling windows of input history, using a scale invariant representation, with accuracy depending on the frequency content of the input signal. Finally, we reveal that these methods provide a novel explanation of time cell responses during a delay task, which have been observed throughout hippocampus, striatum, and cortex.
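The simplest instance of the continuous- versus discrete-time characterization is the standard first-order lowpass synapse h(t) = e^(-t/τ)/τ and its zero-order-hold discretization. The following is a generic sketch of that base case, not of the paper's higher-order extensions.

```python
# First-order lowpass synapse and its zero-order-hold (ZOH) discretization.
import numpy as np

tau, dt = 0.1, 0.001
a = np.exp(-dt / tau)  # discrete pole from the ZOH mapping

def filter_signal(x):
    y, out = 0.0, []
    for sample in x:
        y = a * y + (1.0 - a) * sample  # discrete-time lowpass update
        out.append(y)
    return np.array(out)

step = np.ones(1000)
print(filter_signal(step)[[0, 99, 999]])  # rises toward 1 with time constant tau
```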


Subjects
Action Potentials; Models, Neurological; Neurons/physiology; Synapses/physiology; Action Potentials/physiology; Animals; Biomimetics; Brain/physiology; Computer Simulation; Neural Networks, Computer; Nonlinear Dynamics; Time Factors
13.
PLoS One; 12(7): e0180234, 2017.
Article in English | MEDLINE | ID: mdl-28683111

ABSTRACT

We develop a novel, biologically detailed neural model of reinforcement learning (RL) processes in the brain. This model incorporates a broad range of biological features that pose challenges to neural RL, such as temporally extended action sequences, continuous environments involving unknown time delays, and noisy/imprecise computations. Most significantly, we expand the model into the realm of hierarchical reinforcement learning (HRL), which divides the RL process into a hierarchy of actions at different levels of abstraction. Here we implement all the major components of HRL in a neural model that captures a variety of known anatomical and physiological properties of the brain. We demonstrate the performance of the model in a range of different environments, in order to emphasize the aim of understanding the brain's general reinforcement learning ability. These results show that the model compares well to previous modelling work and demonstrates improved performance as a result of its hierarchical ability. We also show that the model's behaviour is consistent with available data on human hierarchical RL, and generate several novel predictions.


Subjects
Models, Neurological; Nerve Net/physiology; Reinforcement, Psychology; Brain/physiology; Computer Simulation; Humans; Reward
14.
Front Neuroinform; 11: 33, 2017.
Article in English | MEDLINE | ID: mdl-28522970

ABSTRACT

One critical factor limiting the size of neural cognitive models is the time required to simulate them. To reduce simulation time, specialized hardware is often used. However, such hardware can be costly, not readily available, or require specialized software implementations that are difficult to maintain. Here, we present an algorithm that optimizes the computational graph of the Nengo neural network simulator, allowing simulations to run more quickly on commodity hardware. This is achieved by merging identical operations into single operations and restructuring the accessed data into larger blocks of sequential memory. In this way, a speed-up of up to 6.8× is obtained. While this does not beat the specialized OpenCL implementation of Nengo, the optimization is available on any platform that can run Python; in contrast, the OpenCL implementation supports fewer platforms and can be difficult to install.
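A toy illustration of why merging helps: many small operations on scattered data are replaced by one operation on contiguous arrays. Nengo's actual optimizer works on its internal operator graph; this sketch only shows the effect being exploited.

```python
# Merging many small matrix-vector products into one batched product.
import numpy as np

rng = np.random.default_rng(0)
mats = [rng.standard_normal((16, 16)) for _ in range(1000)]
vecs = [rng.standard_normal(16) for _ in range(1000)]

# Unmerged: 1000 separate products, each paying Python/dispatch overhead.
out_separate = np.concatenate([m @ v for m, v in zip(mats, vecs)])

# Merged: the same arithmetic as one batched product over contiguous memory.
out_merged = (np.stack(mats) @ np.stack(vecs)[:, :, None])[:, :, 0].ravel()

print(np.allclose(out_separate, out_merged))  # True: same results, less overhead
```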

15.
Front Psychol; 8: 99, 2017.
Article in English | MEDLINE | ID: mdl-28210234

ABSTRACT

Generating associations is important for cognitive tasks, including language acquisition and creative problem solving. It remains an open question how the brain represents and processes associations. The Remote Associates Test (RAT), originally used in creativity research, is a task that depends heavily on generating associations in the search for solutions to individual RAT problems. In this work we present a model that solves the test. Compared to earlier modeling work on the RAT, our hybrid (i.e., non-developmental) model is implemented in a spiking neural network by means of the Neural Engineering Framework (NEF), demonstrating that spiking neurons can be organized to store the employed representations and to manipulate them. In particular, the model shows that distributed representations can support sophisticated linguistic processing. The model was validated on human behavioral data, including the typical length of response sequences and similarity relationships in produced responses. These data suggest two cognitive processes involved in solving the RAT: one process generates potential responses, and a second process filters the responses.

16.
Front Psychol; 8: 2335, 2017.
Article in English | MEDLINE | ID: mdl-29387031

ABSTRACT

Neural networks have long been used to study linguistic phenomena spanning the domains of phonology, morphology, syntax, and semantics. Of these domains, semantics is somewhat unique in that there is little clarity concerning what a model needs to be able to do in order to provide an account of how the meanings of complex linguistic expressions, such as sentences, are understood. We argue that one thing such models need to be able to do is generate predictions about which further sentences are likely to follow from a given sentence; these define the sentence's "inferential role." We then show that it is possible to train a tree-structured neural network model to generate very simple examples of such inferential roles using the recently released Stanford Natural Language Inference (SNLI) dataset. On an empirical front, we evaluate the performance of this model by reporting entailment prediction accuracies on a set of test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both human-generated and model-generated entailments for a random selection of sentences in this test set. On a more theoretical front, we argue in favor of a revision to some common assumptions about semantics: understanding a linguistic expression is not only a matter of mapping it onto a representation that somehow constitutes its meaning; rather, understanding a linguistic expression is mainly a matter of being able to draw certain inferences. Inference should accordingly be at the core of any model of semantic cognition.

17.
Top Cogn Sci; 9(1): 117-134, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28001002

ABSTRACT

We use a spiking neural network model of working memory (WM) capable of performing the spatial delayed response task (DRT) to investigate two drugs that affect WM: guanfacine (GFC) and phenylephrine (PHE). In this model, the loss of information over time results from changes in the spiking neural activity through recurrent connections. We reproduce the standard forgetting curve and then show that this curve changes in the presence of GFC and PHE, whose application is simulated by manipulating functional, neural, and biophysical properties of the model. In particular, applying GFC causes increased activity in neurons that are sensitive to the information currently being remembered, while applying PHE leads to decreased activity in these same neurons. Interestingly, these differential effects emerge from network-level interactions because GFC and PHE affect all neurons equally. We compare our model to both electrophysiological data from neurons in monkey dorsolateral prefrontal cortex and to behavioral evidence from monkeys performing the DRT.


Subjects
Guanfacine/pharmacology; Neurons/drug effects; Phenylephrine/pharmacology; Prefrontal Cortex/drug effects; Animals; Haplorhini; Humans; Memory, Short-Term/drug effects; Models, Neurological; Neurons/physiology; Prefrontal Cortex/physiology
18.
Top Cogn Sci; 9(1): 6-20, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28019687

ABSTRACT

The ability to improve in speed and accuracy as a result of repeating some task is an important hallmark of intelligent biological systems. Although gradual behavioral improvements from practice have been modeled in spiking neural networks, few such models have attempted to explain cognitive development of a task as complex as addition. In this work, we model the progression from a counting-based strategy for addition to a recall-based strategy. The model consists of two networks working in parallel: a slower basal ganglia loop and a faster cortical network. The slow network methodically computes the count from one digit given another, corresponding to the addition of two digits, whereas the fast network gradually "memorizes" the output from the slow network. The faster network eventually learns how to add the same digits that initially drove the behavior of the slower network. Performance of this model is demonstrated by simulating a fully spiking neural network that includes basal ganglia, thalamus, and various cortical areas. Consequently, the model incorporates various neuroanatomical data in terms of the brain areas used for calculation, and it makes psychologically testable predictions related to frequency of rehearsal. Furthermore, the model replicates the developmental progression through addition strategies in terms of reaction times and accuracy, and it naturally explains observed symptoms of dyscalculia.


Subjects
Child Development/physiology; Models, Neurological; Child; Child, Preschool; Computer Simulation; Humans; Mathematics
19.
Proc Biol Sci; 283(1843), 2016 Nov 30.
Article in English | MEDLINE | ID: mdl-27903878

ABSTRACT

We present a spiking neuron model of the motor cortices and cerebellum of the motor control system. The model consists of anatomically organized spiking neurons encompassing premotor, primary motor, and cerebellar cortices. The model proposes novel neural computations within these areas to control a nonlinear three-link arm model that can adapt to unknown changes in arm dynamics and kinematic structure. We demonstrate the mathematical stability of both forms of adaptation, suggesting that this is a robust approach for common biological problems of changing body size (e.g. during growth), and unexpected dynamic perturbations (e.g. when moving through different media, such as water or mud). To demonstrate the plausibility of the proposed neural mechanisms, we show that the model accounts for data across 19 studies of the motor control system. These data include a mix of behavioural and neural spiking activity, across subjects performing adaptive and static tasks. Given this proposed characterization of the biological processes involved in motor control of the arm, we provide several experimentally testable predictions that distinguish our model from previous work.


Subjects
Arm/physiology; Cerebellum/physiology; Models, Neurological; Motor Cortex/physiology; Humans; Neurons/physiology; Nonlinear Dynamics
20.
Front Comput Neurosci; 10: 51, 2016.
Article in English | MEDLINE | ID: mdl-27303287

ABSTRACT

Production and comprehension of speech are closely interwoven. For example, the ability to detect an error in one's own speech, halt speech production, and finally correct the error can be explained by assuming an inner speech loop which continuously compares the word representations induced by production to those induced by perception at various cognitive levels (e.g., conceptual, word, or phonological levels). Because spontaneous speech errors are relatively rare, a picture naming and halt paradigm can be used to evoke them. In this paradigm, picture presentation (target word initiation) is followed by an auditory stop signal (distractor word) for halting speech production. The current study seeks to understand the neural mechanisms governing self-detection of speech errors by developing a biologically inspired neural model of the inner speech loop. The neural model is based on the Neural Engineering Framework (NEF) and consists of a network of about 500,000 spiking neurons. In the first experiment we induce simulated speech errors semantically and phonologically. In the second experiment, we simulate a picture naming and halt task. Target-distractor word pairs were balanced with respect to variation of phonological and semantic similarity. The results of the first experiment show that speech errors are successfully detected by a monitoring component in the inner speech loop. The results of the second experiment show that the model correctly reproduces human behavioral data on the picture naming and halt task. In particular, the halting rate in the production of target words was lower for phonologically similar words than for semantically similar or fully dissimilar distractor words. We thus conclude that the neural architecture proposed here to model the inner speech loop reflects important interactions in production and perception at phonological and semantic levels.
