ABSTRACT
Neural network models are potential tools for improving our understanding of complex brain functions. To address this goal, these models need to be neurobiologically realistic. However, although neural networks have advanced dramatically in recent years and even achieve human-like performance on complex perceptual and cognitive tasks, their similarity to aspects of brain anatomy and physiology is imperfect. Here, we discuss different types of neural models, including localist, auto-associative, hetero-associative, deep and whole-brain networks, and identify aspects in which their biological plausibility can be improved. These aspects range from the choice of model neurons and of mechanisms of synaptic plasticity and learning to the implementation of inhibition and control, along with neuroanatomical properties including areal structure and local and long-range connectivity. We highlight recent advances in developing biologically grounded cognitive theories and in mechanistically explaining, on the basis of these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization and ontogenetic and phylogenetic development of higher brain functions. In closing, we point to possible future clinical applications of brain-constrained modelling.
Subjects
Brain/physiology; Cognition/physiology; Models, Neurological; Neural Networks, Computer; Neurons/physiology; Humans; Neuronal Plasticity/physiology
ABSTRACT
Cognitive theory has decomposed human mental abilities into cognitive (sub)systems, and cognitive neuroscience has succeeded in disclosing a host of relationships between cognitive systems and specific structures of the human brain. However, an explanation of why specific functions are located in specific brain loci has been missing, along with a neurobiological model that makes concrete the neuronal circuits that carry thoughts and meaning. Brain theory, in particular the Hebb-inspired neurocybernetic proposals by Braitenberg, now offers an avenue toward explaining brain-mind relationships and spelling out cognition in terms of neuron circuits in a neuromechanistic sense. Central to this endeavor is the theoretical construct of an elementary functional neuronal unit above the level of individual neurons and below that of whole brain areas and systems: the distributed neuronal assembly (DNA) or thought circuit (TC). It is shown that the DNA/TC theory of cognition offers an integrated explanatory perspective on brain mechanisms of perception, action, language, attention, memory, decision and conceptual thought. We argue that DNAs carry all of these functions and that their inner structure (e.g., core and halo subcomponents) and their functional activation dynamics (e.g., ignition and reverberation processes) answer crucial localist questions, such as why memory and decisions draw on prefrontal areas although memory formation is normally driven by information in the senses and in the motor system. We suggest that the ability to build DNAs/TCs spread out over different cortical areas is the key mechanism for a range of specifically human sensorimotor, linguistic and conceptual capacities, and that the cell assembly mechanism of overlap reduction is crucial for differentiating a vocabulary of actions, symbols and concepts.
Subjects
Brain; Cognition/physiology; Models, Neurological; Neurobiology; Neurons/physiology; Thinking/physiology; Brain/cytology; Brain/physiology; Humans
ABSTRACT
Neural populations across cortical layers perform different computational tasks. However, it is not known whether information in different layers is encoded using a common neural code or whether it depends on the specific layer. Here we studied the laminar distribution of information in a large-scale computational model of cat primary visual cortex. We analyzed the amount of information about the input stimulus conveyed by different representations of the cortical responses. In particular, we compared the information encoded in four possible neural codes: (1) the information carried by the firing rate of individual neurons; (2) the information carried by spike patterns within a time window; (3) the rate-and-phase information carried by the firing rate labelled with the phase of the local field potential (LFP); (4) the pattern-and-phase information carried by spike patterns tagged with the LFP phase. We found that there is substantially more information in the rate-and-phase code than in the firing rate alone for low LFP frequency bands (less than 30 Hz). When comparing how information is encoded across layers, we found that the extra information contained in a rate-and-phase code may reach 90% in layer 4, whereas in other layers it reaches only 60%, compared with the information carried by the firing rate alone. These results suggest that information processing in primary sensory cortices could rely on different coding strategies across different layers.
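As an illustrative sketch of this comparison (not the study's code; the stimuli, firing rates and phase labels below are synthetic assumptions), the following Python fragment computes the mutual information carried by a plain rate code versus a rate code tagged with a discretised LFP phase:

```python
# Toy comparison of a rate code vs. a rate-and-phase code via mutual information.
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(x, y):
    """I(X;Y) in bits, estimated from two discrete label arrays."""
    xs, ys = np.unique(x), np.unique(y)
    joint = np.zeros((xs.size, ys.size))
    for i, xv in enumerate(xs):
        for j, yv in enumerate(ys):
            joint[i, j] = np.mean((x == xv) & (y == yv))
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

n_trials, n_stimuli = 4000, 4
stimulus = rng.integers(0, n_stimuli, n_trials)

# Synthetic responses: the spike count depends weakly on the stimulus, while the
# (quantised) low-frequency LFP phase carries additional stimulus information.
rate = rng.poisson(3 + stimulus)                             # rate code
phase_bin = (stimulus + rng.integers(0, 2, n_trials)) % 4    # phase label (4 bins)
rate_and_phase = rate * 4 + phase_bin                        # tag each count with its phase

print("I(stimulus; rate)       =", round(mutual_information(stimulus, rate), 3), "bits")
print("I(stimulus; rate+phase) =", round(mutual_information(stimulus, rate_and_phase), 3), "bits")
```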
Subjects
Computer Simulation; Information Theory; Models, Neurological; Neurons/physiology; Visual Cortex/cytology; Visual Pathways/physiology; Action Potentials/physiology; Animals; Cats; Photic Stimulation; Retina/cytology; Retina/physiology
ABSTRACT
The neurobiological nature of semantic knowledge, i.e., the encoding and storage of conceptual information in the human brain, remains a poorly understood and hotly debated subject. Clinical data on semantic deficits and neuroimaging evidence from healthy individuals have suggested that multiple cortical regions are involved in the processing of meaning. These include semantic hubs (most notably, the anterior temporal lobe, ATL) that take part in semantic processing in general, as well as sensorimotor areas that process specific aspects/categories according to their modality. Biologically inspired neurocomputational models can help elucidate the exact roles of these regions in the functioning of the semantic system and, importantly, in its breakdown in neurological deficits. We used a neuroanatomically constrained computational model of frontotemporal cortices implicated in word acquisition and processing, and adapted it to simulate and explain the effects of semantic dementia (SD) on word processing abilities. SD is a devastating, yet insufficiently understood, progressive neurodegenerative disease characterised by a deterioration of semantic knowledge that is hypothesised to be specifically related to neural damage in the ATL. The behaviour of our brain-based model is in full accordance with clinical data: word comprehension performance decreases as SD lesions in the ATL progress, whereas word repetition abilities remain less affected. Furthermore, our model makes predictions about lesion- and category-specific effects of SD: our simulation results indicate that word processing should be more impaired for object- than for action-related words, and that degradation of white matter should produce more severe consequences than the same proportion of grey matter decay. In sum, the present results provide a neuromechanistic explanatory account of cortical-level language impairments observed during the onset and progression of semantic dementia.
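A hypothetical toy illustration of the lesioning logic (not the published model, and not intended to reproduce its quantitative findings): Hebbian word-form-to-meaning associations stored in a model hub are damaged either by silencing cells ('grey matter' decay) or by removing connections ('white matter' degradation), and word comprehension is re-measured. Sizes and damage levels are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_words, n_form, n_sem = 20, 100, 100

form = (rng.random((n_words, n_form)) < 0.1).astype(float)   # word-form patterns
sem = (rng.random((n_words, n_sem)) < 0.1).astype(float)     # semantic patterns
W = sem.T @ form                                             # Hebbian hetero-association

def comprehension(W, cell_mask):
    """Fraction of words whose retrieved meaning best matches the correct one."""
    correct = 0
    for k in range(n_words):
        out = cell_mask * (W @ form[k])          # silenced cells contribute nothing
        correct += int(np.argmax(sem @ out) == k)
    return correct / n_words

for damage in (0.0, 0.3, 0.6):
    cell_mask = (rng.random(n_sem) >= damage).astype(float)   # grey matter: silence cells
    W_white = W * (rng.random(W.shape) >= damage)             # white matter: cut links
    print(f"damage {damage:.0%}: grey-matter acc = {comprehension(W, cell_mask):.2f}, "
          f"white-matter acc = {comprehension(W_white, np.ones(n_sem)):.2f}")
```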
Subjects
Frontotemporal Dementia; Neurodegenerative Diseases; Humans; Frontotemporal Dementia/pathology; Neurodegenerative Diseases/pathology; Temporal Lobe/diagnostic imaging; Temporal Lobe/pathology; Brain/diagnostic imaging; Semantics; Psychomotor Performance; Magnetic Resonance Imaging/methods
ABSTRACT
Stimulus-specific adaptation (SSA) occurs when the spike rate of a neuron decreases with repetitions of the same stimulus, but recovers when a different stimulus is presented. It has been suggested that SSA in single auditory neurons may provide information to change-detection mechanisms evident at other scales (e.g., mismatch negativity in the event-related potential), and participate in the control of attention and the formation of auditory streams. This article presents a spiking-neuron model that accounts for SSA in terms of the convergence of depressing synapses that convey feature-specific inputs. The model is anatomically plausible, comprising just a few homogeneously connected populations, and does not require organised feature maps. The model is calibrated to match the SSA measured in the cortex of the awake rat, as reported in one study. The effects of frequency separation, deviant probability, repetition rate and duration on SSA are investigated. With the same parameter set, the model generates responses consistent with a wide range of published data obtained in other auditory regions using other stimulus configurations, such as block, sequential and random stimuli. A new stimulus paradigm is introduced, which generalises the oddball concept to Markov chains, allowing the experimenter to vary the tone probabilities and the rate of switching independently. The model predicts greater SSA for higher rates of switching. Finally, the issue of whether rarity or novelty elicits SSA is addressed by comparing the responses of the model to deviants in the context of a sequence of a single standard or many standards. The results support the view that synaptic adaptation alone can explain almost all aspects of SSA reported to date, including its purported novelty component, and that non-trivial networks of depressing synapses can intensify this novelty response.
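A minimal sketch of the Markov-chain oddball paradigm described above, assuming a two-tone chain in which the stationary tone probabilities and the switching rate are set independently (parameter names and values are ours, not the paper's):

```python
import numpy as np

def markov_oddball(n_tones, p_a, switch_rate, seed=0):
    """Generate a tone sequence (A/B) from a 2-state Markov chain.

    p_a         -- stationary probability of tone A (deviant probability = 1 - p_a)
    switch_rate -- scales how often the chain changes state; the transition
                   probabilities keep the stationary distribution at (p_a, 1 - p_a).
    """
    rng = np.random.default_rng(seed)
    p_ab, p_ba = switch_rate * (1 - p_a), switch_rate * p_a   # A->B and B->A
    assert p_ab <= 1 and p_ba <= 1, "switch_rate too high for these probabilities"
    T = np.array([[1 - p_ab, p_ab],
                  [p_ba, 1 - p_ba]])
    seq, state = [], 0
    for _ in range(n_tones):
        seq.append("AB"[state])
        state = rng.choice(2, p=T[state])
    return "".join(seq)

print(markov_oddball(40, p_a=0.9, switch_rate=0.3))   # slow switching: long runs
print(markov_oddball(40, p_a=0.9, switch_rate=1.0))   # memoryless oddball sequence
```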
Subjects
Adaptation, Physiological/physiology; Markov Chains; Models, Neurological; Neurons/physiology; Acoustic Stimulation; Animals; Auditory Cortex/physiology; Computational Biology; Evoked Potentials, Auditory/physiology; Rats; Synapses/physiology; Wakefulness/physiology
ABSTRACT
Many neurons that initially respond to a stimulus stop responding if the stimulus is presented repeatedly, but recover their response if a different stimulus is presented. This phenomenon is referred to as stimulus-specific adaptation (SSA). SSA has been investigated extensively using oddball experiments, which measure the responses of a neuron to sequences of stimuli. Neurons that exhibit SSA respond less vigorously to common stimuli, and the metric typically used to quantify this difference is the SSA index (SI). This article presents the first detailed analysis of the SI metric by examining the question: how should a system (e.g., a neuron) respond to stochastic input if it is to maximize the SI of its output? Questions like this one are particularly relevant to those wishing to construct computational models of SSA. If an artificial neural network receives stimulus information at a particular rate and must respond within a fixed time, what is the highest SI one can reasonably expect? We demonstrate that the optimal average SI is constrained by the information in the input source, the length and encoding of the memory, and the assumptions concerning how the task is decomposed.
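For concreteness, the SSA index as it is commonly defined in this literature (a hedged summary of the standard definition, not text taken from this article): responses to a tone when it is the deviant versus when it is the standard, combined over the two frequencies of an oddball pair.

```python
def ssa_index(d_f1, s_f1, d_f2, s_f2):
    """Common SI = [d(f1) + d(f2) - s(f1) - s(f2)] / [d(f1) + d(f2) + s(f1) + s(f2)],
    where d(f) and s(f) are the responses to frequency f as deviant and standard."""
    num = (d_f1 + d_f2) - (s_f1 + s_f2)
    den = (d_f1 + d_f2) + (s_f1 + s_f2)
    return num / den if den else 0.0

# Example: 8 and 7 spikes/s to deviants, 3 and 4 spikes/s to standards -> SI ~ 0.36.
print(ssa_index(d_f1=8, s_f1=3, d_f2=7, s_f2=4))
```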
Subjects
Adaptation, Physiological/physiology; Brain/physiology; Models, Neurological; Neurons/physiology; Animals; Humans
ABSTRACT
In this work, we use a complex network approach to investigate how a neural network structure changes under synaptic plasticity. In particular, we consider a network of conductance-based, single-compartment integrate-and-fire excitatory and inhibitory neurons. Initially the neurons are connected randomly with uniformly distributed synaptic weights. The weights of excitatory connections can be strengthened or weakened during spiking activity by the mechanism known as spike-timing-dependent plasticity (STDP). We extract a binary directed connection matrix by thresholding the weights of the excitatory connections at every simulation step and calculate its major topological characteristics such as the network clustering coefficient, characteristic path length and small-world index. We numerically demonstrate that, under certain conditions, a nontrivial small-world structure can emerge from a random initial network subject to STDP learning.
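A minimal sketch of this analysis pipeline, using a random weight matrix in place of the STDP-shaped one and networkx for the graph measures (the threshold value and the choice of random reference graph are illustrative assumptions):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 100
weights = rng.random((n, n)) * (rng.random((n, n)) < 0.1)   # stand-in for STDP-shaped weights
np.fill_diagonal(weights, 0.0)

def metrics(G):
    """Clustering coefficient and characteristic path length (undirected giant component)."""
    C = nx.average_clustering(G)
    U = G.to_undirected()
    giant = U.subgraph(max(nx.connected_components(U), key=len))
    L = nx.average_shortest_path_length(giant)
    return C, L

threshold = 0.5
A = (weights > threshold).astype(int)                # binarise the excitatory connections
G = nx.from_numpy_array(A, create_using=nx.DiGraph)
C, L = metrics(G)

# Small-world index: compare against an Erdos-Renyi reference of equal size and density.
R = nx.erdos_renyi_graph(n, A.mean(), directed=True, seed=0)
C_rand, L_rand = metrics(R)
sigma = (C / C_rand) / (L / L_rand)
print(f"C = {C:.3f}, L = {L:.2f}, small-world index = {sigma:.2f}")
```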
Subjects
Action Potentials; Models, Theoretical; Neuronal Plasticity; Neurons/physiology
ABSTRACT
We present a model which stems from a well-established model of object recognition, HMAX, and show how this feedforward system can include feedback, using a recently proposed architecture which reconciles biased competition and predictive coding approaches. Simulation results show successful feedforward object recognition, including cases of occluded and illusory images. Recognition is both position and size invariant. The model also provides a functional interpretation of the role of feedback connectivity in accounting for several observed effects such as enhancement, suppression and refinement of activity in lower areas. The model can qualitatively replicate responses in early visual cortex to occluded and illusory contours, as well as fMRI data showing that high-level object recognition reduces activity in lower areas. A Gestalt-like mechanism based on collinearity, co-orientation and good-continuation principles is proposed to explain illusory contour formation, which allows the system to adapt a single high-level object prototype to illusory Kanizsa figures of different sizes, shapes and positions. Overall, the model provides a biologically plausible interpretation, supported by current experimental evidence, of the interaction between top-down global feedback and bottom-up local evidence in the context of hierarchical object perception.
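The following toy sketch illustrates the feedback principle in a predictive-coding style, with made-up object prototypes and an occluded input; it is a conceptual illustration only, not the HMAX-based model itself:

```python
import numpy as np

prototypes = np.array([[1, 1, 1, 0, 0, 0],    # object A
                       [0, 0, 0, 1, 1, 1.]])  # object B
W = prototypes.T                               # feedback weights: object -> features

x = np.array([1, 1, 0, 0, 0, 0.])              # object A with its third feature occluded
r = np.full(2, 0.5)                             # initial, unbiased object hypotheses

for _ in range(50):
    prediction = W @ r                          # top-down prediction of lower-level activity
    error = x - prediction                      # bottom-up prediction error
    r += 0.1 * (W.T @ error)                    # hypothesis update driven by the error
    r = np.clip(r, 0.0, 1.0)                    # simple biased-competition-style bound

print("object evidence:", np.round(r, 2))       # A wins despite the occlusion
print("filled-in input:", np.round(W @ r, 2))   # feedback partially fills in the occluded feature
```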
Subjects
Feedback; Models, Theoretical; Humans; Magnetic Resonance Imaging
ABSTRACT
If, as is widely believed, perception is based upon the responses of neurons that are tuned to stimulus features, then precisely what features are encoded, and how do neurons in the system come to be sensitive to those features? Here we show that differential responses to ripple stimuli can arise through exposure to formative stimuli in a recurrently connected model of the thalamocortical system which incorporates delays, lateral and recurrent connections, and learning in the form of spike-timing-dependent plasticity.
Subjects
Auditory Cortex/anatomy & histology; Models, Anatomic; Thalamus/anatomy & histology; Humans
ABSTRACT
Spike-timing-dependent plasticity is a learning mechanism used extensively within neural modelling. The learning rule has previously been shown to allow a neuron to learn a repeated spatio-temporal pattern among its afferents and to respond at its onset. In this study we reconfirm these previous results and additionally show that such learning depends on background activity. Furthermore, we found that onset learning is unstable in noisy settings. Specifically, if the level of background activity changes during learning, the response latency of a neuron may increase, and with additional noise the distribution of response latencies degrades. Consequently, we present preliminary insights into the neuron's encoding: namely, that a neuron may encode the coincidence of spikes from a subsection of a stimulus' afferents, but that the temporal precision of the onset response depends on background activity that must be similar to that present during learning.
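For reference, a minimal sketch of the pair-based STDP rule assumed in studies of this kind, with typical textbook parameter values rather than those of this study:

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes (assumed)
tau_plus, tau_minus = 20.0, 20.0   # time constants in ms (assumed)

def stdp(delta_t):
    """Weight change for delta_t = t_post - t_pre (ms)."""
    if delta_t >= 0:                                # pre before post -> potentiate
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)   # post before pre -> depress

for dt in (-40, -10, 0, 10, 40):
    print(f"t_post - t_pre = {dt:+4d} ms  ->  dw = {stdp(dt):+.4f}")
```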
Subjects
Action Potentials; Learning; Humans; Models, Theoretical; Neuronal Plasticity; Neurons/physiology
ABSTRACT
In blind people, the visual cortex takes on higher cognitive functions, including language. Why this functional reorganisation emerges mechanistically at the neuronal circuit level is still unclear. Here, we use a biologically constrained network model implementing features of anatomical structure, neurophysiological function and connectivity of fronto-temporal-occipital areas to simulate word-meaning acquisition in visually deprived and undeprived brains. We observed that, only under visual deprivation, distributed word-related neural circuits 'grew into' the deprived visual areas, which therefore adopted a linguistic-semantic role. Three factors are crucial for explaining this deprivation-related growth: changes in the network's activity balance brought about by the absence of uncorrelated sensory input, the connectivity structure of the network, and Hebbian correlation learning. In addition, the blind model exhibited longer-lasting spiking activity than the sighted model during word recognition, a neural correlate of enhanced verbal working memory. The present neurocomputational model offers a neurobiological account of neural changes following sensory deprivation, thus closing the gap between cellular-level mechanisms and system-level linguistic and semantic function.
Subjects
Blindness/physiopathology; Language; Models, Neurological; Visual Cortex/physiopathology; Humans; Learning
ABSTRACT
Meaningful familiar stimuli and senseless unknown materials lead to different patterns of brain activation. A late major neurophysiological response indexing 'sense' is the N400, a negative event-related potential component peaking at around 400 ms that emerges in attention-demanding tasks and is larger for senseless materials (e.g. meaningless pseudowords) than for matched meaningful stimuli (words). However, the mismatch negativity (latency 100-250 ms), an early automatic brain response elicited under distraction, is larger to words than to pseudowords, thus exhibiting the opposite pattern to that seen for the N400. So far, no theoretical account has been able to reconcile and explain these findings by means of a single, mechanistic neural model. We implemented a neuroanatomically grounded neural network model of the left perisylvian language cortex and simulated: (i) brain processes of early language acquisition and (ii) cortical responses to familiar word and senseless pseudoword stimuli. We found that variation of the area-specific inhibition (the model correlate of attention) modulated the simulated brain response to words and pseudowords, producing either an N400- or a mismatch negativity-like response depending on the amount of inhibition (i.e. available attentional resources). Our model: (i) provides a unifying explanatory account, at the cortical level, of experimental observations that, so far, had not been given a coherent interpretation within a single framework; (ii) demonstrates the viability of purely Hebbian, associative learning in a multilayered neural network architecture; and (iii) makes clear predictions about the effects of attention on the latency and magnitude of event-related potentials to lexical items. These predictions have been confirmed by recent experimental evidence.
Subjects
Attention/physiology; Brain/physiology; Language; Learning/physiology; Neural Networks, Computer; Evoked Potentials, Auditory/physiology; Humans
ABSTRACT
It is clear that the topological structure of a neural network somehow determines the activity of the neurons within it. In the present work, we ask to what extent it is possible to examine the structural features of a network and learn something about its activity. Specifically, we consider how the centrality (the importance of a node in a network) of a neuron correlates with its firing rate. To investigate this, we apply an array of centrality measures, including in-degree, closeness, betweenness, eigenvector, Katz, PageRank, Hyperlink-Induced Topic Search (HITS) and NeuronRank, to leaky integrate-and-fire neural networks with different connectivity schemes. We find that Katz centrality is the best predictor of firing rate given the network structure, with almost perfect correlation in all cases studied, which include purely excitatory and excitatory-inhibitory networks with either homogeneous connections or a small-world structure. We identify the properties of a network that cause this correlation to hold. We argue that the reason Katz centrality correlates so highly with neuronal activity compared with other centrality measures is that it nicely captures disinhibition in neural networks. In addition, we argue that these theoretical findings are applicable to neuroscientists who apply centrality measures to functional brain networks, and offer a neurophysiological justification for high-level cognitive models which use certain centrality measures.
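A short sketch of the Katz centrality computation over incoming connections, on a random directed graph standing in for the LIF networks of the study (the damping factor and the in-edge convention are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)      # directed adjacency matrix
np.fill_diagonal(A, 0.0)

def katz_centrality(A, alpha=None, beta=1.0):
    """Katz centrality over incoming walks: x = (I - alpha * A^T)^(-1) * beta * 1."""
    if alpha is None:
        # alpha must stay below 1/spectral_radius for the walk series to converge
        alpha = 0.9 / np.max(np.abs(np.linalg.eigvals(A)))
    m = A.shape[0]
    return np.linalg.solve(np.eye(m) - alpha * A.T, beta * np.ones(m))

katz = katz_centrality(A)
in_degree = A.sum(axis=0)
print("correlation between Katz centrality and plain in-degree:",
      round(np.corrcoef(katz, in_degree)[0, 1], 3))
```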
Subjects
Action Potentials/physiology; Brain Mapping; Models, Neurological; Neural Pathways/physiology; Neurons/physiology; Humans; Neural Inhibition
ABSTRACT
One of the most controversial debates in cognitive neuroscience concerns the cortical locus of semantic knowledge and processing in the human brain. Experimental data revealed the existence of various cortical regions relevant for meaning processing, ranging from semantic hubs generally involved in semantic processing to modality-preferential sensorimotor areas involved in the processing of specific conceptual categories. Why and how the brain uses such complex organization for conceptualization can be investigated using biologically constrained neurocomputational models. Here, we improve pre-existing neurocomputational models of semantics by incorporating spiking neurons and a rich connectivity structure between the model 'areas' to mimic important features of the underlying neural substrate. Semantic learning and symbol grounding in action and perception were simulated by associative learning between co-activated neuron populations in frontal, temporal and occipital areas. As a result of Hebbian learning of the correlation structure of symbol, perception and action information, distributed cell assembly circuits emerged across various cortices of the network. These semantic circuits showed category-specific topographical distributions, reaching into motor and visual areas for action- and visually-related words, respectively. All types of semantic circuits included large numbers of neurons in multimodal connector hub areas, which is explained by cortical connectivity structure and the resultant convergence of phonological and semantic information on these zones. Importantly, these semantic hub areas exhibited some category-specificity, which was less pronounced than that observed in primary and secondary modality-preferential cortices. The present neurocomputational model integrates seemingly divergent experimental results about conceptualization and explains both semantic hubs and category-specific areas as an emergent process causally determined by two major factors: neuroanatomical connectivity structure and correlated neuronal activation during language learning.
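A toy sketch of the Hebbian mechanism described above, assuming arbitrary area sizes, pattern sparseness and learning rate: repeatedly co-activated word-form and sensorimotor patterns become linked so that the word alone later reignites its grounding pattern.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200                                          # neurons per model 'area'
word = (rng.random(n) < 0.1).astype(float)       # word-form pattern (perisylvian area)
ground = (rng.random(n) < 0.1).astype(float)     # sensorimotor pattern (visual/motor area)

W = np.zeros((n, n))                             # between-area synapses (area 1 -> area 2)
for _ in range(20):                              # repeated co-activation during grounding
    W += 0.05 * np.outer(ground, word)           # Hebbian: strengthen co-active pairs
    W = np.clip(W, 0.0, 1.0)                     # keep weights bounded

drive = W @ word                                 # later, present the word form alone
reignited = (drive > 0.5 * drive.max()).astype(float)
print("fraction of the grounding pattern reignited by the word alone:",
      round(float(reignited @ ground / ground.sum()), 2))
```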
ABSTRACT
We present here a learning system using the iCub humanoid robot and the SpiNNaker neuromorphic chip to solve the real-world task of object-specific attention. Integrating spiking neural networks with robots introduces considerable complexity for questionable benefit if the objective is simply task performance. But, we suggest, in a cognitive robotics context, where the goal is understanding how to compute, such an approach may yield useful insights into neural architecture as well as learned behavior, especially if dedicated neural hardware is available. Recent advances in cognitive robotics and neuromorphic processing now make such systems possible. Using a scalable, structured, modular approach, we build a spiking neural network where the effects and impact of learning can be predicted and tested, and the network can be scaled or extended to new tasks automatically. We introduce several enhancements to a basic network and show how they can be used to direct performance toward behaviorally relevant goals. Results show that, using a simple classical spike-timing-dependent plasticity (STDP) rule on selected connections, we can get the robot (and network) to progress from poor task-specific performance to good performance. Behaviorally relevant STDP appears to contribute strongly to positive learning ("do this") but less to negative learning ("don't do that"). In addition, we observe that the effect of structural enhancements tends to be cumulative. The overall system suggests that it is by being able to exploit combinations of effects, rather than any one effect or property in isolation, that spiking networks can achieve compelling, task-relevant behavior.
Subjects
Cognition; Learning; Models, Neurological; Neural Networks, Computer; Neurons/physiology; Robotics; Action Potentials/physiology; Attention; Humans; Motivation; Photic Stimulation; Robotics/instrumentation
ABSTRACT
Neural computations are modelled in various ways, but there is still no clear understanding of how the brain performs its computational tasks. This paper presents new results on the analysis of neural processes in terms of activity-pattern computations. It is shown that it is possible to extract from high-resolution EEG data a first-order Markov approximation of a neural communication system employing pattern computations, which differs significantly from comparable purely random systems. In our view, this result shows that neural activity patterns measurable at the macro level by EEG are likely correlated with underlying neural computations.
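As a sketch of the first-order Markov approximation referred to above, the following fragment estimates a transition matrix from a synthetic symbol sequence and compares it with a shuffled, purely random control (the sequence itself is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_matrix(seq, n_symbols):
    """Row-normalised count matrix T[i, j] = P(next = j | current = i)."""
    T = np.zeros((n_symbols, n_symbols))
    for a, b in zip(seq[:-1], seq[1:]):
        T[a, b] += 1
    return T / np.maximum(T.sum(axis=1, keepdims=True), 1)

# Synthetic 'pattern' sequence with real sequential structure (roughly 0 -> 1 -> 2 -> 0 ...).
structured = [(t + rng.integers(0, 2)) % 3 for t in range(3000)]
shuffled = rng.permutation(structured)            # destroys the temporal structure

print(np.round(transition_matrix(structured, 3), 2))   # strongly non-uniform rows
print(np.round(transition_matrix(shuffled, 3), 2))     # rows close to uniform (1/3 each)
```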
Subjects
Cerebral Cortex/physiology; Animals; Cats; Computational Biology; Electroencephalography; Models, Neurological; Systems Biology
ABSTRACT
It has been argued that information processing in the cortex is optimised with regard to certain information-theoretic principles. We have, for instance, recently shown that spike-timing-dependent plasticity can improve an information-theoretic measure called spatio-temporal stochastic interaction, which captures how strongly a set of neurons cooperates in space and time. Systems with high stochastic interaction exhibit Poisson spike trains but nonetheless occupy only a strongly reduced region of their global phase space; they show repeating but complex global activation patterns and can be interpreted as computational systems operating on selected sets of collective patterns or "global states" in a rule-like manner. In the present work we investigate stochastic interaction in high-resolution EEG data from cat auditory cortex. Using Kohonen maps to reduce the high-dimensional dynamics of the system, we are able to detect repeating system states and to estimate the stochastic interaction in the data, which turns out to be fairly high. This suggests an organised cooperation in the underlying neural networks that generate the data and may reflect generic intrinsic computational capabilities of the cortex.
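A simplified sketch of the idea behind stochastic interaction, restricted to the spatial (single-time-step) case I = sum_i H(X_i) - H(X_1, ..., X_n); the spatio-temporal measure used in the study additionally spans time, which this toy example does not attempt to capture:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def stochastic_interaction(samples):
    """samples: (n_samples, n_units) binary array; deviation from independence in bits."""
    n_samples, n_units = samples.shape
    h_marg = sum(entropy(np.bincount(samples[:, i], minlength=2) / n_samples)
                 for i in range(n_units))                       # sum of marginal entropies
    codes = samples @ (2 ** np.arange(n_units))                  # one code per joint pattern
    h_joint = entropy(np.bincount(codes) / n_samples)            # joint entropy
    return h_marg - h_joint

independent = rng.integers(0, 2, (5000, 5))
coupled = np.repeat(rng.integers(0, 2, (5000, 1)), 5, axis=1)    # all units copy one source
print("independent units:", round(stochastic_interaction(independent), 3), "bits")
print("strongly coupled :", round(stochastic_interaction(coupled), 3), "bits")
```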
Subjects
Auditory Cortex/physiology; Electroencephalography/methods; Stochastic Processes; Animals; Cats
ABSTRACT
Neuroimaging and patient studies show that different areas of cortex specialize for general and for selective, or category-specific, semantic processing, respectively. Why are there both semantic hubs and category-specificity, and why do they emerge in different cortical regions? Can the activation time-course of these areas be predicted and explained by brain-like network models? In the present work, we extend a neurocomputational model of human cortical function to simulate the time-course of cortical processes of understanding meaningful concrete words. The model implements frontal and temporal cortical areas for language, perception, and action, along with their connectivity. It uses Hebbian learning to semantically ground words in aspects of their referential object- and action-related meaning. Compared with earlier proposals, the present model incorporates additional neuroanatomical links supported by connectivity studies, and downscaled synaptic weights in order to control for functional between-area differences purely due to the number of input or output links of an area. We show that learning the semantic relationships between words and the objects and actions these symbols are used to speak about leads to the formation of distributed circuits, which all include neuronal material in connector hub areas bridging between sensory and motor cortical systems. Therefore, these connector hub areas acquire a role as semantic hubs. By differentially reaching into motor or visual areas, the cortical distributions of the emergent 'semantic circuits' reflect aspects of the represented symbols' meaning, thus explaining category-specificity. The improved connectivity structure of our model entails a degree of category-specificity even in the 'semantic hubs' of the model. The relative time-course of activation of these areas is typically fast and near-simultaneous, with semantic hubs central to the network structure activating before modality-preferential areas carrying semantic information.
Subjects
Brain Mapping; Cerebral Cortex/anatomy & histology; Cerebral Cortex/physiology; Learning/physiology; Models, Neurological; Neural Pathways/physiology; Semantics; Analysis of Variance; Computer Simulation; Female; Humans; Male
ABSTRACT
Generating associations is important for cognitive tasks including language acquisition and creative problem solving. It remains an open question how the brain represents and processes associations. The Remote Associates Test (RAT) is a task, originally used in creativity research, that is heavily dependent on generating associations in a search for the solutions to individual RAT problems. In this work we present a model that solves the test. Compared to earlier modeling work on the RAT, our hybrid (i.e., non-developmental) model is implemented in a spiking neural network by means of the Neural Engineering Framework (NEF), demonstrating that it is possible for spiking neurons to be organized to store the employed representations and to manipulate them. In particular, the model shows that distributed representations can support sophisticated linguistic processing. The model was validated on human behavioral data including the typical length of response sequences and similarity relationships in produced responses. These data suggest two cognitive processes that are involved in solving the RAT: one process generates potential responses and a second process filters the responses.
ABSTRACT
This paper demonstrates how associative neural networks, as standard models for Hebbian cell assemblies, can be extended to implement language processes in large-scale brain simulations. To this end, the classical auto- and hetero-associative paradigms of attractor networks and synfire chains (SFCs) are combined and complemented by conditioned associations as a third principle, which allows for the implementation of complex graph-like transition structures between assemblies. We show example simulations of a multiple-area network for object naming, which categorises objects in a visual hierarchy and generates different specific syntactic motor sequences ("words") in response. We then study the formation of cell assemblies through ongoing plasticity in a multiple-area network for word learning. Simulations show how assemblies can form by means of percolating activity across auditory and motor-related language areas, a process supported by rhythmic, synchronized propagating waves through the network. Simulations further reproduce differences, observed in our own EEG and MEG experiments with human subjects, between responses to word versus non-word stimuli.
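A toy sketch combining the two associative principles named above, with arbitrary pattern sizes and sparseness: an auto-associative matrix completes a noisy cue into a stored assembly, and a hetero-associative matrix then maps that assembly onto its successor in a sequence (a synfire-chain-like transition). This is a conceptual illustration, not the simulated multiple-area network.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_patterns = 200, 4
P = (rng.random((n_patterns, n)) < 0.1).astype(float)      # sparse assemblies

W_auto = sum(np.outer(p, p) for p in P)                     # within-assembly links
W_hetero = sum(np.outer(P[(k + 1) % n_patterns], P[k])      # assembly k -> k+1
               for k in range(n_patterns))

def step(W, x, sparseness=0.1):
    """Winner-take-most update: keep only the most strongly driven units active."""
    drive = W @ x
    threshold = np.quantile(drive, 1 - sparseness)
    return (drive >= threshold).astype(float)

cue = P[0] * (rng.random(n) < 0.6)        # degraded cue for assembly 0
completed = step(W_auto, cue)             # auto-association: pattern completion
successor = step(W_hetero, completed)     # hetero-association: transition to assembly 1

print("overlap with assembly 0 after completion:", round(float(completed @ P[0] / P[0].sum()), 2))
print("overlap with assembly 1 after transition:", round(float(successor @ P[1] / P[1].sum()), 2))
```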