Results 1 - 20 of 26
1.
Entropy (Basel) ; 26(6)2024 May 31.
Article in English | MEDLINE | ID: mdl-38920492

ABSTRACT

Given the rapid advancement of artificial intelligence, understanding the foundations of intelligent behaviour is increasingly important. Active inference, regarded as a general theory of behaviour, offers a principled approach to probing the basis of sophistication in planning and decision-making. This paper examines two decision-making schemes in active inference, based on "planning" and on "learning from experience". We also introduce a mixed model that navigates the data-complexity trade-off between these strategies, leveraging the strengths of both to facilitate balanced decision-making. We evaluate the proposed model in a challenging grid-world scenario that requires adaptability from the agent. Additionally, our model allows the evolution of various parameters to be analysed, offering valuable insights and contributing to an explainable framework for intelligent decision-making.

2.
Neural Comput ; 33(6): 1433-1468, 2021 05 13.
Article in English | MEDLINE | ID: mdl-34496387

ABSTRACT

For many years, a combination of principal component analysis (PCA) and independent component analysis (ICA) has been used for blind source separation (BSS). However, it remains unclear why these linear methods work well with real-world data that involve nonlinear source mixtures. This work theoretically validates that a cascade of linear PCA and ICA can solve a nonlinear BSS problem accurately-when the sensory inputs are generated from hidden sources via nonlinear mappings with sufficient dimensionality. Our proposed theorem, termed the asymptotic linearization theorem, theoretically guarantees that applying linear PCA to the inputs can reliably extract a subspace spanned by the linear projections from every hidden source as the major components-and thus projecting the inputs onto their major eigenspace can effectively recover a linear transformation of the hidden sources. Then subsequent application of linear ICA can separate all the true independent hidden sources accurately. Zero-element-wise-error nonlinear BSS is asymptotically attained when the source dimensionality is large and the input dimensionality is sufficiently larger than the source dimensionality. Our proposed theorem is validated analytically and numerically. Moreover, the same computation can be performed by using Hebbian-like plasticity rules, implying the biological plausibility of this nonlinear BSS strategy. Our results highlight the utility of linear PCA and ICA for accurately and reliably recovering nonlinearly mixed sources and suggest the importance of employing sensors with sufficient dimensionality to identify true hidden sources of real-world data.
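The two-step cascade described above can be sketched numerically. The following toy example (all values illustrative, not taken from the paper) generates high-dimensional sensory inputs via an element-wise tanh of linearly mixed sources, extracts the major components with linear PCA, and then separates them with a standard symmetric FastICA iteration:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_src, n_obs = 5000, 2, 100

# Independent, non-Gaussian hidden sources.
s = rng.uniform(-1, 1, size=(n_src, T))

# Nonlinear mixture with high sensory dimensionality: each of the 100
# sensors applies a random linear mix followed by an element-wise tanh.
A = rng.normal(size=(n_obs, n_src))
x = np.tanh(A @ s) + 0.01 * rng.normal(size=(n_obs, T))

# Step 1: linear PCA -- project onto the top n_src principal components.
xc = x - x.mean(axis=1, keepdims=True)
eigval, eigvec = np.linalg.eigh(xc @ xc.T / T)   # ascending eigenvalues
pcs = eigvec[:, -n_src:].T @ xc                  # (n_src, T) major components

# Step 2: linear ICA (symmetric FastICA, tanh contrast) on whitened PCs.
z = pcs / pcs.std(axis=1, keepdims=True)
W = rng.normal(size=(n_src, n_src))
for _ in range(200):
    y = np.tanh(W @ z)
    W_new = (y @ z.T) / T - np.diag((1 - y**2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)              # symmetric decorrelation
    W = u @ vt
y = W @ z

# Each recovered component should correlate strongly with one true source.
corr = np.abs(np.corrcoef(np.vstack([y, s]))[:n_src, n_src:])
best = corr.max(axis=1)
```

With many more sensors than sources, as here, the recovered components correlate strongly with the true sources, in line with the asymptotic linearization argument.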

3.
Neural Comput ; 32(11): 2085-2121, 2020 11.
Article in English | MEDLINE | ID: mdl-32946704

ABSTRACT

This letter considers a class of biologically plausible cost functions for neural networks, where the same cost function is minimized by both neural activity and plasticity. We show that such cost functions can be cast as a variational bound on model evidence under an implicit generative model. Using generative models based on partially observed Markov decision processes (POMDP), we show that neural activity and plasticity perform Bayesian inference and learning, respectively, by maximizing model evidence. Using mathematical and numerical analyses, we establish the formal equivalence between neural network cost functions and variational free energy under some prior beliefs about latent states that generate inputs. These prior beliefs are determined by particular constants (e.g., thresholds) that define the cost function. This means that the Bayes optimal encoding of latent or hidden states is achieved when the network's implicit priors match the process that generates its inputs. This equivalence is potentially important because it suggests that any hyperparameter of a neural network can itself be optimized-by minimization with respect to variational free energy. Furthermore, it enables one to characterize a neural network formally, in terms of its prior beliefs.
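The equivalence between cost minimisation and variational inference rests on the definition of variational free energy. A minimal discrete worked example (toy numbers, not taken from the letter) shows that free energy is minimised by the Bayesian posterior, where it equals negative log model evidence:

```python
import numpy as np

# Toy generative model: a prior over 3 latent states and a likelihood
# mapping states to 2 observable outcomes (illustrative numbers).
prior = np.array([0.5, 0.3, 0.2])           # p(s)
likelihood = np.array([[0.9, 0.2, 0.1],     # p(o=0 | s)
                       [0.1, 0.8, 0.9]])    # p(o=1 | s)
o = 1                                        # observed outcome

def free_energy(q):
    """Variational free energy F(q) = E_q[ln q(s) - ln p(o, s)]."""
    joint = likelihood[o] * prior            # p(o, s) as a function of s
    return float(np.sum(q * (np.log(q) - np.log(joint))))

# The exact posterior minimises F, where F equals -ln p(o) (negative
# log evidence); any other belief pays an extra KL divergence.
evidence = (likelihood[o] * prior).sum()
posterior = likelihood[o] * prior / evidence

F_post = free_energy(posterior)
F_uniform = free_energy(np.full(3, 1 / 3))
```

Any belief other than the posterior incurs an additional KL divergence, which is exactly what gradient descent on such a cost function removes.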


Subject(s)
Models, Neurological , Models, Theoretical , Neural Networks, Computer , Animals , Humans , Markov Chains
4.
Neural Comput ; 32(11): 2187-2211, 2020 11.
Article in English | MEDLINE | ID: mdl-32946715

ABSTRACT

Recent remarkable advances in experimental techniques have provided a background for inferring neuronal couplings from point process data that include a large number of neurons. Here, we propose a systematic procedure for pre- and postprocessing generic point process data in an objective manner, to handle the data in the framework of a simple binary statistical model: the Ising, or generalized McCulloch-Pitts, model. The procedure has two steps: (1) determining the time bin size for transforming the point process data into discrete-time binary data and (2) screening relevant couplings from the estimated couplings. For the first step, we decide the optimal time bin size by introducing the null hypothesis that all neurons fire independently, then choosing a time bin size so that the null hypothesis is rejected under strict criteria. The likelihood associated with the null hypothesis is analytically evaluated and used for the rejection process. For the second, postprocessing step, after an estimate of the couplings is obtained from the preprocessed data set (any estimator can be used with the proposed procedure), the estimate is compared with many other estimates derived from data sets obtained by randomizing the original data set in the time direction. We accept the original estimate as relevant only if its absolute value is sufficiently larger than those of the randomized data sets. These manipulations suppress false-positive couplings induced by statistical noise. We apply this inference procedure to spiking data from synthetic and in vitro neuronal networks. The results show that the proposed procedure identifies the presence or absence of synaptic couplings fairly well, including their signs, for both synthetic and experimental data. In particular, the results support that we can infer the physical connections of the underlying systems in favorable situations, even when using a simple statistical model.
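The two steps above can be illustrated with a toy script. This is a simplified sketch with illustrative parameters: a coincidence-count surrogate test stands in for the paper's analytical likelihood criterion, and the time bin size is fixed rather than optimised:

```python
import numpy as np

rng = np.random.default_rng(1)

def binarize(spike_times, t_max, bin_size):
    """Step 1: turn point process data into discrete-time binary data,
    one row per neuron, one column per time bin."""
    n_bins = int(np.ceil(t_max / bin_size))
    binary = np.zeros((len(spike_times), n_bins), dtype=int)
    for i, times in enumerate(spike_times):
        idx = np.minimum((np.asarray(times) / bin_size).astype(int), n_bins - 1)
        binary[i, idx] = 1        # a bin is 1 if the neuron fired in it
    return binary

# Two coupled neurons: shared spikes with small jitter, over 10 s.
base = np.sort(rng.uniform(0, 10.0, 200))
spikes = [base, base + rng.normal(0.0, 0.002, base.size)]
X = binarize(spikes, t_max=10.0, bin_size=0.01)

# Step 2 (screening): compare the observed coincidence count with counts
# from surrogates that randomize one train in time -- this preserves
# firing rates but destroys any coupling.
obs = int((X[0] & X[1]).sum())
surr = [int((X[0] & rng.permutation(X[1])).sum()) for _ in range(200)]
threshold = np.percentile(surr, 99)
```

Couplings whose statistic exceeds the surrogate distribution survive the screening; rate-matched but uncoupled pairs do not.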


Subject(s)
Models, Neurological , Models, Statistical , Neurons/physiology , Animals , Computer Simulation , Humans
5.
Neural Comput ; 31(12): 2390-2431, 2019 12.
Article in English | MEDLINE | ID: mdl-31614100

ABSTRACT

To exhibit social intelligence, animals have to recognize whom they are communicating with. One way to make this inference is to select among internal generative models of each conspecific who may be encountered. However, these models also have to be learned via some form of Bayesian belief updating. This induces an interesting problem: When receiving sensory input generated by a particular conspecific, how does an animal know which internal model to update? We consider a theoretical and neurobiologically plausible solution that enables inference and learning of the processes that generate sensory inputs (e.g., listening and understanding) and reproduction of those inputs (e.g., talking or singing), under multiple generative models. This is based on recent advances in theoretical neurobiology-namely, active inference and post hoc (online) Bayesian model selection. In brief, this scheme fits sensory inputs under each generative model. Model parameters are then updated in proportion to the probability that each model could have generated the input (i.e., model evidence). The proposed scheme is demonstrated using a series of (real zebra finch) birdsongs, where each song is generated by several different birds. The scheme is implemented using physiologically plausible models of birdsong production. We show that generalized Bayesian filtering, combined with model selection, leads to successful learning across generative models, each possessing different parameters. These results highlight the utility of having multiple internal models when making inferences in social environments with multiple sources of sensory information.


Subject(s)
Auditory Perception/physiology , Emotional Intelligence , Learning/physiology , Models, Neurological , Social Perception , Animals , Bayes Theorem , Finches
6.
Entropy (Basel) ; 20(7)2018 Jul 07.
Article in English | MEDLINE | ID: mdl-33265602

ABSTRACT

The mutual information between the state of a neural network and the state of the external world represents the amount of information stored in the neural network that is associated with the external world. In contrast, the surprise of the sensory input indicates the unpredictability of the current input; in other words, it is a measure of inference ability, and an upper bound on the surprise is known as the variational free energy. According to the free-energy principle (FEP), a neural network continuously minimizes the free energy to perceive the external world. For the survival of animals, inference ability is considered more important than simply memorized information. In this study, the free energy is shown to represent the gap between the amount of information stored in the neural network and that available for inference. This concept connects the FEP with the infomax principle and provides a useful measure for quantifying the amount of information available for inference.

7.
Biochem Biophys Res Commun ; 486(2): 539-544, 2017 04 29.
Article in English | MEDLINE | ID: mdl-28322793

ABSTRACT

Synapse elimination and neurite pruning are essential processes for the formation of neuronal circuits. These regressive events depend on neural activity and occur in the early postnatal days known as the critical period, but what confers this temporal specificity is not well understood. One possibility is that the neural activities during the developmentally regulated shift in the action of GABAergic inhibitory transmission give rise to the critical period. Moreover, it has been reported that the shifting action of inhibitory transmission on immature neurons overlaps with synapse elimination and neurite pruning, and that inhibitory transmission increased by drug treatment can temporally shift the critical period. However, the relationship among these phenomena remains unclear, because it is difficult to show experimentally how the developmental shift of inhibitory transmission influences neural activities and whether those activities promote synapse elimination and neurite pruning. In this study, we modeled synapse elimination in neuronal circuits using a modified Izhikevich model with a functional shift of GABAergic transmission. The simulation results show that synaptic pruning within a specified period, like the critical period, is spontaneously generated as a function of the developmentally shifting inhibitory transmission, and that a specific firing rate and increasing synchronization of the neural circuits are seen at the initial stage of the critical period. This temporal relationship was supported experimentally by an in vitro primary culture of rat cortical neurons in a microchannel on a multi-electrode array (MEA). The firing rate decreased markedly between 18 and 25 days in vitro (DIV), and following these changes in the firing rate, the neurite density was slightly reduced. Our simulation and experimental results suggest that decreasing neural activity due to developing inhibitory synaptic transmission could induce synapse elimination and neurite pruning at a particular time, such as the critical period. Additionally, these findings indicate that we can estimate the maturity level of inhibitory transmission and the critical period by measuring the firing rate and the degree of synchronization in engineered neural networks.
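The core mechanism, a developmental shift in the GABA reversal potential that converts the same conductance from excitatory to inhibitory, can be sketched with a single Izhikevich neuron (a minimal illustration with made-up parameter values, not the paper's full network model):

```python
import numpy as np

def izhikevich(T_ms, i_syn, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.25):
    """Regular-spiking Izhikevich neuron driven by a synaptic current
    i_syn(v); returns the number of spikes in T_ms milliseconds."""
    v, u, spikes = -65.0, -13.0, 0
    for _ in range(int(T_ms / dt)):
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + i_syn(v))
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike: reset membrane and recovery
            v, u, spikes = c, u + d, spikes + 1
    return spikes

g_gaba = 0.5    # fixed GABAergic conductance (illustrative value)
i_drive = 8.0   # tonic excitatory drive (illustrative value)

# Immature circuit: GABA reversal above rest, so the same conductance
# depolarizes the neuron (excitatory action of GABA).
n_immature = izhikevich(1000, lambda v: i_drive + g_gaba * (-40.0 - v))

# Mature circuit: reversal below rest, so the conductance hyperpolarizes.
n_mature = izhikevich(1000, lambda v: i_drive + g_gaba * (-75.0 - v))
```

With identical drive and conductance, only the reversal potential changes, yet the immature setting fires tonically while the mature one is silenced, which is the activity change the model links to the onset of pruning.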


Subject(s)
Action Potentials/physiology , Models, Neurological , Nerve Net/physiology , Neuronal Plasticity/physiology , Synaptic Transmission/physiology , Animals , Animals, Newborn , Axons/physiology , Cerebral Cortex/cytology , Cerebral Cortex/physiology , Cerebrum/cytology , Cerebrum/physiology , Computer Simulation , Microelectrodes , Neurites/physiology , Primary Cell Culture , Rats , Receptors, GABA-A/physiology , Receptors, GABA-B/physiology , Synapses/physiology , Time Factors
8.
Neural Comput ; 28(9): 1859-88, 2016 09.
Article in English | MEDLINE | ID: mdl-27391680

ABSTRACT

The free-energy principle is a candidate unified theory for learning and memory in the brain that predicts that neurons, synapses, and neuromodulators work in a manner that minimizes free energy. However, electrophysiological data elucidating the neural and synaptic bases for this theory are lacking. Here, we propose a novel theory bridging the information-theoretical principle with the biological phenomenon of spike-timing dependent plasticity (STDP) regulated by neuromodulators, which we term mSTDP. We propose that by integrating an mSTDP equation, we can obtain a form of Friston's free energy (an information-theoretical function). Then we analytically and numerically show that dopamine (DA) and noradrenaline (NA) influence the accuracy of a principal component analysis (PCA) performed using the mSTDP algorithm. From the perspective of free-energy minimization, these neuromodulatory changes alter the relative weighting or precision of accuracy and prior terms, which induces a switch from pattern completion to separation. These results are consistent with electrophysiological findings and validate the free-energy principle and mSTDP. Moreover, our scheme can potentially be applied in computational psychiatry to build models of the faulty neural networks that underlie the positive symptoms of schizophrenia, which involve abnormal DA levels, as well as models of the NA contribution to memory triage and posttraumatic stress disorder.

9.
PLoS Comput Biol ; 11(12): e1004643, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26690814

ABSTRACT

Blind source separation is the computation underlying the cocktail party effect, whereby a partygoer can distinguish a particular talker's voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception, and numerous mathematical models have been proposed; however, it remains unclear how neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, distinct classes of neurons in the culture learned to respond to distinct sources after repeated training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes's principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free-energy principle.


Subject(s)
Action Potentials/physiology , Cerebral Cortex/physiology , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Pattern Recognition, Physiological/physiology , Animals , Cells, Cultured , Cerebral Cortex/cytology , Energy Transfer , Machine Learning , Models, Statistical , Principal Component Analysis , Rats
10.
Neural Comput ; 27(4): 819-44, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25710089

ABSTRACT

Connection strength estimation is widely used in detecting the topology of neuronal networks and assessing their synaptic plasticity. A recently proposed model-based method using the leaky integrate-and-fire model neuron estimates membrane potential from spike trains by calculating the maximum a posteriori (MAP) path. We further enhance the MAP path method using variational Bayes and dynamic causal modeling. Several simulations demonstrate that the proposed method can accurately estimate connection strengths with an error ratio of less than 20%. The results suggest that the proposed method can be an effective tool for detecting network structure and synaptic plasticity.
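For context, the generative model being inverted here is the leaky integrate-and-fire neuron. The following is a minimal sketch of that forward model only, with illustrative parameter values; it is not the MAP path or variational Bayes estimation procedure itself:

```python
import numpy as np

def lif_forward(weights, pre_spikes, n_steps, dt=1.0, tau=20.0,
                v_rest=-70.0, v_th=-54.0, v_reset=-70.0):
    """Forward model: leaky integrate-and-fire membrane potential driven
    by weighted presynaptic spike trains (boolean arrays). Returns the
    membrane trace and the output spike times (in steps)."""
    v = np.full(n_steps, v_rest)
    out_spikes = []
    for t in range(1, n_steps):
        i_syn = sum(w * s[t] for w, s in zip(weights, pre_spikes))
        v[t] = v[t - 1] + dt * (-(v[t - 1] - v_rest) / tau) + i_syn
        if v[t] >= v_th:           # threshold crossing: record and reset
            out_spikes.append(t)
            v[t] = v_reset
    return v, out_spikes

rng = np.random.default_rng(2)
n_steps = 1000
# Two presynaptic Poisson trains (one strong, one weak connection).
pre = [rng.random(n_steps) < 0.1, rng.random(n_steps) < 0.1]
v, out_spikes = lif_forward(weights=[8.0, 4.0], pre_spikes=pre,
                            n_steps=n_steps)
```

Estimation methods of the kind described in the abstract observe only `out_spikes` and infer the hidden membrane trace `v` and the connection weights.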

11.
Neurosci Biobehav Rev ; 156: 105500, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38056542

ABSTRACT

This paper concerns the distributed intelligence or federated inference that emerges under belief-sharing among agents who share a common world-and world model. Imagine, for example, several animals keeping a lookout for predators. Their collective surveillance rests upon being able to communicate their beliefs-about what they see-among themselves. But, how is this possible? Here, we show how all the necessary components arise from minimising free energy. We use numerical studies to simulate the generation, acquisition and emergence of language in synthetic agents. Specifically, we consider inference, learning and selection as minimising the variational free energy of posterior (i.e., Bayesian) beliefs about the states, parameters and structure of generative models, respectively. The common theme-that attends these optimisation processes-is the selection of actions that minimise expected free energy, leading to active inference, learning and model selection (a.k.a., structure learning). We first illustrate the role of communication in resolving uncertainty about the latent states of a partially observed world, on which agents have complementary perspectives. We then consider the acquisition of the requisite language-entailed by a likelihood mapping from an agent's beliefs to their overt expression (e.g., speech)-showing that language can be transmitted across generations by active learning. Finally, we show that language is an emergent property of free energy minimisation, when agents operate within the same econiche. We conclude with a discussion of various perspectives on these phenomena; ranging from cultural niche construction, through federated learning, to the emergence of complexity in ensembles of self-organising systems.


Subject(s)
Communication , Language , Animals , Bayes Theorem , Uncertainty , Speech
12.
Nat Commun ; 14(1): 4547, 2023 08 07.
Article in English | MEDLINE | ID: mdl-37550277

ABSTRACT

Empirical applications of the free-energy principle are not straightforward because they entail a commitment to a particular process theory, especially at the cellular and synaptic levels. Using a recently established reverse engineering technique, we confirm the quantitative predictions of the free-energy principle using in vitro networks of rat cortical neurons that perform causal inference. Upon receiving electrical stimuli-generated by mixing two hidden sources-neurons self-organised to selectively encode the two sources. Pharmacological up- and downregulation of network excitability disrupted the ensuing inference, consistent with changes in prior beliefs about hidden sources. As predicted, changes in effective synaptic connectivity reduced variational free energy, where the connection strengths encoded parameters of the generative model. In short, we show that variational free energy minimisation can quantitatively predict the self-organisation of neuronal networks, in terms of their responses and plasticity. These results demonstrate the applicability of the free-energy principle to in vitro neural networks and establish its predictive validity in this setting.


Subject(s)
Neural Networks, Computer , Neurons , Animals , Rats , Neurons/physiology , Models, Neurological
13.
Neurosci Res ; 175: 38-45, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34968557

ABSTRACT

The neuronal substrates that implement the free-energy principle and ensuing active inference at the neuron and synapse level have not been fully elucidated. This Review considers possible neuronal substrates underlying the principle. First, the foundations of the free-energy principle are introduced, and then its ability to empirically explain various brain functions and psychological and biological phenomena in terms of Bayesian inference is described. Mathematically, the dynamics of neural activity and plasticity that minimise a cost function can be cast as performing Bayesian inference that minimises variational free energy. This equivalence licenses the adoption of the free-energy principle as a universal characterisation of neural networks. Further, the neural network structure itself represents a generative model under which an agent operates. A virtue of this perspective is that it enables the formal association of neural network properties with prior beliefs that regulate inference and learning. The possible neuronal substrates that implement prior and posterior beliefs and how to empirically examine the theory are discussed. This perspective renders brain activity explainable, leading to a deeper understanding of the neuronal mechanisms underlying basic psychology and psychiatric disorders in terms of an implicit generative model.


Subject(s)
Models, Neurological , Neurophysiology , Algorithms , Bayes Theorem , Humans , Neurons/physiology
14.
Commun Biol ; 5(1): 55, 2022 01 14.
Article in English | MEDLINE | ID: mdl-35031656

ABSTRACT

This work considers a class of canonical neural networks comprising rate coding models, wherein neural activity and plasticity minimise a common cost function-and plasticity is modulated with a certain delay. We show that such neural networks implicitly perform active inference and learning to minimise the risk associated with future outcomes. Mathematical analyses demonstrate that this biological optimisation can be cast as maximisation of model evidence, or equivalently minimisation of variational free energy, under the well-known form of a partially observed Markov decision process model. This equivalence indicates that the delayed modulation of Hebbian plasticity-accompanied with adaptation of firing thresholds-is a sufficient neuronal substrate to attain Bayes optimal inference and control. We corroborated this proposition using numerical analyses of maze tasks. This theory offers a universal characterisation of canonical neural networks in terms of Bayesian belief updating and provides insight into the neuronal mechanisms underlying planning and adaptive behavioural control.


Subject(s)
Bayes Theorem , Markov Chains , Models, Neurological , Nerve Net/physiology , Behavior
15.
Nat Commun ; 12(1): 5712, 2021 09 29.
Article in English | MEDLINE | ID: mdl-34588436

ABSTRACT

Animals make decisions under the principles of reward value maximization and surprise minimization, but it remains unclear how these principles are represented in the brain and reflected in behavior. We addressed this question using a closed-loop virtual reality system to train adult zebrafish in active avoidance. Analysis of the neural activity of the dorsal pallium during training revealed neural ensembles assigning rules to the colors of the surrounding walls. Additionally, one-third of the fish generated another ensemble that becomes activated only when the actually perceived scenery deviates from the predicted favorable scenery. Fish with the latter ensemble escape more efficiently than fish with the former ensembles alone, even though both have successfully learned to escape, consistent with the hypothesis that the latter ensemble guides the zebrafish to take action to minimize this prediction error. Our results suggest that zebrafish can use both principles of goal-directed behavior, but with different behavioral consequences depending on the repertoire of the adopted principles.


Subject(s)
Avoidance Learning/physiology , Behavior, Animal/physiology , Neocortex/physiology , Reward , Zebrafish/physiology , Animals , Intravital Microscopy , Microscopy, Fluorescence, Multiphoton , Neocortex/cytology , Neural Networks, Computer , Neurons/physiology , Photic Stimulation/methods , Stereotaxic Techniques , Virtual Reality
16.
Sci Rep ; 9(1): 7127, 2019 05 09.
Article in English | MEDLINE | ID: mdl-31073206

ABSTRACT

Animals need to adjust their inferences according to the context they are in. This is required for the multi-context blind source separation (BSS) task, where an agent needs to infer hidden sources from their context-dependent mixtures. The agent is expected to invert this mixing process for all contexts. Here, we show that a neural network that implements the error-gated Hebbian rule (EGHR) with sufficiently redundant sensory inputs can successfully learn this task. After training, the network can perform the multi-context BSS without further updating synapses, by retaining memories of all experienced contexts. This demonstrates an attractive use of the EGHR for dimensionality reduction by extracting low-dimensional sources across contexts. Finally, if there is a common feature shared across contexts, the EGHR can extract it and generalize the task to even inexperienced contexts. The results highlight the utility of the EGHR as a model for perceptual adaptation in animals.

17.
Sci Rep ; 9(1): 6412, 2019 04 30.
Article in English | MEDLINE | ID: mdl-31040386

ABSTRACT

This paper considers the emergence of generalised synchrony in ensembles of coupled self-organising systems, such as neurons. We start from the premise that any self-organising system complies with the free energy principle, by virtue of placing an upper bound on its entropy. Crucially, the free energy principle allows one to interpret biological systems as inferring the state of their environment or external milieu. An emergent property of this inference is synchronisation among an ensemble of systems that infer each other. Here, we investigate the implications for neuronal dynamics by simulating neuronal networks, where each neuron minimises its free energy. We cast the ensuing ensemble dynamics in terms of inference and show that cardinal behaviours of neuronal networks, both in vivo and in vitro, can be explained by this framework. In particular, we test the hypotheses that (i) generalised synchrony is an emergent property of free energy minimisation, thereby explaining synchronisation in the resting brain; (ii) desynchronisation is induced by exogenous input, thereby explaining event-related desynchronisation; and (iii) structure learning emerges in response to causal structure in exogenous input, thereby explaining functional segregation in real neuronal systems.


Subject(s)
Brain/cytology , Entropy , Models, Neurological , Neurons/physiology , Brain/physiology , Computer Simulation
18.
Front Comput Neurosci ; 12: 83, 2018.
Article in English | MEDLINE | ID: mdl-30344485

ABSTRACT

Humans have flexible control over cognitive functions depending on context. Several studies suggest that the prefrontal cortex (PFC) controls this cognitive flexibility, but the detailed underlying mechanisms remain unclear. Recent developments in machine learning techniques allow simple PFC models, written as recurrent neural networks, to perform various behavioral tasks like humans and animals. Computational modeling allows the estimation of neuronal parameters that are crucial for performing the tasks but that cannot be observed in biological experiments. To identify salient neural-network features for flexible cognitive tasks, we compared four PFC models on a context-dependent integration task. After training the neural networks on the task, we observed highly plastic synapses localized to a small neuronal population in all models. In three of the models, the neuronal units containing these highly plastic synapses contributed most to the performance. No common tendencies were observed in the distribution of synaptic strengths among the four models. These results suggest that task-dependent plastic synaptic changes are more important for accomplishing flexible cognitive tasks than the structure of the constructed synaptic networks.

19.
Sci Rep ; 8(1): 16926, 2018 11 16.
Article in English | MEDLINE | ID: mdl-30446766

ABSTRACT

In this work, we address the neuronal encoding problem from a Bayesian perspective. Specifically, we ask whether neuronal responses in an in vitro neuronal network are consistent with ideal Bayesian observer responses under the free energy principle. In brief, we stimulated an in vitro cortical cell culture with stimulus trains that had a known statistical structure. We then asked whether recorded neuronal responses were consistent with variational message passing based upon free energy minimisation (i.e., evidence maximisation). Effectively, this required us to solve two problems: first, we had to formulate the Bayes-optimal encoding of the causes or sources of sensory stimulation, and then show that these idealised responses could account for observed electrophysiological responses. We describe a simulation of an optimal neural network (i.e., the ideal Bayesian neural code) and then consider the mapping from idealised in silico responses to recorded in vitro responses. Our objective was to find evidence for functional specialisation and segregation in the in vitro neural network that reproduced in silico learning via free energy minimisation. Finally, we combined the in vitro and in silico results to characterise learning in terms of trajectories in a variational information plane of accuracy and complexity.


Subject(s)
Models, Neurological , Neural Networks, Computer , Algorithms , Bayes Theorem , Humans , Learning , Markov Chains
20.
Sci Rep ; 8(1): 1835, 2018 01 30.
Article in English | MEDLINE | ID: mdl-29382868

ABSTRACT

We developed a biologically plausible unsupervised learning algorithm, the error-gated Hebbian rule (EGHR)-ß, that performs principal component analysis (PCA) and independent component analysis (ICA) in a single-layer feedforward neural network. If parameter ß = 1, it can extract the subspace spanned by the major principal components, similarly to Oja's subspace rule for PCA. If ß = 0, it can separate independent sources, similarly to the Bell-Sejnowski ICA rule, but without requiring the same number of input and output neurons. Unlike these engineering rules, the EGHR-ß can be easily implemented in a biological or neuromorphic circuit because it uses only local information available at each synapse. We analytically and numerically demonstrate the reliability of the EGHR-ß in extracting and separating major sources given high-dimensional input. By adjusting ß, the EGHR-ß can extract sources that are missed by the conventional engineering approach of first applying PCA and then ICA. Namely, the proposed rule can successfully extract hidden natural images even in the presence of dominant or non-Gaussian noise components. The results highlight the reliability and utility of the EGHR-ß for large-scale parallel computation of PCA and ICA and its future implementation in neuromorphic hardware.
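The ß = 1 behaviour referenced above mirrors Oja's subspace rule, sketched here as a point of comparison (the EGHR-ß update itself is given in the paper; this toy example uses illustrative data and learning rates):

```python
import numpy as np

rng = np.random.default_rng(3)

# 10-D data whose first two axes carry most of the variance.
n, k, T = 10, 2, 20000
stds = np.sqrt(np.array([5.0, 3.0] + [0.1] * (n - k)))
X = rng.normal(size=(T, n)) * stds

W = 0.1 * rng.normal(size=(k, n))
eta = 1e-3
for x in X:
    y = W @ x
    # Oja's subspace rule: Hebbian term minus a decorrelating term.
    W += eta * (np.outer(y, x) - np.outer(y, y) @ W)

# The rows of W should now span the top-k principal subspace,
# i.e. nearly all of their energy lies in the first two coordinates.
energy_in_top = np.linalg.norm(W[:, :k]) ** 2 / np.linalg.norm(W) ** 2
```

Like the EGHR-ß, this update uses only quantities local to each synapse (pre- and postsynaptic activity and the current weight), which is what makes such rules candidates for biological or neuromorphic implementation.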
