Results 1 - 20 of 26
1.
Artif Life ; : 1-22, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38913399

ABSTRACT

An embodied agent influences its environment and is influenced by it. We use the sensorimotor loop to model these interactions and quantify the information flows in the system by information-theoretic measures. This includes a measure for the interaction between the agent's body and its environment, often referred to as morphological computation. Additionally, we examine the controller complexity, which can be seen in the context of the integrated information theory of consciousness. Applying this framework to an experimental setting with simulated agents allows us to analyze the interaction between an agent and its environment, as well as the complexity of its controller. Previous research revealed that a morphology well adapted to a task can substantially reduce the required complexity of the controller. In this work, we observe that the agents first have to understand the relevant dynamics of the environment in order to interact well with their surroundings. Hence, an increased controller complexity can facilitate a better interaction between an agent's body and its environment.

2.
Entropy (Basel) ; 22(10), 2020 Sep 30.
Article in English | MEDLINE | ID: mdl-33286876

ABSTRACT

Complexity measures in the context of the Integrated Information Theory of consciousness try to quantify the strength of the causal connections between different neurons. This is done by minimizing the KL-divergence between a full system and one without causal cross-connections. Various measures have been proposed and compared in this setting. We discuss a class of information-geometric measures that aim at assessing the intrinsic causal cross-influences in a system. One promising candidate among these measures, denoted by Φ_CIS, is based on conditional independence statements and satisfies all of the properties that have been postulated as desirable. Unfortunately, it does not have a graphical representation, which makes it less intuitive and difficult to analyze. We propose an alternative approach using a latent variable, which models a common exterior influence. This leads to a measure Φ_CII, Causal Information Integration, that satisfies all of the required conditions. Our measure can be calculated using an iterative information-geometric algorithm, the em-algorithm. This allows us to compare its behavior to existing integrated information measures.
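For orientation, the construction described above can be written schematically as a divergence minimization over a manifold of "split" models without causal cross-connections (a generic information-geometric form; the specific manifolds underlying Φ_CIS and Φ_CII differ):

```latex
% Generic integrated-information measure: distance of the full system p
% from the closest model q without causal cross-connections.
\Phi(p) \;=\; \min_{q \in \mathcal{M}_{\mathrm{split}}} D_{\mathrm{KL}}(p \,\|\, q),
\qquad
D_{\mathrm{KL}}(p \,\|\, q) \;=\; \sum_{x} p(x)\,\log\frac{p(x)}{q(x)} .
```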

3.
Entropy (Basel) ; 21(4), 2019 Apr 24.
Article in English | MEDLINE | ID: mdl-33267149

ABSTRACT

A new canonical divergence is put forward for generalizing an information-geometric measure of complexity to both classical and quantum systems. On the simplex of probability measures, it is proved that the new divergence coincides with the Kullback-Leibler divergence, which is used to quantify how much a probability measure deviates from the non-interacting states modeled by exponential families of probabilities. On the space of positive density operators, we prove that the same divergence reduces to the quantum relative entropy, which quantifies the many-party correlations of a quantum state relative to a Gibbs family.
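For reference, the two special cases mentioned in the abstract are the classical Kullback-Leibler divergence on the probability simplex and the quantum relative entropy on density operators (standard definitions, stated here for orientation only):

```latex
% Classical Kullback-Leibler divergence
D_{\mathrm{KL}}(p \,\|\, q) \;=\; \sum_{x} p(x)\,\log\frac{p(x)}{q(x)},
\qquad
% Quantum relative entropy
S(\rho \,\|\, \sigma) \;=\; \operatorname{Tr}\!\bigl[\rho\,(\log\rho - \log\sigma)\bigr].
```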

4.
PLoS Comput Biol ; 11(9): e1004427, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26325254

ABSTRACT

We present a framework for designing cheap control architectures of embodied agents. Our derivation is guided by the classical problem of universal approximation, whereby we explore the possibility of exploiting the agent's embodiment for a new and more efficient universal approximation of behaviors generated by sensorimotor control. This embodied universal approximation is compared with the classical non-embodied universal approximation. To exemplify our approach, we present a detailed quantitative case study for policy models defined in terms of conditional restricted Boltzmann machines. In contrast to non-embodied universal approximation, which requires an exponential number of parameters, in the embodied setting we are able to generate all possible behaviors with a drastically smaller model, thus obtaining cheap universal approximation. We test and corroborate the theory experimentally with a six-legged walking machine. The experiments indicate that the controller complexity predicted by our theory is close to the minimal sufficient value, which means that the theory has direct practical implications.


Subject(s)
Biological Models, Neurological Models, Computer Neural Networks, Robotics/methods, Feedback, Motion (Physics)
5.
Chaos ; 24(1): 013136, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24697398

ABSTRACT

We quantify, in information-theoretic terms, the relationship between the dynamics of a time-discrete dynamical system, the tent map T and its iterates T^m, and the induced dynamics at a symbolic level. The symbol dynamics, given by a binary string s of length m, is obtained by choosing a partition point α and lumping together the points x for which T^i(x) concurs with the (i−1)-th digit of s, i.e., we apply a so-called threshold-crossing technique. Interpreting the original dynamics and the symbolic one as different levels, this allows us to quantitatively evaluate and compare various closure measures that have been proposed for identifying emergent macro-levels of a dynamical system. In particular, we can see how these measures depend on the choice of the partition point α. As the main benefit of this new information-theoretic approach, we obtain all Markov partitions with full support of the time-discrete dynamical system induced by the tent map. Furthermore, we derive an example of a Markovian symbol dynamics whose underlying partition is not Markovian at all, and even a whole hierarchy of Markovian symbol dynamics.
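A minimal sketch of the threshold-crossing symbolization described above; the variable names and the convention that a digit is 1 when an iterate lies at or above the partition point α are illustrative assumptions, not taken from the paper:

```python
def tent_map(x: float) -> float:
    """One application of the tent map T on the unit interval."""
    return 2 * x if x < 0.5 else 2 * (1 - x)


def symbolize(x: float, alpha: float, m: int) -> str:
    """Threshold-crossing symbolization: for the first m iterates of x,
    record whether each lies at or above (1) or below (0) the partition
    point alpha."""
    digits = []
    for _ in range(m):
        digits.append("1" if x >= alpha else "0")
        x = tent_map(x)
    return "".join(digits)


# Example: the binary string of length 8 induced by x = 0.3 with alpha = 0.5
print(symbolize(0.3, alpha=0.5, m=8))
```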

6.
Neural Netw ; 155: 574-591, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36208615

ABSTRACT

Helmholtz Machines (HMs) are a class of generative models composed of two Sigmoid Belief Networks (SBNs), acting respectively as an encoder and a decoder. These models are commonly trained using a two-step optimization algorithm called Wake-Sleep (WS) and, more recently, by improved versions such as Reweighted Wake-Sleep (RWS) and Bidirectional Helmholtz Machines (BiHM). The locality of the connections in an SBN induces sparsity in the Fisher Information Matrices associated with the probabilistic models, in the form of a fine-grained block-diagonal structure. In this paper we exploit this property to efficiently train SBNs and HMs using the natural gradient. We present a novel algorithm, called Natural Reweighted Wake-Sleep (NRWS), that corresponds to the geometric adaptation of its standard version. In a similar manner, we also introduce the Natural Bidirectional Helmholtz Machine (NBiHM). In contrast to previous work, we show how, for HMs, the natural gradient can be efficiently computed without the need to introduce any approximation in the structure of the Fisher information matrix. The experiments performed on standard datasets from the literature show a consistent improvement of NRWS and NBiHM not only with respect to their non-geometric baselines but also with respect to state-of-the-art training algorithms for HMs. The improvement is quantified in terms of both the speed of convergence and the value of the log-likelihood reached after training.
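For context, a natural-gradient step replaces the Euclidean gradient with its Fisher-preconditioned counterpart (generic form shown here; the block-diagonal structure exploited by NRWS is what makes handling F tractable for SBNs):

```latex
% Generic natural-gradient update with learning rate \eta
\theta_{t+1} \;=\; \theta_t \;-\; \eta\, F(\theta_t)^{-1}\, \nabla_{\theta}\mathcal{L}(\theta_t),
\qquad
F(\theta) \;=\; \mathbb{E}_{x \sim p_{\theta}}\!\left[ \nabla_{\theta}\log p_{\theta}(x)\, \nabla_{\theta}\log p_{\theta}(x)^{\top} \right].
```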


Subject(s)
Algorithms, Sleep, Statistical Models, Probability
7.
Neural Comput ; 23(5): 1306-19, 2011 May.
Article in English | MEDLINE | ID: mdl-21299421

ABSTRACT

We improve recently published results about the resources of restricted Boltzmann machines (RBMs) and deep belief networks (DBNs) required to make them universal approximators. We show that any distribution p on the set {0,1}^n of binary vectors of length n can be arbitrarily well approximated by an RBM with k − 1 hidden units, where k is the minimal number of pairs of binary vectors differing in only one entry such that their union contains the support set of p. In important cases this number is half the cardinality of the support set of p (given in Le Roux & Bengio, 2008). We construct a DBN with 2^n/(2(n − b)), b ∼ log n, hidden layers of width n that is capable of approximating any distribution on {0,1}^n arbitrarily well. This confirms a conjecture presented in Le Roux and Bengio (2010).
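As a worked instance of the bound (assuming the "important case" in which k equals half the cardinality of the support set): a distribution with full support on {0,1}^n has 2^n support points, so the required number of hidden units is

```latex
k - 1 \;=\; \tfrac{1}{2}\cdot 2^{n} \;-\; 1 \;=\; 2^{\,n-1} - 1 .
```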


Subject(s)
Artificial Intelligence, Cognition/physiology, Culture, Computer Neural Networks, Algorithms, Computer Simulation/standards, Humans, Mathematical Concepts, Theoretical Models
8.
Chaos ; 21(3): 037103, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21974666

ABSTRACT

We develop a geometric approach to complexity based on the principle that complexity requires interactions at different scales of description. Complex systems are more than the sum of their parts of any size and not just more than the sum of their elements. Using information geometry, we therefore analyze the decomposition of a system in terms of an interaction hierarchy. In mathematical terms, we present a theory of complexity measures for finite random fields using the geometric framework of hierarchies of exponential families. Within our framework, previously proposed complexity measures find their natural place and gain a new interpretation.
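One measure that fits naturally into such a hierarchy is the multi-information, the divergence of the joint distribution from the family of factorized (non-interacting) distributions; the hierarchy of exponential families generalizes this to interactions of higher order (standard definition shown for orientation):

```latex
% Multi-information: divergence from the best factorized approximation
I(X_1,\dots,X_n) \;=\; \min_{q \in \mathcal{E}_1} D_{\mathrm{KL}}(p \,\|\, q)
\;=\; \sum_{i=1}^{n} H(X_i) \;-\; H(X_1,\dots,X_n),
```

where E_1 denotes the exponential family of product distributions.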

9.
Front Psychol ; 12: 716433, 2021.
Article in English | MEDLINE | ID: mdl-34912262

ABSTRACT

The Integrated Information Theory provides a quantitative approach to consciousness and can be applied to neural networks. An embodied agent controlled by such a network influences and is influenced by its environment. This involves, on the one hand, morphological computation within goal-directed action and, on the other hand, integrated information within the controller, the agent's brain. In this article, we combine different methods in order to examine the information flows among and within the body, the brain, and the environment of an agent. This allows us to relate various information flows to each other. We test this framework in a simple experimental setup. There, we calculate the optimal policy for goal-directed behavior based on the "planning as inference" method, in which the information-geometric em-algorithm is used to optimize the likelihood of the goal. Morphological computation and integrated information are then calculated with respect to the optimal policies. Comparing the dynamics of these measures under changing morphological circumstances highlights the antagonistic relationship between these two concepts: the more morphological computation is involved, the less information integration within the brain is required. In order to determine the influence of the brain on the behavior of the agent, it is necessary to additionally measure the information flow to and from the brain.

10.
Theory Biosci ; 139(4): 309-318, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33263925

ABSTRACT

A core property of robust systems is given by the invariance of their function against the removal of some of their structural components. This intuition has been formalised in the context of input-output maps, thereby introducing the notion of exclusion independence. We review work on how this formalisation allows us to derive characterisation theorems that provide a basis for the design of robust systems.

11.
Theory Biosci ; 139(2): 209-223, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32212028

ABSTRACT

Despite the near universal assumption of individuality in biology, there is little agreement about what individuals are and few rigorous quantitative methods for their identification. Here, we propose that individuals are aggregates that preserve a measure of temporal integrity, i.e., "propagate" information from their past into their futures. We formalize this idea using information theory and graphical models. This mathematical formulation yields three principled and distinct forms of individuality (an organismal, a colonial, and a driven form), each of which varies in the degree of environmental dependence and inherited information. This approach can be thought of as a Gestalt approach to evolution, where selection makes figure-ground (agent-environment) distinctions using suitable information-theoretic lenses. A benefit of the approach is that it expands the scope of allowable individuals to include adaptive aggregations in systems that are multi-scale, highly distributed, and do not necessarily have physical boundaries such as cell walls or clonal somatic tissue. Such individuals might be visible to selection but hard to detect by observers without suitable measurement principles. The information theory of individuality allows for the identification of individuals at all levels of organization, from molecular to cultural, provides a basis for testing assumptions about the natural scales of a system, and argues for the importance of uncertainty reduction through coarse-graining in adaptive systems.


Subject(s)
Biological Evolution, Individuality, Information Theory, Phenotype, Algorithms, Behavior, Humans, Biological Models, Stochastic Processes, Uncertainty
12.
Biosystems ; 91(2): 331-45, 2008 Feb.
Article in English | MEDLINE | ID: mdl-17897774

ABSTRACT

We present a tentative proposal for a quantitative measure of autonomy. This is something that, surprisingly, is rarely found in the literature, even though autonomy is considered to be a basic concept in many disciplines, including artificial life. We work in an information-theoretic setting for which the distinction between system and environment is the starting point. As a first measure of autonomy, we propose the conditional mutual information between consecutive states of the system, conditioned on the history of the environment. This works well when the system cannot influence the environment at all and the environment does not interact synergistically with the system. When, in contrast, the system has full control over its environment, we should instead neglect the environment history and simply take the mutual information between consecutive system states as a measure of autonomy. In the case of mutual interaction between system and environment, there remains an ambiguity regarding whether the system or the environment has caused observed correlations. If the interaction structure of the system is known, we define a "causal" autonomy measure which allows this ambiguity to be resolved. Synergistic interactions still pose a problem, since in this case causation cannot be attributed to the system or the environment alone. Moreover, our analysis reveals some subtle facets of the concept of autonomy, in particular with respect to the seemingly innocent system-environment distinction we took for granted, and raises the issue of the attribution of control, i.e., the responsibility for observed effects. To further explore these issues, we evaluate our autonomy measure for simple automata, an agent moving in space, gliders in the Game of Life, and the tessellation automaton for autopoiesis of Varela et al. [Varela, F.J., Maturana, H.R., Uribe, R., 1974. Autopoiesis: the organization of living systems, its characterization and a model. BioSystems 5, 187-196].
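Schematically, the two limiting cases described above can be written as follows (the notation S_t for the system state and E_{≤t} for the environment history is illustrative):

```latex
% Autonomy when the system cannot influence its environment:
A_1 \;=\; I\bigl(S_{t+1};\, S_t \,\bigm|\, E_{\le t}\bigr),
\qquad
% Autonomy when the system has full control over its environment:
A_2 \;=\; I\bigl(S_{t+1};\, S_t\bigr).
```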


Subject(s)
Cognition/physiology, Information Theory, Intention, Life, Biological Models, Personal Autonomy, Volition/physiology, Animals, Humans, Terminology as Topic
13.
Biosystems ; 89(1-3): 190-7, 2007.
Article in English | MEDLINE | ID: mdl-17188422

ABSTRACT

It has been argued that information processing in the cortex is optimised with regard to certain information-theoretic principles. We have, for instance, recently shown that spike-timing dependent plasticity can improve an information-theoretic measure called spatio-temporal stochastic interaction, which captures how strongly a set of neurons cooperates in space and time. Systems with high stochastic interaction exhibit Poisson spike trains but nonetheless occupy only a strongly reduced area in their global phase space; they show repeating but complex global activation patterns, and they can be interpreted as computational systems operating on selected sets of collective patterns or "global states" in a rule-like manner. In the present work we investigate stochastic interaction in high-resolution EEG data from cat auditory cortex. Using Kohonen maps to reduce the high-dimensional dynamics of the system, we are able to detect repeating system states and estimate the stochastic interaction in the data, which turns out to be fairly high. This suggests an organised cooperation in the underlying neural networks that generate the data and may reflect generic intrinsic computational capabilities of the cortex.


Subject(s)
Auditory Cortex/physiology, Electroencephalography/methods, Stochastic Processes, Animals, Cats
14.
Theory Biosci ; 134(3-4): 105-16, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26650201

ABSTRACT

We consider a general model of the sensorimotor loop of an agent interacting with the world. This formalises Uexküll's notion of a function-circle. Here, we assume a particular causal structure, mechanistically described in terms of Markov kernels. In this generality, we define two σ-algebras of events in the world that describe two respective perspectives: (1) the perspective of an external observer, (2) the intrinsic perspective of the agent. Not all aspects of the world, seen from the external perspective, are accessible to the agent. This is expressed by the fact that the second σ-algebra is a subalgebra of the first one. We propose the smaller one as formalisation of Uexküll's Umwelt concept. We show that, under continuity and compactness assumptions, the global dynamics of the world can be simplified without changing the internal process. This simplification can serve as a minimal world model that the system must have in order to be consistent with the internal process.


Subject(s)
Ecosystem, Sensory Feedback/physiology, Biological Models, Movement/physiology, Psychomotor Performance, Sensation/physiology, Animals, Computer Simulation, Humans, Statistical Models, Psychomotor Performance/physiology
15.
PLoS One ; 10(10): e0139475, 2015.
Article in English | MEDLINE | ID: mdl-26427059

ABSTRACT

We propose a model that explains the reliable emergence of power laws (e.g., Zipf's law) during the development of different human languages. The model incorporates the principle of least effort in communication, minimizing a combination of the information-theoretic communication inefficiency and the direct signal cost. We prove a general relationship, for all optimal languages, between the signal cost distribution and the resulting distribution of signals. Zipf's law then emerges for logarithmic signal cost distributions, which is the cost structure expected for words constructed from letters or phonemes.


Subject(s)
Communication, Language, Theoretical Models, Humans, Information Theory, Time Factors
16.
Neural Netw ; 16(10): 1483-97, 2003 Dec.
Article in English | MEDLINE | ID: mdl-14622878

ABSTRACT

Spatial interdependences of multiple stochastic units can be suitably quantified by the Kullback-Leibler divergence of the joint probability distribution from the corresponding factorized distribution. In the present paper, a generalized measure for stochastic interaction, which also captures temporal interdependences, is analysed within the setting of Markov chains. The dynamical properties of systems with strongly interacting stochastic units are analytically studied and illustrated by computer simulations. In particular, the emergence of determinism in such systems is demonstrated.
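For reference, the spatial measure referred to in the first sentence can be written as follows (standard form; the temporal generalization analysed in the paper additionally takes the transition structure of the Markov chain into account):

```latex
% Spatial stochastic interaction: divergence of the joint distribution
% from the corresponding factorized distribution
I(X_1,\dots,X_n) \;=\; D_{\mathrm{KL}}\!\Bigl( p(x_1,\dots,x_n) \,\Bigm\|\, \textstyle\prod_{i} p(x_i) \Bigr)
\;=\; \sum_{i} H(X_i) \;-\; H(X_1,\dots,X_n).
```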


Subject(s)
Computer Simulation, Markov Chains, Computer Neural Networks, Nonlinear Dynamics, Stochastic Processes, Humans, Biological Models
17.
Theory Biosci ; 133(2): 63-78, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24045959

ABSTRACT

We study a notion of knockout robustness of a stochastic map (Markov kernel) that describes a system of several input random variables and one output random variable. Robustness requires that the behaviour of the system does not change if one or several of the input variables are knocked out. Gibbs potentials are used to give a mechanistic description of the behaviour of the system after knockouts. Robustness imposes structural constraints on these potentials. We show that robust systems can be described in terms of suitable interaction families of Gibbs potentials, which allows us to address the problem of systems design. Robustness is also characterized by conditional independence constraints on the joint distribution of input and output. The set of all probability distributions corresponding to robust systems can be decomposed into a finite union of components, and we find parametrizations of the components.


Subject(s)
Algorithms, Biological Models, Statistical Models, Computer-Assisted Numerical Analysis, Stochastic Processes, Systems Biology/methods, Animals, Computer Simulation, Humans
18.
PLoS One ; 8(5): e63400, 2013.
Article in English | MEDLINE | ID: mdl-23723979

ABSTRACT

Information theory is a powerful tool for expressing principles that drive autonomous systems, because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predictive information (TiPI), which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies with high-dimensional robotic systems. We show spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded into. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality which prevents learning systems from scaling well.
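For reference, the predictive information of the sensor process is the mutual information between its past and its future (standard definition; the time-local variant TiPI introduced in the paper restricts this to a short window around the current time step):

```latex
% Predictive information (excess entropy) of a sensor process S
\mathrm{PI} \;=\; I\bigl(S_{\mathrm{past}};\, S_{\mathrm{future}}\bigr)
\;=\; H\bigl(S_{\mathrm{future}}\bigr) \;-\; H\bigl(S_{\mathrm{future}} \mid S_{\mathrm{past}}\bigr).
```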


Subject(s)
Robotics/methods, Algorithms, Animals, Artificial Intelligence, Animal Behavior, Computer Simulation, Entropy, Information Theory, Markov Chains, Nonlinear Dynamics, Personal Autonomy
19.
Front Psychol ; 4: 801, 2013.
Article in English | MEDLINE | ID: mdl-24204351

ABSTRACT

One of the main challenges in the field of embodied artificial intelligence is the open-ended autonomous learning of complex behaviors. Our approach is to use task-independent, information-driven intrinsic motivation(s) to support task-dependent learning. The work presented here is a preliminary step in which we investigate the predictive information (the mutual information between the past and the future of the sensor stream) as an intrinsic drive, ideally supporting any kind of task acquisition. Previous experiments have shown that the predictive information (PI) is a good candidate to support autonomous, open-ended learning of complex behaviors, because a maximization of the PI corresponds to an exploration of morphology- and environment-dependent behavioral regularities. The idea is that these regularities can then be exploited in order to solve any given task. Three different experiments are presented, and their results lead to the conclusion that the linear combination of the one-step PI with an external reward function is not generally recommended in an episodic policy gradient setting. Only for hard tasks can a great speed-up be achieved, at the cost of a loss in asymptotic performance.

20.
Brain Connect ; 3(3): 223-39, 2013.
Article in English | MEDLINE | ID: mdl-23402339

ABSTRACT

Two aspects play a key role in recently developed strategies for functional magnetic resonance imaging (fMRI) data analysis. First, it is now recognized that the human brain is a complex adaptive system and exhibits the hallmarks of complexity, such as the emergence of patterns arising out of a multitude of interactions between its many constituents. Second, the field of fMRI has evolved into a data-intensive, big-data endeavor, with large databases and masses of data being shared around the world. At the same time, ultra-high-field MRI scanners are now available, producing data at previously unobtainable quality and quantity. Both aspects have led to shifts in the way in which we view fMRI data. Here, we review recent developments in fMRI data analysis methodology that resulted from these shifts in paradigm.


Subject(s)
Brain Mapping, Brain/blood supply, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Statistical Data Interpretation, Entropy, Humans, Information Theory, Principal Component Analysis