ABSTRACT
Originally developed as a theory of consciousness, integrated information theory provides a mathematical framework to quantify the causal irreducibility of systems and of subsets of units within a system. Specifically, mechanism integrated information quantifies how much of the causal power of a subset of units in a state, also referred to as a mechanism, cannot be accounted for by its parts. If the causal powers of a mechanism can be fully explained by its parts, it is reducible and its integrated information is zero. Here, we study the upper bound of this measure and how it is achieved. We consider mechanisms in isolation, groups of mechanisms, and groups of causal relations among mechanisms. We put forward new theoretical results showing that mechanisms that share parts with each other cannot all achieve their maximum. We also introduce techniques for designing systems that maximize the integrated information of a subset of their mechanisms or relations. Our results can potentially be used to exploit symmetries and constraints to significantly reduce the required computations, and to compare different connectivity profiles in terms of their maximal achievable integrated information.
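The notion of reducibility described above can be illustrated with a minimal toy sketch. The example below uses two independent binary COPY units, a hypothetical setup chosen for simplicity, and a total-variation distance between the whole and partitioned effect repertoires as a crude stand-in for integrated information; IIT's actual measure uses the intrinsic difference and an optimization over partitions, which this sketch does not implement.

```python
import numpy as np
from itertools import product

# Toy system: two independent binary COPY units A and B, where
# next_state(unit) = current_state(unit). Because the units do not
# interact, the mechanism {A, B} is fully reducible to its parts.

def effect_repertoire(state, units):
    """Distribution over the next states of `units`, given the current
    state of `units`, for independent deterministic COPY gates."""
    dist = {}
    for nxt in product([0, 1], repeat=len(units)):
        # deterministic copy: probability 1 iff next state == current state
        p = 1.0
        for u, s in zip(units, nxt):
            p *= 1.0 if s == state[u] else 0.0
        dist[nxt] = p
    return dist

state = (1, 0)
whole = effect_repertoire(state, (0, 1))

# Partition {A} / {B}: product of the parts' independent repertoires
part_a = effect_repertoire(state, (0,))
part_b = effect_repertoire(state, (1,))
partitioned = {(a[0], b[0]): pa * pb
               for a, pa in part_a.items()
               for b, pb in part_b.items()}

# Proxy for integrated information: total variation distance between
# the whole and partitioned repertoires (zero => reducible mechanism)
phi = 0.5 * sum(abs(whole[s] - partitioned[s]) for s in whole)
print(phi)  # 0.0 for this reducible mechanism
```

Because the partitioned repertoire reproduces the whole exactly, the distance is zero, matching the abstract's claim that a mechanism whose causal powers are fully explained by its parts has zero integrated information. Coupling the units (e.g., making each unit an XOR of both) would make the product repertoire diverge from the whole.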
Subjects
Computational Biology, Information Theory, Computational Biology/methods, Humans, Consciousness/physiology, Neurological Models, Algorithms, Computer Simulation
ABSTRACT
This paper presents Integrated Information Theory (IIT) 4.0. IIT aims to account for the properties of experience in physical (operational) terms. It identifies the essential properties of experience (axioms), infers the necessary and sufficient properties that its substrate must satisfy (postulates), and expresses them in mathematical terms. In principle, the postulates can be applied to any system of units in a state to determine whether it is conscious, to what degree, and in what way. IIT offers a parsimonious explanation of empirical evidence, makes testable predictions concerning both the presence and the quality of experience, and permits inferences and extrapolations. IIT 4.0 incorporates several developments of the past ten years, including a more accurate formulation of the axioms as postulates and mathematical expressions, the introduction of a unique measure of intrinsic information that is consistent with the postulates, and an explicit assessment of causal relations. By fully unfolding a system's irreducible cause-effect power, the distinctions and relations specified by a substrate can account for the quality of experience.
Subjects
Brain, Information Theory, Neurological Models, Consciousness
ABSTRACT
Integrated information theory (IIT) starts from consciousness itself and identifies a set of properties (axioms) that are true of every conceivable experience. The axioms are translated into a set of postulates about the substrate of consciousness (called a complex), which are then used to formulate a mathematical framework for assessing both the quality and quantity of experience. The explanatory identity proposed by IIT is that an experience is identical to the cause-effect structure unfolded from a maximally irreducible substrate (a Φ-structure). In this work we introduce a definition of the integrated information of a system (φs) that is based on the existence, intrinsicality, information, and integration postulates of IIT. We explore how notions of determinism, degeneracy, and fault lines in the connectivity impact system integrated information. We then demonstrate how the proposed measure identifies complexes as systems whose φs is greater than the φs of any overlapping candidate system.
ABSTRACT
Augmenting neural networks with skip connections, as introduced in the so-called ResNet architecture, surprised the community by enabling the training of networks with more than 1,000 layers and significant performance gains. This paper deciphers ResNet by analyzing the effect of skip connections, and puts forward new theoretical results on the advantages of identity skip connections in neural networks. We prove that the skip connections in residual blocks facilitate preserving the norm of the gradient and lead to stable back-propagation, which is desirable from an optimization perspective. We also show that, perhaps surprisingly, as more residual blocks are stacked, the norm-preservation of the network is enhanced. Our theoretical arguments are supported by extensive empirical evidence. Can we push for extra norm-preservation? We answer this question by proposing an efficient method to regularize the singular values of the convolution operator, making ResNet's transition layers extra norm-preserving. Our numerical investigations demonstrate that the learning dynamics and the classification performance of ResNet can be improved by making it even more norm-preserving. Our results and the introduced modification to ResNet, referred to as Procrustes ResNets, can serve as a guide for training deeper networks and can also inspire new deeper architectures.
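The two ideas in this abstract can be sketched numerically with dense matrices (a simplified analogue; the paper concerns convolution operators in actual networks, which this toy example does not reproduce). First, the Jacobian of a residual block y = x + F(x) is I + J_F, which stays close to the identity when the residual branch is small, so back-propagated gradient norms are approximately preserved; a plain layer's Jacobian carries no such guarantee. Second, projecting a weight matrix onto the nearest orthogonal matrix via its SVD, the classic orthogonal Procrustes solution, forces all singular values to 1, which is a dense-matrix caricature of the singular-value regularization named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64

# --- Norm preservation through a residual block ---------------------
# Residual block Jacobian: I + J_F, with a small residual-branch Jacobian
J_F = 0.01 * rng.standard_normal((d, d))
J_res = np.eye(d) + J_F
# Plain (non-residual) layer Jacobian for comparison
J_plain = rng.standard_normal((d, d)) / np.sqrt(d)

g = rng.standard_normal(d)  # incoming gradient during back-propagation
ratio_res = np.linalg.norm(J_res.T @ g) / np.linalg.norm(g)
ratio_plain = np.linalg.norm(J_plain.T @ g) / np.linalg.norm(g)
print(ratio_res)    # close to 1: the residual block preserves the norm
print(ratio_plain)  # no such guarantee for a plain layer

# --- Procrustes-style orthogonalization ------------------------------
# Nearest orthogonal matrix to W: replace singular values with 1
W = rng.standard_normal((d, d))
U, s, Vt = np.linalg.svd(W)
W_orth = U @ Vt
print(np.linalg.svd(W_orth, compute_uv=False)[:3])  # all singular values = 1
```

The orthogonal factor U @ Vt is exactly norm-preserving for every input vector, which is the property the transition-layer modification aims for; in the paper this is applied to the linear operator induced by convolutions rather than to an arbitrary dense matrix.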