Results 1 - 16 of 16
1.
Proc Natl Acad Sci U S A ; 119(35): e2121338119, 2022 08 30.
Article in English | MEDLINE | ID: mdl-35994661

ABSTRACT

Precisely how humans process relational patterns of information in knowledge, language, music, and society is not well understood. Prior work in the field of statistical learning has demonstrated that humans process such information by building internal models of the underlying network structure. However, these mental maps are often inaccurate due to limitations in human information processing. The existence of such limitations raises clear questions: Given a target network that one wishes for a human to learn, what network should one present to the human? Should one simply present the target network as-is, or should one emphasize certain parts of the network to proactively mitigate expected errors in learning? To investigate these questions, we study the optimization of network learnability in a computational model of human learning. Evaluating an array of synthetic and real-world networks, we find that learnability is enhanced by reinforcing connections within modules or clusters. In contrast, when networks contain significant core-periphery structure, we find that learnability is best optimized by reinforcing peripheral edges between low-degree nodes. Overall, our findings suggest that the accuracy of human network learning can be systematically enhanced by targeted emphasis and de-emphasis of prescribed sectors of information.


Subjects
Computer Simulation; Knowledge; Learning; Models, Psychological; Humans; Language; Music; Reinforcement, Psychology
2.
Proc Natl Acad Sci U S A ; 118(32)2021 08 10.
Article in English | MEDLINE | ID: mdl-34349019

ABSTRACT

Many complex networks depend upon biological entities for their preservation. Such entities, from human cognition to evolution, must first encode and then replicate those networks under marked resource constraints. Networks that survive are those that are amenable to constrained encoding, or, in other words, those that are compressible. But how compressible is a network? And what features make one network more compressible than another? Here, we answer these questions by modeling networks as information sources before compressing them using rate-distortion theory. Each network yields a unique rate-distortion curve, which specifies the minimal amount of information that remains at a given scale of description. A natural definition then emerges for the compressibility of a network: the amount of information that can be removed via compression, averaged across all scales. Analyzing an array of real and model networks, we demonstrate that compressibility increases with two common network properties: transitivity (or clustering) and degree heterogeneity. These results indicate that hierarchical organization, which is characterized by modular structure and heterogeneous degrees, facilitates compression in complex networks. Generally, our framework sheds light on the interplay between a network's structure and its capacity to be compressed, enabling investigations into the role of compression in shaping real-world networks.
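As a concrete illustration, the two structural properties the abstract links to compressibility, transitivity and degree heterogeneity, can be computed directly from an adjacency structure. The following is a minimal pure-Python sketch (not the authors' rate-distortion pipeline), with the graph given as a dict mapping each node to its set of neighbors:

```python
from itertools import combinations

def transitivity(adj):
    """Global clustering coefficient: 3 * triangles / connected triples.

    Each triangle is counted once per vertex acting as center, which
    already supplies the factor of 3 in the numerator.
    """
    closed = 0
    triples = 0
    for v, nbrs in adj.items():
        k = len(nbrs)
        triples += k * (k - 1) // 2          # paths of length 2 centered at v
        for a, b in combinations(nbrs, 2):
            if b in adj[a]:                   # neighbors of v are themselves linked
                closed += 1
    return closed / triples if triples else 0.0

def degree_heterogeneity(adj):
    """Second-moment ratio <k^2> / <k>^2: equals 1 for regular graphs
    and grows as the degree distribution becomes more heterogeneous."""
    degs = [len(nbrs) for nbrs in adj.values()]
    n = len(degs)
    mean = sum(degs) / n
    second = sum(d * d for d in degs) / n
    return second / (mean * mean)
```

For example, a triangle with one pendant node has transitivity 3/5, and a star is more degree-heterogeneous than a ring of the same size.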


Subjects
Computer Communication Networks; Data Compression; Models, Theoretical; Algorithms; Cluster Analysis; Community Networks; Humans; Random Allocation
3.
Proc Natl Acad Sci U S A ; 118(47)2021 11 23.
Article in English | MEDLINE | ID: mdl-34789565

ABSTRACT

Living systems break detailed balance at small scales, consuming energy and producing entropy in the environment to perform molecular and cellular functions. However, it remains unclear how broken detailed balance manifests at macroscopic scales and how such dynamics support higher-order biological functions. Here we present a framework to quantify broken detailed balance by measuring entropy production in macroscopic systems. We apply our method to the human brain, an organ whose immense metabolic consumption drives a diverse range of cognitive functions. Using whole-brain imaging data, we demonstrate that the brain nearly obeys detailed balance when at rest, but strongly breaks detailed balance when performing physically and cognitively demanding tasks. Using a dynamic Ising model, we show that these large-scale violations of detailed balance can emerge from fine-scale asymmetries in the interactions between elements, a known feature of neural systems. Together, these results suggest that violations of detailed balance are vital for cognition and provide a general tool for quantifying entropy production in macroscopic systems.
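For a discrete-state trajectory, broken detailed balance can be quantified with a standard plug-in estimator of the entropy production rate: the KL divergence rate between forward and time-reversed transition fluxes. This sketch illustrates the general idea rather than the authors' exact neuroimaging method:

```python
import math

def entropy_production_rate(counts):
    """Estimate the irreversibility of a stationary discrete-state trajectory.

    `counts[i][j]` holds the number of observed i -> j transitions.
    The estimator is sigma = sum_ij J_ij * ln(J_ij / J_ji), where
    J_ij = counts[i][j] / total; sigma is non-negative and vanishes
    exactly when detailed balance (J_ij == J_ji) holds. Pairs with a
    zero reverse count are skipped (they would make sigma infinite).
    """
    total = sum(sum(row) for row in counts)
    n = len(counts)
    sigma = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and counts[i][j] > 0 and counts[j][i] > 0:
                j_fwd = counts[i][j] / total
                j_rev = counts[j][i] / total
                sigma += j_fwd * math.log(j_fwd / j_rev)
    return sigma
```

A symmetric count matrix gives zero entropy production, while a net circulation around states (as in a driven cycle) gives a strictly positive value.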


Subjects
Brain/physiology; Entropy; Cell Physiological Phenomena; Cognitive Neuroscience; Humans; Models, Biological
4.
Phys Rev Lett ; 129(11): 118101, 2022 Sep 09.
Article in English | MEDLINE | ID: mdl-36154397

ABSTRACT

We show that the evidence for a local arrow of time, which is equivalent to the entropy production in thermodynamic systems, can be decomposed. In a system with many degrees of freedom, there is a term that arises from the irreversible dynamics of the individual variables, and then a series of non-negative terms contributed by correlations among pairs, triplets, and higher-order combinations of variables. We illustrate this decomposition on simple models of noisy logical computations, and then apply it to the analysis of patterns of neural activity in the retina as it responds to complex dynamic visual scenes. We find that neural activity breaks detailed balance even when the visual inputs do not, and that this irreversibility arises primarily from interactions between pairs of neurons.


Subjects
Neurons; Entropy; Neurons/physiology
5.
PLoS Comput Biol ; 16(9): e1008144, 2020 09.
Article in English | MEDLINE | ID: mdl-32886673

ABSTRACT

At the macroscale, the brain operates as a network of interconnected neuronal populations, which display coordinated rhythmic dynamics that support interareal communication. Understanding how stimulation of different brain areas impacts such activity is important for gaining basic insights into brain function and for further developing therapeutic neuromodulation. However, the complexity of brain structure and dynamics hinders predictions regarding the downstream effects of focal stimulation. More specifically, little is known about how the collective oscillatory regime of brain network activity, in concert with network structure, affects the outcomes of perturbations. Here, we combine human connectome data and biophysical modeling to begin filling these gaps. By tuning parameters that control collective system dynamics, we identify distinct states of simulated brain activity and investigate how the distributed effects of stimulation manifest at different dynamical working points. When baseline oscillations are weak, the stimulated area exhibits enhanced power and frequency, and due to network interactions, activity in this excited frequency band propagates to nearby regions. Notably, beyond these linear effects, we further find that focal stimulation causes more distributed modifications to interareal coherence in a band containing regions' baseline oscillation frequencies. Importantly, depending on the dynamical state of the system, these broadband effects can be better predicted by functional rather than structural connectivity, emphasizing a complex interplay between anatomical organization, dynamics, and response to perturbation. In contrast, when the network operates in a regime of strong regional oscillations, stimulation causes only slight shifts in power and frequency, and structural connectivity becomes most predictive of stimulation-induced changes in network activity patterns.
In sum, this work builds upon and extends previous computational studies investigating the impacts of stimulation, and underscores the fact that both the stimulation site, and, crucially, the regime of brain network dynamics, can influence the network-wide responses to local perturbations.


Subjects
Brain/physiology; Connectome; Models, Neurological; Humans; Neurons/physiology
7.
Phys Rev E ; 109(4-1): 044305, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38755869

ABSTRACT

Humans are exposed to sequences of events in the environment, and the interevent transition probabilities in these sequences can be modeled as a graph or network. Many real-world networks are organized hierarchically, and while much is known about how humans learn basic transition graph topology, whether and to what degree humans can learn hierarchical structures in such graphs remains unknown. We probe the mental estimates of transition probabilities via the surprisal effect phenomenon: humans react more slowly to less expected transitions. Using mean-field predictions and numerical simulations, we show that surprisal effects are stronger for finer-level than coarser-level hierarchical transitions, and that surprisal effects at coarser levels are difficult to detect for limited learning times or in small samples. Using a serial response experiment with human participants (n=100), we confirm our predictions by detecting a surprisal effect at the finer level of the hierarchy but not at the coarser level. We then evaluate the presence of a trade-off in learning, whereby humans who learned the finer level of the hierarchy better also tended to learn the coarser level worse, and vice versa. This study elucidates the processes by which humans learn sequential events in hierarchical contexts. More broadly, our work charts a road map for future investigation of the neural underpinnings and behavioral manifestations of graph learning.
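The surprisal effect rests on a simple quantity: the negative log-probability of each observed transition under the learner's transition model, with larger surprisal predicting slower reactions. A minimal sketch:

```python
import math

def surprisal_sequence(P, seq):
    """Per-event surprisal -log2 P(next | current) along a walk `seq`,
    given a row-stochastic transition matrix P. Under the surprisal
    effect, reaction times increase with these values."""
    return [-math.log2(P[a][b]) for a, b in zip(seq, seq[1:])]
```

For a uniform two-state model every transition carries 1 bit of surprisal; for a fully deterministic model every transition carries none, so no slowing is predicted.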


Subjects
Learning; Humans; Male; Female; Models, Theoretical; Probability; Adult
8.
ArXiv ; 2023 Sep 05.
Article in English | MEDLINE | ID: mdl-36713259

ABSTRACT

Music has a complex structure that expresses emotion and conveys information. Humans process that information through imperfect cognitive instruments that produce a gestalt, smeared version of reality. How can we quantify the information contained in a piece of music? Further, what is the information inferred by a human, and how does that relate to (and differ from) the true structure of a piece? To tackle these questions quantitatively, we present a framework to study the information conveyed in a musical piece by constructing and analyzing networks formed by notes (nodes) and their transitions (edges). Using this framework, we analyze music composed by J. S. Bach through the lens of network science and information theory. Bach, regarded as one of the greatest composers in the Western music tradition, produced work that is highly mathematically structured and spans a wide range of compositional forms, such as fugues and choral pieces. Conceptualizing each composition as a network of note transitions, we quantify the information contained in each piece and find that different kinds of compositions can be grouped together according to their information content and network structure. Moreover, we find that the music networks communicate large amounts of information while maintaining small deviations of the inferred network from the true network, suggesting that they are structured for efficient communication of information. We probe the network structures that enable this rapid and efficient communication of information, namely high heterogeneity and strong clustering. Taken together, our findings shed new light on the information and network properties of Bach's compositions. More generally, our framework serves as a stepping stone for exploring musical complexities, creativity, and the structure of information in a range of complex systems.
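One simple information measure for such note-transition networks is the entropy rate of the walk defined by a melody, S = -sum_i pi_i sum_j P_ij log2 P_ij. The sketch below estimates everything from empirical frequencies, a simplification of the paper's framework:

```python
import math
from collections import Counter

def note_transition_entropy(notes):
    """Entropy rate (bits per transition) of the note-transition network
    built from a melody. pi_i is taken as the empirical frequency with
    which note i starts a transition, and P_ij as the empirical
    probability of moving from note i to note j."""
    pairs = Counter(zip(notes, notes[1:]))       # counts of (from, to) transitions
    out_totals = Counter()
    for (a, _), c in pairs.items():
        out_totals[a] += c
    n_trans = len(notes) - 1
    entropy = 0.0
    for (a, b), c in pairs.items():
        pi_a = out_totals[a] / n_trans           # visit frequency of note a
        p_ab = c / out_totals[a]                 # transition probability a -> b
        entropy += -pi_a * p_ab * math.log2(p_ab)
    return entropy
```

A strictly alternating melody has zero entropy rate (every transition is certain), while a melody that branches from a note carries positive entropy.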

9.
ArXiv ; 2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37904743

ABSTRACT

Maximum entropy methods provide a principled path connecting measurements of neural activity directly to statistical physics models, and this approach has been successful for populations of N~100 neurons. As N increases in new experiments, we enter an undersampled regime where we have to choose which observables should be constrained in the maximum entropy construction. The best choice is the one that provides the greatest reduction in entropy, defining a "minimax entropy" principle. This principle becomes tractable if we restrict attention to correlations among pairs of neurons that link together into a tree; we can find the best tree efficiently, and the underlying statistical physics models are exactly solved. We use this approach to analyze experiments on N~1500 neurons in the mouse hippocampus, and show that the resulting model captures the distribution of synchronous activity in the network.
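Selecting the best tree of pairwise constraints by maximizing the summed mutual information of its edges is the classic Chow-Liu construction. This sketch, assuming binarized activity, illustrates the idea; the paper's exact search and model fitting may differ:

```python
import math
from itertools import combinations

def mutual_info(x, y):
    """Mutual information (bits) between two binary time series,
    estimated from empirical joint frequencies."""
    n = len(x)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = sum(1 for xi, yi in zip(x, y) if xi == a and yi == b) / n
            p_a = sum(1 for xi in x if xi == a) / n
            p_b = sum(1 for yi in y if yi == b) / n
            if p_ab > 0:
                mi += p_ab * math.log2(p_ab / (p_a * p_b))
    return mi

def best_tree(data):
    """Maximum spanning tree over pairwise mutual information (Chow-Liu):
    the tree of pairwise constraints giving the greatest entropy
    reduction. `data[i]` is the binary activity series of neuron i."""
    n = len(data)
    weight = {frozenset((i, j)): mutual_info(data[i], data[j])
              for i, j in combinations(range(n), 2)}
    in_tree, edges = {0}, []
    while len(in_tree) < n:              # Prim's algorithm, maximizing weight
        i, j = max(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree),
                   key=lambda e: weight[frozenset(e)])
        edges.append(tuple(sorted((i, j))))
        in_tree.add(j)
    return edges
```

With three "neurons" where the first two fire identically and the third is independent, the recovered tree necessarily contains the edge between the correlated pair.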

10.
Phys Rev E ; 108(6-1): 064410, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38243472

ABSTRACT

The brain is a nonequilibrium system whose dynamics change in different brain states, such as wakefulness and deep sleep. Thermodynamics provides the tools for revealing these nonequilibrium dynamics. We used violations of the fluctuation-dissipation theorem to describe the hierarchy of nonequilibrium dynamics associated with different brain states. Together with a whole-brain model fitted to empirical human neuroimaging data, and deriving the appropriate analytical expressions, we were able to capture the deviation from equilibrium in different brain states that arises from asymmetric interactions and hierarchical organization.


Subjects
Brain; Humans; Thermodynamics; Brain/diagnostic imaging
11.
ArXiv ; 2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37731654

ABSTRACT

Humans are constantly exposed to sequences of events in the environment. Those sequences frequently evince statistical regularities, such as the probabilities with which one event transitions to another. Collectively, inter-event transition probabilities can be modeled as a graph or network. Many real-world networks are organized hierarchically, and understanding how these networks are learned by humans is an ongoing aim of current investigations. While much is known about how humans learn basic transition graph topology, whether and to what degree humans can learn hierarchical structures in such graphs remains unknown. Here, we investigate how humans learn hierarchical graphs of the Sierpinski family using computer simulations and behavioral laboratory experiments. We probe the mental estimates of transition probabilities via the surprisal effect: a phenomenon in which humans react more slowly to less expected transitions, such as those between communities or modules in the network. Using mean-field predictions and numerical simulations, we show that surprisal effects are stronger for finer-level than coarser-level hierarchical transitions. Notably, surprisal effects at coarser levels of the hierarchy are difficult to detect for limited learning times or in small samples. Using a serial response experiment with human participants (n=100), we confirm our predictions by detecting a surprisal effect at the finer level of the hierarchy but not at the coarser level. To further explain our findings, we evaluate the presence of a trade-off in learning, whereby humans who learned the finer level of the hierarchy better tended to learn the coarser level worse, and vice versa. Taken together, our computational and experimental studies elucidate the processes by which humans learn sequential events in hierarchical contexts.
More broadly, our work charts a road map for future investigation of the neural underpinnings and behavioral manifestations of graph learning.

12.
Phys Rev E ; 106(3-1): 034102, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36266789

ABSTRACT

Living systems are fundamentally irreversible, breaking detailed balance and establishing an arrow of time. But how does the evident arrow of time for a whole system arise from the interactions among its multiple elements? We show that the local evidence for the arrow of time, which is the entropy production for thermodynamic systems, can be decomposed. First, it can be split into two components: an independent term reflecting the dynamics of individual elements and an interaction term driven by the dependencies among elements. Adapting tools from nonequilibrium physics, we further decompose the interaction term into contributions from pairs of elements, triplets, and higher-order terms. We illustrate our methods on models of cellular sensing and logical computations, as well as on patterns of neural activity in the retina as it responds to visual inputs. We find that neural activity can define the arrow of time even when the visual inputs do not, and that the dominant contribution to this breaking of detailed balance comes from interactions among pairs of neurons.

13.
Netw Neurosci ; 6(1): 234-274, 2022 Feb.
Article in English | MEDLINE | ID: mdl-36605887

ABSTRACT

In systems neuroscience, most models posit that brain regions communicate information under constraints of efficiency. Yet, evidence for efficient communication in structural brain networks characterized by hierarchical organization and highly connected hubs remains sparse. The principle of efficient coding proposes that the brain transmits maximal information in a metabolically economical or compressed form to improve future behavior. To determine how structural connectivity supports efficient coding, we develop a theory specifying minimum rates of message transmission between brain regions to achieve an expected fidelity, and we test five predictions from the theory based on random walk communication dynamics. In doing so, we introduce the metric of compression efficiency, which quantifies the trade-off between lossy compression and transmission fidelity in structural networks. In a large sample of youth (n = 1,042; age 8-23 years), we analyze structural networks derived from diffusion-weighted imaging and metabolic expenditure operationalized using cerebral blood flow. We show that structural networks strike compression efficiency trade-offs consistent with theoretical predictions. We find that compression efficiency prioritizes fidelity with development, heightens when metabolic resources and myelination guide communication, explains advantages of hierarchical organization, links higher input fidelity to disproportionate areal expansion, and shows that hubs integrate information by lossy compression. Lastly, compression efficiency is predictive of behavior, beyond the conventional network efficiency metric, for cognitive domains including executive function, memory, complex reasoning, and social cognition. Our findings elucidate how macroscale connectivity supports efficient coding and serve to foreground communication processes that utilize random walk dynamics constrained by network connectivity.

14.
eNeuro ; 9(2)2022.
Article in English | MEDLINE | ID: mdl-35105662

ABSTRACT

Humans deftly parse statistics from sequences. Some theories posit that humans learn these statistics by forming cognitive maps, or underlying representations of the latent space which links items in the sequence. Here, an item in the sequence is a node, and the probability of transitioning between two items is an edge. Sequences can then be generated from walks through the latent space, with different spaces giving rise to different sequence statistics. Individual or group differences in sequence learning can be modeled by changing the time scale over which estimates of transition probabilities are built, or in other words, by changing the amount of temporal discounting. Latent space models with temporal discounting bear a resemblance to models of navigation through Euclidean spaces. However, few explicit links have been made between predictions from Euclidean spatial navigation and neural activity during human sequence learning. Here, we use a combination of behavioral modeling and intracranial electroencephalography (iEEG) recordings to investigate how neural activity might support the formation of space-like cognitive maps through temporal discounting during sequence learning. Specifically, we acquire human reaction times from a sequential reaction time task, to which we fit a model that formulates the amount of temporal discounting as a single free parameter. From the parameter, we calculate each individual's estimate of the latent space. We find that neural activity reflects these estimates mostly in the temporal lobe, including areas involved in spatial navigation. Similar to spatial navigation, we find that low-dimensional representations of neural activity allow for easy separation of important features, such as modules, in the latent space. Lastly, we take advantage of the high temporal resolution of iEEG data to determine the time scale on which latent spaces are learned.
We find that learning typically happens within the first 500 trials, and is modulated by the underlying latent space and the amount of temporal discounting characteristic of each participant. Ultimately, this work provides important links between behavioral models of sequence learning and neural activity during the same behavior, and contextualizes these results within a broader framework of domain general cognitive maps.


Subjects
Spatial Navigation; Cognition/physiology; Humans; Learning/physiology; Reaction Time; Spatial Navigation/physiology; Temporal Lobe/physiology
15.
Commun Biol ; 4(1): 210, 2021 02 16.
Article in English | MEDLINE | ID: mdl-33594239

ABSTRACT

A major challenge in neuroscience is determining a quantitative relationship between the brain's white matter structural connectivity and emergent activity. We seek to uncover the intrinsic relationship among brain regions fundamental to their functional activity by constructing a pairwise maximum entropy model (MEM) of the inter-ictal activation patterns of five patients with medically refractory epilepsy over an average of ~14 hours of band-passed intracranial EEG (iEEG) recordings per patient. We find that the pairwise MEM accurately predicts the probability of iEEG electrodes' activation patterns and their pairwise correlations. We demonstrate that the estimated pairwise MEM's interaction weights predict structural connectivity and its strength across several frequency bands, significantly beyond what is expected based solely on the distance between sampled regions, in most patients. Together, the pairwise MEM offers a framework for explaining iEEG functional connectivity and provides insight into how the brain's structural connectome gives rise to large-scale activation patterns by promoting co-activation between connected structures.


Subjects
Brain Waves; Drug Resistant Epilepsy/physiopathology; Epilepsy, Temporal Lobe/physiopathology; Models, Neurological; White Matter/physiopathology; Adult; Connectome; Drug Resistant Epilepsy/diagnosis; Drug Resistant Epilepsy/therapy; Electrocorticography; Entropy; Epilepsy, Temporal Lobe/diagnosis; Epilepsy, Temporal Lobe/therapy; Female; Humans; Male; Middle Aged; Nerve Net/physiopathology; Time Factors
16.
Nat Commun ; 11(1): 2313, 2020 05 08.
Article in English | MEDLINE | ID: mdl-32385232

ABSTRACT

Humans are adept at uncovering abstract associations in the world around them, yet the underlying mechanisms remain poorly understood. Intuitively, learning the higher-order structure of statistical relationships should involve complex mental processes. Here we propose an alternative perspective: that higher-order associations instead arise from natural errors in learning and memory. Using the free energy principle, which bridges information theory and Bayesian inference, we derive a maximum entropy model of people's internal representations of the transitions between stimuli. Importantly, our model (i) affords a concise analytic form, (ii) qualitatively explains the effects of transition network structure on human expectations, and (iii) quantitatively predicts human reaction times in probabilistic sequential motor tasks. Together, these results suggest that mental errors influence our abstract representations of the world in significant and predictable ways, with direct implications for the study and design of optimally learnable information sources.
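One common closed form for such error-smeared transition estimates mixes powers of the true transition matrix A with a memory-discount factor beta: A_hat = (1 - beta) * sum_k beta^k A^(k+1), so that higher-order (multi-step) associations leak into the learned representation. The notation and series truncation below are illustrative, not necessarily the paper's exact model:

```python
def learned_transitions(A, beta, n_terms=200):
    """Temporally discounted estimate of a row-stochastic matrix A:
    A_hat = (1 - beta) * sum_{k>=0} beta^k A^(k+1).
    beta = 0 recovers A exactly; larger beta blends in longer walks,
    modeling memory errors that smear transitions across lags."""
    n = len(A)

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    power = [row[:] for row in A]            # holds A^(k+1), starting at k = 0
    A_hat = [[0.0] * n for _ in range(n)]
    scale = 1.0 - beta                       # (1 - beta) * beta^k
    for _ in range(n_terms):
        for i in range(n):
            for j in range(n):
                A_hat[i][j] += scale * power[i][j]
        scale *= beta
        power = matmul(power, A)
    return A_hat
```

For the two-state flip matrix A = [[0, 1], [1, 0]], the series sums in closed form to (A + beta * I) / (1 + beta): the learner believes self-transitions occur even though they never do, which is exactly the kind of systematic mental error the abstract describes.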


Subjects
Learning/physiology; Memory/physiology; Bayes Theorem; Humans; Reaction Time/physiology