Results 1 - 20 of 23
1.
Phys Rev Lett ; 128(16): 168301, 2022 Apr 22.
Article in English | MEDLINE | ID: mdl-35522522

ABSTRACT

Criticality is deeply related to optimal computational capacity. The lack of a renormalized theory of critical brain dynamics, however, so far limits insights into this form of biological information processing to mean-field results. These methods neglect a key feature of critical systems: the interaction between degrees of freedom across all length scales, required for complex nonlinear computation. We present a renormalized theory of a prototypical neural field theory, the stochastic Wilson-Cowan equation. We compute the flow of couplings, which parametrize interactions on increasing length scales. Despite similarities with the Kardar-Parisi-Zhang model, the theory is of a Gell-Mann-Low type, the archetypal form of a renormalizable quantum field theory. Here, nonlinear couplings vanish, flowing towards the Gaussian fixed point, but logarithmically slowly, thus remaining effective on most scales. We show this critical structure of interactions to implement a desirable trade-off between linearity, optimal for information storage, and nonlinearity, required for computation.


Subject(s)
Brain; Neural Networks, Computer; Normal Distribution; Quantum Theory
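The logarithmically slow flow of the nonlinear couplings can be made concrete with a generic one-loop flow equation. The sketch below is not the paper's actual Wilson-Cowan flow equations; it is the textbook form du/dl = -b u² of a marginally irrelevant coupling, whose exact solution decays only as 1/l in the logarithmic scale variable l = ln L, so the coupling remains effective over many decades of length scale.

```python
# Illustrative only: a generic marginally irrelevant coupling, not the
# Wilson-Cowan flow from the paper. For du/dl = -b*u**2 the exact solution
# u(l) = u0 / (1 + b*u0*l) decays like 1/l, i.e. only logarithmically
# slowly in the length scale L = exp(l).

def coupling(l, u0=1.0, b=1.0):
    """Coupling strength after flowing over l = ln(L) scale decades."""
    return u0 / (1.0 + b * u0 * l)

# The coupling is still sizable even after flowing over 100 decades:
for l in (0, 1, 10, 100):
    print(l, coupling(l))
```

After l = 100 the coupling has only decayed to roughly one percent of its initial value, which is the sense in which it "remains effective on most scales".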
2.
Phys Rev Lett ; 127(15): 158302, 2021 Oct 08.
Article in English | MEDLINE | ID: mdl-34678014

ABSTRACT

We here unify the field-theoretical approach to neuronal networks with large deviations theory. For a prototypical random recurrent network model with continuous-valued units, we show that the effective action is identical to the rate function and derive the latter using field theory. This rate function takes the form of a Kullback-Leibler divergence which enables data-driven inference of model parameters and calculation of fluctuations beyond mean-field theory. Lastly, we expose a regime with fluctuation-induced transitions between mean-field solutions.
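The Kullback-Leibler form of the rate function can be illustrated in a toy setting. The sketch below is not the paper's recurrent network model: it uses the closed-form KL divergence between one-dimensional Gaussians, and the helper `best_mean` is a hypothetical illustration of how minimizing a KL-type rate function yields data-driven parameter inference.

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """Closed-form Kullback-Leibler divergence D( N(m1,s1^2) || N(m2,s2^2) )."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# Large-deviations heuristic: the probability of observing empirical
# statistics decays as exp(-N * rate) with rate given by a KL divergence,
# so minimizing the KL over model parameters gives a maximum-likelihood-type
# fit (illustrative toy, not the paper's network inference).
def best_mean(m_obs, s_obs, candidates, s_model=1.0):
    return min(candidates, key=lambda m: kl_gauss(m_obs, s_obs, m, s_model))
```

For identical distributions the divergence vanishes, and the candidate mean closest to the observed mean minimizes the rate function.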

3.
PLoS Comput Biol ; 13(6): e1005534, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28604771

ABSTRACT

Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity. In these network states a global oscillatory cycle modulates the propensity of neurons to fire. Synchronous activation of neurons has been hypothesized to be a separate channel of information processing in the brain. A salient question is therefore whether and how oscillations interact with spike synchrony, and to what extent these channels can be considered separate. Experiments indeed showed that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations lock tightly to an oscillatory cycle. We here demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically driven random networks. We identify the mechanisms by which covariances depend on a driving periodic stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single-unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even if mean activities show non-resonant behavior. Our analytical results make the question of time-modulated synchronous activity accessible to quantitative analysis.


Subject(s)
Action Potentials/physiology; Biological Clocks/physiology; Brain/physiology; Cortical Synchronization/physiology; Models, Neurological; Neurons/physiology; Animals; Brain Waves/physiology; Computer Simulation; Feedback, Physiological/physiology; Humans; Nerve Net/physiology
4.
Evol Comput ; 24(3): 411-25, 2016.
Article in English | MEDLINE | ID: mdl-26135717

ABSTRACT

The hypervolume subset selection problem consists of finding a subset, with a given cardinality k, of a set of nondominated points that maximizes the hypervolume indicator. This problem arises in selection procedures of evolutionary algorithms for multiobjective optimization, for which practically efficient algorithms are required. In this article, two new formulations are provided for the two-dimensional variant of this problem. The first is a (linear) integer programming formulation that can be solved by solving its linear programming relaxation. The second formulation is a k-link shortest path formulation on a special digraph with the Monge property that can be solved by dynamic programming in [Formula: see text] time. This improves upon the result of [Formula: see text] in Bader (2009), and slightly improves upon the result of [Formula: see text] in Bringmann et al. (2014b), which was developed independently from this work using different techniques. Numerical results are shown for several values of n and k.


Subject(s)
Algorithms; Models, Theoretical
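To make the quantity being maximized concrete, here is a minimal sketch of the two-dimensional hypervolume indicator (minimization convention) together with a brute-force subset selection. The brute force is exponential in n and serves only to illustrate the problem statement; the article's k-link shortest-path formulation solves the same 2D problem in low polynomial time by dynamic programming.

```python
from itertools import combinations

def hypervolume_2d(points, ref):
    """Area dominated by a 2D point set w.r.t. reference point `ref`
    (minimization: smaller coordinates are better; every point is assumed
    to lie strictly below and to the left of `ref`)."""
    rx, ry = ref
    front, y_min = [], ry
    for x, y in sorted(points):          # ascending x, then ascending y
        if y < y_min:                    # keep only nondominated points
            front.append((x, y))
            y_min = y
    area = 0.0
    for i, (x, y) in enumerate(front):
        x_next = front[i + 1][0] if i + 1 < len(front) else rx
        area += (x_next - x) * (ry - y)  # vertical slab between x and x_next
    return area

def best_subset(points, k, ref):
    """Brute-force hypervolume subset selection (exponential in n):
    the size-k subset maximizing the hypervolume indicator."""
    return max(combinations(points, k), key=lambda s: hypervolume_2d(s, ref))
```

For example, the front {(1,3), (2,2), (3,1)} with reference (4,4) dominates an area of 6, and `best_subset` returns a size-k subset attaining the maximal hypervolume among all subsets.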
5.
J Biomed Semantics ; 15(1): 7, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802877

ABSTRACT

BACKGROUND: In today's landscape of data management, the importance of knowledge graphs and ontologies is escalating as critical mechanisms aligned with the FAIR Guiding Principles, which require data and metadata to be Findable, Accessible, Interoperable, and Reusable. We discuss three challenges that may hinder the effective exploitation of the full potential of FAIR knowledge graphs. RESULTS: We introduce "semantic units" as a conceptual solution, although currently exemplified only in a limited prototype. Semantic units structure a knowledge graph into identifiable and semantically meaningful subgraphs by adding another layer of triples on top of the conventional data layer. Semantic units and their subgraphs are represented by their own resource that instantiates a corresponding semantic unit class. We distinguish statement and compound units as the basic categories of semantic units. A statement unit is the smallest independent proposition that is semantically meaningful for a human reader. Depending on the relation of its underlying proposition, it consists of one or more triples. Organizing a knowledge graph into statement units results in a partition of the graph, with each triple belonging to exactly one statement unit. A compound unit, on the other hand, is a semantically meaningful collection of statement and compound units that forms a larger subgraph. Some semantic units organize the graph into different levels of representational granularity, others orthogonally into different types of granularity trees or different frames of reference, structuring the knowledge graph into partially overlapping, partially enclosed subgraphs, each of which can be referenced by its own resource. CONCLUSIONS: Semantic units, applicable in RDF/OWL and labeled property graphs, offer support for making statements about statements and facilitate graph alignment, subgraph matching, knowledge graph profiling, and the management of access restrictions to sensitive data. Additionally, we argue that organizing the graph into semantic units promotes the differentiation of ontological and discursive information, and that it also supports the differentiation of multiple frames of reference within the graph.


Subject(s)
Semantics; Computer Graphics; Biological Ontologies; Humans
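The partition property of statement units can be sketched in a few lines. All identifiers below are hypothetical and not the paper's actual vocabulary; the point is only that every data-layer triple is assigned to exactly one statement unit, and that each unit gets its own resource which can itself be the subject of further triples ("statements about statements").

```python
# Minimal sketch of the statement-unit idea (identifiers are invented for
# illustration): the data layer is partitioned so that every triple belongs
# to exactly one statement unit, and each unit is itself a resource.

def partition_into_statement_units(triples, grouping):
    """grouping maps each triple index to a statement-unit label; the result
    is a partition of the triple list into statement units."""
    units = {}
    for i, triple in enumerate(triples):
        units.setdefault(grouping[i], []).append(triple)
    return units

triples = [
    (":alice", ":worksFor", ":acme"),
    (":employment1", ":employee", ":alice"),   # a proposition spanning
    (":employment1", ":employer", ":acme"),    # more than one triple
]
units = partition_into_statement_units(triples, {0: ":unit1", 1: ":unit2", 2: ":unit2"})

# The added layer: triples about the statement units themselves.
meta = [(u, "rdf:type", ":StatementUnit") for u in units]
```

Here `:unit2` shows a statement unit consisting of two triples that together express one human-readable proposition.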
6.
Phys Rev E ; 108(6-1): 064301, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38243526

ABSTRACT

Continuous attractor neural networks (CANNs) form an appealing conceptual model for the storage of information in the brain. However, a drawback of CANNs is that they require finely tuned interactions. We here study the effect of quenched noise in the interactions on the coding of positional information within CANNs. Using the replica method we compute the Fisher information for a network with position-dependent input and recurrent connections composed of a short-range (in space) and a disordered component. We find that the loss in positional information is small as long as the disorder strength is not too large, indicating that CANNs have a regime in which the advantageous effects of local connectivity on information storage outweigh the detrimental ones. Furthermore, a substantial part of this information can be extracted with a simple linear readout.
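The Fisher information about a stimulus position can be illustrated in a much simpler setting than the disordered recurrent network of the paper: independent Poisson neurons with Gaussian tuning curves, for which I(x) = Σᵢ fᵢ'(x)²/fᵢ(x). This is a standard textbook quantity, shown only to make the coding measure concrete; it contains no recurrence, no replica calculation, and no quenched disorder.

```python
import math

# Illustrative only: Fisher information about stimulus position x carried by
# independent Poisson neurons with Gaussian tuning curves, a far simpler
# setting than the recurrent disordered network analyzed in the paper.

def fisher_info(x, centers, amp=10.0, width=1.0):
    """I(x) = sum_i f_i'(x)^2 / f_i(x), with f_i(x) = amp*exp(-(x-c_i)^2/(2w^2))."""
    total = 0.0
    for c in centers:
        f = amp * math.exp(-(x - c) ** 2 / (2 * width ** 2))
        fprime = -f * (x - c) / width ** 2   # derivative of the Gaussian tuning curve
        total += fprime ** 2 / f
    return total
```

A neuron centered exactly on the stimulus contributes nothing (its tuning curve is flat there), while neurons on the flanks carry most of the positional information.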

7.
PeerJ Comput Sci ; 9: e1159, 2023.
Article in English | MEDLINE | ID: mdl-37346675

ABSTRACT

With the rapidly increasing amount of scientific literature, it is becoming increasingly difficult for researchers in different disciplines to keep up to date with recent findings in their field of study. Processing scientific articles in an automated fashion has been proposed as a solution to this problem, but the accuracy of such processing remains very poor for extraction tasks beyond the most basic ones (such as locating and identifying entities and simple classification into predefined categories). Few approaches have tried to change how we publish scientific results in the first place, for example by making articles machine-interpretable by expressing them with formal semantics from the start. In the work presented here, we propose a first step in this direction by setting out to demonstrate that high-level scientific claims can be published in formal logic, and by publishing the results in a special issue of an existing journal. We use the concept and technology of nanopublications for this endeavor, and represent not just the submissions and final papers in this RDF-based format, but also the whole process in between, including reviews, responses, and decisions. We do this by performing a field study with what we call formalization papers, which contribute a novel formalization of a previously published claim. We received 15 submissions from 18 authors, who then went through the whole publication process leading to the publication of their contributions in the special issue. Our evaluation shows the technical and practical feasibility of our approach. The participating authors mostly showed high levels of interest and confidence, and mostly experienced the process as not very difficult, despite the technical nature of the current user interfaces. We believe that these results indicate that it is possible to publish scientific results from different fields with machine-interpretable semantics from the start, which in turn opens countless possibilities to radically improve the effectiveness and efficiency of the scientific endeavor as a whole.

8.
PeerJ Comput Sci ; 8: e1038, 2022.
Article in English | MEDLINE | ID: mdl-36091999

ABSTRACT

Understanding the complexity of restricted research data is vitally important in the current new era of Open Science. While the FAIR Guiding Principles have been introduced to help researchers make data Findable, Accessible, Interoperable and Reusable, it is still unclear how the notions of FAIR and Openness can be applied in the context of restricted data. Many methods have been proposed in support of the implementation of the principles, but there is as yet no consensus among the scientific community as to the suitable mechanisms for making restricted data FAIR. We present here a systematic literature review to identify the methods applied by scientists when handling restricted data in a FAIR-compliant manner. Through the employment of a descriptive and iterative study design, we aim to answer the following three questions: (1) What methods have been proposed to apply the FAIR principles to restricted data? (2) How can the relevant aspects of the methods proposed be categorized? (3) What is the maturity of the methods proposed in applying the FAIR principles to restricted data? After analysis of the 40 included publications, we found that the identified methods reflect the stages of the Data Life Cycle and can be divided into the following classes: Data Collection, Metadata Representation, Data Processing, Anonymization, Data Publication, Data Usage and Post Data Usage. We observed that a large number of publications used 'Access Control' and 'Usage and License Terms' methods, while others, such as 'Embargo on Data Release' and the use of 'Synthetic Data', were used in fewer instances. In conclusion, we present the first extensive literature review on the methods applied to confidential data in the context of FAIR, providing a comprehensive conceptual framework for future research on restricted-access data.

9.
Phys Rev E ; 105(5-2): 059901, 2022 May.
Article in English | MEDLINE | ID: mdl-35706324

ABSTRACT

This corrects the article DOI: 10.1103/PhysRevE.101.042124.

10.
PeerJ Comput Sci ; 7: e387, 2021.
Article in English | MEDLINE | ID: mdl-33817033

ABSTRACT

While the publication of Linked Data has become increasingly common, the process tends to be a relatively complicated and heavyweight one. Linked Data is typically published by centralized entities in the form of larger dataset releases, which has the downside that there is a central bottleneck in the form of the organization or individual responsible for the releases. Moreover, certain kinds of data entries, in particular those with subjective or original content, currently do not fit into any existing dataset and are therefore more difficult to publish. To address these problems, we present here an approach that uses nanopublications and a decentralized network of services to allow users to directly publish small Linked Data statements through a simple and user-friendly interface, called Nanobench, powered by semantic templates that are themselves published as nanopublications. The published nanopublications are cryptographically verifiable and can be queried through a redundant and decentralized network of services, based on the grlc API generator and a new quad extension of Triple Pattern Fragments. We show here that these two kinds of services are complementary and together allow us to query nanopublications in a reliable and efficient manner. We also show that Nanobench indeed makes it very easy for users to publish Linked Data statements, even for those who have no prior experience in Linked Data publishing.

11.
F1000Res ; 10: 897, 2021.
Article in English | MEDLINE | ID: mdl-34804501

ABSTRACT

Scientific data analyses often combine several computational tools in automated pipelines, or workflows. Thousands of such workflows have been used in the life sciences, though their composition has remained a cumbersome manual process due to a lack of standards for annotation, assembly, and implementation. Recent technological advances have brought the long-standing vision of automated workflow composition back into focus. This article summarizes a recent Lorentz Center workshop dedicated to automated composition of workflows in the life sciences. We survey previous initiatives to automate the composition process, and discuss the current state of the art and future perspectives. We start by drawing the "big picture" of the scientific workflow development life cycle, before surveying and discussing current methods, technologies and practices for semantic domain modelling, automation in workflow development, and workflow assessment. Finally, we derive a roadmap of individual and community-based actions to work toward the vision of automated workflow development in the forthcoming years. A central outcome of the workshop is a general description of the workflow life cycle in six stages: 1) scientific question or hypothesis, 2) conceptual workflow, 3) abstract workflow, 4) concrete workflow, 5) production workflow, and 6) scientific results. The transitions between stages are facilitated by diverse tools and methods, usually incorporating domain knowledge in some form. Formal semantic domain modelling is hard and often a bottleneck for the application of semantic technologies. However, life science communities have made considerable progress here in recent years and are continuously improving, renewing interest in the application of semantic technologies for workflow exploration, composition and instantiation.
Combined with systematic benchmarking with reference data and large-scale deployment of production-stage workflows, such technologies enable a more systematic process of workflow development than we know today. We believe that this can lead to more robust, reusable, and sustainable workflows in the future.


Subject(s)
Biological Science Disciplines; Computational Biology; Benchmarking; Software; Workflow
12.
Phys Rev E ; 101(4-1): 042124, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32422832

ABSTRACT

Neural dynamics is often investigated with tools from bifurcation theory. However, many neuron models are stochastic, mimicking fluctuations in the input from unknown parts of the brain or the spiking nature of signals. Noise changes the dynamics with respect to the deterministic model; in particular classical bifurcation theory cannot be applied. We formulate the stochastic neuron dynamics in the Martin-Siggia-Rose de Dominicis-Janssen (MSRDJ) formalism and present the fluctuation expansion of the effective action and the functional renormalization group (fRG) as two systematic ways to incorporate corrections to the mean dynamics and time-dependent statistics due to fluctuations in the presence of nonlinear neuronal gain. To formulate self-consistency equations, we derive a fundamental link between the effective action in the Onsager-Machlup (OM) formalism, which allows the study of phase transitions, and the MSRDJ effective action, which is computationally advantageous. These results in particular allow the derivation of an OM effective action for systems with non-Gaussian noise. This approach naturally leads to effective deterministic equations for the first moment of the stochastic system; they explain how nonlinearities and noise cooperate to produce memory effects. Moreover, the MSRDJ formulation yields an effective linear system that has identical power spectra and linear response. Starting from the better known loopwise approximation, we then discuss the use of the fRG as a method to obtain self-consistency beyond the mean. We present a new efficient truncation scheme for the hierarchy of flow equations for the vertex functions by adapting the Blaizot, Méndez, and Wschebor approximation from the derivative expansion to the vertex expansion. The methods are presented by means of the simplest possible example of a stochastic differential equation that has generic features of neuronal dynamics.
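For orientation, the MSRDJ action referred to above takes the following standard textbook form for a one-dimensional stochastic equation dx/dt = f(x) + ξ(t) with Gaussian white noise of strength D (conventions for signs and factors of i vary between references, and this generic form is not the specific neuron model treated in the paper):

```latex
S_{\mathrm{MSRDJ}}[x,\tilde{x}]
  = \int \mathrm{d}t \left[ \tilde{x}(t)\bigl(\partial_t x(t) - f(x(t))\bigr)
  - \frac{D}{2}\,\tilde{x}(t)^2 \right]
```

Integrating out the Gaussian response field x̃ recovers the Onsager-Machlup form,

```latex
S_{\mathrm{OM}}[x]
  = \int \mathrm{d}t \,\frac{\bigl(\partial_t x(t) - f(x(t))\bigr)^2}{2D},
```

which makes the link between the two effective actions, central to the abstract, concrete in the simplest Gaussian-noise case.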

13.
PeerJ Comput Sci ; 6: e281, 2020.
Article in English | MEDLINE | ID: mdl-33816932

ABSTRACT

It is essential for the advancement of science that researchers share, reuse and reproduce each other's workflows and protocols. The FAIR principles are a set of guidelines that aim to maximize the value and usefulness of research data, and emphasize the importance of making digital objects findable and reusable by others. The question of how to apply these principles not just to data but also to the workflows and protocols that consume and produce them is still under debate and poses a number of challenges. In this paper we describe a two-fold approach of simultaneously applying the FAIR principles to scientific workflows as well as the involved data. We apply and evaluate our approach on the case of the PREDICT workflow, a highly cited drug repurposing workflow. This includes FAIRification of the involved datasets, as well as applying semantic technologies to represent and store data about the detailed versions of the general protocol, of the concrete workflow instructions, and of their execution traces. We propose a semantic model to address these specific requirements, which we evaluated by answering competency questions. This semantic model consists of classes and relations from a number of existing ontologies, including Workflow4ever, PROV, EDAM, and BPMN. This then allowed us to formulate and answer new kinds of competency questions. Our evaluation shows the high degree to which our FAIRified OpenPREDICT workflow now adheres to the FAIR principles, and the practicality and usefulness of being able to answer our new competency questions.

15.
PeerJ Comput Sci ; 5: e189, 2019.
Article in English | MEDLINE | ID: mdl-33816842

ABSTRACT

The analysis of literary works has experienced a surge in computer-assisted processing. To obtain insights into the community structures and social interactions portrayed in novels, the creation of social networks from novels has gained popularity. Many methods rely on identifying named entities and relations for the construction of these networks, but many of these tools are not specifically created for the literary domain. Furthermore, many of the studies on information extraction from literature typically focus on 19th and early 20th century source material. Because of this, it is unclear whether these techniques are as suitable for modern-day literature as they are for older novels. We present a study in which we evaluate natural language processing tools for the automatic extraction of social networks from novels, as well as the structure of the resulting networks. We find that there are no significant differences between old and modern novels, but that both are subject to a large amount of variance. Furthermore, we identify several issues that complicate named entity recognition in our set of novels, and we present methods to remedy these. We see this work as a step in creating more culturally aware AI systems.
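A common baseline for building such social networks is sentence-level co-occurrence of character names. The sketch below is a deliberately minimal version of that idea, with a hand-given character list and invented example sentences; real pipelines, including those evaluated in the study, rely on named entity recognition and coreference resolution instead of exact string matching.

```python
from collections import Counter
from itertools import combinations

# Minimal co-occurrence baseline (hypothetical data): edge weight between two
# characters = number of sentences in which both names appear.

def cooccurrence_network(sentences, characters):
    edges = Counter()
    for sent in sentences:
        present = sorted(c for c in characters if c in sent)
        for pair in combinations(present, 2):
            edges[pair] += 1
    return edges

sentences = [
    "Anna met Boris at the station.",
    "Boris wrote to Clara.",
    "Anna and Boris argued about Clara.",
]
net = cooccurrence_network(sentences, {"Anna", "Boris", "Clara"})
```

Even this toy example shows why NER matters: exact substring matching would conflate "Anna" with "Annabel", one of the name-ambiguity issues such studies have to remedy.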

18.
Sci Data ; 6(1): 174, 2019 09 20.
Article in English | MEDLINE | ID: mdl-31541130

ABSTRACT

Transparent evaluations of FAIRness are increasingly required by a wide range of stakeholders, from scientists to publishers, funding agencies and policy makers. We propose a scalable, automatable framework to evaluate digital resources that encompasses measurable indicators, open source tools, and participation guidelines, which come together to accommodate domain relevant community-defined FAIR assessments. The components of the framework are: (1) Maturity Indicators - community-authored specifications that delimit a specific automatically-measurable FAIR behavior; (2) Compliance Tests - small Web apps that test digital resources against individual Maturity Indicators; and (3) the Evaluator, a Web application that registers, assembles, and applies community-relevant sets of Compliance Tests against a digital resource, and provides a detailed report about what a machine "sees" when it visits that resource. We discuss the technical and social considerations of FAIR assessments, and how this translates to our community-driven infrastructure. We then illustrate how the output of the Evaluator tool can serve as a roadmap to assist data stewards to incrementally and realistically improve the FAIRness of their resources.
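The shape of a Compliance Test, a small automated check of one measurable FAIR behavior, can be sketched as below. The indicator names and metadata fields here are invented for illustration and are not the framework's actual Maturity Indicator specifications, which are community-authored documents tested by Web apps against live resources.

```python
# Illustrative sketch of "Compliance Tests" applied by an "Evaluator":
# each test checks one automatically measurable behavior of a resource's
# metadata. Indicator names and fields are hypothetical.

INDICATORS = {
    "has_globally_unique_identifier": lambda md: bool(md.get("identifier")),
    "has_license": lambda md: "license" in md,
    "metadata_machine_readable": lambda md: md.get("format") in {"json-ld", "rdf/xml", "turtle"},
}

def evaluate(metadata, indicators=INDICATORS):
    """Apply each compliance test; return a pass/fail report per indicator."""
    return {name: test(metadata) for name, test in indicators.items()}

report = evaluate({"identifier": "https://doi.org/10.1234/abcd", "format": "json-ld"})
```

The resulting report plays the role of the detailed machine's-eye view described above: failed indicators point the data steward at concrete, incremental improvements.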

20.
Sci Data ; 3: 160018, 2016 Mar 15.
Article in English | MEDLINE | ID: mdl-26978244

ABSTRACT

There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders, representing academia, industry, funding agencies, and scholarly publishers, have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them and some exemplar implementations in the community.


Subject(s)
Data Collection; Data Curation; Research Design; Database Management Systems; Guidelines as Topic; Reproducibility of Results