Results 1 - 20 of 42
1.
Entropy (Basel) ; 25(1)2022 Dec 24.
Article in English | MEDLINE | ID: mdl-36673174

ABSTRACT

Domain adaptation is a popular paradigm in modern machine learning that aims to tackle the divergence (or shift) between the labeled training and validation datasets (source domain) and a potentially large unlabeled dataset (target domain). The task is to embed both datasets into a common space in which the source dataset is informative for training while the divergence between source and target is minimized. The most popular domain adaptation solutions are based on training neural networks that combine classification and adversarial learning modules, frequently making them both data-hungry and difficult to train. We present a method called Domain Adaptation Principal Component Analysis (DAPCA) that identifies a linear reduced data representation useful for solving the domain adaptation task. The DAPCA algorithm introduces positive and negative weights between pairs of data points and generalizes the supervised extension of principal component analysis. DAPCA is an iterative algorithm that solves a simple quadratic optimization problem at each iteration. The convergence of the algorithm is guaranteed, and the number of iterations is small in practice. We validate the suggested algorithm on previously proposed benchmarks for the domain adaptation task and also show the benefit of using DAPCA for analyzing single-cell omics datasets in biomedical applications. Overall, DAPCA can serve as a practical preprocessing step in many machine learning applications, producing reduced dataset representations that take into account possible divergence between source and target domains.
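The core computational step, weighted PCA from pairwise point weights solved as an eigenproblem, can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' reference implementation of DAPCA; the toy weight matrix and the omission of the iterative target-to-source matching are simplifying assumptions.

```python
import numpy as np

def weighted_pca_step(X, W, n_components=2):
    """X: (n, d) data; W: (n, n) symmetric pairwise weights (positive weights attract
    pairs in the projection, negative weights repel them). Returns a (d, k) projector."""
    L = np.diag(W.sum(axis=1)) - W           # Laplacian-like matrix built from the weights
    M = X.T @ L @ X                          # quadratic form extremized by the projection
    eigvals, eigvecs = np.linalg.eigh(M)     # M is symmetric, so eigh applies
    order = np.argsort(eigvals)[::-1]        # keep directions with the largest eigenvalues
    return eigvecs[:, order[:n_components]]

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
W = np.ones((100, 100)) / 100                # toy weights: uniform attraction between all points
V = weighted_pca_step(X, W)                  # with uniform weights this reduces to ordinary PCA
Z = X @ V                                    # reduced representation of the data
```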

2.
Sensors (Basel) ; 21(22)2021 Nov 18.
Article in English | MEDLINE | ID: mdl-34833738

ABSTRACT

Data on artificial night-time light (NTL) emitted from the Earth's surface and captured by satellites are available at a global scale in panchromatic format. Data on the spectral properties of NTL, meanwhile, provide more information for further analysis; such data, however, are available only locally or on a commercial basis. In our recent work, we examined several machine learning techniques, such as linear regression, kernel regression, random forest, and elastic map models, to convert panchromatic NTL images into colored ones. We compared red, green, and blue light levels for eight geographical areas all over the world with panchromatic light intensities and characteristics of built-up extent from spatially corresponding pixels and their nearest neighbors. Information from more distant neighboring pixels, however, might improve the predictive power of the models. In the present study, we explore this neighborhood effect using convolutional neural networks (CNNs). The main outcome of our analysis is that the neighborhood effect follows the geographical extent of the metropolitan areas under analysis: for smaller areas, the optimal input image size is smaller than for bigger ones. Moreover, for relatively large cities, the optimal input image size tends to differ between colors, being on average larger for red and smaller for blue lights. Compared to other machine learning techniques, CNN models proved comparable in terms of Pearson's correlation but performed better in terms of WMSE, especially on the testing datasets.
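As an illustration of the neighborhood effect, a small convolutional network can map a panchromatic patch of a chosen size to the red, green, and blue intensities of its central pixel; adaptive pooling lets the same architecture be trained with different input patch sizes. The architecture below is a hypothetical sketch, not the network used in the study.

```python
import torch
import torch.nn as nn

class PatchToRGB(nn.Module):
    """Map a (1, patch, patch) panchromatic patch to the RGB intensities of its center pixel."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # collapses the spatial neighborhood for any patch size
        )
        self.head = nn.Linear(32, 3)          # predicted R, G, B light levels

    def forward(self, x):                     # x: (batch, 1, patch, patch)
        return self.head(self.features(x).flatten(1))

model = PatchToRGB()
dummy = torch.randn(4, 1, 9, 9)               # four synthetic 9x9 panchromatic patches
print(model(dummy).shape)                     # torch.Size([4, 3])
```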


Subjects
Machine Learning; Neural Networks, Computer; Cities; Light
3.
Entropy (Basel) ; 23(10)2021 Oct 19.
Article in English | MEDLINE | ID: mdl-34682092

ABSTRACT

Dealing with uncertainty in applications of machine learning to real-life data critically depends on the knowledge of intrinsic dimensionality (ID). A number of methods have been suggested for the purpose of estimating ID, but no standard package to easily apply them one by one or all at once has been implemented in Python. This technical note introduces scikit-dimension, an open-source Python package for intrinsic dimension estimation. The scikit-dimension package provides a uniform implementation of most of the known ID estimators based on the scikit-learn application programming interface to evaluate the global and local intrinsic dimension, as well as generators of synthetic toy and benchmark datasets widespread in the literature. The package is developed with tools assessing the code quality, coverage, unit testing and continuous integration. We briefly describe the package and demonstrate its use in a large-scale (more than 500 datasets) benchmarking of methods for ID estimation for real-life and synthetic data.
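A typical usage sketch, assuming the package's scikit-learn-style API (import name skdim); the estimator and dataset-generator names follow the package documentation and may differ slightly between versions.

```python
import numpy as np
import skdim

# synthetic benchmark: a 5-dimensional ball embedded in a 10-dimensional space
X = np.hstack([skdim.datasets.hyperBall(n=1000, d=5, random_state=0),
               np.zeros((1000, 5))])

# global intrinsic dimension with two different estimators
print(skdim.id.TwoNN().fit(X).dimension_)   # two-nearest-neighbors estimator
print(skdim.id.MLE().fit(X).dimension_)     # maximum-likelihood (Levina-Bickel) estimator
```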

4.
Entropy (Basel) ; 23(8)2021 Aug 22.
Article in English | MEDLINE | ID: mdl-34441230

ABSTRACT

This work is driven by a practical question: how to correct Artificial Intelligence (AI) errors. These corrections should be quick and non-iterative. To solve this problem without modifying a legacy AI system, we propose special 'external' devices, correctors. Elementary correctors consist of two parts: a classifier that separates situations with a high risk of error from situations in which the legacy AI system works well, and a new decision rule recommended for situations with potential errors. Input signals for the correctors can be the inputs of the legacy AI system, its internal signals, and its outputs. If the intrinsic dimensionality of the data is high enough, then the classifiers for correcting a small number of errors can be very simple. According to the blessing-of-dimensionality effects, even simple and robust Fisher's discriminants can be used for one-shot learning of AI correctors. Stochastic separation theorems provide the mathematical basis for this one-shot learning. However, as the number of correctors needed grows, the cluster structure of the data becomes important and a new family of stochastic separation theorems is required. We reject the classical hypothesis of the regularity of the data distribution and assume that the data can have a rich fine-grained structure with many clusters and corresponding peaks in the probability density. New stochastic separation theorems for data with fine-grained structure are formulated and proved. On the basis of these theorems, multi-correctors for granular data are proposed. The advantages of the multi-corrector technology are demonstrated by examples of correcting errors and learning new classes of objects by a deep convolutional neural network on the CIFAR-10 dataset. The key problems of non-classical high-dimensional data analysis are reviewed together with the basic preprocessing steps, including the correlation transformation, supervised Principal Component Analysis (PCA), semi-supervised PCA, transfer component analysis, and new domain adaptation PCA.
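A minimal sketch of an elementary corrector built from a regularized Fisher discriminant, assuming pre-standardized feature vectors; the variable names and the regularization constant are illustrative, not taken from the paper.

```python
import numpy as np

def fit_fisher_corrector(X_ok, X_err, reg=1e-3):
    """Return (w, b) such that w @ x + b > 0 flags an input as a likely error."""
    mu_ok, mu_err = X_ok.mean(axis=0), X_err.mean(axis=0)
    # pooled within-class covariance, regularized so it stays invertible
    S = np.cov(X_ok, rowvar=False) + np.cov(X_err, rowvar=False) + reg * np.eye(X_ok.shape[1])
    w = np.linalg.solve(S, mu_err - mu_ok)
    b = -w @ (mu_ok + mu_err) / 2.0
    return w, b

rng = np.random.default_rng(0)
X_ok = rng.standard_normal((1000, 200))       # high-dimensional inputs handled correctly
X_err = rng.standard_normal((3, 200)) + 1.0   # a few recorded error situations
w, b = fit_fisher_corrector(X_ok, X_err)
is_flagged = lambda x: x @ w + b > 0          # flagged inputs are routed to the new decision rule
```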

5.
J Allergy Clin Immunol ; 144(1): 83-93, 2019 07.
Article in English | MEDLINE | ID: mdl-30682455

ABSTRACT

BACKGROUND: Asthma is a disease characterized by ventilation heterogeneity (VH). A number of studies have demonstrated that VH markers derived by using impulse oscillometry (IOS) or multiple-breath washout (MBW) are associated with key asthmatic patient-related outcome measures and airways hyperresponsiveness. However, the topographical mechanisms of VH in the lung remain poorly understood. OBJECTIVES: We hypothesized that specific regionalization of topographical small-airway disease would best account for IOS- and MBW-measured indices in patients. METHODS: We evaluated the results of paired expiratory/inspiratory computed tomography in a cohort of asthmatic (n = 41) and healthy (n = 11) volunteers to understand the determinants of clinical VH indices commonly reported by using IOS and MBW. Parametric response mapping (PRM) was used to calculate the functional small-airways disease marker PRMfSAD and Hounsfield unit (HU)-based density changes from total lung capacity to functional residual capacity (ΔHU); gradients of ΔHU in gravitationally perpendicular (parallel) inferior-superior (anterior-posterior) axes were quantified. RESULTS: The ΔHU gradient in the inferior-superior axis provided the highest level of discrimination of both acinar VH (measured by using phase 3 slope analysis of multiple-breath washout data) and resistance at 5 Hz minus resistance at 20 Hz measured by using impulse oscillometry (R5-R20) values. Patients with a high inferior-superior ΔHU gradient demonstrated evidence of reduced specific ventilation in the lower lobes of the lungs and high levels of PRMfSAD. A computational small-airway tree model confirmed that constriction of gravitationally dependent, lower-zone, small-airway branches would promote the largest increases in R5-R20 values. Ventilation gradients correlated with asthma control and quality of life but not with exacerbation frequency. CONCLUSIONS: Lower lobe-predominant small-airways disease is a major driver of clinically measured VH in adults with asthma.


Subjects
Asthma/diagnostic imaging; Lung/diagnostic imaging; Adult; Aged; Asthma/drug therapy; Asthma/physiopathology; Bronchodilator Agents/therapeutic use; Forced Expiratory Volume; Humans; Lung/physiopathology; Male; Middle Aged; Tomography, X-Ray Computed; Vital Capacity
6.
Entropy (Basel) ; 22(10)2020 Sep 30.
Article in English | MEDLINE | ID: mdl-33286874

ABSTRACT

The curse of dimensionality causes well-known and widely discussed problems for machine learning methods. There is a hypothesis that using the Manhattan distance, and even fractional l_p quasinorms (for p < 1), can help to overcome the curse of dimensionality in classification problems. In this study, we systematically test this hypothesis. We illustrate that fractional quasinorms have a greater relative contrast and coefficient of variation than the Euclidean norm l_2, but show that this difference decays with increasing space dimension. The concentration of distances shows qualitatively the same behaviour for all tested norms and quasinorms, and a greater relative contrast does not imply better classification quality: for different databases, the best (worst) performance was achieved under different norms (quasinorms). A systematic comparison shows that the difference in the performance of kNN classifiers for l_p at p = 0.5, 1, and 2 is statistically insignificant. Analysis of the curse and blessing of dimensionality also requires a careful definition of data dimensionality, which rarely coincides with the number of attributes; we therefore systematically examined several intrinsic dimensions of the data.
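The relative-contrast comparison described above can be reproduced in a few lines; this toy experiment on uniform random data illustrates the measured quantity and is not the paper's benchmark.

```python
import numpy as np

def relative_contrast(X, q, p):
    """(Dmax - Dmin) / Dmin for l_p (quasi)norm distances from query point q to the rows of X."""
    d = (np.abs(X - q) ** p).sum(axis=1) ** (1.0 / p)
    return (d.max() - d.min()) / d.min()

rng = np.random.default_rng(0)
for dim in (10, 100, 1000):
    X, q = rng.random((1000, dim)), rng.random(dim)
    print(dim, [round(relative_contrast(X, q, p), 3) for p in (0.5, 1.0, 2.0)])
# contrast is larger for smaller p, but decays with dimension for every norm tested
```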

7.
Entropy (Basel) ; 22(1)2020 Jan 09.
Article in English | MEDLINE | ID: mdl-33285855

ABSTRACT

High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning. The well-known phenomenon of the "curse of dimensionality" states that many problems become exponentially difficult in high dimensions. Recently, the other side of the coin, the "blessing of dimensionality", has attracted much attention. It turns out that generic high-dimensional datasets exhibit fairly simple geometric properties. Thus, there is a fundamental tradeoff between complexity and simplicity in high-dimensional spaces. Here we present a brief explanatory review of recent ideas, results and hypotheses about the blessing of dimensionality and related simplifying effects relevant to machine learning and neuroscience.

8.
Entropy (Basel) ; 22(3)2020 Mar 04.
Article in English | MEDLINE | ID: mdl-33286070

ABSTRACT

Multidimensional data point clouds representing large datasets are frequently characterized by non-trivial low-dimensional geometry and topology, which can be recovered by unsupervised machine learning approaches, in particular by principal graphs. Principal graphs approximate the multivariate data by a graph injected into the data space, with some constraints imposed on the node mapping. Here we present ElPiGraph, a scalable and robust method for constructing principal graphs. ElPiGraph exploits and further develops the concept of elastic energy, the topological graph grammar approach, and a gradient descent-like optimization of the graph topology. The method is able to withstand high levels of noise and is capable of approximating data point clouds via principal graph ensembles. This strategy can be used to estimate the statistical significance of complex data features and to summarize them into a single consensus principal graph. ElPiGraph deals efficiently with large datasets in various fields such as biology, where it can be used, for example, with single-cell transcriptomic or epigenomic datasets to infer gene expression dynamics and recover differentiation landscapes.
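A usage sketch for the elpigraph-python implementation of ElPiGraph; the function name follows the package documentation, but the exact arguments and the structure of the returned dictionary are assumptions that may differ between versions.

```python
import numpy as np
import elpigraph

X = np.random.randn(500, 10)                              # data point cloud
pg = elpigraph.computeElasticPrincipalTree(X, NumNodes=30)[0]
nodes = pg['NodePositions']                               # graph nodes injected into the data space
edges = pg['Edges'][0]                                    # node index pairs (assumed return layout)
```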

9.
Bull Math Biol ; 81(11): 4856-4888, 2019 11.
Article in English | MEDLINE | ID: mdl-29556797

ABSTRACT

Encoding memories is one of the fundamental problems of modern neuroscience, and the functional mechanisms behind this phenomenon remain largely unknown. Experimental evidence suggests that some memory functions are performed by stratified brain structures such as the hippocampus. In this particular case, single neurons in the CA1 region receive highly multidimensional input from the CA3 area, which is a hub for information processing. We thus assess the implications that the abundance of neuronal signalling routes converging onto single cells has for information processing. We show that single neurons can selectively detect and learn arbitrary information items, provided that they operate in high dimensions. The argument is based on stochastic separation theorems and the concentration of measure phenomenon. We demonstrate that a simple enough functional neuronal model is capable of explaining: (i) the extreme selectivity of single neurons to the information content, (ii) simultaneous separation of several uncorrelated stimuli or informational items from a large set, and (iii) dynamic learning of new items by associating them with already "known" ones. These results constitute a basis for the organization of complex memories in ensembles of single neurons. Moreover, they show that no a priori assumptions on the structural organization of neuronal ensembles are necessary to explain the basic concepts of static and dynamic memories.


Subjects
Brain/cytology; Brain/physiology; Learning/physiology; Memory/physiology; Models, Neurological; Neurons/physiology; Animals; Association Learning/physiology; CA1 Region, Hippocampal/cytology; CA1 Region, Hippocampal/physiology; CA3 Region, Hippocampal/cytology; CA3 Region, Hippocampal/physiology; Computer Simulation; Humans; Machine Learning; Mathematical Concepts; Neural Networks, Computer; Neuronal Plasticity/physiology; Photic Stimulation; Pyramidal Cells/cytology; Pyramidal Cells/physiology; Stochastic Processes
10.
Int J Mol Sci ; 20(18)2019 Sep 07.
Article in English | MEDLINE | ID: mdl-31500324

ABSTRACT

Independent component analysis (ICA) is a matrix factorization approach in which the signals captured by the individual matrix factors are optimized to become as mutually independent as possible. Initially suggested for solving blind source separation problems in various fields, ICA was shown to be successful in analyzing functional magnetic resonance imaging (fMRI) and other types of biomedical data. In the last twenty years, ICA became part of the standard machine learning toolbox, together with other matrix factorization methods such as principal component analysis (PCA) and non-negative matrix factorization (NMF). Here, we review a number of recent works in which ICA was shown to be a useful tool for unraveling the complexity of cancer biology from the analysis of different types of omics data, mainly collected from tumor samples. Such works highlight the use of ICA in dimensionality reduction, deconvolution, data pre-processing, meta-analysis, and other tasks applied to different data types (transcriptome, methylome, proteome, single-cell data). We particularly focus on the technical aspects of applying ICA in omics studies, such as using different protocols, determining the optimal number of components, assessing and improving the reproducibility of the ICA results, and comparison with other popular matrix factorization techniques. We discuss the emerging ICA applications to the integrative analysis of multi-level omics datasets and introduce a conceptual view of ICA as a tool for defining functional subsystems of a complex biological system and their interactions under various conditions. Our review is accompanied by a Jupyter notebook that illustrates the discussed concepts and provides a practical tool for applying ICA to the analysis of cancer omics datasets.
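For orientation, the kind of decomposition discussed can be obtained with scikit-learn's FastICA; the matrix shape and the number of components below are arbitrary stand-ins, not the protocol of the accompanying notebook.

```python
import numpy as np
from sklearn.decomposition import FastICA

expr = np.random.lognormal(size=(5000, 40))    # stand-in genes x samples expression matrix
ica = FastICA(n_components=10, random_state=0)
S = ica.fit_transform(expr)                    # (genes, components): gene-level "metagenes"
A = ica.mixing_                                # (samples, components): sample-level weights
print(S.shape, A.shape)                        # (5000, 10) (40, 10)
```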


Subjects
Computational Biology/methods; Neoplasms/genetics; Neoplasms/metabolism; Algorithms; Data Curation; Databases, Factual; Humans; Machine Learning; Magnetic Resonance Imaging; Neoplasms/diagnostic imaging; Principal Component Analysis
12.
J Theor Biol ; 405: 127-39, 2016 09 21.
Article in English | MEDLINE | ID: mdl-26801872

ABSTRACT

In 1938, Selye proposed the notion of adaptation energy and published 'Experimental evidence supporting the conception of adaptation energy.' Adaptation of an animal to different factors appears as the spending of a single resource. Adaptation energy is a hypothetical extensive quantity spent on adaptation. This term causes much debate when taken literally, as a physical quantity, i.e. a sort of energy. The controversial points of view impede the systematic use of the notion of adaptation energy despite the experimental evidence. Nevertheless, the response to many harmful factors often has a general non-specific form, and we suggest that the mechanisms of physiological adaptation admit a very general and non-specific description. We aim to demonstrate that Selye's adaptation energy is the cornerstone of the top-down approach to modelling non-specific adaptation processes. We analyze Selye's axioms of adaptation energy together with Goldstone's modifications and propose a series of models for the interpretation of these axioms. Adaptation energy is considered as an internal coordinate on the 'dominant path' in the model of adaptation. The phenomena of 'oscillating death' and 'oscillating remission' are predicted on the basis of the dynamical models of adaptation. Natural selection plays a key role in the evolution of mechanisms of physiological adaptation. We use the fitness optimization approach to study the distribution of resources for the neutralization of harmful factors during adaptation to a multifactor environment, and we analyze the optimal strategies for different systems of factors.


Subjects
Adaptation, Physiological; Biological Evolution; Stress, Physiological; Animals; Genetic Fitness; Genotype; Models, Biological
13.
RNA ; 18(9): 1635-55, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22850425

ABSTRACT

MicroRNAs (miRNAs) are key regulators of all important biological processes, including development, differentiation, and cancer. Although remarkable progress has been made in deciphering the mechanisms used by miRNAs to regulate translation, many contradictory findings have been published that stimulate active debate in this field. Here we contribute to this discussion in three ways. First, based on a comprehensive analysis of the existing literature, we hypothesize a model in which all proposed mechanisms of microRNA action coexist, and where the apparent mechanism that is detected in a given experiment is determined by the relative values of the intrinsic characteristics of the target mRNAs and associated biological processes. Among several coexisting miRNA mechanisms, the one that will effectively be measurable is that which acts on or changes the sensitive parameters of the translation process. Second, we have created a mathematical model that combines nine known mechanisms of miRNA action and estimated the model parameters from the literature. Third, based on the mathematical modeling, we have developed a computational tool for discriminating among different possible individual mechanisms of miRNA action based on translation kinetics data that can be experimentally measured (kinetic signatures). To confirm the discriminatory power of these kinetic signatures and to test our hypothesis, we have performed several computational experiments with the model in which we simulated the coexistence of several miRNA action mechanisms in the context of variable parameter values of the translation.
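A toy two-variable translation model shows how different repression mechanisms leave different kinetic signatures in the protein time course; this is an illustrative sketch with made-up rate constants, not the nine-mechanism model of the paper.

```python
import numpy as np

def simulate(k_init, k_mdeg, k_pdeg=0.05, m0=1.0, t_end=200.0, dt=0.1):
    """Euler integration of mRNA decay plus protein synthesis/decay; returns the protein trajectory."""
    m, p, traj = m0, 0.0, []
    for _ in range(int(t_end / dt)):
        m += dt * (-k_mdeg * m)                  # mRNA decay
        p += dt * (k_init * m - k_pdeg * p)      # protein synthesis from mRNA, protein decay
        traj.append(p)
    return np.array(traj)

baseline   = simulate(k_init=1.0, k_mdeg=0.01)
init_block = simulate(k_init=0.3, k_mdeg=0.01)   # miRNA represses translation initiation
mrna_decay = simulate(k_init=1.0, k_mdeg=0.05)   # miRNA destabilizes the transcript
# both mechanisms lower the protein level, but the shapes of the curves (kinetic signatures) differ
```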


Subjects
MicroRNAs/metabolism; Models, Biological; Kinetics; Protein Biosynthesis/physiology
14.
Adv Exp Med Biol ; 774: 189-224, 2013.
Article in English | MEDLINE | ID: mdl-23377975

ABSTRACT

MicroRNAs can affect protein translation through nine mechanistically distinct modes of action, including repression of initiation and degradation of the transcript. There is a hot debate in the current literature about which mechanism plays the dominant role in living cells, and in which situations. Worse, the same experimental systems dealing with the same pairs of mRNA and miRNA can provide ambiguous evidence about which mechanism of translation repression is actually observed in the experiment. We start by reviewing the current knowledge of the various mechanisms of miRNA action and suggest that mathematical modeling can help resolve some of the controversial interpretations. We describe three simple mathematical models of miRNA-mediated translation that can be used as tools for interpreting experimental data on the dynamics of protein synthesis. The most complex model developed by us includes all known mechanisms of miRNA action. It allowed us to study the possible dynamical patterns corresponding to different miRNA-mediated mechanisms of translation repression and to suggest concrete recipes for determining the dominant mechanism of miRNA action in the form of kinetic signatures. Using computational experiments and systematizing existing evidence from the literature, we justify a hypothesis about the co-existence of distinct miRNA-mediated mechanisms of translation repression. The mechanism actually observed will be the one acting on, or changing, the sensitive parameters of the translation process; the limiting step can vary from one experimental setting to another. This model explains the majority of the controversies reported.


Subjects
Gene Expression Regulation; MicroRNAs/metabolism; Models, Biological; Protein Biosynthesis/genetics; Animals; Humans; Kinetics; MicroRNAs/genetics; RNA, Messenger/genetics; RNA, Messenger/metabolism
15.
Article in English | MEDLINE | ID: mdl-38048242

ABSTRACT

Mammalian brains operate in very special surroundings: to survive, they have to react quickly and effectively to pools of stimulus patterns previously recognized as dangerous. Many learning tasks encountered by living organisms involve a specific set-up centered around a relatively small set of patterns presented in a particular environment. For example, at a party, people recognize friends immediately, without deep analysis, just by seeing a fragment of their clothes. This set-up with reduced "ontology" is referred to as a "situation." Situations are usually local in space and time. In this work, we propose that neuron-astrocyte networks provide a network topology that is effectively adapted to accommodate situation-based memory. To illustrate this, we numerically simulate and analyze a well-established model of a neuron-astrocyte network subjected to stimuli conforming to the situation-driven environment. Three pools of stimulus patterns are considered: external patterns, patterns from the situation associative pool regularly presented to the network and learned by it, and patterns already learned and remembered by astrocytes. Patterns from the external world are added to and removed from the associative pool. We then show that astrocytes are structurally necessary for effective function in such a learning and testing set-up. To demonstrate this, we present a novel neuromorphic computational model of short-term memory implemented by a two-net spiking neural-astrocytic network. Our results show that such a system, tested on synthesized data with selective astrocyte-induced modulation of neuronal activity, provides an enhancement of retrieval quality in comparison to standard spiking neural networks trained via Hebbian plasticity only. We argue that the proposed set-up may offer a new way to analyze, model, and understand neuromorphic artificial intelligence systems.
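For contrast with the neuron-astrocyte set-up, the Hebbian-only baseline can be caricatured by a rate-based Hopfield-style network: patterns are stored with an outer-product Hebbian rule and recalled from noisy cues. This is a deliberately simplified stand-in, not the spiking model studied.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 400, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = (patterns.T @ patterns) / n               # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)

cue = patterns[0] * rng.choice([1, -1], size=n, p=[0.85, 0.15])   # cue with 15% flipped bits
state = cue.copy()
for _ in range(10):                           # synchronous recall iterations
    state = np.where(W @ state >= 0, 1, -1)
print((state == patterns[0]).mean())          # fraction of correctly recalled bits
```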

16.
Bull Math Biol ; 73(9): 2013-44, 2011 Sep.
Article in English | MEDLINE | ID: mdl-33621897

ABSTRACT

The "Law of the Minimum" states that growth is controlled by the scarcest resource (limiting factor). This concept was originally applied to plant or crop growth (Justus von Liebig, 1840, Salisbury, Plant physiology, 4th edn., Wadsworth, Belmont, 1992) and quantitatively supported by many experiments. Some generalizations based on more complicated "dose-response" curves were proposed. Violations of this law in natural and experimental ecosystems were also reported. We study models of adaptation in ensembles of similar organisms under load of environmental factors and prove that violation of Liebig's law follows from adaptation effects. If the fitness of an organism in a fixed environment satisfies the Law of the Minimum then adaptation equalizes the pressure of essential factors and, therefore, acts against the Liebig's law. This is the the Law of the Minimum paradox: if for a randomly chosen pair "organism-environment" the Law of the Minimum typically holds, then in a well-adapted system, we have to expect violations of this law.For the opposite interaction of factors (a synergistic system of factors which amplify each other), adaptation leads from factor equivalence to limitations by a smaller number of factors.For analysis of adaptation, we develop a system of models based on Selye's idea of the universal adaptation resource (adaptation energy). These models predict that under the load of an environmental factor a population separates into two groups (phases): a less correlated, well adapted group and a highly correlated group with a larger variance of attributes, which experiences problems with adaptation. Some empirical data are presented and evidences of interdisciplinary applications to econometrics are discussed.


Subjects
Adaptation, Physiological; Ecosystem; Models, Biological; Population Dynamics
17.
Neural Netw ; 138: 33-56, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34795311

ABSTRACT

The phenomenon of stochastic separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and analyze AI instabilities. In high-dimensional datasets, under broad assumptions, each point can be separated from the rest of the set by a simple and robust Fisher's discriminant (i.e., it is Fisher separable). Errors, or clusters of errors, can thus be separated from the rest of the data. The ability to correct an AI system also opens up the possibility of an attack on it, and high dimensionality induces vulnerabilities caused by the same stochastic separability that holds the keys to understanding the fundamentals of robustness and adaptivity in high-dimensional data-driven AI. To manage errors and analyze vulnerabilities, stochastic separation theorems should evaluate the probability that a dataset will be Fisher separable in a given dimensionality and for a given class of distributions. Explicit and optimal estimates of these separation probabilities are required, and this problem is solved in the present work. General stochastic separation theorems with optimal probability estimates are obtained for important classes of distributions: log-concave distributions, their convex combinations, and product distributions. The standard i.i.d. assumption is significantly relaxed. These theorems and estimates can be used both for the correction of high-dimensional data-driven AI systems and for the analysis of their vulnerabilities. A third area of application is the emergence of memories in ensembles of neurons, the phenomena of grandmother cells and sparse coding in the brain, and the explanation of the unexpected effectiveness of small neural ensembles in the high-dimensional brain.
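A quick numerical illustration of the quantity the theorems bound: the fraction of points in an i.i.d. Gaussian sample that are Fisher-separable from all the others, using a simplified criterion (x_i, x_j) <= (x_i, x_i) on centered data. This toy estimate is not one of the paper's optimal bounds.

```python
import numpy as np

def fisher_separable_fraction(n=1000, dim=10, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, dim))
    X -= X.mean(axis=0)                            # center the sample
    G = X @ X.T                                    # all pairwise inner products
    sep = (G <= np.diag(G)[:, None]).all(axis=1)   # (x_i, x_j) <= (x_i, x_i) for every j
    return sep.mean()

for d in (5, 20, 80, 320):
    print(d, round(fisher_separable_fraction(dim=d), 3))
# the separable fraction grows rapidly with dimension, from near 0 to near 1
```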


Subjects
Machine Learning; Stochastic Processes
18.
Sci Rep ; 11(1): 22497, 2021 11 18.
Article in English | MEDLINE | ID: mdl-34795311

ABSTRACT

The dynamics of epidemics depend on how people's behavior changes during an outbreak. At the beginning of an epidemic, people do not know about the virus; then, after the outbreak and the ensuing alarm, they begin to comply with restrictions and the spread of the epidemic may decline. Over time, some people get tired of or frustrated by the restrictions and stop following them (exhaustion), especially if the number of new cases drops. After resting for a while, they can follow the restrictions again, but during this pause the second wave can come and become even stronger than the first one. Studies based on SIR models do not predict the observed quick exit from the first wave of an epidemic; social dynamics should be taken into account. The appearance of the second wave also depends on social factors. Many generalizations of the SIR model have been developed that take into account the weakening of immunity over time, the evolution of the virus, vaccination, and other medical and biological details. However, these more sophisticated models do not explain the apparent differences in outbreak profiles between countries with different intrinsic socio-cultural features. In our work, a system of models of the COVID-19 pandemic is proposed, combining the dynamics of social stress with classical epidemic models. Social stress is described with the tools of sociophysics. The combination of a dynamic SIR-type model with the classical triad of stages of the general adaptation syndrome, alarm-resistance-exhaustion, makes it possible to describe the available statistical data for 13 countries with high accuracy. The sets of kinetic constants corresponding to the optimal fit of the model to the data were found. These constants characterize the ability of a society to mobilize efforts against an epidemic and to maintain this mobilization over time, and they can further help in the development of management strategies specific to a particular society.
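For reference, the classical SIR backbone to which the social-stress compartments are coupled in the paper; the coupling itself (alarm, resistance, exhaustion) is not reproduced in this sketch, and the parameter values are arbitrary.

```python
import numpy as np

def sir(beta, gamma, s0=0.999, i0=0.001, days=180, dt=0.1):
    """Forward-Euler integration of the classical SIR model; returns the infected fraction over time."""
    s, i, traj = s0, i0, []
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        traj.append(i)
    return np.array(traj)

infected = sir(beta=0.3, gamma=0.1)           # basic reproduction number R0 = 3
print(infected.max())                          # epidemic peak without any behavioral feedback
```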


Subjects
COVID-19; Models, Biological; Pandemics; SARS-CoV-2; COVID-19/epidemiology; COVID-19/prevention & control; COVID-19/transmission; Humans
19.
Front Mol Biosci ; 8: 793912, 2021.
Article in English | MEDLINE | ID: mdl-35178429

ABSTRACT

The cell cycle is a biological process underlying the existence and propagation of life in time and space. It has long been an object of mathematical modeling, with several alternative mechanistic modeling principles suggested that describe the known molecular mechanisms in more or less detail. Recently, the cell cycle has been investigated at the single-cell level in snapshots of unsynchronized cell populations, exploiting new methods of transcriptomic and proteomic molecular profiling. This raises a need for simplified, semi-phenomenological cell cycle models that formalize the processes underlying the cell cycle at a higher level of abstraction. Here we suggest a modeling framework recapitulating the most important properties of the cell cycle as a limit trajectory of a dynamical process characterized by several internal states with switches between them. In the simplest form, this leads to a limit cycle trajectory composed of linear segments in logarithmic coordinates describing some extensive (system-size-dependent) cell properties. We prove a theorem connecting the effective embedding dimensionality of the cell cycle trajectory with the number of its linear segments. We also develop a simplified kinetic model with piecewise-constant kinetic rates describing the dynamics of lumped groups of genes involved in the S and G2/M phases. We show how the developed cell cycle models can be applied to analyze the available single-cell datasets and to simulate certain properties of the observed cell cycle trajectories. Based on our model, we can predict the cell line doubling time with good accuracy from the length of the cell cycle trajectory.
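A small numerical check related to the segments/dimension link: a closed trajectory built from k linear segments spans at most k - 1 dimensions, which PCA (here via SVD of the centered trajectory) recovers as the number of non-negligible components. This is a toy illustration with random vertices, not the paper's theorem or data.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, pts = 4, 20, 50                        # 4 segments in a 20-dimensional "log-coordinate" space
vertices = rng.standard_normal((k, d))       # vertices of the closed piecewise-linear loop
traj = np.vstack([
    (1 - t) * vertices[i] + t * vertices[(i + 1) % k]
    for i in range(k) for t in np.linspace(0, 1, pts, endpoint=False)
])
traj -= traj.mean(axis=0)                    # center before looking at the spectrum
svals = np.linalg.svd(traj, compute_uv=False)
print(np.sum(svals > 1e-8 * svals[0]))       # effective embedding dimension, here k - 1 = 3
```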

20.
Front Cell Neurosci ; 15: 631485, 2021.
Article in English | MEDLINE | ID: mdl-33867939

ABSTRACT

We propose a novel, biologically plausible computational model of working memory (WM) implemented by a spiking neural network (SNN) interacting with a network of astrocytes. The SNN is modeled by synaptically coupled Izhikevich neurons with a non-specific connection topology. Astrocytes generating calcium signals are connected by local gap-junction diffusive couplings and interact with neurons via chemicals diffused in the extracellular space. Calcium elevations occur in response to the increased concentration of the neurotransmitter released by spiking neurons when a group of them fire coherently. In turn, gliotransmitters are released by activated astrocytes, modulating the strength of the synaptic connections in the corresponding neuronal group. Input information is encoded as two-dimensional patterns of short applied current pulses stimulating neurons, and the output is taken from the frequencies of transient discharges of the corresponding neurons. We show how a set of information patterns with quite significant overlapping areas can be uploaded into the neuron-astrocyte network and stored for several seconds. Information retrieval is organized by applying a cue pattern, one of the memorized set distorted by noise. We found that successful retrieval, with the correlation between the recalled pattern and the ideal pattern exceeding 90%, is possible for the multi-item WM task. Having analyzed the dynamical mechanism of WM formation, we discovered that astrocytes operating at a time scale of a dozen seconds can successfully store traces of neuronal activations corresponding to information patterns. In the retrieval stage, the astrocytic network selectively modulates synaptic connections in the SNN, leading to successful recall. The information and dynamical characteristics of the proposed WM model agree with classical concepts and other WM models.
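A sketch of the Izhikevich neuron that serves as the SNN building block (standard regular-spiking parameters); the synaptic coupling and the astrocytic modulation layer are not reproduced here.

```python
import numpy as np

def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, t_end=300.0):
    """Simulate a single Izhikevich neuron driven by constant current I; return spike times (ms)."""
    v, u, spikes = c, b * c, []
    for step in range(int(t_end / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)   # membrane potential dynamics
        u += dt * a * (b * v - u)                        # recovery variable
        if v >= 30.0:                                    # spike: record time and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich(I=10.0)))                           # number of spikes for a constant input
```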
