ABSTRACT
Dynamical descriptions and models of natural systems have generally focused on fixed points, with saddles and saddle-based phase-space objects such as heteroclinic channels or cycles being central concepts behind the emergence of quasistable long transients. The reliable and robust transient dynamics observed in real, inherently noisy systems are, however, not captured by saddle-based dynamics, as demonstrated here. Generalizing the notion of ghost states, we provide a complementary framework that does not rely on the precise knowledge or existence of (un)stable fixed points, but rather on slow directed flows organized by ghost sets into ghost channels and ghost cycles. Moreover, we show that the appearance of these novel objects is an emergent property of a broad class of models typically used to describe natural systems.
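The slow, directed flow organized by a ghost can be illustrated with the textbook saddle-node ghost. The sketch below is not the paper's framework, just the scalar ODE dx/dt = eps + x^2 with illustrative parameters: after the fixed-point pair disappears (eps > 0), trajectories still stall near x = 0 for a time that grows like pi/sqrt(eps).

```python
import numpy as np

def transit_time(eps, x0=-1.0, x_end=1.0, dt=1e-3):
    """Time for dx/dt = eps + x**2 to cross the ghost region near x = 0.

    For eps > 0 the saddle-node pair has vanished, but its 'ghost' still
    produces a slow bottleneck; the exact passage time scales as pi/sqrt(eps).
    """
    x, t = x0, 0.0
    while x < x_end:
        x += dt * (eps + x * x)   # explicit Euler step
        t += dt
    return t

for eps in (1e-2, 1e-3, 1e-4):
    print(f"eps={eps:.0e}: transient ~ {transit_time(eps):.1f}, "
          f"pi/sqrt(eps) = {np.pi / np.sqrt(eps):.1f}")
```

The long quasistable transient exists without any nearby fixed point, which is the basic mechanism the ghost-set framework generalizes.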
ABSTRACT
The phylogeography of the steppe bat Myotis davidii in the eastern part of its broad range was explored for the first time using mitochondrial genetic markers. The presence of two main intraspecific clades, Eastern and Western, was confirmed. A distinct internal structure within the Eastern group was revealed. We discovered a genetic diversity hotspot in northwestern Mongolia and neighboring regions, where highly divergent haplotypes are found. Presumably, this can be explained by a Pleistocene refugial structure shaped by the ridges of the Mongolian Altai. The haplogroups from the southeast of Mongolia and Transbaikalia were found to be related, while populations of the Kerulen valley, located between these regions, carry more distant haplotypes.
Subjects
Chiroptera, Animals, Chiroptera/genetics, Haplotypes/genetics, Mongolia, Phylogeny, Phylogeography
ABSTRACT
In this introduction, the essence of the sixth problem is discussed and the content of this issue is introduced. This article is part of the theme issue 'Hilbert's sixth problem'.
ABSTRACT
Concentration of measure phenomena were discovered as the mathematical background of statistical mechanics at the end of the nineteenth and beginning of the twentieth century, and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equator of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the fine structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of Fisher's linear discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue 'Hilbert's sixth problem'.
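As a rough numerical illustration of the separability claim, one can check that in dimension 100 a random point from a uniform sample in the unit ball is almost always cut off from thousands of others by a single hyperplane. The dimension, sample size and the plain inner-product separating functional below are illustrative assumptions, not the paper's Fisher-discriminant construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ball(m, n):
    """m i.i.d. points distributed uniformly in the unit n-ball."""
    g = rng.normal(size=(m, n))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    return g * (rng.random(m) ** (1.0 / n))[:, None]

n, m, trials = 100, 10_000, 100
pts = sample_ball(m, n)

# For a random point x, test whether the hyperplane {y : <y, x> = <x, x>}
# leaves every other sample strictly on one side.
separable = 0
for i in range(trials):
    x = pts[i]
    proj = pts @ x               # inner products <y, x>
    proj[i] = -np.inf            # exclude x itself
    separable += proj.max() < x @ x
print(f"{separable}/{trials} points are linearly separable from the rest")
```

In high dimension almost all sampled points lie near the boundary sphere, so the hyperplane through a point orthogonal to its radius vector misses essentially the whole remaining sample.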
ABSTRACT
Cough is considered one of the leading clinical symptoms associated with pathological changes in the respiratory system. Notwithstanding the great variety of therapeutic pharmaceutical products possessing antitussive action, physicians tend to give preference to preparations producing a combined effect. The present article reports the results of a clinical study designed to evaluate the effectiveness and safety of rengalin, which exhibits combined antitussive, anti-inflammatory, and broncholytic action, in patients presenting with postnasal drip syndrome. The comparison of the therapeutic effects of rengalin with those of other therapeutic modalities frequently employed for the management of postnasal drip gives evidence of the high efficiency of this product for optimizing the treatment of this condition and the associated chronic cough.
Subjects
Anti-Inflammatory Agents, Antitussive Agents, Bronchodilator Agents, Cough, Respiratory System/drug effects, Respiratory Tract Infections/complications, Adult, Anti-Inflammatory Agents/administration & dosage, Anti-Inflammatory Agents/pharmacokinetics, Antitussive Agents/administration & dosage, Antitussive Agents/pharmacokinetics, Biological Availability, Bronchodilator Agents/administration & dosage, Bronchodilator Agents/pharmacokinetics, Chronic Disease, Cough/drug therapy, Cough/etiology, Cough/physiopathology, Drug Combinations, Female, Humans, Male, Middle Aged, Respiratory System/physiopathology, Treatment Outcome
ABSTRACT
The concept of biological adaptation was closely connected to mathematical, engineering and physical ideas from the very beginning. Cannon, in his "The Wisdom of the Body" (1932), systematically used the engineering vision of regulation. In 1938, Selye enriched this approach with the notion of adaptation energy. This term causes much debate when taken literally, as a physical quantity, i.e. a sort of energy. Selye did not use the language of mathematics systematically, but the formalization of his phenomenological theory in the spirit of thermodynamics was simple and led to verifiable predictions. In the 1980s, the dynamics of correlation and variance in systems adapting to a load of environmental factors were studied, and a universal effect in ensembles of systems under a load of similar factors was discovered: in a crisis, as a rule, even before the onset of obvious symptoms of stress, the correlation increases together with the variance (and volatility). Over 30 years, this effect has been supported by many observations of groups of humans, mice, trees and grassy plants, and of financial time series. In the last ten years, these results were supplemented by many new experiments, from gene networks in cardiology and oncology to the dynamics of depression and clinical psychotherapy. Several systems of models were developed: the thermodynamic-like theory of adaptation of ensembles and several families of models of individual adaptation. Historically, the first group of models was based on Selye's concept of adaptation energy and used fitness estimates. Two other groups of models are based on the idea of hidden attractor bifurcation and on an advection-diffusion model for the distribution of a population in the space of physiological attributes. We explore this world of models and experiments, starting with the classic works, with particular attention to the results of the last ten years and open questions.
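The crisis signature described above, correlation rising together with variance, is easy to reproduce in a toy ensemble. The shared "load" factor and all numbers below are illustrative assumptions, not any of the paper's model families.

```python
import numpy as np

rng = np.random.default_rng(1)

def ensemble(n_subjects=30, n_samples=200, load=0.0):
    """One physiological attribute per subject: individual noise plus a
    shared response whose strength is set by the environmental load."""
    common = rng.normal(size=n_samples)               # shared stress factor
    indiv = rng.normal(size=(n_subjects, n_samples))  # individual variation
    return indiv + load * common

def group_stats(x):
    """Mean pairwise correlation and mean per-subject variance."""
    c = np.corrcoef(x)
    return c[np.triu_indices_from(c, k=1)].mean(), x.var(axis=1).mean()

for load in (0.0, 1.5):
    r, v = group_stats(ensemble(load=load))
    print(f"load={load}: mean correlation={r:.2f}, mean variance={v:.2f}")
```

Under load, the common factor couples the subjects, so both the average pairwise correlation and the variance of each subject's attribute grow at the same time, mimicking the pre-crisis indicator.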
Subjects
Acclimatization, Physiological Adaptation, Animals, Mice, Biological Models, Thermodynamics
ABSTRACT
We revisit the classical stability versus accuracy dilemma for the lattice Boltzmann methods (LBM). Our goal is a stable method of second-order accuracy for fluid dynamics based on the lattice Bhatnagar-Gross-Krook method (LBGK). The LBGK scheme can be recognized as a discrete dynamical system generated by free flight and entropic involution. In this framework the stability and accuracy analyses are more natural. We find the necessary and sufficient conditions for second-order accurate fluid dynamics modeling. In particular, it is proven that, in order to guarantee second-order accuracy, the distribution should belong to a distinguished surface: the invariant film (up to second order in the time step). This surface is the trajectory of the (quasi)equilibrium distribution surface under free flight. The main instability mechanisms are identified. The simplest recipes for stabilization add no artificial dissipation (up to second order) and preserve the second-order accuracy of the method. Two other prescriptions add some artificial dissipation locally and prevent the system from loss of positivity and local blowup. Demonstrations of the proposed stable LBGK schemes are provided by the numerical simulation of a one-dimensional (1D) shock tube and the unsteady 2D flow around a square cylinder up to Reynolds number Re ≈ 20,000.
ABSTRACT
The problem of non-iterative, one-shot and non-destructive correction of unavoidable mistakes arises in all artificial intelligence applications in the real world. Its solution requires robust separation of samples with errors from samples where the system works properly. We demonstrate that in (moderately) high dimension this separation can be achieved with probability close to one by linear discriminants. Based on fundamental properties of measure concentration, we show that for M
Subjects
Machine Learning, Neural Networks (Computer), Probability, Stochastic Processes
ABSTRACT
The lattice Boltzmann method (LBM) and its variants have emerged as promising, computationally efficient and increasingly popular numerical methods for modeling complex fluid flow. However, it is acknowledged that the method can demonstrate numerical instabilities, e.g., in the vicinity of shocks. We propose a simple technique to stabilize the LBM by monitoring the difference between microscopic and macroscopic entropy. Populations are returned to their equilibrium states if a threshold value is exceeded. We coin the name Ehrenfests' steps for this procedure in homage to the vehicle that we use to introduce the procedure, namely, the Ehrenfests' coarse-graining idea.
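A schematic sketch of the procedure, under several simplifying assumptions: a D1Q3 lattice, a polynomial equilibrium, an illustrative entropy threshold, and a periodic mini shock tube rather than the setups studied in the paper.

```python
import numpy as np

# D1Q3 lattice: velocities (-1, 0, +1) with standard weights (cs^2 = 1/3).
W = np.array([1/6, 4/6, 1/6])
C = np.array([-1, 0, 1])

def equilibrium(rho, u):
    """Second-order polynomial equilibrium populations."""
    cu = np.outer(C, u)
    return W[:, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*u**2)

def entropy(f):
    """Discrete Boltzmann entropy per node: S = -sum_i f_i ln(f_i / W_i)."""
    f = np.maximum(f, 1e-12)      # guard against tiny negative populations
    return -(f * np.log(f / W[:, None])).sum(axis=0)

def step(f, omega=1.8, threshold=1e-3):
    rho = f.sum(axis=0)
    u = (C @ f) / rho
    feq = equilibrium(rho, u)
    # Ehrenfests' step: monitor the entropy difference between the current
    # populations and their local equilibrium; where it exceeds the
    # threshold, return the populations to the equilibrium state.
    ds = np.abs(entropy(feq) - entropy(f))
    f = f + omega * (feq - f)                    # LBGK collision
    f[:, ds > threshold] = feq[:, ds > threshold]
    for i, c in enumerate(C):                    # streaming, periodic domain
        f[i] = np.roll(f[i], c)
    return f

nx = 200
rho0 = np.where(np.arange(nx) < nx // 2, 1.5, 1.0)   # sharp density jump
f = equilibrium(rho0, np.zeros(nx))
for _ in range(100):
    f = step(f)
rho = f.sum(axis=0)
print("density range after 100 steps:", rho.min(), rho.max())
```

Both the collision and the equilibration return conserve density and momentum, so the stabilization acts only through added local dissipation at the nodes flagged by the entropy monitor.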
ABSTRACT
Most machine learning approaches stem from applying the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning exploit properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).
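A minimal sketch of why sub-quadratic functionals resist contamination (the data set and outlier values are invented for illustration): the quadratic error is minimized by the mean, which a few gross outliers drag away, while the L1 error is minimized by the median, which stays put.

```python
import numpy as np

rng = np.random.default_rng(2)

# 95 clean measurements around 1.0 plus 5 gross outliers at 50.0.
data = np.concatenate([rng.normal(1.0, 0.1, 95), np.full(5, 50.0)])

# The quadratic functional sum (x - a)^2 is minimized by the mean;
# the L1 functional sum |x - a| is minimized by the median.
mean_fit = data.mean()
median_fit = float(np.median(data))
print(f"quadratic (mean) fit: {mean_fit:.2f}; L1 (median) fit: {median_fit:.2f}")
```

Sub-linear Lp potentials with p < 1 push this robustness further, at the price of losing convexity of the error functional.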
Subjects
Machine Learning, Theoretical Models, Algorithms, Factual Databases, Time Factors
ABSTRACT
Handling of missing data is one of the main tasks in data preprocessing, especially in large public service datasets. We have analysed data from the Trauma Audit and Research Network (TARN) database, the largest trauma database in Europe. For the analysis we used 165,559 trauma cases. Among them, there are 19,289 cases (11.35%) with unknown outcome. We have demonstrated that these outcomes are not missing 'completely at random' and, hence, it is impossible simply to exclude these cases from analysis despite the large amount of available data. We have developed a system of non-stationary Markov models for the handling of missing outcomes and validated these models on the data of 15,437 patients who arrived at TARN hospitals later than 24 h but within 30 days of injury. We used these Markov models for the analysis of mortality. In particular, we corrected the observed fraction of deaths. Two naive approaches give 7.20% (available-case study) or 6.36% (if we assume that all unknown outcomes are 'alive'). The corrected value is 6.78%. Following the seminal paper of Trunkey (1983 [15]), the multimodality of mortality curves has become a much-discussed idea. For the whole analysed TARN dataset the coefficient of mortality monotonically decreases in time, but stratified analysis of the mortality gives a different result: for lower severities the coefficient of mortality is a non-monotonic function of the time after injury and may have maxima at the second and third weeks. The approach developed here can be applied to various healthcare datasets which experience the problem of lost patients and missing outcomes.
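The two naive estimates quoted above can be reproduced from the reported counts; the number of known deaths below is back-calculated for illustration, since the abstract does not state it.

```python
total = 165_559          # all trauma cases in the analysed TARN sample
unknown = 19_289         # cases with unknown outcome
known = total - unknown  # cases with a recorded outcome

# Back-calculated from the quoted 7.20% available-case figure (assumption).
deaths = round(0.0720 * known)

available_case = deaths / known   # drop the unknown-outcome cases entirely
all_alive = deaths / total        # treat every unknown outcome as a survivor
print(f"available-case: {available_case:.2%}, all-alive bound: {all_alive:.2%}")
# The Markov-model corrected value of 6.78% lies between these two bounds.
```

The gap between the two naive figures is exactly the uncertainty introduced by the non-random missingness that the non-stationary Markov models resolve.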
Subjects
Factual Databases, Electronic Data Processing/methods, Wounds and Injuries/mortality, Europe/epidemiology, Female, Humans, Male, Markov Chains
ABSTRACT
The rate-limiting step in adrenal steroidogenesis is associated with the mitochondrial-cytochrome-P450scc-dependent production of pregnenolone from cholesterol. This sterol side-chain cleavage reaction is influenced by the supply of cholesterol to the mitochondria. Cholesterol is stored as cholesterol esters while the cytosol contains a hormone-sensitive cholesterol ester hydrolase. This enzyme is activated by phosphorylation involving a cyclic AMP-dependent protein kinase and ATP; this enzyme preferentially attacks cholesterol oleate or cholesterol linoleate. The lipid composition of the adrenal cortex is influenced by diet so that animals on a low-fat diet tend to store cholesterol oleate and as the linoleate content of the diet is increased, the cholesterol linoleate content of the adrenal cortex increases. Animals maintained on a high erucate diet tend to store large amounts of cholesterol erucate in the adrenal cortex; such animals have an impaired adrenal cortical function. Animals maintained on a low-fat diet (marginally deficient in essential fatty acids), a linoleate-replete diet or a moderate erucate diet, all exhibited normal responses to ACTH and normal corticosterone production rates.
Subjects
Adrenal Cortex/physiology, Dietary Fats/pharmacology, Adrenal Cortex Hormones/biosynthesis, Adrenocorticotropic Hormone/physiology, Animals, Cholesterol Esters/metabolism, Corticosterone/biosynthesis, Cyclic AMP/physiology, Cytochrome P-450 Enzyme System/physiology, Mitochondria/physiology, Rats, Sterol Esterase/metabolism
ABSTRACT
A general method of constructing dissipative equations is developed, following Ehrenfest's idea of coarse graining. The approach resolves the major issue of discrete time coarse graining versus continuous time macroscopic equations. Proof of the H theorem for macroscopic equations is given, several examples supporting the construction are presented, and generalizations are suggested.
ABSTRACT
A new method is developed to measure and compare the redundancy of genes. The method is based on creating a Frequency/Correlation Dictionary of a gene. The Dictionary is the set of all possible words (i.e., subsequences) that occur within a gene, of lengths from 1 to N, where N is the length of the gene. The measure of the redundancy of a gene is the minimal length of words which occur in the gene as a single copy. The results of the redundancy measurements are presented in the paper. It is shown that genes are not homogeneous with respect to the redundancy of their various sites. This phenomenon is called a mosaic pattern.
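Taken literally, the redundancy measure above can be computed by brute force. This quadratic scan is only a sketch and ignores the dictionary machinery of the paper.

```python
def redundancy_length(text):
    """Minimal word length at which some word occurs in the text exactly once."""
    for k in range(1, len(text) + 1):
        counts = {}
        for i in range(len(text) - k + 1):       # all length-k windows
            w = text[i:i + k]
            counts[w] = counts.get(w, 0) + 1
        if 1 in counts.values():                 # a single-copy word exists
            return k
    return len(text)

print(redundancy_length("aaaa"))      # 4: only the full-length word is unique
print(redundancy_length("acgtacgt"))  # 2: the word 'ta' occurs as a single copy
```

Applying the same function to sliding segments of a sequence would expose the site-to-site inhomogeneity (the mosaic pattern) described in the abstract.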
Subjects
Human Genome, Mosaicism, Humans
ABSTRACT
Statistical parameters of the nucleotide sequences of mature human RNAs and those of human viruses were compared. The redundancy values of the corresponding genes were compared. The redundancy of virus genes was shown to be, on average, less than that of human genes. The distribution of human genes according to redundancy values is bimodal, while that of human virus genes is trimodal. This fact suggests the possibility of a novel gene classification according to statistical characteristics of nucleotide sequences.
Subjects
Statistical Data Interpretation, Viral Genes, RNA/genetics, Repetitive Nucleic Acid Sequences, Humans
ABSTRACT
This paper is devoted to the comparative study of the redundancy of the genetic texts of various organisms and viruses. To determine the redundancy of a gene, we have introduced a strict measure of it. The measure of a text's redundancy is the restriction length of the Frequency/Correlation Dictionary of the given genetic text. The Frequency/Correlation Dictionary is the set of all subsequences belonging to a given genetic text, accompanied by the frequencies of their occurrence. The restriction length is defined as the length for which all the subsequences (of that length) are unique. We have found that the genes of human viruses are less redundant than human genes. Other aspects of comparative redundancy investigations of genes are discussed. The problem of the determination of a "true" intron could be treated by this methodology, as well as the evolution of the genome.
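The restriction length admits a direct brute-force sketch (illustrative code, not the paper's dictionary construction):

```python
def restriction_length(text):
    """Smallest k for which every length-k word of the text is unique."""
    for k in range(1, len(text) + 1):
        words = [text[i:i + k] for i in range(len(text) - k + 1)]
        if len(set(words)) == len(words):    # no word repeats at this length
            return k
    return len(text)

print(restriction_length("aaaa"))      # 4
print(restriction_length("acgtacgt"))  # 5: 'acgta', 'cgtac', ... are distinct
```

A lower restriction length means the text is less redundant, which is the sense in which the virus genes above compare to human genes.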
Subjects
Gene Frequency, Viral Genes, Amino Acid Sequence, Base Sequence, Humans, Molecular Sequence Data, Restriction Mapping
ABSTRACT
The problem of determining the information content of nucleotide sequences is discussed. Exact expressions for the reconstruction of higher-order frequency dictionaries from lower-order ones were obtained by the maximum entropy method. In form, they are analogous to the superpositional approximations known in statistical physics. The features of the entropy characteristics of real nucleotide sequences that reliably distinguish them from random texts are described. Methods are proposed for comparing the information content of frequency dictionaries and for assessing the residual uncertainty of a text given a known frequency dictionary.
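For trigrams from bigrams, the maximum entropy reconstruction takes the Kirkwood-like superposition form p(abc) = p(ab) p(bc) / p(b), which the following sketch applies to a toy sequence (the text and comparison metric are illustrative):

```python
from collections import Counter

def freq_dict(text, k):
    """Frequency dictionary of order k: word -> relative frequency."""
    total = len(text) - k + 1
    return {w: c / total
            for w, c in Counter(text[i:i + k] for i in range(total)).items()}

def reconstruct_3_from_2(p2, p1):
    """Maximum-entropy (superposition) estimate p(abc) = p(ab) p(bc) / p(b)."""
    return {ab + bc[1]: p2[ab] * p2[bc] / p1[ab[1]]
            for ab in p2 for bc in p2 if ab[1] == bc[0]}

text = "acgtgtca" * 20
p1, p2, p3 = freq_dict(text, 1), freq_dict(text, 2), freq_dict(text, 3)
est = reconstruct_3_from_2(p2, p1)
err = sum(abs(est.get(w, 0.0) - p) for w, p in p3.items())
print(f"total absolute error of the superposition estimate: {err:.3f}")
```

The residual error between the reconstructed and the actual higher-order dictionary is one natural way to quantify how much extra information the longer words carry.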
Subjects
Base Sequence, Information Theory, DNA, Entropy, RNA, Statistics as Topic
ABSTRACT
An approach to the study of the properties of genetic texts is proposed. It is based on the investigation of the frequencies of all possible words (subsequences) in a text. The most important effect is that the original text can be reconstructed completely, without deletions and/or mistakes, from the set of words that occur in the text as a single copy. The length of words for which the effect occurs is a measure of the text's redundancy. Some real genetic sequences were studied as well.
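The reconstruction effect admits a small sketch: if every length-k word of a text is unique and the (k-1)-letter overlaps do not collide, the text can be reassembled greedily from the unordered word set. The example text and k are illustrative.

```python
def reconstruct(words, k):
    """Rebuild a text from the unordered set of its length-k words, assuming
    all of them (and their length-(k-1) overlaps) are unique."""
    by_prefix = {w[:k - 1]: w for w in words}
    suffixes = {w[1:] for w in words}
    # The first word is the only one whose prefix is no other word's suffix.
    text = next(w for w in words if w[:k - 1] not in suffixes)
    # Repeatedly append the single word that overlaps the current tail.
    while text[-(k - 1):] in by_prefix:
        text += by_prefix[text[-(k - 1):]][-1]
    return text

original = "the_original_text"
windows = {original[i:i + 4] for i in range(len(original) - 3)}
print(reconstruct(windows, 4))   # the_original_text
```

Each word's tail determines a unique successor, so the chain of overlaps recovers the whole text in order, which is exactly the effect the abstract describes.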