Results 1 - 20 of 124
1.
Sensors (Basel) ; 24(13)2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39001047

ABSTRACT

The Broad Learning System (BLS) has demonstrated strong performance across a variety of problems. However, BLS based on the Minimum Mean Square Error (MMSE) criterion is highly sensitive to label noise. To enhance the robustness of BLS in label-noise environments, this paper designs a function called the Logarithm Kernel (LK) to reweight the samples when computing the output weights during BLS training, yielding a Logarithm Kernel-based BLS (L-BLS). Additionally, for image databases with numerous features, a Mixture Autoencoder (MAE) is designed to construct more representative feature nodes for BLS in complex label-noise environments. Two corresponding versions of BLS, MAEBLS and L-MAEBLS, were also developed for the MAE. Extensive experiments validate the robustness and effectiveness of the proposed L-BLS, and the MAE provides more representative feature nodes for the corresponding versions of BLS.
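The sample-reweighting idea can be sketched as a weighted ridge solve for the BLS output weights. The weighting function below is hypothetical (the paper's exact Logarithm Kernel is not reproduced here); it only illustrates how samples with large residuals, which are likely label noise, can be down-weighted:

```python
import numpy as np

def log_kernel_weights(residuals, sigma=1.0):
    # Hypothetical logarithm-kernel style weighting: samples with large
    # residuals (likely mislabeled) receive smaller weights.
    return 1.0 / (1.0 + np.log1p((residuals / sigma) ** 2))

def weighted_output_weights(A, Y, weights, lam=1e-3):
    # Weighted ridge solution for the output weights:
    # W = (A^T D A + lam*I)^(-1) A^T D Y, with D = diag(weights).
    D = np.diag(weights)
    n = A.shape[1]
    return np.linalg.solve(A.T @ D @ A + lam * np.eye(n), A.T @ D @ Y)

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 10))            # stand-in feature/enhancement nodes
w_true = rng.normal(size=(10, 1))
Y = A @ w_true
Y[:5] += 10.0                             # inject label noise into 5 samples
residuals = np.abs(Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]).ravel()
W = weighted_output_weights(A, Y, log_kernel_weights(residuals))
```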

2.
Entropy (Basel) ; 26(5)2024 May 07.
Article in English | MEDLINE | ID: mdl-38785655

ABSTRACT

The axiomatic structure of the κ-statistical theory is proven. In addition to the first three standard Khinchin-Shannon axioms of continuity, maximality, and expansibility, two further axioms are identified, namely the self-duality axiom and the scaling axiom. It is shown that both the κ-entropy and its special limiting case, the classical Boltzmann-Gibbs-Shannon entropy, follow unambiguously from the above new set of five axioms. It is emphasized that the statistical theory built from the κ-entropy has a validity that goes beyond physics and can be used to treat physical, natural, or artificial complex systems. The physical origin of the self-duality and scaling axioms is investigated and traced back to the first principles of relativistic physics, i.e., the Galileo relativity principle and the Einstein principle of the constancy of the speed of light. It is shown that the κ-formalism, which emerges from the κ-entropy, can treat both simple (few-body) and complex (statistical) systems in a unified way. Relativistic statistical mechanics based on the κ-entropy is shown to preserve the main features of classical statistical mechanics (kinetic theory, molecular chaos hypothesis, maximum entropy principle, thermodynamic stability, H-theorem, and Lesche stability). The answers that the κ-statistical theory gives to the more-than-a-century-old open problems of relativistic physics, such as how thermodynamic quantities like temperature and entropy vary with the speed of the reference frame, are emphasized.
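A minimal numerical sketch of the κ-logarithm, ln_κ(x) = (x^κ − x^(−κ))/(2κ), and the κ-entropy built from it; the self-duality axiom corresponds to the identity ln_κ(1/x) = −ln_κ(x), and the κ → 0 limit recovers the Boltzmann-Gibbs-Shannon entropy:

```python
import numpy as np

def kappa_log(x, kappa):
    # Kaniadakis kappa-logarithm: ln_k(x) = (x^k - x^-k) / (2k),
    # which reduces to the natural logarithm as kappa -> 0.
    if kappa == 0:
        return np.log(x)
    return (x**kappa - x**(-kappa)) / (2 * kappa)

def kappa_entropy(p, kappa):
    # S_k = -sum_i p_i ln_k(p_i); Boltzmann-Gibbs-Shannon entropy at kappa = 0.
    p = np.asarray(p, dtype=float)
    return -np.sum(p * kappa_log(p, kappa))

p = [0.2, 0.3, 0.5]
S_shannon = kappa_entropy(p, 0.0)
S_kappa = kappa_entropy(p, 0.25)
```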

3.
Psychol Sci ; 34(12): 1322-1335, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37883792

ABSTRACT

The psychophysical laws governing the judgment of perceived numbers of objects or events, called the number sense, have been studied in detail. However, the behavioral principles of the equally important numerical representations for action are largely unexplored in both humans and animals. We trained two male carrion crows (Corvus corone) to judge numerical values of instruction stimuli from one to five and to flexibly perform a matching number of pecks. Our quantitative analysis of the crows' number production performance shows the same behavioral regularities that have previously been demonstrated for the judgment of sensory numerosity, such as the numerical distance effect, the numerical magnitude effect, and the logarithmic compression of the number line. The presence of these psychophysical phenomena in crows producing numbers of pecks suggests a unified sensorimotor number representation system underlying the judgment of the number of external stimuli and internally generated actions.


Subjects
Crows, Animals, Humans, Male, Differential Threshold, Cognition, Judgment, Neurons
4.
Entropy (Basel) ; 25(7)2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37510054

ABSTRACT

We propose to use a particular case of Kaniadakis' logarithm for the exploratory analysis of compositional data following the Aitchison approach. The affine information geometry derived from Kaniadakis' logarithm provides a consistent setup for the geometric analysis of compositional data. Moreover, the affine setup suggests a rationale for choosing a specific divergence, which we name the Kaniadakis divergence.
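As a rough illustration of the idea (an assumption of this sketch, not the paper's exact construction), the Kaniadakis logarithm can be dropped into a centred log-ratio style transform of a composition in the Aitchison spirit:

```python
import numpy as np

def kappa_log(x, kappa=0.5):
    # Kaniadakis kappa-logarithm (natural log recovered as kappa -> 0)
    return (x**kappa - x**(-kappa)) / (2 * kappa)

def kappa_clr(composition, kappa=0.5):
    # Centred "log-ratio" style transform with the kappa-logarithm in
    # place of the natural log (a sketch, not the paper's exact map).
    x = np.asarray(composition, dtype=float)
    lx = kappa_log(x, kappa)
    return lx - lx.mean()

z = kappa_clr([0.2, 0.3, 0.5])
```

As with the ordinary clr, the transformed parts sum to zero by construction.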

5.
J Theor Biol ; 543: 111107, 2022 06 21.
Article in English | MEDLINE | ID: mdl-35367452

ABSTRACT

Weber's law states that the ratio of the smallest perceptual change in an input signal to the background signal is constant. The law is observed across the perception of weight, light intensity, and sound intensity and pitch. To explain Weber's law observed in steady-state responses, two models of perception have been proposed, namely the logarithmic and the linear model. This paper argues in favour of the linear model, which requires the sensory system to generate a linear input-output relationship over several orders of magnitude. To this end, a four-node motif (FNM) is constructed from first principles whose series provides an almost linear relationship between the input signal and the output over an arbitrary range of input signal. Mathematical analysis into the origin of this quasi-linear relationship shows that a series of coherent type-1 feed-forward loops (C1-FFL) is able to provide a perfectly linear input-output relationship over an arbitrary range of input signal. The FNM also reproduces the neuronal data from a numerosity-detection study in the monkey. The series of FNM also provides a mechanism for sensitive detection over an arbitrary range of input signal when the output has an upper limit. Further, the series of FNM provides a general basis for a class of bow-tie architectures in which the number of receptors is much lower than the range of the input signal and the "decoded output". Besides the (quasi-)linear input-output relationship, another example of this class of bow-tie architecture that the series of FNM is able to produce is the absorption spectra of the cone opsins of humans. Further, both the series of FNM and of C1-FFL can compute a logarithm over an arbitrary range of input signal.
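Under the logarithmic model, Weber's law follows directly: if the response is r = ln(I) and a fixed response increment dr is needed for detection, the just-noticeable input increment satisfies dI/I = e^dr − 1, a constant. A small sketch:

```python
import numpy as np

# Logarithmic-model reading of Weber's law: with response r = ln(I) and a
# fixed detection threshold dr on the response, the just-noticeable input
# increment is dI = I * (exp(dr) - 1), so dI / I is constant across I.
def jnd_log_model(I, dr=0.05):
    return I * (np.exp(dr) - 1.0)

I = np.array([1.0, 10.0, 100.0, 1000.0])
weber_fractions = jnd_log_model(I) / I   # identical at every intensity
```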


Subjects
Neurons, Mathematics
6.
Trends Food Sci Technol ; 120: 254-264, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35210697

ABSTRACT

BACKGROUND: Starch is a principal dietary source of digestible carbohydrate and energy. Glycaemic and insulinaemic responses to foods containing starch vary considerably, and glucose responses to starchy foods are often described by the glycaemic index (GI) and/or glycaemic load (GL). Low GI/GL foods are beneficial in the management of cardiometabolic disorders (e.g., type 2 diabetes, cardiovascular disease). Differences in the rates and extents of digestion of starch-containing foods will affect postprandial glycaemia. SCOPE AND APPROACH: Amylolysis kinetics are influenced by structural properties of the food matrix and of starch itself. Native (raw) semi-crystalline starch is digested slowly, but hydrothermal processing (cooking) gelatinises the starch and greatly increases its digestibility. In plants, starch granules are contained within cells, and intact cell walls can limit the accessibility of water and digestive enzymes, hindering gelatinisation and digestibility. In vitro studies of starch digestion by α-amylase model the early stages of digestion and can suggest likely rates of digestion in vivo and expected glycaemic responses. Reports that metabolic responses to dietary starch are influenced by α-amylase gene copy number heighten interest in amylolysis. KEY FINDINGS AND CONCLUSIONS: This review shows how enzyme kinetic strategies can explain differences in the digestion rates of different starchy foods. Michaelis-Menten and logarithm-of-slope (LOS) analyses provide kinetic parameters (e.g., Km and kcat/Km) for evaluating the catalytic efficiency and ease of digestibility of starch by α-amylase. Suitable kinetic methods maximise the information that can be obtained from in vitro work for predicting starch digestion and glycaemic responses in vivo.
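The LOS approach mentioned above can be illustrated on synthetic first-order digestion data, where C(t) = C∞(1 − e^(−kt)) gives the linear relation ln(dC/dt) = ln(C∞·k) − k·t (illustrative parameter values, not data from the review):

```python
import numpy as np

# Logarithm-of-slope (LOS) analysis on synthetic first-order digestion data:
# C(t) = C_inf * (1 - exp(-k*t))  =>  ln(dC/dt) = ln(C_inf * k) - k * t,
# so a straight-line fit to ln(dC/dt) vs t recovers k (slope) and C_inf.
C_inf, k = 80.0, 0.03                     # illustrative: % digested, 1/min
t = np.arange(0.0, 180.0, 10.0)
rate = C_inf * k * np.exp(-k * t)         # analytic dC/dt for the sketch

slope, intercept = np.polyfit(t, np.log(rate), 1)
k_est = -slope
C_inf_est = np.exp(intercept) / k_est
```

A biphasic digestion profile (two rate constants) shows up as two straight-line segments in the same LOS plot.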

7.
Solid State Nucl Magn Reson ; 122: 101821, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36191580

ABSTRACT

We present a theoretical and numerical description of the spin dynamics associated with the TRAPDOR-HMQC (T-HMQC) experiment for a 1H (I) - 35Cl (S) spin system under fast magic angle spinning (MAS). Toward this end, an exact effective Hamiltonian describing the system is numerically evaluated with a matrix-logarithm approach. The different magnitudes of the heteronuclear and pure-S terms in the effective Hamiltonian suggest a truncation approximation, which is shown to be in excellent agreement with the exact time evolution. Limitations of this approximation, especially at the rotary resonance condition, are discussed. The truncated effective Hamiltonian is further employed to monitor the buildup of various coherences during TRAPDOR irradiation. We observe and explain a functional resemblance between the magnitudes of different terms in the truncated effective Hamiltonian and the amplitudes of various coherences during TRAPDOR irradiation, as a function of crystallite orientation. Subsequently, the dependence of the sign (phase) of the T-HMQC signal on the type of coherence generated is investigated numerically and analytically. We examine the continuous creation and evolution of various coherences at arbitrary times, i.e., at and between avoided level crossings. The behavior between consecutive crossings is described analytically and reveals 'quadrature' evolution of pairs of coherences and coherence interconversions. The adiabatic, sudden, and intermediate regimes for T-HMQC experiments are discussed within the approach established by A. J. Vega. Equations as well as numerical simulations suggest the existence of a driving coherence which builds up between consecutive crossings and is then distributed at crossings among other coherences. In the intermediate regime, redistribution of the driving coherence to other coherences is almost uniform, such that coherences involving S-spin double-quantum terms may be efficiently produced.
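The matrix-logarithm route to an exact effective Hamiltonian can be sketched on a toy two-level system: given the propagator U over a period τ, H_eff = (i/τ)·log(U). The Hamiltonian and parameters below are illustrative, not those of the paper:

```python
import numpy as np
from scipy.linalg import expm, logm

# Matrix-logarithm route to an effective Hamiltonian: if U is the propagator
# over one period tau, U = expm(-1j * H_eff * tau), then
# H_eff = (1j / tau) * logm(U)  (valid while eigenphases stay within a branch).
tau = 1e-5                                   # illustrative period (s)
H = 2 * np.pi * 1e3 * np.array([[1.0, 0.5],  # toy two-level Hamiltonian (rad/s)
                                [0.5, -1.0]])
U = expm(-1j * H * tau)
H_eff = (1j / tau) * logm(U)                 # recovers H for this toy case
```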

8.
Lifetime Data Anal ; 28(1): 89-115, 2022 01.
Article in English | MEDLINE | ID: mdl-34608590

ABSTRACT

Multivariate panel count data frequently arise in follow-up studies involving several related types of recurrent events. For univariate panel count data, several varying coefficient models have been developed. However, varying coefficient models for multivariate panel count data remain to be studied. In this paper, we propose a varying coefficient mean model for multivariate panel count data to describe possible nonlinear interaction effects between the covariates, and a local logarithm partial likelihood procedure is considered to estimate the unknown covariate effects. Furthermore, a Breslow-type estimator is constructed for the baseline mean functions. The consistency and asymptotic normality of the proposed estimators are established under some mild conditions. The utility of the proposed approach is evaluated by numerical simulations and an application to a dataset from a skin cancer study.


Subjects
Local Neoplasm Recurrence, Computer Simulation, Humans
9.
Entropy (Basel) ; 24(11)2022 Nov 05.
Article in English | MEDLINE | ID: mdl-36359706

ABSTRACT

We are concerned with the weighted Tsallis and Kaniadakis divergences between two measures. More precisely, we find inequalities between these divergences and the Tsallis and Kaniadakis logarithms, prove that they obey bounds similar to those that bound the Kullback-Leibler divergence, and show that they are pseudo-additive.
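For the Tsallis side, a minimal sketch of the q-logarithm and a divergence built from it, written here in a standard (unweighted) form, D_q(p‖r) = −Σ p_i ln_q(r_i/p_i), which recovers the Kullback-Leibler divergence as q → 1; the weighted variants of the paper are not reproduced:

```python
import numpy as np

def tsallis_log(x, q):
    # Tsallis q-logarithm: ln_q(x) = (x^(1-q) - 1) / (1 - q); ln(x) as q -> 1.
    return np.log(x) if q == 1 else (x**(1 - q) - 1) / (1 - q)

def tsallis_divergence(p, r, q):
    # D_q(p || r) = -sum_i p_i * ln_q(r_i / p_i); Kullback-Leibler as q -> 1.
    p, r = np.asarray(p, float), np.asarray(r, float)
    return -np.sum(p * tsallis_log(r / p, q))

p = np.array([0.2, 0.3, 0.5])
r = np.array([0.3, 0.3, 0.4])
d = tsallis_divergence(p, r, 0.7)
```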

10.
Entropy (Basel) ; 24(7)2022 Jul 09.
Article in English | MEDLINE | ID: mdl-35885178

ABSTRACT

In this paper, we consider the optimization of the quantum circuit for the discrete logarithm on binary elliptic curves under constrained connectivity, focusing on the resource expenditure and the optimal design of quantum operations such as the addition, binary shift, multiplication, squaring, inversion, and division included in the point addition on binary elliptic curves. Based on the space-efficient quantum Karatsuba multiplication, the number of CNOTs in the inversion and division circuits has been reduced with the help of a reduction to the Steiner tree problem. The optimized number of CNOTs is related to the minimum degree of the connected graph.

11.
Entropy (Basel) ; 24(11)2022 Nov 17.
Article in English | MEDLINE | ID: mdl-36421528

ABSTRACT

The Calogero-Leyvraz Lagrangian framework, associated with the dynamics of a charged particle moving in a plane under the combined influence of a magnetic field and a frictional force, has some special features. It is endowed with a Shannon "entropic" type kinetic energy term. In this paper, we carry out the constructions of the 2D Lotka-Volterra replicator equations and the N=2 relativistic Toda lattice systems using this class of Lagrangians. We take advantage of the special structure of the kinetic term and deform the kinetic energy term of the Calogero-Leyvraz Lagrangians using the κ-deformed logarithm as proposed by Kaniadakis and Tsallis. This method yields new constructions of the κ-deformed Lotka-Volterra replicator and relativistic Toda lattice equations.

12.
Habitat Int ; 123: None, 2022 May.
Article in English | MEDLINE | ID: mdl-35685950

ABSTRACT

The application of last-generation spatial data modelling, integrating Earth Observation, population, economic, and other spatially explicit data, enables insights into the sustainability of global urbanisation processes with unprecedented detail, consistency, and international comparability. In this study, the land use efficiency indicator, as developed in the Sustainable Development Goals, is assessed globally for the first time at the level of Functional Urban Areas (FUAs). Each FUA includes a city and its commuting zone as inferred from statistical modelling of available spatial data. FUAs represent the economic area of influence of each urban centre. Hence, the analysis of land consumption within their boundaries is significant for spatial planning and policy analysis as well as many other research areas. We utilize the boundaries of more than 9,000 FUAs to estimate land use efficiency between 1990 and 2015, using population and built-up area data extracted from the Global Human Settlement Layer. This analysis shows how, in the observed period, FUAs in low-income countries of the Global South evolved with rates of population growth surpassing those of land consumption. However, in almost all regions of the globe, more than half of the FUAs improved their land use efficiency in recent years (2000-2015) with respect to the previous decade (1990-2000). Our study concludes that the spatial expansion of urban areas within FUA boundaries is reducing the compactness of settlements, and that settlements located within FUAs do not display higher land use efficiency than those outside FUAs.
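The SDG land use efficiency indicator referred to above is conventionally computed as the ratio of the land consumption rate to the population growth rate (SDG indicator 11.3.1); a sketch with illustrative (not real) FUA numbers:

```python
import math

# SDG indicator 11.3.1 (land use efficiency): ratio of the land consumption
# rate to the population growth rate over the same period of n years,
#   LCR = ln(Urb_t2 / Urb_t1) / n,   PGR = ln(Pop_t2 / Pop_t1) / n,
#   LCRPGR = LCR / PGR.
# LCRPGR < 1 means population grew faster than built-up land, as reported
# for many FUAs in low-income countries.
def lcrpgr(urb_t1, urb_t2, pop_t1, pop_t2, years):
    lcr = math.log(urb_t2 / urb_t1) / years
    pgr = math.log(pop_t2 / pop_t1) / years
    return lcr / pgr

# Illustrative (not real) FUA values: built-up area (km^2) and population.
ratio = lcrpgr(100.0, 130.0, 1.0e6, 1.5e6, 25)
```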

13.
J Proteome Res ; 20(2): 1397-1404, 2021 02 05.
Article in English | MEDLINE | ID: mdl-33417772

ABSTRACT

Data from untargeted metabolomics studies employing nuclear magnetic resonance (NMR) spectroscopy oftentimes contain negative values. These negative values hamper data processing and analysis algorithms and prevent the use of such data in multiomics integration settings. New methods to deal with such negative values are thus an urgent need in the metabolomics community. This study presents affine transformation of negative values (ATNV), a novel algorithm for replacement of negative values in NMR data sets. ATNV was implemented in the R package mrbin, which features interactive menus for user-friendly application and is available for free for various operating systems within the free R statistical programming language. The novel algorithms were tested on a set of human urinary NMR spectra and were able to successfully identify relevant metabolites.


Subjects
Metabolomics, Software, Algorithms, Humans, Magnetic Resonance Imaging, Magnetic Resonance Spectroscopy
14.
Chem Eng J ; 405: 126893, 2021 Feb 01.
Article in English | MEDLINE | ID: mdl-32901196

ABSTRACT

The unprecedented global spread of the severe acute respiratory syndrome (SARS) caused by SARS-CoV-2 has had distressing consequences for human health, the economy, and ecosystem services. To date, novel coronavirus (CoV) outbreaks have been associated with SARS-CoV-2 (2019), the Middle East respiratory syndrome coronavirus (MERS-CoV, 2012), and SARS-CoV-1 (2003). CoV belongs to the enveloped family of Betacoronavirus (ßCoV) with positive-sense single-stranded RNA (+ssRNA). While the persistence, transmission, and spread of SARS-CoV-2 through close proximity are well established, the faecal-oral route is now emerging as a major environmental concern for community transmission. The replication and persistence of CoV in the gastrointestinal (GI) tract and shedding through stools indicate a potential transmission route to environmental settings. Despite the evidence, based on the few reports on SARS-CoV-2 occurrence and persistence in wastewater/sewage/water, the transmission of the infective virus to the community is yet to be established. In this light, this communication reviews the possible influx routes of enteric enveloped viral transmission in environmental settings with reference to occurrence, persistence, detection, and inactivation, based on the literature published so far. The possibilities of airborne transmission through enteric virus-laden aerosols, the environmental factors that may influence viral transmission, and disinfection methods (conventional and emerging) as well as the inactivation mechanisms for enveloped viruses are reviewed. The need for wastewater-based epidemiology (WBE) studies for surveillance and early warning is elaborated. This communication provides a basis for understanding SARS-CoV-2, as well as other viruses, from an environmental engineering perspective in order to design effective strategies to counter enteric virus transmission, and it also serves as a working paper for researchers, policy makers, and regulators.

15.
Entropy (Basel) ; 23(10)2021 Oct 07.
Article in English | MEDLINE | ID: mdl-34682037

ABSTRACT

In this paper, we obtain the law of the iterated logarithm for linear processes in a sub-linear expectation space. It is established for strictly stationary, independent random variable sequences with finite second-order moments in the sense of non-additive capacity.
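For orientation, the classical (additive-probability) law of the iterated logarithm states that limsup_n S_n/√(2n log log n) = σ almost surely; the sub-linear expectation version of the paper generalizes this setting. A numerical illustration of the classical normalization:

```python
import math
import random

# Classical (additive-probability) law of the iterated logarithm for a
# +/-1 random walk (sigma = 1):  limsup_n |S_n| / sqrt(2 n log log n) = 1
# almost surely.  The running ratio below stays of order one.
random.seed(1)
n_max = 200_000
s = 0.0
ratios = []
for n in range(1, n_max + 1):
    s += random.choice((-1.0, 1.0))
    if n > 10:                       # log log n needs n > e
        ratios.append(abs(s) / math.sqrt(2 * n * math.log(math.log(n))))
peak = max(ratios)
```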

16.
J Food Sci Technol ; 58(6): 2237-2245, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33967320

ABSTRACT

The factors affecting the extent and rate constant of starch digestion were determined by applying the logarithm-of-slope (LOS) plot approach to banana starch modified by heat-moisture treatment (HMT) at different temperatures (100, 110 and 120 °C) and times (4 and 8 h). The LOS plots showed that native and HMT starches exhibited two separate digestion rate constants (k1 in the rapid phase and k2 in the slower phase). Digestibility increased with the temperature and time of HMT, with the amount of digestible starch in the slower phase (C2∞) predominantly contributing to the increase in total digestible starch (total C∞). The small change in digestible starch in the rapid phase (C1∞) might be attributed to the comparatively intact granule surfaces and the relatively small change in the ratio of ordered to disordered α-glucan chains at the granule surface, as observed by FTIR-ATR, while the increase in C2∞ might be linked to the decrease in crystallinity observed by XRD. Compared with native starch, the more densely packed structure characterising the A-type crystalline structure of HMT starches is a key factor behind their lower k2. The densely packed matrices might slow the diffusion of amylase through the granule to reach digestible α-glucan chains.

17.
Biostatistics ; 20(3): 499-516, 2019 07 01.
Article in English | MEDLINE | ID: mdl-29912318

ABSTRACT

Low-density lipoprotein cholesterol (LDL-C) has been identified as a causative factor for atherosclerosis and related coronary heart disease, and as the main target for cholesterol- and lipid-lowering therapy. Statin drugs inhibit cholesterol synthesis in the liver and are typically the first line of therapy to lower elevated levels of LDL-C. On the other hand, a different drug, Ezetimibe, inhibits the absorption of cholesterol by the small intestine and provides a different mechanism of action. Many clinical trials have been carried out on safety and efficacy evaluation of cholesterol lowering drugs. To synthesize the results from different clinical trials, we examine treatment level (aggregate) network meta-data from 29 double-blind, randomized, active, or placebo-controlled statins ± Ezetimibe clinical trials on adult treatment-naïve patients with primary hypercholesterolemia. In this article, we propose a new approach to carry out Bayesian inference for arm-based network meta-regression. Specifically, we develop a new strategy of grouping the variances of random effects, in which we first formulate possible sets of the groups of the treatments based on their clinical mechanisms of action and then use Bayesian model comparison criteria to select the best set of groups. The proposed approach is especially useful when some treatment arms are involved in only a single trial. In addition, a Markov chain Monte Carlo sampling algorithm is developed to carry out the posterior computations. In particular, the correlation matrix is generated from its full conditional distribution via partial correlations. The proposed methodology is further applied to analyze the network meta-data from 29 trials with 11 treatment arms.


Subjects
Anticholesteremic Agents/pharmacology, LDL Cholesterol/drug effects, Hypercholesterolemia/drug therapy, Statistical Models, Network Meta-Analysis, Bayes Theorem, LDL Cholesterol/blood, Humans, Hypercholesterolemia/blood, Regression Analysis
18.
Transfus Apher Sci ; 59(4): 102806, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32446633

ABSTRACT

BACKGROUND: Anti-blood group antibody titers (ABTs), reported as titer values, vary depending on the testing method used. The introduction of new test methods, such as automated methods, requires proper method comparison. In this study, an automated blood bank system and the manual tube method for ABT were compared using a log-transformed dataset to evaluate an alternative statistical approach. METHODS: ABT was conducted using specimens referred for solid organ transplantation. The methods compared were the conventional manual tube method and the IH-500 automated blood bank system using column agglutination (CAT). The criteria for agreement were an exact match and a 1-titer match. Measured titer values were log-transformed onto an interval scale for Deming regression analysis. RESULTS: In the comparison of the tube and CAT methods using titer values and the two criteria, the exact and 1-titer matches were 15.9-41.5 % and 65.9-97.6 %, respectively. Deming regression demonstrated the presence of both proportional and constant differences between the two methods. CONCLUSION: Method comparison using conventional statistical approaches had limitations due to the semi-quantitative values of the test. Log-transformed interval-scale values were useful for interpreting method comparison datasets. This alternative statistical approach could contribute to more accurate comparisons between assays and the standardization of ABT testing.
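The log-transform plus Deming regression pipeline can be sketched as follows. Deming regression with an error-variance ratio of 1 (an assumption of this sketch) allows measurement error in both methods; the titer values are illustrative, not the study's data:

```python
import math

# Deming regression (error-variance ratio 1) on log2-transformed titers.
# The log2 transform puts doubling-dilution titers on an interval scale.
def deming(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    syy = sum((v - my) ** 2 for v in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

tube = [4, 8, 8, 16, 32, 64, 128]       # illustrative tube-method titers
cat = [8, 8, 16, 32, 64, 128, 256]      # illustrative CAT titers
slope, intercept = deming([math.log2(v) for v in tube],
                          [math.log2(v) for v in cat])
```

A slope different from 1 indicates a proportional difference between methods; a non-zero intercept indicates a constant difference.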


Subjects
ABO Blood-Group System/immunology, Blood Preservation/methods, Evaluation Studies as Topic, Humans
19.
J Math Biol ; 80(4): 995-1019, 2020 03.
Article in English | MEDLINE | ID: mdl-31705189

ABSTRACT

Deciding whether a substitution matrix is embeddable (i.e. the corresponding Markov process has a continuous-time realization) is an open problem even for [Formula: see text] matrices. We study the embedding problem and rate identifiability for the K80 model of nucleotide substitution. For these [Formula: see text] matrices, we fully characterize the set of embeddable K80 Markov matrices and the set of embeddable matrices for which rates are identifiable. In particular, we describe an open subset of embeddable matrices with non-identifiable rates. This set contains matrices with positive eigenvalues and also diagonal largest in column matrices, which might lead to consequences in parameter estimation in phylogenetics. Finally, we compute the relative volumes of embeddable K80 matrices and of embeddable matrices with identifiable rates. This study concludes the embedding problem for the more general model K81 and its submodels, which had been initiated by the last two authors in a separate work.
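Embeddability can be probed numerically: a Markov matrix M is embeddable when it admits a real logarithm that is a rate matrix (non-negative off-diagonal entries, zero row sums). The sketch below checks only the principal logarithm, a simplifying assumption; the paper shows the full K80 characterization is subtler:

```python
import numpy as np
from scipy.linalg import logm

def k80_matrix(a, b):
    # K80 substitution (Markov) matrix: a = transition probability,
    # b = probability of each of the two transversions.
    d = 1 - a - 2 * b
    return np.array([[d, a, b, b],
                     [a, d, b, b],
                     [b, b, d, a],
                     [b, b, a, d]])

def is_embeddable_principal(M, tol=1e-10):
    # Checks whether the principal matrix logarithm of M is a rate matrix
    # (non-negative off-diagonal entries).  Sufficient, but not the full
    # characterization established in the paper.
    Q = logm(M).real
    off = Q - np.diag(np.diag(Q))
    return bool((off >= -tol).all())

M = k80_matrix(0.1, 0.05)
embeddable = is_embeddable_principal(M)
```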


Subjects
Genetic Models, Mutation Rate, Phylogeny, Molecular Evolution, Markov Chains, Mathematical Concepts, Mutation, Nucleotides/genetics
20.
Sensors (Basel) ; 20(4)2020 Feb 13.
Article in English | MEDLINE | ID: mdl-32070005

ABSTRACT

Sparse Code Multiple Access (SCMA) is a new multiple access scheme based on non-orthogonal spread-spectrum technology, proposed by Huawei in 2014. In applications of this technology, the original Message Passing Algorithm (MPA) converges slowly and has high computational complexity. The threshold-based MPA has a high Bit Error Ratio (BER) when the threshold is low. The Maximum logarithm Message Passing Algorithm (Max-log-MPA) uses an approximation that loses some messages, so its detection performance is poor. Therefore, to solve the above problems, a Threshold-Based Max-log-MPA (T-Max-log-MPA) low-complexity multiuser detection algorithm is proposed in this paper. The Maximum logarithm (Max-log) algorithm is combined with threshold setting, and the stability of user nodes is taken as a necessary condition for the decision. Before message updating, each user node is first judged against the necessary stability condition, and then the threshold is determined. Only users who meet the threshold condition and pass the stability condition are decoded in advance. Throughout, the log-domain MPA converts an exp operation and a multiplication operation into a maximum and an addition operation. The simulation results show that the proposed algorithm effectively reduces the computational complexity while maintaining the BER, and the Computational Complexity Reduction Ratio (CCRR) becomes more pronounced as the signal-to-noise ratio increases.
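The core Max-log simplification is the replacement of the log-sum-exp operation by a maximum, which turns the exponentials and multiplications of probability-domain MPA into comparisons and additions:

```python
import math

# Max-log approximation at the heart of Max-log-MPA: the log-domain update
# log(sum_i exp(a_i)) ("log-sum-exp") is replaced by max_i a_i, so an exp
# and a multiplication become a comparison and an addition.
def log_sum_exp(values):
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

def max_log(values):
    return max(values)

a = [-1.2, 0.4, 3.0]
exact = log_sum_exp(a)    # exact log-domain message combination
approx = max_log(a)       # Max-log approximation (a lower bound)
```

The gap between the two is bounded by log(n) for n terms and shrinks when one term dominates, which is why the approximation costs little in BER at moderate-to-high SNR.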
