Results 1 - 17 of 17
1.
Proc Natl Acad Sci U S A ; 121(22): e2318329121, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38787881

ABSTRACT

The Hill functions, ℋ_h(x) = x^h/(1 + x^h), have been widely used in biology for over a century but, with the exception of ℋ_1, they have had no justification other than as a convenient fit to empirical data. Here, we show that they are the universal limit for the sharpness of any input-output response arising from a Markov process model at thermodynamic equilibrium. Models may represent arbitrary molecular complexity, with multiple ligands, internal states, conformations, coregulators, etc, under core assumptions that are detailed in the paper. The model output may be any linear combination of steady-state probabilities, with components other than the chosen input ligand held constant. This formulation generalizes most of the responses in the literature. We use a coarse-graining method in the graph-theoretic linear framework to show that two sharpness measures for input-output responses fall within an effectively bounded region of the positive quadrant, Ω_m ⊂ ℝ₊², for any equilibrium model with m input binding sites. Ω_m exhibits a cusp which approaches, but never exceeds, the sharpness of ℋ_m, but the region and the cusp can be exceeded when models are taken away from thermodynamic equilibrium. Such fundamental thermodynamic limits are called Hopfield barriers, and our results provide a biophysical justification for the Hill functions as the universal Hopfield barriers for sharpness. Our results also introduce an object, Ω_m, whose structure may be of mathematical interest, and suggest the importance of characterizing Hopfield barriers for other forms of cellular information processing.
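
As a quick numerical illustration of the kind of sharpness the abstract refers to, the sketch below evaluates Hill functions for several Hill coefficients h together with their logarithmic sensitivity d log ℋ_h / d log x, whose maximum grows with h. This sensitivity is only a simple proxy; the specific sharpness measures bounded by Ω_m are defined in the paper and may differ.

```python
import numpy as np

def hill(x, h):
    """Hill function H_h(x) = x^h / (1 + x^h)."""
    return x**h / (1.0 + x**h)

def log_sensitivity(x, h):
    """d log H_h / d log x = h / (1 + x^h), a simple sharpness proxy."""
    return h / (1.0 + x**h)

x = np.logspace(-2, 2, 401)
for h in (1, 2, 4):
    s_max = log_sensitivity(x, h).max()
    print(f"h={h}: max logarithmic sensitivity ≈ {s_max:.2f}")
```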


Subjects
Markov Chains, Thermodynamics, Biological Models, Ligands
2.
Genet Epidemiol ; 45(6): 577-592, 2021 09.
Article in English | MEDLINE | ID: mdl-34082482

ABSTRACT

Interest in analyzing X chromosome single nucleotide polymorphisms (SNPs) is growing, and several approaches have been proposed. Prior studies have compared the power of different approaches, but bias and interpretation of coefficients have received less attention. We performed simulations to demonstrate the impact of X chromosome model assumptions on effect estimates. We investigated the coefficient biases of SNP and sex effects with commonly used models for X chromosome SNPs, including models with and without assumptions of X chromosome inactivation (XCI), and with and without SNP-sex interaction terms. Sex and SNP coefficient biases were observed when the assumptions made about XCI and sex differences in SNP effect in the analysis model were inconsistent with the data-generating model. However, including a SNP-sex interaction term often eliminated these biases. To illustrate these findings, estimates under different genetic model assumptions are compared and interpreted in a real data example. Models to analyze X chromosome SNPs make assumptions beyond those made in autosomal variant analysis. Assumptions made about X chromosome SNP effects should be stated clearly when reporting and interpreting X chromosome associations. Fitting models with SNP × sex interaction terms can avoid reliance on these assumptions, eliminating coefficient bias even in the absence of sex differences in SNP effect.
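
A minimal sketch of the comparison described above, using simulated data and statsmodels. The genotype coding (females 0/1/2 copies, males hemizygous 0/1), effect sizes, and variable names are illustrative assumptions, not the authors' code; the point is that the interaction model absorbs a sex-specific SNP effect that biases the main-effects model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
sex = rng.integers(0, 2, n)                      # 0 = female, 1 = male
maf = 0.3
# Females carry 0/1/2 copies; males are hemizygous (0/1 copy).
snp = np.where(sex == 0,
               rng.binomial(2, maf, n),
               rng.binomial(1, maf, n))
# Data-generating model with a sex-specific SNP effect.
logit = -1.0 + 0.4 * snp + 0.2 * sex + 0.3 * snp * sex
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame(dict(y=y, snp=snp, sex=sex))

main = smf.logit("y ~ snp + sex", data=df).fit(disp=0)
inter = smf.logit("y ~ snp * sex", data=df).fit(disp=0)
print(main.params)    # SNP and sex coefficients can be biased here
print(inter.params)   # interaction term absorbs the sex-specific effect
```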


Subjects
Human X Chromosome/genetics, Genetic Models, Single Nucleotide Polymorphism, Bias, Female, Humans, Male, X Chromosome Inactivation/genetics
3.
Biostatistics ; 18(3): 505-520, 2017 Jul 01.
Article in English | MEDLINE | ID: mdl-28334368

ABSTRACT

Net survival, the survival that would be observed if the disease under study were the only cause of death, is an important, useful, and increasingly used indicator in public health, especially in population-based studies. Estimates of net survival and of the effects of prognostic factors can be obtained by excess hazard regression modeling. Whereas various diagnostic tools have been developed for overall survival analysis, few methods are available to check the assumptions of excess hazard models. We propose two formal tests to check the proportional hazards assumption and the validity of the functional form of covariate effects in the context of flexible parametric excess hazard modeling. These tests were adapted from martingale residual-based tests for parametric modeling of overall survival, so that the model can include an element necessary for net survival analysis: the population mortality hazard. We studied the size and power of these tests through an extensive simulation study based on complex but realistic data. The new tests showed sizes close to the nominal values and satisfactory power. The power of the proportionality test was similar to or greater than that of other tests already available in the field of net survival. We illustrate the use of these tests with real data from French cancer registries.
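
The core of an excess hazard model is the decomposition of the observed hazard into a known population hazard plus a disease-related excess hazard, λ_obs(t) = λ_pop(t) + λ_exc(t; x). A minimal sketch, assuming a constant excess hazard and known per-subject population hazards; both are illustrative simplifications relative to the flexible parametric models and martingale-residual tests in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 2000
h_pop = rng.uniform(0.005, 0.05, n)      # known population hazard per subject (from life tables)
lam_true = 0.08                          # true (constant) excess hazard
t_event = rng.exponential(1 / (h_pop + lam_true))
t_cens = rng.uniform(0, 10, n)
t = np.minimum(t_event, t_cens)
d = (t_event <= t_cens).astype(float)    # 1 = death observed

def neg_loglik(lam):
    # Terms depending on lam: d * log(h_pop + lam) - lam * t
    # (the population cumulative hazard is constant w.r.t. lam and is dropped).
    return -(d * np.log(h_pop + lam) - lam * t).sum()

fit = minimize_scalar(neg_loglik, bounds=(1e-6, 1.0), method="bounded")
print("estimated excess hazard:", round(fit.x, 4))
```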


Subjects
Prognosis, Proportional Hazards Models, Survival Analysis, Humans, Neoplasms, Public Health, Registries, Research Design
4.
Environ Health ; 16(1): 85, 2017 08 09.
Article in English | MEDLINE | ID: mdl-28793913

ABSTRACT

Hazard identification is a major scientific challenge, notably for environmental epidemiology, and is often surrounded by debate, as the recent case of glyphosate shows, arising in the first place from the inherently problematic nature of many components of the identification process. Particularly relevant in this respect are components that are less amenable to logical or mathematical formalization and depend essentially on scientists' judgment. Four such components, which can distort the correct process of hazard identification, are reviewed and discussed from an epidemiologist's perspective: (1) lexical mix-up of hazard and risk; (2) scientific questions as distinct from testable hypotheses, and the implications for the hierarchy of strength of evidence obtainable from different types of study designs; (3) assumptions in prior beliefs and model choices; and (4) conflicts of interest. Four suggestions are put forward to strengthen a process that remains in several respects judgmental, but not arbitrary, in nature.


Subjects
Environmental Health, Epidemiologic Methods, Hazardous Substances, Conflict of Interest, Research Design, Risk, Terminology as Topic
5.
bioRxiv ; 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38585761

ABSTRACT

The Hill functions, ℋ_h(x) = x^h/(1 + x^h), have been widely used in biology for over a century but, with the exception of ℋ_1, they have had no justification other than as a convenient fit to empirical data. Here, we show that they are the universal limit for the sharpness of any input-output response arising from a Markov process model at thermodynamic equilibrium. Models may represent arbitrary molecular complexity, with multiple ligands, internal states, conformations, co-regulators, etc, under core assumptions that are detailed in the paper. The model output may be any linear combination of steady-state probabilities, with components other than the chosen input ligand held constant. This formulation generalises most of the responses in the literature. We use a coarse-graining method in the graph-theoretic linear framework to show that two sharpness measures for input-output responses fall within an effectively bounded region of the positive quadrant, Ω_m ⊂ ℝ₊², for any equilibrium model with m input binding sites. Ω_m exhibits a cusp which approaches, but never exceeds, the sharpness of ℋ_m, but the region and the cusp can be exceeded when models are taken away from thermodynamic equilibrium. Such fundamental thermodynamic limits are called Hopfield barriers and our results provide a biophysical justification for the Hill functions as the universal Hopfield barriers for sharpness. Our results also introduce an object, Ω_m, whose structure may be of mathematical interest, and suggest the importance of characterising Hopfield barriers for other forms of cellular information processing.

6.
Ecol Evol ; 14(7): e11387, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38994210

ABSTRACT

Generalized linear models (GLMs) are an integral tool in ecology. Like general linear models, GLMs assume linearity, which entails a linear relationship between the independent and dependent variables. However, because this assumption acts on the link scale rather than the natural scale in GLMs, it is more easily overlooked. We reviewed recent ecological literature to quantify how often the linearity assumption is tested. We then used two case studies to confront the linearity assumption with two GLMs fit to empirical data. In the first case study, we compared GLMs to generalized additive models (GAMs) fit to mammal relative abundance data. In the second case study, we tested for linearity in occupancy models using passerine point-count data. We reviewed 162 studies published in the last 5 years in five leading ecology journals and found that fewer than 15% reported testing for linearity. These studies used transformations and GAMs more often than they reported a linearity test. In the first case study, GAMs strongly outperformed GLMs in modeling relative abundance, as measured by AIC, and helped uncover nonlinear responses of carnivore species to landscape development. In the second case study, 14% of species-specific models failed a formal statistical test for linearity. We also found that differences between linear and nonlinear (i.e., with a transformed independent variable) model predictions were similar for some species but not for others, with implications for inference and conservation decision-making. Our review suggests that tests for linearity are rarely reported in recent studies employing GLMs. Our case studies show how formally comparing models that allow for nonlinear relationships between the dependent and independent variables can affect inference, generate new hypotheses, and alter conservation implications. We conclude by suggesting that ecological studies report tests for linearity and use formal methods to address violations of the linearity assumption in GLMs.
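
A sketch of the kind of comparison described above: a Poisson GLM that is linear on the link scale versus a spline alternative, compared by AIC. The toy data, spline basis, and covariate name are assumptions for illustration; the study itself fitted GAMs to mammal relative-abundance data rather than this simulated example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400
dev = rng.uniform(0, 1, n)                        # landscape development (toy covariate)
mu = np.exp(1.5 - 6 * (dev - 0.4) ** 2)           # nonlinear (hump-shaped) truth
counts = rng.poisson(mu)
df = pd.DataFrame(dict(counts=counts, dev=dev))

linear = smf.glm("counts ~ dev", data=df,
                 family=sm.families.Poisson()).fit()
spline = smf.glm("counts ~ bs(dev, df=4)", data=df,
                 family=sm.families.Poisson()).fit()
print("AIC linear:", round(linear.aic, 1))
print("AIC spline:", round(spline.aic, 1))        # lower AIC flags the nonlinearity
```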

7.
Adv Nutr ; 15(5): 100214, 2024 05.
Article in English | MEDLINE | ID: mdl-38521239

ABSTRACT

Observational studies of foods and health are susceptible to bias, particularly from confounding between diet and other lifestyle factors. Common methods for deriving dose-response meta-analysis (DRMA) estimates may contribute to biased or overly certain risk estimates. We used DRMA models to evaluate the empirical evidence for an association of colorectal cancer (CRC) with unprocessed red meat (RM) and processed meats (PM), and the consistency of this association for low and high consumers under different modeling assumptions. Using the Global Burden of Disease project's systematic reviews as a starting point, we compiled a data set of studies of PM with 29 cohorts contributing 23,522,676 person-years and of RM with 23 cohorts totaling 17,259,839 person-years. We fitted DRMA models to lower consumers only [consumption below the United States median for PM (21 g/d) or RM (56 g/d)] and compared them with DRMA models using all consumers. To investigate the impact of model selection, we compared classical DRMA models against an empirical model, for both lower consumers only and all consumers. Finally, we assessed whether the type of reference consumer (nonconsumer or mixed consumer/nonconsumer) influenced a meta-analysis of the lowest consumption arm. We found no significant association with consumption of 50 g/d of RM using the empirical fit, whether restricted to lower consumption (relative risk [RR] 0.93; 0.80-1.02) or using all consumption levels (RR 1.04; 0.99-1.10), while classical models showed RRs as high as 1.09 (1.00-1.18) at 50 g/d. PM consumption of 20 g/d was not associated with CRC (RR 1.01; 0.87-1.18) when using lower-consumer data, regardless of model choice. Using all consumption data resulted in an association with CRC at 20 g/d of PM for the empirical models (RR 1.07; 1.02-1.12) and at as little as 1 g/d for classical models. The empirical DRMA showed nonlinear, nonmonotonic relationships for PM and RM. Nonconsumer reference groups did not affect the RM (P = 0.056) or PM (P = 0.937) association with CRC in the lowest consumption arms. In conclusion, classical DRMA model assumptions and the inclusion of higher consumption levels influence the association between CRC and low RM and PM consumption. Furthermore, a no-risk limit of 0 g/d consumption of RM and PM is inconsistent with the evidence.
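
A minimal two-stage sketch of linear dose-response pooling: a within-study weighted regression of log relative risk on dose, followed by inverse-variance pooling of the slopes. The numbers are made up, and the sketch ignores the within-study covariance that methods such as Greenland-Longnecker account for; it is only meant to show how restricting the dose range (e.g., to lower consumers) changes which data enter the model.

```python
import numpy as np

# Made-up example data: per study, one row of (dose g/d, log RR vs. reference, SE)
# for each non-reference exposure category.
studies = [
    (np.array([20, 50, 100]), np.array([0.01, 0.04, 0.10]), np.array([0.05, 0.05, 0.06])),
    (np.array([25, 60]),      np.array([0.00, 0.06]),       np.array([0.04, 0.05])),
]

def study_slope(dose, logrr, se):
    """Weighted LS slope of log RR on dose, forced through the origin (RR = 1 at 0 g/d)."""
    w = 1 / se ** 2
    beta = np.sum(w * dose * logrr) / np.sum(w * dose ** 2)
    var = 1 / np.sum(w * dose ** 2)
    return beta, var

slopes, variances = zip(*(study_slope(*s) for s in studies))
w = 1 / np.array(variances)
pooled = np.sum(w * np.array(slopes)) / np.sum(w)   # fixed-effect pooling of per-study slopes
print("pooled RR per 50 g/d:", round(np.exp(50 * pooled), 3))
```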


Subjects
Colorectal Neoplasms, Diet, Humans, Colorectal Neoplasms/epidemiology, Meat, Observational Studies as Topic, Bias, Risk Assessment, Red Meat/adverse effects, Meta-Analysis as Topic, Risk Factors, Meat Products/adverse effects
8.
Life (Basel) ; 12(10)2022 Oct 14.
Article in English | MEDLINE | ID: mdl-36295038

ABSTRACT

Transboundary animal diseases, such as foot and mouth disease (FMD), pose a significant and ongoing threat to global food security. Such diseases can produce large, spatially complex outbreaks. Mathematical models are often used to understand their spatio-temporal dynamics and to create response plans for possible disease introductions. Model assumptions regarding the transmission behavior of premises and the movement patterns of livestock directly affect our understanding of the ecological drivers of outbreaks and how best to control them. Here, we investigate the impact that these assumptions have on model predictions of FMD outbreaks in the U.S., using models of livestock shipment networks and disease spread. We explore the impact of changing assumptions about premises transmission behavior, both by including within-herd dynamics and by accounting for premises type and increasing the accuracy of shipment predictions. We find that the impact of these assumptions on outbreak predictions is smaller than that of the underlying livestock demography, but that they are important for investigating some response objectives, such as the impact on trade. These results suggest that demography is a key ecological driver of outbreaks and is critical for making robust predictions, but that understanding management objectives is also important when making choices about model assumptions.
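
To make the premises-and-shipments framing concrete, here is a toy stochastic simulation of spread between premises over a random shipment network. Everything here (network, rates, infectious period, single-probability transmission per exposed day) is an assumption for illustration; the study uses predicted U.S. livestock shipment networks and far richer premises-level models.

```python
import numpy as np

rng = np.random.default_rng(7)
n_premises = 500
# Toy shipment network: expected daily shipments between ordered premises pairs.
ship_rate = rng.exponential(0.002, size=(n_premises, n_premises))
np.fill_diagonal(ship_rate, 0.0)

p_transmit = 0.5            # chance an exposed premises is infected that day (simplification)
infectious_days = 10        # assumed premises-level infectious period

status = np.zeros(n_premises, int)        # 0 susceptible, 1 infectious, 2 removed
days_left = np.zeros(n_premises, int)
status[rng.integers(n_premises)] = 1
days_left[status == 1] = infectious_days

for day in range(120):
    infected = np.where(status == 1)[0]
    shipments = rng.poisson(ship_rate[infected])          # shipments leaving infected premises
    exposed = (shipments.sum(axis=0) > 0) & (status == 0)
    new_inf = exposed & (rng.random(n_premises) < p_transmit)
    status[new_inf] = 1
    days_left[new_inf] = infectious_days
    days_left[infected] -= 1
    status[infected[days_left[infected] == 0]] = 2        # end of infectious period

print("premises ever infected:", np.sum(status > 0))
```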

9.
Bioinform Biol Insights ; 15: 11779322211051522, 2021.
Article in English | MEDLINE | ID: mdl-34707351

ABSTRACT

Regression modeling is a workhorse of statistical ecology that allows researchers to find relationships between a response variable and a set of explanatory variables. Despite being one of the fundamental statistical ideas in ecological curricula, regression modeling can be complex and subtle. This paper is intended as an applied protocol to help students understand their data, select the most appropriate models, verify assumptions, and interpret the output. Basic ecological questions are tackled using data from a fictional series, "Fantastic beasts and where to find them," with the aim of showing how statistical thinking can foster curiosity, creativity, and imagination in ecology, from the formulation of hypotheses to the interpretation of results.
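
In the spirit of such a protocol, a compact sketch of the fit-then-check workflow: fit a linear model, then examine the residuals for normality and homoscedasticity. The variable names and simulated data are invented; the paper's worked examples use its own "Fantastic beasts" data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan
from scipy import stats

rng = np.random.default_rng(3)
n = 200
habitat = rng.uniform(0, 10, n)
abundance = 2 + 1.5 * habitat + rng.normal(0, 2, n)    # toy response
df = pd.DataFrame(dict(abundance=abundance, habitat=habitat))

model = smf.ols("abundance ~ habitat", data=df).fit()
resid = model.resid

print(model.summary().tables[1])                        # coefficient table
print("Shapiro-Wilk p (normality):", stats.shapiro(resid)[1])
bp_stat, bp_p, _, _ = het_breuschpagan(resid, model.model.exog)
print("Breusch-Pagan p (homoscedasticity):", bp_p)
```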

10.
Psychometrika ; 86(3): 800-824, 2021 09.
Article in English | MEDLINE | ID: mdl-34463910

ABSTRACT

Item response theory (IRT) model applications extend well beyond cognitive ability testing, and various patient-reported outcome (PRO) measures are among the more prominent examples. PRO (and similar) constructs differ from cognitive ability constructs in many ways, and these differences have model-fitting implications. With a few notable exceptions, however, most IRT applications to PRO constructs rely on traditional IRT models, such as the graded response model. We review some notable differences between cognitive and PRO constructs and how these differences can present challenges for traditional IRT model applications. We then apply two models (the traditional graded response model and an alternative log-logistic model) to depression measure data drawn from the Patient-Reported Outcomes Measurement Information System project. We do not claim that one model is "a better fit" or more "valid" than the other; rather, we show that the log-logistic model may be more consistent with the construct of depression as a unipolar phenomenon. Clearly, the graded response and log-logistic models can lead to different conclusions about the psychometrics of an instrument and the scaling of individual differences. We also underscore that, in general, the question of which model is more appropriate cannot be decided by fit-index comparisons alone; these decisions may require integrating psychometrics with theory and research findings on the construct of interest.
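
For reference, the graded response model writes category probabilities as differences of cumulative boundary curves. The sketch below implements that standard parameterization with made-up item parameters; the log-logistic alternative discussed in the paper is not reproduced here.

```python
import numpy as np

def grm_probs(theta, a, b):
    """Graded response model category probabilities for one item.

    theta : latent trait value
    a     : discrimination parameter
    b     : increasing boundary thresholds (K - 1 of them for K categories)
    """
    b = np.asarray(b, float)
    p_star = 1 / (1 + np.exp(-a * (theta - b)))      # P(X >= k) for k = 1..K-1
    cum = np.concatenate(([1.0], p_star, [0.0]))
    return cum[:-1] - cum[1:]                         # P(X = k) for k = 0..K-1

print(grm_probs(theta=0.5, a=1.7, b=[-1.0, 0.0, 1.2]))   # probabilities sum to 1
```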


Subjects
Depression, Patient-Reported Outcome Measures, Humans, Logistic Models, Psychiatric Status Rating Scales, Psychometrics
11.
Assessment ; 28(8): 1932-1948, 2021 12.
Article in English | MEDLINE | ID: mdl-32659111

ABSTRACT

In continuous test norming, the test score distribution is estimated as a continuous function of predictor(s). A flexible approach for norm estimation is the use of generalized additive models for location, scale, and shape. It is unknown how sensitive their estimates are to model flexibility and sample size. Generally, a flexible model that fits at the population level has smaller bias than its restricted nonfitting version, yet it has larger sampling variability. We investigated how model flexibility relates to bias, variance, and total variability in estimates of normalized z scores under empirically relevant conditions, involving the skew Student t and normal distributions as population distributions. We considered both transversal and longitudinal assumption violations. We found that models with too strict distributional assumptions yield biased estimates, whereas too flexible models yield increased variance. The skew Student t distribution, unlike the Box-Cox Power Exponential distribution, appeared problematic to estimate for normally distributed data. Recommendations for empirical norming practice are provided.
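
The end product of continuous norming is a normalized z score, obtained by passing the estimated conditional CDF value of an observed score through the standard normal quantile function. A minimal sketch, assuming a normal conditional distribution with age-dependent mean and SD; the paper studies far more flexible families (e.g., Box-Cox Power Exponential, skew Student t), and the norming functions below are placeholders, not real norms.

```python
import numpy as np
from scipy.stats import norm

# Illustrative fitted norming functions of age (placeholders).
def mu(age):
    return 20 + 0.8 * age

def sigma(age):
    return 3 + 0.05 * age

def normalized_z(score, age):
    """z = Phi^{-1}( F(score | age) ) under the assumed conditional normal model."""
    p = norm.cdf(score, loc=mu(age), scale=sigma(age))
    return norm.ppf(p)

print(normalized_z(score=30, age=10))   # z score for a raw score of 30 at age 10
```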


Subjects
Sample Size, Bias, Humans
12.
JP J Biostat ; 15(1): 1-20, 2018.
Article in English | MEDLINE | ID: mdl-31452580

ABSTRACT

Regression mixture models are becoming more widely used in applied research. It has been recognized that these models are quite sensitive to underlying assumptions, yet many of these assumptions are not directly testable. We discuss a diagnostic tool based on reconstructed residuals that can help uncover violations of model assumptions. These residuals are found by using the posterior probability of class membership to assign, based on a multinomial distribution, a class to each observation. Standard residual checks can be applied to these posterior draw residuals to explore violations of the model assumptions. We present several illustrations of the diagnostic tool.
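
A sketch of the residual construction described above: draw a class for each observation from its posterior class probabilities (a multinomial draw, which reduces to a Bernoulli draw in the two-class case), then compute the residual against that class's regression line. The two-class data and fitted parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

# Suppose a fitted 2-class regression mixture gave these estimates.
intercepts = np.array([0.0, 3.0])
slopes     = np.array([1.0, -0.5])
sigmas     = np.array([0.8, 0.8])
weights    = np.array([0.6, 0.4])

# Toy data roughly consistent with that mixture.
n = 300
x = rng.uniform(0, 5, n)
true_class = rng.random(n) < 0.4
y = np.where(true_class, 3.0 - 0.5 * x, 1.0 * x) + rng.normal(0, 0.8, n)

# Posterior class probabilities under the fitted mixture.
dens = np.stack([
    weights[k] * np.exp(-0.5 * ((y - intercepts[k] - slopes[k] * x) / sigmas[k]) ** 2) / sigmas[k]
    for k in range(2)
])
post = dens / dens.sum(axis=0)

# Posterior-draw residuals: assign each observation a class by a random draw.
drawn = (rng.random(n) < post[1]).astype(int)
resid = y - (intercepts[drawn] + slopes[drawn] * x)
print(f"posterior-draw residuals: mean {resid.mean():.3f}, sd {resid.std():.3f}")
```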

13.
Soc Personal Psychol Compass ; 10(3): 150-163, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-26985234

ABSTRACT

Mediation analysis is a popular framework for identifying underlying mechanisms in social psychology. In the context of simple mediation, we review and discuss the implications of three facets of mediation analysis: (a) conceptualization of the relations between the variables, (b) statistical approaches, and (c) relevant elements of design. We also highlight the issue of equivalent models, which is inherent in simple mediation. The extent to which results are meaningful stems directly from the choices made regarding these three facets of mediation analysis. We conclude by discussing how mediation analysis can be better applied to examine causal processes, highlighting the limits of simple mediation, and making recommendations for better practice.
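
On the statistical-approach facet, the workhorse quantity in simple mediation is the indirect effect a × b, commonly assessed with a percentile bootstrap. A minimal sketch with simulated data; this is the generic product-of-coefficients approach, not a reanalysis of anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # mediator model: a = 0.5
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome model: b = 0.4, c' = 0.2

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # slope of m ~ x
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]       # slope of y ~ m, controlling for x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                       # resample cases with replacement
    boot.append(indirect(x[idx], m[idx], y[idx]))
boot = np.array(boot)

est = indirect(x, m, y)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect {est:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```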

14.
Math Biosci ; 277: 89-107, 2016 07.
Article in English | MEDLINE | ID: mdl-27130854

ABSTRACT

Mathematical models have been used to study Ebola disease transmission dynamics and control for the recent epidemics in West Africa. Many of the models used in these studies are based on the model of Legrand et al. (2007), and most failed to accurately project the outbreak's course (Butler, 2014). Although there could be many reasons for this, including incomplete and unreliable data on Ebola epidemiology and a lack of empirical data on how disease-control measures quantitatively affect Ebola transmission, we examine the underlying assumptions of the Legrand model and provide alternative formulations that are simpler and yield additional information regarding the epidemiology of Ebola during an outbreak. We developed three models with different assumptions about disease stage durations, one of which simplifies to the Legrand model while the others have more realistic distributions. Control and basic reproduction numbers for all three models are derived and shown to provide threshold conditions for outbreak control and prevention.
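
One way such alternative formulations differ is in how disease stage durations are distributed: exponentially distributed stages versus more realistic Erlang distributions obtained by chaining substages (the "linear chain trick"). A sketch for the infectious stage of a simple SEIR model with toy parameters; this is a generic illustration of the stage-duration idea, not the paper's models, and k = 1 recovers the exponential-duration assumption.

```python
import numpy as np
from scipy.integrate import odeint

beta, mean_lat, mean_inf = 0.3, 10.0, 7.0   # toy per-day parameters

def seir_erlang(y, t, k):
    """SEIR with the infectious stage split into k substages (Erlang-distributed duration)."""
    S, E, *I, R = y
    I_tot = sum(I)
    N = S + E + I_tot + R
    new_inf = beta * S * I_tot / N
    rate = k / mean_inf                       # each substage has mean mean_inf / k
    dS = -new_inf
    dE = new_inf - E / mean_lat
    dI = [E / mean_lat - rate * I[0]] + [rate * (I[j - 1] - I[j]) for j in range(1, k)]
    dR = rate * I[-1]
    return [dS, dE] + dI + [dR]

t = np.linspace(0, 365, 366)
for k in (1, 5):
    y0 = [1e6 - 1, 0] + [1] + [0] * (k - 1) + [0]
    sol = odeint(seir_erlang, y0, t, args=(k,))
    print(f"k={k}: peak infectious prevalence ≈ {sol[:, 2:2 + k].sum(axis=1).max():.0f}")
```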


Subjects
Disease Outbreaks, Ebola Virus Disease/transmission, Theoretical Models, Humans
15.
Vector Borne Zoonotic Dis ; 15(3): 215-7, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25793478

ABSTRACT

Mathematical modeling, and notably the basic reproduction number R0, has become a popular tool for describing vector-borne disease dynamics. We compare two widely used methods for calculating the probability that a vector survives the extrinsic incubation period. The two methods are based on different assumptions about the duration of the extrinsic incubation period: one assumes a fixed period, and the other assumes a fixed daily rate of becoming infectious. We conclude that the outcomes differ substantially between the methods when the average life span of the vector is short compared with the extrinsic incubation period.
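
The two assumptions lead to familiar closed forms: with a fixed extrinsic incubation period (EIP) of n days and constant daily mortality rate μ, the survival probability is exp(-μn); with a fixed daily rate 1/n of becoming infectious (exponentially distributed EIP with mean n), it is (1/n)/((1/n) + μ) = 1/(1 + μn). A quick numerical comparison under assumed values:

```python
import numpy as np

mu = np.array([0.05, 0.1, 0.2])   # daily vector mortality rate (lifespans of 20, 10, 5 days)
n = 10.0                          # mean extrinsic incubation period (days)

p_fixed = np.exp(-mu * n)         # EIP is exactly n days
p_exp = 1 / (1 + mu * n)          # EIP exponentially distributed with mean n

for m, a, b in zip(mu, p_fixed, p_exp):
    print(f"mu={m:.2f}: fixed EIP {a:.3f} vs. exponential EIP {b:.3f}")
```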


Subjects
Communicable Diseases/transmission, Disease Vectors, Biological Models, Animals, Humans, Reproduction
16.
Front Psychol ; 3: 354, 2012.
Article in English | MEDLINE | ID: mdl-23055992

ABSTRACT

This paper discusses the influence that decisions about data cleaning and violations of statistical assumptions can have on drawing valid conclusions from research studies. The datasets provided in this paper were collected as part of a National Science Foundation grant to design online games, and associated labs, for use in undergraduate and graduate statistics courses that can effectively illustrate issues not always addressed in traditional instruction. Students play the role of a researcher by selecting from a wide variety of independent variables to explain why some students complete games faster than others. Typical project data sets are "messy," with many outliers (usually from some students taking much longer than others) and distributions that do not appear normal. Classroom testing of the games over several semesters has produced evidence of their efficacy in statistics education. The projects tend to be engaging for students, and they make the impact of data cleaning and of violations of model assumptions more relevant. We discuss the use of one of the games and its associated guided lab to introduce students to issues prevalent in real data, the challenges involved in data cleaning, and the dangers that arise when model assumptions are violated.

17.
Res Synth Methods ; 3(4): 300-11, 2012 Dec.
Article in English | MEDLINE | ID: mdl-26053423

ABSTRACT

Indirect comparisons and mixed treatment comparison (MTC) meta-analyses are increasingly used in medical research. These methods allow a simultaneous analysis of all relevant interventions in a connected network even when direct evidence for a particular pair of interventions is missing. The MTC meta-analysis framework provides a flexible approach for complex networks. However, the method still has unsolved problems, in particular the choice of network size and the assessment of inconsistency. In this paper, we describe the practical application of MTC meta-analysis using a data set on antidepressants, focusing on the impact of the size of the chosen network and the assumption of consistency. A larger network is based on more evidence but may show inconsistencies, whereas a smaller network contains less evidence but may show no clear inconsistencies; a choice is therefore required as to which network should be used in practice. In summary, MTC meta-analysis represents a promising approach, but clear application standards are still lacking. In particular, standards for identifying inconsistency, and for dealing with potential inconsistency, are required.
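
The simplest building block of such networks is the adjusted indirect comparison (Bucher method): with a common comparator B, the indirect estimate of A versus C is d_AC = d_AB - d_CB, with variance equal to the sum of the two variances. A minimal sketch with made-up log odds ratios; full MTC models pool many such loops simultaneously.

```python
import numpy as np
from scipy.stats import norm

# Made-up direct estimates (log odds ratios vs. a common comparator B) and standard errors.
d_AB, se_AB = -0.30, 0.12    # treatment A vs. B
d_CB, se_CB = -0.10, 0.15    # treatment C vs. B

d_AC = d_AB - d_CB                       # indirect A vs. C through B
se_AC = np.sqrt(se_AB**2 + se_CB**2)     # variances add for independent comparisons
ci = d_AC + np.array([-1, 1]) * norm.ppf(0.975) * se_AC
print(f"indirect log OR, A vs. C: {d_AC:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```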
