Results 1 - 20 of 190
1.
Epidemiology; 35(2): 218-231, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38290142

ABSTRACT

BACKGROUND: Instrumental variable (IV) analysis provides an alternative set of identification assumptions in the presence of uncontrolled confounding when attempting to estimate causal effects. Our objective was to evaluate the suitability of measures of prescriber preference and calendar time as potential IVs to evaluate the comparative effectiveness of buprenorphine/naloxone versus methadone for treatment of opioid use disorder (OUD). METHODS: Using linked population-level health administrative data, we constructed five IVs: prescribing preference at the individual, facility, and region levels (continuous and categorical variables), calendar time, and a binary prescriber's preference IV in analyzing the treatment assignment-treatment discontinuation association using both incident-user and prevalent-new-user designs. Using published guidelines, we assessed and compared each IV according to the four assumptions for IVs, employing both empirical assessment and content expertise. We evaluated the robustness of results using sensitivity analyses. RESULTS: The study sample included 35,904 incident users (43.3% on buprenorphine/naloxone) initiated on opioid agonist treatment by 1585 prescribers during the study period. While all candidate IVs were strong (A1) according to conventional criteria, by expert opinion, we found no evidence against assumptions of exclusion (A2), independence (A3), monotonicity (A4a), and homogeneity (A4b) for prescribing preference-based IV. Some criteria were violated for the calendar time-based IV. We determined that preference in provider-level prescribing, measured on a continuous scale, was the most suitable IV for comparative effectiveness of buprenorphine/naloxone and methadone for the treatment of OUD. CONCLUSIONS: Our results suggest that prescriber's preference measures are suitable IVs in comparative effectiveness studies of treatment for OUD.
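
For readers who want the mechanics, here is a minimal two-stage least-squares (2SLS) sketch of IV estimation with a continuous preference-type instrument. The data, variable names, and effect size are invented for illustration; this is not the study's analysis code.

```python
# Hedged 2SLS sketch: simulated data, hypothetical names throughout.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
u = rng.normal(size=n)                    # unmeasured confounder
pref = rng.normal(size=n)                 # prescriber-preference instrument
# Treatment depends on the instrument and the confounder (relevance, A1).
treat = (0.8 * pref + u + rng.normal(size=n) > 0).astype(float)
# Outcome depends on treatment and the confounder, but not directly on the
# instrument (exclusion, A2).
y = 1.5 * treat + u + rng.normal(size=n)

# Stage 1: regress treatment on the instrument.
stage1 = sm.OLS(treat, sm.add_constant(pref)).fit()
# Stage 2: regress the outcome on the stage-1 fitted values.
stage2 = sm.OLS(y, sm.add_constant(stage1.fittedvalues)).fit()
print(stage2.params)  # slope near 1.5; naive OLS of y on treat is biased by u

# Caveat: standard errors from this manual two-step procedure are not valid;
# dedicated IV routines (e.g., linearmodels' IV2SLS) correct them.
```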


Subjects
Methadone; Opioid-Related Disorders; Humans; Methadone/therapeutic use; Opioid-Related Disorders/drug therapy; Buprenorphine, Naloxone Drug Combination/therapeutic use; Opiate Substitution Treatment/methods; Health Status; Analgesics, Opioid/therapeutic use
2.
Prev Med; 164: 107127, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35787846

ABSTRACT

It is well known that the statistical analyses in health-science and medical journals are frequently misleading or even wrong. Despite many decades of reform efforts by hundreds of scientists and statisticians, attempts to fix the problem by avoiding obvious error and encouraging good practice have not altered this basic situation. Statistical teaching and reporting remain mired in damaging yet editorially enforced jargon of "significance", "confidence", and imbalanced focus on null (no-effect or "nil") hypotheses, leading to flawed attempts to simplify descriptions of results in ordinary terms. A positive development amidst all this has been the introduction of interval estimates alongside or in place of significance tests and P-values, but intervals have been beset by similar misinterpretations. Attempts to remedy this situation by calling for replacement of traditional statistics with competitors (such as pure-likelihood or Bayesian methods) have had little impact. Thus, rather than ban or replace P-values or confidence intervals, we propose to replace traditional jargon with more accurate and modest ordinary-language labels that describe these statistics as measures of compatibility between data and hypotheses or models, which have long been in use in the statistical modeling literature. Such descriptions emphasize the full range of possibilities compatible with observations. Additionally, a simple transform of the P-value called the surprisal or S-value provides a sense of how much or how little information the data supply against those possibilities. We illustrate these reforms using some examples from a highly charged topic: trials of ivermectin treatment for Covid-19.
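
As a quick numeric illustration of the proposed surprisal relabeling (the transform s = -log2(p) is spelled out in the BMC Med Res Methodol entry below), the following sketch tabulates S-values for a few familiar P-values:

```python
# S-value (surprisal) transform of a P-value: s = -log2(p), read as bits of
# information against the test hypothesis, calibrated by coin tossing
# (s = k bits is about as surprising as k heads in k fair tosses).
import math

for p in (0.50, 0.25, 0.05, 0.005):
    print(f"p = {p:<5} ->  s = {-math.log2(p):4.1f} bits")
# p = 0.05 yields s ~ 4.3 bits: roughly as surprising as 4 heads in a row.
```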


Subjects
COVID-19; Humans; Data Interpretation, Statistical; Bayes Theorem; COVID-19/prevention & control; Probability; Models, Statistical; Confidence Intervals
3.
Eur J Epidemiol; 37(11): 1149-1154, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36369315

ABSTRACT

The 1970s and 1980s saw the appearance of many papers on the topics of synergy, antagonism, and similar concepts of causal interactions and interdependence of effects, with a special emphasis on distinguishing these concepts from that of statistical interaction - the need for a product term in a model. As an example, Miettinen defined "synergism" as "the existence of instances in which both risk factors are needed for the effect", whereas "antagonism" is where "at least one [factor] can block the solo effect of the other". In response, Greenland and Poole constructed a systematic analysis of 16 possible individual response patterns in a deterministic causal model for two binary exposure variables, and showed how these patterns can be mapped onto nine types of sufficient causes, which in turn can be simplified into four intuitive categories. Although these and other papers recognized that epidemiology cannot directly study biological mechanisms underlying interaction, they showed how it can usefully study causal and preventive interdependence - which, despite its mechanistic agnosticism, has important implications for clinical decision making as well as for public health.
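
The 16-pattern enumeration is easy to reproduce. Below is a sketch reconstructed from the abstract's description (not code from the paper); the type numbering is arbitrary:

```python
# For two binary exposures, a deterministic individual response is the
# 4-tuple of potential outcomes (y00, y01, y10, y11), one entry per
# exposure combination, giving 2**4 = 16 possible response types.
from itertools import product

for i, pattern in enumerate(product((0, 1), repeat=4), start=1):
    # Miettinen's "synergism": the outcome occurs only when both factors act.
    tag = "  <- both factors required (synergism)" if pattern == (0, 0, 0, 1) else ""
    print(f"type {i:2d}: (y00, y01, y10, y11) = {pattern}{tag}")
```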


Subjects
Models, Theoretical; Humans; Causality; Risk Factors
4.
Am J Epidemiol; 190(8): 1617-1621, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-33778862

ABSTRACT

Lash et al. (Am J Epidemiol. 2021;190(8):1604-1612) have presented detailed critiques of 3 bias analyses that they identify as "suboptimal." This identification raises the question of what "optimal" means for bias analysis, because it is practically impossible to do statistically optimal analyses of typical population studies, with or without bias analysis. At best the analysis can only attempt to satisfy practice guidelines and account for available information both within and outside the study. One should not expect a full accounting for all sources of uncertainty; hence, interval estimates and distributions for causal effects should never be treated as valid uncertainty assessments; they are instead only example analyses that follow from collections of often questionable assumptions. These observations reinforce those of Lash et al. and point to the need for more development of methods for judging bias-parameter distributions and utilization of available information.


Subjects
Research Design; Bias; Causality; Humans
5.
Am J Epidemiol; 190(2): 191-193, 2021 Feb 01.
Article in English | MEDLINE | ID: mdl-32648906

ABSTRACT

Measures of information and surprise, such as the Shannon information value (S value), quantify the signal present in a stream of noisy data. We illustrate the use of such information measures in the context of interpreting P values as compatibility indices. S values help communicate the limited information supplied by conventional statistics and cast a critical light on cutoffs used to judge and construct those statistics. Misinterpretations of statistics may be reduced by interpreting P values and interval estimates using compatibility concepts and S values instead of "significance" and "confidence."


Subjects
Data Interpretation, Statistical; Epidemiologic Methods; Confidence Intervals; Humans; Uncertainty
6.
Epidemiology; 32(5): 617-624, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34224472

ABSTRACT

Quantitative bias analyses allow researchers to adjust for uncontrolled confounding, given specification of certain bias parameters. When researchers are concerned about unknown confounders, plausible values for these bias parameters will be difficult to specify. Ding and VanderWeele developed bounding factor and E-value approaches that require the user to specify only some of the bias parameters. We describe the mathematical meaning of bounding factors and E-values and the plausibility of these methods in an applied context. We encourage researchers to pay particular attention to the assumption made, when using E-values, that the prevalence of the uncontrolled confounder among the exposed is 100% (or, equivalently, the prevalence of the exposure among those without the confounder is 0%). We contrast methods that attempt to bound biases or effects and alternative approaches such as quantitative bias analysis. We provide an example where failure to make this distinction led to erroneous statements. If the primary concern in an analysis is with known but unmeasured potential confounders, then E-values are not needed and may be misleading. In cases where the concern is with unknown confounders, the E-value assumption of an extreme possible prevalence of the confounder limits its practical utility.
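
The E-value itself is a one-line computation. A sketch using the published VanderWeele-Ding formula follows; the example risk ratios are hypothetical:

```python
# E-value for an observed risk ratio, per the published VanderWeele-Ding
# formula: E = RR + sqrt(RR * (RR - 1)) for RR > 1 (invert protective RRs).
import math

def e_value(rr: float) -> float:
    rr = rr if rr >= 1 else 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

for rr in (1.5, 2.0, 3.0):
    print(f"RR = {rr}: E-value = {e_value(rr):.2f}")
# An uncontrolled confounder would need associations of at least E with both
# exposure and outcome to shift the estimate to the null, under the
# extreme-prevalence assumption the abstract highlights.
```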


Subjects
Confounding Factors, Epidemiologic; Bias; Humans
7.
Paediatr Perinat Epidemiol; 35(1): 8-23, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33269490

ABSTRACT

The "replication crisis" has been attributed to perverse incentives that lead to selective reporting and misinterpretations of P-values and confidence intervals. A crude fix offered for this problem is to lower testing cut-offs (α levels), either directly or in the form of null-biased multiple comparisons procedures such as naïve Bonferroni adjustments. Methodologists and statisticians have expressed positions that range from condemning all such procedures to demanding their application in almost all analyses. Navigating between these unjustifiable extremes requires defining analysis goals precisely enough to separate inappropriate from appropriate adjustments. To meet this need, I here review issues arising in single-parameter inference (such as error costs and loss functions) that are often skipped in basic statistics, yet are crucial to understanding controversies in testing and multiple comparisons. I also review considerations that should be made when examining arguments for and against modifications of decision cut-offs and adjustments for multiple comparisons. The goal is to provide researchers a better understanding of what is assumed by each side and to enable recognition of hidden assumptions. Basic issues of goal specification and error costs are illustrated with simple fixed cut-off hypothesis testing scenarios. These illustrations show how adjustment choices are extremely sensitive to implicit decision costs, making it inevitable that different stakeholders will vehemently disagree about what is necessary or appropriate. Because decisions cannot be justified without explicit costs, resolution of inference controversies is impossible without recognising this sensitivity. Pre-analysis statements of funding, scientific goals, and analysis plans can help counter demands for inappropriate adjustments, and can provide guidance as to what adjustments are advisable. Hierarchical (multilevel) regression methods (including Bayesian, semi-Bayes, and empirical-Bayes methods) provide preferable alternatives to conventional adjustments, insofar as they facilitate use of background information in the analysis model, and thus can provide better-informed estimates on which to base inferences and decisions.


Subjects
Goals; Research Design; Bayes Theorem; Humans
8.
BMC Med Res Methodol; 20(1): 244, 2020 Sep 30.
Article in English | MEDLINE | ID: mdl-32998683

ABSTRACT

BACKGROUND: Researchers often misinterpret and misrepresent statistical outputs. This abuse has led to a large literature on modification or replacement of testing thresholds and P-values with confidence intervals, Bayes factors, and other devices. Because the core problems appear cognitive rather than statistical, we review some simple methods to aid researchers in interpreting statistical outputs. These methods emphasize logical and information concepts over probability, and thus may be more robust to common misinterpretations than are traditional descriptions. METHODS: We use the Shannon transform of the P-value p, also known as the binary surprisal or S-value s = -log2(p), to provide a measure of the information supplied by the testing procedure, and to help calibrate intuitions against simple physical experiments like coin tossing. We also use tables or graphs of test statistics for alternative hypotheses, and interval estimates for different percentile levels, to thwart fallacies arising from arbitrary dichotomies. Finally, we reinterpret P-values and interval estimates in unconditional terms, which describe compatibility of data with the entire set of analysis assumptions. We illustrate these methods with a reanalysis of data from an existing record-based cohort study. CONCLUSIONS: In line with other recent recommendations, we advise that teaching materials and research reports discuss P-values as measures of compatibility rather than significance, compute P-values for alternative hypotheses whenever they are computed for null hypotheses, and interpret interval estimates as showing values of high compatibility with data, rather than regions of confidence. Our recommendations emphasize cognitive devices for displaying the compatibility of the observed data with various hypotheses of interest, rather than focusing on single hypothesis tests or interval estimates. We believe these simple reforms are well worth the minor effort they require.
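
A small sketch of the recommendation to compute P-values (and S-values) for alternative hypotheses as well as the null; the estimate and standard error below are hypothetical, not taken from the paper's cohort reanalysis:

```python
# Compatibility check of a hypothetical log risk-ratio estimate against the
# null (RR = 1) AND an alternative (RR = 2), with surprisal (S) values.
import math
from scipy import stats

log_rr, se = math.log(1.6), 0.30       # hypothetical estimate and its SE

for label, rr0 in (("RR = 1 (null)", 1.0), ("RR = 2 (alternative)", 2.0)):
    z = (log_rr - math.log(rr0)) / se
    p = 2 * stats.norm.sf(abs(z))      # two-sided P-value
    s = -math.log2(p)                  # S-value: bits of information against rr0
    print(f"{label:22s} p = {p:.2f}  s = {s:.1f} bits")
# Here both hypotheses are quite compatible with the data at once, which is
# why reporting only a null test invites over-interpretation.
```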


Subjects
Cognition; Semantics; Bayes Theorem; Cohort Studies; Confidence Intervals; Humans; Probability
10.
Am J Epidemiol; 188(4): 753-759, 2019 Apr 01.
Article in English | MEDLINE | ID: mdl-30576419

ABSTRACT

There are now many published applications of causal (structural) models for estimating effects of time-varying exposures in the presence of confounding by earlier exposures and confounders affected by earlier exposures. Results from these models can be highly sensitive to inclusion of lagged and baseline exposure terms for different visits. This sensitivity is often overlooked in practice; moreover, results from these models are not directly comparable to results from conventional time-dependent regression models, because the latter do not estimate the same causal parameter even when no bias is present. We thus explore the implications of including lagged and baseline exposure terms in causal and regression models, using a public data set (Caerphilly Heart Disease Study in the United Kingdom, 1979-1998) relating smoking to cardiovascular outcomes.


Subjects
Cardiovascular Diseases/epidemiology; Population Surveillance/methods; Smoking/epidemiology; Cardiovascular Diseases/etiology; Confounding Factors, Epidemiologic; Humans; Longitudinal Studies; Regression Analysis; Smoking/adverse effects; Time Factors; United Kingdom/epidemiology
11.
JAMA; 331(4): 285-286, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38175628

ABSTRACT

This Viewpoint argues that a hypothesis-centric approach to writing grant applications is problematic and instead suggests that funding applications should be evaluated by their relevance and methodological quality rather than by qualitative assertions before the study is conducted.


Subjects
Financing, Organized; Research Support as Topic; Writing; Financing, Organized/methods; Financing, Organized/standards; Research Support as Topic/methods; Research Support as Topic/standards
12.
Am J Epidemiol; 187(4): 864-870, 2018 Apr 01.
Article in English | MEDLINE | ID: mdl-29020135

ABSTRACT

Separation is encountered in regression models with a discrete outcome (such as logistic regression) where the covariates perfectly predict the outcome. It is most frequent under the same conditions that lead to small-sample and sparse-data bias, such as presence of a rare outcome, rare exposures, highly correlated covariates, or covariates with strong effects. In theory, separation will produce infinite estimates for some coefficients. In practice, however, separation may be unnoticed or mishandled because of software limits in recognizing and handling the problem and in notifying the user. We discuss causes of separation in logistic regression and describe how common software packages deal with it. We then describe methods that remove separation, focusing on the same penalized-likelihood techniques used to address more general sparse-data problems. These methods improve accuracy, avoid software problems, and allow interpretation as Bayesian analyses with weakly informative priors. We discuss likelihood penalties, including some that can be implemented easily with any software package, and their relative advantages and disadvantages. We provide an illustration of ideas and methods using data from a case-control study of contraceptive practices and urinary tract infection.
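
A toy demonstration of separation and one penalized remedy follows. The ridge (L2) penalty used here corresponds to a mean-zero normal prior, one of the weakly informative penalties the paper discusses; the Firth/Jeffreys-prior penalty is another option not shown.

```python
# Toy separation example: x perfectly predicts y, so the unpenalized MLE
# for the slope diverges; an L2 (normal-prior) penalty keeps it finite.
import numpy as np
from sklearn.linear_model import LogisticRegression

x = np.arange(6, dtype=float).reshape(-1, 1)
y = np.array([0, 0, 0, 1, 1, 1])          # perfectly separated at x = 2.5

near_mle = LogisticRegression(C=1e8, max_iter=10_000).fit(x, y)
penalized = LogisticRegression(C=1.0).fit(x, y)

print("near-MLE slope: ", near_mle.coef_[0, 0])    # huge: symptom of separation
print("penalized slope:", penalized.coef_[0, 0])   # finite and stable
```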


Subjects
Data Interpretation, Statistical; Epidemiologic Research Design; Logistic Models; Humans; Sample Size
13.
Epidemiology; 29(5): 599-603, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29912015

ABSTRACT

Study size has typically been planned based on statistical power and therefore has been heavily influenced by the philosophy of statistical hypothesis testing. A worthwhile alternative is to plan study size based on precision, for example by aiming to obtain a desired width of a confidence interval for the targeted effect. This article presents formulas for planning the size of an epidemiologic study based on the desired precision of the basic epidemiologic effect measures.
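
To illustrate the idea, here is a generic precision-based calculation for a risk difference; it is a sketch of the general approach, not the paper's specific formulas:

```python
# Generic precision-based sizing: choose n per group so the anticipated 95%
# CI for a risk difference has a desired half-width.
import math

def n_per_group(p1: float, p2: float, half_width: float, z: float = 1.96) -> int:
    var_sum = p1 * (1 - p1) + p2 * (1 - p2)      # binomial variance terms
    return math.ceil((z / half_width) ** 2 * var_sum)

# Anticipated risks 0.10 vs 0.15, desired CI of +/- 0.05 around the RD:
print(n_per_group(0.10, 0.15, 0.05))             # -> 335 per group
```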


Subjects
Research Design; Sample Size; Confidence Intervals; Humans; Probability; Statistics as Topic
14.
Eur J Epidemiol; 33(1): 5-14, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29101596

ABSTRACT

Misconceptions about the impact of case-control matching remain common. We discuss several subtle problems associated with matched case-control studies that do not arise or are minor in matched cohort studies: (1) matching, even for non-confounders, can create selection bias; (2) matching distorts dose-response relations between matching variables and the outcome; (3) unbiased estimation requires accounting for the actual matching protocol as well as for any residual confounding effects; (4) for efficiency, identically matched groups should be collapsed; (5) matching may harm precision and power; (6) matched analyses may suffer from sparse-data bias, even when using basic sparse-data methods. These problems support advice to limit case-control matching to a few strong well-measured confounders, which would devolve to no matching if no such confounders are measured. On the positive side, odds ratio modification by matched variables can be assessed in matched case-control studies without further data, and when one knows either the distribution of the matching factors or their relation to the outcome in the source population, one can estimate and study patterns in absolute rates. Throughout, we emphasize distinctions from the more intuitive impacts of cohort matching.


Subjects
Bias; Case-Control Studies; Confounding Factors, Epidemiologic; Matched-Pair Analysis; Cohort Studies; Humans; Odds Ratio; Research Design
15.
Biom J; 60(1): 100-114, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29076182

ABSTRACT

Marginal structural models for time-fixed treatments fit using inverse-probability weighted estimating equations are increasingly popular. Nonetheless, the resulting effect estimates are subject to finite-sample bias when data are sparse, as is typical for large-sample procedures. Here we propose a semi-Bayes estimation approach which penalizes or shrinks the estimated model parameters to improve finite-sample performance. This approach uses simple symmetric data-augmentation priors. Limited simulation experiments indicate that the proposed approach reduces finite-sample bias and improves confidence-interval coverage when the true values lie within the central "hill" of the prior distribution. We illustrate the approach with data from a nonexperimental study of HIV treatments.
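
For context, a bare-bones inverse-probability-weighted fit for a time-fixed treatment (the setting of this paper) is sketched below; the semi-Bayes data-augmentation step itself is omitted, and all data and names are simulated and hypothetical.

```python
# Simulated example: IPW fit of a marginal structural model for a
# time-fixed binary treatment.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
conf = rng.normal(size=n)                                 # measured confounder
a = rng.binomial(1, 1 / (1 + np.exp(-conf)))              # treatment
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * a + conf))))  # outcome

# Propensity model -> stabilized inverse-probability-of-treatment weights.
ps = sm.Logit(a, sm.add_constant(conf)).fit(disp=0).predict(sm.add_constant(conf))
w = np.where(a == 1, a.mean() / ps, (1 - a.mean()) / (1 - ps))

# Weighted regression of outcome on treatment alone targets the marginal effect.
msm = sm.GLM(y, sm.add_constant(a), family=sm.families.Binomial(),
             freq_weights=w).fit()
print(msm.params)
```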


Subjects
Biometry/methods; Models, Statistical; Anti-HIV Agents/therapeutic use; Bayes Theorem; HIV Infections/drug therapy; Humans; Proportional Hazards Models; Regression Analysis
16.
Am J Epidemiol; 186(6): 639-645, 2017 Sep 15.
Article in English | MEDLINE | ID: mdl-28938712

ABSTRACT

There is no complete solution for the problem of abuse of statistics, but methodological training needs to cover cognitive biases and other psychosocial factors affecting inferences. The present paper discusses 3 common cognitive distortions: 1) dichotomania, the compulsion to perceive quantities as dichotomous even when dichotomization is unnecessary and misleading, as in inferences based on whether a P value is "statistically significant"; 2) nullism, the tendency to privilege the hypothesis of no difference or no effect when there is no scientific basis for doing so, as when testing only the null hypothesis; and 3) statistical reification, treating hypothetical data distributions and statistical models as if they reflect known physical laws rather than speculative assumptions for thought experiments. As commonly misused, null-hypothesis significance testing combines these cognitive problems to produce highly distorted interpretation and reporting of study results. Interval estimation has so far proven to be an inadequate solution because it involves dichotomization, an avenue for nullism. Sensitivity and bias analyses have been proposed to address reproducibility problems (Am J Epidemiol. 2017;186(6):646-647); these methods can indeed address reification, but they can also introduce new distortions via misleading specifications for bias parameters. P values can be reframed to lessen distortions by presenting them without reference to a cutoff, providing them for relevant alternatives to the null, and recognizing their dependence on all assumptions used in their computation; they nonetheless require rescaling for measuring evidence. I conclude that methodological development and training should go beyond coverage of mechanistic biases (e.g., confounding, selection bias, measurement error) to cover distortions of conclusions produced by statistical methods and psychosocial forces.


Subjects
Models, Statistical; Selection Bias; Cognitive Science; Humans; Reproducibility of Results; Research Design
17.
Eur J Epidemiol; 32(1): 3-20, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28220361

ABSTRACT

I present an overview of two methods controversies that are central to analysis and inference: That surrounding causal modeling as reflected in the "causal inference" movement, and that surrounding null bias in statistical methods as applied to causal questions. Human factors have expanded what might otherwise have been narrow technical discussions into broad philosophical debates. There seem to be misconceptions about the requirements and capabilities of formal methods, especially in notions that certain assumptions or models (such as potential-outcome models) are necessary or sufficient for valid inference. I argue that, once these misconceptions are removed, most elements of the opposing views can be reconciled. The chief problem of causal inference then becomes one of how to teach sound use of formal methods (such as causal modeling, statistical inference, and sensitivity analysis), and how to apply them without generating the overconfidence and misinterpretations that have ruined so many statistical practices.


Subjects
Causality; Models, Statistical; Research Design; Epidemiologic Methods; Humans
18.
JAMA; 327(11): 1083-1084, 2022 Mar 15.
Article in English | MEDLINE | ID: mdl-35226050
19.
Eur J Epidemiol; 31(4): 337-350, 2016 Apr.
Article in English | MEDLINE | ID: mdl-27209009

ABSTRACT

Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so; yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting.


Subjects
Confidence Intervals; Data Interpretation, Statistical; Humans; Probability
20.
Risk Anal; 36(1): 74-82, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26178183

ABSTRACT

Job exposure matrices (JEMs) are used to measure exposures based on information about particular jobs and tasks. JEMs are especially useful when individual exposure data cannot be obtained. Nonetheless, there may be other workplace exposures associated with the study disease that are not measured in available JEMs. When these exposures are also associated with the exposures measured in the JEM, biases due to uncontrolled confounding will be introduced. Furthermore, individual exposures differ from JEM measurements due to differences in job conditions and worker practices. Uncertainty may also be present at the assessor level since exposure information for each job may be imprecise or incomplete. Assigning individuals a fixed exposure determined by the JEM ignores these uncertainty sources. We examine the uncertainty displayed by bias analyses in a study of occupational electric shocks, occupational magnetic fields, and amyotrophic lateral sclerosis.
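
As a flavor of the kind of bias analysis described, here is a generic probabilistic (Monte Carlo) sketch for nondifferential exposure misclassification; the prevalences and priors are hypothetical, not the study's.

```python
# Generic Monte Carlo bias analysis for nondifferential exposure
# misclassification (e.g., JEM-assigned exposure). All numbers hypothetical.
import numpy as np

rng = np.random.default_rng(2)
p_case, p_ctrl = 0.30, 0.20        # observed exposure prevalences

ors = []
for _ in range(10_000):
    se = rng.uniform(0.75, 0.95)   # prior draw: JEM sensitivity
    sp = rng.uniform(0.85, 0.99)   # prior draw: JEM specificity
    # Back-correct observed prevalences, using p_obs = se*p + (1-sp)*(1-p).
    pc = (p_case - (1 - sp)) / (se + sp - 1)
    pn = (p_ctrl - (1 - sp)) / (se + sp - 1)
    if 0 < pc < 1 and 0 < pn < 1:  # keep only admissible corrections
        ors.append((pc / (1 - pc)) / (pn / (1 - pn)))

print(np.percentile(ors, [2.5, 50, 97.5]))   # simulation interval for the OR
```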


Subjects
Observer Variation; Occupational Exposure; Uncertainty; Humans