Results 1-20 of 39
1.
Biometrics ; 79(2): 554-558, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36445729

ABSTRACT

We propose and study an augmented variant of the estimator proposed by Wang, Tchetgen Tchetgen, Martinussen, and Vansteelandt.


Subjects
Causality, Proportional Hazards Models
2.
Biometrics ; 79(2): 1029-1041, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35839293

ABSTRACT

Inverse-probability-weighted estimators are the oldest and potentially most commonly used class of procedures for the estimation of causal effects. By adjusting for selection biases via a weighting mechanism, these procedures estimate an effect of interest by constructing a pseudopopulation in which selection biases are eliminated. Despite their ease of use, these estimators require the correct specification of a model for the weighting mechanism, are known to be inefficient, and suffer from the curse of dimensionality. We propose a class of nonparametric inverse-probability-weighted estimators in which the weighting mechanism is estimated via undersmoothing of the highly adaptive lasso, a nonparametric regression function proven to converge at a nearly $n^{-1/3}$ rate to the true weighting mechanism. We demonstrate that our estimators are asymptotically linear with variance converging to the nonparametric efficiency bound. Unlike doubly robust estimators, our procedures require neither derivation of the efficient influence function nor specification of the conditional outcome model. Our theoretical developments have broad implications for the construction of efficient inverse-probability-weighted estimators in large statistical models and a variety of problem settings. We assess the practical performance of our estimators in simulation studies and demonstrate use of our proposed methodology with data from a large-scale epidemiologic study.
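The pseudopopulation idea can be sketched in a few lines. The sketch below is illustrative and not from the article: it fits an ordinary parametric logistic propensity model by Newton-Raphson as a stand-in for the undersmoothed highly adaptive lasso, and applies Hajek-style inverse-probability weights to synthetic data with a known treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))

# Synthetic data with a known average treatment effect of 1.0.
p = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.25 * x[:, 1])))   # true propensity
a = rng.binomial(1, p)
y = 1.0 * a + x[:, 0] + rng.normal(size=n)

# Fit a logistic propensity model by Newton-Raphson (intercept + covariates).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (a - mu)                    # score of the log-likelihood
    hess = -(X.T * (mu * (1 - mu))) @ X      # Hessian of the log-likelihood
    beta -= np.linalg.solve(hess, grad)
pi_hat = 1 / (1 + np.exp(-X @ beta))

# Hajek-style IPW: weighting treated units by 1/pi and controls by 1/(1-pi)
# builds a pseudopopulation in which treatment is independent of x.
w1, w0 = a / pi_hat, (1 - a) / (1 - pi_hat)
ate_ipw = (w1 @ y) / w1.sum() - (w0 @ y) / w0.sum()
print(round(ate_ipw, 2))   # close to the true effect of 1.0
```

Swapping the logistic fit for an undersmoothed highly adaptive lasso fit, as the article proposes, would change only the lines that produce `pi_hat`.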


Subjects
Statistical Models, Probability, Computer Simulation, Selection Bias, Causality
3.
Biometrics ; 79(2): 569-581, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36305081

ABSTRACT

Unmeasured confounding is a key threat to reliable causal inference based on observational studies. Motivated by two powerful natural experiment devices, the instrumental variables and difference-in-differences, we propose a new method called instrumented difference-in-differences that explicitly leverages exogenous randomness in an exposure trend to estimate the average and conditional average treatment effect in the presence of unmeasured confounding. We develop the identification assumptions using the potential outcomes framework. We propose a Wald estimator and a class of multiply robust and efficient semiparametric estimators, with provable consistency and asymptotic normality. In addition, we extend the instrumented difference-in-differences to a two-sample design to facilitate investigations of delayed treatment effects and provide a measure of weak identification. We demonstrate our results in simulated and real datasets.
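In its simplest form, the Wald-type estimator divides the outcome difference-in-differences by the exposure difference-in-differences across instrument groups and time periods. A toy numerical sketch (data-generating process and names invented for illustration; `u` is an unmeasured confounder that never enters the estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

z = rng.binomial(1, 0.5, n)          # binary instrument for the exposure trend
t = rng.binomial(1, 0.5, n)          # time period (0 = before, 1 = after)
u = rng.normal(size=n)               # unmeasured confounder
a = rng.binomial(1, 0.2 + 0.4 * z * t + 0.2 * (u > 0))   # exposure uptake
y = 2.0 * a + u + 0.5 * t + rng.normal(size=n)           # true effect = 2.0

def did(v):
    """Difference-in-differences of the mean of v across z and t."""
    m = lambda zi, ti: v[(z == zi) & (t == ti)].mean()
    return (m(1, 1) - m(1, 0)) - (m(0, 1) - m(0, 0))

wald = did(y) / did(a)               # outcome DiD over exposure DiD
print(round(wald, 1))                # close to the true effect of 2.0
```

The confounder's contribution cancels in both numerator and denominator, which is why the ratio recovers the effect without measuring `u`.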


Subjects
Causality
4.
Biometrics ; 79(2): 601-603, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36314073

ABSTRACT

We thank all the discussants for their careful reading and insightful comments. In our rejoinder, we extend the discussion of how the assumptions of instrumented difference-in-differences (iDID) compare to the assumptions of the standard instrumental variable method. We also make additional comments on how iDID relates to the fuzzy DID. We highlight future research directions to enhance the utility of iDID, including extensions to adjust for covariate shift in the two-sample iDID design, and generalization of iDID to multiple time points and a multi-valued instrumental variable for DID.

5.
Stat Med ; 42(18): 3067-3092, 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37315949

ABSTRACT

Existing statistical methods can estimate a policy, or a mapping from covariates to decisions, which can then instruct decision makers (eg, whether to administer hypotension treatment based on covariates blood pressure and heart rate). There is great interest in using such data-driven policies in healthcare. However, it is often important to explain to the healthcare provider, and to the patient, how a new policy differs from the current standard of care. This end is facilitated if one can pinpoint the aspects of the policy (ie, the parameters for blood pressure and heart rate) that change when moving from the standard of care to the new, suggested policy. To this end, we adapt ideas from Trust Region Policy Optimization (TRPO). In our work, however, unlike in TRPO, the difference between the suggested policy and standard of care is required to be sparse, aiding with interpretability. This yields "relative sparsity," where, as a function of a tuning parameter, $\lambda$, we can approximately control the number of parameters in our suggested policy that differ from their counterparts in the standard of care (eg, heart rate only). We propose a criterion for selecting $\lambda$, perform simulations, and illustrate our method with a real, observational healthcare dataset, deriving a policy that is easy to explain in the context of the current standard of care. Our work promotes the adoption of data-driven decision aids, which have great potential to improve health outcomes.
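The role of the tuning parameter can be illustrated with a toy relative-sparsity penalty. The quadratic "value" below is a hypothetical stand-in for the policy-value objective in the paper, and all numbers are invented; the point is only that a larger $\lambda$ leaves fewer policy parameters different from the standard of care.

```python
import numpy as np

beta0 = np.array([0.0, 1.0, -0.5])       # standard-of-care policy (hypothetical)
beta_star = np.array([0.1, 1.8, -0.5])   # unpenalized optimum (hypothetical)

def suggested_policy(lam, steps=2000, lr=0.01):
    """Proximal gradient ascent on V(b) - lam * ||b - beta0||_1,
    with the toy value surrogate V(b) = -0.5 * ||b - beta_star||^2."""
    b = beta0.copy()
    for _ in range(steps):
        b = b + lr * (beta_star - b)     # gradient step on the value surrogate
        d = b - beta0                    # soft-threshold the *difference* from
        b = beta0 + np.sign(d) * np.maximum(np.abs(d) - lr * lam, 0.0)
    return b

# Larger lam -> fewer coordinates differ from the standard of care.
for lam in (0.0, 0.5, 2.0):
    diffs = int(np.sum(~np.isclose(suggested_policy(lam), beta0, atol=1e-6)))
    print(lam, diffs)   # 2, then 1, then 0 differing parameters
```

Because the L1 penalty is applied to the difference from `beta0` rather than to the coefficients themselves, coordinates that the data cannot justify changing snap back exactly to their standard-of-care values.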


Subjects
Clinical Decision-Making, Delivery of Health Care, Humans
6.
Stat Med ; 42(21): 3838-3859, 2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37345519

ABSTRACT

Unmeasured confounding is a major obstacle to reliable causal inference based on observational studies. Instrumented difference-in-differences (iDiD), a novel idea connecting instrumental variable and standard DiD, ameliorates the above issue by explicitly leveraging exogenous randomness in an exposure trend. In this article, we utilize the above idea of iDiD, and propose a novel group sequential testing method that provides valid inference even in the presence of unmeasured confounders. At each time point, we estimate the average or conditional average treatment effect under the iDiD setting using the data accumulated up to that time point, and test the significance of the treatment effect. We derive the joint distribution of the test statistics under the null using the asymptotic properties of M-estimation, and the group sequential boundaries are obtained using $\alpha$-spending functions. The performance of our proposed approach is evaluated on both synthetic data and the Clinformatics Data Mart Database (OptumInsight, Eden Prairie, MN) to examine the association between rofecoxib and acute myocardial infarction, and our method detects a significant adverse effect of rofecoxib well before the time when it was finally withdrawn from the market.
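The boundary construction can be sketched by Monte Carlo under the canonical independent-increments null distribution of sequential test statistics, with an O'Brien-Fleming-type spending function. This is a generic illustration of alpha-spending, not the article's M-estimation-based derivation, and the four equally spaced looks are invented.

```python
import math
import numpy as np

alpha = 0.05
looks = np.array([0.25, 0.5, 0.75, 1.0])   # information fractions at each look
z_half = 1.959964                          # Phi^{-1}(1 - alpha/2)
phi = lambda v: 0.5 * (1 + math.erf(v / math.sqrt(2)))
spend = lambda t: 2 - 2 * phi(z_half / math.sqrt(t))   # cumulative alpha spent

# Simulate null test-statistic paths with independent increments.
rng = np.random.default_rng(0)
inc = rng.normal(size=(400_000, 4)) * np.sqrt(np.diff(looks, prepend=0.0))
z = np.cumsum(inc, axis=1) / np.sqrt(looks)            # statistic at each look

# At each look, pick the boundary so the newly spent alpha matches the
# probability of a first crossing at that look.
bounds, crossed, spent = [], np.zeros(len(z), dtype=bool), 0.0
for k, t in enumerate(looks):
    add = spend(t) - spent
    vals = np.abs(z[~crossed, k])
    b = float(np.quantile(vals, 1 - add * len(z) / len(vals)))
    bounds.append(b)
    crossed |= np.abs(z[:, k]) >= b
    spent = spend(t)

print([round(b, 2) for b in bounds])   # boundaries shrink toward ~2.0
```

Early looks get very strict boundaries, so almost all of the 0.05 type-I error is saved for the final analysis, which is what makes early stopping for a strong signal (as in the rofecoxib example) possible without inflating the error rate.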


Subjects
Bias, Statistics as Topic, Humans, Myocardial Infarction, Safety-Based Drug Withdrawals
7.
Stat Med ; 42(15): 2661-2691, 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37037602

ABSTRACT

Existing methods for estimating the mean outcome under a given sequential treatment rule often rely on intention-to-treat analyses, which estimate the effect of following a certain treatment rule regardless of compliance behavior of patients. There are two major concerns with intention-to-treat analyses: (1) the estimated effects are often biased toward the null effect; (2) the results are not generalizable and reproducible due to the potentially differential compliance behavior. These are particularly problematic in settings with a high level of non-compliance, such as substance use disorder studies. Our work is motivated by the Adaptive Treatment for Alcohol and Cocaine Dependence study (ENGAGE), which is a multi-stage trial that aimed to construct optimal treatment strategies to engage patients in therapy. Due to the relatively low level of compliance in this trial, intention-to-treat analyses essentially estimate the effect of being randomized to a certain treatment, instead of the actual effect of the treatment. We obviate this challenge by defining the target parameter as the mean outcome under a dynamic treatment regime conditional on a potential compliance stratum. We propose a flexible non-parametric Bayesian approach based on principal stratification, which consists of a Gaussian copula model for the joint distribution of the potential compliances, and a Dirichlet process mixture model for the treatment sequence specific outcomes. We conduct extensive simulation studies which highlight the utility of our approach in the context of multi-stage randomized trials. We show robustness of our estimator to non-linear and non-Gaussian settings as well.


Subjects
Decision Making, Patient Compliance, Humans, Bayes Theorem, Computer Simulation, Treatment Outcome
8.
Biometrics ; 78(4): 1503-1514, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34086345

ABSTRACT

An adaptive treatment length strategy is a sequential, stage-wise treatment strategy in which a subject's treatment begins at baseline and one chooses to stop or continue treatment at each stage, provided the subject has been continuously treated. The effects of treatment are assumed to be cumulative and, therefore, the effect of treatment length on a clinical endpoint, measured at the end of the study, is of primary scientific interest. At the same time, adverse treatment-terminating events may occur during the course of treatment that require treatment be stopped immediately. Because the presence of a treatment-terminating event may be strongly associated with the study outcome, the treatment-terminating event is informative. In observational studies, decisions to stop or continue treatment depend on covariate history that confounds the relationship between treatment length and outcome. We propose a new risk-set weighted estimator of the mean potential outcome under the condition that time-dependent covariates update at a set of common landmarks. We show that our proposed estimator is asymptotically linear given mild assumptions and correctly specified working models. Specifically, we study the theoretical properties of our estimator when the nuisance parameters are modeled using either parametric or semiparametric methods. The finite-sample performance of the proposed estimator is evaluated through simulation studies and demonstrated by application to the Enhanced Suppression of the Platelet Receptor IIb/IIIa with Integrilin Therapy (ESPRIT) infusion trial data.


Subjects
Statistical Models, Computer Simulation, Treatment Outcome
9.
Stat Med ; 41(9): 1688-1708, 2022 Apr 30.
Article in English | MEDLINE | ID: mdl-35124836

ABSTRACT

Sequential, multiple assignment, randomized trials (SMARTs) compare sequences of treatment decision rules called dynamic treatment regimes (DTRs). In particular, the Adaptive Treatment for Alcohol and Cocaine Dependence (ENGAGE) SMART aimed to determine the best DTRs for patients with a substance use disorder. While many authors have focused on a single pairwise comparison, addressing the main goal involves comparisons of >2 DTRs. For complex comparisons, there is a paucity of methods for binary outcomes. We fill this gap by extending the multiple comparisons with the best (MCB) methodology to the Bayesian binary outcome setting. The set of best DTRs is constructed based on simultaneous credible intervals. A substantial challenge for power analysis is the correlation between outcome estimators for distinct DTRs embedded in SMARTs due to overlapping subjects. We address this using Robins' G-computation formula to take a weighted average of parameter draws obtained via simulation from the parameter posteriors. We use non-informative priors and work with the exact distribution of parameters, avoiding unnecessary normality assumptions and specification of the correlation matrix of DTR outcome summary statistics. We conduct simulation studies for both the construction of a set of optimal DTRs using the Bayesian MCB procedure and the sample size calculation for two common SMART designs. We illustrate our method on the ENGAGE SMART. The R package SMARTbayesR for power calculations is freely available on the Comprehensive R Archive Network (CRAN) repository. An RShiny app is available at https://wilart.shinyapps.io/shinysmartbayesr/.


Subjects
Research Design, Bayes Theorem, Computer Simulation, Humans, Sample Size
10.
J R Stat Soc Series B Stat Methodol ; 84(2): 382-413, 2022 Apr.
Article in English | MEDLINE | ID: mdl-36147733

ABSTRACT

Effect modification occurs when the effect of the treatment on an outcome varies according to the level of other covariates and often has important implications in decision-making. When there are tens or hundreds of covariates, it becomes necessary to use the observed data to select a simpler model for effect modification and then make valid statistical inference. We propose a two-stage procedure to solve this problem. First, we use Robinson's transformation to decouple the nuisance parameters from the treatment effect of interest and use machine learning algorithms to estimate the nuisance parameters. Next, after plugging in the estimates of the nuisance parameters, we use the lasso to choose a low-complexity model for effect modification. Compared to a full model consisting of all the covariates, the selected model is much more interpretable. Compared to the univariate subgroup analyses, the selected model greatly reduces the number of false discoveries. We show that the conditional selective inference for the selected model is asymptotically valid given the rate assumptions in classical semiparametric regression. Extensive simulation studies are conducted to verify the asymptotic results and an epidemiological application is used to demonstrate the method.
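The two stages can be sketched on synthetic data. To stay short, this illustration cheats in stage 1 by plugging in the true nuisance functions (the paper uses cross-fitted machine learning estimates) and replaces the lasso in stage 2 with ordinary least squares; every name and the data-generating process are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
x = rng.normal(size=(n, 3))
e = 1 / (1 + np.exp(-x[:, 0]))        # true propensity e(x)
a = rng.binomial(1, e)
tau = 1.0 + 0.5 * x[:, 1]             # effect modified by the 2nd covariate only
m = x[:, 0] + e * tau                 # true conditional mean E[Y | x]
y = m + (a - e) * tau + rng.normal(size=n)

# Stage 1 (Robinson's transformation): residualize y and a on x, which
# decouples the nuisance functions from the effect of interest.
ry, ra = y - m, a - e

# Stage 2: regress ry on ra * (1, x); the coefficients estimate the
# effect-modification model tau(x). The paper applies the lasso here to
# select a sparse tau(x), followed by selective inference.
Z = ra[:, None] * np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(Z, ry, rcond=None)
print(np.round(coef, 1))   # close to [1, 0, 0.5, 0]
```

With hundreds of covariates, the lasso in stage 2 is what delivers the low-complexity, interpretable model; OLS above is just the simplest way to see the decoupling at work.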

11.
Biostatistics ; 21(3): 432-448, 2020 Jul 1.
Article in English | MEDLINE | ID: mdl-30380020

ABSTRACT

Sequential, multiple assignment, randomized trial (SMART) designs have become increasingly popular in the field of precision medicine by providing a means for comparing more than two sequences of treatments tailored to the individual patient, i.e., dynamic treatment regimes (DTRs). The construction of evidence-based DTRs promises a replacement for ad hoc one-size-fits-all decisions pervasive in patient care. However, there are substantial statistical challenges in sizing SMART designs due to the correlation structure between the DTRs embedded in the design (EDTR). Since a primary goal of SMARTs is the construction of an optimal EDTR, investigators are interested in sizing SMARTs based on the ability to screen out EDTRs inferior to the optimal EDTR by a given amount, which cannot be done using existing methods. In this article, we fill this gap by developing a rigorous power analysis framework that leverages the multiple comparisons with the best methodology. Our method employs Monte Carlo simulation to compute the number of individuals to enroll in an arbitrary SMART. We evaluate our method through extensive simulation studies. We illustrate our method by retrospectively computing the power in the Extending Treatment Effectiveness of Naltrexone (EXTEND) trial. An R package implementing our methodology is available to download from the Comprehensive R Archive Network.
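Simulation-based sizing reduces to a loop: pick a candidate sample size, simulate many trials, estimate the probability the procedure succeeds, and grow the size until that probability reaches the target. The sketch below is a deliberately minimal stand-in that powers a one-sided two-sample z-test rather than a full SMART with multiple comparisons with the best; the effect size and search grid are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_power(n, delta=0.4, n_sims=4000):
    """Monte Carlo power of a one-sided two-sample z-test with per-arm
    size n (unit-variance outcomes, alpha = 0.05)."""
    a = rng.normal(0.0, 1.0, size=(n_sims, n))
    b = rng.normal(delta, 1.0, size=(n_sims, n))
    z = (b.mean(axis=1) - a.mean(axis=1)) / np.sqrt(2.0 / n)
    return float(np.mean(z > 1.645))

# Increase the per-arm sample size until simulated power reaches 80%.
n = 10
while mc_power(n) < 0.80:
    n += 10
print(n)
```

In the article's setting the inner simulation would instead generate full SMART trajectories and check that the true best EDTR survives the screening step, but the outer search over sample sizes has exactly this shape.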


Subjects
Biomedical Research, Statistical Models, Randomized Controlled Trials as Topic, Research Design, Biomedical Research/methods, Biomedical Research/standards, Humans, Monte Carlo Method, Naltrexone/pharmacology, Outcome Assessment (Health Care)/methods, Randomized Controlled Trials as Topic/methods, Randomized Controlled Trials as Topic/standards, Research Design/standards, Sample Size
12.
J Pediatr ; 232: 192-199.e2, 2021 May.
Article in English | MEDLINE | ID: mdl-33421424

ABSTRACT

OBJECTIVE: To develop a novel predictive model using primarily clinical history factors and compare performance to the widely used Rochester Low Risk (RLR) model. STUDY DESIGN: In this cross-sectional study, we identified infants brought to one pediatric emergency department from January 2014 to December 2016. We included infants age 0-90 days, with temperature ≥38°C, and documented gestational age and illness duration. The primary outcome was bacterial infection. We used 10 predictors to develop regression and ensemble machine learning models, which we trained and tested using 10-fold cross-validation. We compared areas under the curve (AUCs), sensitivities, and specificities of the RLR, regression, and ensemble models. RESULTS: Of 877 infants, 67 had a bacterial infection (7.6%). The AUCs of the RLR, regression, and ensemble models were 0.776 (95% CI 0.746, 0.807), 0.945 (0.913, 0.977), and 0.956 (0.935, 0.975), respectively. Using a bacterial infection risk threshold of 0.01, the sensitivity and specificity of the regression model were 94.6% (87.4%, 100%) and 74.5% (62.4%, 85.4%), compared with 95.5% (87.5%, 99.1%) and 59.6% (56.2%, 63.0%) using the RLR model. CONCLUSIONS: Compared with the RLR model, sensitivities of the novel predictive models were similar whereas AUCs and specificities were significantly greater. If externally validated, these models, by producing an individualized bacterial infection risk estimate, may offer a targeted approach to young febrile infants that is noninvasive and inexpensive.


Subjects
Bacterial Infections/diagnosis, Clinical Decision Rules, Fever/microbiology, Medical History Taking/methods, Bacterial Infections/complications, Cross-Sectional Studies, Hospital Emergency Service, Female, Humans, Infant, Newborn Infant, Linear Models, Logistic Models, Machine Learning, Male, Retrospective Studies, Risk Assessment, Sensitivity and Specificity
13.
BJU Int ; 127(6): 645-653, 2021 Jun.
Article in English | MEDLINE | ID: mdl-32936977

ABSTRACT

OBJECTIVE: To conduct a multi-institutional validation of a high-fidelity, perfused, inanimate simulation platform for robot-assisted partial nephrectomy (RAPN) using incorporated clinically relevant objective metrics of simulation (CROMS), applying modern validity standards. MATERIALS AND METHODS: Using a combination of three-dimensional (3D) printing and hydrogel casting, a RAPN model was developed from the computed tomography scan of a patient with a 4.2-cm, upper-pole renal tumour (RENAL nephrometry score 7×). 3D-printed casts designed from the patient's imaging were used to fabricate and register hydrogel (polyvinyl alcohol) components of the kidney, including the vascular and pelvicalyceal systems. After mechanical and anatomical verification of the kidney phantom, it was surrounded by other relevant hydrogel organs and placed in a laparoscopic trainer. Twenty-seven novice and 16 expert urologists, categorized according to caseload, from five academic institutions completed the simulation. CROMS, operative complications, and objective performance ratings (Global Evaluative Assessment of Robotic Skills [GEARS]) were compared between groups using Wilcoxon rank-sum (continuous variables) and parametric chi-squared (categorical variables) tests. Pearson and point-biserial correlation coefficients were used to correlate GEARS scores to each CROMS variable. Post-simulation questionnaires were used to obtain subjective ratings of realism and training effectiveness. RESULTS: Expert ratings demonstrated the model's superiority to other procedural simulations in replicating procedural steps, bleeding, and tissue texture and appearance. A significant difference between groups was demonstrated in CROMS [console time (P < 0.001), warm ischaemia time (P < 0.001), estimated blood loss (P < 0.001)] and GEARS (P < 0.001). Six major intra-operative complications occurred only in novice simulations. GEARS scores correlated strongly with the CROMS. CONCLUSIONS: This perfused, procedural model offers an unprecedented realistic simulation platform, which incorporates objective, clinically relevant and procedure-specific performance metrics.


Subjects
Benchmarking, Computer Simulation, Kidney Neoplasms/surgery, Nephrectomy/methods, Robotic Surgical Procedures, Female, Humans, Male
14.
BJU Int ; 125(2): 322-332, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31677325

ABSTRACT

OBJECTIVES: To incorporate and validate clinically relevant performance metrics of simulation (CRPMS) into a hydrogel model for nerve-sparing robot-assisted radical prostatectomy (NS-RARP). MATERIALS AND METHODS: Anatomically accurate models of the human pelvis, bladder, prostate, urethra, neurovascular bundle (NVB) and relevant adjacent structures were created from patient MRI by injecting polyvinyl alcohol (PVA) hydrogels into three-dimensionally printed injection molds. The following steps of NS-RARP were simulated: bladder neck dissection; seminal vesicle mobilization; NVB dissection; and urethrovesical anastomosis (UVA). Five experts (caseload >500) and nine novices (caseload <50) completed the simulation. Force applied to the NVB during the dissection was quantified by a novel tension wire sensor system fabricated into the NVB. Post-simulation margin status (assessed by induction of a chemiluminescent reaction with fluorescent dye mixed into the prostate PVA) and UVA watertightness (via a standard 180-mL leak test) were also assessed. Objective scoring, using Global Evaluative Assessment of Robotic Skills (GEARS) and Robotic Anastomosis Competency Evaluation (RACE), was performed by two blinded surgeons. GEARS scores were correlated with forces applied to the NVB, and RACE scores were correlated with UVA leak rates. RESULTS: The expert group achieved faster task-specific times for nerve-sparing (P = 0.007) and superior surgical margin results (P = 0.011). Nerve forces applied were significantly lower for the expert group with regard to maximum force (P = 0.011), average force (P = 0.011), peak frequency (P = 0.027) and total energy (P = 0.003). Higher force sensitivity (a subcategory of the GEARS score) and total GEARS score correlated with lower nerve forces (total energy in joules) applied to the NVB during the simulation, with correlation coefficients of r = -0.66 (P = 0.019) and r = -0.87 (P < 0.001), respectively. Both total and force-sensitivity GEARS scores were significantly higher in the expert group than in the novice group (P = 0.003). UVA leak rate correlated strongly with total RACE score (r = -0.86, P < 0.001). Mean RACE scores were also significantly different between novices and experts (P = 0.003). CONCLUSION: We present a realistic, feedback-driven, full-immersion simulation platform for the development and evaluation of surgical skills pertinent to NS-RARP. The correlation of validated objective metrics (GEARS and RACE) with our CRPMS suggests their application as a novel method for real-time assessment and feedback during robotic surgery training. Further work is required to assess the ability to predict live surgical outcomes.


Subjects
Three-Dimensional Printing, Prostate/anatomy & histology, Prostatectomy/education, Robotic Surgical Procedures/education, Simulation Training, Computer-Assisted Surgery/education, Surgical Anastomosis/standards, Benchmarking, Clinical Competence, Computer Simulation, Feasibility Studies, Humans, Hydrogels, Internship and Residency, Male, Anatomic Models, Prostatectomy/standards, Reproducibility of Results, Robotic Surgical Procedures/standards, Task Performance and Analysis
16.
Am J Epidemiol ; 185(12): 1233-1239, 2017 Jun 15.
Article in English | MEDLINE | ID: mdl-28338946

ABSTRACT

Instrumental variable (IV) methods provide unbiased treatment effect estimation in the presence of unmeasured confounders under certain assumptions. To provide valid estimates of treatment effect, treatment effect confounders that are associated with the IV (IV-confounders) must be included in the analysis, and not including observations with missing values may lead to bias. Missing covariate data are particularly problematic when the probability that a value is missing is related to the value itself, which is known as nonignorable missingness. In such cases, imputation-based methods are biased. Using health-care provider preference as an IV, we propose a 2-step procedure to estimate a valid treatment effect in the presence of baseline variables with nonignorable missing values. First, the provider preference IV value is estimated by performing a complete-case analysis using a random-effects model that includes IV-confounders. Second, the treatment effect is estimated using a 2-stage least squares IV approach that excludes IV-confounders with missing values. Simulation results are presented, and the method is applied to an analysis comparing the effects of sulfonylureas versus metformin on body mass index, where the baseline variables body mass index and glycosylated hemoglobin have missing values. Our result supports the association of sulfonylureas with weight gain.
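The second step is standard two-stage least squares. A minimal sketch with a continuous toy instrument (in the article the instrument is the estimated provider preference from step 1; here the data-generating process is invented, with `u` an unmeasured confounder):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                        # instrument (toy stand-in for
                                              # estimated provider preference)
u = rng.normal(size=n)                        # unmeasured confounder
a = 0.8 * z + u + rng.normal(size=n)          # treatment
y = 1.5 * a + 2.0 * u + rng.normal(size=n)    # outcome; true effect = 1.5

# Naive OLS of y on a is confounded by u.
Xa = np.column_stack([np.ones(n), a])
ols = np.linalg.lstsq(Xa, y, rcond=None)[0][1]

# 2SLS: stage 1 regresses a on z; stage 2 regresses y on the fitted values.
Xz = np.column_stack([np.ones(n), z])
a_hat = Xz @ np.linalg.lstsq(Xz, a, rcond=None)[0]
Xh = np.column_stack([np.ones(n), a_hat])
tsls = np.linalg.lstsq(Xh, y, rcond=None)[0][1]

print(round(ols, 1), round(tsls, 1))   # OLS biased upward; 2SLS close to 1.5
```

The stage-1 fitted values retain only the variation in treatment driven by the instrument, which is why the confounded variation drops out in stage 2.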


Subjects
Bias, Epidemiologic Confounding Factors, Statistical Data Interpretation, Outcome Assessment (Health Care)/methods, Research Design, Body Mass Index, Computer Simulation, Glycated Hemoglobin/drug effects, Humans, Hypoglycemic Agents/pharmacology, Least-Squares Analysis, Metformin/pharmacology, Sulfonylurea Compounds/pharmacology, Weight Gain/drug effects
17.
Biostatistics ; 17(1): 135-48, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26243172

ABSTRACT

A dynamic treatment regime (DTR) is a treatment design that seeks to accommodate patient heterogeneity in response to treatment. DTRs can be operationalized by a sequence of decision rules that map patient information to treatment options at specific decision points. The sequential, multiple assignment, randomized trial (SMART) is a trial design that was developed specifically for the purpose of obtaining data that informs the construction of good (i.e. efficacious) decision rules. One of the scientific questions motivating a SMART concerns the comparison of multiple DTRs that are embedded in the design. Typical approaches for identifying the best DTRs involve all possible comparisons between DTRs that are embedded in a SMART, at the cost of greatly reduced power as the number of embedded DTRs (EDTRs) increases. Here, we propose a method that will enable investigators to use SMART study data more efficiently to identify the set that contains the most efficacious EDTRs. Our method ensures that the true best EDTRs are included in this set with at least a given probability. Simulation results are presented to evaluate the proposed method, and the Extending Treatment Effectiveness of Naltrexone SMART study data are analyzed to illustrate its application.


Subjects
Statistical Data Interpretation, Randomized Controlled Trials as Topic, Research Design, Humans
18.
Biometrics ; 73(4): 1111-1122, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28273693

ABSTRACT

Methodological advancements, including propensity score methods, have resulted in improved unbiased estimation of treatment effects from observational data. Traditionally, a "throw in the kitchen sink" approach has been used to select covariates for inclusion in the propensity score, but recent work shows that including unnecessary covariates can affect both the bias and the statistical efficiency of propensity score estimators. In particular, the inclusion of covariates that affect exposure but not the outcome can inflate standard errors without improving bias, while the inclusion of covariates associated with the outcome but unrelated to exposure can improve precision. We propose the outcome-adaptive lasso for selecting appropriate covariates for inclusion in propensity score models to account for confounding bias while maintaining statistical efficiency. The proposed approach can perform variable selection in the presence of a large number of spurious covariates, that is, covariates unrelated to outcome or exposure. We present theoretical and simulation results indicating that the outcome-adaptive lasso selects the propensity score model that includes all true confounders and predictors of outcome, while excluding other covariates. We illustrate covariate selection using the outcome-adaptive lasso, including comparison to alternative approaches, using simulated data and a survey of patients using opioid therapy to manage chronic pain.
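The core construction can be sketched as an adaptive lasso on the propensity model whose coordinate-wise penalties are inversely proportional to outcome-regression coefficients. This is a simplified illustration, not the paper's implementation: the tuning value `lam` is arbitrary here (the paper gives a selection criterion), and with a fixed `lam` this toy may or may not also retain the pure outcome predictor, which the paper's tuning is designed to keep.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 4))
# Column 0: confounder; column 1: outcome-only predictor;
# column 2: exposure-only covariate; column 3: pure noise.
a = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] + x[:, 2]))))
y = a + x[:, 0] + x[:, 1] + rng.normal(size=n)

# Penalty weights from an outcome regression: covariates unrelated to the
# outcome get tiny coefficients and hence huge penalties.
gamma = np.linalg.lstsq(np.column_stack([np.ones(n), a, x]), y, rcond=None)[0][2:]
pen = 1.0 / np.abs(gamma) ** 2

# Adaptive lasso logistic propensity model via proximal gradient (ISTA).
lam = 0.01            # arbitrary tuning value for this illustration
lr = 1.0 / n
b0, b = 0.0, np.zeros(4)
for _ in range(3000):
    mu = 1 / (1 + np.exp(-(b0 + x @ b)))
    b0 += lr * np.sum(a - mu)
    b += lr * (x.T @ (a - mu))
    b = np.sign(b) * np.maximum(np.abs(b) - lam * pen, 0.0)  # weighted prox

print(np.round(b, 2))   # confounder kept; exposure-only and noise columns zeroed
```

The exposure-only covariate has a strong association with treatment, yet its outcome-driven penalty is large enough to force it out of the model, which is exactly the precision argument the paper makes.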


Subjects
Statistical Models, Outcome Assessment (Health Care), Propensity Score, Opioid Analgesics/therapeutic use, Bias, Biometry, Chronic Pain/drug therapy, Computer Simulation, Humans
19.
Pharmacoepidemiol Drug Saf ; 26(4): 357-367, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28239929

ABSTRACT

PURPOSE: Instrumental variable (IV) methods are used increasingly in pharmacoepidemiology to address unmeasured confounding. In this tutorial, we review the steps used in IV analyses and the underlying assumptions. We also present methods to assess the validity of those assumptions and describe sensitivity analyses to examine the effects of possible violations of those assumptions. METHODS: Observational studies based on regression or propensity score analyses rely on the untestable assumption that there are no unmeasured confounders. IV analysis is a tool that removes the bias caused by unmeasured confounding provided that key assumptions (some of which are also untestable) are met. RESULTS: When instruments are valid, IV methods provide unbiased treatment effect estimation in the presence of unmeasured confounders. However, the standard error of the IV estimate is higher than that of non-IV estimates, e.g., from regression and propensity score methods. Sensitivity analyses provide insight into the robustness of the IV results to plausible degrees of violation of the assumptions. CONCLUSIONS: IV analysis should be used cautiously because the validity of IV estimates relies on assumptions that are, in general, untestable and difficult to be certain about. Thus, assessing the sensitivity of the estimate to violations of these assumptions is important and can better inform the causal inferences that can be drawn from the study.


Subjects
Epidemiologic Confounding Factors, Epidemiologic Research Design, Pharmacoepidemiology/methods, Bias, Humans, Propensity Score
20.
Stat Med ; 35(13): 2221-34, 2016 Jun 15.
Article in English | MEDLINE | ID: mdl-26750518

ABSTRACT

Q-learning is a regression-based approach that uses longitudinal data to construct dynamic treatment regimes, which are sequences of decision rules that use patient information to inform future treatment decisions. An optimal dynamic treatment regime is composed of a sequence of decision rules that indicate how to optimally individualize treatment using the patients' baseline and time-varying characteristics to optimize the final outcome. Constructing optimal dynamic regimes using Q-learning depends heavily on the assumption that regression models at each decision point are correctly specified; yet model checking in the context of Q-learning has been largely overlooked in the current literature. In this article, we show that residual plots obtained from standard Q-learning models may fail to adequately check the quality of the model fit. We present a modified Q-learning procedure that accommodates residual analyses using standard tools. We present simulation studies showing the advantage of the proposed modification over standard Q-learning. We illustrate this new Q-learning approach using data collected from a sequential multiple assignment randomized trial of patients with schizophrenia.
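For context, standard Q-learning on a two-stage trial is a pair of regressions run backward in time. This sketch is the standard procedure the article critiques, not its residual-diagnostic modification, and the data-generating process is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000

# Synthetic two-stage trial: s1, s2 are states, a1, a2 are randomized
# treatments coded +/-1, y is the final outcome (larger is better).
s1 = rng.normal(size=n)
a1 = rng.choice([-1.0, 1.0], n)
s2 = 0.5 * s1 + 0.3 * a1 + rng.normal(size=n)
a2 = rng.choice([-1.0, 1.0], n)
y = s2 + a2 * (0.7 * s2 - 0.2) + 0.5 * a1 * s1 + rng.normal(size=n)

ols = lambda X, t: np.linalg.lstsq(X, t, rcond=None)[0]

# Stage 2: fit the Q-function Q2(history, a2) by least squares.
X2 = np.column_stack([np.ones(n), s1, a1, a1 * s1, s2, a2, a2 * s2])
b2 = ols(X2, y)

# Pseudo-outcome: Q2 evaluated at the optimal a2 = sign(contrast).
contrast = b2[5] + b2[6] * s2
v2 = X2 @ b2 - contrast * a2 + np.abs(contrast)

# Stage 1: regress the pseudo-outcome on stage-1 history and a1.
X1 = np.column_stack([np.ones(n), s1, a1, a1 * s1])
b1 = ols(X1, v2)

# Estimated rules: a2 = sign(b2[5] + b2[6]*s2), a1 = sign(b1[2] + b1[3]*s1).
print(np.round(b2[5:], 2), np.round(b1[2:], 2))
```

The article's point is that residual plots from the stage-1 regression above are hard to interpret because the pseudo-outcome contains an absolute-value term; its modified procedure restructures the fit so standard residual diagnostics apply.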


Subjects
Antipsychotic Agents/administration & dosage, Statistical Data Interpretation, Schizophrenia/drug therapy, Antipsychotic Agents/therapeutic use, Decision Support Techniques, Drug Administration Schedule, Humans, Linear Models, Statistical Models, Regression Analysis, Treatment Outcome