1.
Med Decis Making ; : 272989X241264287, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39082512

ABSTRACT

BACKGROUND: The expected value of sample information (EVSI) measures the expected benefits that could be obtained by collecting additional data. Estimating EVSI using the traditional nested Monte Carlo method is computationally expensive, but the recently developed Gaussian approximation (GA) approach can efficiently estimate EVSI across different sample sizes. However, the conventional GA may result in biased EVSI estimates if the decision models are highly nonlinear. This bias may lead to suboptimal study designs when GA is used to optimize the value of different studies. Therefore, we extend the conventional GA approach to improve its performance for nonlinear decision models. METHODS: Our method provides accurate EVSI estimates by approximating the conditional expectation of the benefit in 2 steps. First, a Taylor series approximation is applied to estimate the conditional expectation of the benefit as a function of the conditional moments of the parameters of interest, using a spline fitted to the samples of the parameters and the corresponding benefits. Next, the conditional moments of the parameters are approximated by the conventional GA and Fisher information. The proposed approach is applied to several data collection exercises involving non-Gaussian parameters and nonlinear decision models. Its performance is compared with the nested Monte Carlo method, the conventional GA approach, and the nonparametric regression-based method for EVSI calculation. RESULTS: The proposed approach provides accurate EVSI estimates across different sample sizes when the parameters of interest are non-Gaussian and the decision models are nonlinear. The computational cost of the proposed method is similar to that of other novel methods. CONCLUSIONS: The proposed approach can estimate EVSI across sample sizes accurately and efficiently, which may support researchers in determining an economically optimal study design using EVSI. HIGHLIGHTS: The Gaussian approximation method efficiently estimates the expected value of sample information (EVSI) for clinical trials with varying sample sizes, but it may introduce bias when health economic models have a nonlinear structure. We introduce a spline-based Taylor series approximation method and combine it with the original Gaussian approximation to correct the nonlinearity-induced bias in EVSI estimation. Our approach can provide more precise EVSI estimates for complex decision models without sacrificing computational efficiency, which can enhance resource allocation strategies from a cost-effectiveness perspective.
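For orientation, the sketch below shows the brute-force nested Monte Carlo EVSI estimator that the abstract uses as its computational benchmark; it is not the authors' spline-based Gaussian approximation method. The prior, data-collection model, and net-benefit function are hypothetical placeholders chosen so the two-level structure (outer loop over simulated future datasets, inner loop for the posterior expectation of net benefit) is easy to follow.

```python
# Minimal nested Monte Carlo EVSI sketch (illustrative only; this is the
# expensive baseline that the Gaussian-approximation approach avoids).
# The prior, data model, and net-benefit function below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def net_benefit(theta):
    """Net benefit of decisions d=0 (usual care) and d=1 (new treatment)."""
    return np.stack([np.zeros_like(theta), 20_000 * theta - 5_000], axis=-1)

def evsi_nested_mc(n_outer=2_000, n_inner=2_000, n_trial=100):
    prior_mean, prior_sd, obs_sd = 0.3, 0.2, 1.0
    # Expected value of the best decision under current information.
    theta = rng.normal(prior_mean, prior_sd, n_inner)
    ev_current = net_benefit(theta).mean(axis=0).max()
    # Outer loop: simulate a future trial; inner loop: posterior expectation.
    ev_sample = 0.0
    for _ in range(n_outer):
        theta_true = rng.normal(prior_mean, prior_sd)
        xbar = rng.normal(theta_true, obs_sd / np.sqrt(n_trial))
        # Conjugate normal update for the posterior of theta given xbar.
        post_var = 1 / (1 / prior_sd**2 + n_trial / obs_sd**2)
        post_mean = post_var * (prior_mean / prior_sd**2 + n_trial * xbar / obs_sd**2)
        theta_post = rng.normal(post_mean, np.sqrt(post_var), n_inner)
        ev_sample += net_benefit(theta_post).mean(axis=0).max()
    return ev_sample / n_outer - ev_current

print(f"EVSI estimate: {evsi_nested_mc():.0f}")
```

Repeating the inner expectation at every outer draw, and repeating the whole calculation for each candidate sample size, is what makes the nested estimator expensive and motivates the approximations discussed in the abstract.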

2.
Pediatrics ; 154(1)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38832441

ABSTRACT

To identify priority areas to improve the design, conduct, and reporting of pediatric clinical trials, the international expert network Standards for Research (StaR) in Child Health was assembled and published the first 6 Standards in Pediatrics in 2012. Following a recent review summarizing the 247 publications by StaR Child Health authors that highlight research practices that add value and reduce research "waste," the current review assesses progress in key child health trial methods areas: consent and recruitment, containing risk of bias, roles of data monitoring committees, appropriate sample size calculations, outcome selection and measurement, and age groups for pediatric trials. Although meaningful change has occurred within the child health research ecosystem, measurable progress is still disappointingly slow. In this context, we identify and review emerging trends that will advance the agenda of increased clinical usefulness of pediatric trials, including patient and public engagement, Bayesian statistical approaches, adaptive designs, and platform trials. We explore how implementation science approaches could be applied to effect measurable improvements in the design, conduct, and reporting of child health research.


Subjects
Child Health, Clinical Trials as Topic, Research Design, Humans, Child, Research Design/standards, Clinical Trials as Topic/standards, Pediatrics/standards, Bayes Theorem
3.
Crit Care Explor ; 6(6): e1098, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38836575

ABSTRACT

OBJECTIVES: To estimate the expected value of undertaking a future randomized controlled trial of thresholds used to initiate invasive ventilation compared with usual care in hypoxemic respiratory failure. PERSPECTIVE: Publicly funded healthcare payer. SETTING: Critical care units capable of providing invasive ventilation and unconstrained by resource limitations during usual (nonpandemic) practice. METHODS: We performed a model-based cost-utility estimation with individual-level simulation and value-of-information analysis focused on adults admitted to critical care and receiving noninvasive oxygen. In the primary scenario, we compared hypothetical threshold A to usual care, where threshold A resulted in increased use of invasive ventilation and improved survival compared with usual care. In the secondary scenario, we compared hypothetical threshold B to usual care, where threshold B resulted in decreased use of invasive ventilation and similar survival compared with usual care. We assumed a willingness-to-pay of 100,000 Canadian dollars (CAD) per quality-adjusted life year. RESULTS: In the primary scenario, threshold A was cost-effective compared with usual care due to improved hospital survival (78.1% vs. 75.1%), despite more use of invasive ventilation (62% vs. 30%) and higher lifetime costs (86,900 vs. 75,500 CAD). In the secondary scenario, threshold B was cost-effective compared with usual care due to similar survival (74.5% vs. 74.6%) with less use of invasive ventilation (20.2% vs. 27.6%) and lower lifetime costs (71,700 vs. 74,700 CAD). Value-of-information analysis showed that the expected value to Canadian society over 10 years of a 400-person randomized trial comparing a threshold for invasive ventilation to usual care in hypoxemic respiratory failure was 1.35 billion CAD or more in both scenarios. CONCLUSIONS: It would be highly valuable to society to identify thresholds that, in comparison to usual care, either increase survival or reduce invasive ventilation without reducing survival.
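The cost-effectiveness judgments in this abstract rest on comparing strategies at a willingness-to-pay threshold. A minimal sketch of the underlying net-monetary-benefit calculation follows; the QALY and cost figures are hypothetical placeholders, not outputs of the study's individual-level simulation model.

```python
# Net-monetary-benefit comparison at a willingness-to-pay threshold, the basic
# rule used to judge cost-effectiveness. Values below are illustrative only.
WTP = 100_000  # CAD per quality-adjusted life-year

strategies = {
    # name: (lifetime QALYs, lifetime cost in CAD) -- hypothetical figures
    "usual care":  (9.80, 75_500),
    "threshold A": (10.05, 86_900),
}

nmb = {name: qaly * WTP - cost for name, (qaly, cost) in strategies.items()}
best = max(nmb, key=nmb.get)
print(nmb, "->", best, "is cost-effective at this threshold")
```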


Subjects
Cost-Benefit Analysis, Randomized Controlled Trials as Topic, Artificial Respiration, Respiratory Insufficiency, Humans, Artificial Respiration/economics, Cost-Benefit Analysis/methods, Respiratory Insufficiency/therapy, Respiratory Insufficiency/economics, Respiratory Insufficiency/mortality, Quality-Adjusted Life Years, Canada, Intensive Care Units/economics, Adult
4.
Clin Trials ; : 17407745241251812, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38771021

ABSTRACT

BACKGROUND/AIMS: Multi-arm, multi-stage trials frequently include a standard-care arm to which all interventions are compared. This may increase costs and hinder comparisons among the experimental arms. Furthermore, the standard of care may not be evident, particularly when there is large variation in standard practice. Thus, we aimed to develop an adaptive clinical trial that drops ineffective interventions following an interim analysis before selecting the best intervention at the final stage, without requiring a standard-care arm. METHODS: We used Bayesian methods to develop a multi-arm, two-stage adaptive trial and evaluated two different methods for ranking interventions, the probability that each intervention is optimal (Pbest) and the surface under the cumulative ranking curve (SUCRA), at both the interim and final analysis. The proposed trial design determines the maximum sample size for each intervention using the Average Length Criterion. The interim analysis takes place at approximately half the pre-specified maximum sample size and aims to drop interventions for futility if either Pbest or the SUCRA is below a pre-specified threshold. The final analysis compares all remaining interventions at the maximum sample size to conclude superiority based on either Pbest or the SUCRA. The two ranking methods were compared across 12 scenarios that vary the number of interventions and the assumed differences between the interventions. The thresholds for futility and superiority were chosen to control type 1 error, and the predictive power and expected sample size were then evaluated across scenarios. A trial comparing three interventions that aim to reduce anxiety for children undergoing laceration repair in the emergency department, the Anxiolysis for Laceration Repair in Children Trial (ALICE), was then designed. RESULTS: As the number of interventions increases, the SUCRA results in higher predictive power compared with Pbest. Using Pbest results in a lower expected sample size when there is an effective intervention. Using the Average Length Criterion, the ALICE trial has a maximum sample size of 100 patients per arm. This sample size results in 86% and 85% predictive power using Pbest and the SUCRA, respectively. Thus, we chose Pbest as the ranking method for the ALICE trial. CONCLUSION: Bayesian ranking methods can be used in multi-arm, multi-stage trials with no clear control intervention. When more interventions are included, the SUCRA results in higher power than Pbest. Future work should consider whether other ranking methods may also be relevant for clinical trial design.
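As a concrete illustration of the two ranking quantities compared in the abstract, the sketch below computes Pbest and SUCRA from a matrix of posterior draws. The draws, the number of arms, and the convention that larger values are better are all hypothetical; this is not the ALICE trial's analysis code.

```python
# Sketch: computing Pbest and SUCRA from posterior draws for K interventions.
# `draws` is an (n_samples, K) array of posterior samples of each arm's effect,
# where larger values are assumed to be better. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
draws = rng.normal(loc=[0.0, 0.3, 0.5], scale=0.4, size=(10_000, 3))

K = draws.shape[1]
# Pbest: posterior probability that each intervention has the largest effect.
p_best = np.bincount(draws.argmax(axis=1), minlength=K) / draws.shape[0]

# SUCRA: built from each arm's rank distribution (rank 1 = best per draw).
ranks = (-draws).argsort(axis=1).argsort(axis=1) + 1        # per-draw ranks
rank_probs = np.stack([(ranks == r).mean(axis=0) for r in range(1, K + 1)])
cum = rank_probs.cumsum(axis=0)[:-1]                        # P(rank <= r), r < K
sucra = cum.mean(axis=0)                                    # average over r = 1..K-1

print("Pbest:", p_best.round(3))
print("SUCRA:", sucra.round(3))
```

With only a few arms the two summaries usually agree on the top intervention; they diverge more readily as the number of arms grows, which is consistent with the scenario comparisons described in the abstract.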

5.
Contemp Clin Trials ; 142: 107560, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38735571

ABSTRACT

BACKGROUND: Adaptive trials usually require simulations to determine values for design parameters, demonstrate error rates, and establish the sample size. We designed a Bayesian adaptive trial comparing ventilation strategies for patients with acute hypoxemic respiratory failure using simulations. The complexity of the analysis would usually require computationally expensive Markov Chain Monte Carlo methods, but this barrier to simulation was overcome using the Integrated Nested Laplace Approximations (INLA) algorithm to provide fast, approximate Bayesian inference. METHODS: We simulated two-arm Bayesian adaptive trials with equal randomization that stratified participants into two disease severity states. The analysis used a proportional odds model, fit using INLA. Trials were stopped based on pre-specified posterior probability thresholds for superiority or futility, separately for each state. We calculated the type I error and power across 64 scenarios that varied the probability thresholds and the initial minimum sample size before commencing adaptive analyses. Two designs that maintained a type I error below 5%, a power above 80%, and a feasible mean sample size were evaluated further to determine the optimal design. RESULTS: Power generally increased as the initial sample size and the futility threshold increased. The chosen design had an initial recruitment of 500, a superiority threshold of 0.9925, and a futility threshold of 0.95. It maintained high power and was likely to reach a conclusion before exceeding a feasible sample size. CONCLUSIONS: We designed a Bayesian adaptive trial to evaluate novel ventilation strategies, using the INLA algorithm to efficiently evaluate a wide range of designs through simulation.
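The simulation loop behind such operating-characteristic calculations can be sketched as follows. A conjugate beta-binomial model on a binary outcome stands in for the proportional odds model fitted with INLA in the abstract, futility stopping is omitted for brevity, and the thresholds, block size, and sample sizes are illustrative rather than the chosen design.

```python
# Sketch of estimating type I error and power for a two-arm Bayesian design
# with a posterior-probability superiority stopping rule. Conjugate
# beta-binomial inference replaces the INLA-fitted model of the abstract.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(2)

def trial(p_ctrl, p_trt, n_init=250, step=50, n_max=500, sup_threshold=0.99):
    y_c = rng.binomial(n_init, p_ctrl)
    y_t = rng.binomial(n_init, p_trt)
    n = n_init
    while True:
        # P(treatment better) under independent Beta(1, 1) priors, by simulation.
        post_c = beta.rvs(1 + y_c, 1 + n - y_c, size=4_000, random_state=rng)
        post_t = beta.rvs(1 + y_t, 1 + n - y_t, size=4_000, random_state=rng)
        if (post_t > post_c).mean() > sup_threshold:
            return True                      # stop early for superiority
        if n >= n_max:
            return False                     # reached the maximum sample size
        y_c += rng.binomial(step, p_ctrl)    # recruit the next block per arm
        y_t += rng.binomial(step, p_trt)
        n += step

sims = 500
type1 = np.mean([trial(0.30, 0.30) for _ in range(sims)])
power = np.mean([trial(0.30, 0.42) for _ in range(sims)])
print(f"type I error ~ {type1:.3f}, power ~ {power:.3f}")
```

Running this loop across a grid of thresholds and initial sample sizes, as the abstract does for 64 scenarios, is what makes a fast posterior approximation so valuable.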


Subjects
Algorithms, Bayes Theorem, Artificial Respiration, Respiratory Insufficiency, Humans, Artificial Respiration/methods, Respiratory Insufficiency/therapy, Research Design, Sample Size, Adaptive Clinical Trials as Topic/methods, Markov Chains, Computer Simulation, Acute Disease, Monte Carlo Method
6.
Clin Trials ; : 17407745241247334, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38752434

ABSTRACT

BACKGROUND: Clinical trials are increasingly using Bayesian methods for their design and analysis. Inference in Bayesian trials typically uses simulation-based approaches such as Markov Chain Monte Carlo methods. Markov Chain Monte Carlo has a high computational cost and can be complex to implement. The Integrated Nested Laplace Approximations algorithm provides approximate Bayesian inference without the need for computationally complex simulations, making it more efficient than Markov Chain Monte Carlo. The practical properties of Integrated Nested Laplace Approximations compared with Markov Chain Monte Carlo have not been considered for clinical trials. Using data from a published clinical trial, we aim to investigate whether Integrated Nested Laplace Approximations is a feasible and accurate alternative to Markov Chain Monte Carlo and to provide practical guidance for trialists interested in Bayesian trial design. METHODS: Data from an international Bayesian multi-platform adaptive trial that compared therapeutic-dose anticoagulation with heparin to usual care in non-critically ill patients hospitalized for COVID-19 were used to fit Bayesian hierarchical generalized mixed models. Integrated Nested Laplace Approximations was compared with two Markov Chain Monte Carlo algorithms, implemented in the software JAGS and Stan, using packages available in the statistical software R. Seven outcomes were analysed: organ-support-free days (an ordinal outcome), five binary outcomes related to survival and length of hospital stay, and a time-to-event outcome. The posterior distributions for the treatment and sex effects and the variances for the hierarchical effects of age, site, and time period were obtained. We summarized these posteriors by calculating the means, standard deviations, and 95% equal-tailed credible intervals and presenting the results graphically. The computation time for each algorithm was recorded. RESULTS: The average overlap of the 95% credible intervals for the treatment and sex effects estimated using Integrated Nested Laplace Approximations was 96% and 97.6%, respectively, compared with Stan. The graphical posterior densities for these effects overlapped for all three algorithms. The posterior means for the variances of the hierarchical effects of age, site, and time period estimated using Integrated Nested Laplace Approximations are within the 95% credible intervals estimated using Markov Chain Monte Carlo, but the average overlap of the credible intervals is lower: 77%, 85.6%, and 91.3%, respectively, for Integrated Nested Laplace Approximations compared with Stan. Integrated Nested Laplace Approximations and Stan were easily implemented in clear, well-established packages in R, while JAGS required direct specification of the model. Integrated Nested Laplace Approximations was between 85 and 269 times faster than Stan and between 26 and 1852 times faster than JAGS. CONCLUSION: Integrated Nested Laplace Approximations could reduce the computational complexity of Bayesian analysis in clinical trials, as it is easy to implement in R, substantially faster than Markov Chain Monte Carlo methods implemented in JAGS and Stan, and provides near-identical approximations to the posterior distributions for the treatment effect. Integrated Nested Laplace Approximations was less accurate when estimating the posterior distribution for the variance of hierarchical effects, particularly for the proportional odds model, and future work should determine whether the algorithm can be adjusted to improve this estimation.
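The abstract summarizes agreement between algorithms as the average overlap of 95% credible intervals. The exact overlap metric is not defined here, so the small sketch below assumes one plausible definition (length of the intersection divided by the mean interval length) and uses hypothetical interval endpoints rather than values from the trial.

```python
# One plausible way to quantify the overlap of two 95% credible intervals,
# e.g. an INLA estimate vs. an MCMC estimate of the same effect. The metric
# definition and the endpoints below are assumptions for illustration.
def interval_overlap(a, b):
    lo = max(a[0], b[0])
    hi = min(a[1], b[1])
    intersection = max(0.0, hi - lo)
    mean_length = ((a[1] - a[0]) + (b[1] - b[0])) / 2
    return intersection / mean_length

inla_ci = (-0.12, 0.35)   # hypothetical 95% credible interval from INLA
stan_ci = (-0.10, 0.38)   # hypothetical 95% credible interval from Stan
print(f"overlap ~ {interval_overlap(inla_ci, stan_ci):.1%}")
```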

8.
Pharmacoeconomics ; 42(5): 479-486, 2024 May.
Article in English | MEDLINE | ID: mdl-38583100

ABSTRACT

Value of Information (VOI) analyses calculate the economic value that could be generated by obtaining further information to reduce uncertainty in a health economic decision model. VOI has been suggested as a tool for research prioritisation and trial design as it can highlight economically valuable avenues for future research. Recent methodological advances have made it increasingly feasible to use VOI in practice for research; however, there are critical differences between the VOI approach and the standard methods used to design research studies such as clinical trials. We aimed to highlight key differences between the research design approach based on VOI and standard clinical trial design methods, in particular the importance of considering the full decision context. We present two hypothetical examples to demonstrate that VOI methods are only accurate when (1) all feasible comparators are included in the decision model when designing research, and (2) all comparators are retained in the decision model once the data have been collected and a final treatment recommendation is made. Omitting comparators from either the design or analysis phase of research when using VOI methods can lead to incorrect trial designs and/or treatment recommendations. Overall, we conclude that incorrectly specifying the health economic model by ignoring potential comparators can lead to misleading VOI results and potentially waste scarce research resources.
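A toy calculation makes the abstract's point concrete: value-of-information quantities computed from a net-benefit matrix change when a feasible comparator is dropped from the decision model. The sketch below uses the expected value of perfect information (EVPI) and entirely hypothetical net-benefit samples; it is not either of the paper's worked examples.

```python
# Toy illustration: EVPI from a probabilistic-sensitivity-analysis net-benefit
# matrix changes when a comparator is omitted. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
nb = np.column_stack([
    rng.normal(10_000, 3_000, n),   # comparator A
    rng.normal(10_500, 3_000, n),   # comparator B
    rng.normal(10_300, 3_000, n),   # comparator C
])

def evpi(net_benefit):
    # E[max over decisions] minus max over decisions of E[net benefit].
    return net_benefit.max(axis=1).mean() - net_benefit.mean(axis=0).max()

print(f"EVPI, all comparators:      {evpi(nb):,.0f}")
print(f"EVPI, comparator C omitted: {evpi(nb[:, :2]):,.0f}")
```

The same mechanism applies to EVSI-based trial design: leaving a relevant comparator out of either the design-stage or analysis-stage model shifts the apparent value of further research.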


Subjects
Clinical Trials as Topic, Decision Support Techniques, Economic Models, Research Design, Humans, Clinical Trials as Topic/economics, Clinical Trials as Topic/methods, Cost-Benefit Analysis, Uncertainty, Decision Making
9.
Pharmacoeconomics ; 42(7): 783-795, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38607519

ABSTRACT

BACKGROUND AND OBJECTIVE: Decision models for health technology assessment (HTA) are largely submitted to HTA agencies using commercial software, which has known limitations. The use of the open-source programming language R has been suggested because of its efficiency, transparency, reproducibility, and ability to consider complex analyses. However, its use in HTA remains limited. This qualitative study aimed to explore the main reasons for this slow uptake of R in HTA and identify tangible facilitators. METHODS: We undertook two semi-structured focus group discussions with 24 key stakeholders from government agencies, consultancy, pharmaceutical companies, and academia. Two 1.5-hour discussions reflected on barriers identified in a previous study and highlighted additional barriers. Discussions were recorded and semi-transcribed, and data were organized and summarized into key themes. RESULTS: Human resources constraints were identified as a key barrier, including a lack of training, prioritization and collaboration, and resistance to change. Another key barrier was the lack of acceptance, or clear guidance, around submissions in R by HTA agencies. Participants also highlighted a lack of communication around accepted packages and decision model structures, and between HTA agencies on standard decision modeling structures. CONCLUSIONS: There is a need for standardization, which can facilitate decision model sharing, coding homogeneity, and improved country adaptations. The creation of training materials and tailored workshops was identified as a key short-term facilitator. Increased communication and engagement of stakeholders could also facilitate the use of R by identifying needs and opportunities, encouraging HTA agencies to address structural barriers, and increasing incentives to use R.


Subjects
Decision Support Techniques, Focus Groups, Biomedical Technology Assessment, Humans, Software, Reproducibility of Results, Decision Making, Qualitative Research
10.
BMC Med Res Methodol ; 24(1): 32, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38341552

ABSTRACT

BACKGROUND: When studying the association between treatment and a clinical outcome, a parametric multivariable model of the conditional outcome expectation is often used to adjust for covariates. The treatment coefficient of the outcome model targets a conditional treatment effect. Model-based standardization is typically applied to average the model predictions over the target covariate distribution and generate a covariate-adjusted estimate of the marginal treatment effect. METHODS: The standard approach to model-based standardization involves maximum-likelihood estimation and use of the non-parametric bootstrap. We introduce a novel, general-purpose, model-based standardization method based on multiple imputation that is easily applicable when the outcome model is a generalized linear model. We term our proposed approach multiple imputation marginalization (MIM). MIM consists of two main stages: the generation of synthetic datasets and their analysis. MIM accommodates a Bayesian statistical framework, which naturally allows for the principled propagation of uncertainty, integrates the analysis into a probabilistic framework, and allows for the incorporation of prior evidence. RESULTS: We conduct a simulation study to benchmark the finite-sample performance of MIM in conjunction with a parametric outcome model. The simulations provide proof of principle in scenarios with binary outcomes, continuous-valued covariates, a logistic outcome model, and the marginal log odds ratio as the target effect measure. When parametric modeling assumptions hold, MIM yields unbiased estimation in the target covariate distribution, valid coverage rates, and precision and efficiency similar to the standard approach to model-based standardization. CONCLUSION: We demonstrate that multiple imputation can be used to marginalize over a target covariate distribution, providing appropriate inference with a correctly specified parametric outcome model and offering statistical performance comparable to that of the standard approach to model-based standardization.
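For context, the sketch below implements the standard model-based standardization (g-computation) comparator described in the abstract, not the authors' multiple imputation marginalization method: fit a parametric logistic outcome model, predict for every individual under each treatment assignment, average the predictions over the target covariate distribution, and contrast the averages on the log-odds scale. The simulated data, effect sizes, and the use of statsmodels are placeholders for illustration.

```python
# Standard model-based standardization (g-computation) for a marginal log
# odds ratio, with a correctly specified logistic outcome model. Data are
# simulated; in practice a bootstrap would be added for interval estimation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 5_000
x = rng.normal(size=n)                        # continuous covariate
t = rng.binomial(1, 0.5, size=n)              # randomized treatment indicator
logit = -0.5 + 0.8 * t + 0.6 * x              # true conditional model
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit the parametric outcome model by maximum likelihood.
X = np.column_stack([np.ones(n), t, x])
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# Standardize: predict for everyone under t=1 and t=0, then average over the
# target covariate distribution (here, the sample itself).
X1 = np.column_stack([np.ones(n), np.ones(n), x])
X0 = np.column_stack([np.ones(n), np.zeros(n), x])
p1, p0 = fit.predict(X1).mean(), fit.predict(X0).mean()
marginal_log_or = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))
print(f"marginal log odds ratio ~ {marginal_log_or:.3f}")
```

The marginal log odds ratio produced this way is generally closer to the null than the conditional coefficient 0.8 because of the non-collapsibility of the odds ratio, which is why a marginalization step is needed at all.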


Subjects
Statistical Models, Humans, Bayes Theorem, Linear Models, Computer Simulation, Logistic Models, Reference Standards