Results 1 - 20 of 94
1.
Biom J ; 66(3): e2300237, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38637319

ABSTRACT

In this paper, we consider online multiple testing with familywise error rate (FWER) control, where the probability of committing at least one type I error remains under control while a possibly infinite sequence of hypotheses is tested over time. Currently, adaptive-discard (ADDIS) procedures seem to be the most promising online procedures with FWER control in terms of power. Our main contribution is a uniform improvement of the ADDIS principle and thus of all ADDIS procedures: the methods we propose reject at least as many hypotheses as ADDIS procedures, and in some cases more, while maintaining FWER control. In addition, we show that there is no other FWER-controlling procedure that enlarges the event of rejecting any hypothesis. Finally, we apply the new principle to derive uniform improvements of ADDIS-Spending and the ADDIS-Graph.
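
As a rough illustration of the kind of procedure being improved here, the sketch below implements a simplified ADDIS-Spending-style rule: large p-values are discarded, moderately large ones advance the spending index, and each hypothesis is tested at an adaptively discounted level. The constant thresholds tau and lam, the spending sequence gamma, and the exact indexing are illustrative simplifications; the precise rule and its FWER guarantee are as given by Tian and Ramdas and in the paper.

```python
import numpy as np

def addis_spending_sketch(pvals, alpha=0.05, tau=0.8, lam=0.25):
    """Illustrative ADDIS-style alpha-spending for online FWER control.

    Assumptions of this sketch: constant thresholds tau and lam and a
    fixed spending sequence gamma; p-values >= tau are discarded, and
    only p-values in (lam, tau) advance the spending index."""
    def gamma(k):                      # nonnegative sequence summing to 1
        return 6.0 / (np.pi ** 2 * (k + 1) ** 2)

    rejected, k = [], 0
    for t, p in enumerate(pvals):
        if p >= tau:                   # discarded: spends no budget
            continue
        if p <= alpha * (tau - lam) * gamma(k):
            rejected.append(t)         # reject H_t at the current level
        if p > lam:                    # conservative null: advance index
            k += 1
    return rejected

print(addis_spending_sketch([1e-5, 0.9, 0.003, 0.4, 0.0004]))  # [0, 2, 4]
```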


Subjects
Models, Statistical; Probability
2.
Biometrics ; 79(4): 2806-2810, 2023 12.
Article in English | MEDLINE | ID: mdl-37459202

ABSTRACT

This comment builds on the familywise expected loss (FWEL) framework suggested by Maurer, Bretz, and Xun in 2022. By representing the populationwise error rate (PWER) as an FWEL, we illustrate how the FWEL framework can be extended to clinical trials with multiple, overlapping populations and how the PWER can be generalized to more general losses. The comment also addresses the question of how to deal with midtrial changes in the posttrial risks and related losses that are caused by data-driven decisions. Focusing on multiarm trials with the possibility of dropping treatments midtrial, we suggest switching from control of the unconditional expected loss to control of the conditional expected loss that relates to the actual risks and is conditional on the sample event that causes the change in the risks. The problem and the solution suggested here are also motivated by a sequence of independent trials for a hitherto incurable disease, a sequence that ends when an efficacious treatment is found. No multiplicity adjustment is applied in this case, and we show how this can be justified by considering the changing out-trial risks and by controlling conditional type I error rates and losses.


Subjects
Research Design; Data Interpretation, Statistical
3.
BMC Musculoskelet Disord ; 24(1): 221, 2023 Mar 23.
Article in English | MEDLINE | ID: mdl-36959595

ABSTRACT

INTRODUCTION: Hip and knee osteoarthritis are associated with functional limitations, pain, and restrictions in quality of life and the ability to work. Furthermore, with growing prevalence, osteoarthritis causes increasing direct and indirect costs. Guidelines recommend exercise therapy and education as primary treatment strategies. Available treatment options based on physical activity promotion and lifestyle change are often insufficiently provided and used. In addition, the quality of current exercise programmes often does not meet the changing care needs of older people with comorbidities, and exercise adherence beyond personal physiotherapy is a challenge. The main objective of this study is to investigate the short- and long-term (cost-)effectiveness of the SmArt-E programme in people with hip and/or knee osteoarthritis in terms of pain and physical functioning compared to usual care. METHODS: This study is designed as a multicentre randomized controlled trial with a target sample size of 330 patients. The intervention is based on the e-Exercise intervention from the Netherlands, consists of a training and education programme, and is conducted as a blended care intervention over 12 months. We use an app to support independent training and the development of self-management skills. The primary and secondary hypotheses are that participants in the SmArt-E intervention will have less pain (numerical rating scale) and better physical functioning (Hip Disability and Osteoarthritis Outcome Score, Knee Injury and Osteoarthritis Outcome Score) compared to participants in the usual care group after 12 and 3 months. Other secondary outcomes are based on domains of the Osteoarthritis Research Society International (OARSI). The study will be accompanied by a process evaluation. DISCUSSION: After a positive evaluation, SmArt-E can be offered in usual care, flexibly addressing different care situations. The desired sustainability and the support of participants' behavioural change are initiated via the app through audio-visual contact with their physiotherapists. Furthermore, the app supports the repetition and consolidation of learned training and educational content. For people with osteoarthritis, this new form of care, once proven effective, can reduce the underuse and misuse of care and contribute to a reduction in direct and indirect costs. TRIAL REGISTRATION: German Clinical Trials Register, DRKS00028477. Registered on August 10, 2022.


Subjects
Osteoarthritis, Hip; Osteoarthritis, Knee; Aged; Humans; Exercise Therapy/methods; Multicenter Studies as Topic; Osteoarthritis, Knee/complications; Pain; Quality of Life; Randomized Controlled Trials as Topic; Smartphone; Treatment Outcome; Pragmatic Clinical Trials as Topic
4.
Pharm Stat ; 22(5): 836-845, 2023.
Article in English | MEDLINE | ID: mdl-37217198

ABSTRACT

Formal proof of efficacy of a drug requires that, in a prospective experiment, superiority over placebo, or superiority or at least non-inferiority to an established standard, is demonstrated. Traditionally, one primary endpoint is specified, but various diseases exist where treatment success needs to be based on the assessment of two primary endpoints. With co-primary endpoints, both need to be "significant" as a prerequisite to claim study success. Here, no adjustment of the study-wise type I error is needed, but the sample size is often increased to maintain the predefined power. Designs using an at-least-one concept have been proposed, where study success is claimed if superiority is demonstrated for at least one of the endpoints. This is sometimes also called the dual primary endpoint concept, and an appropriate adjustment of the study-wise type I error is required. This concept is not covered in the European Guideline on multiplicity, because study success can be claimed if one endpoint shows significant superiority despite a possible deterioration in the other. In line with Röhmel's strategy, we discuss an alternative approach including non-inferiority hypothesis testing that avoids obvious contradictions to proper decision-making. This approach leads back to the co-primary endpoint assessment and has the advantage that minimum requirements for the endpoints can be modeled flexibly for several practical needs. Our simulations show that, if the planning assumptions are correct, the proposed additional requirements improve interpretation with only a limited impact on power, that is, on sample size.
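
The decision rule discussed here can be pictured with a small sketch. The following is a hypothetical at-least-one rule under simple normal test statistics: success requires superiority on at least one endpoint (with a Bonferroni split of alpha) together with non-inferiority on the other, so that a win on one endpoint cannot be claimed while the other deteriorates. All names, the margins behind the z-statistics, and the Bonferroni split are assumptions for illustration, not the paper's exact procedure.

```python
from scipy.stats import norm

def at_least_one_with_noninferiority(z1, z2, z1_ni, z2_ni, alpha=0.025):
    """Hypothetical decision rule for two primary endpoints.

    z1, z2: superiority test statistics for endpoints 1 and 2;
    z1_ni, z2_ni: the corresponding non-inferiority test statistics
    (computed against prespecified margins)."""
    c_sup = norm.ppf(1 - alpha / 2)   # superiority, Bonferroni-split level
    c_ni = norm.ppf(1 - alpha)        # non-inferiority at the full level
    win1 = z1 > c_sup and z2_ni > c_ni  # endpoint 1 superior, 2 non-inferior
    win2 = z2 > c_sup and z1_ni > c_ni  # endpoint 2 superior, 1 non-inferior
    return win1 or win2

print(at_least_one_with_noninferiority(2.6, 1.1, 3.0, 2.2))  # True
```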


Subjects
Prospective Studies; Humans; Sample Size; Treatment Outcome
5.
Stat Med ; 41(25): 5033-5045, 2022 11 10.
Article in English | MEDLINE | ID: mdl-35979723

ABSTRACT

For indications where only unstable reference treatments are available and use of placebo is ethically justified, three-arm "gold standard" designs with an experimental, reference, and placebo arm are recommended for non-inferiority trials. In such designs, the demonstration of efficacy of the reference or experimental treatment is a requirement. They have the disadvantage that little can be concluded from the trial if the reference fails to be efficacious. To overcome this, we investigate novel single-stage, adaptive test strategies where non-inferiority is tested only if the reference shows sufficient efficacy and otherwise δ-superiority of the experimental treatment over placebo is tested. With a properly chosen superiority margin, δ-superiority indirectly shows non-inferiority. We optimize the sample size for several decision rules and find that the natural, data-driven test strategy, which tests non-inferiority if the reference's efficacy test is significant, leads to the smallest overall and placebo sample sizes. We prove that, under specific constraints on the sample sizes, this procedure controls the family-wise error rate. All optimal sample sizes are found to meet this constraint. We finally show how to account for a relevant placebo drop-out rate in an efficient way and apply the new test strategy to a real-life data set.
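
A minimal sketch of the decision flow of such a single-stage adaptive strategy, assuming standardized normal test statistics for the three comparisons; the sample-size constraints under which the paper proves family-wise error rate control are not reproduced here.

```python
from scipy.stats import norm

def gold_standard_strategy(z_ref_pla, z_exp_ref_ni, z_exp_pla_delta, alpha=0.025):
    """Hypothetical sketch of the natural, data-driven strategy:
    if the reference demonstrates efficacy over placebo, test
    non-inferiority of the experimental treatment versus the reference;
    otherwise fall back to testing delta-superiority of the
    experimental treatment over placebo."""
    c = norm.ppf(1 - alpha)
    if z_ref_pla > c:                      # reference is efficacious
        return "non-inferiority shown" if z_exp_ref_ni > c else "failure"
    return "delta-superiority shown" if z_exp_pla_delta > c else "failure"
```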


Subjects
Research Design; Humans; Sample Size
6.
Stat Med ; 41(5): 891-909, 2022 02 28.
Article in English | MEDLINE | ID: mdl-35075684

ABSTRACT

Major advances have been made in the use of machine learning techniques for disease diagnosis and prognosis based on complex and high-dimensional data. Despite all justified enthusiasm, overoptimistic assessments of predictive performance are still common in this area, and predictive models and medical devices based on such models should undergo a thorough evaluation before being implemented into clinical practice. In this work, we propose a multiple testing framework for (comparative) phase III diagnostic accuracy studies with sensitivity and specificity as co-primary endpoints. Our approach challenges the frequent recommendation to strictly separate model selection and evaluation, that is, to assess only a single diagnostic model in the evaluation study. We show that our parametric simultaneous test procedure asymptotically allows strong control of the family-wise error rate. A multiplicity correction is also available for point and interval estimates. Moreover, we demonstrate in an extensive simulation study that our multiple testing strategy on average leads to a better final diagnostic model and increased statistical power. To plan such studies, we propose a Bayesian approach to determine the optimal number of models to evaluate simultaneously. For this purpose, our algorithm optimizes the expected final model performance given previous (hold-out) data from the model development phase. We conclude that assessing multiple promising diagnostic models in the same evaluation study has several advantages when suitable adjustments for multiple comparisons are employed.
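
To make the idea concrete, here is a hedged sketch of evaluating m candidate models against fixed benchmarks with sensitivity and specificity as co-primary endpoints. It uses Wald-type z-statistics and a simple Bonferroni adjustment across models; the paper's parametric simultaneous procedure exploits the correlation structure and is less conservative. All benchmark values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def evaluate_models(tp, fn, tn, fp, se0=0.80, sp0=0.80, alpha=0.025):
    """Sketch: test m models for Se > se0 AND Sp > sp0 (co-primary),
    Bonferroni-adjusted across models. tp, fn, tn, fp are length-m
    arrays of confusion-matrix counts from the evaluation study;
    assumes estimated Se and Sp lie strictly between 0 and 1."""
    tp, fn, tn, fp = map(np.asarray, (tp, fn, tn, fp))
    m = len(tp)
    se_hat = tp / (tp + fn)
    sp_hat = tn / (tn + fp)
    z_se = (se_hat - se0) / np.sqrt(se_hat * (1 - se_hat) / (tp + fn))
    z_sp = (sp_hat - sp0) / np.sqrt(sp_hat * (1 - sp_hat) / (tn + fp))
    crit = norm.ppf(1 - alpha / m)        # Bonferroni across m models
    return (z_se > crit) & (z_sp > crit)  # co-primary: both must win

print(evaluate_models([180, 170], [20, 30], [175, 190], [25, 10]))
```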


Subjects
Algorithms; Machine Learning; Bayes Theorem; Humans; Prognosis; Sensitivity and Specificity
7.
BMC Med Res Methodol ; 22(1): 115, 2022 04 19.
Article in English | MEDLINE | ID: mdl-35439947

ABSTRACT

BACKGROUND: The sample size calculation in a confirmatory diagnostic accuracy study is performed for co-primary endpoints because sensitivity and specificity are considered simultaneously. The initial sample size calculation in an unpaired or paired diagnostic study is based on assumptions about, among other things, the prevalence of the disease and, in the paired design, the proportion of discordant test results between the experimental and the comparator test. The choice of the power for the individual endpoints affects the sample size and the overall power, and uncertain assumptions about the nuisance parameters can additionally affect the sample size. METHODS: We develop an optimal sample size calculation considering co-primary endpoints to avoid an overpowered study in the unpaired and paired designs. To adjust assumptions about the nuisance parameters during the study period, we introduce a blinded adaptive design for sample size re-estimation for the unpaired and the paired study design. A simulation study compares the adaptive design to the fixed design. For the paired design, the new approach is compared to an existing approach using an example study. RESULTS: Due to blinding, the adaptive design does not inflate the type I error rate. The adaptive design reaches the target power and re-estimates the nuisance parameters without any relevant bias. Compared to the existing approach, the proposed methods lead to a smaller sample size. CONCLUSIONS: We recommend the optimal sample size calculation and a blinded adaptive design for confirmatory diagnostic accuracy studies. They compensate for inefficiencies in the sample size calculation and help to reach the study aim.
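
A sketch of the flavor of such a calculation for the unpaired design, using the standard normal-approximation sample size for a single proportion and a simple split of beta across the two co-primary endpoints; the optimal calculation in the paper refines this to avoid overpowering. All default values are hypothetical.

```python
import math
from scipy.stats import norm

def n_one_proportion(p1, p0, alpha=0.025, beta=0.10):
    """Standard normal-approximation sample size for testing p > p0
    when the true value is p1 (one-sided level alpha, power 1 - beta)."""
    za, zb = norm.ppf(1 - alpha), norm.ppf(1 - beta)
    num = za * math.sqrt(p0 * (1 - p0)) + zb * math.sqrt(p1 * (1 - p1))
    return math.ceil((num / (p1 - p0)) ** 2)

def n_unpaired_coprimary(se1, se0, sp1, sp0, prev, alpha=0.025, beta=0.10):
    """Hypothetical sketch: size the diseased and non-diseased groups
    separately for sensitivity and specificity, then scale by the
    assumed prevalence; splitting beta across the two co-primary
    endpoints keeps the overall power near the target."""
    n_dis = n_one_proportion(se1, se0, alpha, beta / 2)
    n_nondis = n_one_proportion(sp1, sp0, alpha, beta / 2)
    return max(math.ceil(n_dis / prev), math.ceil(n_nondis / (1 - prev)))

print(n_unpaired_coprimary(0.90, 0.80, 0.92, 0.80, prev=0.3))
```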


Subjects
Models, Statistical; Research Design; Computer Simulation; Humans; Prevalence; Sample Size; Sensitivity and Specificity
8.
BMC Health Serv Res ; 21(1): 727, 2021 Jul 23.
Article in English | MEDLINE | ID: mdl-34301241

ABSTRACT

BACKGROUND: Studies have revealed the importance of assessing dementia care dyads, composed of persons with dementia and their primary informal caregivers, in a differentiated way and of tailoring support services to particular living and care circumstances. This study therefore aims, first, to identify classes of dementia care dyads that differ according to sociodemographic, care-related, and dementia-specific characteristics and, second, to compare these classes with regard to healthcare-related outcomes. METHODS: We used data from the cross-sectional German DemNet-D study (n = 551) and conducted a latent class analysis to identify different classes of dementia care dyads. In addition, we compared these classes with regard to the use of healthcare services, caregiver burden (BIZA-D), general health of the informal caregiver (EQ-VAS), and quality of life (QoL-AD) and social participation (SACA) of the person with dementia. Furthermore, we compared the stability of the home-based care arrangements. RESULTS: Six classes of dementia care dyads were identified, based on the best Bayesian information criterion (BIC), a significant likelihood ratio test (p < 0.001), high entropy (0.87), and substantive interpretability. The classes were labelled "adult child-parent relationship & younger informal caregiver", "adult child-parent relationship & middle-aged informal caregiver", "non-family relationship & younger informal caregiver", "couple & male informal caregiver of older age", "couple & female informal caregiver of older age", and "couple & younger informal caregiver". The classes showed significant differences in healthcare service use. Caregiver burden, quality of life of the person with dementia, and stability of the care arrangement also differed significantly between the classes. CONCLUSION: Based on a latent class analysis, this study indicates differences between classes of informal dementia care dyads. The findings may give direction for better tailoring of support services to particular circumstances to improve healthcare-related outcomes of persons with dementia and informal caregivers.
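
For readers unfamiliar with the method, the sketch below shows a minimal EM algorithm for a latent class model with binary indicators, together with the BIC used for choosing the number of classes. It is a generic illustration, not the model specification used in the DemNet-D analysis.

```python
import numpy as np
from scipy.special import logsumexp

def lca_em(X, k, n_iter=200, seed=0):
    """Minimal EM for a latent class model with binary indicators.
    X: (n, d) 0/1 matrix; k: number of classes.
    Returns class weights, response probabilities, and BIC."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                   # class weights
    theta = rng.uniform(0.25, 0.75, (k, d))    # P(indicator = 1 | class)
    for _ in range(n_iter):
        # E-step: responsibilities from Bernoulli log-likelihoods
        logp = np.log(pi) + (X[:, None, :] * np.log(theta) +
                             (1 - X[:, None, :]) * np.log(1 - theta)).sum(axis=2)
        ll = logsumexp(logp, axis=1)
        r = np.exp(logp - ll[:, None])
        # M-step: update weights and response probabilities
        pi = r.mean(axis=0)
        theta = np.clip(r.T @ X / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    bic = -2 * ll.sum() + (k - 1 + k * d) * np.log(n)
    return pi, theta, bic

# Choose the number of classes by refitting for k = 1..6
# and taking the smallest BIC.
```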


Subjects
Dementia; Quality of Life; Adult; Aged; Female; Humans; Male; Middle Aged; Bayes Theorem; Caregivers; Cross-Sectional Studies; Delivery of Health Care; Dementia/therapy; Latent Class Analysis
9.
Stat Med ; 39(30): 4551-4573, 2020 12 30.
Article in English | MEDLINE | ID: mdl-33105519

ABSTRACT

In late-stage drug development, the experimental drug is tested in a diverse study population within the relevant indication. In order to receive marketing authorization, robust evidence of therapeutic efficacy is crucial, requiring investigation of treatment effects in well-defined subgroups. Conventionally, consistency analyses in subgroups have been performed by means of interaction tests. However, the interaction test can only reject the null hypothesis of equivalence, not confirm consistency. Simulation studies suggest that the interaction test has low power but, depending on the sample size, can also be oversensitive; combined with the ill-posed null hypothesis, this leads to findings regardless of clinical relevance. To overcome these disadvantages in the setting of binary endpoints, we propose a consistency test based on the interval inclusion principle, which is able to reject heterogeneity and confirm consistency of subgroup-specific treatment effects while controlling the type I error. This homogeneity test is based on the deviation between the overall treatment effect and the subgroup-specific effects on the odds ratio scale and is compared with an equivalence test based on the ratio of the two subgroup-specific effects. The performance of these consistency tests is assessed in a simulation study. In addition, the consistency tests are outlined for relative risk regression. The proposed homogeneity test reaches sufficient power in realistic scenarios with small interactions. As expected, power decreases for unbalanced subgroups, lower sample sizes, and narrower margins. Severe interactions are covered by the null hypothesis and are more likely to be rejected the stronger they are.
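
A minimal sketch of the interval inclusion idea: consistency of a subgroup effect with the overall effect is confirmed when the confidence interval for their difference on the log-odds-ratio scale lies entirely within a prespecified margin. The margin and the independence approximation are illustrative assumptions; the paper's test handles the dependence between overall and subgroup estimates properly.

```python
import numpy as np
from scipy.stats import norm

def consistency_test(log_or_sub, se_sub, log_or_all, se_all,
                     margin=np.log(1.5), alpha=0.05):
    """Interval-inclusion sketch on the log-odds-ratio scale: confirm
    consistency if the (1 - alpha) CI for the deviation of the subgroup
    effect from the overall effect lies within (-margin, margin).
    Crudely assumes independent estimates, although the overall
    estimate contains the subgroup data."""
    diff = log_or_sub - log_or_all
    se = np.sqrt(se_sub ** 2 + se_all ** 2)   # independence approximation
    z = norm.ppf(1 - alpha / 2)
    lo, hi = diff - z * se, diff + z * se
    return -margin < lo and hi < margin       # consistency confirmed

print(consistency_test(0.55, 0.12, 0.50, 0.08))  # True
```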


Subjects
Logistic Models; Clinical Trials as Topic; Data Interpretation, Statistical; Humans; Odds Ratio; Sample Size
10.
Stat Med ; 38(28): 5350-5360, 2019 12 10.
Article in English | MEDLINE | ID: mdl-31621938

ABSTRACT

Considering a study design with two experimental treatments, a reference treatment, and a placebo, we extend a previous approach based on ratios of effects to a procedure for analyzing multiple ratios. The technical framework for constructing tests and compatible simultaneous confidence intervals is set up in a general manner. Besides a single-step procedure and its extension to a step-down procedure, an informative stepwise procedure in the spirit of our previous work is also developed. The latter is especially interesting because non-inferiority studies require informative confidence intervals that convey more than just non-inferiority at the prespecified margin. Results from a simulation study of the three methods are shown. We also argue that an extension to more than two experimental treatments is straightforward.
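
As background for the ratio-based analysis, the sketch below computes a Fieller-type confidence interval for the ratio of two normally distributed effect estimates. It covers a single ratio only; the simultaneous intervals and stepwise procedures of the paper additionally adjust for multiplicity.

```python
import math
from scipy.stats import norm

def fieller_ci(a, b, var_a, var_b, cov_ab=0.0, alpha=0.05):
    """Fieller-type confidence interval for the ratio a/b of two
    normally distributed estimates. Returns (lower, upper), or None
    when the denominator is not clearly nonzero (unbounded set)."""
    z2 = norm.ppf(1 - alpha / 2) ** 2
    # Roots of (b^2 - z^2 var_b) r^2 - 2 (ab - z^2 cov) r
    #          + (a^2 - z^2 var_a) = 0
    A = b * b - z2 * var_b
    B = -2.0 * (a * b - z2 * cov_ab)
    C = a * a - z2 * var_a
    disc = B * B - 4.0 * A * C
    if A <= 0 or disc < 0:
        return None
    r1 = (-B - math.sqrt(disc)) / (2.0 * A)
    r2 = (-B + math.sqrt(disc)) / (2.0 * A)
    return min(r1, r2), max(r1, r2)

print(fieller_ci(1.2, 1.0, 0.04, 0.03))  # CI for the effect ratio
```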


Subjects
Clinical Trials as Topic/statistics & numerical data; Confidence Intervals; Biostatistics; Computer Simulation; Data Interpretation, Statistical; Humans; Models, Statistical; Research Design/statistics & numerical data; Sample Size
11.
Biom J ; 61(1): 83-100, 2019 01.
Article in English | MEDLINE | ID: mdl-30203492

ABSTRACT

Characterizing an appropriate dose-response relationship and identifying the right dose in a clinical trial are two main goals of early drug development. MCP-Mod is one of the pioneering approaches developed within the last 10 years that combine modeling techniques with multiple comparison procedures to address these goals in clinical drug development. The MCP-Mod approach begins with a set of potential dose-response models, tests for a significant dose-response effect (proof of concept, PoC) using multiple linear contrast tests, and selects the "best" model among those with a significant contrast test. A disadvantage of the method is that the parameter values of the candidate models need to be fixed a priori for the contrast tests. This may lead to a loss in power and unreliable model selection. For this reason, several variations of the MCP-Mod approach and a hierarchical model selection approach have been suggested in which the parameter values need not be fixed in the proof-of-concept testing step and can be estimated after the model selection step. This paper provides a numerical comparison of the different MCP-Mod variants and the hierarchical model selection approach with regard to their ability to detect the dose-response trend, their potential to select the correct model, and their accuracy in estimating the dose-response shape and the minimum effective dose. Additionally, as one of the approaches is based on two-sided model comparisons only, we make it more consistent with the common goals of a PoC study by extending it to one-sided comparisons between the constant and the alternative candidate models in the proof-of-concept step.
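
A sketch of the MCP (multiple contrast test) step under equal group sizes and known variance: each candidate shape yields a standardized optimal contrast, and proof of concept is established if any contrast statistic exceeds the critical value. A Bonferroni adjustment stands in for the multivariate-t critical value used in actual MCP-Mod, and the candidate shapes are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def poc_contrast_tests(ybar, n_per_arm, sigma, candidate_means, alpha=0.025):
    """For each candidate dose-response shape (mean vector over the
    dose arms), build the optimal contrast and test for a positive
    trend; Bonferroni replaces the multivariate-t critical value."""
    ybar = np.asarray(ybar, float)
    crit = norm.ppf(1 - alpha / len(candidate_means))
    results = {}
    for name, mu in candidate_means.items():
        mu = np.asarray(mu, float)
        c = mu - mu.mean()                 # optimal contrast for equal n
        c /= np.sqrt((c ** 2).sum())       # standardize: ||c|| = 1
        t = (c @ ybar) / (sigma / np.sqrt(n_per_arm))
        results[name] = (t, t > crit)
    return results

# Hypothetical candidate shapes at doses 0, 1, 2, 4:
shapes = {"linear": [0, 1, 2, 4], "emax": [0, 0.6, 0.8, 0.95]}
print(poc_contrast_tests([0.1, 0.5, 0.9, 1.4], 30, 1.0, shapes))
```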


Subjects
Biometry/methods; Dose-Response Relationship, Drug; Models, Statistical
12.
Stat Med ; 37(29): 4507-4524, 2018 12 20.
Article in English | MEDLINE | ID: mdl-30191578

ABSTRACT

Adaptive survival trials are particularly important for enrichment designs in oncology and other life-threatening diseases. Current statistical methodology for adaptive survival trials provides type I error rate control only under restrictions. For instance, if stage-wise p-values based on increments of the log-rank test are used, then the information used for the interim decisions needs to be restricted to the primary survival endpoint. However, it is often desirable to base interim decisions also on correlated short-term endpoints like tumor response. Alternative statistical approaches based on a patient-wise splitting of the data require unnatural restrictions on the follow-up times and do not permit an early rejection of the primary null hypothesis to be accounted for efficiently. We therefore suggest new approaches that enable us to use discrete surrogate endpoints (like tumor response status) and to incorporate interim rejection boundaries. The new approaches are based on weighted Kaplan-Meier estimates and thereby have additional advantages: they permit us to account for nonproportional hazards and are robust against informative censoring based on the surrogate endpoint. We show that nonproportionality is an intrinsic and relevant issue in enrichment designs. Moreover, informative censoring based on the surrogate endpoint is likely because of withdrawals and treatment switches after insufficient treatment response. It is shown and illustrated how nonparametric tests based on weighted Kaplan-Meier estimates can be used in closed combination tests for adaptive enrichment designs such that type I error rate control is achieved and justified asymptotically.
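
As a building block, a plain Kaplan-Meier estimator is sketched below; the paper's tests compare weighted functionals of such curves between treatment arms rather than the raw curves themselves.

```python
import numpy as np

def kaplan_meier(time, event):
    """Plain Kaplan-Meier estimator of the survival function.
    time: event/censoring times; event: 1 = event, 0 = censored.
    Returns a list of (time, estimated survival probability)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    surv, s = [], 1.0
    for t in np.unique(time[event == 1]):
        at_risk = (time >= t).sum()                     # still under observation
        deaths = ((time == t) & (event == 1)).sum()     # events at time t
        s *= 1.0 - deaths / at_risk
        surv.append((t, s))
    return surv

print(kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1]))
```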


Subjects
Statistics, Nonparametric; Survival Analysis; Clinical Trials as Topic/methods; Endpoint Determination/methods; Humans; Kaplan-Meier Estimate; Models, Statistical
13.
Eur Arch Psychiatry Clin Neurosci ; 268(6): 611-619, 2018 Sep.
Article in English | MEDLINE | ID: mdl-28791485

ABSTRACT

In Germany, a regional social health insurance fund provides an integrated care program for patients with schizophrenia (IVS). Based on routine data from the social health insurance fund, this evaluation examined the effectiveness and cost-effectiveness of the IVS compared to standard care (control group, CG). The primary outcome was the reduction of psychiatric inpatient treatment (days in hospital); secondary outcomes were schizophrenia-related inpatient treatment, readmission rates, and costs. To reduce selection bias, propensity score matching was performed. The matched sample included 752 patients. The mean numbers of psychiatric and schizophrenia-related hospital days per quarter of patients receiving the IVS (2.3 ± 6.5 and 1.7 ± 5.0) were reduced but did not differ statistically significantly from the CG (2.7 ± 7.6 and 1.9 ± 6.2; p = 0.772 and p = 0.352). Statistically significant between-group differences were found in costs per quarter per person caused by outpatient treatment by office-based psychiatrists (IVS: €74.18 ± 42.30, CG: €53.20 ± 47.96; p < 0.001), by psychiatric institutional outpatient departments (IVS: €4.83 ± 29.57, CG: €27.35 ± 76.48; p < 0.001), by medication (IVS: €471.75 ± 493.09, CG: €429.45 ± 532.73; p = 0.015), and by psychiatric outpatient nursing (IVS: €3.52 ± 23.83, CG: €12.67 ± 57.86; p = 0.045). Mean total psychiatric costs per quarter per person in the IVS (€1117.49 ± 1662.73) were not significantly lower than in the CG (€1180.09 ± 1948.24; p = 0.150). No statistically significant difference in total schizophrenia-related costs per quarter per person was detected between the IVS (€979.46 ± 1358.79) and the CG (€989.45 ± 1611.47; p = 0.084). The cost-effectiveness analysis showed cost savings of €148.59 per reduced psychiatric hospital day and €305.40 per reduced schizophrenia-related hospital day. However, limitations, especially the non-inclusion of costs related to management of the IVS and of additional home treatment within the IVS, restrict the interpretation of the results. The long-term impact of this IVS therefore deserves further evaluation.
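
For context, here is a minimal sketch of 1:1 nearest-neighbor propensity score matching without replacement, as commonly used to reduce selection bias in such routine-data evaluations; calipers, the matching ratio, and balance diagnostics, which a real evaluation needs, are omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nearest_neighbor_match(X, treated):
    """1:1 nearest-neighbor propensity score matching without
    replacement (illustrative sketch; assumes at least as many
    controls as treated). X: (n, p) covariates; treated: 0/1 array."""
    treated = np.asarray(treated, bool)
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated)[0]
    c_idx = list(np.where(~treated)[0])
    pairs = []
    for i in t_idx:
        j = min(c_idx, key=lambda j: abs(ps[i] - ps[j]))  # closest control
        pairs.append((i, j))
        c_idx.remove(j)          # without replacement
    return pairs, ps
```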


Subjects
Ambulatory Care; Cost-Benefit Analysis; Delivery of Health Care, Integrated; Hospitalization; Hospitals, Psychiatric; Insurance, Health; Outpatient Clinics, Hospital; Schizophrenia; Adult; Ambulatory Care/economics; Ambulatory Care/statistics & numerical data; Delivery of Health Care, Integrated/economics; Delivery of Health Care, Integrated/statistics & numerical data; Female; Germany; Hospitalization/economics; Hospitalization/statistics & numerical data; Hospitals, Psychiatric/economics; Hospitals, Psychiatric/statistics & numerical data; Humans; Insurance, Health/economics; Insurance, Health/statistics & numerical data; Male; Middle Aged; Outpatient Clinics, Hospital/economics; Outpatient Clinics, Hospital/statistics & numerical data; Schizophrenia/economics; Schizophrenia/therapy
14.
J Biopharm Stat ; 28(4): 735-749, 2018.
Article in English | MEDLINE | ID: mdl-29072549

ABSTRACT

The growing role of targeted medicine has led to an increased focus on the development of actionable biomarkers. Current penalized selection methods used to identify biomarker panels for classification in high-dimensional data, however, often result in highly complex panels that need careful pruning for practical use. In the framework of regularization methods, a penalty that is a weighted sum of the L1 and L0 norms has been proposed to account for the complexity of the resulting model. In practice, the limitation of this penalty is that the objective function is non-convex and non-smooth, the optimization is computationally intensive, and the application to high-dimensional settings is challenging. In this paper, we propose a stepwise forward variable selection method that combines the L0 norm with the L1 or L2 norm. The penalized likelihood criterion used in the stepwise selection procedure results in more parsimonious models, keeping only the most relevant features. Simulation results and a real application show that our approach performs comparably to common selection methods with respect to prediction while minimizing the number of variables in the selected model, yielding the desired parsimony.
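
A hedged sketch of the selection idea: a greedy forward search over features in a logistic regression, where the criterion charges lam0 per included variable (the L0 part) and an L2 ridge penalty is handled inside each fit. Parameter names and defaults are illustrative, not the paper's tuning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def forward_l0_l2(X, y, lam0=2.0, lam2=1.0, max_vars=20):
    """Greedy forward selection under criterion
    -2 log-likelihood + lam0 * (#features), with ridge (L2) shrinkage
    inside each fit. X: (n, p) matrix; y: 0/1 integer labels."""
    y = np.asarray(y, int)
    n, p = X.shape
    selected, best_crit = [], np.inf
    while len(selected) < min(max_vars, p):
        trial = None
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]
            m = LogisticRegression(C=1.0 / lam2, max_iter=1000).fit(X[:, cols], y)
            nll = -np.sum(np.log(m.predict_proba(X[:, cols])[np.arange(n), y]))
            crit = 2 * nll + lam0 * len(cols)   # L0 part of the penalty
            if crit < best_crit:
                best_crit, trial = crit, j
        if trial is None:
            break                # no remaining feature improves the criterion
        selected.append(trial)
    return selected
```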


Subjects
Computer Simulation/statistics & numerical data; Databases, Factual; Models, Biological; Biomarkers; Humans
15.
Lifetime Data Anal ; 23(3): 339-352, 2017 07.
Article in English | MEDLINE | ID: mdl-26969674

ABSTRACT

In clinical trials, survival endpoints are usually compared using the log-rank test. Sequential methods for the log-rank test and the Cox proportional hazards model are widely reported in the statistical literature. When the proportional hazards assumption is violated, the hazard ratio is ill-defined and the power of the log-rank test depends on the distribution of the censoring times. The average hazard ratio has been proposed as an alternative effect measure, which has a meaningful interpretation in the case of non-proportional hazards and is equal to the hazard ratio if the hazards are indeed proportional. In the present work, we prove that sequential test statistics based on the average hazard ratio are asymptotically multivariate normal with the independent increments property. This allows group-sequential boundaries to be calculated using standard methods and existing software. The finite-sample characteristics of the new method are examined in a simulation study under proportional and non-proportional hazards settings.
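
Because the independent increments property is exactly what standard group-sequential machinery needs, a constant (Pocock-type) boundary can be obtained by simulating cumulative sums of independent normal increments, as in the sketch below. Production software uses numerical integration instead, and the boundary type is a hypothetical choice for illustration.

```python
import numpy as np

def pocock_boundary(k=3, alpha=0.05, n_sim=200_000, seed=1):
    """Monte Carlo approximation of a constant two-sided boundary for
    k equally spaced analyses: Z_j is a standardized cumulative sum of
    independent standard normal increments, and the boundary is the
    (1 - alpha) quantile of max_j |Z_j| under the null."""
    rng = np.random.default_rng(seed)
    inc = rng.standard_normal((n_sim, k))
    z = np.cumsum(inc, axis=1) / np.sqrt(np.arange(1, k + 1))
    max_abs = np.abs(z).max(axis=1)
    return np.quantile(max_abs, 1 - alpha)

print(pocock_boundary())   # about 2.29 for k = 3, alpha = 0.05
```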


Subjects
Proportional Hazards Models; Survival Analysis; Humans
16.
Stat Med ; 35(6): 922-41, 2016 Mar 15.
Article in English | MEDLINE | ID: mdl-26459506

ABSTRACT

There has been increasing interest in trials that allow for design adaptations like sample size reassessment or treatment selection at an interim analysis. Ignoring the adaptive and multiplicity issues in such designs leads to an inflation of the type I error rate, and treatment effect estimates based on the maximum likelihood principle become biased. Whereas the methodological issues concerning hypothesis testing are well understood, it is not clear how to deal with parameter estimation in designs where the adaptation rules are not fixed in advance, so that, in practice, the maximum likelihood estimate (MLE) is used. It is therefore important to understand the behavior of the MLE in such designs. The investigation of bias and mean squared error (MSE) is complicated by the fact that the adaptation rules need not be fully specified in advance and, hence, are usually unknown. To investigate bias and MSE under such circumstances, we search for the sample size reassessment and selection rules that lead to the maximum bias or maximum MSE. Generally, this leads to an overestimation of bias and MSE, which can be reduced by imposing realistic constraints on the rules, such as a maximum sample size. We consider designs that start with k treatment groups and a common control, where a single treatment and the control are selected at the interim analysis, with the possibility of reassessing each of the sample sizes. We consider the case of unlimited sample size reassessment as well as several realistically restricted sample size reassessment rules.
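
A small simulation illustrates the phenomenon for one fixed, natural selection rule (pick the best stage-1 arm and pool both stages); the paper instead searches over adaptation rules for the worst-case bias and MSE.

```python
import numpy as np

def mle_selection_bias(k=4, n1=50, n2=50, delta=0.0, n_sim=100_000, seed=2):
    """Simulate the bias and MSE of the pooled MLE for the selected arm
    in a drop-the-loser design with k arms of equal true effect delta
    and unit-variance observations."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(delta, 1 / np.sqrt(n1), (n_sim, k))  # stage-1 arm means
    sel = x1.argmax(axis=1)                              # select the best arm
    x1_sel = x1[np.arange(n_sim), sel]
    x2 = rng.normal(delta, 1 / np.sqrt(n2), n_sim)       # stage-2 mean, selected arm
    mle = (n1 * x1_sel + n2 * x2) / (n1 + n2)            # pooled MLE
    return mle.mean() - delta, ((mle - delta) ** 2).mean()

print(mle_selection_bias())   # positive bias from selecting the maximum
```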


Subjects
Bias; Clinical Trials as Topic/statistics & numerical data; Research Design; Sample Size; Clinical Trials as Topic/methods; Humans; Likelihood Functions; Models, Statistical
17.
Stat Med ; 34(8): 1317-33, 2015 Apr 15.
Article in English | MEDLINE | ID: mdl-25640198

ABSTRACT

The planning of an oncology clinical trial with a seamless phase II/III adaptive design is discussed. Two regimens of an experimental treatment are compared to a control at an interim analysis, and the most promising regimen is selected to continue, together with the control, until the end of the study. Because the primary endpoint is expected to be immature at the interim regimen selection analysis, designs that incorporate primary as well as surrogate endpoints in the regimen selection process are considered. The final test of efficacy at the end of the study, comparing the selected regimen to the control with respect to the primary endpoint, uses all relevant data collected both before and after the regimen selection analysis. Several approaches for testing the primary hypothesis are assessed with regard to power and type I error rate. Because the operating characteristics of these designs depend on the specific regimen selection rules considered, benchmark scenarios are proposed in which either a perfect surrogate or no surrogate is used at the regimen selection analysis. The operating characteristics of these benchmark scenarios provide a range within which those of the actual study design are expected to lie. A discussion of family-wise error rate control for testing primary and key secondary endpoints as well as an assessment of bias in the final treatment effect estimate for the selected regimen are also presented.


Subjects
Antineoplastic Agents/administration & dosage; Clinical Trials, Phase II as Topic/methods; Clinical Trials, Phase III as Topic/methods; Dose-Response Relationship, Drug; Endpoint Determination/methods; Research Design; Stomach Neoplasms/drug therapy; Bias; Computer Simulation; Data Interpretation, Statistical; Drug Design; Endpoint Determination/statistics & numerical data; Humans; Neoplasm Metastasis; Stomach Neoplasms/pathology; Survival Analysis; Treatment Outcome
18.
Biom J ; 57(4): 712-9, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25951332

ABSTRACT

The fallback procedure is an extension of the hierarchical test procedure that relaxes the predefined hierarchical order. It can be applied, for example, in dose-finding studies. Proposals exist in the literature for extending the fallback procedure to simultaneous confidence intervals, but they have the drawback that noninformative rejections may arise. A noninformative rejection means that the confidence interval of a rejected null hypothesis contains all parameters of the alternative and thus gives no useful information about the true value of the effect parameter. We present a modification of the fallback procedure with corresponding simultaneous confidence intervals that is informative in every case where a hypothesis is rejected. The main idea consists of splitting the level between the null hypotheses and a nested family of informative hypotheses constituting the alternative, with the splitting weights depending continuously on the parameter. The new method is represented by a simple graph and can easily be implemented via an explicit algorithm. We give an example and compare our approach with an existing extension of the fallback procedure to simultaneous confidence intervals by simulations in the context of a dose-finding clinical trial. As a result, we see that the problem of noninformative rejections can be completely removed by the informative fallback procedure, while the involved power loss can be controlled by careful planning.
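
For reference, the classical fallback procedure that is being modified can be stated in a few lines: hypotheses are tested in a fixed order, each at its weighted share of alpha, and the level of a rejected hypothesis is passed on to the next. This baseline is what produces noninformative rejections when extended naively to confidence intervals.

```python
def fallback_test(pvals, weights, alpha=0.05):
    """Classical fallback procedure (Wiens-style): test hypotheses in a
    fixed order; H_i gets level alpha * w_i plus the level of H_{i-1}
    if that hypothesis was rejected. weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    rejected, carry = [], 0.0
    for i, (p, w) in enumerate(zip(pvals, weights)):
        level = alpha * w + carry
        if p <= level:
            rejected.append(i)
            carry = level        # pass the full level forward
        else:
            carry = 0.0          # level is lost for later hypotheses
    return rejected

print(fallback_test([0.02, 0.20, 0.01], [0.5, 0.3, 0.2]))  # [0, 2]
```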


Subjects
Confidence Intervals; Statistics as Topic/methods; Algorithms; Computer Graphics
19.
Stat Med ; 33(19): 3365-86, 2014 Aug 30.
Article in English | MEDLINE | ID: mdl-24782358

ABSTRACT

Step-down tests uniformly improve single-step tests with regard to power and the average number of rejected hypotheses. However, when extended to simultaneous confidence intervals (SCIs), the resulting SCIs often provide no information beyond the sheer hypothesis test. We speak, in this case, of a non-informative rejection. Non-informative rejections are particularly problematic in clinical trials with multiple treatments, where an informative rejection is required to obtain useful estimates of the treatment effects. The extension of single-step tests to confidence intervals does not have this deficiency. As a consequence, step-down tests, when extended to SCIs, do not uniformly improve single-step tests with regard to informative rejections. To overcome this deficiency, we suggest the construction of a new class of simultaneous confidence intervals that uniformly improve the Bonferroni and Holm SCIs with regard to informative rejections. This can be achieved using a dual family of weighted Bonferroni tests, with the weights depending continuously on the parameter values. We provide a simple algorithm for these computations and show that the resulting lower confidence bounds have an attractive shrinkage property. The method is extended to union-intersection tests, such as the Dunnett procedure, and is investigated in a comparative simulation study. We further illustrate the utility of the method with an example from a real clinical trial in which two experimental treatments are compared with an active comparator with respect to non-inferiority and superiority.
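
To see where non-informative rejections come from, the sketch below implements Holm step-down with simultaneous lower bounds in the style of Strassburger and Bretz for one-sided hypotheses H_i: theta_i <= 0: whenever not all hypotheses are rejected, every rejected hypothesis receives the bound 0, which is exactly the non-informative case the paper improves upon. The construction is a hedged reconstruction of that existing extension, not the paper's new method.

```python
import numpy as np
from scipy.stats import norm

def holm_sci_lower_bounds(est, se, alpha=0.05):
    """Holm step-down for H_i: theta_i <= 0 with compatible
    simultaneous lower bounds (Strassburger-Bretz-style sketch).
    est, se: arrays of estimates and standard errors."""
    est, se = np.asarray(est, float), np.asarray(se, float)
    m = len(est)
    p = 1 - norm.cdf(est / se)           # one-sided p-values
    rejected = np.zeros(m, bool)
    for rank, i in enumerate(np.argsort(p)):   # Holm step-down
        if p[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break
    if rejected.all():                   # all rejected: informative bounds
        lower = np.maximum(est - norm.ppf(1 - alpha / m) * se, 0.0)
        return lower, rejected
    k = (~rejected).sum()                # split alpha over accepted H_i
    lower = np.where(rejected, 0.0,      # rejected: non-informative bound 0
                     est - norm.ppf(1 - alpha / k) * se)
    return lower, rejected

print(holm_sci_lower_bounds([3.0, 2.0, 0.5], [1.0, 1.0, 1.0]))
```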


Subjects
Confidence Intervals; Data Interpretation, Statistical; Algorithms; Anticoagulants/therapeutic use; Antithrombins/administration & dosage; Atrial Fibrillation/drug therapy; Benzimidazoles/administration & dosage; Biostatistics; Dabigatran; Humans; Models, Statistical; Proportional Hazards Models; Randomized Controlled Trials as Topic/statistics & numerical data; Stroke/prevention & control; Warfarin/therapeutic use; beta-Alanine/administration & dosage; beta-Alanine/analogs & derivatives
20.
Stat Med ; 33(3): 388-400, 2014 Feb 10.
Article in English | MEDLINE | ID: mdl-23873666

ABSTRACT

Point estimation for the selected treatment in a two-stage drop-the-loser trial is not straightforward, because a substantial bias can be induced in the standard maximum likelihood estimate (MLE) through the first-stage selection process. Research has generally focused on alternative estimation strategies that apply a bias correction to the MLE; however, such estimators can have a large mean squared error. Carreras and Brannath (Stat. Med. 32:1677-90) have recently proposed using a special form of shrinkage estimation in this context. Given certain assumptions, their estimator is shown to dominate the MLE in terms of mean squared error loss, which provides a very powerful argument for its use in practice. In this paper, we suggest the use of a more general form of shrinkage estimation in drop-the-loser trials that has parallels with model fitting in the area of meta-analysis. Several estimators are identified and shown to perform favourably compared with Carreras and Brannath's original estimator and the MLE. However, they necessitate either explicit estimation of an additional parameter measuring the heterogeneity between treatment effects or a rather unnatural prior distribution for the treatment effects that can only be specified after the first-stage data have been observed. Shrinkage methods are a powerful tool for accurately quantifying treatment effects in multi-arm clinical trials, and further research is needed to understand how to maximise their utility.
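
A sketch of the flavor of shrinkage estimation in this setting: stage-1 arm means are shrunk toward their grand mean with a positive-part, empirical-Bayes factor driven by the observed between-arm heterogeneity, and the selected arm's effect is read off the shrunken values. The specific factor below is a Lindley/James-Stein-type choice for illustration, not one of the estimators studied in the paper.

```python
import numpy as np

def shrinkage_estimate_selected(x, n, sigma=1.0):
    """Shrink stage-1 arm means toward their grand mean before reading
    off the selected (best) arm. x: arm means; n: per-arm sample size;
    sigma: known outcome standard deviation. Sensible for k >= 4 arms."""
    x = np.asarray(x, float)
    k = len(x)
    v = sigma ** 2 / n                       # variance of each arm mean
    s2 = ((x - x.mean()) ** 2).sum()         # between-arm heterogeneity
    c = max(0.0, 1.0 - (k - 3) * v / s2)     # positive-part shrinkage factor
    shrunk = x.mean() + c * (x - x.mean())
    sel = int(x.argmax())                    # drop-the-loser selection
    return sel, shrunk[sel]

print(shrinkage_estimate_selected([0.4, 0.1, 0.3, 0.6], n=50))
```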


Subjects
Bayes Theorem; Clinical Trials as Topic/methods; Likelihood Functions; Meta-Analysis as Topic; Research Design; Treatment Outcome; Humans