Results 1 - 12 of 12
1.
J Biopharm Stat ; 34(1): 90-110, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-36757196

ABSTRACT

The graphical approach has been proposed as a general framework for clinical trial designs involving multiple hypotheses, where decisions are based only on the observed marginal p-values. The graphical approach starts from a graph that includes all hypotheses as vertices and gradually removes vertices as their corresponding hypotheses are rejected. In this paper, we propose a reverse graphical approach, which starts from a set of singleton graphs and gradually adds vertices into graphs until a set of hypotheses is rejected. Proofs of familywise error rate control are provided. A simulation study is conducted for statistical power analysis, and a case study illustrates how the proposed approach can be applied to clinical studies.
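For context, the forward graphical approach that this paper reverses can be sketched in a few lines. This is a minimal illustration of the standard weight-propagation algorithm of Bretz et al. under its usual conventions, not the reverse procedure proposed in the paper; the function name and interface are illustrative.

```python
def graphical_procedure(pvals, weights, G, alpha=0.05):
    """Forward graphical multiple testing (Bretz et al. style, sketch).

    pvals   : marginal p-values, one per hypothesis
    weights : initial alpha-weights (non-negative, summing to at most 1)
    G       : transition matrix; G[j][i] is the fraction of H_j's weight
              passed to H_i when H_j is rejected (rows sum to at most 1)
    Returns the set of indices of rejected hypotheses.
    """
    m = len(pvals)
    active = set(range(m))
    w = list(weights)
    g = [row[:] for row in G]
    rejected = set()
    while True:
        # reject any active hypothesis meeting its current local level
        cand = [j for j in active if pvals[j] <= alpha * w[j]]
        if not cand:
            return rejected
        j = cand[0]
        rejected.add(j)
        active.discard(j)
        # propagate H_j's weight along the graph edges
        new_w = {i: w[i] + w[j] * g[j][i] for i in active}
        new_g = {}
        for i in active:
            for k in active:
                if i == k:
                    continue
                denom = 1 - g[i][j] * g[j][i]
                new_g[(i, k)] = (g[i][k] + g[i][j] * g[j][k]) / denom if denom > 0 else 0.0
        for i in active:
            w[i] = new_w[i]
            for k in active:
                if i != k:
                    g[i][k] = new_g[(i, k)]
```

For two hypotheses with equal weights and full weight transfer between them, this reduces to Holm's procedure: rejecting one hypothesis at level alpha/2 raises the other's local level to alpha.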


Subject(s)
Clinical Trials as Topic , Research Design
2.
Contemp Clin Trials ; 129: 107185, 2023 06.
Article in English | MEDLINE | ID: mdl-37059263

ABSTRACT

BACKGROUND: In confirmatory clinical trials, it is critical to have appropriate control of multiplicity for multiple comparisons or endpoints. When multiplicity-related issues arise from different sources (e.g., multiple endpoints, multiple treatment arms, multiple interim data-cuts and other factors), it can become complicated to control the family-wise type I error rate (FWER). Therefore, it is crucial for statisticians to fully understand the multiplicity adjustment methods and the objectives of the analysis regarding study power, sample size and feasibility in order to identify the proper multiplicity adjustment strategy. METHODS: In the context of multiplicity adjustment of multiple dose levels and multiple endpoints in a confirmatory trial, we proposed a modified truncated Hochberg procedure in combination with a fixed-sequence hierarchical testing procedure to strongly control the FWER. In this paper, we provided a brief review of the mathematical framework of the regular Hochberg procedure, the truncated Hochberg procedure and the proposed modified truncated Hochberg procedure. An ongoing phase 3 confirmatory trial for pediatric functional constipation was used as a real case application to illustrate how the proposed modified truncated Hochberg procedure will be implemented. A simulation study was conducted to demonstrate that the study was adequately powered and the FWER was strongly controlled. CONCLUSION: This work is expected to facilitate the understanding and selection of adjustment methods for statisticians.
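As a sketch of the building block reviewed above: a step-up procedure whose critical constants interpolate between the regular Hochberg procedure (gamma = 1) and Bonferroni (gamma = 0), following the commonly cited truncated-Hochberg constants of Dmitrienko and colleagues. The paper's modified variant and its combination with fixed-sequence hierarchical testing are not reproduced here; names are illustrative.

```python
def truncated_hochberg(pvals, alpha=0.05, gamma=1.0):
    """Truncated Hochberg step-up test (sketch); returns rejected indices.

    gamma = 1 recovers the regular Hochberg procedure;
    gamma = 0 recovers the Bonferroni procedure.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p-values
    # step up: from the largest p-value down, find the largest i whose
    # ordered p-value meets its critical constant, and reject H_(1..i)
    for i in range(m, 0, -1):
        crit = (gamma / (m - i + 1) + (1 - gamma) / m) * alpha
        if pvals[order[i - 1]] <= crit:
            return set(order[:i])
    return set()
```

Lowering gamma trades power for robustness: with two p-values of 0.03 and 0.04 at alpha = 0.05, the regular Hochberg procedure rejects both, while the gamma = 0.5 truncation rejects neither.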


Asunto(s)
Proyectos de Investigación , Humanos , Niño , Interpretación Estadística de Datos , Simulación por Computador , Tamaño de la Muestra
3.
J Biopharm Stat ; 33(5): 596-610, 2023 09 03.
Article in English | MEDLINE | ID: mdl-36607042

ABSTRACT

There are various multiple comparison procedures used for multiplicity adjustment in confirmatory clinical studies and exploratory research. Among them are the Hochberg and Benjamini-Hochberg procedures. A common misconception is that these procedures control the type I error rate properly if the test statistics are independent or positively correlated. In fact, a much stronger positive dependence assumption needs to be satisfied to guarantee type I error rate control. We give a comprehensive review of the dependence conditions used in multiple testing procedures. We show that a weaker positive dependence assumption may result in an inflation of the type I error rate by a factor of 2, and we discuss type I error rate control under certain negative dependence conditions.
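The second of the two procedures named above can be stated compactly; a minimal Benjamini-Hochberg implementation is shown below (the Hochberg FWER procedure has the same step-up shape but different critical constants).

```python
def benjamini_hochberg(pvals, q=0.05):
    """BH step-up FDR procedure: reject the k smallest p-values for the
    largest k such that p_(k) <= k * q / m. Returns rejected indices."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending
    k = 0
    for i, idx in enumerate(order, start=1):
        if pvals[idx] <= i * q / m:
            k = i  # keep the largest i passing its threshold
    return set(order[:k])
```

Its nominal FDR guarantee holds under independence or the stronger positive-dependence (PRDS) condition the abstract alludes to, not under arbitrary positive correlation.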


Subject(s)
Research Design
4.
Methods Mol Biol ; 2426: 1-24, 2023.
Article in English | MEDLINE | ID: mdl-36308682

ABSTRACT

In proteomic differential analysis, FDR control is often performed through a multiple test correction (i.e., the adjustment of the original p-values). In this protocol, we apply a recent alternative method based on so-called knockoff filters. It shares interesting conceptual similarities with the target-decoy competition procedure classically used in proteomics for FDR control at the peptide identification step. To provide practitioners with a unified understanding of FDR control in proteomics, we apply the knockoff procedure to real and simulated quantitative datasets. Leveraging these comparisons, we propose to adapt the knockoff procedure to better fit the specificities of quantitative proteomic data (notably, very small sample sizes). The performance of the knockoff procedure is compared with that of the classical Benjamini-Hochberg procedure, thereby shedding new light on the strengths and weaknesses of target-decoy competition.


Asunto(s)
Péptidos , Proteómica , Proteómica/métodos , Algoritmos
5.
Ther Innov Regul Sci ; 57(2): 304-315, 2023 03.
Article in English | MEDLINE | ID: mdl-36280651

ABSTRACT

When simultaneous comparisons are performed, a procedure must be employed to control the overall significance level (the familywise Type I error rate). Hochberg's stepwise testing procedure is often used; here we address determination of the sample size needed to achieve a specified power for two pairwise comparisons when observations follow a normal distribution. Three different scenarios are considered: subsets defined by a baseline criterion, two treatments compared to a control, or one set of subjects nested within the other. The solutions for these three scenarios differ and are examined. Sample sizes for differences in success probabilities under binomial distributions are presented using asymptotic normality. The sample sizes and power using Hochberg's procedure are compared with the corresponding results using the Bonferroni approach.
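The Bonferroni benchmark in the comparison above has a standard closed form for normal outcomes; a sketch of the per-group sample size for k two-sided pairwise z-tests of a standardized mean difference delta (function name illustrative):

```python
from math import ceil
from statistics import NormalDist

def n_per_group_bonferroni(delta, alpha=0.05, power=0.80, k=2):
    """Per-group n so that each of k two-sided pairwise z-tests, run at
    level alpha/k, has the target marginal power to detect standardized
    difference delta between two group means (equal variances, equal n)."""
    z_a = NormalDist().inv_cdf(1 - alpha / (2 * k))  # Bonferroni-adjusted quantile
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_a + z_b) / delta) ** 2)
```

Because Hochberg's step-up procedure is uniformly more powerful than Bonferroni, the sample size it requires is no larger, but it lacks a closed form and is typically obtained numerically or by simulation, as in the paper.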


Asunto(s)
Proyectos de Investigación , Humanos , Tamaño de la Muestra
6.
Stat Methods Med Res ; 27(12): 3560-3576, 2018 12.
Article in English | MEDLINE | ID: mdl-28504080

ABSTRACT

Many statistical studies report p-values for inferential purposes. In several scenarios, the stochastic aspect of p-values is neglected, which may contribute to drawing wrong conclusions in real data experiments. The stochastic nature of p-values makes it difficult to use them to examine the performance of given testing procedures or associations between investigated factors. We turn our focus to the modern statistical literature to address the expected p-value (EPV) as a measure of the performance of decision-making rules. During the course of our study, we prove that the EPV can be considered in the context of receiver operating characteristic (ROC) curve analysis, a well-established biostatistical methodology. The ROC-based framework provides a new and efficient methodology for investigating and constructing statistical decision-making procedures, including: (1) evaluation and visualization of properties of the testing mechanisms, considering, e.g., partial EPVs; (2) development of optimal tests via minimization of EPVs; (3) creation of novel methods for optimally combining multiple test statistics. We demonstrate that the proposed EPV-based approach allows us to maximize the integrated power of testing algorithms with respect to various significance levels. In an application, we use the proposed method to construct the optimal test and analyze a myocardial infarction disease dataset. We outline the usefulness of the "EPV/ROC" technique for evaluating different decision-making procedures, their constructions and properties, with an eye towards practical applications.
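For a one-sided z-test the EPV has a simple closed form that makes the ROC connection concrete: if the test statistic is N(mu, 1) under the alternative, the expected p-value equals P(Z* >= Z) for an independent null copy Z*, i.e. Phi(-mu/sqrt(2)). This is a standard result for this special case (names below are illustrative), checked here against direct simulation:

```python
import random
from math import sqrt
from statistics import NormalDist

def epv_z_test(mu):
    """Closed-form expected p-value of a one-sided z-test at effect size mu."""
    return NormalDist().cdf(-mu / sqrt(2))

def epv_simulated(mu, n=200_000, seed=1):
    """Monte Carlo estimate: average the p-value 1 - Phi(Z) over Z ~ N(mu, 1)."""
    rng = random.Random(seed)
    phi = NormalDist().cdf
    return sum(1 - phi(rng.gauss(mu, 1)) for _ in range(n)) / n
```

A smaller EPV means the test pushes p-values toward zero under the alternative, which is exactly the sense in which the paper treats EPV as one minus an ROC-type area.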


Asunto(s)
Investigación Biomédica/estadística & datos numéricos , Curva ROC , Algoritmos , Biomarcadores , Interpretación Estadística de Datos , Humanos
7.
Clin Trials ; 13(6): 651-659, 2016 12.
Article in English | MEDLINE | ID: mdl-27439306

ABSTRACT

BACKGROUND/AIMS: Factorial analyses of 2 × 2 trial designs are known to be problematic unless one can be sure that there is no interaction between the treatments (A and B). Instead, we consider non-factorial analyses of a factorial trial design that addresses clinically relevant questions of interest without any assumptions on the interaction. Primary questions of interest are as follows: (1) is A better than the control treatment C, (2) is B better than C, (3) is the combination of A and B (AB) better than C, and (4) is AB better than A, B, and C. METHODS: A simple three-step procedure that tests the first three primary questions of interest using a Bonferroni adjustment at the first step is proposed. A Hochberg procedure on the four primary questions is also considered. The two procedures are evaluated and compared in limited simulations. Published results from three completed trials with factorial designs are re-evaluated using the two procedures. RESULTS: Both suggested procedures (that answer multiple questions) require a 50%-60% increase in per arm sample size over a two-arm design asking a single question. The simulations suggest a slight advantage to the three-step procedure in terms of power (for the primary and secondary questions). The proposed procedures would have formally addressed the questions arising in the highlighted published trials arguably more simply than the pre-specified factorial analyses used. CONCLUSION: Factorial trial designs are an efficient way to evaluate two treatments, alone and in combination. In situations where a statistical interaction between the treatment effects cannot be assumed to be 0, simple non-factorial analyses are possible that directly assess the questions of interest without the zero interaction assumption.


Asunto(s)
Investigación Biomédica , Proyectos de Investigación , Estadística como Asunto , Humanos , Ensayos Clínicos Controlados Aleatorios como Asunto
8.
Stat Med ; 35(1): 5-20, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26278421

ABSTRACT

There is much interest in using the Hochberg procedure (HP) for statistical tests on the primary endpoints of confirmatory clinical trials. The procedure is simple to use and enjoys more power than the Bonferroni and Holm procedures. However, unlike those two procedures, the HP is not assumption-free. It controls the familywise type I error rate when the test statistics are independent or, if dependent, satisfy a conditionally independent formulation. Otherwise, its properties for dependent tests are at present not fully understood. Consequently, its use for confirmatory trials, especially for their primary endpoints, remains worrisome. Confirmatory trials are typically designed with one or two primary endpoints. Therefore, a question was raised at the Food and Drug Administration as to whether the HP is a valid test for the simple case of performing treatment-to-control comparisons on two primary endpoints when their test statistics are not independent. Confirmatory trials normally use simple test statistics, such as the normal Z, Student's t, and chi-square. The literature does include some work on the HP for dependent cases covering these test statistics, but concerns remain regarding its use in confirmatory trials, for which endpoint tests are mostly of the dependent kind. The purpose of this paper is therefore to revisit this procedure and provide sufficient detail for a better understanding of its performance in dependent cases related to the aforementioned question. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
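The two-endpoint question above can be probed directly by simulation: estimate the FWER of the Hochberg procedure for two correlated two-sided z-tests under the global null. A minimal sketch for the equicorrelated bivariate normal case (a setting where the HP is generally observed to be conservative; function name illustrative):

```python
import random
from math import sqrt
from statistics import NormalDist

def hochberg_fwer_2(rho, alpha=0.05, n_sim=20_000, seed=7):
    """Monte Carlo FWER of the two-hypothesis Hochberg procedure under the
    global null, with correlation rho between the two z-statistics."""
    rng = random.Random(seed)
    two_sided_p = lambda z: 2 * NormalDist().cdf(-abs(z))
    errors = 0
    for _ in range(n_sim):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + sqrt(1 - rho * rho) * rng.gauss(0, 1)
        p = sorted([two_sided_p(z1), two_sided_p(z2)])
        # Hochberg, m = 2: reject both if p(2) <= alpha,
        # otherwise reject the smaller one if p(1) <= alpha/2
        if p[1] <= alpha or p[0] <= alpha / 2:
            errors += 1
    return errors / n_sim
```

Running this across a grid of rho values is a quick empirical check of the type of dependence question the paper addresses analytically.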


Asunto(s)
Ensayos Clínicos como Asunto/estadística & datos numéricos , Modelos Estadísticos , Bioestadística/métodos , Distribución de Chi-Cuadrado , Humanos , Análisis Multivariante , Reproducibilidad de los Resultados
9.
Psychon Bull Rev ; 23(2): 640-7, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26374437

ABSTRACT

Many psychologists do not realize that exploratory use of the popular multiway analysis of variance harbors a multiple-comparison problem. In the case of two factors, three separate null hypotheses are subject to test (i.e., two main effects and one interaction). Consequently, the probability of at least one Type I error (if all null hypotheses are true) is 14 % rather than 5 %, if the three tests are independent. We explain the multiple-comparison problem and demonstrate that researchers almost never correct for it. To mitigate the problem, we describe four remedies: the omnibus F test, control of the familywise error rate, control of the false discovery rate, and preregistration of the hypotheses.
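The 14% figure follows directly from the independence assumption stated above:

```python
alpha = 0.05
k = 3  # two main effects + one interaction in a two-factor ANOVA
# probability of at least one Type I error across k independent tests
p_any_type_i = 1 - (1 - alpha) ** k  # = 0.142625, i.e. about 14% rather than 5%
```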


Asunto(s)
Análisis de Varianza , Investigación Biomédica/normas , Interpretación Estadística de Datos , Psicología/normas , Humanos
10.
Biom J ; 57(1): 144-58, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25410394

ABSTRACT

In many applications, researchers are interested in making q pairwise comparisons among k test groups on the basis of m outcome variables. Often, m is very large. For example, such situations arise in gene expression microarray studies involving several experimental groups. Researchers are often not only interested in identifying differentially expressed genes between a given pair of experimental groups, but are also interested in making directional inferences, such as whether a gene is up- or downregulated in one treatment group relative to another. In such situations, in addition to the usual errors such as false positives (Type I errors) and false negatives (Type II errors), one may commit directional errors (Type III errors). For example, in a dose-response microarray study, a gene may be declared to be upregulated in the high-dose group compared to the low-dose group when it is not. In this paper, we introduce a mixed directional false discovery rate (mdFDR) controlling procedure using weighted p-values to select positives in different directions. The weights are defined as the inverse of two times the proportion of either positive or negative discoveries. The proposed procedure is proved mathematically to control the mdFDR at level α and to have larger power (defined as the expected proportion of non-true null hypotheses that are rejected) than the GSP10 procedure proposed by Guo et al. (2010). Simulation studies and a real data analysis are also conducted to show that the proposed procedure outperforms the GSP10 procedure.


Asunto(s)
Biometría/métodos , Acetaminofén/efectos adversos , Animales , Relación Dosis-Respuesta a Droga , Reacciones Falso Negativas , Reacciones Falso Positivas , Hígado/efectos de los fármacos , Hígado/metabolismo , Ratas , Factores de Tiempo , Transcriptoma/efectos de los fármacos
11.
Stat Med ; 33(8): 1321-35, 2014 Apr 15.
Article in English | MEDLINE | ID: mdl-24307257

ABSTRACT

We generalize a multistage procedure for parallel gatekeeping to what we refer to as k-out-of-n gatekeeping, in which at least k out of n hypotheses (1 ⩽ k ⩽ n) in a gatekeeper family must be rejected in order to test the hypotheses in the following family. This gatekeeping restriction arises in certain types of clinical trials; for example, in rheumatoid arthritis trials, it is required that efficacy be shown on at least three of the four primary endpoints. We provide a unified theory of multistage procedures for arbitrary k, with k = 1 corresponding to parallel gatekeeping and k = n to serial gatekeeping. The theory provides an insight into the construction of truncated separable multistage procedures using the closure method. Explicit formulae for calculating the adjusted p-values are given. The proposed procedure is simpler to apply for this particular problem using a stepwise algorithm than the mixture procedure and the graphical procedure with memory using entangled graphs.
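The gatekeeping restriction itself is easy to state in code. The sketch below shows only the gate logic, with a plain Bonferroni test inside each family; it does not reproduce the paper's truncated separable multistage procedure, which carries unused alpha forward, and all names are illustrative.

```python
def k_out_of_n_gatekeeper(p_family1, p_family2, k, alpha=0.05):
    """Test family 2 only if at least k of the n family-1 hypotheses are
    rejected. k = 1 mimics parallel gatekeeping, k = n serial gatekeeping.
    Returns (rejections in family 1, rejections in family 2)."""
    n = len(p_family1)
    # simple Bonferroni within the gatekeeper family (a simplification)
    rej1 = {i for i, p in enumerate(p_family1) if p <= alpha / n}
    if len(rej1) < k:
        return rej1, set()  # gate closed: family 2 is not tested
    m = len(p_family2)
    rej2 = {j for j, p in enumerate(p_family2) if p <= alpha / m}
    return rej1, rej2
```

In the rheumatoid arthritis example above, family 1 would be the four primary endpoints with k = 3.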


Asunto(s)
Algoritmos , Ensayos Clínicos como Asunto/métodos , Interpretación Estadística de Datos , Antiinflamatorios/farmacología , Artritis Reumatoide/tratamiento farmacológico , Articulaciones/efectos de los fármacos
12.
Ann Stat ; 39(1): 556-583, 2011 Feb.
Article in English | MEDLINE | ID: mdl-25018568

ABSTRACT

Improved procedures, in terms of smaller missed discovery rates (MDR), for performing multiple hypotheses testing with weak and strong control of the family-wise error rate (FWER) or the false discovery rate (FDR) are developed and studied. The improvement over existing procedures, such as the Šidák procedure for FWER control and the Benjamini-Hochberg (BH) procedure for FDR control, is achieved by exploiting possible differences in the powers of the individual tests. Results signal the need to take into account the powers of the individual tests and to have multiple hypotheses decision functions which are not limited to simply using the individual p-values, as is the case, for example, with the Šidák, Bonferroni, or BH procedures. They also enhance understanding of the role of the powers of individual tests, or more precisely the receiver operating characteristic (ROC) functions of decision processes, in the search for better multiple hypotheses testing procedures. A decision-theoretic framework is utilized, and through auxiliary randomizers the procedures can be used with discrete or mixed-type data or with rank-based nonparametric tests. This is in contrast to existing p-value based procedures, whose theoretical validity is contingent on each of these p-value statistics being stochastically equal to or greater than a standard uniform variable under the null hypothesis. The proposed procedures are relevant in the analysis of high-dimensional "large M, small n" data sets arising in the natural, physical, medical, economic and social sciences, whose generation is accelerated by advances in high-throughput technology, notably, but not limited to, microarray technology.
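For reference, the Šidák baseline mentioned above replaces Bonferroni's per-test level alpha/m with the slightly less strict level that gives exact FWER alpha for m independent tests:

```python
def sidak_level(alpha, m):
    """Per-test level with exact FWER alpha for m independent tests:
    solve 1 - (1 - x)^m = alpha for x."""
    return 1 - (1 - alpha) ** (1 / m)

def bonferroni_level(alpha, m):
    """Conservative per-test level valid under arbitrary dependence."""
    return alpha / m
```

For m = 3 and alpha = 0.05 the Šidák level is about 0.01695 versus 0.01667 for Bonferroni; both use only the marginal p-values, which is exactly the limitation the paper's decision-theoretic procedures are designed to move beyond.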
