Fast and powerful conditional randomization testing via distillation.
Liu, Molei; Katsevich, Eugene; Janson, Lucas; Ramdas, Aaditya.
Affiliation
  • Liu M; Department of Biostatistics, Harvard Chan School of Public Health, 677 Huntington Avenue, Boston, Massachusetts 02115, U.S.A.
  • Katsevich E; Department of Statistics and Data Science, Wharton School of the University of Pennsylvania, 265 South 37th Street, Philadelphia, Pennsylvania 19104, U.S.A.
  • Janson L; Department of Statistics, Harvard University, One Oxford Street, Cambridge, Massachusetts 02138, U.S.A.
  • Ramdas A; Department of Statistics & Data Science, Carnegie Mellon University, 132H Baker Hall, Pittsburgh, Pennsylvania 15213, U.S.A.
Biometrika ; 109(2): 277-293, 2022 Jun.
Article in English | MEDLINE | ID: mdl-37416628
ABSTRACT
We consider the problem of conditional independence testing: given a response Y and covariates (X,Z), we test the null hypothesis that Y⫫X∣Z. The conditional randomization test was recently proposed as a way to use distributional information about X∣Z to exactly and nonasymptotically control Type-I error using any test statistic in any dimensionality without assuming anything about Y∣(X,Z). This flexibility, in principle, allows one to derive powerful test statistics from complex prediction algorithms while maintaining statistical validity. Yet the direct use of such advanced test statistics in the conditional randomization test is prohibitively computationally expensive, especially with multiple testing, due to the requirement to recompute the test statistic many times on resampled data. We propose the distilled conditional randomization test, a novel approach to using state-of-the-art machine learning algorithms in the conditional randomization test while drastically reducing the number of times those algorithms need to be run, thereby taking advantage of their power and the conditional randomization test's statistical guarantees without suffering the usual computational expense. In addition to distillation, we propose a number of other tricks, like screening and recycling computations, to further speed up the conditional randomization test without sacrificing its high power and exact validity. Indeed, we show in simulations that all our proposals combined lead to a test that has similar power to the most powerful existing conditional randomization test implementations, but requires orders of magnitude less computation, making it a practical tool even for large datasets. We demonstrate these benefits on a breast cancer dataset by identifying biomarkers related to cancer stage.
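To make the distillation idea concrete, the following is a minimal sketch (not the authors' implementation) of a distilled conditional randomization test p-value. It assumes, purely for illustration, that X∣Z is Gaussian with known mean mu_x and standard deviation sigma_x, and uses scikit-learn's LassoCV as the machine learning algorithm that distills the information in Z about Y; the function name dcrt_pvalue and these modelling choices are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def dcrt_pvalue(X, Z, Y, mu_x, sigma_x, B=2000, rng=None):
    """Hypothetical sketch of a distilled CRT p-value.

    X : (n,) covariate of interest
    Z : (n, p) remaining covariates
    Y : (n,) response
    mu_x, sigma_x : assumed (known) conditional mean and sd of X | Z;
        the CRT requires the distribution of X | Z, taken Gaussian here
        only for illustration.
    """
    rng = np.random.default_rng(rng)

    # Distillation step: fit the expensive learner (lasso here) ONCE
    # to distill the information in Z about Y into d_y(Z).
    d_y = LassoCV(cv=5).fit(Z, Y).predict(Z)

    # Cheap test statistic on residuals: squared correlation between
    # (Y - d_y(Z)) and (X - E[X | Z]).
    def stat(x):
        rx = x - mu_x
        ry = Y - d_y
        return (rx @ ry) ** 2 / ((rx @ rx) * (ry @ ry) + 1e-12)

    t_obs = stat(X)

    # Resampling step: only X is redrawn from X | Z, and only the cheap
    # statistic is recomputed, so the lasso is never refit.
    t_null = np.array([stat(mu_x + sigma_x * rng.standard_normal(len(X)))
                       for _ in range(B)])

    # Finite-sample valid CRT p-value.
    return (1 + np.sum(t_null >= t_obs)) / (B + 1)
```

The point of the sketch is the cost structure: the machine learning fit depends only on (Y, Z) and is performed once, while each of the B resamples requires only a residual correlation, which is what makes the test orders of magnitude cheaper than rerunning the learner on every resampled dataset.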
Full text: 1 Database: MEDLINE Study type: Clinical_trials / Prognostic_studies Language: En Year: 2022 Document type: Article