Knockoff boosted tree for model-free variable selection.
Jiang, Tao; Li, Yuanyuan; Motsinger-Reif, Alison A.
Affiliation
  • Jiang T; Department of Statistics, Bioinformatics Research Center, North Carolina State University, Raleigh, NC 27695, USA.
  • Li Y; Biostatistics & Computational Biology Branch, National Institute of Environmental Health Sciences, Durham, NC 27709, USA.
  • Motsinger-Reif AA; Biostatistics & Computational Biology Branch, National Institute of Environmental Health Sciences, Durham, NC 27709, USA.
Bioinformatics; 37(7): 976-983, 2021 May 17.
Article in English | MEDLINE | ID: mdl-32966559
ABSTRACT
MOTIVATION:

The recently proposed knockoff filter is a general framework for controlling the false discovery rate (FDR) when performing variable selection. This powerful approach generates a 'knockoff' of each tested variable to achieve exact FDR control: imitation variables that mimic the correlation structure of the original variables serve as negative controls for statistical inference. Current applications of knockoff methods rely on linear regression models and conduct variable selection only for variables that appear explicitly in the model function. Here, we extend the use of knockoffs to machine learning with boosted trees, which are successful and widely used in problems where no prior knowledge of the model function is required. However, the importance scores currently available in tree models are insufficient for variable selection with FDR control.
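To make the selection rule concrete, below is a minimal sketch of the knockoff+ thresholding step that underlies FDR control in the knockoff filter. The statistic W_j contrasts the importance of each original variable with that of its knockoff; a positive W_j favors the original. The function name `knockoff_threshold` and the toy values of `W` are illustrative and not taken from the paper or the KOBT package.

```python
import numpy as np

def knockoff_threshold(W, q=0.2):
    """Knockoff+ threshold: the smallest t > 0 such that
    (1 + #{j : W_j <= -t}) / max(1, #{j : W_j >= t}) <= q."""
    candidates = np.sort(np.abs(W[W != 0]))
    for t in candidates:
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return t
    return np.inf  # no threshold achieves the target FDR; select nothing

# W_j = importance(X_j) - importance(knockoff of X_j); toy values for illustration.
W = np.array([3.2, 2.5, 1.9, 2.8, -0.4, 0.1, -0.2, 0.05, 1.6, -0.1])
t = knockoff_threshold(W, q=0.2)
selected = np.where(W >= t)[0]
print("threshold:", t, "selected indices:", selected)
```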

RESULTS:

We propose a novel strategy for conducting variable selection without prior knowledge of the model topology, using the knockoff method with boosted tree models. We extend the current knockoff method to model-free variable selection through the use of tree-based models. Additionally, we propose and evaluate two new sampling methods for generating knockoffs, namely the sparse covariance and principal component knockoff methods. We test and compare these methods with the original knockoff method with respect to their ability to control type I error and their power. In simulation tests, we compare the properties and performance of importance test statistics of tree models. The results cover different combinations of knockoffs and importance test statistics. We consider scenarios that include main-effect, interaction, exponential and second-order models while assuming the true model structures are unknown. We apply our algorithm to tumor purity estimation and tumor classification using The Cancer Genome Atlas (TCGA) gene expression data. Our results show improved discrimination between difficult-to-discriminate cancer types.

AVAILABILITY AND IMPLEMENTATION:

The proposed algorithm is included in the KOBT package, which is available at https://cran.r-project.org/web/packages/KOBT/index.html.

SUPPLEMENTARY INFORMATION:

Supplementary data are available at Bioinformatics online.
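The sketch below illustrates the overall workflow described above: generate knockoffs, fit a boosted tree model on the augmented design, contrast original versus knockoff importances, and apply the knockoff+ threshold. It is not the KOBT implementation: the independence assumption used to generate knockoffs (standing in for the paper's sparse covariance and principal component constructions), the use of scikit-learn's impurity-based feature_importances_, and all variable names are simplifying assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def knockoff_threshold(W, q=0.2):
    # Knockoff+ threshold, as in the earlier sketch.
    candidates = np.sort(np.abs(W[W != 0]))
    for t in candidates:
        if (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t)) <= q:
            return t
    return np.inf

rng = np.random.default_rng(0)
n, p = 500, 20

# Toy data: independent Gaussian features with a sparse main-effect signal.
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0
y = X @ beta + rng.standard_normal(n)

# With independent features, fresh draws from the same marginal distribution
# are valid model-X knockoffs (a simplification of the paper's constructions).
X_knockoff = rng.standard_normal((n, p))

# Fit a boosted tree model on the augmented design [X, X_knockoff].
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=0)
model.fit(np.hstack([X, X_knockoff]), y)

# Knockoff statistics: original importance minus knockoff importance.
importance = model.feature_importances_
W = importance[:p] - importance[p:]

t = knockoff_threshold(W, q=0.2)
print("selected variables:", np.where(W >= t)[0])
```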

Full text: 1 Database: MEDLINE Main subject: Algorithms / Machine Learning Study type: Prognostic_studies Language: En Journal: Bioinformatics Journal subject: MEDICAL INFORMATICS Year: 2021 Document type: Article Country of affiliation: United States