We propose the conditional predictive impact (CPI), a consistent and unbiased estimator of the association between one or several features and a given outcome, conditional on a reduced feature set. Building on the knockoff framework of Candès et al. (2018), we develop a novel testing procedure that works in conjunction with any valid knockoff sampler, supervised learning algorithm, and loss function. The CPI can be efficiently computed for high-dimensional data without any sparsity constraints. We demonstrate convergence criteria for the CPI and develop statistical inference procedures for evaluating its magnitude, significance, and precision. These tests aid in feature and model selection, extending traditional frequentist and Bayesian techniques to general supervised learning tasks.
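The following is a minimal illustrative sketch of the testing procedure described above, not the authors' implementation. It assumes scikit-learn as the learner, squared-error loss, a crude Gaussian conditional sampler standing in for a valid knockoff sampler, and a one-sided paired t-test on the per-observation loss differences; all of these choices, and the helper names `gaussian_knockoffs` and `cpi_test`, are assumptions made here for illustration, since the procedure is agnostic to the sampler, learner, and loss.

```python
# Sketch of the CPI: compare per-sample loss when feature j is replaced by a
# knockoff copy versus the original, then test whether the mean increase is > 0.
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split


def gaussian_knockoffs(X, rng):
    """Crude Gaussian conditional sampler used as a stand-in for a valid
    knockoff sampler: each column is redrawn from its conditional normal
    distribution given the remaining columns."""
    mu, Sigma = X.mean(axis=0), np.cov(X, rowvar=False)
    X_tilde = np.empty_like(X)
    for j in range(X.shape[1]):
        rest = np.delete(np.arange(X.shape[1]), j)
        S_rr, S_jr = Sigma[np.ix_(rest, rest)], Sigma[j, rest]
        beta = np.linalg.solve(S_rr, S_jr)
        cond_mean = mu[j] + (X[:, rest] - mu[rest]) @ beta
        cond_var = max(Sigma[j, j] - S_jr @ beta, 1e-12)
        X_tilde[:, j] = rng.normal(cond_mean, np.sqrt(cond_var))
    return X_tilde


def cpi_test(model, X_test, y_test, j, rng):
    """CPI for feature j: mean increase in squared-error loss when x_j is
    swapped for its knockoff, plus a one-sided paired t-test p-value."""
    loss_orig = (y_test - model.predict(X_test)) ** 2
    X_ko = X_test.copy()
    X_ko[:, j] = gaussian_knockoffs(X_test, rng)[:, j]
    loss_ko = (y_test - model.predict(X_ko)) ** 2
    delta = loss_ko - loss_orig                 # per-observation impact
    cpi = delta.mean()
    t, p_two = stats.ttest_1samp(delta, 0.0)
    p = p_two / 2 if t > 0 else 1 - p_two / 2   # H1: CPI > 0
    return cpi, p


# Toy usage: only the first two features are truly associated with y.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
for j in range(X.shape[1]):
    print(j, *cpi_test(model, X_te, y_te, j, rng))
```

Because the fitted model is reused and only the test-time inputs are perturbed, the sketch avoids refitting per feature, which is what makes the measure cheap to compute even when the feature set is large.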