To perform inference after model selection, we propose controlling the selective type I error; i.e., the error rate of a test given that it was performed. By doing so, we recover long-run frequency properties among selected hypotheses analogous to those that apply in the classical (non-adaptive) context. Our proposal is closely related to data splitting and has a similar intuitive justification, but is more powerful. Exploiting the classical theory of Lehmann and Scheffé (1955), we derive most powerful unbiased selective tests and confidence intervals for inference in exponential family models after arbitrary selection procedures. For linear regression, we derive new selective z-tests that generalize recent proposals for inference after m...
Conventional statistical inference requires that a model of how the data were generated be known bef...
For a sequence of statistical experiments with a finite parameter set the asymptotic behavio...
In this thesis, I consider the problem of accounting for model uncertainty in a parametric regressio...
To perform inference after model selection, we propose controlling the selective type I error; i.e.,...
It is common practice in statistical data analysis to perform data-driven variable selection and der...
In the classical theory of statistical inference, data is assumed to be generated from a known model...
Thesis (Ph.D.), University of Washington, 2018. The field of post-selection inference focuses on devel...
Plug-in estimation and corresponding refinements involving penalisation have been considered in vari...
We suggest general methods to construct asymptotically uniformly valid confidence intervals post-mod...
In statistical settings such as regression and time series, we can condition on observed informatio...
We study model selection strategies based on penalized empirical loss minimization. We point out a...
The development of the classical inferential theory of mathematical statistics is based on the philo...
We develop a framework for post model selection inference, via marginal screening, in linear regress...
We propose a new test, based on model selection methods, for testing that the expectation of a Gauss...
Forward Stepwise Selection is a widely used model selection algorithm. It is, however, hard to do in...