For the Gaussian sequence model, we obtain non-asymptotic minimax rates of estimation of the linear, quadratic and $\ell_2$-norm functionals on classes of sparse vectors and construct optimal estimators that attain these rates. The main object of interest is the class $B_0(s)$ of $s$-sparse vectors $\theta = (\theta_1, \dots, \theta_d)$, for which we also provide completely adaptive estimators (independent of $s$ and of the noise variance $\sigma$) having logarithmically slower rates than the minimax ones. Furthermore, we obtain the minimax rates on the $\ell_q$-balls $B_q(r) = \{\theta \in \mathbb{R}^d : \|\theta\|_q \le r\}$ where $0 < q \le 2$, and $\|\theta\|_q = \big(\sum_{i=1}^d |\theta_i|^q\big)^{1/q}$. This analysis shows that there are, in general, three zones in the rates of convergence that we call the sparse zone, the dense zone and the...
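To make the setting concrete, here is a minimal Python sketch of the Gaussian sequence model $y_i = \theta_i + \sigma \xi_i$ with an $s$-sparse mean in $B_0(s)$: it evaluates the linear, quadratic and $\ell_2$-norm functionals and forms a naive hard-thresholding plug-in estimate of the linear functional. The threshold $\sigma\sqrt{2\log d}$, the parameter values and all variable names are illustrative assumptions, not the minimax-optimal constructions of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    d, s, sigma = 1000, 10, 1.0              # dimension, sparsity, noise level (illustrative)

    # s-sparse mean vector theta in B_0(s)
    theta = np.zeros(d)
    theta[:s] = 3.0

    # Gaussian sequence model: y_i = theta_i + sigma * xi_i, with xi_i i.i.d. N(0, 1)
    y = theta + sigma * rng.standard_normal(d)

    # The three functionals discussed in the abstract
    linear = theta.sum()                     # L(theta) = sum_i theta_i
    quadratic = np.sum(theta ** 2)           # Q(theta) = sum_i theta_i^2
    l2_norm = np.sqrt(quadratic)             # ||theta||_2

    # Naive hard-thresholding plug-in estimate of the linear functional
    # (baseline only; the paper constructs different, rate-optimal estimators)
    t = sigma * np.sqrt(2.0 * np.log(d))     # universal threshold, an assumption here
    linear_hat = y[np.abs(y) > t].sum()

    print(linear, linear_hat)

With these illustrative values the nonzero coordinates sit well above the noise level, so even this naive plug-in does reasonably; the minimax analysis in the abstract concerns what is achievable uniformly over $B_0(s)$ and the $\ell_q$-balls.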
We consider two problems of estimation in high-dimensional Gaussian models. Th...
In this thesis we study adaptive methods of estimation for two particular types of statistical prob...
We derive nonasymptotic bounds for the minimax risk of variable selection under expected Hamming los...
We consider the problem of estimation of a linear functional in the Gaussian s...
In this paper, we observe a sparse mean vector through Gaussian noise and we aim at estimating some ...
We consider the problem of testing the hypothesis that the parameter of the linear regression model is 0...
We study the problem of estimation of the value $N_\gamma(\theta) = \sum_{i=1}^d |\theta_i|^\gamma$ for $0 < \gamma$...
We construct a classifier which attains the rate of convergence $\log n/n$ under sparsity and margin...
Given a heterogeneous Gaussian sequence model with unknown mean $\theta \in \mathbb R^d$ and known c...
High-dimensional statistical tests often ignore correlations to gain simplicity and stability leadin...
We consider estimating the predictive density under Kullback-Leibler loss in an ℓ0 sparse Gaussian s...
We consider the estimation of quadratic functionals in a Gaussian sequence model where the eigenvalu...
Consider the standard Gaussian linear regression model $Y = X\theta_0 + \varepsilon$, where $Y$ is an eleme...
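For the regression entry above, a minimal simulation of the model $Y = X\theta_0 + \varepsilon$ with an $s$-sparse $\theta_0$ is sketched below, together with a naive chi-square detection statistic for the null hypothesis $\theta_0 = 0$. The test ignores sparsity, assumes standard Gaussian noise and design, and all names and parameter values are assumptions made for illustration; it is not one of the procedures studied in the papers listed here.

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(1)
    n, d, s = 200, 500, 5                    # sample size, dimension, sparsity (illustrative)

    X = rng.standard_normal((n, d))          # Gaussian design matrix
    theta0 = np.zeros(d)
    theta0[:s] = 0.5                         # s-sparse regression vector
    Y = X @ theta0 + rng.standard_normal(n)  # model: Y = X theta_0 + epsilon

    # Naive global test of H0: theta_0 = 0.  Under H0, Y is standard Gaussian noise,
    # so ||Y||^2 ~ chi^2_n; reject when it exceeds the 95% chi^2_n quantile.
    stat = np.sum(Y ** 2)
    reject = stat > chi2.ppf(0.95, df=n)
    print(stat, reject)

Such a statistic is only a simple baseline that ignores the structure of $\theta_0$; the works summarised above study how much better one can do by exploiting sparsity.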