In this paper, we consider a high-dimensional non-parametric regression model with fixed design and i.i.d. random errors. We propose a powerful estimator by exponential weighted aggregation (EWA) with a group-analysis sparsity-promoting prior on the weights. We prove that our estimator satisfies a sharp group-analysis sparse oracle inequality with a small remainder term, ensuring its good theoretical performance. We also propose a forward-backward proximal Langevin Monte-Carlo algorithm to sample from the target distribution (which is neither smooth nor log-concave) and derive its guarantees. In turn, this allows us to implement our estimator and validate it on numerical experiments.
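The forward-backward proximal Langevin scheme mentioned above can be sketched as follows: each iteration takes an explicit gradient step on the smooth part of the negative log-density, a proximal (backward) step on the non-smooth sparsity-promoting part, and injects Gaussian noise. The sketch below is a minimal illustration under assumed ingredients, not the paper's exact algorithm: the function names (`prox_group_l1`, `fb_langevin`), the block soft-thresholding prox standing in for the group-analysis prior, and the step-size handling are all illustrative choices.

```python
import numpy as np

def prox_group_l1(x, groups, tau):
    # Block soft-thresholding: proximal operator of tau * sum_g ||x_g||_2,
    # a standard stand-in for a group-sparsity-promoting penalty.
    out = x.copy()
    for g in groups:
        n = np.linalg.norm(x[g])
        out[g] = 0.0 if n <= tau else (1.0 - tau / n) * x[g]
    return out

def fb_langevin(grad_f, prox_g, x0, step, n_iter, rng):
    # Forward-backward Langevin iteration (illustrative form):
    #   x <- prox_g( x - step * grad_f(x) + sqrt(2 * step) * noise )
    # i.e. a forward gradient step on the smooth part, Gaussian noise,
    # then a backward proximal step on the non-smooth part.
    x = x0.copy()
    samples = []
    for _ in range(n_iter):
        noise = np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        x = prox_g(x - step * grad_f(x) + noise)
        samples.append(x.copy())
    return np.array(samples)

# Toy usage: Gaussian smooth part f(x) = ||x||^2 / 2 with a single group.
rng = np.random.default_rng(0)
groups = [slice(0, 2)]
chain = fb_langevin(
    grad_f=lambda z: z,
    prox_g=lambda z: prox_group_l1(z, groups, 0.01),
    x0=np.ones(2),
    step=0.1,
    n_iter=1000,
    rng=rng,
)
```

The proximal step is what lets the sampler handle a target that is neither smooth nor log-concave: the gradient is only ever evaluated on the smooth component, while the non-smooth prior enters solely through its (cheap, closed-form) proximal map.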