The information bottleneck framework provides a systematic approach to learning representations that compress nuisance information in the input and extract semantically meaningful information for prediction. However, the choice of a prior distribution that fixes the dimensionality across all data points can restrict the flexibility of this approach for learning robust representations. We present a novel sparsity-inducing spike-and-slab categorical prior that uses sparsity to allow each data point to learn its own dimension distribution. In addition, it provides a mechanism for learning a joint distribution of the latent variable and the sparsity, and hence can account for the complete uncertainty in t...
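To make the spike-and-slab idea concrete, the following is a minimal sketch (in PyTorch, which the abstract does not specify) of a variational encoder in which a per-dimension Bernoulli "spike" gates a Gaussian "slab", so each input can activate its own subset of latent dimensions. The class and hyperparameter names (SpikeSlabEncoder, prior_pi, temperature) are illustrative assumptions, not the construction from the cited work.

```python
# Hedged sketch: a spike-and-slab style variational encoder in which each
# latent dimension is gated by a Bernoulli "spike", so different inputs can
# activate different numbers of dimensions. Names and hyperparameters are
# illustrative assumptions, not the method of the cited paper.
import torch
import torch.nn as nn

class SpikeSlabEncoder(nn.Module):
    def __init__(self, in_dim, latent_dim, prior_pi=0.2, temperature=0.5):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)             # slab mean
        self.logvar = nn.Linear(256, latent_dim)         # slab log-variance
        self.spike_logits = nn.Linear(256, latent_dim)   # gate (spike) logits
        self.prior_pi = prior_pi                         # prior inclusion probability
        self.temperature = temperature                   # relaxation temperature

    def forward(self, x):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        logits = self.spike_logits(h)

        # Reparameterised slab sample and relaxed-Bernoulli gate sample.
        z_slab = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        gate = torch.distributions.RelaxedBernoulli(
            self.temperature, logits=logits).rsample()
        z = gate * z_slab                                # sparse latent code

        # KL of the Gaussian slab against N(0, I), weighted by the gate
        # probability, plus KL of the Bernoulli gate against its prior.
        pi = torch.sigmoid(logits)
        kl_slab = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar)
        kl_spike = (pi * torch.log(pi / self.prior_pi + 1e-8)
                    + (1 - pi) * torch.log((1 - pi) / (1 - self.prior_pi) + 1e-8))
        kl = (pi * kl_slab + kl_spike).sum(dim=-1)
        return z, kl

# Usage: the KL term would be added to a task loss, as in a variational
# information bottleneck objective.
enc = SpikeSlabEncoder(in_dim=784, latent_dim=32)
z, kl = enc(torch.randn(8, 784))
print(z.shape, kl.shape)  # torch.Size([8, 32]) torch.Size([8])
```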
We propose a novel classification technique whose aim is to select an appropri...
Sparse modeling for signal processing and machine learning has been the focus of scientific resea...
The use of L1 regularisation for sparse learning has generated immense research interest, with man...
Structuring the latent space in probabilistic deep generative models, e.g., va...
Sparsity plays an essential role in a number of modern algorithms. This thesis examines how we can i...
The articles in this special section focus on learning adaptive models. Over the past few years, spa...
The use of L1 regularisation for sparse learning has generated immense research interest, with succe...
Deep latent variable models are powerful tools for representation learning. In this paper, we adopt ...
Identifying meaningful and independent factors of variation in a dataset is a challenging learning t...
Encoding domain knowledge into the prior over the high-dimensional weight space of a neural network ...
In the context of statistical machine learning, sparse learning is a procedure that seeks a reconcil...
Structured sparsity has recently emerged in statistics, machine learning and s...
The rapid development of modern information technology has significantly facilitated the generation,...
Learning sparsity patterns in high dimensions is a great challenge in both implementation and theory. ...
This thesis deals with the difficulties in classification problems caused by three types of sparsity...