Given the "right" representation, learning is easy. This thesis studies representation learning and meta-learning, with a special focus on sparse representations. Meta-learning is fundamental to machine learning: it amounts to learning how to learn. The presentation unfolds in two parts. In the first part, we establish learning-theoretic results for learning sparse representations. The second part introduces new multi-task and meta-learning paradigms for representation learning. On the sparse representations front, our main pursuit is generalization error bounds supporting a supervised dictionary learning model for Lasso-style sparse coding. Such predictive sparse coding algorithms have been applied with much success in the literature...
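As a concrete reference point for the Lasso-style sparse coding discussed above, here is a minimal sketch of the inner coding step: given a fixed dictionary D, the sparse code of a signal x solves the Lasso problem min_z 0.5·||x − Dz||² + λ·||z||₁. The sketch below solves it with ISTA (iterative soft-thresholding); the function names and parameter values are illustrative, not taken from the thesis.

```python
import numpy as np

def soft_threshold(v, t):
    # Element-wise soft-thresholding: the proximal operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_sparse_code(x, D, lam=0.1, n_iter=200):
    """Sparse-code x over dictionary D by solving the Lasso
    min_z 0.5*||x - D z||^2 + lam*||z||_1 with ISTA."""
    # Step size 1/L, where L is the Lipschitz constant of the smooth part.
    L = np.linalg.norm(D, 2) ** 2
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)          # gradient of the quadratic term
        z = soft_threshold(z - grad / L, lam / L)
    return z

# Toy usage: recover a 3-sparse code from a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
z_true = np.zeros(50)
z_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
x = D @ z_true
z = lasso_sparse_code(x, D, lam=0.05)
print(np.count_nonzero(np.abs(z) > 1e-3))  # only a few active coefficients
```

In the supervised (predictive) setting studied in the thesis, this coding map z(x; D) becomes the representation fed to a downstream predictor, and D is trained against the supervised loss rather than pure reconstruction.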
This dissertation focuses on sparse representation and dictionary learning, with three related topics...
Due to the capability of effectively learning intrinsic structures from high-dimensional data, techn...
Standard machine learning approaches thrive on learning from huge amounts of labeled training data, ...
We investigate the use of sparse coding and dictionary learning in the context of multi-task and tra...
In this paper, we review the recent advances in meta-learning theory and show ...
Sparse coding is a crucial subroutine in algorithms for various signal processing, deep learning, an...
In this paper, we consider the framework of multi-task representation (MTR) learning where the goal ...
Progress in Machine Learning is being driven by continued growth in model size, training data and al...
Sparsity plays an essential role in a number of modern algorithms. This thesis examines how we can i...
Motivated by recent developments on meta-learning with linear contextual bandit tasks, we study the ...
Finding neural network weights that generalize well from small datasets is difficult. A promising ap...
A natural progression in machine learning research is to automate and learn from data increasingly m...
We investigate sparse representations for control in reinforcement learning. While these representat...