We consider a general non-linear model where the signal is a finite mixture of an unknown, possibly increasing, number of features issued from a continuous dictionary parameterized by a real non-linear parameter. The signal is observed with Gaussian (possibly correlated) noise in either a continuous or a discrete setup. We propose an off-the-grid optimization method, that is, a method which does not use any discretization scheme on the parameter space, to estimate both the non-linear parameters of the features and the linear parameters of the mixture. We use recent results on the geometry of off-the-grid methods to give a minimal separation condition on the true underlying non-linear parameters such that interpolating certificate functions can be constr...
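For concreteness, here is a minimal sketch of the kind of observation model described above, written with assumed notation that is not taken from the source (features \varphi_\theta drawn from the continuous dictionary, linear weights a_k, non-linear parameters \theta_k, noise w):

\[
  y(t) \;=\; \sum_{k=1}^{K} a_k \, \varphi_{\theta_k}(t) \;+\; w(t),
  \qquad a_k \in \mathbb{R}, \quad \theta_k \in \Theta,
\]

where the number K of active features, the weights a_k, and the parameters \theta_k are all unknown, w is (possibly correlated) Gaussian noise, and estimation is carried out directly over the continuous parameter space \Theta rather than over a discretized grid of candidate \theta values.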
The main focus of this work is the estimation of a complex valued signal assumed to have a sparse ...
In the last decade, machine learning algorithms have been substantially developed and they have gain...
We consider machine learning techniques to develop low-latency approximate solutions for a class of ...
We consider a general non-linear model where the signal is a finite mixture of an unknown, possibly ...
In this paper we observe a set, possibly a continuum, of signals corrupted by noise. Each signal is ...
We consider a model where a signal (discrete or continuous) is observed with an additive Gaussian no...
This thesis is motivated by the perspective of connecting compressed sensing and machine learning, a...
We address the problem of learning a Gaussian mixture from a set of noisy data points. Each input po...
An algorithm for learning an overcomplete dictionary using a Cauchy mixture model for sparse decompo...
In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-i...
We propose two novel approaches for the recovery of an (approximately) sparse signal from n...
We consider the estimation of an independent and identically distributed (i.i.d.) (possibly non-Gaus...
In this work we are interested in the problems of supervised learning and variable selection when th...
This paper addresses the problem of identifying a lower dimensional space where observed data can be...
In this paper, we develop a Bayesian evidence maximization framework to solve the sparse non-negativ...