Sparse solutions to the linear inverse problem Ax = y and the determination of an environmentally adapted overcomplete dictionary (the columns of A) depend upon the choice of a “regularizing function” d(x) in several recently proposed procedures. We discuss the interpretation of d(x) within a Bayesian framework, and the desirable properties that “good” (i.e., sparsity-ensuring) regularizing functions d(x) might have. These properties are: Schur-concavity (d(x) is consistent with majorization); concavity (d(x) has sparse minima); parameterizability (d(x) is drawn from a large, parameterizable class); and factorizability of the gradient of d(x) in a certain manner. The last property (which naturally leads one to consider separable reg...
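The abstract above notes that concave (and l1-type) regularizers d(x) yield sparse minima for the underdetermined system Ax = y. As an illustrative sketch (not the paper's own algorithm), the snippet below uses ISTA (proximal gradient with soft thresholding) to minimize ||Ax - y||²/2 + λ||x||₁ over an overcomplete dictionary, and compares the number of nonzeros against the dense minimum-norm (l2) solution; all dimensions and the value λ = 0.5 are arbitrary choices for the demo.

```python
import numpy as np

def ista(A, y, lam, steps=3000):
    """ISTA for min 0.5 * ||A x - y||^2 + lam * ||x||_1 (illustrative sketch)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold (prox of l1)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))              # overcomplete dictionary: 50 atoms, 20 measurements
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [1.5, -2.0, 1.0]         # a 3-sparse ground truth
y = A @ x_true

x_l1 = ista(A, y, lam=0.5)
x_l2 = A.T @ np.linalg.solve(A @ A.T, y)       # minimum-l2-norm solution, dense in general

nnz_l1 = int((np.abs(x_l1) > 1e-2).sum())
nnz_l2 = int((np.abs(x_l2) > 1e-2).sum())
print(nnz_l1, nnz_l2)                          # the l1 minimum has far fewer nonzeros
```

The soft-thresholding step is what produces exact zeros, which is the computational face of the "concavity implies sparse minima" property discussed in the abstract.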
In order to find sparse approximations of signals, an appropriate generative model for the signal cl...
We formulate the sparse classification problem of n samples with p features as a binary convex optim...
In a series of recent results, several authors have shown that both l¹-minimization (Basis Pursuit) ...
Measures for sparse best-basis selection are analyzed and shown to fit into a general framework base...
Regularization techniques are widely employed in optimization-based approaches for solving ill-posed...
Sparsity plays a key role in machine learning for several reasons including interpretability. Interp...
Huber's criterion can be used for robust joint estimation of regression and scale parameters in the ...
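The abstract above concerns Huber's criterion for robust regression. For reference, Huber's rho function is quadratic for small residuals and linear for large ones, which bounds the influence of outliers; the sketch below implements it directly (the tuning constant c = 1.345 is the commonly used 95%-efficiency choice, not a value taken from this paper).

```python
import numpy as np

def huber(r, c=1.345):
    """Huber's rho: 0.5*r^2 for |r| <= c, and c*|r| - 0.5*c^2 otherwise."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r**2, c * a - 0.5 * c**2)

r = np.array([0.1, 1.0, 10.0])
print(huber(r))   # small residuals penalized quadratically, large ones only linearly
```

Because the tails grow linearly rather than quadratically, a single gross outlier contributes far less to the fitted criterion than it would under least squares.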
We develop an improved algorithm for solving blind sparse linear inverse problems where both the dic...
This paper investigates the theoretical guarantees of L1-analysis regularizati...
Analysis sparsity is a common prior in inverse problems or machine learning, including special cases s...
Many problems in signal processing and statistical inference are based on finding a sparse solution ...
Sparse principal component analysis has been a very active research area over the last decade. It produces c...
Many practical methods for finding maximally sparse coefficient expansions involve solving ...
In the paper we propose a new type of regularization procedure for training sparse Bayesian methods ...
The pioneering work on parameter orthogonalization by Cox and Reid (1987) is presented as an inducem...