We identify and prove a general principle: $L_1$ sparsity can be achieved using a redundant parametrization plus an $L_2$ penalty. Our results lead to a simple algorithm, \textit{spred}, that seamlessly integrates $L_1$ regularization into any modern deep learning framework. Practically, we (1) demonstrate the efficiency of \textit{spred} on conventional tasks such as the lasso and sparse coding, (2) benchmark the method on nonlinear feature selection across six gene-selection tasks, and (3) illustrate its use for achieving structured and unstructured sparsity in deep learning in an end-to-end manner. Conceptually, our result bridges the gap in understanding the inductive bias of the redundant parametrization common in deep learning.
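To make the principle above concrete, the following is a minimal sketch assuming a PyTorch setting; the function name \texttt{spred\_lasso}, the hyperparameters, and the synthetic data are our illustrative choices, not the paper's released code. The effective weight is the elementwise product $w = u \odot v$, and only a plain $L_2$ (weight-decay) penalty is applied to $u$ and $v$; since $u_j^2 + v_j^2 \ge 2|u_j v_j|$ with equality at the optimum, the penalty behaves like $2\lambda \|w\|_1$, so ordinary SGD drives spurious coordinates of $w$ toward zero.

```python
import torch

def spred_lasso(X, y, lam=0.1, lr=1e-2, steps=5000):
    """Lasso via a redundant parametrization w = u * v plus an L2 penalty.

    A hedged sketch of the idea in the abstract; names and defaults are
    illustrative assumptions, not the authors' reference implementation.
    """
    d = X.shape[1]
    u = torch.randn(d, requires_grad=True)
    v = torch.randn(d, requires_grad=True)
    opt = torch.optim.SGD([u, v], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        w = u * v                                       # redundant parametrization
        loss = ((X @ w - y) ** 2).mean() \
             + lam * (u.pow(2).sum() + v.pow(2).sum())  # plain L2 penalty
        loss.backward()
        opt.step()
    # At convergence, u * v approximates the lasso solution with an
    # effective L1 strength of 2 * lam.
    return (u * v).detach()

# Usage on a synthetic sparse-recovery problem (illustrative).
torch.manual_seed(0)
X = torch.randn(200, 50)
w_true = torch.zeros(50)
w_true[:5] = 1.0
y = X @ w_true + 0.01 * torch.randn(200)
w_hat = spred_lasso(X, y)
print((w_hat.abs() > 1e-3).sum())  # typically ~5 surviving coefficients
```

Note that no proximal step or soft-thresholding is required: plain SGD with weight decay on the redundant factors is what lets the method drop into any deep learning framework.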
The rapid development of modern information technology has significantly facilitated the generation,...
Optimization is a crucial scientific tool used throughout applied mathematics. In optimization one t...
The use of L1 regularisation for sparse learning has generated immense research interest, with man...
The growing energy and performance costs of deep learning have driven the community to reduce the si...
Model sparsification in deep learning promotes simpler, more interpretable models with fewer paramet...
Deep learning has been empirically successful in recent years thanks to the extremely over-parameter...
Sparsity is commonly produced from model compression (i.e., pruning), which eliminates unnecessary p...
The newly-emerging sparse representation-based classifier (SRC) shows great potential for pattern cl...
This paper introduces a novel approach for recovering sparse signals using sorted L1/L2 minimization...
Sparsity is a highly desired feature in deep neural networks (DNNs) since it ensures numerical effic...
In this paper we propose a general framework to characterize and solve the optimization problems und...
We propose a new greedy sparse approximation algorithm, called SLS for Single ...
Neural networks are becoming increasingly popular in applications, but our mathematical understandin...
The elements in the real world are often sparsely connected. For example, in a social network, each ...