We propose a novel general algorithm, LHAC, that efficiently uses second-order information to solve a class of large-scale ℓ1-regularized problems. Our method executes cheap iterations while achieving a fast local convergence rate by exploiting the special structure of a low-rank matrix constructed via a quasi-Newton approximation of the Hessian of the smooth loss function. A greedy active-set strategy, based on the largest violations of the dual constraints, maintains a working set that iteratively estimates the complement of the optimal active set. This keeps the subproblems small and eventually identifies the optimal active set. Empirical comparisons confirm that LHAC is highly competitive with several recently proposed methods.
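To make the above concrete, here is a much-simplified sketch of an LHAC-style proximal quasi-Newton iteration in Python. The function name, memory size, working-set budget, and line-search constants are illustrative assumptions rather than the paper's implementation; what the sketch does follow from the description above is the compact low-rank L-BFGS model and the working set chosen by the largest dual-constraint violations.

```python
import numpy as np

def soft(u, t):
    """Scalar soft-thresholding: argmin_w 0.5*(w - u)^2 + t*|w|."""
    return np.sign(u) * max(abs(u) - t, 0.0)

def lhac_sketch(f_and_grad, x0, lam, mem=8, max_iter=50,
                cd_passes=5, extra=100, tol=1e-5):
    """Toy LHAC-style solver for min_x f(x) + lam*||x||_1 (illustrative only)."""
    x = np.asarray(x0, dtype=float).copy()
    fx, g = f_and_grad(x)
    S, Y = [], []                                  # stored curvature pairs
    n = x.size
    for _ in range(max_iter):
        # Optimality ("dual constraint") violations at the current point.
        viol = np.where(x != 0, np.abs(g + lam * np.sign(x)),
                        np.maximum(np.abs(g) - lam, 0.0))
        if viol.max() <= tol:
            break
        # Greedy working set: all nonzeros plus the most-violated zeros.
        zeros = np.flatnonzero(x == 0)
        top = zeros[np.argsort(viol[zeros])[-extra:]]
        A = np.union1d(np.flatnonzero(x != 0), top[viol[top] > tol])
        # Compact limited-memory BFGS model: B = gamma*I - U W^{-1} U^T.
        if S:
            Sm, Ym = np.array(S).T, np.array(Y).T
            gamma = (Y[-1] @ Y[-1]) / (S[-1] @ Y[-1])
            U = np.hstack([gamma * Sm, Ym])
            StY = Sm.T @ Ym
            W = np.block([[gamma * (Sm.T @ Sm), np.tril(StY, -1)],
                          [np.tril(StY, -1).T, -np.diag(np.diag(StY))]])
            Winv = np.linalg.inv(W)
        else:
            gamma, U, Winv = 1.0, np.zeros((n, 0)), np.zeros((0, 0))
        # Coordinate descent on the piecewise-quadratic model over A only.
        d, t = np.zeros(n), np.zeros(U.shape[1])   # t tracks U^T d
        for _ in range(cd_passes):
            for i in A:
                Bii = gamma - U[i] @ Winv @ U[i]
                grad_i = g[i] + gamma * d[i] - U[i] @ (Winv @ t)
                z = soft(x[i] + d[i] - grad_i / Bii, lam / Bii) - x[i] - d[i]
                d[i] += z
                t += z * U[i]
        # Backtracking line search on F(x) = f(x) + lam*||x||_1.
        delta = g @ d + lam * (np.abs(x + d).sum() - np.abs(x).sum())
        F0, step = fx + lam * np.abs(x).sum(), 1.0
        while step > 1e-10:
            xn = x + step * d
            fn, gn = f_and_grad(xn)
            if fn + lam * np.abs(xn).sum() <= F0 + 1e-4 * step * delta:
                break
            step *= 0.5
        s, y = xn - x, gn - g
        if s @ y > 1e-10:                          # keep pair only if curvature is positive
            S.append(s); Y.append(y)
            S, Y = S[-mem:], Y[-mem:]
        x, fx, g = xn, fn, gn
    return x
```

For a lasso instance one could pass, for given A and b, f_and_grad = lambda x: (0.5 * np.sum((A @ x - b) ** 2), A.T @ (A @ x - b)).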
Standard regularization methods that are used to compute solutions to ill-posed inverse problems req...
Regularization techniques are widely employed in optimization-based approaches for solving ill-posed...
We consider solving minimization problems with ℓ1-regularization: min_x ‖x‖_1 + μ f(x), particularl...
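Written out, the formulation this snippet refers to is the following, with the lasso shown as one standard instance of the smooth term (an illustrative choice here, not specified by the snippet):

```latex
\min_{x \in \mathbb{R}^n} \; \|x\|_1 + \mu f(x),
\qquad \text{e.g.}\quad f(x) = \tfrac{1}{2}\|Ax - b\|_2^2 .
```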
Advances of modern science and engineering lead to unpreceden...
Recently, Yuan et al. (2010) conducted a comprehensive comparison of software for L1-regularized cla...
We present a second-order algorithm for solving optimization problems involving the sparsi...
Sparse learning models typically combine a smooth loss with a nonsmooth penalty, such as trace norm....
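As a concrete instance of handling such a nonsmooth penalty, the proximal operator of the trace (nuclear) norm soft-thresholds the singular values; a minimal sketch (the helper name is ours):

```python
import numpy as np

def prox_trace_norm(Z, t):
    """Prox of t*||.||_*: shrink singular values toward zero by t."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt
```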
In this paper, we revisit the classical technique of Regularized Least Squares (RLS) for the class...
Large-scale ℓ1-regularized loss minimization problems arise in high-dimensional applications such as...
The ℓ1-regularized Gaussian maximum likelihood estimator (MLE) has been shown to have strong statist...
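For reference, this estimator (commonly called the graphical lasso) recovers a sparse inverse covariance matrix Θ from the sample covariance S by solving

```latex
\widehat{\Theta} \;=\; \operatorname*{arg\,min}_{\Theta \succ 0}
\; -\log\det\Theta + \operatorname{tr}(S\Theta) + \lambda \|\Theta\|_1 .
```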
Sparse representation and low-rank approximation are fundamental tools in the fields of signal processin...
The use of non-convex sparse regularization has attracted much interest when estimating a very spars...
L1-regularized models are widely used for sparse regression or classification tasks. In this pap...
In this paper, we study large-scale convex optimization algorithms based on th...
We propose a computational framework named iterative local adaptive majorize-minimization (I-LAMM) t...
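The basic majorize-minimization step that such frameworks build on replaces f with an isotropic quadratic upper bound at the current iterate, so each update reduces to soft-thresholding (a generic sketch of the mechanism, not I-LAMM's exact update):

```latex
x^{k+1} = \operatorname*{arg\,min}_{x}\;
\Big\{ \nabla f(x^k)^\top (x - x^k) + \tfrac{\phi_k}{2}\|x - x^k\|_2^2
       + \lambda \|x\|_1 \Big\}
      = \mathcal{S}_{\lambda/\phi_k}\!\big(x^k - \nabla f(x^k)/\phi_k\big),
```

where \mathcal{S}_t denotes componentwise soft-thresholding with threshold t.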