Abstract. Regularized logistic regression is a very useful classification method, but for large-scale data, its distributed training has not been investigated much. In this work, we propose a distributed Newton method for training logistic regression. We discuss several techniques for reducing the communication cost and speeding up the computation. Experiments show that the proposed method is competitive with or even faster than state-of-the-art approaches such as the Alternating Direction Method of Multipliers (ADMM) and Vowpal Wabbit (VW). We have released an MPI-based implementation for public use.
Due to the rapid growth of data and computational resources, distributed optimization has become an ...
Abstract—Logistic regression and linear SVM are useful methods for large-scale classification. Howev...
The recent years have witnessed advances in parallel algorithms for large scale optimization problem...
Regularized logistic regression is a very successful classification method, but for large-scale dat...
Solving logistic regression with L1-regularization in distributed settings is an important problem....
The presented work studies an application of a technique known as a semismooth Newton (SSN) method t...
Regularized Multinomial Logistic regression has emerged as one of the most common methods for perfor...
Large-scale logistic regression arises in many applications such as document classification and natu...
Recently, Yuan et al. (2010) conducted a comprehensive comparison on software for L1-regularized cla...
Considering two-class classification, this paper aims to perform further study on the success of Tru...
Sparse logistic regression has been developed tremendously in recent two decades, from its originati...
Abstract—Shared-memory systems such as regular desktops now possess enough memory to store large dat...
In this paper, we study large-scale convex optimization algorithms based on th...
Multiclass logistic regression (MLR) is a fundamental machine learning model to do multiclass cla...
We propose a new distributed algorithm for empirical risk minimization in machine learning. The alg...