Regularized logistic regression is a very successful classification method, but for large-scale data its distributed training has not been investigated much. In this work, we propose a distributed Newton method for training logistic regression, and discuss several techniques for reducing the communication cost. Experiments show that the proposed method is faster than state-of-the-art approaches such as the alternating direction method of multipliers (ADMM).
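The abstract above names a distributed Newton method but gives no mechanics. As a rough illustration only (not the paper's implementation; the shard count, CG solver, and crude backtracking step are all assumptions), the sketch below simulates the usual pattern: each worker holds a shard of the rows and computes partial gradients and Hessian-vector products, and only d-dimensional vectors are aggregated (an allreduce on a real cluster), which is what keeps communication cheap.

```python
# Hedged sketch of a distributed Newton-CG step for L2-regularized
# logistic regression: f(w) = 0.5*||w||^2 + C * sum_i log(1 + exp(-y_i x_i^T w)).
# Workers are simulated by row shards; only vectors of length d are "communicated".
import numpy as np

rng = np.random.default_rng(0)
n, d, C = 200, 5, 1.0
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.where(X @ w_true + 0.1 * rng.normal(size=n) > 0, 1.0, -1.0)

shards = np.array_split(np.arange(n), 4)  # 4 simulated workers

def local_grad(w, idx):
    # worker's partial gradient of the loss term on its rows
    z = y[idx] * (X[idx] @ w)
    s = -y[idx] / (1.0 + np.exp(z))
    return C * (X[idx].T @ s)

def local_hess_vec(w, v, idx):
    # worker's partial Hessian-vector product: X^T D X v on its rows
    z = y[idx] * (X[idx] @ w)
    sig = 1.0 / (1.0 + np.exp(-z))
    D = sig * (1.0 - sig)
    return C * (X[idx].T @ (D * (X[idx] @ v)))

def loss(w):
    return 0.5 * (w @ w) + C * np.sum(np.log1p(np.exp(-y * (X @ w))))

def grad(w):
    return w + sum(local_grad(w, idx) for idx in shards)       # "allreduce"

def hess_vec(w, v):
    return v + sum(local_hess_vec(w, v, idx) for idx in shards)  # "allreduce"

def cg(w, g, iters=20):
    # conjugate gradient on H s = -g, using only Hessian-vector products
    s, r = np.zeros(d), -g
    p = r.copy()
    for _ in range(iters):
        rr = r @ r
        if rr < 1e-12:
            break
        Hp = hess_vec(w, p)
        alpha = rr / (p @ Hp)
        s += alpha * p
        r = r - alpha * Hp
        p = r + ((r @ r) / rr) * p
    return s

w = np.zeros(d)
for _ in range(5):
    s = cg(w, grad(w))
    t = 1.0
    while loss(w + t * s) > loss(w) and t > 1e-4:  # crude backtracking
        t *= 0.5
    w = w + t * s

print(f"final objective: {loss(w):.3f}")
```

Note the design point the abstract alludes to: per Newton iteration, each worker sends only O(d) numbers (its partial gradient and one partial Hessian-vector product per CG step), never its data rows.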
We propose a new distributed algorithm for empirical risk minimization in machine learning. The alg...
Due to the rapid growth of data and computational resources, distributed optimization has become an ...
We present a novel Newton-type method for distributed optimization, which is particularly well suite...
Regularized logistic regression is a very useful classification method, but for large-scal...
Solving logistic regression with L1-regularization in distributed settings is an important problem....
The presented work studies an application of a technique known as a semismooth Newton (SSN) method t...
Large-scale logistic regression arises in many applications such as document classification and natu...
Regularized Multinomial Logistic regression has emerged as one of the most common methods for per...
Sparse logistic regression has developed tremendously over the recent two decades, from its originati...
Recently, Yuan et al. (2010) conducted a comprehensive comparison on software for L1-regularized cla...
In this paper, we study large-scale convex optimization algorithms based on th...
Shared-memory systems such as regular desktops now possess enough memory to store large dat...
Considering two-class classification, this paper further studies the success of Tru...
Multiclass logistic regression (MLR) is a fundamental machine learning model to do multiclass cla...
Logistic regression is a well-known statistical model which is commonly used in the situation where ...
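Several of the snippets above concern L1-regularized logistic regression. As a hedged illustration of the standard building block for that problem class (a generic proximal-gradient/ISTA step, not any specific paper's algorithm; the data, step size, and regularization strength are assumptions), the soft-thresholding operator is what produces exact zeros in the solution:

```python
# Hedged sketch: proximal-gradient (ISTA) iteration for L1-regularized
# logistic regression: min_w (1/n) * sum_i log(1 + exp(-y_i x_i^T w)) + lam*||w||_1.
import numpy as np

def soft_threshold(v, t):
    # prox of t*||.||_1: shrink each coordinate toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_step(w, X, y, lam, eta):
    # gradient step on the smooth logistic loss, then the L1 proximal map
    s = -y / (1.0 + np.exp(y * (X @ w)))
    g = X.T @ s / len(y)
    return soft_threshold(w - eta * g, eta * lam)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
y = np.where(X[:, 0] - X[:, 1] > 0, 1.0, -1.0)  # only 2 informative features
w = np.zeros(8)
for _ in range(300):
    w = ista_step(w, X, y, lam=0.1, eta=0.5)
print("nonzero coordinates:", np.count_nonzero(w))
```

The soft-threshold sets any coordinate whose gradient magnitude stays below `lam` exactly to zero, which is why L1 regularization yields sparse, interpretable models on the irrelevant features here.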