We develop a new distributed algorithm to solve the ridge regression problem with feature partitioning of the observation matrix. The proposed algorithm, named D-Ridge, is based on the alternating direction method of multipliers (ADMM) and estimates the parameters when the observation matrix is distributed among different agents with feature (or vertical) partitioning. We formulate the associated ridge regression problem as a distributed convex optimization problem and utilize the ADMM to obtain an iterative solution. Numerical results demonstrate that D-Ridge converges faster than its diffusion-based counterpart.
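The abstract above does not spell out the D-Ridge updates, but feature-partitioned ridge regression fits the standard ADMM "sharing" template, where each agent holds a block of features and the agents couple only through their summed predictions. A minimal sketch of that generic formulation (not necessarily the exact D-Ridge iteration; all names, sizes, and the penalty parameter `rho` are illustrative assumptions):

```python
import numpy as np

# Hedged sketch: generic ADMM "sharing" iteration for ridge regression with
# feature (vertical) partitioning across N agents. This follows the standard
# sharing template, not necessarily the exact D-Ridge updates.

rng = np.random.default_rng(0)
n, d, lam, rho, N = 40, 8, 1.0, 1.0, 2   # N agents, each holding d // N features
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
blocks = np.split(np.arange(d), N)       # feature partition among agents

w = [np.zeros(len(b)) for b in blocks]   # each agent's local coefficients
p_bar = np.zeros(n)                      # average of local predictions X_i w_i
z_bar = np.zeros(n)                      # shared ADMM variable
u = np.zeros(n)                          # scaled dual variable

for _ in range(2000):
    # Local step: each agent solves a small ridge-penalized least-squares
    # problem using only its own feature block.
    for i, b in enumerate(blocks):
        Xi = X[:, b]
        v = Xi @ w[i] - p_bar + z_bar - u
        w[i] = np.linalg.solve(lam * np.eye(len(b)) + rho * Xi.T @ Xi,
                               rho * Xi.T @ v)
    # Aggregation step: average the local predictions.
    p_bar = sum(X[:, b] @ w[i] for i, b in enumerate(blocks)) / N
    # Closed-form update for the squared-loss coupling term.
    z_bar = (y + rho * (p_bar + u)) / (N + rho)
    u = u + p_bar - z_bar

w_admm = np.concatenate(w)
# Centralized closed-form ridge solution, for comparison.
w_star = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

At a fixed point, p_bar equals z_bar and the local updates reduce to the ridge stationarity condition, so the iterates converge to the centralized ridge solution; only the n-dimensional vectors p_bar, z_bar, and u need to be exchanged between agents, never the raw feature blocks.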
Fitting statistical models is computationally challenging when the sample size or the dimension o...
In a distributed optimization problem, the complete problem information is not available at a single...
The presented work studies an application of a technique known as a semismooth Newton (SSN) method t...
We study the distributed machine learning problem where the n feature-response pairs are partitioned...
We propose LOCO, a distributed algorithm which solves large-scale ridge regression. LOCO randomly a...
Many problems of recent interest in statistics and machine learning can be posed in the framework of...
Summary: Lately, in engineering it has been necessary to develop algorithms that handle “big d...
In this paper we study a dual version of the Ridge Regression procedure. It allows us to perform non...
We develop a new algorithm for distributed learning with non-smooth regularizers and feature partiti...
This paper describes a general purpose method for solving convex optimization problems in a distribu...
This paper introduces a dual-regularized ADMM approach to distributed, time-varying optimization. Th...
We propose a new distributed algorithm based on alternating direction method of multipliers (ADMM) t...
We propose a fast algorithm for ridge regression when the number of features is much larger than the...
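When the number of features d greatly exceeds the number of samples n, as in the abstract above, a standard trick is the dual identity (XᵀX + λI_d)⁻¹Xᵀy = Xᵀ(XXᵀ + λI_n)⁻¹y, which replaces a d×d solve with an n×n one. A small sketch of that well-known identity (the specific sizes and seed are illustrative only, and this is not claimed to be the paper's algorithm):

```python
import numpy as np

# Hedged sketch: dual (kernel-style) solve for ridge regression when d >> n.
# The identity (X'X + lam*I_d)^{-1} X' y == X' (XX' + lam*I_n)^{-1} y lets us
# solve an n-by-n linear system instead of a d-by-d one.

rng = np.random.default_rng(1)
n, d, lam = 20, 500, 0.5                 # many more features than samples
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Primal route: d-by-d system (expensive when d is large).
w_primal = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Dual route: n-by-n system, then map back to feature space.
alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
w_dual = X.T @ alpha
```

The dual route costs O(n²d + n³) instead of O(nd² + d³), which is the usual reason such algorithms are fast in the d ≫ n regime.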
The ridge regression method is an improved method when the assumptions of independence of the explanator...