The learning theory of distributed algorithms has recently attracted enormous attention in the machine learning community. However, most existing works focus on learning problems with pointwise losses and do not consider communication among local processors. In this paper, we propose a new distributed pairwise ranking algorithm with communication (called DLSRank-C) based on the Newton-Raphson iteration, and establish a learning rate analysis for it in probability. Theoretical and empirical assessments demonstrate the effectiveness of DLSRank-C under mild conditions.
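The abstract does not spell out the update rule, so the following is a minimal sketch of what a one-round, Newton-type distributed pairwise least-squares ranking step could look like. Everything beyond the abstract is an assumption: the function names (local_pairwise_stats, dlsrank_c_sketch), the regularized pairwise least-squares objective, the gradient/Hessian-averaging communication scheme, and the restriction to pairs formed within each local block are all illustrative, not the authors' DLSRank-C.

```python
import numpy as np

def local_pairwise_stats(X, y, lam):
    """Hessian and gradient (at w = 0) of a regularized pairwise
    least-squares ranking objective on one local block:
        sum_{i<j} ((y_i - y_j) - w^T (x_i - x_j))^2 + lam * ||w||^2.
    Cross-block pairs are ignored here -- a simplifying assumption."""
    n, d = X.shape
    # For least squares, the sums over all pairs reduce to centered
    # second moments, avoiding an explicit O(n^2) loop:
    #   sum_{i<j} (x_i - x_j)(x_i - x_j)^T = n * Xc^T Xc.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    H = 2.0 * n * (Xc.T @ Xc) + lam * np.eye(d)  # local Hessian
    g = 2.0 * n * (Xc.T @ yc)                    # minus the local gradient at w = 0
    return H, g

def dlsrank_c_sketch(blocks, lam=1e-2):
    """One communication round: each machine sends its local Hessian and
    gradient, the coordinator averages them and takes a single
    Newton-Raphson step, which is exact for this quadratic loss."""
    stats = [local_pairwise_stats(X, y, lam) for X, y in blocks]
    H = sum(H_k for H_k, _ in stats) / len(stats)
    g = sum(g_k for _, g_k in stats) / len(stats)
    return np.linalg.solve(H, g)  # Newton update from w = 0: w = H^{-1} g
```

Because the assumed objective is quadratic, a single Newton-Raphson step solves each local subproblem exactly, so one round of Hessian/gradient communication suffices in this sketch; for non-quadratic ranking losses the same scheme would be iterated.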