In this paper, we propose an iterative scheme for distributed Byzantine-resilient estimation of a gradient associated with a black-box model. Our algorithm is based on simultaneous perturbation, secure state estimation, and two-timescale stochastic approximations. We also demonstrate the performance of our algorithm through numerical experiments.
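The simultaneous-perturbation idea mentioned above can be sketched as follows: a black-box function is probed at two randomly perturbed points, and all coordinates of the gradient are estimated from that single pair of evaluations. This is a minimal illustrative sketch only; the paper's actual scheme additionally involves secure state estimation and two-timescale stochastic approximations, which are not reproduced here, and the function `spsa_gradient` and its parameters are hypothetical names chosen for the example.

```python
import random

def spsa_gradient(f, x, c=1e-3, rng=None):
    """One-shot simultaneous-perturbation (SPSA-style) gradient estimate
    of a black-box function f at the point x (a list of floats).

    c   : small perturbation magnitude.
    rng : optional random.Random instance for reproducibility.
    """
    rng = rng or random.Random()
    # Rademacher (+/-1) perturbation direction, one entry per coordinate.
    delta = [rng.choice((-1.0, 1.0)) for _ in x]
    x_plus = [xi + c * di for xi, di in zip(x, delta)]
    x_minus = [xi - c * di for xi, di in zip(x, delta)]
    # Central finite difference along the random direction.
    diff = (f(x_plus) - f(x_minus)) / (2.0 * c)
    # Divide elementwise by delta_i (equals multiplying, since delta_i = +/-1).
    return [diff / di for di in delta]
```

A single estimate is noisy but unbiased up to O(c^2) terms; averaging many independent estimates recovers the true gradient, which is why such estimators pair naturally with stochastic-approximation iterations.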
This paper studies a distributed policy gradient in collaborative multi-agent reinforcement learning...
The first part of this dissertation considers distributed learning problems over networked agents. T...
Privacy and Byzantine resilience (BR) are two crucial requirements of modern-day distributed machine...
We study distributed stochastic gradient (D-SG) method and its accelerated var...
For many data-intensive real-world applications, such as recognizing objects from images, detecting ...
Asynchronous distributed machine learning solutions have proven very effective so far, but always as...
We present AGGREGATHOR, a framework that implements state-of-the-art robust (Byzantine-resilient) di...
This work focuses on decentralized stochastic optimization in the presence of Byzantine attacks. Dur...
In this paper, we propose a class of robust stochastic subgradient methods for distributed learning ...
This paper studies the problem of distributed stochastic optimization in an adversarial setting wher...
This paper considers the Byzantine fault-tolerance problem in distributed stochastic gradient descen...
The problem of distributed optimization requires a group of networked agents to compute a parameter ...
This article considers solving an overdetermined system of linear equations in peer-to-peer multiage...
In this paper, we propose (i) a novel distributed algorithm for consensus optimization over networks...