We consider the problem of minimizing block-separable (non-smooth) convex functions subject to linear constraints. While the Alternating Direction Method of Multipliers (ADMM) for two-block linear constraints has been studied intensively, both theoretically and empirically, effective generalizations of ADMM to multiple blocks remain unclear despite some preliminary work. In this paper, we propose a parallel randomized block coordinate method named the Parallel Direction Method of Multipliers (PDMM) to solve optimization problems with multi-block linear constraints. At each iteration, PDMM randomly updates some blocks in parallel, behaving like parallel randomized block coordinate descent. We establish the global convergence and the iteration complexity of PDMM.
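A minimal sketch of the randomized parallel block update described in this abstract may help fix ideas. Everything concrete below is an assumption made for illustration: the quadratic blocks f_j(x_j) = 0.5*||x_j - b_j||^2, the penalty rho, the damped dual step tau, and the number K of blocks drawn per round. It is a Jacobi-style randomized scheme in the spirit of the abstract, not the authors' exact PDMM update rules.

import numpy as np

# Hedged sketch (not the authors' exact PDMM rules): a Jacobi-style randomized
# parallel block update for
#     min_x  sum_j f_j(x_j)   s.t.  sum_j A_j x_j = c,
# illustrated with quadratic blocks f_j(x_j) = 0.5 * ||x_j - b_j||^2.
# The sizes, penalty rho, dual damping tau, and K are all assumptions.

rng = np.random.default_rng(0)
J, d, m = 8, 5, 10                               # blocks, block dimension, constraint rows
A = [rng.standard_normal((m, d)) for _ in range(J)]
b = [rng.standard_normal(d) for _ in range(J)]
c = rng.standard_normal(m)

x = [np.zeros(d) for _ in range(J)]              # primal blocks x_1, ..., x_J
y = np.zeros(m)                                  # dual variable for the coupling constraint
rho, tau, K = 1.0, 0.1, 2                        # penalty, damped dual step, blocks per round

for t in range(2000):
    residual = sum(A[j] @ x[j] for j in range(J)) - c
    picked = rng.choice(J, size=K, replace=False)
    for j in picked:                             # each update uses only the old iterate -> parallelizable
        r_j = residual - A[j] @ x[j]             # constraint residual excluding block j
        # Minimize 0.5*||x_j - b_j||^2 + <y, A_j x_j> + (rho/2) * ||A_j x_j + r_j||^2
        H = np.eye(d) + rho * A[j].T @ A[j]
        g = b[j] - A[j].T @ (y + rho * r_j)
        x[j] = np.linalg.solve(H, g)
    residual = sum(A[j] @ x[j] for j in range(J)) - c
    y = y + tau * rho * residual                 # damped dual ascent step

print("constraint violation:", np.linalg.norm(residual))

Each selected block solve depends only on the previous iterate, so the K subproblems can run on separate workers; a small (damped) dual step of this kind is a common way to stabilize Jacobi-style multi-block schemes.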
In this paper we consider a block-structured convex optimization model, where in the objective the ...
In this work we propose a distributed randomized block coordinate descent method for minimizing a c...
This paper proposes a method for parallel block coordinate-wise minimization of convex functions. Ea...
We consider the problem of minimizing block-separable convex functions subject to linear constraint...
Abstract. The alternating direction method of multipliers (ADMM) is a benchmark for solving a linear...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/87...
Abstract. This paper introduces a parallel and distributed extension to the alternating direction m...
This work is motivated by a simple question: how to find a relatively good solution to a very large ...
Abstract Many problems in machine learning and other fields can be (re)formulated as linearly const...
In this paper we develop random block coordinate descent methods for minimizing large-scale linearl...
Many problems in machine learning and other fields can be (re)formulated as linearly constrained sep...
Abstract This paper introduces a symmetric version of the generalized alternating direction method o...
We describe how the powerful “Divide and Concur” algorithm for constraint satisfaction can be deri...
The Augmented Lagrangian Method (ALM) and Alternating Direction Method of Multipliers (ADMM) have been...