Many problems in machine learning can be solved by rounding the solution of an appropriate linear program (LP). This paper shows that we can recover solutions of comparable quality by rounding an approximate LP solution instead of the exact one. These approximate LP solutions can be computed efficiently by applying a parallel stochastic-coordinate-descent method to a quadratic-penalty formulation of the LP. We derive worst-case runtime and solution quality guarantees of this scheme using novel perturbation and convergence analysis. Our experiments demonstrate that on such combinatorial problems as vertex cover, independent set and multiway-cut, our approximate rounding scheme is up to an order of magnitude faster than Cplex (a commercial...
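The abstract above outlines a concrete pipeline: replace the LP constraints with a quadratic penalty, run stochastic coordinate descent over the box [0, 1]^n, and round the approximate solution. Below is a minimal sketch of that idea for a covering LP (min c^T x s.t. Ax >= b, 0 <= x <= 1), not the paper's implementation; the penalty weight beta, the iteration count, and the relaxed rounding threshold are illustrative assumptions.

```python
import numpy as np

def penalized_cd(A, b, c, beta=10.0, n_iters=20000, seed=0):
    """Minimize c @ x + (beta/2) * ||max(0, b - A @ x)||^2 over the box [0,1]^n."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    r = b - A @ x                        # covering-constraint residual b - Ax
    lip = beta * (A ** 2).sum(axis=0)    # per-coordinate Lipschitz constants
    for _ in range(n_iters):
        j = rng.integers(n)              # sample a coordinate uniformly
        g = c[j] - beta * A[:, j] @ np.maximum(r, 0.0)  # partial derivative
        x_new = np.clip(x[j] - g / max(lip[j], 1e-12), 0.0, 1.0)
        r += A[:, j] * (x[j] - x_new)    # incremental residual update
        x[j] = x_new
    return x

# Toy example: vertex cover of a triangle, min sum(x) s.t. x_u + x_v >= 1 per edge.
edges = [(0, 1), (1, 2), (0, 2)]
A = np.zeros((len(edges), 3))
for i, (u, v) in enumerate(edges):
    A[i, u] = A[i, v] = 1.0
beta = 10.0
x = penalized_cd(A, b=np.ones(3), c=np.ones(3), beta=beta)
# The penalty permits O(1/beta) constraint violation, so the fractional optimum
# sits slightly below 0.5; relax the classic 0.5 rounding threshold to match.
cover = x >= 0.5 - 1.0 / (2 * beta)
print(x.round(3), cover)
```

Rounding the approximate solution at the relaxed threshold recovers the usual 2-approximation for vertex cover, which is the kind of "comparable quality from approximate solutions" guarantee the abstract claims.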
Discrete labeling problems are often solved by formulating them as an integer program, and relaxing ...
The aim of this thesis is to develop scalable numerical optimization methods that can be used to add...
When implementing the gradient descent method in low precision, the employment of stochastic roundin...
We give a general method for rounding linear programs that combines the commonly used iterated round...
We introduce a new technique called oblivious rounding, a variant of randomized rounding that avoids ...
Rounding linear programs using techniques from discrepancy is a recent approach that has been very s...
Integer programming (IP) is an important and challenging problem. Approximate methods have shown pro...
The perceptron algorithm, developed mainly in the machine learning literature, is a simple greedy me...
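The greedy method this abstract refers to is simple enough to state in a few lines. A minimal sketch of the classic mistake-driven perceptron rule (add y_i * x_i whenever point i is misclassified), included only to make the referenced method concrete; the epoch cap is an illustrative choice:

```python
import numpy as np

def perceptron(X, y, n_epochs=100):
    """Classic mistake-driven perceptron: w += y_i * x_i on each misclassified point."""
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:   # on the wrong side of (or on) the hyperplane
                w += yi * xi         # greedy corrective step
                mistakes += 1
        if mistakes == 0:            # data separated: done
            break
    return w

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
print(perceptron(X, y))
```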
This thesis discusses the application of optimizations to machine learning algorithms. In particular...
Several important NP-hard combinatorial optimization problems can be posed as packing/covering integ...
Packing and covering linear programs (LP) are an important class of problems that bridges computer s...
In this paper we propose new efficient gradient schemes for two non-trivial classes of linear progra...
Stochastic rounding rounds a real number to the next larger or smaller floating-point number with pr...
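The two stochastic-rounding abstracts above describe the same primitive: round a value up or down to the nearest representable number, choosing the upward direction with probability proportional to proximity, which makes the rounding unbiased in expectation. A minimal sketch on the integer grid (rather than between floating-point neighbors), purely for illustration:

```python
import numpy as np

def stochastic_round(x, rng):
    """Round each entry down or up; round up with probability x - floor(x)."""
    lo = np.floor(x)
    return lo + (rng.random(x.shape) < (x - lo))

rng = np.random.default_rng(0)
vals = np.full(100_000, 0.3)
print(stochastic_round(vals, rng).mean())  # ~0.3: unbiased, vs. 0.0 under round-to-nearest
```

The unbiasedness is what makes this attractive for low-precision gradient descent: small updates that deterministic rounding would always discard are applied with the correct probability instead.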