In this paper, we show how the connections between max-product message passing and linear programming relaxations allow for a more efficient exact algorithm for the MAP problem. Our proposed algorithm uses column generation to pass messages only on a small subset of the possible assignments to each variable, while still guaranteeing to find the exact solution. This algorithm is three times faster than Viterbi decoding for part-of-speech tagging on WSJ data, and as fast as beam search with a beam of size two while being exact. The empirical performance of column generation depends on how quickly we can rule out entire sets of assignments to the edges of the chain, which is done by bounding the contribution of the pairwise...
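For reference, the Viterbi baseline that the abstract compares against performs exact MAP decoding on a linear chain by dynamic programming over all K labels at every position. The sketch below is a minimal, generic implementation under assumed score conventions (per-position `unary` scores and a `pairwise` transition matrix); the names are illustrative, not taken from the paper.

```python
def viterbi(unary, pairwise):
    """Exact MAP decoding on a linear-chain model (illustrative sketch).

    unary[t][k]:    score of assigning label k at position t.
    pairwise[i][j]: score of transitioning from label i to label j.
    Returns the highest-scoring label sequence as a list of indices.
    """
    T, K = len(unary), len(unary[0])
    score = list(unary[0])                 # best score ending in each label
    backptr = [[0] * K for _ in range(T)]  # best predecessor per position/label
    for t in range(1, T):
        new_score = []
        for j in range(K):
            # Consider every predecessor label i for label j at position t.
            best_i = max(range(K), key=lambda i: score[i] + pairwise[i][j])
            backptr[t][j] = best_i
            new_score.append(score[best_i] + pairwise[best_i][j] + unary[t][j])
        score = new_score
    # Trace back-pointers from the best final label.
    path = [max(range(K), key=lambda k: score[k])]
    for t in range(T - 1, 0, -1):
        path.append(backptr[t][path[-1]])
    return path[::-1]
```

Note the O(T·K²) cost of the inner loop: every label pair is scored at every position, which is exactly the work that column generation avoids by pruning most assignments.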
This paper revisits optimal decoding for statistical machine translation using IBM Model 4. We show...
Computing maximum a posteriori (MAP) estimation in graphical models is an important inference proble...
Many probabilistic inference and learning tasks involve summations over exponentially large sets. Re...
Linear chains and trees are basic building blocks in many applications of graphical models, and the...
Linear chains and trees are basic building blocks in many applications of graphical models. Although...
We present a novel message passing algorithm for approximating the MAP problem in graphical models....
Maximum A Posteriori inference in graphical models is often solved via message-passing algorithms, s...
Linear programming (LP) relaxation for MAP inference over (factor) graphical models is one of the fund...
Given a graphical model, one of the most useful queries is to find the most likely configuration o...
In this paper, we investigate the use of message-passing algorithms for the problem of finding the m...
We develop and analyze methods for computing provably optimal maximum a posteriori (MAP) configurati...
Message passing algorithms powered by the distributive law of mathematics are efficient in finding a...
The approximate MAP inference over (factor) graphical models is of great importance in many applicatio...
This paper considers the average complexity of maximum likelihood (ML) decoding of convolutional cod...