The standard approach to max-margin parameter learning for Markov random fields (MRFs) involves incrementally adding the most violated constraints during each iteration of the algorithm. This requires exact MAP inference, which is intractable for many classes of MRFs. In this paper, we propose an exact MAP inference algorithm for binary MRFs containing a class of higher-order potentials known as lower linear envelope potentials. Our algorithm is polynomial in the number of variables and the number of linear envelope functions. With tractable inference in hand, we show how the parameters and corresponding feature vectors can be represented in a max-margin framework for efficiently learning lower linear envelope potentials.
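For concreteness, a lower linear envelope potential over a clique of binary variables takes the minimum of a set of linear functions of a (weighted) count of variables assigned the label one; the minimum over linear functions makes the potential concave in that count. The sketch below is only an illustration of this definition, not the paper's implementation; the function names, the uniform-weight default, and the example coefficients are assumptions made here.

```python
import numpy as np

def lower_linear_envelope(y, a, b, w=None):
    """Evaluate a lower linear envelope potential on a binary clique.

    The potential is min_k (a_k * W(y) + b_k), where W(y) is a weighted
    count of the clique variables assigned label one.

    y : sequence of {0, 1} assignments for the clique variables
    a, b : slopes and intercepts of the K linear envelope functions
    w : optional per-variable weights (defaults to uniform, an assumption here)
    """
    y = np.asarray(y, dtype=float)
    if w is None:
        w = np.full(len(y), 1.0 / len(y))   # uniform weights summing to one
    W = float(np.dot(w, y))                 # weighted count of "on" variables
    return float(np.min(np.asarray(a) * W + np.asarray(b)))

# Example: a two-piece concave envelope that favours label consistency.
a = [1.0, -1.0]   # slopes
b = [0.0,  1.0]   # intercepts
print(lower_linear_envelope([1, 1, 0, 0], a, b))  # mixed labels -> 0.5
print(lower_linear_envelope([1, 1, 1, 1], a, b))  # all ones     -> 0.0
print(lower_linear_envelope([0, 0, 0, 0], a, b))  # all zeros    -> 0.0
```

Under this toy choice of coefficients, homogeneous assignments receive a lower (better) potential value than mixed ones, which is the usual role such higher-order terms play in encouraging label consistency.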