We present posterior regularization, a probabilistic framework for structured, weakly supervised learning. Our framework efficiently incorporates indirect supervision via constraints on posterior distributions of probabilistic models with latent variables. Posterior regularization separates model complexity from the complexity of structural constraints it is desired to satisfy. By directly imposing decomposable regularization on the posterior moments of latent variables during learning, we retain the computational efficiency of the unconstrained model while ensuring desired constraints hold in expectation. We present an efficient algorithm for learning with posterior regularization and illustrate its versatility on a diverse set of structur...
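The "efficient algorithm" this abstract refers to is an EM variant whose E-step projects the model posterior onto the constraint set in KL divergence. Below is a minimal, hypothetical sketch of that projection for a discrete latent variable whose posterior can be enumerated: the constrained posterior has the form q(z) ∝ p(z|x) exp(-λ·φ(z)), and the dual variables λ ≥ 0 are found by projected gradient ascent. The function name `pr_project`, the step size, and the toy problem are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pr_project(p, phi, b, lr=0.1, iters=500):
    """KL-project a discrete posterior p(z) onto {q : E_q[phi(z)] <= b}.

    Hypothetical illustration of a posterior-regularized E-step:
    maximizes the dual g(lam) = -lam.b - log sum_z p(z) exp(-lam.phi(z))
    over lam >= 0 by projected gradient ascent.

    p   : (K,)   posterior over K latent configurations (sums to 1)
    phi : (K, F) constraint feature values phi_f(z)
    b   : (F,)   expectation bounds
    """
    lam = np.zeros(phi.shape[1])
    for _ in range(iters):
        # q_lam(z) is proportional to p(z) * exp(-lam . phi(z))
        logq = np.log(p) - phi @ lam
        q = np.exp(logq - logq.max())
        q /= q.sum()
        # dual gradient is E_q[phi] - b; ascend, then clip to lam >= 0
        lam = np.maximum(0.0, lam + lr * (q @ phi - b))
    return q, lam

# Toy usage: force the expected value of a single feature down to 0.5.
p = np.array([0.7, 0.2, 0.1])          # unconstrained posterior over 3 states
phi = np.array([[1.0], [0.0], [0.0]])  # feature fires only in state 0
q, lam = pr_project(p, phi, b=np.array([0.5]))
print(q)  # mass on state 0 shrinks to ~0.5; the rest is rescaled
```

Note how the constraint holds only in expectation, as the abstract says: no individual configuration is forbidden, its posterior mass is merely reweighted.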
As one principal approach to machine learning and cognitive science, the probabilistic framework has...
How can we perform efficient inference and learning in directed probabilistic models, in the presenc...
A strong inductive bias is essential in unsupervised grammar induction. In this paper, we explore a ...
Supervised machine learning techniques have been very successful for a variety of tasks and domains ...
The expectation maximization (EM) algorithm is a widely used maximum likelihood estimation procedure...
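For concreteness, here is a minimal sketch of EM on a two-component 1-D Gaussian mixture, the textbook setting for the procedure this abstract discusses; the function name `em_gmm_1d` and all modeling choices are our own illustrative assumptions, not tied to any paper listed here.

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """Minimal EM for a two-component 1-D Gaussian mixture.

    Illustration only: the E-step computes posterior responsibilities
    of component 0 for each point; the M-step re-estimates the mixture
    weight, means, and variances from those responsibilities.
    """
    rng = np.random.default_rng(0)
    mu = rng.choice(x, 2)               # initial component means
    var = np.array([x.var(), x.var()])  # initial component variances
    pi = 0.5                            # weight of component 0
    for _ in range(iters):
        # E-step: r[i] = P(component 0 | x[i])
        p0 = pi * np.exp(-(x - mu[0])**2 / (2 * var[0])) / np.sqrt(var[0])
        p1 = (1 - pi) * np.exp(-(x - mu[1])**2 / (2 * var[1])) / np.sqrt(var[1])
        r = p0 / (p0 + p1)
        # M-step: maximize the expected complete-data log-likelihood
        pi = r.mean()
        mu = np.array([(r * x).sum() / r.sum(),
                       ((1 - r) * x).sum() / (1 - r).sum()])
        var = np.array([(r * (x - mu[0])**2).sum() / r.sum(),
                        ((1 - r) * (x - mu[1])**2).sum() / (1 - r).sum()])
    return pi, mu, var

# Usage: recover two well-separated components from pooled samples.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
print(em_gmm_1d(x))  # weight ~0.5, means near -2 and 3
```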
Existing Bayesian models, especially nonparametric Bayesian methods, rely on specially conceived pri...
We introduce a framework for unsupervised learning of structured predictors with overlapping, global...
Unlike existing nonparametric Bayesian models, which rely solely on specially conceived priors to in...
Word-level alignment of bilingual text is a critical resource for a growing variety of tasks. Probab...