We develop a fully discriminative learning approach for the supervised Latent Dirichlet Allocation (LDA) model using Back Propagation (i.e., BP-sLDA), which maximizes the posterior probability of the prediction variable given the input document. Different from traditional variational learning or Gibbs sampling approaches, the proposed learning method applies (i) the mirror descent algorithm for maximum a posteriori inference and (ii) back propagation over a deep architecture together with stochastic gradient/mirror descent for model parameter estimation, leading to scalable and end-to-end discriminative learning of the model. As a byproduct, we also apply this technique to develop a new learning method for the traditional unsupervised LDA ...
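The mirror descent step referenced above can be illustrated concretely: on the probability simplex (where topic proportions live), mirror descent with the entropy mirror map reduces to the exponentiated-gradient update, which keeps the iterate a valid distribution at every step. The sketch below is a minimal illustration of that update rule only, not the paper's full inference procedure; the objective gradient here is a made-up placeholder.

```python
import numpy as np

def mirror_descent_step(theta, grad, step_size):
    """One mirror descent (exponentiated-gradient) update on the
    probability simplex: theta <- theta * exp(-eta * grad), renormalized.
    This keeps theta nonnegative and summing to 1 by construction."""
    theta = theta * np.exp(-step_size * grad)
    return theta / theta.sum()

# Illustration with a fixed, hypothetical gradient: repeated updates
# push probability mass toward the coordinate with the smallest
# gradient component (here, index 3).
theta = np.full(4, 0.25)                  # uniform start on the simplex
grad = np.array([1.0, 0.5, 2.0, 0.1])     # placeholder objective gradient
for _ in range(50):
    theta = mirror_descent_step(theta, grad, step_size=0.5)
```

In MAP inference for LDA, `grad` would be the gradient of the (negative) log-posterior with respect to the topic proportions; unrolling a fixed number of these steps yields the feed-forward architecture through which gradients are back-propagated.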
A supervised topic model can use side information such as ratings or labels associated with docum...
Supervised topic models simultaneously model the latent topic structure of large collections of docu...
Deep Neural Networks (DNNs) can be used as function approximators in Reinforcement Learning (RL). On...
Despite many years of research into latent Dirichlet allocation (LDA), applying LDA to collections o...
Latent Dirichlet allocation (LDA) is an important probabilistic generative model and has usually use...
We present an extension to the Hierarchical Dirichlet Process (HDP), which allows for the inclusion ...
consists of computing the posterior distribution of the unobserved variables, P(π, z_{1:N} | I) 1. Le...
Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of...
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised ...
Latent Dirichlet Allocation (LDA) represents perhaps the most famous topic model, employed in many d...
The regularization principles [31] lead to approximation schemes that deal with various learning problems...
Despite remarkable progress on deep learning, its hardware implementation beyond deep learning ac...