Deep belief networks are a powerful way to model complex probability distributions. However, it is difficult to learn the structure of a belief network, particularly one with hidden units. The Indian buffet process has been used as a nonparametric Bayesian prior on the structure of a directed belief network with a single infinitely wide hidden layer. Here, we introduce the cascading Indian buffet process (CIBP), which provides a prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network framework to allow each unit to vary its behavior between discrete and continuous representations. We use Markov cha...
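The abstract above builds on the Indian buffet process as a prior on network structure. As a point of reference, the standard single-layer IBP generative process can be sketched as follows; this is a minimal sketch of the ordinary IBP, not the cascading variant (the CIBP recurses a construction like this over layers), and `sample_ibp` is a hypothetical helper name:

```python
import numpy as np

def sample_ibp(num_rows, alpha, seed=0):
    """Draw a binary matrix from the standard Indian buffet process.

    Customer n takes each existing dish k with probability
    counts[k] / (n + 1), then samples Poisson(alpha / (n + 1))
    brand-new dishes of her own.
    """
    rng = np.random.default_rng(seed)
    rows = []    # variable-length binary rows, one per "customer" (unit)
    counts = []  # counts[k]: how many customers so far took dish k
    for n in range(num_rows):
        # Take each existing dish with popularity-proportional probability.
        row = [int(rng.random() < c / (n + 1)) for c in counts]
        for k, z in enumerate(row):
            counts[k] += z
        # Try a Poisson-distributed number of new dishes.
        new = rng.poisson(alpha / (n + 1))
        row += [1] * new
        counts += [1] * new
        rows.append(row)
    # Pad rows to a common width: earlier customers never take later dishes.
    width = len(counts)
    return np.array([r + [0] * (width - len(r)) for r in rows], dtype=int)

Z = sample_ibp(num_rows=8, alpha=2.0)
# Z has 8 rows and a random, data-dependent number of columns.
```

The number of columns (dishes) is unbounded a priori but almost surely finite for any finite number of customers, which is what makes such priors usable over infinitely wide hidden layers.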
This paper addresses the duality between the deterministic feed-forward neural networks (FF-NNs) and...
Most learning algorithms assume that a data set is given initially. We address the common situatio...
A new approach for learning Bayesian belief networks from raw data is presented. The approach is bas...
Deep belief networks are a powerful way to model complex probability distributions. However, learnin...
We show how to use "complementary priors" to eliminate the explaining-away effects that make inferen...
This paper presents an extension of the cascading Indian buffet process (CIBP) intended for learning ...
Deep Belief Networks (DBNs) are generative models that contain many layers of hidden variables. Ef...
Gaussian processes are now widely used to perform key machine learning tasks such as nonlinear regre...
Many models have been proposed to capture the statistical regularities in natural image patches. Th...
This paper introduces the Indian chefs process (ICP) as a Bayesian nonparametric prior on the joint ...
Probabilistic networks (also known as Bayesian belief networks) allow a compact description of com...
We study the mixtures of factorizing probability distributions represented as visible marginal dist...