Deep belief networks are a powerful way to model complex probability distributions. However, learning the structure of a belief network, particularly one with hidden units, is difficult. The Indian buffet process has been used as a nonparametric Bayesian prior on the directed structure of a belief network with a single infinitely wide hidden layer. In this paper, we introduce the cascading Indian buffet process (CIBP), which provides a nonparametric prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network so each unit can additionally vary its behavior between discrete and continuous representations...
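The generative process the abstract describes can be sketched in a few lines (a minimal illustration under our own naming, not the paper's reference implementation): units in each layer act as Indian-buffet-process "customers" choosing parents ("dishes") in the layer above, any newly sampled dishes become new hidden units, and the cascade recurses upward until a layer introduces no units.

```python
import numpy as np

def sample_cibp(num_visible, alpha, max_layers=50, rng=None):
    """Sketch of sampling a layered DAG structure from a cascading IBP.

    Returns a list of binary matrices; layers[m][i, k] == 1 means unit i
    in layer m has parent k in layer m + 1. The cascade stops when a
    layer creates no new parent units (or at max_layers, a safety cap).
    """
    rng = np.random.default_rng(rng)
    layers = []
    k_below = num_visible
    for _ in range(max_layers):
        dish_counts = []  # children seen so far for each parent unit
        rows = []
        for i in range(1, k_below + 1):
            # connect to each existing parent k with probability m_k / i
            row = [1 if rng.random() < c / i else 0 for c in dish_counts]
            # then create Poisson(alpha / i) brand-new parent units
            n_new = rng.poisson(alpha / i)
            row.extend([1] * n_new)
            dish_counts = [c + z for c, z in zip(dish_counts, row)]
            dish_counts += [1] * n_new
            rows.append(row)
        k_above = len(dish_counts)
        if k_above == 0:
            break  # no parents sampled: the network terminates here
        Z = np.zeros((k_below, k_above), dtype=int)
        for i, row in enumerate(rows):
            Z[i, :len(row)] = row
        layers.append(Z)
        k_below = k_above
    return layers
```

Because the expected number of new units shrinks as 1/i per customer and the recursion halts once a layer draws none, the sampled network is unbounded in depth and width yet almost surely finite.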
We study the mixtures of factorizing probability distributions represented as visible marginal dist...
A new approach for learning Bayesian belief networks from raw data is presented. The approach is bas...
This paper addresses the duality between the deterministic feed-forward neural networks (FF-NNs) and...
Deep belief networks are a powerful way to model complex probability distributions. However, it is d...
We show how to use "complementary priors" to eliminate the explaining-away effects that make inferen...
This paper presents an extension of the cascading Indian buffet process (CIBP) intended to learn ...
This paper introduces the Indian chefs process (ICP) as a Bayesian nonparametric prior on the joint ...
Deep Belief Networks (DBN's) are generative models that contain many layers of hidden variables. Ef...
Most learning algorithms assume that a data set is given initially. We address the common s...
Many models have been proposed to capture the statistical regularities in natural images patches. Th...
Probabilistic networks (also known as Bayesian belief networks) allow a compact description of com...
Gaussian processes are now widely used to perform key machine learning tasks such as nonlinear regre...