Unsupervised learning algorithms aim to discover the structure hidden in the data, and to learn representations that are more suitable as input to a supervised machine than the raw input. Many unsupervised methods are based on reconstructing the input from the representation, while constraining the representation to have certain desirable properties (e.g. low dimension, sparsity, etc). Others are based on approximating density by stochastically reconstructing the input from the representation. We describe a novel and efficient algorithm to learn sparse representations, and compare it theoretically and experimentally with a similar machine trained probabilistically, namely a Restricted Boltzmann Machine. We propose a simple criterion to c...
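The reconstruction-with-constraints idea in the abstract above can be sketched concretely. The snippet below is a minimal illustration, not the paper's algorithm: a tied-weight autoencoder trained by plain gradient descent on squared reconstruction error plus an L1 penalty on the code, which is the "constrain the representation to be sparse" recipe in its simplest form. All names and hyperparameters here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 samples of 16-dimensional input.
X = rng.normal(size=(8, 16))

n_hidden = 32                 # overcomplete code, so sparsity matters
lam = 0.1                     # weight of the L1 sparsity penalty
lr = 0.01                     # gradient-descent step size
W = rng.normal(scale=0.1, size=(16, n_hidden))

losses = []
for step in range(200):
    Z = np.maximum(X @ W, 0.0)        # encoder: rectified sparse code
    X_hat = Z @ W.T                   # decoder with tied weights
    R = X_hat - X                     # reconstruction residual
    losses.append(0.5 * np.sum(R * R) + lam * np.abs(Z).sum())
    dZ = R @ W + lam * np.sign(Z)     # gradient of the loss w.r.t. the code
    dPre = dZ * (Z > 0.0)             # back through the ReLU
    grad = R.T @ Z + X.T @ dPre       # decoder path + encoder path (tied W)
    W -= lr * grad
```

After training, the loss has dropped and a large fraction of code entries are exactly zero; that zero fraction is the sparsity the constraint buys.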
Theoretical results suggest that in order to learn the kind of complicated functions that can repres...
Abstract—Hierarchical deep neural networks are currently popular learning models for imitating the h...
Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (so...
We explore the training and usage of the Restricted Boltzmann Machine for unsupervised feature extra...
Sparse representation plays a critical role in vision problems, including generation and understandi...
In this paper we present a method for learning class-specific features for recognition. Recently a g...
It is widely believed that the success of deep networks lies in their ability to learn a meaningful ...
In this study, we provide a direct comparison of the Stochastic Maximum Likelihood algorithm and Con...
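The two algorithms named in the abstract above differ only in where the negative-phase Gibbs chain starts: Contrastive Divergence restarts it at the current data batch, while Stochastic Maximum Likelihood (persistent CD) keeps a chain running across updates. A minimal binary-RBM sketch of that difference, with illustrative sizes and learning rate (not the study's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary data and a small RBM: 6 visible units, 4 hidden units.
V = (rng.random((20, 6)) < 0.5).astype(float)
W = rng.normal(scale=0.01, size=(6, 4))
b_v = np.zeros(6)
b_h = np.zeros(4)
lr = 0.05

def gibbs_step(v):
    """One alternating Gibbs step: sample h given v, then v given h."""
    ph = sigmoid(v @ W + b_h)
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigmoid(h @ W.T + b_v)
    return (rng.random(pv.shape) < pv).astype(float)

# Persistent fantasy particles for SML; unused by CD-1.
persistent = V.copy()

def update(v_data, use_sml):
    """One stochastic gradient step; use_sml switches SML vs CD-1."""
    global W, b_v, b_h, persistent
    ph_data = sigmoid(v_data @ W + b_h)          # positive phase
    start = persistent if use_sml else v_data    # the only difference
    v_model = gibbs_step(start)                  # negative phase, 1 step
    if use_sml:
        persistent = v_model                     # chain survives the update
    ph_model = sigmoid(v_model @ W + b_h)
    n = v_data.shape[0]
    W += lr * (v_data.T @ ph_data - v_model.T @ ph_model) / n
    b_v += lr * (v_data - v_model).mean(axis=0)
    b_h += lr * (ph_data - ph_model).mean(axis=0)

for _ in range(50):
    update(V, use_sml=True)
```

Because the persistent chain is not reset, SML's negative samples can wander farther from the data, which is what gives it the less biased gradient estimate the comparison is about.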
We are interested in exploring the possibility and benefits of structure learning for deep models. A...
In this paper we introduce a methodology for the simple integration of almost-independence informati...
Deep Belief Networks (DBN’s) are generative models that contain many layers of hidden variables. Ef...
We show how to use "complementary priors" to eliminate the explaining-away effects that make inferen...
In recent years, sparse restricted Boltzmann machines have gained popularity as unsupervised feature...