We show that training common regularized autoencoders resembles clustering, because it amounts to fitting a density model whose mass is concentrated in the directions of the individual weight vectors. We then propose a new activation function based on thresholding a linear function with zero bias (so it is truly linear, not affine), and argue that this allows hidden units to "collaborate" in order to define larger regions of uniform density. We show that the new activation function makes it possible to train autoencoders without an explicit regularization penalty, such as sparsification, contraction or denoising, by simply minimizing reconstruction error. Experiments in a variety of recognition tasks show that zero-bias autoencoders pe...
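The mechanism sketched in this abstract, a thresholded linear response with no bias term and training on reconstruction error alone, can be illustrated with a few lines of NumPy. The thresholded-rectified form, the tied decoder weights, and the threshold value are assumptions made for this sketch; the abstract itself does not pin down these details.

import numpy as np

def trec(z, theta=1.0):
    # Thresholded-rectified activation: keep the linear response z = W @ x
    # wherever it exceeds the threshold and zero it elsewhere. No additive
    # bias appears anywhere, hence "zero-bias".
    return z * (z > theta)

def reconstruction_error(W, x, theta=1.0):
    # Encode without biases, decode with tied weights W.T, and measure
    # plain squared reconstruction error: no sparsity, contraction, or
    # denoising penalty is added.
    h = trec(W @ x, theta)
    x_hat = W.T @ h
    return np.sum((x - x_hat) ** 2)

# Toy forward pass: 256 hidden units over 784-dimensional inputs.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((256, 784))
x = rng.standard_normal(784)
loss = reconstruction_error(W, x)

Because several units whose linear responses exceed the threshold can be active at once with their full linear values, they can jointly span a region of the input space, which is the "collaboration" between hidden units the abstract refers to.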
Autoencoders and their variations provide unsupervised models for learning low-dimensional represent...
The generally unsupervised nature of autoencoder models implies that the main training metric is for...
Autoencoders have emerged as a useful framework for unsupervised learning of internal representation...
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests ...
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions cont...
Despite the recent progress in machine learning and deep learning, unsupervised learning still remai...
The main objective of an auto-encoder is to reconstruct the input signals via a feature representati...
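For concreteness, the objective this abstract refers to is the standard reconstruction loss over an encoder f and a decoder g; the notation below is assumed for illustration rather than taken from the truncated source:

\min_{f,\,g}\; \frac{1}{N}\sum_{i=1}^{N} \bigl\| x_i - g\bigl(f(x_i)\bigr) \bigr\|^{2}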
We present a representation learning method that learns features at multiple different levels of sc...
We present in this paper a novel approach for training deterministic auto-encoders. We show that by ...
The success of modern machine learning algorithms depends crucially on efficient data representation...
Autoencoders are certainly among the most studied and used Deep Learning models: the idea behind the...
Autoencoders (AE) are essential in learning representations of large data (like images) for dimension...
Autoencoders are data-specific compression algorithms learned automatically from examples. The predo...
Auto-encoders are one kind of promising non-probabilistic representation learning paradigm that ca...
Deep neural networks are capable of learning powerful representations to tackle complex vision tasks...