A major issue in statistical machine learning is the design of a representation, or feature space, facilitating the resolution of the learning task at hand. Sparse representations in particular facilitate discriminant learning: on the one hand, they are robust to noise; on the other hand, they disentangle the factors of variation mixed up in dense representations, favoring the separability and interpretation of data. This chapter focuses on auto-associators (AAs), i.e. multi-layer neural networks trained to encode/decode the data and thus de facto defining a feature space. AAs, first investigated in the 1980s, were recently reconsidered as building blocks for deep neural networks. This chapter surveys related work abou...
Autoencoders (AE) are essential in learning representations of large data (like images) for dimension...
Auto-encoders (AE) are a particular type of unsupervised neural network that aims at providing a comp...
We show that training common regularized autoencoders resembles clustering, because it amounts to fi...
The authors present the results of their analysis of an auto-associator for use with sparse represen...
Many approaches to transform classification problems from non-linear to linear by feature transforma...
The main objective of an auto-encoder is to reconstruct the input signals via a feature representati...
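That reconstruction objective can be sketched concretely. Below is a minimal, illustrative linear auto-encoder with tied weights (encode with `W`, decode with `W.T`), trained by plain gradient descent on toy data; all names and hyperparameters here are assumptions for the sketch, not any paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, h, n = 8, 3, 64                  # input dim, code dim, number of samples
X = rng.normal(size=(d, n))         # toy data, one column per sample
W = 0.1 * rng.normal(size=(h, d))   # tied weights: encoder W, decoder W.T

def loss(W):
    """Mean squared reconstruction error over the batch."""
    R = W.T @ (W @ X) - X           # residual: reconstruction minus input
    return (R ** 2).sum() / n

losses = [loss(W)]
lr = 0.01
for _ in range(200):
    H = W @ X                       # codes (hidden representation)
    R = W.T @ H - X                 # reconstruction residual
    # Gradient of the tied-weight squared error w.r.t. W
    grad = 2.0 * (H @ R.T + (W @ R) @ X.T) / n
    W -= lr * grad
    losses.append(loss(W))
```

After training, `losses[-1]` is below `losses[0]`: the network has learned a code that reconstructs the inputs better than the random initialization.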
A sparse auto-encoder is one of the effective algorithms for learning features from unlabeled d...
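The "sparse" part of such objectives is typically a penalty on the hidden activations added to the reconstruction error. A minimal sketch of one common variant, an L1 penalty on sigmoid codes with an untied decoder; the weights, data, and `lam` coefficient are illustrative assumptions, not a specific paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 100))        # toy unlabeled data, one column per sample
W = 0.1 * rng.normal(size=(32, 16))   # over-complete encoder weights
V = 0.1 * rng.normal(size=(16, 32))   # decoder weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(lam):
    """Reconstruction error plus an L1 sparsity penalty on the codes."""
    H = sigmoid(W @ X)                # hidden activations (codes)
    R = V @ H - X                     # reconstruction residual
    return (R ** 2).mean() + lam * np.abs(H).mean()
```

Since sigmoid codes are strictly positive, any `lam > 0` makes the penalized objective strictly larger than the pure reconstruction error, which is what pressures the optimizer toward codes near zero.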
Despite the recent progress in machine learning and deep learning, unsupervised learning still remai...
Hierarchical deep neural networks are currently popular learning models for imitating the h...
High dimensionality and the sheer size of unlabeled data available today demand new development in u...
Auto-encoders are one kind of promising non-probabilistic representation learning paradigm that ca...
In Bourlard and Kamp (Biol Cybern 59(4):291-294, 1988), it was theoretically proven that autoencoder...
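The Bourlard–Kamp result ties the optimum of a linear auto-encoder with a k-unit bottleneck to the rank-k SVD (Eckart–Young): projecting onto the top-k left singular vectors is the best any linear encoder/decoder pair can do, and the leftover error is exactly the energy in the discarded singular values. A small numpy check on toy data (names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 50))       # toy data matrix, one column per sample
k = 3                               # size of the hidden layer (code dimension)

# Rank-k SVD reconstruction: the optimum a linear auto-encoder can reach
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Xk = U[:, :k] @ (U[:, :k].T @ X)    # encode then decode with the top-k basis

err = ((X - Xk) ** 2).sum()         # squared reconstruction error
tail = (s[k:] ** 2).sum()           # energy in the discarded singular values
```

Here `err` and `tail` coincide, which is the Eckart–Young identity the theorem rests on.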
Convolutional sparse coding (CSC) can model local connections between image content and reduce the c...
Lemme A, Reinhart F, Steil JJ. Efficient online learning of a non-negative sparse autoencoder. In: ...