Low-complexity coding and decoding (Lococode), a novel approach to sensory coding, trains autoassociators (AAs) by Flat Minimum Search (FMS), a recent general method for finding low-complexity networks with high generalization capability. FMS works by minimizing both training error and required weight precision. We find that as a by-product Lococode separates nonlinear superpositions of sources without knowing their number. Assuming that the input data can be reduced to few simple causes (this is often the case with visual data), according to our theoretical analysis the hidden layer of an FMS-trained AA tends to code each input by a sparse code based on as few simple, independent features as possible. In experiments Lococode extracts optim...
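The "training error plus weight-precision penalty" structure that FMS minimizes can be illustrated with a toy example. The sketch below is a simplification, not the actual FMS algorithm: it trains a tiny linear autoassociator by gradient descent and uses a plain squared-weight penalty as a stand-in for FMS's more involved precision term. The data generator ("few simple causes" mixed into more channels), network sizes, and penalty weight are all illustrative assumptions.

```python
# Hedged sketch: a tiny linear autoassociator trained on
# "reconstruction error + complexity penalty". The L2 weight penalty
# here only stands in for FMS's weight-precision term; the real Flat
# Minimum Search objective is different and more involved.
import numpy as np

rng = np.random.default_rng(0)

# Inputs generated from 2 underlying sources mixed into 6 channels,
# mimicking the "few simple causes" assumption in the abstract.
sources = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 6))
X = sources @ mixing

n, n_in, n_hid = X.shape[0], 6, 3
W_enc = rng.normal(scale=0.1, size=(n_in, n_hid))   # recognition weights
W_dec = rng.normal(scale=0.1, size=(n_hid, n_in))   # generative weights
lr, lam = 0.05, 1e-3                                # step size, penalty weight

history = []
for _ in range(500):
    H = X @ W_enc                      # code layer (linear for simplicity)
    E = H @ W_dec - X                  # reconstruction residual
    mse = np.mean(E ** 2)
    history.append(mse + lam * (np.sum(W_enc**2) + np.sum(W_dec**2)))
    # Exact gradients of the penalized mean-squared error.
    g_dec = 2 * H.T @ E / (n * n_in) + 2 * lam * W_dec
    g_enc = 2 * X.T @ (E @ W_dec.T) / (n * n_in) + 2 * lam * W_enc
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
```

Because the data have only two true sources, three hidden units suffice and the penalized loss drops well below its initial value; the penalty keeps the solution among the low-magnitude (roughly "low-precision") weight settings.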
The minimum description length (MDL) principle can be used to train the hidden units of a neural net...
Autoencoders are data-specific compression algorithms learned automatically from examples. The predo...
We investigate the consequences of maximizing information transfer in a simple...
The Minimum Description Length principle (MDL) can be used to train the hidden units of a neural net...
An autoencoder network uses a set of recognition weights to convert an input vector into a code vect...
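The recognition/generative split described here has a natural MDL reading: the cost of a batch is the bits (here nats) needed to transmit the code vectors plus the bits needed to transmit the reconstruction residuals. The sketch below is an illustrative approximation only, assuming linear weights and Gaussian coding models; the function name and noise scales are not from any of the papers listed.

```python
# Hedged sketch of the MDL view of an autoencoder: description length
# = cost of coding the hidden vectors under a unit-Gaussian prior
# + cost of coding the reconstruction errors under a narrow Gaussian.
# Constants are dropped; all names and scales are illustrative assumptions.
import numpy as np

def description_length(X, W_rec, W_gen, sigma_code=1.0, sigma_err=0.1):
    """Approximate description length of a batch, in nats (constants dropped)."""
    H = X @ W_rec              # recognition weights: input -> code vector
    E = X - H @ W_gen          # generative weights: code -> reconstruction
    code_cost = 0.5 * np.sum((H / sigma_code) ** 2)
    error_cost = 0.5 * np.sum((E / sigma_err) ** 2)
    return code_cost + error_cost

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
W_rec = rng.normal(scale=0.1, size=(4, 2))
W_gen = W_rec.T.copy()
dl = description_length(X, W_rec, W_gen)
```

Training then trades the two terms off: richer codes shrink the residual cost but inflate the code cost, which is the pressure toward compact or sparse representations that several of the abstracts above discuss.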
Most natural task-relevant variables are encoded in the early sensory cortex in a form that can only...
Some recent work has investigated the dichotomy between compact coding using dimensionality reductio...
The principle of minimum description length suggests looking for the simplest network that works we...
Sparse coding is a basic task in many fields including signal processing, neuroscience and machine l...
Many approaches to transform classification problems from non-linear to linear by feature transforma...
Biological sensory systems are faced with the problem of encoding a high-fidelity sensory signal wit...
We describe a novel unsupervised method for learning sparse, overcomplete features. The model uses ...
The efficient coding hypothesis assumes that biological sensory systems use neural codes that are op...