Gradient backpropagation works well only if the initial weights are close to a good solution. Pretraining Deep Neural Networks (DNNs) with autoassociators in a greedy way is a trick to set appropriate initializations in deep learning. In the literature, however, the pretraining involves only the inputs, while the information conveyed by the labels is ignored. In this paper, we present new pretraining algorithms for DNNs that embed the information of the labels: the input and hidden layers' weights are initialized in the usual way by autoassociators. To set the initial values of the output layer, an autoassociator embedding the output vector into a particular space is learned. This space shares the dimension of the la...
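As a minimal sketch of the kind of procedure this abstract describes (the layer widths, optimizer settings, and toy data below are illustrative assumptions, not the paper's setup), greedy autoassociator pretraining of the input and hidden layers, followed by a label-side autoassociator whose decoder initializes the output layer, could look like this in PyTorch:

import torch
import torch.nn as nn

def pretrain_autoassociator(x, in_dim, hid_dim, epochs=50, lr=0.1):
    """Fit a one-hidden-layer autoassociator on x; return its encoder, decoder,
    and the hidden representation of x (used as input for the next layer)."""
    encoder, decoder = nn.Linear(in_dim, hid_dim), nn.Linear(hid_dim, in_dim)
    opt = torch.optim.SGD(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        recon = decoder(torch.sigmoid(encoder(x)))
        loss = nn.functional.mse_loss(recon, x)
        loss.backward()
        opt.step()
    with torch.no_grad():
        h = torch.sigmoid(encoder(x))
    return encoder, decoder, h

# Greedy pretraining of the input and hidden layers on the inputs alone.
x = torch.randn(256, 64)               # toy unlabeled inputs (assumption)
widths = [64, 32, 16]                  # hypothetical layer widths
encoders, h = [], x
for d_in, d_out in zip(widths[:-1], widths[1:]):
    enc, _, h = pretrain_autoassociator(h, d_in, d_out)
    encoders.append(enc)

# Label-side step sketched from the abstract: an autoassociator embeds the
# one-hot label vectors into a space with the same dimension as the last
# hidden layer; its decoder then provides the initial output-layer weights.
y = torch.eye(10)[torch.randint(0, 10, (256,))]   # toy one-hot labels
_, label_decoder, _ = pretrain_autoassociator(y, 10, widths[-1])
output_layer_init = label_decoder                 # maps hidden codes to label space

After this initialization, the whole stack would be fine-tuned end-to-end with backpropagation, as in standard greedy pretraining.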
The vanishing gradient problem (i.e., gradients prematurely becoming extremely small during training...
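The definition above can be made concrete with a short illustration; the following sketch (a toy example with arbitrary depth, width, and data, not taken from the cited work) stacks twenty sigmoid layers in PyTorch and compares the gradient norms reaching the first and last layers after a single backward pass:

import torch
import torch.nn as nn

torch.manual_seed(0)
layers = []
for _ in range(20):                      # 20 sigmoid layers, 64 units wide
    layers += [nn.Linear(64, 64), nn.Sigmoid()]
net = nn.Sequential(*layers)

x = torch.randn(32, 64)
loss = net(x).pow(2).mean()              # any scalar loss works for the demo
loss.backward()

first = net[0].weight.grad.norm().item()   # gradient at the first linear layer
last = net[-2].weight.grad.norm().item()   # gradient at the last linear layer
print(f"grad norm, first layer: {first:.3e}")
print(f"grad norm, last layer:  {last:.3e}")

Because the sigmoid derivative never exceeds 0.25, each layer shrinks the backpropagated signal, so the first-layer gradient is many orders of magnitude smaller than the last-layer one.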
A good weight initialization is crucial to accelerate the convergence of the weights in a neural net...
We propose a pre-training technique for recurrent neural networks based on linear autoencoder networ...
Proper initialization of neural networks is critical for a successful training of their weig...
The objective of this thesis is to study unsupervised pre-training in convolutional neural networks ...
This thesis deals with pretraining deep networks by autoencoders. Components of neural networks are ...
A new method of initializing the weights in deep neural networks is proposed. The method follows two...
This study highlights the subject of weight initialization in back-propagation feed-forward netw...
Neural networks require careful weight initialization to prevent signals from exploding or vanishing...
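As a rough illustration of that point (not the specific scheme of any paper above), the numpy sketch below pushes a toy batch through ten linear layers and compares the resulting activation variance under unscaled Gaussian weights versus Glorot-style variance scaling; the depth and width are arbitrary assumptions:

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 512))             # toy inputs

def final_variance(init, depth=10, width=512):
    """Push x through `depth` linear layers whose weights come from `init`."""
    h = x
    for _ in range(depth):
        h = h @ init((width, width))
    return h.var()

naive = lambda shape: rng.standard_normal(shape)                        # std = 1
glorot = lambda shape: rng.standard_normal(shape) * np.sqrt(2.0 / sum(shape))

print("unscaled init, variance after 10 layers:", final_variance(naive))
print("Glorot init,   variance after 10 layers:", final_variance(glorot))

With unit-variance weights each layer multiplies the signal variance by the fan-in (512 here), so activations explode; the Glorot scaling keeps that product near one, which is the usual motivation for variance-preserving initializations such as Glorot/Xavier and He.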
The importance of weight initialization when building a deep learning model is often underappreciate...
Training a neural network (NN) depends on multiple factors, including but not limited to the initial...
Over the last years, i-vectors have been the state-of-the-art approach in speaker recognition. Recen...
A method has been proposed for weight initialization in back-propagation feed-forward networks. Trai...
The activation function deployed in a deep neural network has great influence on the performance of ...