Neural network learning theory draws a relationship between “learning with noise” and the application of a regularization term in the cost function minimized during training on clean (non-noisy) data. The application of regularizers and other robust training techniques aims to improve the generalization ability of connectionist models by reducing overfitting. This paper presents an application of a variant of the so-called Segmental Neural Network (SNN) to the recognition of speaker-independent isolated words in noise. The SNN is enhanced with the introduction of trainable amplitudes of activation functions (SNN-TA), which act as regularizers and increase robustness to noise. Experimental results show that when training is a...
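The relationship alluded to in the first sentence is usually stated, following Bishop (1995), as a first-order equivalence: training on inputs corrupted by zero-mean noise of variance $\sigma^2$ is, up to higher-order terms, the same as minimizing the clean-data error plus a derivative-based penalty,

$$
\tilde{E} \;\approx\; E \;+\; \sigma^{2}\,\Omega,
\qquad
\Omega \;=\; \frac{1}{2}\sum_{n}\sum_{k}
\left\lVert \frac{\partial y_{k}}{\partial \mathbf{x}} \right\rVert^{2}_{\mathbf{x}=\mathbf{x}_{n}},
$$

where the $y_k$ are the network outputs; penalizing large input–output derivatives discourages over-sensitive, overfitted mappings.

The abstract does not spell out how the trainable amplitudes are implemented, but the idea admits a minimal sketch: give each hidden unit a squashing nonlinearity scaled by a per-unit amplitude $\lambda_i$ that is learned by gradient descent jointly with the weights. The module below is an illustrative assumption, not the paper's architecture; the layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class TrainableAmplitudeTanh(nn.Module):
    """Sketch of a tanh activation with trainable per-unit amplitudes.

    Each hidden unit i computes lambda_i * tanh(x_i), with lambda_i
    learned by backpropagation together with the network weights.
    """
    def __init__(self, num_units: int):
        super().__init__()
        # One amplitude per unit, initialized to 1 (i.e. plain tanh).
        self.amplitude = nn.Parameter(torch.ones(num_units))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.amplitude * torch.tanh(x)

# Hypothetical usage: a small segment classifier. The feature and class
# counts are placeholders, not the paper's experimental setup.
model = nn.Sequential(
    nn.Linear(13, 64),              # e.g. 13 cepstral features per segment
    TrainableAmplitudeTanh(64),
    nn.Linear(64, 10),              # e.g. 10 word classes
)
```

Because each amplitude rescales a unit's output range, driving an amplitude toward zero effectively prunes that unit, which is one way such parameters can behave as a regularizer.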