© 2017 IEEE. Training neural network acoustic models on limited quantities of data is a challenging task. A number of techniques have been proposed to improve generalisation. This paper investigates one such technique called stimulated training. It enables standard criteria such as cross-entropy to enforce spatial constraints on activations originating from different units. Having different regions active depending on the input may help the network to discriminate better and, as a consequence, yield lower error rates. This paper investigates stimulated training for automatic speech recognition of a number of languages representing different families, alphabets, phone sets and vocabulary sizes. In particular, it looks at ensembles of st...
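To make the idea of spatial activation constraints concrete, below is a minimal PyTorch sketch of one possible formulation: hidden units are laid out on a square grid, each target class is assigned a grid location, and an auxiliary penalty added to cross-entropy encourages activations to concentrate around that location. The grid layout, random class centres, Gaussian target pattern, MSE penalty and weighting are illustrative assumptions, not the exact regulariser used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class StimulatedMLP(nn.Module):
    """Toy acoustic-model layer with hidden units arranged on a 2-D grid."""

    def __init__(self, feat_dim, n_classes, grid_size=16):
        super().__init__()
        n_hidden = grid_size * grid_size  # hidden units form a grid_size x grid_size grid
        self.hidden = nn.Linear(feat_dim, n_hidden)
        self.output = nn.Linear(n_hidden, n_classes)
        # Assign every class a fixed (illustrative, random) target location on the grid.
        self.register_buffer("centres", torch.rand(n_classes, 2) * (grid_size - 1))
        # 2-D coordinates of every hidden unit, flattened to shape (n_hidden, 2).
        ys, xs = torch.meshgrid(torch.arange(grid_size),
                                torch.arange(grid_size), indexing="ij")
        self.register_buffer("coords",
                             torch.stack([ys, xs], dim=-1).float().view(-1, 2))

    def target_pattern(self, labels, sigma=2.0):
        # Gaussian bump on the grid, centred at each label's assigned location.
        centres = self.centres[labels]                                         # (B, 2)
        d2 = ((self.coords.unsqueeze(0) - centres.unsqueeze(1)) ** 2).sum(-1)  # (B, n_hidden)
        bump = torch.exp(-d2 / (2.0 * sigma ** 2))
        return bump / bump.sum(dim=1, keepdim=True)

    def forward(self, x):
        h = torch.sigmoid(self.hidden(x))
        return self.output(h), h


def stimulated_loss(model, feats, labels, reg_weight=0.1):
    # Standard cross-entropy plus a spatial penalty on the hidden activations.
    logits, h = model(feats)
    ce = F.cross_entropy(logits, labels)
    act = h / (h.sum(dim=1, keepdim=True) + 1e-8)        # normalised activation pattern
    penalty = F.mse_loss(act, model.target_pattern(labels))
    return ce + reg_weight * penalty


# Example usage on random data (shapes only; not real speech features).
model = StimulatedMLP(feat_dim=40, n_classes=100)
feats = torch.randn(8, 40)
labels = torch.randint(0, 100, (8,))
loss = stimulated_loss(model, feats, labels)
loss.backward()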
This paper describes a neural-net based isolated word recogniser that has a better performance on a ...
This paper describes an end-to-end approach to perform keyword spotting with a pre-trained acoustic ...
This paper reports on investigations using two techniques for language model t...
Deep neural networks (DNNs) and deep learning approaches yield state-of-the-art performance in a ran...
Recent works have shown Neural Network based Language Models (NNLMs) to be an effective modeling tec...
Recurrent neural network language models (RNNLMs) have become increasingly popular in many applica...
The development of a speech recognition system requires at least three resources: a large labeled sp...
Many of today's state-of-the-art automatic speech recognition (ASR) systems are based on hybrid hidd...
© 2014 IEEE. Deep neural networks (DNNs) have shown great promise in exploiting out-of-language da...
This paper presents recent progress in developing speech-to-text (STT) and keyword spotting (KWS) sy...
Training deep neural network based Automatic Speech Recognition (ASR) models often requires thousand...