Abstract—In this paper, we develop a parallel structure for the time-delay neural network used in some speech recognition applications. The effectiveness of the design is illustrated by 1) extracting a window computing model from the time-delay neural systems; 2) building its pipelined architecture with parallel or serial processing stages; and 3) applying this parallel window computing to some typical speech recognition systems. An analysis of the proposed design shows greatly reduced complexity while maintaining a high throughput rate.

Index Terms—Parallel computing, pipelined architecture, time-delay neural networks, speech recognition.
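To make the "window computing model" concrete, the following is a minimal sketch of the sliding-window computation performed by a single TDNN layer; the function name, shapes, and window width are illustrative assumptions and do not reproduce the pipelined stages developed in the paper.

import numpy as np

def tdnn_layer(frames, weights, bias, window):
    # frames  : (T, F)           sequence of T feature frames, F features each
    # weights : (window * F, H)  shared weights applied to every window position
    # bias    : (H,)             bias for the H hidden units
    T, F = frames.shape
    outputs = []
    for t in range(T - window + 1):
        # Concatenate the delayed copies of the input frames t, t+1, ..., t+window-1;
        # this per-window product is the stage a parallel or pipelined design would spread
        # across processing elements (an assumption about the mapping, for illustration only).
        x = frames[t:t + window].reshape(-1)
        outputs.append(np.tanh(x @ weights + bias))
    return np.array(outputs)

# Toy usage: 20 frames of 16 spectral features, a 3-frame window, 8 hidden units.
rng = np.random.default_rng(0)
frames = rng.standard_normal((20, 16))
W = rng.standard_normal((3 * 16, 8)) * 0.1
b = np.zeros(8)
print(tdnn_layer(frames, W, b, window=3).shape)  # (18, 8)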
Despite the advances in computer technology which have been witnessed over the decades since Eniac w...
Abstract. It seems obvious that the massively parallel computations inherent in artificial neural ne...
The authors develop a parallel structure for the time-delay neural network used in some speech recog...
The authors propose a scheme that maps a time-delay neural network (TDNN) into a neurocomputer calle...
The thesis of the proposed research is that connectionist networks are adequate models for the probl...
An analog model neural network that can solve a general problem of recognizing patterns in a time-de...
We present a number of Time-Delay Neural Network (TDNN) based architectures for multi-speaker phonem...
This thesis describes the design and implementation of two pattern recognition systems on field-prog...
Research in Automatic Speech Recognition (ASR) has been very intense in recent years with focus give...
For years researchers have worked toward finding a way to allow people to talk to machines in the sa...
This thesis presents the implementation of three VLSI neural network systems: A chip for implementin...
As artificial neural networks (ANNs) gain popularity in a variety of application domains, it is crit...
The problem of speech recognition is one that lends itself to parallelization. A common method used ...
In this paper, the artificial neural networks are implemented to accomplish the English alphabet spe...