In this paper, making use of the signal-flow-graph (SFG) representation and its known properties, we derive a new general method for backward gradient computation of a system output or cost function with respect to past (or present) system parameters. The system can be any causal, in general nonlinear and time-variant, dynamic system represented by an SFG, in particular any feedforward or recurrent neural network. In this work we use discrete-time notation, but the same theory holds for the continuous-time case. The gradient is obtained by the analysis of two SFGs: the original one and its adjoint. This method can be used for both on-line and off-line learning. In the latter case, using the mean square error cost function, our approach particul...
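The two-graph construction above is, in essence, reverse-mode differentiation: signals flow through the original SFG, and gradients flow through the adjoint SFG, which has the same topology with every branch reversed. The following is a minimal sketch of that idea, assuming a toy graph with input, add, mul, and tanh nodes; the node types and names are illustrative, not taken from the paper:

```python
import math

class Node:
    def __init__(self, op, parents=()):
        self.op = op              # 'input', 'add', 'mul', or 'tanh'
        self.parents = parents    # upstream nodes in the original SFG
        self.value = 0.0          # forward signal
        self.grad = 0.0           # adjoint signal (d cost / d value)

def forward(nodes):
    # Original SFG: propagate signals in topological order.
    for n in nodes:
        if n.op == 'add':
            n.value = sum(p.value for p in n.parents)
        elif n.op == 'mul':
            n.value = n.parents[0].value * n.parents[1].value
        elif n.op == 'tanh':
            n.value = math.tanh(n.parents[0].value)

def backward(nodes):
    # Adjoint SFG: same topology with arrows reversed; summing junctions
    # become branch points and vice versa, so gradients add up.
    for n in nodes:
        n.grad = 0.0
    nodes[-1].grad = 1.0          # seed d cost / d output = 1
    for n in reversed(nodes):
        if n.op == 'add':
            for p in n.parents:
                p.grad += n.grad
        elif n.op == 'mul':
            a, b = n.parents
            a.grad += n.grad * b.value
            b.grad += n.grad * a.value
        elif n.op == 'tanh':
            p = n.parents[0]
            p.grad += n.grad * (1.0 - math.tanh(p.value) ** 2)

# Example: cost = tanh(w * x); gradient w.r.t. the parameter w.
w, x = Node('input'), Node('input')
w.value, x.value = 0.5, 2.0
prod = Node('mul', (w, x))
out = Node('tanh', (prod,))
graph = [w, x, prod, out]
forward(graph)
backward(graph)
print(out.value, w.grad)   # w.grad == x * (1 - tanh(w*x)**2)
```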
Error backpropagation in feedforward neural network models is a popular learning algorithm that has ...
In this paper, backpropagation is reinvestigated for an efficient evaluation of the gradient in arb...
A large class of nonlinear dynamic adaptive systems such as dynamic recurrent neural networks can be...
In this paper, we derive a new general method for both on-line and off-line backward gradient comput...
Deriving the appropriate gradient descent algorithm for a new network architecture or ...
We survey learning algorithms for recurrent neural networks with hidden units, and put the various t...
A novel variant of a familiar recurrent network learning algorithm is described. This algorithm is c...
Deriving gradient algorithms for time-dependent neural network structures typic...
This paper introduces a general framework for describing dynamic neural networks: the layer...
Deriving backpropagation algorithms for time-dependent neural network structures typically requires ...
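As a concrete instance of the kind of algorithm such derivations produce, here is a minimal backpropagation-through-time sketch, assuming a scalar recurrent neuron h[t] = tanh(w*h[t-1] + u*x[t]) and a squared-error cost; the model and all names are illustrative, not taken from the cited work:

```python
import math

def bptt(w, u, xs, targets):
    T = len(xs)
    h = [0.0] * (T + 1)           # h[0] is the initial state
    # Forward pass through the unrolled, time-dependent structure.
    for t in range(1, T + 1):
        h[t] = math.tanh(w * h[t - 1] + u * xs[t - 1])
    # Backward pass: the reversed graph carries errors from t = T back to 1.
    dw = du = 0.0
    dh = 0.0                      # gradient arriving at h[t] from the future
    for t in range(T, 0, -1):
        e = h[t] - targets[t - 1]             # local error, d(0.5*e^2)/dh[t]
        delta = (dh + e) * (1.0 - h[t] ** 2)  # back through tanh
        dw += delta * h[t - 1]
        du += delta * xs[t - 1]
        dh = delta * w                        # pass gradient to h[t-1]
    return dw, du

print(bptt(0.3, 0.7, [1.0, 0.5, -0.2], [0.4, 0.1, 0.0]))
```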
We derive two second-order algorithms, based on the conjugate gradient method, for online training o...
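For illustration, a minimal nonlinear conjugate-gradient update in the Fletcher–Reeves form, assuming a generic loss(params) callable that returns both value and gradient; this is a sketch of the general technique with a fixed step size in place of a line search, not the two algorithms referenced above:

```python
import numpy as np

def cg_train(loss, params, steps=50, lr=0.05):
    _, g = loss(params)
    d = -g                                  # initial search direction
    for _ in range(steps):
        params = params + lr * d            # fixed step in lieu of line search
        _, g_new = loss(params)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
        d = -g_new + beta * d               # conjugate direction update
        g = g_new
    return params

# Example: quadratic loss with minimum at [1, -2].
def quad(p):
    r = p - np.array([1.0, -2.0])
    return 0.5 * r @ r, r

print(cg_train(quad, np.array([0.0, 0.0])))
```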