In this paper, we present a formulation of the learning problem that generates deterministic nonmonotone learning behaviour, i.e. the values of the error function are allowed to increase temporarily while learning nevertheless improves progressively. This is achieved by imposing a nonmonotone strategy on the error function values. We present four training algorithms equipped with this nonmonotone strategy and investigate their performance on symbolic sequence processing problems. Experimental results show that introducing a nonmonotone mechanism can improve traditional learning strategies, making them more effective on the sequence problems tested.
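To make the idea concrete, the following is a minimal sketch (not the paper's four algorithms) of a gradient-descent loop whose step-acceptance test is nonmonotone in the spirit of Grippo-style nonmonotone line search: a step is accepted if the new error falls below the maximum error over a sliding window of past iterations, so the error may rise temporarily. The names train_nonmonotone, window and sigma are illustrative assumptions, not identifiers from the paper.

    import numpy as np

    def train_nonmonotone(w, loss, grad, lr=0.5, window=10, sigma=1e-4, iters=500):
        # Keep a history of recent error values; the acceptance test compares
        # a trial step against the WORST of these, not the most recent one,
        # so temporary increases of the error are tolerated.
        history = [loss(w)]
        for _ in range(iters):
            g = grad(w)
            step = lr
            f_ref = max(history[-window:])  # maximum error over the window
            # Backtrack only if even the relaxed (nonmonotone) test fails.
            while loss(w - step * g) > f_ref - sigma * step * np.dot(g, g):
                step *= 0.5
                if step < 1e-12:
                    return w  # no acceptable step found
            w = w - step * g
            history.append(loss(w))
        return w

    # Toy usage on a quadratic error surface (purely illustrative):
    A = np.diag([1.0, 10.0])
    loss = lambda w: 0.5 * w @ A @ w
    grad = lambda w: A @ w
    w_final = train_nonmonotone(np.array([3.0, -2.0]), loss, grad)

Replacing f_ref = max(history[-window:]) with history[-1] recovers an ordinary monotone Armijo-type backtracking rule; the nonmonotone strategy is exactly this relaxation of the reference value.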
In the context of sequence processing, we study the relationship between single-layer feedforward ne...
It is often difficult to predict the optimal neural network size for a particular application. Const...
In this paper, we present nonmonotone variants of the Levenberg–Marquardt (LM) method for training r...
In this paper we propose a nonmonotone approach to recurrent neural networks training for temporal s...
Recurrent networks constitute an elegant way of increasing the capacity of feedforward networks to d...
We present deterministic nonmonotone learning strategies for multilayer perceptrons (MLPs), i.e., de...
We present nonmonotone methods for feedforward neural network training, i.e., training methods in wh...
Sequence learning has a variety of different approaches. We can distinguish two fundamental approach...
Sequence processing involves several tasks such as clustering, classification, prediction, and trans...
In this paper we study nonmonotone learning rules, based on an acceptability criterion for the calcu...
Do you want your neural net algorithm to learn sequences? Do not limit yourself to conventional gra...
This thesis studies the introduction of a priori structure into the design of learning systems based...
We consider the problem of training input-output recurrent neural networks (RNN) for sequence labeli...