We derive global H^∞ optimal training algorithms for neural networks. These algorithms guarantee the smallest possible prediction error energy over all possible disturbances of fixed energy, and are therefore robust with respect to model uncertainties and lack of statistical information on the exogenous signals. The ensuing estimators are infinite-dimensional, in the sense that updating the weight vector estimate requires knowledge of all previous weight estimates. A certain finite-dimensional approximation to these estimators is the backpropagation algorithm. This explains the local H^∞ optimality of backpropagation that has been previously demonstrated.
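The H^∞ criterion invoked in this abstract (and in the LMS abstract below) can be sketched as follows; the notation here (prediction errors e_i, measurement disturbances v_i, unknown weight vector w, initial guess \hat{w}_0, step-size parameter \mu) is our own illustrative choice, not drawn from the paper:

\[
  \inf_{\mathcal{F}} \; \sup_{w,\, v \neq 0} \;
  \frac{\sum_{i} \lvert e_i \rvert^{2}}
       {\mu^{-1} \lVert w - \hat{w}_{0} \rVert^{2} + \sum_{i} \lvert v_i \rvert^{2}}
\]

That is, an H^∞ optimal estimator \mathcal{F} minimizes the worst-case energy gain from the disturbances (initial weight uncertainty plus measurement noise) to the prediction errors, which is exactly the "smallest possible prediction error energy over all possible disturbances of fixed energy" guarantee stated in the abstract.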
The most widely used algorithm for training multilayer feedforward networks, Error BackPropagation ...
We propose BlockProp, a neural network training algorithm. Unlike backpropagation, it does not rely ...
Multilayer Neural Networks (MNNs) are commonly trained using gradient descent-based methods, such as...
We have recently shown that the widely known LMS algorithm is an H∞ optimal estimator. The H∞ crite...
This paper presents some numerical experiments related to a new global "pseudo-backpropagation" algo...
In this paper we explore different strategies to guide the backpropagation algorithm used for training a...
A backpropagation learning algorithm for feedforward neural networks with an adaptive learning rate ...
This report contains some remarks about the backpropagation method for neural net learning. We conce...
One of the most important aspects of any machine learning paradigm is how it scales according to pro...
We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a p...
A new adaptive backpropagation (BP) algorithm based on Lyapunov stability theory for neural networks...
In this paper, we derive and prove the stability bounds of the momentum coefficient µ and the learni...