In this contribution we present results of using possibly inaccurate knowledge of model derivatives as part of the training data for a multilayer perceptron (MLP) network. Even simple constraints offer significant improvements, and the resulting models give better prediction performance than traditional data-driven MLP models.

1 Introduction

An increasingly important application of neural networks is the modeling of nonlinear plants for simulation and control purposes. Often the training must be done with measurements from normal operating situations. Then, depending on the operating statistics during the data collection, many important features of the process behavior may be lacking from the data. An important type of knowledge from any proce...
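The excerpt does not show the paper's actual training formulation. As a rough sketch of the idea only, the following PyTorch snippet augments the usual data-fit loss with a soft penalty that pulls the network's input-output derivative toward prior (possibly inaccurate) slope knowledge. Every concrete name and value here (`lam`, the constraint points `x_d`, the target slope of 2.0) is an illustrative assumption, not taken from the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch: an MLP trained on (x, y) pairs plus soft constraints
# on dy/dx at selected points. The derivative targets may be inaccurate,
# so they enter the loss with a small weight `lam` rather than as hard
# constraints.
mlp = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

# Hypothetical training data: inputs, noisy outputs, and rough prior
# knowledge of the form "the slope is approximately 2 near these points".
x = torch.linspace(0.0, 1.0, 20).unsqueeze(1)
y = 2.0 * x + 0.1 * torch.randn_like(x)
x_d = torch.linspace(0.0, 1.0, 5).unsqueeze(1)   # constraint points (assumed)
dy_target = torch.full_like(x_d, 2.0)            # possibly inaccurate slope prior

lam = 0.1  # weight of the derivative term (assumed value)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    # Ordinary data-fit term.
    loss_data = nn.functional.mse_loss(mlp(x), y)
    # Derivative term: differentiate the network output w.r.t. its input.
    x_req = x_d.clone().requires_grad_(True)
    out = mlp(x_req)
    dy_dx = torch.autograd.grad(out.sum(), x_req, create_graph=True)[0]
    loss_deriv = nn.functional.mse_loss(dy_dx, dy_target)
    (loss_data + lam * loss_deriv).backward()
    opt.step()
```

Because the derivative targets may be wrong, they act here as a soft regularizer: a small `lam` lets the measured data override the prior wherever the two disagree, which matches the abstract's framing of the derivative knowledge as "possibly inaccurate".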