In this paper we present a class of nonlinear neural network models and an associated learning algorithm that always converges to a set of network parameters (e.g., the connection weights) such that the error between the network trajectories and the desired trajectories vanishes, for all initial conditions and system inputs. Our models are the well-known class of additive neural networks. We show that additive networks are one instance of the class of models whose dynamics can be decomposed into gradient and Hamiltonian portions. Furthermore, we use the Hamiltonian potential function to prove that additive networks possess bounded-input bounded-state stability under some minor restrictions on the node output functions. Also we present a condi...
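As a rough illustration of the decomposition this abstract describes (a sketch under assumed dynamics dx/dt = -x + W·tanh(x) + u, not the paper's exact construction): any interconnection matrix W splits uniquely into a symmetric part, which drives gradient-like dynamics, and an antisymmetric part, which drives Hamiltonian-like (energy-conserving) dynamics.

```python
import numpy as np

def decompose(W):
    """Split W into symmetric (gradient) and antisymmetric (Hamiltonian) parts."""
    S = 0.5 * (W + W.T)   # symmetric part: S == S.T
    A = 0.5 * (W - W.T)   # antisymmetric part: A == -A.T
    return S, A

# Illustrative weight matrix for a small additive network (hypothetical values).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
S, A = decompose(W)

assert np.allclose(S + A, W)   # the decomposition is exact
assert np.allclose(S, S.T)     # gradient portion is symmetric
assert np.allclose(A, -A.T)    # Hamiltonian portion is antisymmetric
```

The split is unique: S and A are the orthogonal projections of W onto the subspaces of symmetric and antisymmetric matrices, which is what lets the network dynamics be analyzed as a gradient flow plus a conservative flow.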
A large number of current machine learning methods rely upon deep neural networks. Yet, viewing neur...
We wish to construct a realization theory of stable neural networks and use this theory to model the...
A decomposition approach is developed to analyze fixed-point dynamics in continuous-time neural netw...
This paper deals with a class of large-scale nonlinear dynamical systems, namely the additive neural...
The process of machine learning can be considered in two stages: model selection and parameter estim...
Data-driven approximations of ordinary differential equations offer a promising alternative to class...
Because the dynamics of a neural network with symmetric interactions is similar to a gradient descen...
In designing a neural net, either for biological modeling, cognitive simulation, or numerical comput...
We investigate analog neural networks. They have continuous state variables that depend continuously...
Several learning algorithms have been derived for equilibrium points in recurrent neural networks. I...
UNM Technical Report No. EECE93 001. This report presents a formalism that enables the dynamics of a b...