In this paper we show that neural ODE analogs of recurrent (ODE-RNN) and Long Short-Term Memory (ODE-LSTM) networks can be algorithmically embedded into the class of polynomial systems. This embedding preserves input-output behavior and extends naturally to other neural DE architectures. We then use realization theory of polynomial systems to provide necessary conditions for an input-output map to be realizable by an ODE-LSTM and sufficient conditions for minimality of such systems. These results represent the first steps towards a realization theory of recurrent neural ODE architectures, which is expected to be useful for model reduction and for the analysis of learning algorithms for recurrent neural ODEs.
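The embedding can be illustrated with a minimal sketch. For a scalar ODE-RNN node with tanh activation and a constant input, augmenting the state with z = tanh(ax + bu) turns the dynamics into a polynomial system, since d/dt tanh(s) = (1 - tanh²(s)) ṡ. The parameters a, b, u and the RK4 integrator below are illustrative, not taken from the paper:

```python
import math

def rk4(f, s, dt, n):
    """Classical fourth-order Runge-Kutta integration of s' = f(s)."""
    for _ in range(n):
        k1 = f(s)
        k2 = f([si + dt / 2 * ki for si, ki in zip(s, k1)])
        k3 = f([si + dt / 2 * ki for si, ki in zip(s, k2)])
        k4 = f([si + dt * ki for si, ki in zip(s, k3)])
        s = [si + dt / 6 * (p + 2 * q + 2 * r + w)
             for si, p, q, r, w in zip(s, k1, k2, k3, k4)]
    return s

a, b, u, x0 = -0.8, 0.5, 1.0, 0.2  # toy parameters; u is a constant input

# Original ODE-RNN node: x' = tanh(a*x + b*u)
f_rnn = lambda s: [math.tanh(a * s[0] + b * u)]

# Polynomial embedding: add the state z = tanh(a*x + b*u); then
# x' = z and z' = (1 - z^2) * a * z  (u constant, so u' = 0),
# which is polynomial in (x, z).
f_poly = lambda s: [s[1], (1 - s[1] ** 2) * a * s[1]]

x_rnn = rk4(f_rnn, [x0], 0.01, 500)[0]
x_poly = rk4(f_poly, [x0, math.tanh(a * x0 + b * u)], 0.01, 500)[0]
print(abs(x_rnn - x_poly))  # small: trajectories agree up to integration error
```

The two systems produce the same x-trajectory, which is the input-output preservation property the abstract refers to; the vector-valued and LSTM cases follow the same pattern with one auxiliary state per activation.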