Residual deep neural networks (ResNets) are mathematically described as interacting particle systems. In the limit of infinitely many layers, the ResNet leads to a coupled system of ordinary differential equations known as neural differential equations. For large-scale input data we derive a mean-field limit and show well-posedness of the resulting description. Further, we analyze the existence of solutions to the training process from both a controllability and an optimal control point of view. Numerical investigations based on the solution of a formal optimality system illustrate the theoretical findings.
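The link between discrete ResNets and neural ODEs described above can be illustrated with a minimal sketch (not the paper's model): the residual update x_{k+1} = x_k + h·f(x_k) is a forward Euler step for the ODE dx/dt = f(x(t)), so as the number of layers grows the network output approaches the time-1 flow map. The layer map f, the weights W, b, and the dimension d below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                          # feature (state) dimension, illustrative
W = rng.normal(size=(d, d)) * 0.1
b = np.zeros(d)

def f(x):
    """Right-hand side of the neural ODE: a single tanh layer."""
    return np.tanh(W @ x + b)

def resnet_forward(x0, n_layers):
    """Discrete ResNet with step h = 1/n_layers (deep limit: n_layers -> inf)."""
    h = 1.0 / n_layers
    x = x0.copy()
    for _ in range(n_layers):
        x = x + h * f(x)       # residual connection = explicit Euler step
    return x

x0 = rng.normal(size=d)
# Doubling the depth changes the output only slightly: the iterates
# converge to the solution of the ODE at time t = 1.
diff = np.linalg.norm(resnet_forward(x0, 100) - resnet_forward(x0, 50))
print(diff)
```

Since f is smooth and bounded here, the Euler error is O(h), so the printed difference shrinks as the two depths increase.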
Recently, deep residual networks have been successfully applied in many computer vision and natural ...
This paper addresses the understanding and characterization of residual networ...
We study the optimal control in a long time horizon of neural ordinary differential equations which ...
Neural networks have been very successful in many applications; we often, however, lack a theoretica...
Recently, neural networks (NN) with an infinite number of layers have been introduced. Especially f...
Neural ordinary differential equations (ODEs) have attracted much attention as continuous-time count...
Continuous-depth neural networks can be viewed as deep limits of discrete neural networks whose dyna...
Overparametrization is a key factor in the absence of convexity to explain global convergence of gra...
We investigate the asymptotic properties of deep Residual networks (ResNets) as the number of layers...
Deep ResNets are recognized for achieving state-of-the-art results in complex machine learning tasks...
In this paper, we study a regularised relaxed optimal control problem and, in particular, we are con...
Deep learning has become an important toolkit for data science and artificial intelligence. In contr...
We present a new multilevel minimization framework for the training of deep residual networks (ResNe...
Machine learning, and in particular neural network models, have revolutionized fields such as image,...
In recent years, deep learning has been connected with optimal control as a way to define a notion o...