Improving adversarial robustness of neural networks remains a major challenge. Fundamentally, training a neural network via gradient descent is a parameter estimation problem. In adaptive control, maintaining persistency of excitation (PoE) is integral to ensuring convergence of parameter estimates in dynamical systems to their true values. We show that parameter estimation with gradient descent can be modeled as a sampling of an adaptive linear time-varying continuous system. Leveraging this model, and with inspiration from Model-Reference Adaptive Control (MRAC), we prove a sufficient condition to constrain gradient descent updates to reference persistently excited trajectories converging to the true parameters. The sufficient condition i...
This paper investigates algorithms to automatically adapt the learning rate of neural networks (NNs)...
Neural networks are ubiquitous components of Machine Learning (ML) algorithms. However, training the...
In this paper, based on the deterministic learning mechanism, we present an alternative systematic s...
Training models that are multi-layer or recursive, such as neural networks or dynamical system model...
This paper discusses the stabilizability of artificial neural networks trained by utilizing the gradi...
Abstract: This paper presents a method for stabilizing and robustifying the artificial neural networ...
In this work, a novel and model-based artificial neural network (ANN) training method is developed s...
We study a model for learning periodic signals in recurrent neural networks proposed by Doya and Yos...
One of the main goals of Artificial Intelligence is to develop models capable of providing valuable p...
The paper proposes a general framework which encompasses the training of neural networks and the a...
We propose a computationally-friendly adaptive learning rate schedule, ``AdaLoss", which directly us...
In this chapter, we describe the basic concepts behind the functioning of recurrent neural networks ...
The paper studies a stochastic extension of continuous recurrent neural networks and analyzes gradie...
The ability of feed-forward neural net architectures to learn continuous-valued mappings in the pres...
This work proposes a new learning strategy for training a feedforward neural n...