In this paper, we explore the relationship between distributionally robust learning and different forms of regularization that enforce robustness of deep neural networks. In particular, starting from a concrete min-max distributionally robust problem, and using tools from optimal transport theory, we derive first-order and second-order approximations to the distributionally robust problem in terms of appropriate regularized risk minimization problems. In the context of deep ResNet models, we identify the structure of the resulting regularization problems as mean-field optimal control problems in which the number and dimension of state variables are within a dimension-free factor of the dimension of the original non-robust problem. Using the Pontryagin...
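To make the kind of approximation described above concrete, here is a schematic sketch in generic notation (not the paper's own symbols): $\ell(\theta; z)$ is a per-sample loss assumed differentiable in $z$, $P_n$ the empirical distribution of the data, $W_c$ an optimal-transport distance with cost $c$, and $\varepsilon$ the radius of the ambiguity set. A min-max distributionally robust problem of this type reads

\[
\min_{\theta} \; \sup_{Q \,:\, W_c(Q, P_n) \le \varepsilon} \; \mathbb{E}_{Z \sim Q}\big[\ell(\theta; Z)\big].
\]

In the special case where the transport cost only allows each sample to move within a norm ball of radius $\varepsilon$, a first-order Taylor expansion of the inner supremum yields a gradient-norm-regularized empirical risk:

\[
\mathbb{E}_{Z \sim P_n}\Big[\sup_{\|\delta\| \le \varepsilon} \ell(\theta; Z + \delta)\Big]
\;=\;
\mathbb{E}_{Z \sim P_n}\big[\ell(\theta; Z)\big]
\;+\;
\varepsilon\, \mathbb{E}_{Z \sim P_n}\big[\|\nabla_z \ell(\theta; Z)\|_{*}\big]
\;+\; o(\varepsilon),
\]

where $\|\cdot\|_{*}$ is the dual norm. Which penalty appears (which norm, which exponent, whether second-order correction terms enter) depends on the chosen transport cost and on the order of the expansion; the displayed case is only meant to illustrate how a robust min-max problem collapses to a regularized risk minimization problem for small $\varepsilon$.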
The goal of regression and classification methods in supervised learning is to minimize the empirica...
Deep neural network-based systems are now state-of-the-art in many robotics tasks, but their ap...
Despite superior performance in many situations, deep neural networks are often vulnerable to advers...
A central problem in statistical learning is to design prediction algorithms that not only perform w...
The performance decay experienced by deep neural networks (DNNs) when confronted with distributional...
In the last decade, deep neural networks have achieved tremendous success in many fields of machine ...
The reliability of deep learning algorithms is fundamentally challenged by the existence of adversar...
Recently, smoothing deep neural network-based classifiers via isotropic Gaussian perturbation is show...
Machine learning models, especially deep neural networks, have achieved impressive performance acros...
Deep learning has seen tremendous growth, largely fueled by more powerful computers, the availabilit...
It has been shown that neural network classifiers are not robust. This raises concerns about their u...
Although machine learning has achieved great success in numerous complicated tasks, many machine lea...
Robustness of deep neural networks is a critical issue in practical applicatio...
In this thesis, we study the robustness and generalization properties of Deep Neural Networks (DNNs)...