Regularization plays an important role in machine learning systems. We propose a novel methodology for model regularization using random projection. We demonstrate the technique on neural networks, since such models usually comprise a very large number of parameters and therefore call for strong regularizers. It has been shown recently that neural networks are sensitive to two kinds of samples: (i) adversarial samples, generated by imperceptible perturbations of previously correctly classified samples, which the network nevertheless misclassifies; and (ii) fooling samples, which are completely unrecognizable, yet which the network classifies with extremely high confidence. In this paper, we show how robust neural networks can be trained using r...
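The truncated abstract does not say how the random projection enters the training objective, so the sketch below shows only the random-projection primitive itself (a Johnson-Lindenstrauss style Gaussian projection), not the paper's regularizer; all names and parameters here are illustrative assumptions.

import numpy as np

def random_projection(X, k, seed=0):
    """Project rows of X from d dimensions down to k using a Gaussian
    random matrix, scaled so that squared norms are preserved in
    expectation (Johnson-Lindenstrauss style)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.standard_normal((d, k)) / np.sqrt(k)  # E[||x R||^2] = ||x||^2
    return X @ R

# Toy usage: compress 1000-dimensional features to 50 dimensions.
X = np.random.default_rng(1).standard_normal((8, 1000))
Z = random_projection(X, k=50)
print(Z.shape)  # (8, 50)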
We prove theoretical guarantees for an averaging-ensemble of randomly projected Fisher linear discri...
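For context, an averaging ensemble of randomly projected Fisher linear discriminants typically fits one discriminant per random projection and averages the resulting decision scores. The sketch below illustrates that construction under assumptions of our own (scikit-learn's LinearDiscriminantAnalysis as the base learner, decision-function averaging), not the exact estimator analyzed in the paper.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def rp_fld_ensemble(X_train, y_train, X_test, k=5, n_members=25, seed=0):
    """Average the decision scores of Fisher linear discriminants, each
    fitted on a different k-dimensional Gaussian random projection."""
    rng = np.random.default_rng(seed)
    d = X_train.shape[1]
    scores = np.zeros(len(X_test))
    for _ in range(n_members):
        R = rng.standard_normal((d, k)) / np.sqrt(k)  # fresh projection per member
        lda = LinearDiscriminantAnalysis().fit(X_train @ R, y_train)
        scores += lda.decision_function(X_test @ R)
    return (scores / n_members > 0).astype(int)  # averaged binary decision

# Toy usage on a separable two-class problem with n < d.
rng = np.random.default_rng(1)
X0, X1 = rng.normal(0, 1, (50, 100)), rng.normal(1, 1, (50, 100))
X, y = np.vstack([X0, X1]), np.array([0] * 50 + [1] * 50)
print(rp_fld_ensemble(X, y, X[:5], k=5))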
Machine learning algorithms are designed to learn from data and to use data to perform predictions a...
In this paper, we explore the relation between distributionally robust learning and different forms ...
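The specific relation explored there is cut off, but one standard instance of the connection, stated here from general knowledge rather than from this paper, is the exact equivalence between Wasserstein distributionally robust linear prediction with a Lipschitz loss and norm regularization:

\[
\sup_{Q:\, W_1(Q,\widehat{P}_n)\le \rho}\;
\mathbb{E}_{(x,y)\sim Q}\big[\ell(\theta^\top x,\, y)\big]
\;=\;
\frac{1}{n}\sum_{i=1}^{n}\ell(\theta^\top x_i,\, y_i)
\;+\;
\rho\,\mathrm{Lip}(\ell)\,\lVert\theta\rVert_{*},
\]

where \(\widehat{P}_n\) is the empirical distribution, \(W_1\) a type-1 Wasserstein distance that moves mass only in the features, and \(\lVert\cdot\rVert_{*}\) the dual of the norm defining the transport cost. The identity holds under conditions on the loss (e.g., for logistic regression); this paper may study a different or more general form of the relation.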
Recently, smoothing deep neural network based classifiers via isotropic Gaussian perturbation is show...
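This appears to describe randomized smoothing (in the style of Cohen et al., 2019): classify by the majority vote of the base classifier over Gaussian-perturbed copies of the input. Below is a minimal Monte-Carlo sketch of the smoothed classifier, with a generic base_classifier and hypothetical parameter names.

import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, n_classes=10, seed=0):
    """Classify x with the Gaussian-smoothed classifier
    g(x) = argmax_c P[f(x + eps) = c], eps ~ N(0, sigma^2 I),
    estimated by Monte Carlo over n_samples perturbations."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        counts[base_classifier(noisy)] += 1
    return counts.argmax()

# Toy usage: a dummy base classifier that thresholds the mean pixel.
f = lambda z: int(z.mean() > 0)
x = np.full(784, 0.1)
print(smoothed_predict(f, x, sigma=0.25, n_samples=200, n_classes=2))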
This paper investigates the theory of robustness against adversarial attacks. ...
Deep learning has seen tremendous growth, largely fueled by more powerful computers, the availabilit...
Neumann K, Emmerich C, Steil JJ. Regularization by Intrinsic Plasticity and its Synergies with Recur...
The performance decay experienced by deep neural networks (DNNs) when confronted with distributional...
Adversarial training (AT) is currently one of the most successful methods to obtain the adversarial ...
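As background, adversarial training replaces clean examples with worst-case perturbations found by an inner attack before each parameter update. The sketch below is one simple variant (an FGSM inner attack in PyTorch, with a toy model and random data standing in for a real setup), not the specific AT method this abstract refers to.

import torch
import torch.nn as nn

def fgsm_adv_train_step(model, loss_fn, opt, x, y, eps=0.1):
    """One adversarial training step: perturb x by the sign of the
    input gradient (FGSM inner attack), then update on the adversarial batch."""
    x_adv = x.clone().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv + eps * grad.sign()).detach()  # worst-case perturbation
    opt.zero_grad()
    adv_loss = loss_fn(model(x_adv), y)
    adv_loss.backward()
    opt.step()
    return adv_loss.item()

# Toy usage with random data.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 20), torch.randint(0, 2, (64,))
print(fgsm_adv_train_step(model, nn.CrossEntropyLoss(), opt, x, y))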
Traditional machine learning operates under the assumption that training and testing data are drawn ...
The reliability of deep learning algorithms is fundamentally challenged by the existence of adversar...
One of the main goals of Artificial Intelligence is to develop models capable of providing valuable p...
Deep neural networks have proven remarkably effective at solving many classification problems, but h...
Modern machine learning (ML) algorithms are being applied today to a rapidly increasing number of ta...
Although machine learning has achieved great success in numerous complicated tasks, many machine lea...