We propose the first general PAC-Bayesian generalization bounds for adversarial robustness, which estimate, at test time, how invariant a model is to imperceptible perturbations of its input. Instead of deriving a worst-case analysis of the risk of a hypothesis over all possible perturbations, we leverage the PAC-Bayesian framework to bound the risk averaged over perturbations for majority votes (over the whole class of hypotheses). Our theoretically founded analysis has the advantage of providing general bounds (i) that are valid for any kind of attack (i.e., any adversarial perturbation), (ii) that are tight thanks to the PAC-Bayesian framework, and (iii) that can be directly minimized during the learning phase ...
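The abstract above contrasts a worst-case analysis with a risk averaged over perturbations for a majority vote. As a minimal toy sketch of that averaged notion only (the data, classifiers, and uniform perturbation model here are hypothetical illustrations, not the paper's construction or bound):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D points with labels in {-1, +1}.
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.3 * X[:, 1])
y[y == 0] = 1

# A stand-in "posterior" over hypotheses: a few noisy linear classifiers.
weights = np.array([1.0, 0.3]) + 0.1 * rng.normal(size=(10, 2))

def majority_vote(points):
    """Predict by the sign of the average vote of all classifiers."""
    votes = np.sign(points @ weights.T)      # shape (n, 10)
    return np.sign(votes.mean(axis=1))

# Standard (unperturbed) risk of the majority vote.
risk = float(np.mean(majority_vote(X) != y))

# Averaged adversarial risk: error rate under random perturbations of
# radius eps, averaged over draws -- instead of maximizing over them.
eps = 0.1
perturbed_risks = [
    float(np.mean(majority_vote(X + rng.uniform(-eps, eps, size=X.shape)) != y))
    for _ in range(50)
]
avg_adv_risk = float(np.mean(perturbed_risks))

print(risk, avg_adv_risk)
```

The averaged risk is what a PAC-Bayesian analysis can bound directly, whereas a worst-case analysis would instead search for the single perturbation maximizing the error.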
We provide two main contributions in PAC-Bayesian theory for domain adaptation...
PAC-Bayesian bounds are known to be tight and informative when studying the ge...
Adversarial Training has been proven to be an efficient method for defending against adversarial examples, bei...
We study the robustness of Bayesian inference with Gaussian processes (GP) under adversarial attack ...
This paper investigates the theory of robustness against adversarial attacks. ...
Recent research in robust optimization has shown an overfitting-like phenomenon in which models trai...
Risk bounds, which are also called generalisation bounds in the statistical learning literature, are...
PAC-Bayes bounds have been proposed to get risk estimates based on a training sample. In this paper...
PAC-Bayesian learning bounds are of the utmost interest to the learning commun...
We study the issue of PAC-Bayesian domain adaptation: We want to learn, from a source domain, a ma...