We study the robustness of Bayesian inference with Gaussian processes (GP) under adversarial attack settings. We begin by observing that the distinction between model prediction and model decision in Bayesian settings naturally leads to two different notions of adversarial robustness. The first, probabilistic adversarial robustness, concerns the behaviour of the posterior distribution, formally characterising its worst-case uncertainty under attack. The second, adversarial robustness of the decision, concerns the local stability of the model decision and is tightly linked to bounds on the model's predictive posterior distribution. In the first part of this thesis we show how, by relying on the Borell-TIS inequality, the computa...
Protecting ML classifiers from adversarial examples is crucial. We propose that the main threat is a...
Gaussian processes (GP) are a widely-adopted tool used to sequentially optimize black-box functions,...
The wide usage of Machine Learning (ML) has led to research on the attack vectors and vulnerability...
Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive ...
Bayesian inference and Gaussian processes are widely used in applications ranging from robotics and ...
Bayesian machine learning (ML) models have long been advocated as an important tool for safe artific...
We investigate adversarial robustness of Gaussian Process classification (GPC) models. Specifically,...
Machine learning models are vulnerable to adversarial examples: minor perturbations to input samples...
In this paper, we consider the problem of Gaussian process (GP) optimization with an added robustnes...
Gaussian process models constitute a class of probabilistic statistical models in which a Gaussian p...
This thesis puts forward methods for computing local robustness of probabilistic neural networks, s...