Bayesian machine learning (ML) models have long been advocated as an important tool for safe artificial intelligence. Yet, little is known about their vulnerability to adversarial attacks. Such attacks aim to cause undesired model behaviour (e.g. misclassification) by crafting small perturbations to regular inputs that appear insignificant to humans (e.g. slight blurring of image data). This fairly recent phenomenon has undermined the suitability of many ML models for deployment in safety-critical applications. In this thesis, we investigate how robust Bayesian ML models are against adversarial attacks, focussing on Gaussian process (GP) and Bayesian neural network (BNN) classification models. In particular, for GP classifica...
Machine learning (ML) classification is increasingly used in safety-critical systems. Protecting ML ...
Machine learning is used in myriad aspects, both in academic research and in everyday life, includin...
A well-trained neural network is very accurate when classifying data into different categories. Howe...
Machine learning models are vulnerable to adversarial examples: minor perturbations to input samples...
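To make the idea of such a minor perturbation concrete, the sketch below crafts one with a fast-gradient-sign-style step against a simple logistic-regression classifier; the weights, input, label, and epsilon are illustrative assumptions, not details taken from the abstract above.

```python
import numpy as np

# Minimal FGSM-style sketch on a logistic-regression "model".
# All quantities here are illustrative assumptions.

rng = np.random.default_rng(0)
w = rng.normal(size=10)          # model weights (assumed, untrained)
b = 0.0
x = rng.normal(size=10)          # a "clean" input sample
y = 1.0                          # its true label

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Gradient of the cross-entropy loss w.r.t. the input:
# for logistic regression, dL/dx = (p - y) * w.
grad_x = (predict_proba(x) - y) * w

# FGSM step: move in the sign of the gradient, bounded by epsilon
# in the L-infinity norm, so the perturbation stays "minor".
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:     ", predict_proba(x))
print("adversarial prediction:", predict_proba(x_adv))
```

The L-infinity bound on the step is what keeps the perturbation small from a human perspective while still pushing the model's output in the loss-increasing direction.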
We study the robustness of Bayesian inference with Gaussian processes (GP) under adversarial attack ...
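As a rough illustration of what probing GP robustness can look like (not the method of the work above), the following sketch fits a scikit-learn GP classifier on toy data and measures how much its predictive probability shifts under sampled perturbations of a test point; the data, kernel, test point, and epsilon are all assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Illustrative probe: how stable is a GP classifier's predictive
# probability inside a small L-infinity ball around a test point?

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # toy labels (assumed)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)

x = np.array([[0.1, 0.1]])                       # test point near the boundary
eps = 0.05
deltas = rng.uniform(-eps, eps, size=(100, 2))   # sampled perturbations in B_eps(x)

p_clean = gpc.predict_proba(x)[0, 1]
p_perturbed = gpc.predict_proba(x + deltas)[:, 1]

print("clean p(class=1):   ", p_clean)
print("worst sampled shift:", np.max(np.abs(p_perturbed - p_clean)))
```

Sampling perturbations only gives an optimistic, empirical picture; certified robustness requires bounding the prediction over the entire ball rather than a finite sample.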
This thesis puts forward methods for computing local robustness of probabilistic neural networks, s...
It is widely known that state-of-the-art machine learning models — including vision and language mod...
Bayesian inference and Gaussian processes are widely used in applications ranging from robotics and ...
We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the ...
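The definition is truncated above, so the sketch below should be read as one plausible formulation rather than the paper's measure: the posterior probability that a BNN's prediction at a test point is unchanged across a small perturbation ball, estimated by Monte Carlo over posterior weight samples and sampled perturbations. The toy "network", posterior samples, and parameter values are all assumptions.

```python
import numpy as np

# Illustrative sketch (an assumed formulation, not the paper's definition):
# estimate  P_{w ~ posterior}[ f_w(x') = f_w(x) for all x' in B_eps(x) ]
# by Monte Carlo over posterior samples and sampled perturbations.

rng = np.random.default_rng(1)
d = 5                                              # input dimension (assumed)
n_posterior, n_perturb, eps = 200, 50, 0.05

x = rng.normal(size=d)                             # test input (assumed)
posterior_w = rng.normal(size=(n_posterior, d))    # stand-in for BNN posterior samples

def classify(w, x):
    # A one-layer "network": the sign of a linear score stands in for the label.
    return np.sign(w @ x)

robust_count = 0
for w in posterior_w:
    label = classify(w, x)
    # Sampled check over the L-infinity ball; this is optimistic, since a
    # finite sample can miss the worst-case perturbation.
    deltas = rng.uniform(-eps, eps, size=(n_perturb, d))
    if all(classify(w, x + delta) == label for delta in deltas):
        robust_count += 1

print("estimated probabilistic robustness:", robust_count / n_posterior)
```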
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learnin...
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer...
Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neu...
We present a new algorithm to train a robust malware detector. Malware is a prolific problem and mal...