Machine learning models are vulnerable to adversarial examples: minor perturbations to input samples deliberately crafted to cause misclassification. Many defenses have led to an arms race; we therefore study a promising recent trend in this setting, Bayesian uncertainty measures. These measures allow a classifier to provide principled confidence and uncertainty for an input, where the latter refers to how unusual the input is. We focus on Gaussian processes (GP), a classifier providing such principled uncertainty and confidence measures. Using correctly classified benign data as a baseline, GP's intrinsic uncertainty and confidence deviate for misclassified benign samples and misclassified adversarial examples. We therefore introduce high-confid...
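The abstract above describes using a GP's intrinsic confidence and uncertainty as signals that deviate for suspicious inputs. A minimal sketch of that idea, under illustrative assumptions (a toy two-blob dataset, an RBF kernel, GP regression on ±1 labels so the predictive mean acts as confidence and the predictive standard deviation as uncertainty):

```python
# Hedged sketch: GP regression on +/-1 class labels, where |predictive mean|
# serves as a confidence score and the predictive std as an uncertainty score.
# The dataset, kernel, and test points are illustrative assumptions, not the
# paper's actual setup.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Two benign Gaussian blobs with labels -1 and +1.
X_train = np.vstack([rng.normal(-2, 0.5, (30, 2)), rng.normal(2, 0.5, (30, 2))])
y_train = np.array([-1.0] * 30 + [1.0] * 30)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(X_train, y_train)

def confidence_and_uncertainty(x):
    mean, std = gp.predict(np.atleast_2d(x), return_std=True)
    return abs(mean[0]), std[0]  # |mean| ~ confidence, std ~ uncertainty

# A point inside a training blob vs. a far-away, out-of-distribution point.
conf_in, unc_in = confidence_and_uncertainty([2.0, 2.0])
conf_out, unc_out = confidence_and_uncertainty([10.0, -10.0])

# Far from the data, the GP reverts to its prior: the mean collapses toward 0
# (low confidence) and the std grows toward the prior value (high uncertainty).
print(conf_in > conf_out, unc_in < unc_out)
```

Thresholding these two scores against their values on correctly classified benign data is one way to flag inputs whose confidence or uncertainty deviates, in the spirit of the detection approach sketched above.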
How and when can we depend on machine learning systems to make decisions for human beings? This is pr...
Machine learning and artificial intelligence will be deeply embedded in the intelligent systems huma...
The classification performance of deep neural networks has begun to asymptote at near-perfect levels...
Bayesian machine learning (ML) models have long been advocated as an important tool for safe artific...
We study the robustness of Bayesian inference with Gaussian processes (GP) under adversarial attack ...
Model uncertainty has gained popularity in machine learning due to the overconfident predictions de...
Protecting ML classifiers from adversarial examples is crucial. We propose that the main threat is a...
Assessing uncertainty is an important step towards ensuring the safety and reliability of machine le...
We propose a novel method to capture data points near the decision boundary of a neural network that are ...
Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive ...
The past decade of artificial intelligence and deep learning has made tremendous progress in highly ...
Machine Learning is a powerful tool to reveal and exploit correlations in a multi-dimension...
Uncertainty estimation (UE) techniques -- such as the Gaussian process (GP), Bayesian neural network...
This paper presents a novel framework for image classification which comprises a convolutional neura...