We present a latent variable generalisation of neural network softmax classification trained with cross-entropy loss, referred to as variational classification (VC). Our approach offers a novel probabilistic perspective on the highly familiar softmax classification model, relating to it as the variational autoencoder relates to the traditional autoencoder. We derive a training objective based on the evidence lower bound (ELBO) that is non-trivial to optimise, and therefore propose an adversarial approach to maximise it. We show that VC addresses an inherent inconsistency within softmax classification, whilst also allowing more flexible choices of prior distributions in the latent space in place of implicit assumptions revealed within off-th...
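A minimal sketch of this latent-variable view (not the authors' implementation): the deterministic feature vector feeding a softmax layer is replaced by a sample from an approximate posterior q(z|x), and the cross-entropy term becomes a one-sample estimate of E_q[log p(y|z)], regularised towards a latent prior. The unit-Gaussian prior, single Monte Carlo sample, analytic KL term, and layer sizes below are all illustrative assumptions; the paper itself maximises its ELBO adversarially rather than with this closed-form penalty.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalClassifier(nn.Module):
    """Softmax classifier with a stochastic latent representation z."""

    def __init__(self, in_dim, latent_dim, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)       # mean of q(z|x)
        self.log_var = nn.Linear(128, latent_dim)  # log-variance of q(z|x)
        self.classifier = nn.Linear(latent_dim, num_classes)  # p(y|z) via softmax

    def forward(self, x):
        h = self.encoder(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterised sample z ~ q(z|x), as in a VAE encoder.
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()
        return self.classifier(z), mu, log_var

def elbo_loss(logits, y, mu, log_var):
    # Negative ELBO: one-sample estimate of -E_q[log p(y|z)]
    # plus KL(q(z|x) || N(0, I)); an assumed stand-in for the
    # adversarially optimised objective described in the abstract.
    nll = F.cross_entropy(logits, y)
    kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(dim=1).mean()
    return nll + kl

# Usage: logits, mu, log_var = model(x); loss = elbo_loss(logits, y, mu, log_var)
```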
The Variational AutoEncoder (VAE) simultaneously learns an inference and a generative model, but onl...
In this paper, based on an asymptotic analysis of the Softmax layer, we show that when training neur...
In spite of the dominant performance of deep neural networks, recent works have shown that they are...
Adversarial examples easily mislead vision systems based on deep neural networks (DNNs) trained with...
Variational autoencoders (VAE) have recently become one of the most interesting developments in deep...
Hebbian plasticity in winner-take-all (WTA) networks is highly attractive for neuromorphic on-chip l...
Bayesian Neural Networks (BNNs) are trained to optimize an entire distribution over their weights in...
In the field of pattern classification, the training of convolutional neural network classifiers is ...
The central objective function of a variational autoencoder (VAE) is its variational lower bound (th...
Clustering algorithms are an important part of modern data analysis. The K-means and EM clustering a...
This paper was accepted for publication in Machine Learning (Springer). Overfitting data is a well-k...
Enabling machine learning classifiers to defer their decision to a downstream expert when the expert...
We present an approach to training classifiers or regressors using the latent embedding of variation...