This post contains the code and the few-shot benchmarks Omniglot and Mini-ImageNet for our paper submitted to ICLR 2022, "Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning". We also provide the main paper as well as the supplementary material. Abstract: This work addresses meta-learning (ML) by considering deep networks with stochastic local winner-takes-all (LWTA) activations. This type of network units results in sparse representations from each model layer, as the units are organized into blocks where only one unit generates a non-zero output. The main operating principle of the introduced units lies in stochastic arguments, as the network performs posterior sampling over competing units to s...
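To make the block-competition idea concrete, below is a minimal, illustrative sketch of a stochastic LWTA activation in PyTorch. It is not the released implementation; the class name `StochasticLWTA`, the `block_size` and `temperature` parameters, and the use of the Gumbel-softmax relaxation for sampling the per-block winner are assumptions made for this example only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticLWTA(nn.Module):
    """Illustrative stochastic local winner-takes-all activation.

    Hidden units are grouped into blocks of `block_size` competitors.
    Within each block, a single "winner" is sampled from a categorical
    distribution driven by the units' linear pre-activations; only the
    winner passes its value through, all other units emit zero.
    """

    def __init__(self, block_size: int = 2, temperature: float = 0.67):
        super().__init__()
        self.block_size = block_size
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, features = x.shape
        assert features % self.block_size == 0, "features must be divisible by block_size"
        # Reshape to (batch, num_blocks, block_size) so competition is per block.
        x = x.view(batch, -1, self.block_size)
        if self.training:
            # Sample the winner with the Gumbel-softmax relaxation so the
            # sampling step remains differentiable during training.
            winners = F.gumbel_softmax(x, tau=self.temperature, hard=True, dim=-1)
        else:
            # At test time, pick the most probable competitor deterministically.
            winners = F.one_hot(x.argmax(dim=-1), num_classes=self.block_size).to(x.dtype)
        # Zero out all losing units; the winner keeps its linear output.
        return (x * winners).view(batch, features)


# Usage: a small layer with LWTA blocks of two competing units each.
layer = nn.Sequential(nn.Linear(64, 32), StochasticLWTA(block_size=2))
h = layer(torch.randn(8, 64))  # per sample, one unit per block of two is non-zero
```

The sketch only illustrates the sparsity pattern described in the abstract (one non-zero output per block, selected by sampling); the paper's actual posterior over winners and its training objective are given in the main text and supplementary material.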