Shallow supervised 1-hidden layer neural networks have a number of favorable properties that make them easier to interpret, analyze, and optimize than their deep counterparts, but lack their representational power. Here we use 1-hidden layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks. Contrary to previous approaches using shallow networks, we focus on problems where deep learning is reported as critical for success. We thus study CNNs on image classification tasks using the large-scale ImageNet dataset and the CIFAR-10 dataset. Using a simple set of ideas for architecture and training we find that solving sequential 1-hidden-layer auxiliary p...
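The procedure described above, building a deep CNN by solving a sequence of 1-hidden-layer auxiliary problems and freezing each trained layer before moving to the next, can be illustrated with a minimal sketch. The sketch below assumes PyTorch and a generic `train_loader`; the block widths, auxiliary classifier head, and training schedule are placeholder assumptions for illustration, not the configuration reported by the authors.

```python
# Minimal sketch of greedy layer-wise training with 1-hidden-layer
# auxiliary problems (illustrative; hyperparameters are assumptions).
import torch
import torch.nn as nn

def train_stage(frozen, block, head, loader, epochs=1, lr=1e-3, device="cpu"):
    """Train one conv block plus its auxiliary linear classifier,
    keeping all previously trained blocks frozen."""
    frozen.to(device).eval()
    block.to(device)
    head.to(device)
    opt = torch.optim.SGD(list(block.parameters()) + list(head.parameters()),
                          lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():          # frozen earlier layers only provide features
                feats = frozen(x)
            logits = head(block(feats))    # the 1-hidden-layer auxiliary problem
            opt.zero_grad()
            loss_fn(logits, y).backward()  # gradients reach only the current block and head
            opt.step()
    return block

def greedy_layerwise(loader, num_classes=10, widths=(64, 128, 256), in_ch=3, device="cpu"):
    """Build the network depth-first: solve a shallow auxiliary problem per
    stage, then freeze the new block and continue."""
    frozen = nn.Sequential()  # grows by one trained block per stage (identity when empty)
    ch = in_ch
    for w in widths:
        block = nn.Sequential(nn.Conv2d(ch, w, 3, padding=1), nn.BatchNorm2d(w),
                              nn.ReLU(), nn.AvgPool2d(2))
        head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(w, num_classes))
        train_stage(frozen, block, head, loader, device=device)
        for p in block.parameters():       # freeze the trained block
            p.requires_grad_(False)
        frozen = nn.Sequential(*frozen, block)
        ch = w
    return frozen  # final feature extractor; a linear classifier on top gives predictions
```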
Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (so...
Adversarial training has been shown to regularize deep neural networks in addition to increasing the...
Evidence is mounting that CNNs are currently the most efficient and successful way to learn visual r...
We propose a new method for creating computationally efficient convolutional neural networks (CNNs) ...
We present a neural network architecture and a training algorithm designed to enable very rapid trai...
We give algorithms with provable guarantees that learn a class of deep nets in the generative m...
In this paper, we propose a novel approach for efficient training of deep neural networks in a botto...
Deep (machine) learning in recent years has significantly increased the predictive modeling strength...
This research project investigates the role of key factors that led to the resurgence of deep CNNs ...
This thesis presents two principled approaches to improve the performance of convolutional neural ne...
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution im...
This dissertation is on the analysis and applications of a constructive architecture for training De...
Classification performance based on ImageNet is the de-facto standard metric for CNN development. In...