Network in network (NiN) is an effective instance and an important extension of the deep convolutional neural network, which consists of alternating convolutional and pooling layers. Instead of using a linear filter for convolution, NiN employs a shallow multilayer perceptron (MLP), a nonlinear function, in its place. Because of the expressive power of the MLP and of 1 x 1 convolutions in the spatial domain, NiN has a stronger feature-representation ability and hence yields better recognition performance. However, the MLP itself consists of fully connected layers, which give rise to a large number of parameters. In this paper, we propose to replace the dense shallow MLP with a sparse shallow MLP. One or more layers of the sparse shallow MLP are s...
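To make the mlpconv idea concrete, the following is a minimal sketch of an NiN-style block: a spatial convolution followed by 1 x 1 convolutions, which act as a shallow MLP applied at every spatial position across the channel dimension. The framework (PyTorch) and all layer sizes here are illustrative assumptions, not the configuration used in this paper.

```python
# Minimal sketch of an NiN-style "mlpconv" block (hypothetical sizes; PyTorch assumed).
import torch
import torch.nn as nn

class MLPConvBlock(nn.Module):
    def __init__(self, in_channels, mid_channels, out_channels, kernel_size=3):
        super().__init__()
        self.block = nn.Sequential(
            # Ordinary spatial convolution (the "linear filter").
            nn.Conv2d(in_channels, mid_channels, kernel_size, padding=kernel_size // 2),
            nn.ReLU(inplace=True),
            # 1 x 1 convolutions: a per-pixel fully connected layer over channels,
            # i.e. the shallow MLP that replaces the linear filter in NiN.
            nn.Conv2d(mid_channels, mid_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, out_channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

if __name__ == "__main__":
    x = torch.randn(1, 3, 32, 32)        # e.g. a CIFAR-sized input (illustrative)
    y = MLPConvBlock(3, 96, 96)(x)
    print(y.shape)                        # torch.Size([1, 96, 32, 32])
```

Note that each 1 x 1 layer above is dense across channels, costing roughly in_channels x out_channels weights per layer; these dense connections are exactly the parameters that the sparse shallow MLP proposed in this paper is intended to reduce.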