When designing Convolutional Neural Networks (CNNs), one must select the size of the convolutional kernels before training. Recent works show CNNs benefit from different kernel sizes at different layers, but exploring all possible combinations is unfeasible in practice. A more efficient approach is to learn the kernel size during training. However, existing works that learn the kernel size have a limited bandwidth. These approaches scale kernels by dilation, and thus the detail they can describe is limited. In this work, we propose FlexConv, a novel convolutional operation with which high bandwidth convolutional kernels of learnable kernel size can be learned at a fixed parameter cost. FlexNets model long-term dependencies without the use o...
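The abstract's central claim — a high-bandwidth kernel whose size is learned during training at a fixed parameter cost — can be illustrated with a short sketch. The PyTorch snippet below is only an illustration loosely following the abstract's description (a continuously parameterized kernel cropped by a learnable Gaussian window), not the paper's actual implementation; all names (CoordinateKernel, FlexibleConv2d, log_sigma) are hypothetical.

```python
# Sketch: a small MLP maps relative coordinates to kernel values, and a Gaussian
# mask with a learnable width sets the effective kernel size. The parameter count
# is fixed by the MLP and independent of the sampled kernel resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoordinateKernel(nn.Module):
    """Generates a [out_ch, in_ch, k, k] kernel from 2-D relative coordinates."""

    def __init__(self, in_channels, out_channels, hidden=32):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, in_channels * out_channels),
        )
        # Learnable mask width: controls the effective kernel size.
        self.log_sigma = nn.Parameter(torch.zeros(1))

    def forward(self, kernel_size):
        # Relative coordinates in [-1, 1], sampled at the requested resolution.
        coords = torch.linspace(-1.0, 1.0, kernel_size, device=self.log_sigma.device)
        yy, xx = torch.meshgrid(coords, coords, indexing="ij")
        grid = torch.stack([yy, xx], dim=-1)                     # [k, k, 2]
        weights = self.mlp(grid)                                 # [k, k, in*out]
        weights = weights.permute(2, 0, 1).reshape(
            self.out_channels, self.in_channels, kernel_size, kernel_size)
        # Gaussian mask: a larger sigma widens the effective kernel support.
        sigma = self.log_sigma.exp()
        mask = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return weights * mask


class FlexibleConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=33):
        super().__init__()
        self.kernel_size = kernel_size
        self.kernel = CoordinateKernel(in_channels, out_channels)

    def forward(self, x):
        w = self.kernel(self.kernel_size)
        return F.conv2d(x, w, padding=self.kernel_size // 2)


# Usage: parameters live in the MLP and the mask width, so the cost does not
# grow with kernel_size; training adjusts log_sigma to shrink or grow the kernel.
layer = FlexibleConv2d(3, 16, kernel_size=33)
out = layer(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 16, 64, 64])
```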
Transformers have quickly shined in the computer vision world since the emergence of Vision Transfor...
In this work, we establish the relation between optimal control and training deep Convolution Neural...
Over-parameterized residual networks (ResNets) are amongst the most successful convolutional neural ...
Determining kernel sizes of a CNN model is a crucial and non-trivial design choice and significantly...
Convolutional neural networks (CNNs) are currently state-of-the-art for various classification tasks...
Many deep neural networks are built by using stacked convolutional layers of fixed and single size (...
We revisit large kernel design in modern convolutional neural networks (CNNs). Inspired by recent ad...
The convolution operation is a central building block of neural network architectures widely used in...
Convolutional neural networks, as most artificial neural networks, are frequently viewed as methods ...
An important goal in visual recognition is to devise image representations that are invariant to par...
Several methods of normalizing convolution kernels have been proposed in the literature to train con...
The Fourier domain is used in computer vision and machine learning as image analysis tasks in the Fo...
In this book, I perform an experimental review on twelve similar types of Convolutional Neural Netwo...
As the pixel resolution of imaging equipment has grown larger, the images’ sizes and the number of p...