Convolutional neural networks, like most artificial neural networks, are frequently viewed as methods essentially different from kernel-based methods. In this work we translate several classical convolutional neural networks into kernel-based counterparts. Each counterpart is a statistical model, called a convolutional kernel network, whose parameters can be learned from data. We provide an alternating minimization algorithm with mini-batch sampling and implicit partial differentiation to learn the parameters of each convolutional kernel network from data. We also show how to obtain inexact derivatives with respect to the parameters using an algorithm based on two intertwined Newton iterations. The models and the algorithms are...
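The abstract above mentions alternating minimization with mini-batch sampling. As a rough illustration of that optimization pattern only, here is a minimal sketch on a toy bilinear least-squares model; the model, sizes, and learning rate are my own assumptions and are not the paper's convolutional kernel network or its actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a two-block model: y ≈ X @ U @ v, where U and v play
# the role of the two parameter blocks that are optimized alternately.
# (Hypothetical example, not the paper's model.)
n, d, k = 512, 8, 4
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=(d, k)) @ rng.normal(size=k)

U = rng.normal(size=(d, k))
v = rng.normal(size=k)
lr, batch = 0.01, 32

def loss(U, v):
    r = X @ U @ v - y
    return 0.5 * np.mean(r ** 2)

start = loss(U, v)
for epoch in range(100):
    idx = rng.permutation(n)          # mini-batch sampling
    for s in range(0, n, batch):
        b = idx[s:s + batch]
        # Block 1: gradient step in U with v held fixed.
        r = X[b] @ U @ v - y[b]
        U -= lr * np.outer(X[b].T @ r, v) / len(b)
        # Block 2: gradient step in v with U held fixed.
        r = X[b] @ U @ v - y[b]
        v -= lr * (X[b] @ U).T @ r / len(b)

assert loss(U, v) < start             # alternating steps reduce the loss
```

Each outer iteration freezes one parameter block while taking stochastic gradient steps on the other, which is the basic structure of an alternating minimization scheme with mini-batches.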
This chapter introduces a powerful class of machine learning approaches called...
This thesis provides an introduction to classical and convolutional neural networks. It describes ho...
As research attention in deep learning has been focusing on pushing empirical results to a higher pe...
An important goal in visual recognition is to devise image representations that are invariant to par...
In this paper, we introduce a new image representation based on a multilayer k...
Several methods of normalizing convolution kernels have been proposed in the literature to train con...
When designing Convolutional Neural Networks (CNNs), one must select the size of the convolutional k...
Many deep neural networks are built by using stacked convolutional layers of fixed and single size (...
This thesis deals with convolutional neural networks, a class of deep neural networks that are ...
A recent line of work showed that various forms of convolutional kernel method...
In recent years, convolutional neural networks have been studied in the Fourier domain for a limited...
In artificial intelligence, the convolutional neural network has been the most widely used machine learn...
PCANet is an unsupervised Convolutional Neural Network (CNN), which uses Principal Component Analysi...
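The PCANet abstract above describes an unsupervised CNN whose convolutional filters are obtained from Principal Component Analysis. The following is a minimal sketch of that general idea only; the toy image, patch size, and number of filters are my own assumptions, not PCANet's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))       # toy grayscale image (assumption)
k = 5                                  # patch / filter size (assumption)

# Collect all overlapping k×k patches and center them.
patches = np.array([
    img[i:i + k, j:j + k].ravel()
    for i in range(32 - k + 1)
    for j in range(32 - k + 1)
])
patches -= patches.mean(axis=0)

# PCA: leading eigenvectors of the patch covariance serve as filters.
cov = patches.T @ patches / len(patches)
_, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
filters = eigvecs[:, ::-1][:, :8].T.reshape(8, k, k)  # 8 leading filters

# One "convolutional layer": valid correlation of each filter with the image.
maps = np.array([
    [[np.sum(img[i:i + k, j:j + k] * f) for j in range(32 - k + 1)]
     for i in range(32 - k + 1)]
    for f in filters
])
assert maps.shape == (8, 28, 28)
```

The key point is that the filter bank is learned without labels: PCA over image patches replaces backpropagation as the filter-learning step.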
We introduce a family of multilayer graph kernels and establish new links betw...
In stochastic gradient descent (SGD) and its variants, the optimized gradient estimators may be as e...