Modern neural networks [e.g., Deep Neural Networks (DNNs)] have recently gained increasing attention for visible image classification tasks. Their success mainly results from their capability to learn a complex feature mapping of the inputs (i.e., a feature representation) that captures the manifold structure of images relevant to the task. Despite the current popularity of these techniques, they are costly to train with back-propagation (BP)-based iterative update rules. Here, we advocate a lightweight feature representation framework termed Guided Random Projection (GRP), which is closely related to classical random neural networks and randomization-based kernel machines. Specifically, we present an efficient optimization method that explicitly learns t...
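The random-projection machinery that GRP builds on can be illustrated with a plain Gaussian projection, the baseline used by classical randomization-based methods. The sketch below is an illustrative assumption, not the paper's GRP algorithm: a data matrix is mapped to a lower-dimensional space by a fixed random matrix whose entries are scaled so that pairwise distances are approximately preserved (the Johnson-Lindenstrauss guarantee).

```python
import numpy as np

def random_projection(X, k, seed=None):
    """Project the rows of X (n x d) into k dimensions with a fixed
    Gaussian random matrix, approximately preserving pairwise
    Euclidean distances (Johnson-Lindenstrauss lemma).

    Unlike BP-trained networks, the projection matrix is drawn once
    and never updated, so there is no iterative training cost.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Entries ~ N(0, 1/k) so that E[||R^T x||^2] = ||x||^2.
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
    return X @ R

# Example: reduce 500-dimensional features to 50 dimensions.
X = np.random.default_rng(0).normal(size=(10, 500))
Z = random_projection(X, 50, seed=0)
print(Z.shape)  # (10, 50)
```

In practice, frameworks like GRP replace or guide this purely random matrix with a data-dependent choice; the point of the baseline is that the forward mapping alone, with no gradient-based training, already yields a usable feature representation.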
* Both first authors contributed equally. Abstract. We propose to learn the kernel of an SVM as the ...
Deep learning has recently been enjoying an increasing popularity due to its success in solving chal...
We propose a new method for creating computationally efficient convolutional neural networks (CNNs) ...
Kernel methods and neural networks are two important schemes in the supervised learning field. The t...
While the backpropagation of error algorithm enables deep neural network training, it implies (i) bi...
The random subspace method, also known as the pillar of random forests, is good at making precise an...
Regularization plays an important role in machine learning systems. We propose a novel methodology f...
Abstract Large data requirements are often the main hurdle in training neural networks. Convolutiona...
This study investigates data dimensionality reduction for image object recognition. The dimensionali...
Data augmentation is a critical regularization method that contributes to numerous state-of-the-art ...
In this thesis, we will leverage the use of randomness in multiple aspects of machine learning. We w...
Current vision systems are trained on huge datasets, and these datasets come with costs: curation is...
We explore the training of deep neural networks to produce vector representations using weakly label...
Deep neural networks train millions of parameters to achieve state-of-the-art performance on a wide ...
As a typical dimensionality reduction technique, random projection has been widely applied in a vari...