This paper presents DWINA, an algorithm for designing the depth and width of neural architectures for supervised learning with noisy data. Each new unit is trained to learn the error of the existing network and is connected to it in such a way that it does not affect the network's previous performance. Criteria for choosing between increasing width and increasing depth are proposed, and the connection procedure for each case is described. The stopping criterion is very simple: the residual error signal is compared to the noise signal. Preliminary experiments indicate the efficacy of the algorithm, in particular its ability to avoid spurious minima and to design a network of well-suited size. The complexity of the algorithm (number of operations) is on a...
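The scheme above lends itself to a short illustration. Below is a minimal Python sketch of residual-driven incremental training with a noise-based stopping rule, assuming a tanh unit whose input weights are drawn at random and whose output weight is fit by least squares; the paper's actual unit-training step, width-versus-depth criteria, and connection procedures are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_unit(X, residual):
        # Fit one tanh unit to the current residual: random input
        # weights, output weight by least squares (an illustrative
        # stand-in for the paper's unit-training step).
        w = rng.normal(size=X.shape[1])
        h = np.tanh(X @ w)
        a = (h @ residual) / (h @ h)
        return w, a

    def incremental_fit(X, y, noise_level, max_units=50):
        # Grow the network one unit at a time; each new unit is
        # trained on the residual error of the existing network.
        residual = y.astype(float).copy()
        units = []
        for _ in range(max_units):
            # Stopping criterion: residual error comparable to the noise.
            if np.sqrt(np.mean(residual ** 2)) <= noise_level:
                break
            w, a = train_unit(X, residual)
            units.append((w, a))
            residual -= a * np.tanh(X @ w)  # new unit absorbs part of the error
        return units

    def predict(units, X):
        # The network output is the sum of all unit contributions;
        # units trained earlier are never modified afterwards.
        return sum(a * np.tanh(X @ w) for w, a in units)

Because each unit is frozen once trained and only the residual changes, adding a unit cannot degrade the performance already achieved, which mirrors the connection property claimed above.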
Learning-based approaches have recently become popular for various computer vision tasks such as fac...
The design and adjustment of convolutional neural network architectures is an opaque and mostly tria...
We develop, in this brief, a new constructive learning algorithm for feedforward neural net...
We present a new incremental procedure for supervised learning with noisy data. Each step consists i...
Over the past few years, deep neural networks have been at the center of attention in machine learn...
This report introduces a novel algorithm to learn the width of non-linear activation functions (of a...
To reduce random access memory (RAM) requirements and to increase the speed of recognition alg...
A critical question in neural network research today concerns how many hidden neurons to use. Th...
We develop a fast end-to-end method for training lightweight neural networks using multiple classifie...
The latest Deep Learning (DL) methods for designing Deep Neural Networks (DNN) have significantly ex...
The training of deep neural networks uses the backpropagation algorithm, which consists of the fo...
The performance of an Artificial Neural Network (ANN) strongly depends on its hidden layer architect...
Multi-layer networks of threshold logic units offer an attractive framework for the design of patter...
We solve an open question from Lu et al. (2017) by showing that any target network with inputs in $...