We study how certain smoothness constraints, for example piecewise continuity, can be generalized from a discrete set of analog-valued data by modifying the error backpropagation learning algorithm. Numerical simulations demonstrate that by imposing two heuristic objectives during the learning process, (1) reducing the number of hidden units and (2) minimizing the magnitudes of the weights in the network, one obtains a network with a response function that smoothly interpolates between the training data.
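The two heuristic objectives can be read as extra penalty terms added to the usual squared-error cost before each gradient step. The sketch below is only an illustration of that reading, not the paper's exact algorithm: a one-hidden-layer network is trained by plain gradient descent, objective (1) is approximated by shrinking each hidden unit's outgoing weights as a group so that units whose influence collapses can be pruned afterwards, and objective (2) is ordinary weight decay. The hyperparameters lambda_unit and lambda_w, the penalty forms, and the pruning threshold are assumed values for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): one-hidden-layer network
# trained by gradient descent on MSE plus two heuristic penalties --
#   (1) a per-unit shrinkage term on each hidden unit's outgoing weights,
#       standing in for "reducing the number of hidden units", and
#   (2) an ordinary weight-magnitude (L2 / weight-decay) term.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 1))     # analog-valued training inputs
y = np.sin(3.0 * X)                          # smooth target to interpolate

n_hidden = 20
W1 = rng.normal(scale=0.5, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

lr, lambda_unit, lambda_w = 0.05, 1e-3, 1e-4  # assumed hyperparameters

for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    out = h @ W2 + b2
    err = out - y

    # backward pass for the squared-error term
    g_out = 2.0 * err / len(X)
    gW2 = h.T @ g_out
    gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ g_h
    gb1 = g_h.sum(axis=0)

    # heuristic (1): shrink each unit's outgoing weights as a group so that
    # unused hidden units can be removed after training
    unit_norm = np.linalg.norm(W2, axis=1, keepdims=True) + 1e-12
    gW2 += lambda_unit * W2 / unit_norm

    # heuristic (2): weight-magnitude decay on all weights
    gW1 += lambda_w * W1
    gW2 += lambda_w * W2

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# units whose outgoing norm collapsed are candidates for pruning
active = np.linalg.norm(W2, axis=1) > 1e-2
print(f"hidden units retained: {active.sum()} / {n_hidden}")
```

Under these assumptions, units whose outgoing norm has collapsed contribute essentially nothing to the response and can be dropped, while the small magnitudes of the remaining weights keep the fitted function changing gradually between training points.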
The paper studies a stochastic extension of continuous recurrent neural networks and analyzes gradie...
As the central notion in semi-supervised learning, smoothness is often realized on a graph represe...
There are two measures for the optimality of a trained feed-forward network for the given training p...
We study how certain smoothness constraints, for example, piecewise continuity, can be generalized f...
We study online optimization of smoothed piecewise constant functions over the domain [0, 1). This i...
Existing methods for function smoothness in neural networks have limitations. These methods can make...
Methods to speed up learning in back propagation and to optimize the network architecture have been ...
We discuss a model of consistent learning with an additional restriction on the probability distrib...
The performance of feed-forward neural networks in real applications can often be improved signif...
This paper examines the problem of learning from examples in a framework that is based on, but more ...
Much of modern learning theory has been split between two regimes: the classical offline setting, wh...
We consider the complexity of learning classes of smooth functions formed by bounding differ...
On large problems, reinforcement learning systems must use parameterized function approximators such...
In the paper, we are concerned with some computational aspects of smooth approximation of da...
Injecting noise within gradient descent has several desirable features. In this paper, we explore no...