The overlap between internal units decreases for many transfer functions (e.g. sigmoids) with increasing order of the derivative. The accuracy of the approximation thus improves with increasing input dimension and with increasing m. This is a desirable effect, since many real problems involve a large number (10 or more) of input variables.

4.1 Empirical Comparisons of R(W;m) vs S(W;m)

For the regularizers R(W;m) to be effective in penalizing S(W;m), an approximately monotonically increasing relationship must hold between them. The assumption of uncorrelated internal units implies that this relationship is linear. To test for such a linear scaling, we generated a large number of randomly selected networks. For each such network, we computed the values of R(W;m) and S(W;m).
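As a rough illustration of this scaling test, the sketch below samples random single-hidden-layer sigmoid networks, evaluates a weight-based regularizer and a Monte Carlo smoothness estimate for each, and measures how strongly the two quantities co-vary. It is not the paper's procedure: the exact R(W;m) and S(W;m) are defined in earlier sections, and the helper names (random_network, smoothness_proxy, weight_regularizer, spearman) and the first-order (m = 1) stand-in functionals used here are assumptions for illustration only.

```python
# Illustrative sketch of the scaling test: sample random networks, evaluate a
# weight-based regularizer and a Monte Carlo smoothness estimate for each, and
# check whether the two quantities scale together.  The functionals below are
# simple first-order (m = 1) stand-ins, NOT the paper's exact R(W;m) and S(W;m).
import numpy as np

rng = np.random.default_rng(0)

def random_network(n_in, n_hidden, scale):
    """Draw a random single-hidden-layer sigmoid network W = (W_in, b, w_out)."""
    W_in = rng.normal(0.0, scale, size=(n_hidden, n_in))
    b = rng.normal(0.0, scale, size=n_hidden)
    w_out = rng.normal(0.0, scale, size=n_hidden)
    return W_in, b, w_out

def smoothness_proxy(W, n_samples=2000):
    """Monte Carlo stand-in for S(W; m=1): mean squared gradient norm of the
    network output y(x) = w_out . sigmoid(W_in x + b) over Gaussian inputs."""
    W_in, b, w_out = W
    x = rng.normal(size=(n_samples, W_in.shape[1]))
    s = 1.0 / (1.0 + np.exp(-(x @ W_in.T + b)))       # hidden-unit activations
    grads = (w_out * s * (1.0 - s)) @ W_in            # dy/dx for every sample
    return np.mean(np.sum(grads ** 2, axis=1))

def weight_regularizer(W):
    """Weight-based stand-in for R(W; m=1): sum_j w_out_j^2 * ||W_in[j]||^2."""
    W_in, _, w_out = W
    return np.sum(w_out ** 2 * np.sum(W_in ** 2, axis=1))

def spearman(x, y):
    """Rank correlation, used to test for a monotonically increasing relation."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Generate many randomly selected networks over a range of weight scales.
nets = [random_network(n_in=10, n_hidden=20, scale=rng.uniform(0.2, 2.0))
        for _ in range(200)]
R_vals = np.array([weight_regularizer(W) for W in nets])
S_vals = np.array([smoothness_proxy(W) for W in nets])

print("linear fit (log-log) r =", np.corrcoef(np.log(R_vals), np.log(S_vals))[0, 1])
print("rank correlation rho  =", spearman(R_vals, S_vals))
```

Plotting the smoothness values against the regularizer values (or computing the rank correlation, as above) is one simple way to check whether an approximately linear, monotonically increasing scaling holds across networks of different weight scales.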