Abstract: It is well known that generalization capability is one of the most important criteria for developing and evaluating a classifier for a given pattern classification problem. The localized generalization error model (RSM) [2, 12] recently proposed by Ng et al. provides a more intuitive look at the generalization error. Although RSM offers a new way to improve generalization performance, it is in essence equivalent to a form of regularization. In this paper, we first prove the essential relationship between RSM and regularization, and demonstrate that the stochastic sensitivity measure in RSM corresponds exactly to a regularizing term. Then, we develop a new generalization error bound from the regularization viewpoint, w...
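For orientation, a minimal sketch of the relationship claimed in this abstract, assuming the usual form of the localized generalization error bound (the symbols R_emp, E_{S_Q}[(Δy)^2], A, and ε are notational assumptions here, not quoted from [2, 12]):

\[
R_{SM}(Q) \;\le\; \Big(\sqrt{R_{\mathrm{emp}}} \;+\; \sqrt{\mathbb{E}_{S_Q}\big[(\Delta y)^2\big]} \;+\; A\Big)^{2} + \varepsilon
\]

where R_emp is the empirical risk over the training set, E_{S_Q}[(Δy)^2] is the stochastic sensitivity measure (expected squared output perturbation over the Q-neighborhood S_Q of the training samples), A bounds the output range, and ε comes from a concentration inequality. Under this reading, minimizing the bound penalizes the sensitivity term, which plays the same role as the penalty Ω(f) in a standard regularized objective R_emp(f) + λΩ(f).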
The studies of generalization error give possible approaches to estimate the performance of ...
We derive new margin-based inequalities for the probability of error of classifiers. The main featur...
Despite superior performance in many situations, deep neural networks are often vulnerable to advers...
Regularization plays an important role in generalization of deep learning. In this paper, we study t...
The generalization error bounds for the entire input space found by current error models using the n...
In a pattern classification problem, one trains a classifier to recognize future unseen samples using ...
We define notions of stability for learning algorithms and show how to use these notions to derive g...
This paper was accepted for publication in Machine Learning (Springer). Overfitting data is a well-k...
This paper proposes a novel discriminative regression method, called adaptive locality preserving re...
In supervised learning problems, global and local learning algorithms are used. In contrast to globa...
The Pseudo Fisher Linear Discriminant (PFLD) based on a pseudo-inverse technique shows a peaking beh...
A) A two-dimensional example illustrates how a two-class classification between the two data sets ...
We derive sharp bounds on the generalization error of a generic linear classifier trained by empiric...
The classical statistical learning theory implies that fitting too many parameters leads to overfitt...
Due to the poor generalization performance of traditional empirical risk minimization (ERM) in the c...