Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing. A key element of this success has been the development of new loss functions, like the popular cross-entropy loss, which has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. While the cross-entropy loss is usually justified from a probabilistic perspective, this paper shows an alternative and more direct interpretation of this loss in terms of t-norms and their associated generator functions, and derives a general relation between loss functions and t-norms. In particular, the presented work shows intriguing results leading to the development of a no...
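To make the stated relation concrete, here is a minimal worked example (our illustration, assuming one-hot supervision; it is not taken verbatim from the abstract): the product t-norm $T(x, y) = x \cdot y$ admits the additive generator $g(x) = -\log x$, and applying $g$ to the t-norm conjunction of the predicted probabilities $p_i$ assigned to the true classes gives

$$
g\Big(\prod_i p_i\Big) = -\log \prod_i p_i = \sum_i -\log p_i,
$$

which is precisely the cross-entropy loss, so the choice of t-norm generator directly determines the induced loss function.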
Cross entropy loss has served as the main objective function for classification-based tasks. Widely ...
While cross entropy (CE) is the most commonly used loss to train deep neural networks for classifica...
Regularization of (deep) learning models can be realized at the model, loss, or data level. As a tec...
Injecting prior knowledge into the learning process of a neural architecture is one of the main chal...
In the process of machine learning, models are essentially defined by a group of parameters in multi...
The top-$k$ error is a common measure of performance in machine learning and computer vision. In pra...
The increasing size of modern datasets combined with the difficulty of obtaining real label informat...
Recent research reveals that deep neural networks are sensitive to label noise, hence leading to po...
The categorical cross-entropy loss for convolutional neural network training is shown as a funct...
In the past decade, neural networks have demonstrated impressive performance in supervised learning....
Robust learning in the presence of label noise is an important problem of current interest. Training dat...
Deep learning techniques have become the tool of choice for side-channel analysis. In recent years, ...
The cross-entropy softmax loss is the primary loss function used to train deep neural networks. On t...