This paper compares three penalty terms with respect to the efficiency of supervised learning, using first- and second-order learning algorithms. Our experiments showed that, for a reasonably adequate penalty factor, the combination of the squared penalty term and the second-order learning algorithm improves convergence performance by more than a factor of 20 over the other combinations, while also yielding better generalization performance.
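To make the setup concrete, below is a minimal sketch (not the paper's exact algorithm or benchmark) of the winning combination: a squared penalty term, lambda * sum(w^2), added to the training error of a small multi-layer perceptron, minimized with a quasi-Newton (BFGS) optimizer standing in for a second-order learning algorithm. The toy data, network size, and penalty factor are illustrative assumptions.

```python
# Minimal sketch: squared penalty term + second-order (quasi-Newton) training.
# All specifics (architecture, data, lambda) are assumed for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(64, 1))       # toy inputs
y = np.sin(np.pi * X[:, 0])                    # toy regression targets

n_in, n_hid = 1, 5
shapes = [(n_hid, n_in), (n_hid,), (1, n_hid), (1,)]  # W1, b1, W2, b2

def unpack(w):
    """Split the flat parameter vector into weight matrices and biases."""
    parts, i = [], 0
    for shape in shapes:
        n = int(np.prod(shape))
        parts.append(w[i:i + n].reshape(shape))
        i += n
    return parts

def objective(w, lam):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1.T + b1)                 # hidden-layer activations
    out = (h @ W2.T + b2).ravel()              # network output
    mse = 0.5 * np.mean((out - y) ** 2)        # data-fitting error
    penalty = lam * np.sum(w ** 2)             # squared penalty term
    return mse + penalty

lam = 1e-4                                     # penalty factor (assumed value)
w0 = rng.normal(0.0, 0.3, size=sum(int(np.prod(s)) for s in shapes))
res = minimize(objective, w0, args=(lam,), method="BFGS")  # quasi-Newton step
print(f"final objective: {res.fun:.5f} after {res.nit} iterations")
```

The same objective could be handed to a first-order optimizer (plain gradient descent) for comparison; the paper's claim is that the second-order method with this penalty converges far faster at a comparable or better level of generalization.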