In this paper, inspired by our previous algorithm based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise characterized by the nonextensive entropic index q and regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the value of q. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our expe...
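The hybrid update described above, deterministic gradient steps with per-weight stepsizes plus temperature-scaled noise under a weight decay term, can be sketched as follows. This is a minimal illustration, not the paper's exact method: the function names (`tsallis_temperature`, `update_weights`) are hypothetical, plain Gaussian noise stands in for true q-distributed noise, and the cooling schedule is the standard Tsallis generalized simulated annealing form, which the paper's time-dependent T-q formula may not match.

```python
import numpy as np

def tsallis_temperature(t, T0=1.0, q=1.5):
    """Tsallis-style cooling schedule (generalized simulated annealing form).
    Valid for t >= 1, where T(1) == T0; T decreases as t grows.
    Illustrative only; the paper's T-q relationship may differ."""
    return T0 * (2.0 ** (q - 1.0) - 1.0) / ((1.0 + t) ** (q - 1.0) - 1.0)

def update_weights(w, grad, lr, t, q=1.5, T0=1.0, decay=1e-4, rng=None):
    """One hybrid step: deterministic gradient move with per-weight
    adaptive stepsizes `lr`, plus temperature-scaled random noise,
    regulated by a weight decay term.
    NOTE: Gaussian noise is used here as a stand-in for q-noise."""
    rng = np.random.default_rng() if rng is None else rng
    T = tsallis_temperature(t, T0, q)          # current temperature
    noise = rng.normal(scale=np.sqrt(T), size=np.shape(w))
    return w - lr * grad + noise - decay * w   # deterministic + stochastic + decay
```

As q approaches 1 the schedule approaches the classical logarithmic annealing limit, while larger q cools faster, which is one way the q value shifts the search between more stochastic and more deterministic behavior.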
Many connectionist learning algorithms consist of minimizing a cost of the form C(w) = E(J(z; w)) ...
Learning algorithms for perceptrons are deduced from statistical mechanics. Thermodynamical quantiti...
In this paper, we propose a learning method that updates a synaptic weight in probability which is p...
Copyright Springer. The Stochastic Competitive Evolutionary Neural Tree (SCENT) is a new unsupervised...
In this paper the stochastic dynamics of adaptive evolutionary search, as performed by the optimizat...
In this paper, we provide a new algorithm for the problem of stochastic global optimization where on...
The properties of flat minima in the empirical risk landscape of neural networks have been debated f...
A parallel stochastic algorithm is investigated for error-descent learning and optimization in deter...
The present paper elucidates a universal property of learning curves, which shows how the generaliza...
Introduction The work reported here began with the desire to find a network architecture that shared...
A method is provided for designing and training noise-driven recurrent neural ...
We complement recent advances in thermodynamic limit analyses of mean on-line gradient descent learn...
The revival of multilayer neural networks in the mid 80's originated from the discovery of the ...
The paper studies a stochastic extension of continuous recurrent neural networks and analyzes gradie...