The problem of learning from examples in ultrametric committee machines (UCMs) is studied within the framework of statistical mechanics. Using the replica formalism, we calculate the average generalization error in UCMs with L hidden layers and a sufficiently large number of units. In most of the regimes studied we find that the generalization error, as a function of the number of examples presented, develops a discontinuous drop at a critical value of the load parameter. We also find that, when L>1, teacher networks with the same number of hidden layers but different overlaps induce learning processes that share the same critical points.
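As context for the teacher-student setting described above, the sketch below estimates the generalization error of a student committee machine against a fixed teacher by Monte Carlo sampling. It uses a plain tree committee machine with sign activations and a single hidden layer (the ultrametric multilayer structure of the UCM is not reproduced here); the block sizes, trial count, and random weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

def committee_machine(x, W):
    """Tree committee machine: each of the K hidden units sees a
    disjoint block of n inputs; the output is the sign of the sum
    (majority vote) of the hidden activations sign(w_k . x_k)."""
    K, n = W.shape
    hidden = np.sign(np.einsum('ki,ki->k', W, x.reshape(K, n)))
    return np.sign(hidden.sum())

# Teacher-student setup: the generalization error is the probability
# that student and teacher disagree on a random input pattern.
rng = np.random.default_rng(0)
K, n = 3, 50                      # illustrative sizes (K odd: no ties)
teacher = rng.normal(size=(K, n))
student = rng.normal(size=(K, n))

trials = 2000
errors = 0
for _ in range(trials):
    x = rng.normal(size=K * n)
    errors += committee_machine(x, teacher) != committee_machine(x, student)
eps_g = errors / trials           # empirical generalization error
```

With independent random teacher and student weights the empirical error sits near 1/2 (chance level); as the student's overlap with the teacher grows during learning, this estimate decreases, and the paper's replica calculation tracks the corresponding average quantity, including its discontinuous drop at the critical load.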