The information that a pattern of firing in the output layer of a feedforward network of threshold-linear neurons conveys about the network’s inputs is considered. A replica-symmetric solution is found to be stable for all but small amounts of noise. The region of instability depends on the contribution of the threshold and on the sparseness of the pattern distribution: for distributed distributions the unstable region extends to higher noise variances than for very sparse distributions, for which it is almost nonexistent.
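As a rough illustration of the quantity under study (not the paper's replica calculation), the minimal Python/NumPy sketch below simulates a single threshold-linear unit, eta = max(w*xi + noise - theta, 0), driven by a Bernoulli input of sparseness a with Gaussian noise of variance sigma^2, and estimates the mutual information I(xi; eta) with a crude histogram estimator. All names and parameter values here (mi_monte_carlo, threshold_linear, a, w, theta, sigma) are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def threshold_linear(h, theta=0.0):
    """Threshold-linear transfer function: g(h) = max(h - theta, 0)."""
    return np.maximum(h - theta, 0.0)

def mi_monte_carlo(a=0.2, w=1.0, theta=0.5, sigma=0.3, n=200_000, bins=60):
    """Crude histogram estimate of I(xi; eta) in bits for one unit.

    xi ~ Bernoulli(a) (sparseness a), eta = g(w*xi + noise) with
    noise ~ N(0, sigma^2). Illustration only, not the replica analysis.
    """
    xi = (rng.random(n) < a).astype(float)
    eta = threshold_linear(w * xi + sigma * rng.standard_normal(n), theta)
    edges = np.histogram_bin_edges(eta, bins=bins)
    # Joint histogram over (xi, binned eta).
    p_joint = np.zeros((2, bins))
    for s in (0, 1):
        counts, _ = np.histogram(eta[xi == s], bins=edges)
        p_joint[s] = counts
    p_joint /= p_joint.sum()
    p_xi = p_joint.sum(axis=1, keepdims=True)    # marginal over eta
    p_eta = p_joint.sum(axis=0, keepdims=True)   # marginal over xi
    mask = p_joint > 0
    return float((p_joint[mask]
                  * np.log2(p_joint[mask] / (p_xi @ p_eta)[mask])).sum())

for sigma in (0.1, 0.3, 1.0):
    print(f"sigma={sigma:.1f}  I(xi; eta) ~ {mi_monte_carlo(sigma=sigma):.3f} bits")

Running the loop shows the estimated information falling as the noise variance grows, the qualitative behaviour that the analytic replica-symmetric treatment quantifies for full output layers.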