We study the problem of determining the capacity of the binary perceptron for two variants of the problem where the corresponding constraint is symmetric. We call these variants the rectangle-binary-perceptron (RBP) and the u-function-binary-perceptron (UBP). We show that, unlike for the usual step-function-binary-perceptron, the critical capacity in these symmetric cases is given by the annealed computation in a large region of parameter space (for all rectangular constraints and for narrow enough u-function constraints, $K < K^*$). We conclude that full-step-replica-symmetry breaking would have to be evaluated in order to obtain the exact capacity for u-function constraints with $K > K^*$.
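To make the annealed claim concrete, here is a minimal sketch (not taken from the paper) of the first-moment computation for the rectangle constraint. Assuming i.i.d. Gaussian patterns, each of the $M = \alpha N$ constraints $|w \cdot x_\mu|/\sqrt{N} \le K$ is satisfied by a fixed binary $w$ with probability $p(K) = \mathbb{P}(|g| \le K)$, $g \sim \mathcal{N}(0,1)$, so the expected number of solutions is $\mathbb{E}[Z] = 2^N \, p(K)^{\alpha N}$, which vanishes for $\alpha > \alpha_a(K) = -\log 2 / \log p(K)$; this $\alpha_a(K)$ is the annealed capacity referred to above. A short Python sketch evaluating this bound numerically (the function name is illustrative, not from any released code):

    import numpy as np
    from scipy.stats import norm

    def annealed_capacity(K):
        """First-moment (annealed) capacity bound alpha_a(K) = -log 2 / log P(|g| <= K)."""
        p = 2.0 * norm.cdf(K) - 1.0   # P(|g| <= K) for a standard Gaussian g
        return -np.log(2.0) / np.log(p)

    for K in (0.5, 1.0, 2.0):
        print(f"K = {K:.1f}: annealed capacity ~ {annealed_capacity(K):.3f}")

For the rectangle constraint this first-moment value coincides with the capacity established in the paper for all K; for the u-function constraint (where the satisfying event is $|w \cdot x_\mu|/\sqrt{N} \ge K$ instead) it is only claimed for narrow enough constraints, $K < K^*$.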
The statistical picture of the solution space for a binary perceptron is studied. The binary percept...
A theorem called the tuning theorem, which brings out the tuning capability of perceptrons where bot...
We analyze the average performance of a general class of learning algorithms for the nondeterministi...
We study the problem of determining the capacity of the binary perceptron for ...
We study the number p of unbiased random patterns which can be stored in a neural network of N neuro...
In this paper we consider two main aspects of the binary perceptron problem: t...
In this paper we consider two main aspects of the binary perceptron problem: the maximal capac...
We consider the properties of “Potts” neural networks where each neuron can be in $Q$ different stat...
Error rates of a Boolean perceptron with threshold and either spherical or Ising constraint on the w...
We derive an explicit distribution for the threshold sequence of the symmetric binary perceptron wit...
The storage capacity of multilayer networks with overlapping receptive fields is investigated for a ...
A basic neural model for Boolean computation is examined in the context of learning from examples. T...
We consider the Ising perceptron with gaussian disorder,...
Within a Kuhn-Tucker cavity method introduced in a former paper, we study optimal stability learning...
The geometrical features of the (non-convex) loss landscape of neural network models are crucial in ...