We consider the existence of fixed points of nonnegative neural networks, i.e., neural networks that map nonnegative vectors to nonnegative vectors. We first show that nonnegative neural networks with nonnegative weights and biases can be recognized as monotonic and (weakly) scalable functions within the framework of nonlinear Perron-Frobenius theory. This fact enables us to provide conditions for the existence of fixed points of nonnegative neural networks, and these conditions are weaker than those obtained recently using arguments from convex analysis. Furthermore, we prove that the fixed point set of nonnegative neural networks with nonnegative weights and biases is an interval, which under mild conditions dege...
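The monotonicity property described above can be illustrated numerically. The following is a minimal sketch (not the paper's construction): a one-layer nonnegative network T(x) = ReLU(Wx + b) with entrywise nonnegative W and b, where W is scaled so that fixed-point iteration contracts. All weights and dimensions here are illustrative assumptions.

```python
import numpy as np

# Illustrative nonnegative network T(x) = relu(W @ x + b) with W, b >= 0.
# Each entry of W is below 0.2, so every row sum is below 0.8 and the
# iteration x_{k+1} = T(x_k) is a contraction in the max-norm.
rng = np.random.default_rng(0)
W = 0.2 * rng.random((4, 4))   # nonnegative weights (assumed, for illustration)
b = rng.random(4)              # nonnegative bias

def T(x):
    # ReLU keeps the output in the nonnegative orthant
    return np.maximum(W @ x + b, 0.0)

# Monotonicity: x <= y (entrywise) implies T(x) <= T(y)
x0 = rng.random(4)
y0 = x0 + rng.random(4)
assert np.all(T(x0) <= T(y0))

# Fixed-point iteration from the origin
x = np.zeros(4)
for _ in range(200):
    x = T(x)
assert np.allclose(x, T(x))  # x is (numerically) a fixed point
```

Monotonicity follows because W >= 0 preserves the entrywise order and ReLU is nondecreasing; the contraction is an extra assumption made here so the iteration visibly converges.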
We investigate the qualitative properties of a recurrent neural network (RNN) for solving the genera...
We study the binary and continuous negative-margin perceptrons as simple nonconvex neural network mo...
Neural network training is usually accomplished by solving a non-convex optimization problem using s...
We derive conditions for the existence of fixed points of cone mappings without assuming scalability...
Biological systems are able to switch their neural systems into inhibitory states and it is ...
Monotone functions and data sets arise in a variety of applications. We study the interpolation prob...
This paper empirically studies commonly observed training difficulties of Physics-Informed Neural Ne...
This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, ...
In many classification and prediction problems it is known that the response variable depends on cer...
This dissertation explores applications of discrete geometry in mathematical neuroscience. We begin ...
Convex $\ell_1$ regularization using an infinite dictionary of neurons has been suggested for constr...
Solving large scale optimization problems, such as neural networks training, can present many challe...
We consider the algorithmic problem of finding the optimal weights and biases for a two-layer fully ...
This brief studies the complete stability of neural networks with nonmonotonic piecewise linear acti...
We analyze the topological properties of the set of functions that can be implemented by neural netw...