We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of "neurons". If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their capabilities, though they remain more powerful than Turing machines. (A similar but more restricted model was shown to be polynomial-time equivalent to classical digital computation in previous work [20].) Moreover, there is a precise correspondence between nets and standard non-uniform circuits with equivalent resources, and as a consequence one has lower bound constraints on what they can compute.
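To make the dynamical-systems model concrete, the following is a minimal sketch of one update step of such a net: a fixed set of neurons whose real-valued state evolves as x(t+1) = sigma(W x(t) + U a(t) + b) while an input sequence a(t) is fed in one symbol per tick. The saturated-linear activation, the NumPy-based code, and the specific weights are illustrative assumptions, not details taken from the abstract.

    import numpy as np

    def sigma(z):
        # Saturated-linear activation: clamp each coordinate to [0, 1] (an illustrative choice).
        return np.clip(z, 0.0, 1.0)

    def step(x, a, W, U, b):
        # One synchronous update of a net with fixed structure:
        #   x(t+1) = sigma(W x(t) + U a(t) + b)
        # x is the real-valued state of the neurons, a is the input at time t.
        return sigma(W @ x + U @ a + b)

    # Toy instance: 3 neurons, 1 input line, arbitrary real weights.
    rng = np.random.default_rng(0)
    W, U, b = rng.normal(size=(3, 3)), rng.normal(size=(3, 1)), rng.normal(size=3)

    x = np.zeros(3)                      # the number of neurons never changes with input size
    for a_t in (1.0, 0.0, 1.0):          # feed the input sequence one symbol per tick
        x = step(x, np.array([a_t]), W, U, b)
    print(x)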
This paper deals with the simulation of Turing machines by neural networks. Such networks are made u...
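One well-known device in such simulations (standard in this line of work, though not spelled out in the truncated snippet above) is to store an unbounded tape or stack in the real-valued state of a single neuron. Below is a minimal sketch, assuming a base-4 "Cantor set" encoding of a binary stack and a saturated-linear activation; the function names are illustrative.

    def sigma(z):
        # Saturated-linear activation, clamped to [0, 1].
        return min(1.0, max(0.0, z))

    # Each bit b is stored as the digit (2*b + 1) in base 4, so the encoded value
    # stays in [0, 1) and distinct stacks remain well separated (a Cantor-set encoding).
    def push(x, b):
        return sigma(x / 4.0 + (2 * b + 1) / 4.0)

    def top(x):
        return sigma(4.0 * x - 2.0)       # 0.0 if the top digit is 1, 1.0 if it is 3

    def pop(x):
        return sigma(4.0 * x - 2.0 * top(x) - 1.0)

    def nonempty(x):
        return sigma(4.0 * x)             # 1.0 iff at least one bit is stored

    x = 0.0
    for b in (1, 0, 1):
        x = push(x, b)
    print(top(x), top(pop(x)), top(pop(pop(x))))   # 1.0 0.0 1.0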
This article studies the computational power of various discontinuous real computational models that...
This paper starts by overviewing results dealing with the approximation capabilities of neural netwo...
Abstract. It is shown that high-order feedforward neural nets of constant depth with piecewise-polyn...
This paper discusses some of the limitations of hardware implementations of neural networks. The aut...
Wolfgang Maass*, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwie...
This paper studies the computational power of various discontinuous real computational models that ...
We examine in this chapter the computational power of high order analog feedforward neural nets N, ...
We show that neural networks with three-times continuously differentiable activation functions are c...
Abstract. This paper shows the existence of a finite neural network, made up of sigmoidal neurons, ...
We introduce a model for analog computation with discrete time in the presence of analog noise that...
Experimental evidence has shown analog neural networks to be extremely fault-tolerant; in particular, ...
The paper will show that in order to obtain minimum size neural networks (i.e., size-optimal) for im...