We consider learning on multilayer neural nets with piecewise polynomial activation functions and a fixed number k of numerical inputs. We exhibit arbitrarily large network architectures for which efficient and provably successful learning algorithms exist in the rather realistic refinement of Valiant's model for probably approximately correct learning ("PAC learning") where no a priori assumptions are required about the "target function" (agnostic learning), arbitrary noise is permitted in the training sample, and the target outputs as well as the network outputs may be arbitrary reals. The number of computation steps of the learning algorithm LEARN that we construct is bounded by a polynomial in the bit-length n of the fixed number of...
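To make the learning model concrete: in the agnostic PAC setting sketched above, the learner sees noisy real-valued examples from an arbitrary distribution and must, with high probability, output a network whose expected loss is within epsilon of the best loss achievable by any network in the class. The following is a minimal Python sketch of that success criterion via empirical risk minimisation over a small finite hypothesis class. It illustrates the learning model only and is not the paper's algorithm LEARN; the two-unit piecewise-linear net, the weight grid, and the sinusoidal noisy target are all our assumptions for illustration.

```python
# Minimal sketch of the agnostic PAC setting (NOT the paper's LEARN).
# Assumptions (ours): one-hidden-layer nets with the piecewise-linear
# activation max(0, x), k = 1 input, weights from a small finite grid,
# and a learner that does empirical risk minimisation over that grid.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def net(w, x):
    """One-hidden-layer net with 2 piecewise-linear units; w = (w1, w2, v1, v2)."""
    w1, w2, v1, v2 = w
    return v1 * np.maximum(0.0, w1 * x) + v2 * np.maximum(0.0, w2 * x)

# Finite grid of candidate weight vectors: a stand-in for the discretised
# weight space that polynomial-time agnostic learners typically search.
grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
hypotheses = list(itertools.product(grid, repeat=4))

def sample(m):
    """Agnostic setting: arbitrary noisy target, no assumption that the
    examples are generated by any network in the hypothesis class."""
    x = rng.uniform(-1.0, 1.0, size=m)
    y = np.sin(3.0 * x) + 0.3 * rng.standard_normal(m)
    return x, y

def empirical_risk(w, x, y):
    """Mean squared loss of hypothesis w on the sample (x, y)."""
    return float(np.mean((net(w, x) - y) ** 2))

# ERM learner: pick the hypothesis with the smallest empirical loss.
x_train, y_train = sample(2000)
best = min(hypotheses, key=lambda w: empirical_risk(w, x_train, y_train))

# Agnostic PAC criterion, estimated on fresh data: with high probability,
# risk(best) <= min over w of risk(w) + epsilon.
x_test, y_test = sample(20000)
risks = [empirical_risk(w, x_test, y_test) for w in hypotheses]
print("ERM test risk:        ", empirical_risk(best, x_test, y_test))
print("best-in-class test risk:", min(risks))
```

The design point the sketch makes is the one the abstract emphasises: nothing is assumed about the target function or the noise, so "success" is defined relative to the best network in the class rather than to zero error.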