This paper investigates neural network training as a potential source of problems for benchmarking continuous, heuristic optimization algorithms. Using a student-teacher learning paradigm, the error surfaces of several neural networks are examined via so-called fitness distance correlation, which has previously been applied to discrete, combinatorial optimization problems. The results suggest that neural network training tasks offer a number of desirable properties for algorithm benchmarking, including the ability to scale up to provide challenging problems in high-dimensional spaces.
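The fitness distance correlation measure mentioned above can be illustrated with a minimal sketch. In the student-teacher setup, a fixed "teacher" network defines the global optimum in weight space; sampling random "student" weight vectors then yields paired (error, distance-to-optimum) values whose Pearson correlation is the FDC. The network sizes, sampling distribution, and single-hidden-layer tanh architecture below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher network: fixed random weights define the known global optimum.
# (Sizes here are arbitrary assumptions for illustration.)
n_in, n_hidden, n_samples = 4, 3, 100
W_teacher = rng.normal(size=(n_hidden, n_in))
X = rng.normal(size=(n_samples, n_in))
y = np.tanh(X @ W_teacher.T).sum(axis=1)  # teacher outputs on the data

def mse(w_flat):
    """Mean squared error of a student network with flattened weights w_flat."""
    W = w_flat.reshape(n_hidden, n_in)
    pred = np.tanh(X @ W.T).sum(axis=1)
    return np.mean((pred - y) ** 2)

# Sample random student weight vectors; record fitness and distance
# to the teacher's weights (the optimum of this error surface).
w_opt = W_teacher.ravel()
samples = rng.normal(size=(500, w_opt.size))
fitness = np.array([mse(w) for w in samples])
distance = np.linalg.norm(samples - w_opt, axis=1)

# Fitness distance correlation: values near +1 indicate that error
# decreases smoothly toward the optimum (an "easy" landscape for
# minimization); values near 0 or negative indicate a misleading one.
fdc = np.corrcoef(fitness, distance)[0, 1]
print(fdc)
```

Scaling `n_in` and `n_hidden` up is what gives these benchmarks their high-dimensional variants.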
The paper presents an overview of global issues in optimization methods for Supervised Learning (SL...
We consider the algorithmic problem of finding the optimal weights and biases for a two-layer fully ...
Training a neural network is a difficult optimization problem because of numerous local minima. M...
We demonstrate that the problem of training neural networks with small (average) squared error is co...
Combinatorial optimization is an active field of research in Neural Networks. Since the first atte...
In neural networks, simultaneous determination of the optimum structure and weights is a challenge. ...
We propose an algorithm to explore the global optimization method, using SAT solvers, for training a...
In this paper we study how global optimization methods (like genetic algorithms) can be used to tr...
In this paper the problem of neural network training is formulated as the unconstrained minimization...
Artificial Neural Network (ANN) is one of the modern computational methods proposed to solve increas...
Neural network training is a highly non-convex optimisation problem with poorly understood propertie...
In this paper we proposed a novel procedure for training a feedforward neural network. The accuracy ...
In the modern IT industry, the basis for the nearest progress is artificial intelligence technologie...
The success of deep learning has shown impressive empirical breakthroughs, but many theoretical ques...
We introduce techniques to study the weight space organization of neural networks optimizing differe...