Fitness functions based on test cases are very common in Genetic Programming (GP). This process can be viewed as a learning task, in which models are inferred from a limited number of samples. This paper investigates two methods to improve generalization in GP-based learning: 1) selection of the best-of-run individuals using a three-data-set methodology, and 2) application of parsimony pressure to reduce the complexity of the solutions. Results using GP in a binary classification setup show that accuracy on the test sets is preserved, with lower variance compared to baseline results, while the mean tree size obtained with the tested methods is significantly reduced.
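The two techniques named in this abstract can be illustrated with a minimal sketch (not code from the paper; all names, the population representation, and the `parsimony_coeff` value are illustrative assumptions). Parsimony pressure penalizes fitness by solution size, and the three-data-set methodology picks the best-of-run individual by validation error, reserving the test set for the final, unbiased report:

```python
# Minimal sketch of two generalization aids for GP-based learning:
# 1) three-data-set best-of-run selection, 2) parsimony pressure.
# Individuals are stand-ins (error/size records), not real GP trees.
import random

random.seed(0)

def fitness(train_error, size, parsimony_coeff=0.01):
    """Parsimony pressure: training error plus a size penalty."""
    return train_error + parsimony_coeff * size

# Hypothetical evolved population: error on each data set plus tree size.
population = [
    {"train_err": random.random(), "val_err": random.random(),
     "test_err": random.random(), "size": random.randint(5, 200)}
    for _ in range(50)
]

# During evolution, selection would minimize fitness() on the training set.
# Three-data-set methodology: choose the best-of-run individual by its error
# on a held-out validation set; use the test set only for the final report.
best_of_run = min(population, key=lambda ind: ind["val_err"])
print("reported test error:", best_of_run["test_err"])
print("tree size:", best_of_run["size"])
```

The design point is that the test set never influences which individual is reported, so the reported accuracy is an unbiased generalization estimate.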
In classification, machine learning algorithms can suffer a performance bias when data sets are unbal...
Generalization is an important issue in machine learning. In fact, in several applications good r...
This paper proposes a theoretical analysis of Genetic Programming (GP) from th...
Genetic Programming (GP) is a technique which is able to solve different problems through...
Centre for Intelligent Systems and their Applications, studentship 9314680. This thesis is an investigat...
We propose and motivate the use of vicinal-risk minimization (VRM) for training genetic programming ...
The ability to generalize beyond the training set is important for Genetic Programming (GP). Interle...
Under review at IEEE Transactions on Evolutionary Computation. Genetic programming (GP) is a common me...
The application of multi-objective evolutionary computation techniques to the genetic programming of...
Abstract. In designing non-linear classifiers, there are important trade-offs to be made between pre...
Genetic Algorithms are bio-inspired metaheuristics that solve optimization problems; they are evolut...
Universal Consistency, the convergence to the minimum possible error rate in learning thr...