Divide-and-Evolve (DaE) is an original “memeticization” of Evolutionary Computation and Artificial Intelligence Planning. However, like any Evolutionary Algorithm, DaE has several parameters that need to be tuned, and the already excellent experimental results demonstrated by DaE on benchmarks from the International Planning Competition, at the level of those of standard AI planners, have been obtained with parameters that had been tuned once and for all using the Racing method. This paper demonstrates that more specific parameter tuning (e.g. at the domain level or even at the instance level) can further improve DaE results, and discusses the trade-off between the gain in quality of the resulting plans and the overhead...
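As an illustration of the Racing idea referred to above, here is a minimal sketch of a racing loop: candidate parameter settings are evaluated on a stream of benchmark instances, and settings that fall clearly behind the incumbent best are eliminated early, so the evaluation budget concentrates on promising configurations. The toy objective, the names, and the fixed elimination margin are assumptions made for illustration; real racing tuners (e.g. F-Race) eliminate candidates with statistical tests rather than a fixed margin.

```python
import random
import statistics

# Racing sketch (hypothetical; not DaE's actual tuner).
# Candidates are parameter settings; each run returns a plan-quality
# score (lower is better). After each benchmark instance, candidates
# whose mean score is clearly worse than the current best are dropped.

def run_ea(params, instance, rng):
    """Stand-in for one EA run; the instance is ignored in this toy."""
    return (params["pop_size"] - 80) ** 2 / 1000 + params["cx_rate"] + rng.gauss(0, 0.3)

def race(candidates, instances, margin=0.5, min_runs=3, seed=1):
    rng = random.Random(seed)
    scores = {id(c): [] for c in candidates}
    alive = list(candidates)
    for instance in instances:
        for c in alive:
            scores[id(c)].append(run_ea(c, instance, rng))
        if len(scores[id(alive[0])]) < min_runs:
            continue  # collect a few runs before eliminating anyone
        best = min(statistics.mean(scores[id(c)]) for c in alive)
        # Keep only candidates within `margin` of the best mean score.
        alive = [c for c in alive
                 if statistics.mean(scores[id(c)]) <= best + margin]
        if len(alive) == 1:
            break
    return min(alive, key=lambda c: statistics.mean(scores[id(c)]))

candidates = [{"pop_size": p, "cx_rate": r}
              for p in (30, 80, 200) for r in (0.2, 0.6)]
best = race(candidates, instances=range(20))
print("surviving configuration:", best)
```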
Parameter tuning in Evolutionary Algorithms (EA) is a great obstacle that can become the key to success...
All standard AI planners to date can only handle a single objective, and the o...
DAEX is a metaheuristic designed to improve the plan quality and the scalability...
Learn-and-Optimize (LaO) is a generic surrogate-based method for parameter tuning...
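Surrogate-based tuning of the family LaO belongs to generally alternates between fitting a cheap model of performance as a function of the parameters and optimizing that model to pick the next configuration to evaluate. A minimal one-parameter sketch follows; the objective, the quadratic surrogate, and all names are toy assumptions for illustration, not the actual LaO system.

```python
import numpy as np

# Surrogate-based tuning sketch (toy; not the actual LaO system).
# We tune one parameter by alternating between (1) fitting a quadratic
# surrogate to the observed (parameter, performance) pairs and
# (2) running the expensive objective at the surrogate's minimizer.

rng = np.random.default_rng(0)

def expensive_eval(x):
    """Stand-in for a full planner run; lower is better."""
    return (x - 0.7) ** 2 + rng.normal(0, 0.01)

# Initial design: a few evaluated parameter values.
xs = list(np.linspace(0.0, 1.0, 4))
ys = [expensive_eval(x) for x in xs]

grid = np.linspace(0.0, 1.0, 201)
for _ in range(10):
    coeffs = np.polyfit(xs, ys, deg=2)                      # fit surrogate
    candidate = grid[np.argmin(np.polyval(coeffs, grid))]   # optimize it
    xs.append(candidate)
    ys.append(expensive_eval(candidate))                    # one expensive run

best = xs[int(np.argmin(ys))]
print(f"best parameter found: {best:.3f}")
```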
The sub-optimal DAE planner implements the stochastic approach for domain-independent...
In this paper we describe the system used in the Planning and Learning Part of the 7th International...
Parameter tuning in Evolutionary Algorithms (EA) generally results in suboptimal choices of values be...
The issue of setting the values of various parameters of an evolutionary algorithm is crucial for good...