The impact of learning algorithm optimization by means of parameter tuning is studied. To do this, two quality attributes, sensitivity and classification performance, are investigated, and two metrics for quantifying each of these attributes are suggested. Using these metrics, a systematic comparison is performed between four induction algorithms on eight data sets. The results indicate that parameter tuning is often more important than the choice of algorithm, and there does not seem to be a trade-off between the two quality attributes. Moreover, the study provides quantitative support for the assertion that some algorithms are more robust than others with respect to parameter configuration. Finally, it is briefly described how the q...
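To make the abstract's central claim concrete, a small experiment can contrast each algorithm's default configuration with a tuned one. The sketch below is a hypothetical illustration assuming scikit-learn; the classifiers, parameter grids, and data set are stand-ins, not the study's four induction algorithms or eight data sets.

```python
# Minimal sketch (not the paper's protocol): compare default vs. grid-tuned
# cross-validated accuracy for several classifiers on one data set.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Illustrative grids; the study's actual parameter ranges are not shown here.
candidates = {
    "kNN": (KNeighborsClassifier(), {"n_neighbors": [1, 5, 15, 31]}),
    "tree": (DecisionTreeClassifier(random_state=0),
             {"max_depth": [2, 5, 10, None], "min_samples_leaf": [1, 5, 20]}),
    "SVM": (SVC(), {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]}),
}

for name, (clf, grid) in candidates.items():
    default = cross_val_score(clf, X, y, cv=5).mean()
    tuned = GridSearchCV(clf, grid, cv=5).fit(X, y).best_score_
    # If the gain from tuning exceeds the spread between algorithms' default
    # scores, tuning mattered more than algorithm choice on this data set.
    print(f"{name}: default={default:.3f} tuned={tuned:.3f} "
          f"gain={tuned - default:+.3f}")
```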
We present a comprehensive global sensitivity analysis of two single-objective and two multi-objecti...
One of the biggest challenges in evolutionary computation concerns the selecti...
Training machine learning models requires users to select many tuning parameters. For example, a pop...
Much research has been done in the fields of classifier performance evaluation and optimization. Thi...
Work on metalearning for algorithm selection has often been criticized because it mostly c...
We address the problem of finding the parameter settings that will result in optimal performance of ...
Supervised classification is the most studied task in Machine Learning. Among the many algorithms us...
Although metaheuristic optimization has become a common practice, new bio-inspired algorithms often ...
This paper introduces a new method for learning algorithm evaluation and selection, with empirical r...
The problem of parameterization is often central to the effective deployment of nature-inspired alg...
In general, biologically-inspired multi-objective optimization algorithms comprise several parameter...
A learning curve displays the accuracy or error on test data of a machine learning algorithm...
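For readers unfamiliar with the concept, a learning curve can be computed in a few lines. The sketch below is a minimal illustration assuming scikit-learn's learning_curve utility; the model and data set are placeholders, not those of the cited work.

```python
# Minimal learning-curve sketch: cross-validated test accuracy at
# increasing training-set sizes for an illustrative model.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)

sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

# Mean test accuracy at each training-set size traces the curve.
for n, score in zip(sizes, test_scores.mean(axis=1)):
    print(f"train size {n:4d}: mean test accuracy {score:.3f}")
```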
Over the last decade, research on automated parameter tuning, often referred to as automatic algorit...
We propose a novel approach to ranking Deep Learning (DL) hyper-parameters through the application o...