Efficient Global Optimization (EGO) is regarded as the state-of-the-art algorithm for the global optimization of costly black-box functions. Nevertheless, the method suffers from several difficulties, such as ill-conditioning of the Gaussian process (GP) covariance matrix and slow convergence to the global optimum. The choice of the GP parameters is critical, as it controls the functional family of surrogates used by EGO, and the effect of these parameters on EGO's performance requires further investigation. Finally, it is not clear that the way the GP is learned from the data points in EGO is the most appropriate in the context of optimization. This work addresses the analysis and treatment of these issues. Firstly, this dissertation contribu...