Abstract. We give an equivalence between the task of computing the essential supremum of a summable function and that of finding a certain zero of a one-dimensional convex function. Interpreting the integral method as a Newton-type method, we show that the algorithm can converge very slowly when the essential supremum of the objective function is not spread. For this reason we propose an acceleration of the algorithm that is in some respects similar to the Aitken/Steffensen method.

Key words: essential supremum, convergence speed, integral global optimization, Newton algorithm

1. Introduction

The problem of determining the essential supremum of a summable function f over its domain D ⊆ ℝⁿ can be regarded as a generalization of the...
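To make the equivalence announced in the abstract concrete, the following sketch records one standard construction from integral global optimization; the notation ($F$, $\mu$, $y_k$) and the finite-measure assumption on $D$ are ours, so this is an illustration of the idea rather than the paper's exact formulation. For a summable $f$ on $D$, put
\[
  F(y) \;=\; \int_D \max\{f(x) - y,\, 0\}\, d\mu(x) .
\]
Then $F$ is convex and nonincreasing, and $\operatorname{ess\,sup}_{x \in D} f(x) = \min\{\, y : F(y) = 0 \,\}$, which is the one-dimensional zero-finding problem referred to above. Wherever $F$ is differentiable one has $F'(y) = -\mu(\{x \in D : f(x) > y\})$, so the Newton step reduces to the level-set mean-value iteration
\[
  y_{k+1} \;=\; y_k - \frac{F(y_k)}{F'(y_k)}
          \;=\; \frac{1}{\mu(\{f > y_k\})} \int_{\{f > y_k\}} f \, d\mu ,
\]
which is the sense in which the integral method can be read as a Newton-type method.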