Two common questions when using a stochastic global optimization algorithm, e.g., simulated annealing, are when to stop a single run of the algorithm and whether to restart with a new run or terminate the entire algorithm. In this paper, we develop a stopping and restarting strategy that considers tradeoffs between the computational effort and the probability of obtaining the global optimum. The analysis is based on a stochastic process called Hesitant Adaptive Search with Power-Law Improvement Distribution (HASPLID). HASPLID models the behavior of stochastic optimization algorithms and motivates an implementable framework, Dynamic Multistart Sequential Search (DMSS). We demonstrate here the practicality of DMSS by using it to govern t...
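The excerpt above does not spell out how DMSS makes its decisions, so the following is only a generic sketch of a multistart controller facing the same stop/restart/terminate tradeoff. The function names and the patience and budget parameters are hypothetical illustrations, not details taken from the paper.

import math
import random

def multistart_controller(objective, sample_start, local_step,
                          run_patience=50, restart_patience=5,
                          budget=10_000):
    """Illustrative only; not the paper's DMSS. A run stops after
    run_patience non-improving steps, and the whole procedure
    terminates after restart_patience non-improving restarts or
    when the evaluation budget is spent."""
    best_x, best_f = None, float("inf")
    stale_restarts = evals = 0
    while stale_restarts < restart_patience and evals < budget:
        x = sample_start()
        f = objective(x); evals += 1
        stale_steps = 0
        while stale_steps < run_patience and evals < budget:
            y = local_step(x)                 # propose a nearby point
            fy = objective(y); evals += 1
            if fy < f:
                x, f, stale_steps = y, fy, 0  # improvement: keep going
            else:
                stale_steps += 1              # no improvement this step
        if f < best_f:
            best_x, best_f, stale_restarts = x, f, 0
        else:
            stale_restarts += 1
    return best_x, best_f

# Example: a multimodal 1-D objective, random restarts, Gaussian moves.
best_x, best_f = multistart_controller(
    lambda x: math.sin(3 * x) + 0.1 * x * x,
    sample_start=lambda: random.uniform(-10.0, 10.0),
    local_step=lambda x: x + random.gauss(0.0, 0.1),
)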
In this paper we develop a methodology for defining stopping rules in a general class of global rand...
We consider a combination of state space partitioning and random search methods for solving determin...
By far the most efficient methods for global optimization are based on starting a local optimization...
Thesis (Master's)--University of Washington, 2021. Black-box optimization is ubiquitous in machine lea...
Local search algorithms for global optimization often suffer from getting trapped in a local optimum...
This thesis addresses aspects of stochastic algorithms for the solution of global optimisation probl...
In this article we study stochastic multistart methods for global optimization, which combine local ...
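As a concrete illustration of the multistart idea in the entries above (random starting points followed by a deterministic local search), here is a minimal sketch. The use of SciPy's Nelder-Mead optimizer, the Rastrigin test function, and the choice of 20 starts are my own assumptions, not details from the cited papers.

import numpy as np
from scipy.optimize import minimize

def multistart(fun, lower, upper, n_starts=20, rng=None):
    """Run a local optimizer from n_starts uniform random points
    and keep the best local minimum found."""
    rng = rng or np.random.default_rng()
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lower, upper)                 # random start
        res = minimize(fun, x0, method="Nelder-Mead")  # local descent
        if best is None or res.fun < best.fun:
            best = res
    return best

# Example: the 2-D Rastrigin function, which has many local minima.
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
best = multistart(rastrigin, lower=np.array([-5.12, -5.12]),
                  upper=np.array([5.12, 5.12]))
print(best.x, best.fun)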
How long should we run a stochastic global optimisation algorithm such as simulated annealing? How s...
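One simple (and certainly not the only) answer to the run-length question posed above is to split a fixed budget into several independent annealing runs; the cooling schedule and restart counts below are arbitrary placeholders, not the strategy proposed in the paper.

import math
import random

def sa_with_restarts(fun, sample_start, neighbour,
                     n_restarts=4, steps_per_run=2000,
                     t0=1.0, cooling=0.995):
    """Simulated annealing restarted from fresh random points,
    keeping the best point seen across all runs."""
    best_x, best_f = None, float("inf")
    for _ in range(n_restarts):
        x = sample_start()
        f = fun(x)
        t = t0
        for _ in range(steps_per_run):
            y = neighbour(x)
            fy = fun(y)
            # Metropolis acceptance: always take improvements, and
            # sometimes take uphill moves to escape local minima.
            if fy < f or random.random() < math.exp((f - fy) / t):
                x, f = y, fy
            t *= cooling
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Example: anneal a multimodal 1-D objective from four random starts.
x, fx = sa_with_restarts(
    lambda x: math.sin(3 * x) + 0.1 * x * x,
    sample_start=lambda: random.uniform(-10.0, 10.0),
    neighbour=lambda x: x + random.gauss(0.0, 0.3),
)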
Most state-of-the-art optimization algorithms utilize restart to resample new initial solutions to a...
This paper presents some simple technical conditions that guarantee the convergence of a general cla...
In this paper several probabilistic search techniques are developed for global optimization under th...
A stochastic method for global optimization is described and evaluated. The method involves a combin...
A stochastic technique for multiextremal optimization is discussed; the technique derives fr...
A direct stochastic algorithm for global search. This paper presents a new algorithm called PGSL- Pro...