We present the first q-Gaussian smoothed functional (SF) estimator of the Hessian and the first Newton-based stochastic optimization algorithm that estimates both the Hessian and the gradient of the objective function using q-Gaussian perturbations. Our algorithm requires only two system simulations per update epoch (regardless of the parameter dimension) and estimates both the gradient and the Hessian from these two measurements. We also present a proof of convergence of the proposed algorithm. In a related recent work (Ghoshdastidar, Dukkipati, & Bhatnagar, 2014), we presented gradient SF algorithms based on q-Gaussian perturbations. Our work extends prior work on SF algorithms by generalizing the class of perturbation distributions, as most distr...
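The abstract describes the two-simulation Newton SF scheme only at a high level. As a rough illustration of the measurement pattern (not the paper's exact estimator), the sketch below performs one two-measurement SF update with standard Gaussian perturbations standing in for the q-Gaussian ones; the function name sf_newton_step, the step sizes, and the diagonal regularization are illustrative choices, and the paper's q-Gaussian estimators use different, q-dependent scaling constants.

```python
import numpy as np

def sf_newton_step(f, theta, eta=0.1, lr=0.01, rng=None):
    """One Newton-style smoothed functional update from TWO simulations.

    Sketch only: standard Gaussian perturbations stand in for the
    q-Gaussian perturbations of the paper, whose gradient/Hessian
    estimators carry different q-dependent constants.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = theta.size
    z = rng.standard_normal(d)          # random perturbation direction
    y_plus = f(theta + eta * z)         # simulation 1
    y_minus = f(theta - eta * z)        # simulation 2

    # Two-measurement SF gradient estimate: E[z z^T] = I recovers the gradient.
    grad_hat = z * (y_plus - y_minus) / (2.0 * eta)

    # Two-measurement SF Hessian estimate (Gaussian-perturbation form):
    # weight z_i z_j off the diagonal and z_i^2 - 1 on the diagonal.
    h_z = np.outer(z, z) - np.eye(d)
    hess_hat = h_z * (y_plus + y_minus) / (2.0 * eta ** 2)

    # Newton-style step; a diagonal shift keeps the noisy estimate invertible.
    hess_reg = hess_hat + 1e-2 * np.eye(d)
    return theta - lr * np.linalg.solve(hess_reg, grad_hat)
```

Single-sample estimates of this kind are very noisy; Newton SF algorithms in this line of work average the Hessian estimate across iterations and project it onto positive-definite matrices before inverting it.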
We study the optimization of a continuous function by its stochastic relaxatio...
The q-Gaussian distribution results from maximizing certain generalizations of Shannon entropy under...
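For reference (a standard fact about the family, not taken from the truncated abstract above), the univariate q-Gaussian density can be written with the q-exponential,

\[
p_q(x) \propto e_q(-\beta x^2),
\qquad
e_q(u) = \bigl[\,1 + (1-q)\,u\,\bigr]_+^{1/(1-q)},
\]

which recovers the Gaussian density as q \to 1 and has power-law tails for 1 < q < 3.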
We propose a multi-time scale quasi-Newton based smoothed functional (QN-SF) algorithm for stochasti...
The importance of the q-Gaussian family of distributions lies in its power-law nature, and its close...
Smoothed functional (SF) schemes for gradient estimation are known to be efficient in stochastic opt...
In this article, we present three smoothed functional (SF) algorithms for simulation optimization. Wh...
Optimization problems involving uncertainties are common in a variety of engineering disciplines suc...
This work presents a novel version of the recently developed Gauss-Newton method for solving systems of...
The smoothed functional (SF) algorithm estimates the gradient of the stochastic optimization problem by ...
We develop four algorithms for simulation-based optimization under multiple inequality constraints. ...
While first-order methods are popular for solving optimization problems that arise in large-scale de...