Thesis (Ph.D.)--University of Washington, 2018. Many applications in decision-making use a dynamic optimization framework to model a system evolving uncertainly in discrete time, and an agent who chooses actions/controls from a set of available choices in order to minimize a suitable cost function. An important aspect of model formulation is the choice of input parameters. These are traditionally estimated from historical data and prior domain knowledge, and treated as known quantities in the decision-making process. This approach ignores any estimation errors or misspecification in the problem data, leading to potentially suboptimal solutions. Robust optimization addresses this issue by treating the parameters themselves as unknown quantities...
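For reference, the worst-case dynamic programming recursion underlying this robust treatment of parameters is commonly written as follows; the notation (cost c, discount factor β, uncertainty set 𝒫(s,a)) is generic and assumed here, not quoted from the thesis.

```latex
% Robust Bellman recursion: an adversary selects the worst transition law
% from the uncertainty set P(s,a) before the expectation is taken.
\[
  V(s) \;=\; \min_{a \in A(s)} \Big[\, c(s,a)
      + \beta \max_{p \in \mathcal{P}(s,a)} \sum_{s' \in S} p(s')\, V(s') \,\Big],
  \qquad s \in S .
\]
```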
Markov decision processes (MDPs) are a standard modeling tool for sequential decision making in a dyna...
We consider large-scale Markov decision processes (MDPs) with parameter uncertainty, under the robu...
Decision making formulated as finding a strategy that maximizes a utility function depends critical...
We consider robust Markov Decision Processes with Borel state and action spaces, unbounded cost and ...
In this paper, approximate dynamic programming (ADP) problems are modeled by discounted infinite-hor...
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Resea...
Optimal solutions to Markov decision problems may be very sensitive with respect to the state transi...
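To make this sensitivity concrete, below is a minimal sketch, not the method of the cited work, of robust value iteration on a small finite MDP in which each transition probability is only known up to an interval and the Bellman update uses the worst case within that interval; all function names and numbers are illustrative.

```python
import numpy as np

def worst_case_expectation(v, p_lo, p_hi):
    """Maximize sum_j p[j] * v[j] over the box [p_lo, p_hi] intersected with the
    probability simplex: start at the lower bounds, then greedily push the
    remaining mass onto successor states with the largest value."""
    p = p_lo.copy()
    budget = 1.0 - p_lo.sum()
    for j in np.argsort(-v):                     # largest V(s') first
        add = min(p_hi[j] - p_lo[j], budget)
        p[j] += add
        budget -= add
        if budget <= 1e-12:
            break
    return float(p @ v)

def robust_value_iteration(cost, p_lo, p_hi, beta=0.9, tol=1e-8):
    """cost[s, a] is the stage cost; p_lo/p_hi[s, a, s'] bound the unknown kernel."""
    n_s, n_a = cost.shape
    v = np.zeros(n_s)
    while True:
        q = np.array([[cost[s, a] + beta * worst_case_expectation(v, p_lo[s, a], p_hi[s, a])
                       for a in range(n_a)] for s in range(n_s)])
        v_new = q.min(axis=1)                    # the agent minimizes worst-case cost
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmin(axis=1)
        v = v_new

# Tiny illustrative instance: 2 states, 2 actions, +/-0.1 slack around a nominal kernel.
cost = np.array([[1.0, 2.0], [0.5, 0.3]])
p_nom = np.array([[[0.8, 0.2], [0.5, 0.5]],
                  [[0.3, 0.7], [0.9, 0.1]]])
p_lo, p_hi = np.clip(p_nom - 0.1, 0.0, 1.0), np.clip(p_nom + 0.1, 0.0, 1.0)
v_robust, policy = robust_value_iteration(cost, p_lo, p_hi)
```

The robust Bellman operator used here remains a β-contraction, so the iteration converges; the greedy inner step is exact for box uncertainty intersected with the simplex, while other uncertainty sets would require a different inner maximization.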
In this paper, we propose a new tractable framework for dealing with linear dynamical systems affect...
Thesis (Ph.D.)--University of Washington, 2018. A broad range of optimization problems in applications...
This paper presents a new robust decision-making algorithm that accounts for model uncert...
Multi-stage robust optimization problems, where the decision maker can dynamically react to consecut...
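One standard device for making such dynamic reactions tractable (a common approximation in the multi-stage robust optimization literature, not necessarily the approach of this particular work) is to restrict decisions to affine functions of the uncertainty observed so far:

```latex
% Affine (linear) decision rule: the stage-t decision responds linearly to the
% uncertain parameters revealed up to stage t; the notation is illustrative.
\[
  x_t(\xi_1,\dots,\xi_t) \;=\; x_t^0 + \sum_{s=1}^{t} X_{t,s}\, \xi_s ,
\]
% with the coefficients x_t^0 and X_{t,s} chosen to be feasible and optimal
% against the worst case xi in the uncertainty set Xi.
```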
This thesis is about robust optimization, a class of mathematical optimization problems which arise ...