This paper provides an algorithm for computing policies for dynamic economic models whose state vectors evolve as ergodic Markov processes. The algorithm can be described as a simple learning process, one that agents might actually use. It has two features that break the relationship between its computational requirements and the dimension of the model’s state space. First, the integral over future states needed to determine policies is never calculated; rather, it is estimated by a simple average of past outcomes. Second, the algorithm never computes policies at all points: iterations are defined by a location, and only policies at that location are computed. Random draws from the distribution determined by those policies determine the next ...
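The two features described above can be illustrated with a minimal sketch, not the paper's exact algorithm: an asynchronous stochastic update on a hypothetical toy two-state, two-action Markov decision problem. The expectation over future states is never integrated explicitly; it is replaced by a running average of simulated outcomes, and only the currently visited state is updated. All names, the discount factor, and the toy rewards and transition probabilities are illustrative assumptions.

```python
import random

random.seed(0)
BETA = 0.9  # discount factor (illustrative choice)

# Hypothetical toy MDP: rewards[s][a] is the payoff at state s under action a;
# P[s][a] is the probability of moving to state 1 from state s under action a.
rewards = [[1.0, 0.5], [0.2, 0.8]]
P = [[0.3, 0.7], [0.6, 0.4]]

Q = [[0.0, 0.0], [0.0, 0.0]]   # value estimates
n = [[0, 0], [0, 0]]           # visit counts driving the running averages

s = 0
for _ in range(20000):
    # Policies are computed only at the current location s.
    a = max((0, 1), key=lambda act: Q[s][act])
    if random.random() < 0.1:          # occasional exploration so every
        a = random.randint(0, 1)       # action keeps being sampled
    s_next = 1 if random.random() < P[s][a] else 0
    # One simulated outcome stands in for the integral over future states.
    target = rewards[s][a] + BETA * max(Q[s_next])
    n[s][a] += 1
    Q[s][a] += (target - Q[s][a]) / n[s][a]  # simple average of past outcomes
    s = s_next  # the random draw determines the next location to update

print([round(max(qs), 2) for qs in Q])
```

Because updates occur only along the simulated path, per-iteration cost does not grow with the number of states, which is the sense in which the approach sidesteps the curse of dimensionality.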
Markov decision process (MDP), originally studied in the Operations Research (OR) community, provide...
We introduce a numerical algorithm for solving dynamic economic models that merges stochastic simula...
This paper presents a novel approach for approximate stochastic dynamic programming (ASDP) over a co...
Stochastic dynamic programs suffer from the so-called curse of dimensionality, whereby the nu...
Statistical procedures are developed for reducing the number of autonomous state variables in stocha...
This paper presents a novel algorithm for learning in a class of stochastic Markov decision process...
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer...
We describe a sparse grid collocation algorithm to compute recursive solutions of dynamic economies ...
We develop numerically stable stochastic simulation approaches for solving dynamic economic models. ...
Markov decision process (MDP) models are widely used for modeling sequential decision-making problem...
We present a comprehensive framework for Bayesian estimation of structural nonlinear dynamic economi...
Discrete-time stochastic games with a finite number of states have been widely applied to study the ...
We provide a probabilistic analysis of the bandit algorithm when transition probabilities may depend...
We propose a generic method for solving infinite-horizon, discrete-time dynamic incentive problems w...