This paper derives the dynamic programming equation (DPE) for a differentiable Markov Perfect equilibrium in a problem with non-constant discounting and general functional forms. Beginning with a discrete-stage model and taking the limit as the stage length goes to 0 yields the DPE for the corresponding continuous-time problem. The paper discusses the multiplicity of equilibria under non-constant discounting, calculates the bounds of the set of candidate steady states, and Pareto-ranks the equilibria. © 2005 Published by Elsevier Inc.
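As an illustrative sketch only (the abstract above treats general discount functions; the quasi-hyperbolic beta-delta case below is a common special case, not the paper's formulation), the equilibrium DPE for a sophisticated agent with a deterministic transition x' = f(x, c) can be written as a pair of coupled functional equations:

```latex
% Illustrative special case: quasi-hyperbolic (beta-delta) discounting.
% W is the objective the current self maximizes, discounting the entire
% future by an extra factor beta; V is the continuation value generated
% by the equilibrium policy c^*(x), which future selves discount by delta.
\begin{align*}
  W(x) &= \max_{c}\; \bigl\{\, u(c) + \beta\,\delta\, V\!\bigl(f(x,c)\bigr) \,\bigr\}, \\
  V(x) &= u\bigl(c^{*}(x)\bigr) + \delta\, V\!\bigl(f\bigl(x, c^{*}(x)\bigr)\bigr).
\end{align*}
```

With $\beta = 1$ the system collapses to the standard Bellman equation; with $\beta \neq 1$ the fixed point depends on the policy future selves are expected to use, which is the source of the multiplicity of Markov Perfect equilibria mentioned in the abstract.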
Abstract: In this paper, we study discounted Markov decision processes on an uncountable state space. ...
We consider a discrete time Markov Decision Process, where the objectives are linear combinations of...
The possibility of non-constant discounting is important in environmental and resource management pr...
This paper deals with discrete-time Markov decision processes (MDPs) with Borel state and action spa...
Under non-exponential discounting, we develop a dynamic theory for stopping pr...
This paper derives the HJB (Hamilton-Jacobi-Bellman) equation for sophisticated agents in a finite h...
Abstract: We study the existence of optimal strategies and value function of non-stationary Markov ...
We consider dynamic programming problems with a large time horizon, and give sufficient conditions fo...
Abstract: This paper first investigates the nonstationary continuous time Markov decision processes wi...
1. The basic problem and its solution in the deterministic case. 1.1. General deterministic case. Dy...
This article establishes a dynamic programming argument for a maximin optimization problem where the...