The possibility of non-constant discounting is important in environmental and resource management problems where current decisions affect welfare in the far-distant future, as with climate change. The difficulty of analyzing models with non-constant discounting limits their application. We describe and provide software to implement an algorithm to numerically obtain a Markov Perfect Equilibrium for an optimal control problem with non-constant discounting. The software is available online. We illustrate the approach by studying welfare and observational equivalence for a particular renewable resource management problem.
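To fix ideas, the sketch below illustrates the kind of computation involved: a Markov Perfect Equilibrium for a renewable-resource harvesting problem under quasi-hyperbolic (beta-delta) discounting, obtained by iterating jointly on the harvest policy and the continuation value. This is a minimal illustration, not the paper's algorithm or software; the logistic growth function, square-root utility, grids, and parameter values are all assumptions chosen for the example.

```python
# Minimal sketch (illustrative, not the paper's code): MPE for a renewable
# resource under quasi-hyperbolic discounting via policy/value iteration.
import numpy as np

beta, delta = 0.7, 0.95          # present bias and long-run discount factor (assumed)
r, K = 0.3, 1.0                  # logistic growth rate and carrying capacity (assumed)
u = np.sqrt                      # period utility of harvest (assumed)
growth = lambda s: s + r * s * (1.0 - s / K)   # stock transition after harvest

x_grid = np.linspace(1e-3, K, 201)             # resource stock grid
h_frac = np.linspace(0.0, 1.0, 101)            # harvest as a fraction of stock

W = np.zeros_like(x_grid)        # continuation value implied by future selves' play
policy = np.zeros_like(x_grid)   # equilibrium harvest rule h(x)

for it in range(2000):
    # Current self best-responds to the future selves' value W,
    # weighting the future by beta*delta.
    H = np.outer(x_grid, h_frac)                     # candidate harvest levels
    S_next = growth(x_grid[:, None] - H)             # next-period stock
    payoff = u(H) + beta * delta * np.interp(S_next, x_grid, W)
    policy_new = H[np.arange(len(x_grid)), payoff.argmax(axis=1)]

    # Continuation value: future selves follow the same rule, evaluated
    # with the exponential factor delta alone.
    W_new = u(policy_new) + delta * np.interp(
        growth(x_grid - policy_new), x_grid, W)

    converged = (np.max(np.abs(policy_new - policy)) < 1e-8
                 and np.max(np.abs(W_new - W)) < 1e-8)
    policy, W = policy_new, W_new
    if converged:
        break

print("equilibrium harvest at x = K/2:", np.interp(K / 2, x_grid, policy))
```

The fixed point of this iteration, when it converges, is a time-consistent (sophisticated) harvesting rule: each "self" takes the behavior of its successors as given, which is what distinguishes the Markov Perfect Equilibrium here from the solution to a standard exponentially discounted control problem.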