Dynamic Programming In Macroeconomics


...which is a fundamental tool of dynamic macroeconomics. "The term dynamic programming was originally used in the 1940s by Richard Bellman to describe the process of solving problems where one needs to find the best decisions one after another. By 1953, he refined this to the modern meaning, referring specifically to nesting smaller decision problems inside larger decisions" (Wikipedia, "Dynamic programming").
The contributions of Bellman (1957) and Bertsekas (1976) give us the mathematical theory behind it as a tool for solving dynamic optimization problems. For economists, Sargent (1987) and Stokey and Lucas (1989) built a valuable bridge between this mathematical theory and its economic applications.
2.1 Dynamic Programming Overview
Dynamic programming is used to solve complex problems by decomposing them into simpler sub-problems. The main idea behind it is quite simple: to solve a given problem, we solve different parts of the problem (the sub-problems) and then combine the solutions of these sub-problems to reach an overall solution. The dynamic programming approach aims to solve each sub-problem only once and therefore reduces the number of computations.
This is especially useful because the number of repeating sub-problems is often exponentially large.
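As a minimal illustration of this idea (a sketch of my own, not code from the paper), the Python snippet below treats a problem with exponentially many overlapping sub-problems and caches each sub-problem's solution so that it is computed only once. The Fibonacci recursion is chosen purely because it is the simplest such example.

```python
from functools import lru_cache

# Naive recursion: the same sub-problems are recomputed exponentially often.
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Dynamic-programming version: each sub-problem is solved once and cached,
# so the exponential tree of repeated calls collapses to n distinct calls.
@lru_cache(maxsize=None)
def fib_dp(n):
    return n if n < 2 else fib_dp(n - 1) + fib_dp(n - 2)

print(fib_dp(90))   # answers instantly; fib_naive(90) would be infeasible
```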
The basic idea of dynamic programming is to turn the sequence problem into a functional equation, i.e., one of finding a function rather than a sequence.
This often gives better economic insights, similar to the logic of comparing today to tomorrow. It is also often easier to characterize analytically or numerically. Some important concepts in dynamic programming are the time horizon, state variables, decision variables, transition functions, return functions, and objective functions...
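To illustrate the "sequence problem versus functional equation" idea, the two formulations can be sketched as follows, using notation common in Stokey and Lucas (1989) rather than symbols defined in this excerpt (F is the return function, Γ the feasibility correspondence, β the discount factor, x the state):

```latex
% Sequence problem: choose an entire sequence of states
\max_{\{x_{t+1}\}_{t=0}^{\infty}} \; \sum_{t=0}^{\infty} \beta^{t} F(x_t, x_{t+1})
\quad \text{s.t. } x_{t+1} \in \Gamma(x_t), \quad x_0 \text{ given.}

% Functional equation (Bellman equation): find a value function V
V(x) = \max_{x' \in \Gamma(x)} \bigl\{ F(x, x') + \beta\, V(x') \bigr\}.
```

The second formulation weighs today's return F(x, x') against the discounted value of starting tomorrow from x', which is exactly the "today versus tomorrow" logic mentioned above.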

... middle of paper ...

...the principle of optimality for dynamic programming.
6. The solution procedure begins by finding the optimal policy for the last stage. The optimal policy for the last stage prescribes the optimal policy decision for each of the possible states at that stage. The solution of this one-stage problem is usually trivial.
7. A recursive relationship that identifies the optimal policy for stage n, given the optimal policy for stage n + 1, is available (see the sketch after this list).
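The Python sketch below is a hypothetical illustration of points 6 and 7 (not code from the paper): the last stage is solved first for every possible state, and the recursive relationship then delivers the optimal policy for each earlier stage from the value function of the stage after it. The function names and the cake-eating example at the end are placeholders I introduce here, not objects defined in the text.

```python
def backward_induction(n_stages, states, actions, reward, transition, terminal_value):
    """Return value functions V[i][s] and an optimal policy for each stage."""
    V = [dict() for _ in range(n_stages + 1)]
    policy = [dict() for _ in range(n_stages)]

    # Point 6: solve the last stage first; its value is just the terminal
    # value of each possible state, so this one-stage problem is trivial.
    for s in states:
        V[n_stages][s] = terminal_value(s)

    # Point 7: a recursive relationship gives the optimal policy for stage i
    # once the value function of stage i + 1 is known.
    for i in reversed(range(n_stages)):
        for s in states:
            best_a, best_v = None, float("-inf")
            for a in actions(s):
                v = reward(i, s, a) + V[i + 1][transition(i, s, a)]
                if v > best_v:
                    best_a, best_v = a, v
            V[i][s], policy[i][s] = best_v, best_a
    return V, policy


# Hypothetical example: split an integer "cake" of size 5 over 3 periods,
# with period utility sqrt(amount eaten) and no terminal value.
states = range(6)                          # cake remaining: 0, ..., 5
actions = lambda s: range(s + 1)           # eat between 0 and s today
reward = lambda i, s, a: a ** 0.5          # period return function
transition = lambda i, s, a: s - a         # cake left for the next stage
V, policy = backward_induction(3, states, actions, reward, transition,
                               terminal_value=lambda s: 0.0)
print(policy[0][5], V[0][5])               # first-period choice and total value
```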
In the context of mathematical optimization, dynamic programming often refers to the simplification of a decision by breaking it down into a sequence of decision steps over time. We define a sequence of value functions V_1, V_2, ..., V_n, each with an argument y that represents the state of the system at time i, i ∈ {1, ..., n}. The definition of V_n(y) is the value obtained at the last time n in state y. The values V_i at earlier times i = n − 1, n − 2, ..., 2, 1 can then be found by working backwards, using a recursive relationship (the Bellman equation).
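Written out, and introducing symbols purely for illustration (a decision a from a feasible set A_i(y), stage return functions r_i, and transition functions g_i, none of which are defined in the excerpt), the recursion takes the form:

```latex
V_n(y) = r_n(y), \qquad
V_i(y) = \max_{a \in A_i(y)} \bigl\{ r_i(y, a) + V_{i+1}\bigl(g_i(y, a)\bigr) \bigr\},
\quad i = n-1, \dots, 1.
```

The value at the last time is simply the terminal return, and each earlier value function is obtained from the one that follows it, one stage at a time.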
