Dynamic Stability and Recursion

Economics and computer science overlap with surprising frequency.  Most of the overlap is computational: calculating equilibria, maximizing profit, and so on.  Computer science has even been integrated into economics through R, and there is a wide range of economic applications where it makes things easier.  Recently I ran into one of these applications.  It began while I was studying for my computer science test, which covered a subject called recursion.  In practice recursion is very difficult, and as any student knows, taking a test on something doesn't mean you understand it.

While applying recursion can be difficult, the concept is not as complicated.  The idea behind recursion is an action that repeats itself.  Multiplication is an example: when I say 4×3, what I'm also saying is 4+4+4.  If I wanted to solve this recursively, I would take the following steps (sketched in code after the list).

  1. 4+4+4
  2. 8+4
  3. 12
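
Here is a minimal Python sketch of the multiplication example written recursively. The function name and the assumption that the second argument is a non-negative integer are mine, not from the original post:

```python
def multiply(a, b):
    """Multiply a by b using repeated addition (b assumed to be a non-negative integer)."""
    if b == 0:                        # base case: anything times zero is zero
        return 0
    return a + multiply(a, b - 1)     # recursive case: a*b = a + a*(b-1)

print(multiply(4, 3))  # 12, i.e. 4 + 4 + 4
```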

Similar methods can be used to compute exponents, addition, and much more.  If you're a visual learner, I suggest looking at the graphical methods below to get a better idea of recursion.

While I was studying recursion, we were reviewing Cournot duopolies (best response functions) in my math-econ class.  A best response function calculates a firm's best response to another firm's choice of price or quantity.  Usually you can solve these equations for a Nash equilibrium: a price p* or quantity q* from which neither firm has any incentive to deviate.
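
As a concrete illustration, here is a sketch of a best response function for a textbook linear Cournot duopoly. The demand intercept a, slope b, and common marginal cost c are assumed example parameters, not values from the original post:

```python
# Linear Cournot duopoly: inverse demand P = a - b*(q1 + q2), constant marginal cost c.
# Firm i's profit is (P - c) * q_i, and maximizing it over q_i given the rival's
# quantity q_j gives the best response (a - c - b*q_j) / (2*b).

a, b, c = 100.0, 1.0, 10.0   # assumed example parameters

def best_response(q_rival):
    """Firm's profit-maximizing quantity given the rival's quantity."""
    return max((a - c - b * q_rival) / (2 * b), 0.0)

# The symmetric Nash equilibrium solves q* = best_response(q*), i.e. q* = (a - c) / (3*b).
q_star = (a - c) / (3 * b)
print(q_star, best_response(q_star))   # both are 30.0: no incentive to deviate
```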

However, similar to simple supply and demand, these equilibria can exhibit dynamic stability.  If one firm chooses a price or quantity outside of equilibrium, the decisions gravitate back toward equilibrium, with each firm best responding to the other firm's out-of-equilibrium decision.  Mathematically, that means recalculating each firm's best response function again and again.  This is a recursive concept: in order to arrive at the Nash equilibrium you have to calculate the best response over and over and over again until it converges, as in the sketch below.
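A rough sketch of that recursive process, reusing the hypothetical best_response function from above: start from any out-of-equilibrium quantity and repeatedly best respond to the last choice.

```python
def iterate_br(q_rival, steps):
    """Recursively best respond to the rival's last quantity a fixed number of times."""
    if steps == 0:                                   # stop after a set number of rounds
        return q_rival
    return iterate_br(best_response(q_rival), steps - 1)

# Starting far from equilibrium, the quantities gravitate toward q* = 30.
print(iterate_br(80.0, 5))    # 28.4375, already close to 30
print(iterate_br(80.0, 25))   # closer still
```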

Another aspect of recursion is getting it to stop.  There are a few simple ways to get a recursive method to stop: one is specifying a number of repetitions, like 5 or 10,000; another is imposing a convergence constraint, stopping once the answer stops changing (by more than some small amount) from step to step.  The latter is more applicable in this case.  When observing dynamic stability, the process never exactly reaches equilibrium; it just gets really, really close.  Therefore you can't stop when it hits equilibrium, you have to stop for another reason, as in the sketch below.
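Here is a sketch of the convergence-based stopping rule, again reusing the hypothetical best_response function. The tolerance value is an assumption of mine:

```python
def iterate_until_converged(q_rival, tol=1e-6):
    """Recursively best respond until the quantity changes by less than tol."""
    q_next = best_response(q_rival)
    if abs(q_next - q_rival) < tol:      # convergence constraint: change is negligible
        return q_next
    return iterate_until_converged(q_next, tol)

print(iterate_until_converged(80.0))   # approximately 30, never exactly equal to it
```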
