Constant relative risk aversion
From Hobson's Choice
A theory of risk preference in which people are believed to give downside risks greater weight than upside "risks" (i.e., the likelihood of a windfall). Constant relative risk aversion (CRRA) is an example of a stylized fact: economists do not literally believe that human behavior reliably conforms to CRRA, but some believe that humans in the aggregate tend to follow this pattern. The interesting aspect of CRRA is that, in situations where an institution can control the precise monetary risks of the people doing business with it, anyone whose preferences fail to conform to a pattern like this can be offered a sequence of individually acceptable gambles through which the institution reliably "pumps" money out of them.
CRRA assigns a constant ratio by which people give higher weights to downside risks than to upside risks. CRRA is also used in the Ramsey-Cass-Koopmans model as the "constant intertemporal elasticity of substitution," or the degree to which people prefer a stable rate of consumption relative to higher consumption in the future.
There is a game of chance called "St. Petersburg," and it is about the simplest game possible. Take a fair coin and flip it. You have bet 2 rubles on the outcome. If the coin comes up heads, you win 2 rubles; if it comes up tails, you play again, this time for 4 rubles. Each time the stake is doubled, so a game ending on the nth flip yields a prize of 2^n rubles. Each flip of the coin is called a "trial," and the string of trials with their outcomes that concludes the game is called a "consequence."
The probability of a consequence of n flips, P(n), is 1/2^n, and the "expected payoff" of each consequence is the prize times its probability. The "expected value" of the game is the sum of the expected payoffs of all the consequences. Since the expected payoff of each possible consequence is 1 ruble, and there are infinitely many of them, this sum is an infinite number of rubles. This became known as the St. Petersburg Paradox.
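The arithmetic behind the paradox can be checked directly. The sketch below (illustrative code, not from the original text) sums the expected payoffs of the first n consequences and shows that the partial sums grow without bound:

```python
from fractions import Fraction

def partial_expected_value(max_flips: int) -> Fraction:
    """Sum the expected payoffs of all consequences up to max_flips trials.

    The consequence ending on flip n has probability 1/2**n and a prize
    of 2**n rubles, so each term contributes exactly 1 ruble.
    """
    return sum(Fraction(2**n) * Fraction(1, 2**n)
               for n in range(1, max_flips + 1))

for cap in (10, 100, 1000):
    print(cap, partial_expected_value(cap))  # 10, 100, 1000 — no limit
```

Exact rational arithmetic (`Fraction`) is used so the 1-ruble terms do not pick up floating-point error.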
Daniel Bernoulli, the Swiss philosopher and mathematician, suggested that the problem lay in rewarding people with money rather than utility. At the back of the paradox is the fact that (2^n)(2^-n) = 1 for all values of n; as n becomes (or could become) infinitely large, the sum of the probable payoffs reaches ∞. That is not true for utility, however, and Bernoulli proposed that the expected utility, as opposed to the expected payout in rubles, was necessarily finite.
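Bernoulli's resolution used logarithmic utility. With U(prize) = ln(prize), consequence n contributes (n ln 2)/2^n, and the infinite sum converges to 2 ln 2 ≈ 1.386. A sketch (the truncation point 60 is an arbitrary choice; the tail beyond it is negligible):

```python
import math

def expected_log_utility(max_flips: int) -> float:
    """Expected log-utility of the St. Petersburg game, truncated at max_flips.

    Consequence n has probability 2**-n and prize 2**n rubles, so its
    utility contribution is log(2**n) / 2**n = n*ln(2)/2**n.
    """
    return sum(math.log(2**n) / 2**n for n in range(1, max_flips + 1))

print(expected_log_utility(60))  # ≈ 1.3863, i.e. 2 * ln(2): finite
```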
Application to Economics
Economists became increasingly interested in risk because it applies to virtually all decisions, particularly those related to savings. Suppose a temporary employee is working on a long-term assignment. The temp could be dismissed at any moment; temps are almost never given notice, and an abrupt dismissal typically causes the temp great hardship. Because of this, the temp faces a risk if she accrues any debt; savings are vital to surviving the periods between assignments. Yet there is also a potential benefit in taking night-school courses in, say, accounting. She must therefore weigh the risk of dismissal (weighted by its consequences) against the probability of getting a permanent job with benefits (weighted by the value of those benefits).
Putting this another way, let U be the utility experienced by the temp. U is a function of consumption C, which of course varies over time: U = U(C(t)). The temp may prefer to take risks in order to enhance her estimated future consumption, for instance an increase in income brought about by the risky investment of scarce money in tuition. In the CRRA specification U(C) = C^(1-θ)/(1-θ), the parameter θ measures relative risk aversion: for small values of θ, marginal utility diminishes more slowly as C grows than it does for large values of θ. That is the crucial significance of θ.
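A minimal sketch of the conventional CRRA utility function may make θ's role concrete. The function names below are illustrative, not from the text:

```python
import math

def crra_utility(c: float, theta: float) -> float:
    """Conventional CRRA utility: c**(1-theta)/(1-theta); ln(c) when theta == 1."""
    if theta == 1:
        return math.log(c)
    return c ** (1 - theta) / (1 - theta)

def marginal_utility(c: float, theta: float) -> float:
    """U'(c) = c**-theta: falls off faster in c when theta is larger."""
    return c ** -theta

# Doubling consumption cuts marginal utility by a factor of 2**theta,
# so a more risk-averse agent (larger theta) values extra consumption less.
for theta in (0.5, 1.0, 2.0):
    ratio = marginal_utility(2, theta) / marginal_utility(1, theta)
    print(theta, ratio)  # 0.7071..., 0.5, 0.25 — steeper drop as theta rises
```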
In the graph below, the horizontal axis C(z) refers to a random outcome; the probability that z1 happens is p, and the probability that z2 happens is (1-p). In other words, either z1 or z2 can happen, so the expected outcome E(z) is pz1 + (1-p)z2. Now notice that a chord has been drawn between points A and B. The expected utility E(U) lies on this chord, at point E, while the utility of the expected outcome, U[E(z)], lies on the curve at point D; E is substantially lower than D. The position of E on the chord depends on the ratio p:(1-p).
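The gap between the chord and the curve is Jensen's inequality, and it can be checked numerically. Here the square root serves as a stand-in concave utility function, and the outcome values are illustrative assumptions:

```python
import math

def expected_utility_gap(z1, z2, p, u):
    """Compare E[U] (point E, on the chord) with U[E(z)] (point D, on the curve)."""
    expected_z = p * z1 + (1 - p) * z2
    eu = p * u(z1) + (1 - p) * u(z2)   # expected utility: a point on the chord
    u_of_ez = u(expected_z)            # utility of the expected outcome: on the curve
    return eu, u_of_ez

eu, u_ez = expected_utility_gap(1.0, 9.0, 0.5, math.sqrt)
print(eu, u_ez)  # 2.0 vs 2.236...: E[U] falls below U[E(z)] for a concave u
```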
This is just a complicated way of saying that risk aversion inflicts a severe hit on the utility of a bundle of benefits.
The function above was developed by Milton Friedman and Leonard Savage in 1948. Friedman and Savage also speculated on other shapes of the risk-utility function, but the curve above has a certain usefulness for the economics profession. If a person's curve is very much unlike the one shown above, then that person can be presented with a series of risks, each of which she finds individually acceptable, that lead her into any position whatsoever; the other party (say, the casino management) can always make a profit and essentially "pump" money out of such players. While some people undoubtedly are like that, the population in the aggregate cannot be, or the economy would grind to a halt.
If we are looking at the function as a CIES graph, then the horizontal axis merely represents increasing values of consumption. If, however, we are looking at the function as a CRRA graph, then it makes sense to regard the horizontal axis as a series of equally likely payouts: a segment between zi and zj whose length is 1% of the entire horizontal axis would have a 1% probability of occurring.
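Under that reading, expected utility is simply the plain average of U over the equally likely payouts, and concavity again pulls it below the utility of the average payout. A small sketch, where the payout grid and the square-root utility are illustrative assumptions:

```python
import math

# 100 equally likely payouts spanning the horizontal axis (hypothetical values).
payouts = [1 + 0.1 * i for i in range(100)]

# Expected utility = unweighted mean of U over the equally likely payouts.
expected_utility = sum(math.sqrt(z) for z in payouts) / len(payouts)
utility_of_mean = math.sqrt(sum(payouts) / len(payouts))

print(expected_utility < utility_of_mean)  # True: concavity penalizes the spread
```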
- ↑ For those of you unfamiliar with calculus: some algebraic functions, like f(x) = x^-2, can be graphed from 1 to infinity, and the total area under the curve is finite (for this example, exactly 1). This seems impossible, but it's true.
- "The Theory of Risk Aversion," College of Economics and Public Administration, New School University, NY
James R MacLean (12:03, 23 September 2007 (PDT))