Saturday, November 24, 2012

Mathematics for Sustainability 4

Risk

Another theme I want to address in the course is how we evaluate risk. This is a tricky but important task. On the one hand, there is an effective mathematical language, the language of probability theory, for quantifying risk and expectation. On the other hand, there is an extensive behavioral literature which strongly suggests that when we actually make our decisions, we do not always make them in the way the probability calculus would suggest.

For an example, take the well-known paper "Choices, Values, and Frames" by Kahneman and Tversky. Here are some of the results that they report. 132 undergraduates answered the following two questions, which were separated by a short "filler" problem. (The order of the questions was reversed half of the time.)
  •  A. Would you accept a gamble that offers a 10% chance to win $95 and a 90% chance to lose $5?
  •  B. Would you pay $5 to participate in a lottery that offers a 10% chance to win $100 and a 90% chance to win nothing? 
It is easy to see that the probabilities and final outcomes in A and in B are completely equivalent (the short calculation below spells this out). Nevertheless, 42 of the students (almost a third) declined the gamble in A but were willing to pay for the lottery in B. This is an example of what the authors call "framing" and "loss aversion": somehow, thinking of the $5 as a payment to take part in a risky venture is more acceptable than thinking of it as an incurred loss.
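Here is the quick check, using only the numbers stated in the two questions. In A, the final outcomes are +$95 with probability 0.10 and −$5 with probability 0.90. In B, after paying the $5 fee, the net outcomes are −$5 + $100 = +$95 with probability 0.10 and −$5 + $0 = −$5 with probability 0.90: the same distribution, with the same expected value,
\[
E = (0.10)(+\$95) + (0.90)(-\$5) = \$9.50 - \$4.50 = +\$5.
\]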

Here's another example with a similar point. The experimenters presented subjects with a story about a potential epidemic in two versions, and asked them to choose a response. Half of the subjects were told this version: "Imagine that the U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. The exact scientific estimates of the consequences of the programs are as follows:
  • If Program A is adopted, 200 people will be saved. 
  • If Program B is adopted, there is a one-third probability that all 600 people will be saved and a two-thirds probability that no people will be saved.
Which of the two programs would you favor?" A large majority chose A: save 200 people for sure rather than accepting the risk that all 600 will be lost.

The other half of the subjects were told a similar story, but with the outcomes framed differently:
  • If Program C is adopted, 400 people will die. 
  • If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.
Now, a large majority chose D: the one-third chance of keeping everyone alive is strongly preferred to the cold-blooded acceptance of 400 deaths. But, once again, it is easy to verify that C and D describe exactly the same outcomes as A and B respectively (see the check below).
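The check is the same kind of arithmetic as before. "400 of the 600 people will die" is the same event as "200 people will be saved," so C is exactly A; and a one-third chance that nobody dies is a one-third chance that all 600 are saved, so D is exactly B. In every case the expected number of lives saved is
\[
E = \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200;
\]
the only real difference between the pairs is whether that expectation comes with certainty (A, C) or with risk (B, D).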

To quote the paper: 'The failure of invariance is both pervasive and robust. It is as common among sophisticated respondents as among naive ones, and it is not eliminated even when the same respondents answer both questions within a few minutes. Respondents confronted with their conflicting answers are typically puzzled. Even after rereading the problems, they still wish to be risk averse in the "lives saved" version; they wish to be risk seeking in the "lives lost" version; and they also wish to obey invariance and give consistent answers in the two versions. In their stubborn appeal, framing effects resemble perceptual illusions more than computational errors.'

It is tempting for technical people, in response to this, to complain about "irrational behavior": to invest the calculus of expected value with something like moral force. In a general education mathematics course, this would mean presenting the basics of probability and expectation and then claiming, explicitly or implicitly, that this is how decisions under uncertainty "ought" to be taken. I don't want to do this. I want to get across two ideas that pull in somewhat different directions: first, that numbers cannot do our judgment (especially our ethical judgment) for us; and second, that they can nevertheless greatly facilitate our judgment by helping us see the moral field clearly. I wonder how far this will prove possible.

Of course, in the background to this discussion is the question of how numbers are used in developing an ethical response to climate change and similar long-term sustainability questions. Now there are issues of discounting over time to be added to the mix (and these in turn are informed by assumptions about economic growth; cf. the Stern report). But leaving those aside for the moment (the note below records the basic discounting arithmetic), it is an interesting question how to frame the costs of mitigation efforts. Are these flat-out losses from the status quo, whatever their future benefits? Or are they investments which may yield a huge payoff? This is not so far from the first example I quoted from the paper. And anyway, how do we judge what the "status quo" is when we are talking about the long-term future trajectory of society?
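(For the record, here is the standard discounting arithmetic I am setting aside. Under a constant discount rate \(r\), a benefit \(B\) arriving \(t\) years from now is assigned the present value
\[
PV = \frac{B}{(1+r)^{t}}.
\]
The choice of \(r\) matters enormously over sustainability time scales: at \(r = 1.4\%\), roughly the effective rate used in the Stern report, a benefit 100 years out retains about a quarter of its value, while at \(r = 5\%\) it retains less than one percent.)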

I've noticed a number of writers using the metaphor of "buying insurance" for investing in mitigation. I wonder whether they have been studying Kahneman's paper (or his recent book, Thinking, Fast and Slow)?

Photo by Flickr user Jared Zimmerman, licensed under Creative Commons
