For example, take the well-known paper "Choices, Values, and Frames" by Kahneman and Tversky. Here is one of the experiments they report. A group of 132 undergraduates answered the following two questions, which were separated by a short "filler" problem. (The order of the questions was reversed half of the time.)
- A. Would you accept a gamble that offers a 10% chance to win $95 and a 90% chance to lose $5?
- B. Would you pay $5 to participate in a lottery that offers a 10% chance to win $100 and a 90% chance to win nothing?
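Though worded differently, the two questions describe exactly the same prospect: in both cases the net outcomes are a 10% chance of ending up $95 ahead and a 90% chance of ending up $5 behind. A minimal sketch of the arithmetic (the names are mine, not the paper's):

```python
# Each framing as a list of (probability, net dollar change) pairs.

# A: a straight gamble -- win $95 with probability 0.10, lose $5 with probability 0.90.
gamble_a = [(0.10, 95), (0.90, -5)]

# B: pay $5 to enter, then win $100 with probability 0.10 and nothing with probability 0.90.
# Net of the $5 fee, the outcomes are exactly those of A.
gamble_b = [(0.10, 100 - 5), (0.90, 0 - 5)]

def expected_value(outcomes):
    """Expected net dollar change for a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

print(expected_value(gamble_a))  # 5.0
print(expected_value(gamble_b))  # 5.0 -- the same prospect in a different frame
```

The failure of invariance quoted further below is that respondents often do not answer the two versions consistently, even though any consistent answer must be the same for both.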
Here's another example with a similar point. The experimenters told subjects a story about a potential epidemic and asked them to choose a response; the story came in two versions. Half of the subjects were given this version: "Imagine that the U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. The exact scientific estimates of the consequences of the programs are as follows:
- If Program A is adopted, 200 people will be saved.
- If Program B is adopted, there is a one-third probability that all 600 people will be saved and a two-thirds probability that no people will be saved."
The other half of the subjects were told a similar story, but with the outcomes framed differently:
- "If Program C is adopted, 400 people will die.
- If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die."
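Stripped of the framing, these are the same two options described twice: Programs A and C both fix the outcome at 200 saved (400 dead), while Programs B and D are the same gamble over the 600 lives. A minimal sketch of the arithmetic, again with my own naming:

```python
TOTAL = 600  # people expected to die if nothing is done

# Each program as a list of (probability, number of people saved) pairs.
program_a = [(1.0, 200)]                            # "200 people will be saved"
program_b = [(1/3, 600), (2/3, 0)]                  # all saved, or none saved
program_c = [(1.0, TOTAL - 400)]                    # "400 people will die" = 200 saved
program_d = [(1/3, TOTAL - 0), (2/3, TOTAL - 600)]  # nobody dies, or 600 die

def expected_saved(outcomes):
    """Expected number of people saved for a list of (probability, saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

for name, program in [("A", program_a), ("B", program_b),
                      ("C", program_c), ("D", program_d)]:
    print(name, expected_saved(program))  # 200.0 for every program
```

Despite the equivalence, the pattern described in the quotation below is risk aversion in the "lives saved" framing and risk seeking in the "lives lost" framing.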
To quote the paper: 'The failure of invariance is both pervasive and robust. It is as common among sophisticated respondents as among naive ones, and it is not eliminated even when the same respondents answer both questions within a few minutes. Respondents confronted with their conflicting answers are typically puzzled. Even after rereading the problems, they still wish to be risk averse in the "lives saved" version; they wish to be risk seeking in the "lives lost" version; and they also wish to obey invariance and give consistent answers in the two versions. In their stubborn appeal, framing effects resemble perceptual illusions more than computational errors.'
It is tempting for technical people, in response to this, to complain about "irrational behavior": to invest the calculus of expected value with something like moral force. In a general education mathematics course, this would mean presenting the basics of probability and expectation and then claiming, explicitly or implicitly, that this is how decisions under uncertainty "ought" to be taken. I don't want to do this. I want to get across two ideas that pull in somewhat different directions: first, that numbers cannot do our judgment (especially our ethical judgment) for us; and second, that they can nevertheless greatly facilitate our judgment by helping us see the moral field clearly. I wonder how possible this will be.
Of course, in the background of this discussion is the question of how numbers are used in helping develop an ethical response to climate change and similar long-term sustainability questions. Now there are issues of discounting over time to be added to the mix (and these in turn are informed by assumptions about economic growth, cf. the Stern Review). But leaving those aside for the moment, it is an interesting question how to frame the costs of mitigation efforts. Are these flat-out losses from the status quo, whatever their future benefits? Or are they investments which may yield a huge payoff? This is not so far from the first example I quoted from the paper. Anyway, how do we judge what the "status quo" is when we are talking about the long-term future trajectory of society?
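Even set aside for now, discounting is worth one concrete illustration, since it shapes how the costs just mentioned appear: a benefit B received t years from now is worth B/(1+r)^t today at annual discount rate r, and over a century the choice of r dominates the calculation. The sketch below uses illustrative numbers of my own, not figures from the Stern Review.

```python
def present_value(benefit, rate, years):
    """Value today of a benefit received `years` from now, discounted at annual `rate`."""
    return benefit / (1 + rate) ** years

# Illustrative numbers only: avoided damages of $1 trillion a century from now,
# under a low discount rate (similar to rates often attributed to the Stern Review)
# and a higher, market-style rate.
future_benefit = 1e12
for rate in (0.014, 0.05):
    print(f"rate {rate:.1%}: present value ${present_value(future_benefit, rate, 100):,.0f}")
```

Whether a mitigation cost today reads as a flat-out loss or as a worthwhile investment can turn almost entirely on that one parameter.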
I've noticed a number of writers using the metaphor of "buying insurance" for investing in mitigation. I wonder whether they have been studying Kahneman's paper (or his recent book).