
Why does losing $10 feel worse than winning $10 feels good?

Losses loom larger than gains.

This useful mnemonic describes an odd experimental finding: if you have people rate on a scale of 1 to 10 how unhappy they would be to lose $100, that rating will be higher than if you ask them how happy they would be to win $100. Similarly, people tend to be reluctant to gamble when the odds are even (50% chance of winning $100, 50% chance of losing $100). Generally, if odds are even, people aren't likely to bet unless the potential prize is greater than the potential loss.

This is a well-known phenomenon in psychology and economics. It is particularly surprising, because simple statistical analysis would suggest that losses and gains should be treated equally. That is, if you have a 50% chance of winning $100 and a 50% chance of losing $100, on average you will break even. So why not gamble?
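Just to spell out that break-even claim as an expected value (nothing here beyond the even-odds bet described above):

$$\mathbb{E}[\text{payoff}] = 0.5 \times (+\$100) + 0.5 \times (-\$100) = \$0$$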

(Yes, it is true that people play slot machines or buy lottery tickets, in which, on average, you lose money. That's a different phenomenon that I don't completely understand. When/if I do, I'll write about it.)

A question that came up recently in a conversation is: why aren't people more rational? Why don't they just go with the statistics?

I imagine there have been papers written on the subject, and I'd love to get some comments referring me to them. Unfortunately, nobody involved in this conversation knew of said papers, so I actually did some quick-and-dirty simulations to investigate this problem.

Here is how the simulation works: each "creature" in my simulation is going to play a series of games in which they have a 50% chance of winning food and a 50% chance of losing food. If they run out of food, they die. The size of the gain and the size of the loss are each chosen randomly. If the ratio of gain to loss is large enough, the creature will play.

For some of the creatures, losses loom larger than gains. That is, they won't play unless the gain is more than 1.5 times the size of the loss (50% chance of winning 15.1 units of food, 50% chance of losing 10). Some of the creatures treat gains and losses roughly equally, meaning they will play as long as the gain is at least a sliver larger than the loss (50% chance of winning 10.1 units of food, 50% chance of losing 10). And some of the creatures weigh gains more heavily than losses and will accept any gamble as long as the gain is at least half the size of the loss (50% chance of winning 5.1 units of food, 50% chance of losing 10).

(Careful observers will note that all these creatures are biased in favor of gains. That is, there is always some bet that is so bad the creature won't take it. There are never any bets so good that the creature refuses. They just differ in how biased they are.)

Each creature plays the game 1000 times, and there are 1000 creatures. They all start with 100 units of food.
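Here is a minimal sketch of that setup in Python. It is not the original code: the function name simulate, the threshold parameter, and the uniform draws for the gain and loss sizes are my assumptions about details the description above doesn't spell out.

```python
import random

def simulate(threshold, max_stake=10, n_creatures=1000, n_games=1000,
             start_food=100.0):
    """Run the gamble for one decision rule.

    threshold is the minimum gain-to-loss ratio a creature will accept:
      1.5 -> losses loom larger than gains
      1.0 -> losses and gains weighted roughly equally
      0.5 -> gains loom larger than losses
    Returns (fraction of creatures that died, average food among survivors).
    """
    deaths = 0
    survivor_food = []
    for _ in range(n_creatures):
        food = start_food
        alive = True
        for _ in range(n_games):
            # Gain and loss are each drawn at random, capped at max_stake
            # (so the average stake is max_stake / 2).
            gain = random.uniform(0, max_stake)
            loss = random.uniform(0, max_stake)
            if gain <= threshold * loss:
                continue  # bet refused; nothing happens this round
            # 50/50 gamble: win `gain` units of food or lose `loss` units.
            food += gain if random.random() < 0.5 else -loss
            if food <= 0:
                deaths += 1
                alive = False
                break  # out of food: the creature dies
        if alive:
            survivor_food.append(food)
    avg = sum(survivor_food) / len(survivor_food) if survivor_food else 0.0
    return deaths / n_creatures, avg


# First simulation: gains and losses capped at 10 units (10% of the endowment).
for label, threshold in [("losses loom larger", 1.5),
                         ("roughly equal", 1.0),
                         ("gains loom larger", 0.5)]:
    died, avg = simulate(threshold, max_stake=10)
    print(f"{label}: {died:.0%} died, average food {avg:.0f}")
```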

In the first simulation, the losses and gains were capped at 10 units of food, or 10% of the creature's starting endowment, with an average of 5 units. Here's how the creatures fared:

Losses loom larger than gains:
0% died
Average food at end of simulation: 807

Losses roughly equal to gains:
0% died
Average food at end of simulation: 926

Gains loom larger than losses:
2% died
Average food at end of simulation: 707


So this actually suggests that the best strategy in this scenario is to treat losses and gains similarly (that is, act like a statistician -- something humans don't do). However, the average loss and gain were only 5 units of food (5% of the starting endowment), and the maximum was 10 units. None of these gambles were particularly risky, and maybe that has something to do with it. So I ran a second simulation with losses and gains capped at 25 units of food, or 25% of the starting endowment.
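With the hypothetical simulate() sketch above, that second run is just a parameter change:

```python
# Second simulation: gains and losses capped at 25 units of food
# (25% of the 100-unit starting endowment, so the average stake is 12.5).
for threshold in (1.5, 1.0, 0.5):
    print(threshold, simulate(threshold, max_stake=25))
```

Here's how the creatures fared this time: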

Losses loom larger than gains:
0% died
Average food at end of simulation: 1920

Losses roughly equal to gains:
1% died
Average food at end of simulation: 2171

Gains loom larger than losses:
14% died
Average food at end of simulation: 1459


Now we see that the statistician's approach still leads to more food on average, but it also carries some chance of starving to death, which makes weighting losses more heavily than gains look like the safest option. You might not get as rich, but you won't die, either.

This is even more apparent if you up the potential losses and gains to a maximum of 50 units of food each (50% of the starting endowment), and an average of 25 units:

Losses loom larger than gains:
1% died
Average food at end of simulation: 3711

Losses roughly equal to gains:
9% died
Average food at end of simulation: 3941

Gains loom larger than losses:
35% died
Average food at end of simulation: 2205


Now, weighting losses more heavily than gains really does seem like the best strategy. Playing the statistician will net you about 6% more food on average, but it also increases your chance of dying ninefold (from 1% to 9%). (The reason the statistician ends up with more food on average is probably that the conservative losses-loom-larger-than-gains creatures don't take as many gambles and thus have fewer opportunities to win.)
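For the record, those two claims are just the arithmetic on the numbers above:

$$\frac{3941}{3711} \approx 1.06 \quad \text{(about 6\% more food)}, \qquad \frac{9\%}{1\%} = 9 \quad \text{(ninefold risk of death)}$$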

So what does this simulation suggest? It suggests that when the stakes are high, it is better to be conservative and weigh what you might win against what you might lose. When the stakes are low, this is less necessary. Given that humans tend to weight losses more heavily than gains, this suggests that we evolved mainly to think about risks with high stakes.

Of course, that's all according to what is a very, very rough simulation. I'm sure there are better ones in the literature, but it was useful to play around with the parameters myself.

2 comments:

Anonymous said...

I don't know if this is something that you'll eventually publish or continue to work on... But, one comment: I think this simulation and your write-up of it would have been a great place to use some visuals, a la Tufte. I hate reading large blocks of text, and I admit, I'm lazy and like pictures. Illustrations would help show the information better, I think.

(I recognize that this would be a lot of work... but it's just a thought.)

Anonymous said...

You're probably already aware of this, but even if dying is not an option, preferences will depend on the level of the endowment; utility is not linear, i.e., diminishing marginal returns kick in, so the 12th hot dog is less pleasurable than the first even if you weren't hungry to start with.

The literature suggests that the above reasoning, i.e., diminishing marginal utility, cannot explain the 'losses loom larger than gains' phenomenon. For an interesting discussion, see Rabin's article "Diminishing Marginal Utility of Wealth Cannot Explain Risk Aversion". Link below.

http://ideas.repec.org/p/cdl/econwp/1025.html