Wednesday, July 18, 2012

decomposing problems


(...and by that I don't mean necrotising fasciitis or "flesh eating bacteria"... that's a decomposing problem too, but not one that I want to talk about.)

Still slogging through the "Thinking, Fast and Slow" audiobook during my drive.  This morning Danny was on the topic of decomposing problems correctly.  In other words, he was talking about setting the frame.
He uses an example of an economist who asked a friend, "Would you accept a coin-flip bet?  If the coin comes up heads, you win $200.  If it comes up tails, you lose $100."

A rational actor would jump all over it, but his friend (a human and therefore not completely rational) responded, "No, because I would emotionally suffer the loss of $100 far more acutely than I would emotionally enjoy the gain of $200."  His friend then added, "However, if you're willing to make 100 such bets, I'll take you up on it."

Danny goes on to explain, drawing on prospect theory and utility theory, why this makes sense given our relative loss aversion and the personal utility of winning and losing money.  In really simple terms, when you work out the numbers:
  1. with one coin flip on this bet, you have a 50% chance of losing money
  2. with two coin flips, you have a 25% chance of losing money
  3. with three coin flips, you have a 12.5% chance of losing money
  4. and so on.
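The arithmetic behind that list is easy to check: with h heads out of n flips, you net h × $200 minus (n − h) × $100, and you can sum the binomial probabilities of every outcome that comes out negative.  A quick sketch (the function name and defaults are mine, not from the book):

```python
from math import comb

def p_lose_money(n, win=200, loss=100):
    """Probability that n independent coin-flip bets end with a net loss.

    Each flip pays +win on heads and -loss on tails, each with
    probability 1/2.  With h heads the net outcome is
    h*win - (n - h)*loss, so we sum the binomial probabilities of
    every head-count h that leaves you below zero.
    """
    return sum(comb(n, h) for h in range(n + 1)
               if h * win - (n - h) * loss < 0) / 2 ** n

for n in (1, 2, 3, 100):
    print(n, p_lose_money(n))
```

Running it reproduces the list above (0.5, 0.25, 0.125), and for the friend's portfolio of 100 bets the chance of a net loss falls below one in a thousand.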
The real point is that framing matters.  We often make the mistake of taking big, complicated analytical problems, breaking them up into SDs, and sending special teams off to work specific parts of the problem.  Each team reports back with its answer, but the questions were framed too narrowly, and the sum of the answers to the individual questions may not add up to the right answer for the big problem.

In the case of the coin flip from above, we could look at the portfolio of 100 bets, say "whoa, this is too complicated," pick 100 teams, and tell each of them to go work the numbers.  Each would come back and say "don't make the bet," and the leader, having invested a lot of his org's effort, would rationally say "do none of this stuff" and reject the whole portfolio.  That would be a mistake.

I don't have data to support it, but I think this is, in large part, why large companies often fail to take advantage of the many good, small ideas that crop up.  We have many, many of these small bets that we could be taking.  Each of them has the potential for a big win but is more likely to lose.  We feel the losses more acutely than the hypothetical wins.  For that matter, we feel the pain of the investment more than the excitement of the possible win.  One at a time, we kill these ideas, without ever looking at the whole portfolio.
