The Inside-Outside View to Better Decision-making

Decision-making isn’t always the easiest thing in the world. While many errors may seem obvious in hindsight, they’re rarely as crystal clear during the decision-making process. Even worse, we have a tough time imagining the opposing view. As Michael Mauboussin states in his book Think Twice, we have “a tendency to favor the inside view over the outside view.” He goes on to explain,

An inside view considers a problem by focusing on the specific task and by using information that is close at hand, and makes predictions based on that narrow and unique set of inputs. These inputs may include anecdotal evidence and fallacious perceptions. This is the approach that most people use in building models of the future and is indeed common for all forms of planning.

Compare that with The Outside View:

The outside view asks if there are similar situations that can provide a statistical basis for making a decision. Rather than seeing a problem as unique, the outside view wants to know if others have faced comparable problems and, if so, what happened. The outside view is an unnatural way to think, precisely because it forces people to set aside all the cherished information they have gathered…. The outside view can often create a very valuable reality check for decision makers.

He goes on to list three illusions that lead one to the inside view: the Illusion of Superiority (I’m better than them), the Illusion of Optimism (that’ll never happen to me), and the Illusion of Control (I can make this happen). Obvious question: “How can we get better at adopting The Outside View?”

Mauboussin pulls from Kahneman and Tversky, distilling their five-step process into four steps (a rough numeric sketch follows the list):

  1. Select a reference class: “Find a group of situations, or a reference class, that is broad enough to be statistically significant but narrow enough to be useful in analyzing the decision that you face. The task is generally as much art as science, and is certainly trickier for problems that few people have dealt with before. But for decisions that are common – even if they are not common for you – identifying a reference class is straightforward.”
  2. Assess the distribution of outcomes: “Once you have a reference class, take a close look at the rate of success and failure…. Two other issues worth mentioning. The statistical rate of success and failure must be reasonably stable over time for a reference class to be valid. If the properties of the system change, drawing inference from past data can be misleading…. Also keep an eye out for systems where small perturbations can lead to large-scale change. Since cause and effect are difficult to pin down in these systems, drawing on past experiences is more difficult.”
  3. Make a prediction: “With the data from your reference class in hand, including an awareness of the distribution of outcomes, you are in a position to make a forecast. The idea is to estimate your chances of success and failure…. Sometimes when you find the right reference class, you can see the success rate is not very high. So to improve your chance of success, you have to do something different than everyone else.”
  4. Assess the reliability of your prediction and fine-tune: “How good we are at making decisions depends a great deal on what we are trying to predict. Weather forecasters, for instance, do a pretty good job of predicting what the temperature will be tomorrow. Book publishers, on the other hand, are poor at picking winners, with the exception of those books from a handful of best-selling authors. The worse the record of successful prediction is, the more you should adjust your prediction toward the mean (or other relevant statistical measure). When cause and effect is clear, you can have more confidence in your forecast.”
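
To make the mechanics concrete, here is a minimal Python sketch of steps 1 through 4. The reference class, the break-even threshold, and the reliability weight are all invented for illustration; nothing here comes from Mauboussin beyond the shape of the procedure.

```python
# A minimal sketch of the four steps, using a hypothetical reference
# class of past projects. All numbers are illustrative assumptions.
from statistics import mean

# Step 1: select a reference class, e.g. revenue multiples achieved
# by comparable past projects (made-up data).
reference_class = [0.4, 0.7, 0.8, 1.0, 1.1, 1.3, 2.5, 0.2, 0.9, 1.6]

# Step 2: assess the distribution of outcomes.
base_rate_success = mean(x >= 1.0 for x in reference_class)  # fraction that broke even
class_mean = mean(reference_class)
print(f"Historical success rate: {base_rate_success:.0%}, mean outcome: {class_mean:.2f}x")

# Step 3: your inside-view forecast for this specific project.
inside_view = 2.0  # optimistic, as inside views tend to be

# Step 4: assess reliability and fine-tune. The worse the record of
# prediction in this domain, the more weight the class mean gets.
reliability = 0.3  # 0 = forecasts here are noise, 1 = highly reliable
forecast = reliability * inside_view + (1 - reliability) * class_mean
print(f"Adjusted forecast: {forecast:.2f}x")
```

The key move is in the last step: the weaker your domain’s track record of prediction, the smaller the weight on your inside view, and the closer the forecast sits to the reference-class mean.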

The more probabilistic the context, the better these steps will work. Now you know how to take The Outside View to increase the odds of a better decision.

The main lesson from the inside-outside view is that while decision makers tend to dwell on uniqueness, the best decisions often derive from sameness. – Mauboussin

— @Cinema_Air

How to Survive the “Experts”

We are bombarded every day with various “experts” on television, radio, podcasts, at work, and even on the street. How can we filter their thoughts and opinions while remaining grounded? Garrett Hardin developed three filters – Literate, Numerate, and Ecolate – for just this situation in his book, “Filters Against Folly”. The following excerpts are from the book.

No one filter by itself is adequate for understanding the world and predicting the consequences of our actions. We must learn to use all three.

Literate Filter

Consciously or not, literate analysis begins with this question: “What are the words?” i.e., what are the most appropriate words?

To understand what is meant, one often has to be able to hear two languages: language in the ordinary sense, and the unspoken language that tells you how to “hear” the spoken.

Beyond communication, language has two functions: to promote thought, and to prevent it.

Numerate Filter

The implicit question of the numerate person is this: “What are the numbers?” i.e., what are the exact quantities, the proportions, and the rates?

Quantities matter. Numbers matter. Duration of time matters.

Ecolate Filter

Time and its consequences are essential concerns of the ecolate filter. The key question of ecolate analysis is this: “And then what?” That is, what further changes occur after the treatment or experience is repeated time after time?

Every measured thing is part of a web of variables more richly interconnected than we know. We use the ecolate filter to ferret out at least the major interconnections. Every proposal of plausible policy must be followed by the question “And then what?”

I’ll take the liberty of adding two more filters to Hardin’s three. The first, Incentive Structures, I borrowed from Charlie Munger.

Incentive Structure – I strongly recommend reading more about the power of incentives in Charlie Munger’s 1995 speech at Harvard University [pdf]. The first two quotes aren’t from the speech but are very relevant; the last two are nice teasers from the speech itself.

Never, ever, think about something else when you should be thinking about the power of incentives.

You must have the confidence to override people with more credentials than you whose cognition is impaired by incentive-caused bias or some similar psychological force that is obviously present. 

Well you can say, “Everybody knows that.” Well I think I’ve been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I’ve underestimated it. And never a year passes but I get some surprise that pushes my limit a little farther.

One of my favorite cases about the power of incentives is the Federal Express case. The heart and soul of the integrity of the system is that all the packages have to be shifted rapidly in one central location each night. And the system has no integrity if the whole shift can’t be done fast. And Federal Express had one hell of a time getting the thing to work. And they tried moral suasion, they tried everything in the world, and finally somebody got the happy thought that they were paying the night shift by the hour, and that maybe if they paid them by the shift, the system would work better. And lo and behold, that solution worked.

Reflexivity – The second filter I would add is George Soros’ concept of reflexivity, which guided him “both in making money and giving it away.” The following quotes are from his 1994 speech at MIT. Reflexivity captures how a participant’s thinking acts on their environment – reinforcing a trend or breaking it – and how those changes then reflect back on both the environment and the thinking itself (a toy simulation follows the quotes).

Reflexivity is, in effect, a two-way feedback mechanism in which reality helps shape the participants’ thinking and the participants’ thinking helps shape reality in an unending process in which thinking and reality may come to approach each other but can never become identical.

Reflexivity is a two-way feedback mechanism, which is responsible for a causal indeterminacy as well as a logical one. The causal indeterminacy resembles Heisenberg’s uncertainty principle, but there is a major difference: Heisenberg’s theory deals with observations, whereas reflexivity deals with the role of thinking in generating observable phenomena.
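
To make the two-way feedback concrete, here is a toy Python simulation. It is purely illustrative and not Soros’ model: the variable names, the coupling coefficients, and the starting values are all invented.

```python
# A toy illustration of reflexivity: beliefs chase reality and reality
# chases beliefs, so the two interact without ever settling into
# identity. All coefficients here are invented for illustration.

def simulate(steps: int = 20) -> None:
    reality = 100.0   # e.g., a company's underlying fundamentals
    belief = 110.0    # participants' collective estimate of that reality

    for t in range(steps):
        # Thinking shapes reality: optimism attracts capital and talent,
        # nudging fundamentals toward the belief.
        reality += 0.10 * (belief - reality)
        # Reality shapes thinking: beliefs adjust toward the observed
        # fundamentals, with a lag.
        belief += 0.30 * (reality - belief)
        print(f"t={t:2d}  reality={reality:7.2f}  belief={belief:7.2f}")

simulate()
```

Run it and the two series chase each other: each step, fundamentals drift toward the prevailing belief while the belief adjusts toward the observed fundamentals, so they approach but never coincide – a crude echo of the quote above.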

As the world becomes ever more complex, the value of simplicity beckons. Complexity can often overwhelm us. That’s the perfect time to exercise this caveat: if you can’t make it through the first three filters, you might need the advice of an expert or two.

But there are also cases where you have to recognize that you have no wisdom to add – and that your best course is to trust some expert. – Charlie Munger

Armed with these filters, you can separate the worthwhile from the distractions, better understand the “experts” and their expertise, and apply yourself to what you really want to do with your time.

I’d also like to hear from you. What filters do you use to simplify the world around you and your decision-making process?

I am @Cinema_Air

The Baloney Detection Kit – Part 2

In Part 1 we highlighted some excerpts from Carl Sagan’s The Demon-Haunted World on what to do when evaluating a claim. In this post we look at what NOT to do and which fallacies to avoid. My favorite excerpts:

Ad Hominem: Attacking the arguer, not the argument

Argument from Authority

Appeal to Ignorance: the claim that whatever has not been proved false must be true, and vice versa. Absence of Evidence is Not Evidence of Absence.

Observational Selection: as Francis Bacon described it, “counting the hits and forgetting the misses.”

Non Sequitur: Often those falling into this fallacy have simply failed to recognize alternative possibilities.

False Dichotomy: considering only the two extremes in a continuum of intermediate possibilities. Also known as “excluding the middle.”

Confusion of Correlation and Causation [Cinema says: Allan Besselink’s article on Pathoanatomy provides a perfect and current example of this in the medical world]
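
Since this fallacy is the most quantitative of the bunch, here is a small self-contained Python illustration with made-up numbers: a hidden confounder (temperature) drives both ice-cream sales and drowning counts, producing a strong correlation between two variables that have no causal link to each other.

```python
# Illustration of correlation without causation, with invented numbers:
# temperature is a hidden confounder that drives both outcome variables.
import random

random.seed(1)
temperature = [random.uniform(10, 35) for _ in range(500)]
ice_cream   = [2.0 * t + random.gauss(0, 4) for t in temperature]
drownings   = [0.3 * t + random.gauss(0, 1) for t in temperature]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx  = sum((x - mx) ** 2 for x in xs)
    vy  = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# High correlation, yet neither variable causes the other.
print(f"corr(ice cream, drownings) = {corr(ice_cream, drownings):.2f}")
```

The printed correlation is high, yet it says nothing about causation; controlling for temperature would make the association between the two outcome variables largely disappear.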

Carl Sagan’s Baloney Detection Kit is a fine tool for evaluating any claim. With it, you are now better equipped to filter out ambiguous and vacuous statements or arguments and to focus on what matters.

Refresh yourself on Part 1 of The Baloney Detection Kit here.

Sources: Carl Sagan’s The Demon-Haunted World; Allan Besselink’s blog post “In Pathoanatomy We Trust – But Should We”

Follow me on Twitter @Cinema_Air

The Baloney Detection Kit – Part 1

Carl Sagan provided us with “Tools for skeptical thinking” through his “Baloney Detection Kit.” Here are some of my favorite excerpts on what to do when evaluating a claim:

Arguments of authority carry little weight – “authorities” have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.

Spin more than one hypothesis. If there’s something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives.

Try not to get overly attached to a hypothesis just because it’s yours. It’s only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don’t, others will.

Quantify

Occam’s Razor. This convenient rule-of-thumb urges us when faced with two hypotheses that explain the data equally well to choose the simpler.

Always ask whether the hypothesis can be, at least in principle, falsified.

Control experiments are essential.

In Part 2: what the Baloney Detection Kit teaches us NOT to do – the common fallacies to avoid.

Source: Carl Sagan’s The Demon-Haunted World

Follow me on Twitter @Cinema_Air