Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124–1131. doi:10.1126/science.185.4157.1124

Summary

In this paper, Tversky & Kahneman give an overview of three commonly used heuristics that lead to systematic biases: representativeness, availability, and anchoring.

Representativeness

The representativeness heuristic was discussed in Kahneman & Tversky, so I won’t go into too much detail about it here; the paper catalogs the biases it produces:

  • Insensitivity to prior probabilities
  • Insensitivity to sample size
  • Misconceptions of chance — e.g. HTHTTH looks more representative of a fair coin than HHHTTT, so it is judged more likely, even though the two sequences are equally probable
  • Insensitivity to predictability
  • The illusion of validity — confidence depends more on the degree of representativeness than on actual predictive accuracy
  • Misconceptions of regression

Availability

The availability heuristic states that people estimate frequencies and probabilities by how easily instances come to mind. Things that come to mind more easily are therefore judged more probable, even when they aren’t.

  • Biases due to the retrievability of instances — e.g. fame or salience makes instances easier to retrieve, so the classes they belong to are judged more numerous
  • Biases due to the effectiveness of a search set — e.g. when judging whether more words start with ‘r’ or have ‘r’ as the third letter, it is much easier to search for words by their first letter, so words starting with ‘r’ are judged more common even though ‘r’ actually occurs more often in the third position
  • Biases of imaginability — e.g. intuitively estimating “10 choose $k$” for $2\leq k\leq 8$. People find it easier to imagine distinct groups of 2 than distinct groups of 8, so they judge “10 choose 2” to be larger than “10 choose 8”, even though the two are equal (see the worked identity after this list)
  • Illusory correlations — overestimating the frequency with which two events co-occur, e.g. believing that patients described as suspicious draw figures with peculiar eyes, because “suspiciousness” and “eyes” are strongly associated, even when no such relationship is present in the data
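To make the imaginability example concrete, the two judgments conflict because of the symmetry of the binomial coefficient (the identity below is just standard arithmetic, not a calculation reported in the paper):

$$
\binom{10}{2} = \binom{10}{8} = \frac{10!}{2!\,8!} = 45
$$

Every group of 2 chosen from 10 people corresponds exactly to the group of 8 left behind, yet small groups are far easier to enumerate mentally, so they feel more numerous.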

Anchoring and Adjustment

The anchoring heuristic states that people estimate values by starting at one value (the anchor) and adjusting either up or down. Usually, people fail to fully adjust, and thus their estimates are biased towards the anchor.

  • Insufficient adjustments — e.g. when asked to quickly estimate $8\times 7\times 6\times 5\times 4\times 3\times 2\times 1$ vs. $1\times 2\times 3\times 4\times 5\times 6\times 7\times 8$, people give higher estimates for the descending product than for the ascending one, and both estimates fall far short of the true value (see the worked numbers after this list)
  • Biases in the evaluation of conjunctive and disjunctive events — people overestimate the probability of conjunctive events (all of several likely events occurring) and underestimate the probability of disjunctive events (at least one of several unlikely events occurring). The anchor is the probability of a single elementary event, and the adjustment away from it is insufficient in both directions (see the probabilities worked out after this list).
  • Anchoring in the assessment of subjective probability distributions — e.g. to state a 90th-percentile value, people start from their best estimate and adjust upward, but not far enough, so their stated confidence intervals end up too narrow
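Two quick calculations make the first two items above concrete (the values 0.9 and 0.1 and the choice of seven events are illustrative numbers of my own, not necessarily those used in the paper). The descending-versus-ascending product has the true value

$$
8\times 7\times 6\times 5\times 4\times 3\times 2\times 1 = 40{,}320
$$

which quick anchored estimates typically miss by a wide margin. For conjunctions and disjunctions of seven independent events,

$$
P(\text{all seven, each } p = 0.9) = 0.9^{7} \approx 0.48, \qquad P(\text{at least one of seven, each } p = 0.1) = 1 - 0.9^{7} \approx 0.52
$$

so a conjunction of likely events is less likely than any single one of them, and a disjunction of unlikely events is more likely than any single one of them; anchoring on the elementary probability obscures both.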

Takeaways

Each of these heuristics is quite telling about the way that people think. Although they have often been taken as evidence that people are simply “irrational”, I think they’re more interesting when considered in the context of how people actually process information. The case is clearest with the availability heuristic: it seems plausible that whatever algorithm is used for retrieving things from memory will bias us when we try to objectively estimate the probabilities of things. Generally, the point of memory seems to be to provide us with information that is going to be relevant, which means there will be an effect of both frequency and utility—the combination of which would manifest itself as “availability”.