The Representativeness Heuristic: Judging by Resemblance
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more probable: (A) Linda is a bank teller, or (B) Linda is a bank teller and is active in the feminist movement? In studies conducted by Amos Tversky and Daniel Kahneman, between 85% and 90% of participants chose option B. This is logically impossible. And yet it feels obviously right. That feeling is the representativeness heuristic doing its work.
What the Representativeness Heuristic Is
When we assess the probability of an event or the likelihood that something belongs to a category, we often rely on a mental shortcut: how much does this instance resemble the prototype of the category? How representative is it of our mental model of what that thing looks like?
The heuristic was first described by Tversky and Kahneman in their 1972 paper "Subjective Probability: A Judgment of Representativeness" and developed through their seminal 1974 Science article on heuristics and biases. The core claim is simple and powerful: people estimate probability by similarity. A description that closely matches our stereotype of a typical member of a category will be judged probable; one that doesn't match will be judged improbable — regardless of the actual base rates involved.
This heuristic is not irrational in the abstract. In many situations, resemblance is a reasonable guide to category membership. A creature with four legs, a wagging tail, and a wet nose is probably a dog. The problem arises when representativeness is used as a substitute for actual probability calculations — when the vivid match overrides the statistical evidence.
The Conjunction Fallacy
The Linda problem demonstrates what Tversky and Kahneman called the conjunction fallacy: the belief that the conjunction of two events (Linda is a bank teller AND a feminist) is more probable than either event alone (Linda is a bank teller). This is a direct violation of the most basic rule of probability: the probability of two events occurring together can never exceed the probability of either event occurring alone. P(A and B) ≤ P(A).
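The conjunction rule can be checked directly with a quick simulation. The probabilities below are invented purely for illustration (they are not from the Linda study); the point is only that the conjunction's frequency can never exceed either conjunct's.

```python
import random

random.seed(42)
N = 100_000

# Hypothetical, illustrative probabilities (not from the study):
# a randomly drawn person is a bank teller with p = 0.05 and,
# independently here for simplicity, a feminist with p = 0.30.
tellers = 0
feminist_tellers = 0
for _ in range(N):
    is_teller = random.random() < 0.05
    is_feminist = random.random() < 0.30
    tellers += is_teller
    feminist_tellers += is_teller and is_feminist

# P(A and B) <= P(A): the conjunction is never the more frequent event.
assert feminist_tellers <= tellers
print(f"P(teller) ~ {tellers / N:.3f}, P(teller and feminist) ~ {feminist_tellers / N:.3f}")
```

No matter how the two probabilities or their dependence are chosen, the count of feminist bank tellers is a subset of the count of bank tellers, so the inequality holds on every run.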
The reason people make this error is precisely representativeness. The description of Linda fits the stereotype of a feminist activist far better than it fits the stereotype of a bank teller. Bank tellers, in the mental prototype, are not especially political, not philosophy majors, not concerned with discrimination. Feminist bank tellers are at least feminist — a closer match to the description. The conjunction feels more probable because it feels more representative, even though adding a second condition can only make an event rarer.
Tversky and Kahneman ruled out several alternative explanations — misunderstanding of the word "probable," charitable interpretation, conversational implicature — and found the fallacy robust across a wide range of participants and phrasings. Even when explicitly told about conjunction rules, many participants maintained their choice. The pull of representativeness overrides explicit logical knowledge for a significant proportion of people.
This fallacy is not merely academic. Scenario-based planning, legal reasoning, and medical diagnosis all involve judging the probability of specific, richly described situations. A detailed, coherent story leading to a specific outcome consistently feels more probable than a vaguer story leading to a general outcome — even though the specific scenario is necessarily a subset of the general one. The more detail added to a prediction, the more probable it can seem, and the less probable it actually is. See conjunction fallacy for a deeper treatment.
Base Rate Neglect
The representativeness heuristic's most consequential failure is its systematic tendency to displace base rates. Base rates are the prior probabilities of events — how common a category is in the population. When a vivid description is available, people consistently underweight or ignore these prior probabilities in favour of how well the description matches the category.
Tversky and Kahneman demonstrated this with the "Tom W." problem. Participants read a sketch describing Tom W. as intelligent but lacking creativity and empathy, preferring orderly systems and abstract problems. They were then asked to rank the likelihood that Tom was enrolled in various graduate programmes. Most ranked computer science or engineering as highly likely and humanities or education as unlikely — an entirely understandable response to the description. But participants were also given information about the relative sizes of those programmes: humanities and social science enrol far more students than computer science. The statistically correct response was to weight the estimate towards the larger programmes even where the description didn't fit. Participants barely adjusted at all. The description dominated the base rate.
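A one-line application of Bayes' rule shows why the base rate should dominate. The numbers below are hypothetical, chosen only to echo the structure of the Tom W. problem: even if the description is four times as likely for a computer-science student, the larger field can remain the better bet.

```python
# Hypothetical priors: share of graduate students in each field.
p_cs, p_hum = 0.03, 0.20

# Hypothetical likelihoods, P(description | field): the sketch fits
# the CS stereotype four times better than the humanities one.
lik_cs, lik_hum = 0.40, 0.10

# Bayes' rule, unnormalised: P(field | description) is proportional
# to P(description | field) * P(field).
post_cs = lik_cs * p_cs     # 0.012
post_hum = lik_hum * p_hum  # 0.020

# Despite the 4:1 likelihood ratio favouring CS, the base rate wins.
assert post_hum > post_cs
```

Intuition supplies only the likelihood term (how well the description fits); the correction the problem demands is multiplying by the prior.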
This is base rate neglect in action. The match between description and stereotype is psychologically vivid; the population statistic is abstract. In the intuitive system, the vivid reliably beats the abstract.
Representativeness in Everyday Life
Stereotyping and Profiling
The mechanism underlying stereotyping is representativeness. We judge individuals by how closely they match our prototype of a group. A young man in certain clothes on a city street who matches the prototype of "criminal" is judged more likely to be a criminal than base rates would warrant. A woman in a tech company who doesn't match the prototype of "engineer" may be assumed to be in a support role. These are not always conscious prejudices; they are automatic outputs of a heuristic that treats resemblance as probability.
Predictive policing, insurance risk assessment, and hiring all face the same problem: algorithms and humans alike may use representativeness to make probabilistic judgments, systematically misestimating risk for individuals based on how much they match a group stereotype rather than on evidence specific to them.
Medical Diagnosis
Clinicians are taught to "think horses, not zebras" — when you hear hoofbeats, consider the common diagnosis before the exotic one. This is an explicit correction for representativeness bias. Without such training, a patient presenting with a dramatic symptom pattern that closely matches a rare disease may be diagnosed with that disease more readily than base rates warrant. The vivid pattern match overrides the prior probability of the rare condition. The same physician who orders a test for a dramatic rare disease may miss a common one whose presentation doesn't match the textbook prototype as cleanly.
Financial Prediction
Investors notoriously judge companies by narratives rather than base rates. A company with an inspiring story, a charismatic CEO, and a product that feels transformative is judged as a likely success regardless of the base rate for such companies (which is low). The "growth company" prototype is a powerful attractor. Conversely, a company in an unglamorous industry with steady but unexciting fundamentals may be undervalued because it doesn't match any exciting prototype. Representativeness drives the well-documented tendency to overpay for glamour stocks and underpay for value stocks.
Gambling Fallacies
People expect random sequences to "look random." A sequence of coin flips like HHHHHH doesn't look representative of randomness — so it feels unlikely and due for a correction. HTTHHT looks more representative of a random sequence and is judged more probable. This produces the gambler's expectation that a run of bad luck will soon reverse: the current sequence doesn't look like what random chance "should" look like. See also: gambler's fallacy.
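The misperception is easy to quantify: every specific sequence of six fair flips is exactly as likely as every other. A brute enumeration makes this concrete.

```python
from itertools import product

# Enumerate all 2**6 = 64 equally likely sequences of six fair coin flips.
sequences = ["".join(s) for s in product("HT", repeat=6)]
assert len(sequences) == 64

# Each specific sequence appears exactly once, so each has probability 1/64,
# however "representative of randomness" it looks.
assert sequences.count("HHHHHH") == sequences.count("HTTHHT") == 1
print(f"P(HHHHHH) = P(HTTHHT) = 1/64 = {1/64}")
```

What representativeness is actually tracking is that the *class* of mixed-looking sequences is much larger than the class of streaks — a fact about categories of sequences, not about any individual one.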
Representativeness and the Availability Heuristic
Representativeness is one of three major heuristics identified by Tversky and Kahneman; the others are availability and anchoring. All three produce systematic errors by substituting an easy-to-compute proxy (resemblance, ease of recall, initial value) for the genuinely relevant quantity (probability, frequency, true value). Representativeness specifically substitutes pattern-matching for probability. When the two align, it works well. When base rates, sample sizes, or conjunction rules matter, it fails.
Can We Do Better?
Training in statistical reasoning helps, but less than you would hope. Studies show that statistics education reduces (but does not eliminate) susceptibility to the conjunction fallacy. The dual-process account explains why: the representativeness judgment is fast, automatic, and emotionally compelling; the statistical correction requires slow, deliberate reasoning. Under time pressure, cognitive load, or emotional involvement, the heuristic wins.
What actually helps:
- Ask about base rates explicitly. Before relying on a description to judge probability, ask: how common is this outcome in general? The population prior is often more informative than the vivid description.
- Count the alternatives. The conjunction fallacy is partly defeated by asking: what are all the possible outcomes? When you see that "bank teller and feminist" is a subset of "bank teller," the logic becomes clearer.
- Separate the story from the probability. A compelling narrative makes a scenario feel likely. That feeling is not evidence. Stories that fit together beautifully are specific, and specific scenarios are always rarer than general ones.
- Use reference classes. For predictions and assessments, systematically look up how similar cases turned out, rather than extrapolating from the details of this particular case.
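The reference-class idea can be made mechanical: before judging a case by its details, compute the outcome frequency in a set of comparable past cases. The counts below are invented for illustration.

```python
# Invented outcome counts for 100 hypothetical comparable past cases.
reference_class = {"succeeded": 12, "failed": 88}

# The outside view: the base rate of success among similar cases,
# before any adjustment for this case's vivid specifics.
base_rate = reference_class["succeeded"] / sum(reference_class.values())
print(f"Outside-view success estimate: {base_rate:.0%}")
```

The inside view (the compelling details of this case) can then adjust the estimate up or down — but the outside view sets the anchor, which is exactly what representativeness fails to do.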
The representativeness heuristic is one of the most fundamental patterns in human probabilistic reasoning — and one of its most reliable sources of error. Linda remains a bank teller and a feminist in the minds of most people who encounter her. She always will. The question is whether, knowing this, we can catch ourselves before acting on the feeling.
Sources & Further Reading
- Tversky, A., & Kahneman, D. "Subjective Probability: A Judgment of Representativeness." Cognitive Psychology 3, no. 3 (1972): 430–454.
- Tversky, A., & Kahneman, D. "Judgment Under Uncertainty: Heuristics and Biases." Science 185, no. 4157 (1974): 1124–1131.
- Tversky, A., & Kahneman, D. "Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment." Psychological Review 90, no. 4 (1983): 293–315.
- Kahneman, D. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011. Chapters 14–15.
- Wikipedia: Representativeness heuristic