Mar 29, 2026 · 7 min read

The Conjunction Fallacy: When More Detail Feels More Likely

In 1983, Amos Tversky and Daniel Kahneman published one of the most replicated findings in cognitive psychology. They described a woman named Linda: 31 years old, single, outspoken, very bright, a philosophy major, deeply concerned with discrimination and social justice, and a participant in anti-nuclear demonstrations as a student. Participants were then asked which of two options was more probable: (A) Linda is a bank teller, or (B) Linda is a bank teller and active in the feminist movement. Roughly 85% of participants chose option B. This is impossible. A conjunction — two conditions both being true — can never be more probable than either condition alone. The participants were not bad at arithmetic. They were doing something far more interesting, and far more dangerous: they were judging by story quality rather than by probability.

The Mathematics of the Fallacy

The conjunction rule is one of the foundational axioms of probability theory: for any two events A and B, the probability of both A and B occurring together cannot exceed the probability of either occurring alone. Written formally: P(A ∩ B) ≤ P(A) and P(A ∩ B) ≤ P(B). This holds regardless of how related the events are, regardless of how the events were described, and regardless of how strongly one event seems to imply the other.
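The inequality can be verified empirically for any pair of events, however strongly correlated. A minimal Monte Carlo sketch (the 0.3 and 0.9/0.1 rates are arbitrary illustration, not data from the paper):

```python
import random

random.seed(0)
n = 100_000
count_a = count_ab = 0
for _ in range(n):
    # Two strongly correlated binary events: B is very likely when A occurs.
    a = random.random() < 0.3
    b = random.random() < (0.9 if a else 0.1)
    count_a += a
    count_ab += a and b

# However tightly A and B are linked, the joint frequency can never
# exceed the frequency of A alone: P(A and B) <= P(A).
print(count_ab / n, "<=", count_a / n)
assert count_ab <= count_a
```

The assertion can never fail: every case counted toward the conjunction is, by definition, also a case of A alone.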

To make this concrete: suppose 10% of women Linda's age are bank tellers. Even if 99% of women bank tellers with Linda's background are feminist activists, the probability of being both a bank teller and a feminist activist is still 9.9% — less than the 10% probability of simply being a bank teller. Adding more conditions to a description can only reduce or maintain its probability; it can never increase it. This is arithmetic, not opinion.
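The arithmetic above can be checked directly, using the hypothetical 10% and 99% figures from the example:

```python
# Hypothetical rates from the worked example (not empirical data).
p_teller = 0.10                  # P(bank teller)
p_feminist_given_teller = 0.99   # P(feminist activist | bank teller)

# Conjunction: P(teller AND feminist) = P(teller) * P(feminist | teller)
p_both = p_teller * p_feminist_given_teller

print(p_both)  # 0.099 -- 9.9%, strictly below the 10% base rate
assert p_both <= p_teller  # the conjunction rule: P(A and B) <= P(A)
```

Even with a conditional probability of 99%, multiplying by anything less than 1 can only shrink the result.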

Yet the vast majority of participants — including statisticians and graduate students who should know better — select the conjunction. The effect is remarkably robust and has been replicated across cultures, age groups, and levels of statistical education.

Why We Fall For It: The Representativeness Heuristic

Tversky and Kahneman's explanation centres on the representativeness heuristic: we judge the probability of an event by how well it matches our mental prototype or narrative expectation. Linda's description is carefully constructed to match the stereotype of a feminist activist. When we imagine "Linda the bank teller AND feminist activist," the image coheres — it fits. When we imagine "Linda the bank teller," the image jars — it doesn't match the description. The conjunction feels more likely because it is a better story.

This is the fundamental confusion: how representative a description is has nothing logically to do with how probable it is. A description can be highly representative (it fits a stereotype perfectly) and extremely improbable (the stereotype is rare). But our intuitive system conflates the two. When something fits a narrative, we treat narrative fit as evidence of probability. It is a category error with pervasive consequences.

The "Linda Effect" in Different Domains

Medical Diagnosis

In clinical settings, the conjunction fallacy appears when a "complete" diagnosis — one that accounts for all the patient's symptoms — is judged more likely than a simpler one. A physician presented with a patient who has chest pain, fatigue, and mild shortness of breath may rate "heart failure with secondary anaemia" as more probable than "heart failure alone," even if heart failure alone accounts for all three symptoms and adding anaemia only reduces the probability. The richer diagnosis feels more accurate because it seems to explain more. It may well be less probable.

This connects to the pattern-matching tendency in medicine that base rate neglect also exploits: vivid, specific, symptom-matching diagnoses feel more compelling than the epidemiologically probable ones.

Legal Reasoning

In courtrooms, prosecutors and defence attorneys both exploit the representativeness heuristic. A detailed narrative of events — one that connects motive, opportunity, and method into a coherent story — feels more persuasive than a bare assertion of guilt or innocence. But the more specific the narrative, the less probable any single version of it is. Each additional detail that is added to a story reduces the probability that this exact version of events occurred, even as it increases the story's plausibility and memorability. Jurors may be judging narrative fit when they should be estimating probability.

Forecasting and Political Prediction

Psychologist Philip Tetlock documented a related phenomenon in his work on political forecasting: expert analysts tend to give higher confidence ratings to detailed, causally connected scenarios than to simple, general ones — even when the detailed scenario is logically contained within the general one. A prediction that "there will be a major oil price shock, caused by political instability in the Gulf, triggered by domestic elections" is assigned higher subjective probability than "there will be a major oil price shock," despite being a strict subset. More detail makes the scenario feel grounded, plausible, inevitable — but it can only be less probable.

Variants and Extensions

Tversky and Kahneman distinguished between two versions of the Linda problem. The "transparent" version presents the two options in a way that makes the conjunction relationship explicit — most participants immediately recognise that B must be less probable. The "opaque" version, which is the standard version, lists multiple options at varying levels of specificity without making the logical relationships obvious. The fallacy occurs primarily in the opaque version, which suggests that it is not a failure of reasoning per se but a failure of representation: when the logical relationship is visible, people reason correctly; when it is hidden in a list of apparently parallel options, they default to representativeness.

The conjunction fallacy also appears in the valuation of scenarios: in one well-known study, people were willing to pay more for flight insurance covering "death from terrorism" than for insurance covering "death from any cause" — because the first story was more vivid and representatively frightening, even though the second policy strictly dominated the first in coverage.

Debate and Refinements

The conjunction fallacy has generated substantial academic debate. Some researchers, notably Gerd Gigerenzer and colleagues, argue that the effect partly reflects ambiguity about what "probability" means in natural language — that when people say Linda is "more likely" to be a feminist bank teller, they may be expressing typicality or fit rather than strict probability in the frequentist sense. When the Linda problem is reformulated in terms of frequencies ("out of 100 women matching Linda's description, how many are bank tellers, and how many are feminist bank tellers?"), the fallacy rate drops substantially.

This is a genuine and important qualification. But it does not dissolve the core problem: in real-world judgments under uncertainty — financial forecasting, medical diagnosis, legal reasoning, risk assessment — people routinely conflate narrative coherence with probability. Whether this reflects a fundamental reasoning error or a pragmatic interpretation of ambiguous language matters less than the practical consequence: that detailed, plausible-sounding scenarios are systematically overestimated relative to their logically simpler alternatives.

Defending Against It

The cognitive toolkit for resisting the conjunction fallacy is primarily structural:

  • Decompose the conjunction: Before judging whether A-and-B is probable, first judge whether A alone is probable, and whether B alone is probable. If either is rare, the conjunction must be rarer still.
  • Reframe in frequencies: Replace "how likely is it that…" with "out of 1,000 cases like this, how many would have feature X, and how many of those would also have feature Y?" Frequency framing reduces the fallacy by making the set-subset relationship concrete.
  • Watch for narrative seduction: When a description is vivid and specific and the story "hangs together" beautifully, that is precisely the moment to be most sceptical about probability judgments. Detailed, coherent stories are often less probable than their sparser alternatives.
  • Compare to the less-detailed alternative: Always ask whether the simpler hypothesis — the one that doesn't require all the specific conjuncts — has been given its due. Availability and representativeness both push us toward the vivid; base rates push back.
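The first two defences above — decomposing the conjunction and reframing in frequencies — can be sketched together in a few lines. The counts are invented purely for illustration:

```python
# Made-up frequencies for illustration: out of 1,000 women matching
# Linda's description, suppose 50 are bank tellers, and of those,
# 40 are also active in the feminist movement.
total = 1000
tellers = 50           # women who are bank tellers
feminist_tellers = 40  # subset of tellers who are also feminist activists

# The subset can never outnumber the set that contains it, so the
# conjunction can never be the more probable option.
assert feminist_tellers <= tellers

p_teller = tellers / total                # 0.05
p_conjunction = feminist_tellers / total  # 0.04
print(p_teller, ">=", p_conjunction)
```

Framed this way, the set–subset relationship is visible at a glance, which is exactly why frequency formats reduce the fallacy rate.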

The Linda problem is more than forty years old and remains one of the most effective demonstrations in the psychology of reasoning. It shows not that people are irrational, but that human rationality runs on software optimised for narrative — for finding coherent stories, recognising patterns, building models of people and situations that hang together. That software produces the conjunction fallacy as a side effect. Understanding the bug does not uninstall it, but it does allow us to patch our judgments when it matters most.

Sources & Further Reading

  • Tversky, A., & Kahneman, D. "Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment." Psychological Review 90, no. 4 (1983): 293–315. (The original paper.)
  • Kahneman, D. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011. Chs. 14–15.
  • Gigerenzer, G. "How to Make Cognitive Illusions Disappear: Beyond 'Heuristics and Biases.'" European Review of Social Psychology 2, no. 1 (1991): 83–115.
  • Tetlock, P. E. Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press, 2005.
  • Tetlock, P. E., & Gardner, D. Superforecasting: The Art and Science of Prediction. Crown, 2015.
  • Wikipedia: Conjunction fallacy
