

Mar 29, 2026 · 8 min read

The Subadditivity Effect: Why Parts Seem Larger Than the Whole

Ask someone to estimate the probability of dying from heart disease. Then from cancer. From stroke. From accidents. From respiratory disease. From all other causes combined. Add up their answers. The total will almost certainly exceed 100% — sometimes by a substantial margin. Now ask a different person to estimate the probability of dying from any cause whatsoever. They will give you a number far below the sum of those specific causes — even though "dying from any cause" is logically guaranteed at 100% and must encompass everything the other person estimated. This is the subadditivity effect: the whole is judged as smaller than the sum of its parts. It is one of the most consistent and consequential anomalies in human probability judgment.
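The arithmetic behind this opening example can be made concrete. The numbers below are purely illustrative (not from any study) but show the characteristic pattern: specific-cause estimates that individually seem reasonable, yet sum past 100%, while a holistic estimate lands well below that sum.

```python
# Hypothetical probability estimates for specific causes of death,
# as one respondent might give them (illustrative numbers only).
specific_estimates = {
    "heart disease": 0.30,
    "cancer": 0.25,
    "stroke": 0.15,
    "accidents": 0.15,
    "respiratory disease": 0.10,
    "all other causes": 0.20,
}

total = sum(specific_estimates.values())
print(f"Sum of specific estimates: {total:.2f}")  # 1.15 -- exceeds 1.0

# A different respondent's holistic estimate for "dying from any cause"
# (illustrative; the logically correct answer is exactly 1.0).
general_estimate = 0.85
print(f"Whole judged smaller than sum of parts: {general_estimate} < {total:.2f}")
```

Nothing about either respondent is irrational in isolation; the incoherence only appears when the two judgments are placed side by side, which everyday reasoning almost never does.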

Support Theory and Its Origins

The subadditivity effect was formalised by Amos Tversky and Derek Koehler in their 1994 paper introducing support theory. The core claim of support theory is that probability judgment is not attached to events (in the logical sense) but to descriptions of events. When you unpack a general event into its specific components, each component carries its own psychological "support" — evidence, associations, mental imagery. The sum of support for specific components exceeds the support for the general event described as a whole.

Consider the event "death by natural causes." As a whole, this category is abstract and doesn't evoke much specific imagery. But unpack it into: death by heart attack, death by stroke, death by cancer, death by pneumonia — and each component brings its own associations, personal experiences, memories of news stories, faces of affected relatives. Each specific cause feels substantial. Their sum feels overwhelming. The general category, by contrast, feels almost pale.

Tversky and Koehler showed this across numerous domains. In one study, participants estimated the probability that a randomly selected individual would die from each of seven specific causes, and a separate group estimated the probability of dying from "any natural cause." The sum of specific-cause estimates was significantly higher than the general estimate — despite the general category logically including all specific causes and more. The more detailed the unpacking, the larger the subadditivity effect.

Why Parts Feel Larger Than the Whole

The Role of Imagination and Availability

When you consider a specific cause of death — a car accident, say — you can imagine it. You may recall a specific incident. You can picture the scenario. Each specific description activates the availability heuristic: you judge how likely something is partly by how easily you can retrieve or construct an example. A specific scenario is imaginable; a general category is abstract.

This is why the subadditivity effect grows with unpacking: the more finely you divide a category into specific scenarios, the more imaginative fodder you provide. Each scenario, once imagined, carries real psychological weight. The sum of all that weight exceeds what the abstract general category evokes.

Implicit Assumptions in Category Estimates

When someone estimates "probability of dying from any cause," they may implicitly assume some average or typical person in a healthy state, and anchor on a sense of how likely this person is to die prematurely. When asked about specific causes, each cause invites consideration of conditions under which it would occur — conditions that may not all apply to the same person simultaneously, but each of which adds probability weight to the estimate.

There is also a cognitive completeness problem: when asked about a general category, people may not mentally enumerate all the specific sub-cases. The unpacked list does this enumeration automatically, making it harder to forget or undercount.

Failure to Normalise

Probability judgments are not naturally constrained to sum to one. People do not typically check whether their probability estimates for a set of mutually exclusive and exhaustive events add up to 100%. In everyday reasoning, we estimate each probability in isolation, without a global normalisation step. Since each specific estimate is generated by considering supporting evidence for that outcome — and not by considering its relationship to other outcomes — there is no natural mechanism preventing the sum from exceeding 100%.
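The missing normalisation step is mechanically trivial once you decide to apply it. A minimal sketch, using made-up estimates for mutually exclusive and exhaustive outcomes, rescales them proportionally so they respect the 100% constraint:

```python
def normalise(estimates):
    """Rescale estimates for mutually exclusive, exhaustive outcomes
    so they sum to 1.0 (simple proportional normalisation)."""
    total = sum(estimates.values())
    return {outcome: p / total for outcome, p in estimates.items()}

# Illustrative subadditive judgments: the raw estimates sum to 1.2.
raw = {"A": 0.5, "B": 0.4, "C": 0.3}
adjusted = normalise(raw)

print(adjusted)                # each estimate scaled down by 1/1.2
print(sum(adjusted.values()))  # 1.0, up to floating point
```

Proportional rescaling preserves the relative ordering of the judgments while discarding their inflated absolute level; whether that is the right repair depends on trusting the ratios more than the levels, which is itself an assumption.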

The Subadditivity Effect in Practice

Insurance Pricing and Risk Perception

The insurance industry exploits subadditivity, whether by deliberate design or simply because products that enumerate risks sell better. People will pay more for insurance covering specific named risks than for comprehensive insurance covering all risks — even though the comprehensive policy is guaranteed to include whatever the specific policy covers. Itemised lists of covered risks make each risk feel real and present; comprehensive coverage sounds abstract.

Travel insurance is a classic case. A policy that specifies "covers trip cancellation due to illness, injury, job loss, family emergency, natural disaster, terrorism, airline bankruptcy" produces stronger purchase intent than one that says "comprehensive travel protection." The same coverage, identically priced, sells better when the specific perils are enumerated — because enumeration activates availability, which inflates perceived risk, which justifies the purchase.

Legal Reasoning and Prosecution

In legal contexts, the subadditivity effect suggests that breaking down a general charge into specific acts or specific types of harm will make those acts seem more probable to jurors than they would seem if assessed under the general charge. A prosecution that details specific instances of negligence ("failed to maintain the safety guard; failed to conduct required inspections; failed to train operators adequately") may produce higher perceived probability of negligence than simply arguing "was negligent." The details do more than provide evidence — they expand the psychological footprint of the allegation.

Risk Assessment and Project Planning

Organisations conducting risk assessments frequently find that when they enumerate specific risk categories (technical risks, market risks, regulatory risks, personnel risks, supply chain risks), the sum of the probabilities assessed for the individual categories exceeds their holistic estimate of the probability that the project faces significant risk at all. The detailed breakdown has made risk feel more present than the holistic view. This can lead to either excessive caution (if the specific estimates drive decisions) or a false sense of security (if the general risk is assessed abstractly, without unpacking).
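There is also a purely logical trap hiding here: summing the component probabilities is the wrong operation even without any bias, because risks can co-occur. A sketch with hypothetical category estimates (assumed independent, which is itself a simplification) shows that the probability of at least one risk materialising is always below the naive sum:

```python
from math import prod

# Illustrative per-category risk estimates, assumed independent for the sketch.
risks = {"technical": 0.20, "market": 0.15, "regulatory": 0.10,
         "personnel": 0.10, "supply chain": 0.05}

naive_sum = sum(risks.values())                       # 0.60
at_least_one = 1 - prod(1 - p for p in risks.values())

print(f"Naive sum of components:      {naive_sum:.2f}")
print(f"P(at least one), independent: {at_least_one:.3f}")
# The naive sum overstates P(at least one) whenever more than one risk
# is possible; subadditive judgment then inflates the components further.
```

So the gap organisations observe has two sources stacked on top of each other: a bias (unpacking inflates each component) and an arithmetic error (addition ignores overlap).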

The planning fallacy is a related phenomenon: when we plan projects, we focus on the specific things we expect to happen and underweight the general category of "things I haven't thought of." Subadditivity explains part of why: the unimagined risks carry no availability, and therefore no psychological weight. The things we enumerate feel concrete; the things we don't enumerate don't exist in our risk model.

Medical Prognosis

Physicians asked to estimate the probability of various specific complications following a procedure will typically generate a sum of estimates that exceeds the general estimate of "at least one complication occurring." Each complication, considered individually, is made salient by clinical knowledge and memory of past cases. The overall complication rate, assessed directly, is often lower — both because it is anchored on general clinical experience and because it doesn't undergo the availability inflation that comes from specific case-by-case consideration.

This has implications for informed consent: detailed enumeration of risks may inflate perceived risk beyond the true overall probability, potentially leading to greater anxiety or avoidance of beneficial procedures. But failure to enumerate may suppress it below the true probability, inadequately preparing patients. The subadditivity effect sits at the centre of a genuine ethical tension in clinical communication.

Superadditivity: The Opposite Problem

Tversky and Koehler also identified superadditivity in some contexts: the sum of estimates for specific hypotheses can fall below the estimate for the general category. This typically occurs when the specific hypotheses are seen as the most likely or most available instances of a broader class, and the general category estimate implicitly includes less-available instances that don't make the specific list. But subadditivity is the dominant effect when people are explicitly presented with unpacked components — the case in most practical decision contexts.

Relationship to Other Biases

The subadditivity effect is not an isolated anomaly. It connects to several other well-documented biases:

  • Availability heuristic: Specific scenarios are more imaginable and therefore feel more probable. Unpacking increases availability and therefore increases subjective probability.
  • Anchoring bias: When assessing specific components, each is anchored on its own supporting evidence rather than on its proportional share of the whole. There is no natural anchoring on the constraint that the total must be 100%.
  • Base rate neglect: Specific scenarios crowd out the base rate of the general category. The statistical frequency of the whole is underweighted relative to the vivid specifics of the parts.

Correcting for Subadditivity

Awareness of the effect is a starting point, but not sufficient. Practical corrections include:

  • Normalise probability estimates. After generating estimates for specific components, check whether they sum to approximately the right total for the general category. If they sum to more than 100%, explicitly revise downward.
  • Use outside-view estimates. Instead of building up from specific components, look up the base rate for the general category first and use it as a constraint on the specific estimates. How often does this type of project fail overall? Use that as a ceiling for the sum of specific risk estimates.
  • Compare packed and unpacked versions. In risk assessment, generate both a holistic estimate and a component-by-component estimate, and take seriously the discrepancy between them.
  • Beware enumeration effects in marketing and legal framing. When someone presents you with a detailed list of specific risks, benefits, or charges, ask what the general assessment would be — and whether the specific enumeration is doing more psychological work than logical work.
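The first two corrections above can be combined into one mechanical step: treat an outside-view base rate as a ceiling, and rescale the component estimates whenever they sum past it. A sketch with hypothetical numbers (the base rate and estimates below are invented for illustration):

```python
def cap_to_base_rate(components, base_rate):
    """If component estimates sum past an outside-view base rate,
    rescale them proportionally so their total matches the base rate."""
    total = sum(components.values())
    if total <= base_rate:
        return dict(components)  # already consistent; leave unchanged
    scale = base_rate / total
    return {name: p * scale for name, p in components.items()}

# Hypothetical project-risk estimates summing to 0.9 ...
components = {"technical": 0.35, "market": 0.30, "regulatory": 0.25}
# ... capped by an assumed outside-view failure rate of 0.6 for similar projects.
adjusted = cap_to_base_rate(components, base_rate=0.60)

print(adjusted)               # each estimate scaled by 0.6/0.9
print(sum(adjusted.values())) # 0.6, up to floating point
```

The discrepancy the function erases — 0.9 built up from parts versus 0.6 seen from outside — is itself informative, and is worth recording before rescaling rather than silently normalising away.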

The subadditivity effect is a window into the fundamental architecture of human probability judgment: we do not assess probabilities by logical calculation; we construct them from available evidence, imagination, and narrative. The whole suffers from the poverty of abstraction. The parts benefit from the richness of detail. In a world that constantly demands both holistic and detailed risk assessment, knowing which direction the bias pushes is the first step toward correcting it.

Sources & Further Reading

  • Tversky, A., & Koehler, D. J. "Support Theory: A Nonextensional Representation of Subjective Probability." Psychological Review 101, no. 4 (1994): 547–567.
  • Rottenstreich, Y., & Tversky, A. "Unpacking, Repacking, and Anchoring: Advances in Support Theory." Psychological Review 104, no. 2 (1997): 406–415.
  • Fox, C. R., & Tversky, A. "A Belief-Based Account of Decision Under Uncertainty." Management Science 44, no. 7 (1998): 879–895.
  • Kahneman, D. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.
  • Wikipedia: Subadditivity effect
