Mar 29, 2026 · 8 min read

Overconfidence Effect: Why We Consistently Think We're Better Than We Are

In 1981, Ola Svenson at the University of Stockholm asked American and Swedish drivers to rate their own driving skill and safety compared to others. In the American sample, 93% placed themselves in the top 50% for skill; 88% rated themselves in the top 50% for safety. Among Swedish drivers, the figures were lower but still substantial — around 69% and 77% respectively. This is logically impossible. Half of any group is, by definition, below the median. The finding wasn't a fluke. It has been replicated in dozens of domains, across cultures and professions, consistently producing the same result: most people believe they are better than average at most things. This is the overconfidence effect.

Three Flavours of Overconfidence

Research by Don Moore and Paul Healy (2008) usefully distinguishes three distinct types of overconfidence that are often conflated:

  • Overestimation: Believing your absolute performance is better than it actually is. You think you'll finish the project in three days; it takes two weeks.
  • Overplacement: Believing your relative performance is better than others'. This is the "above-average effect": Svenson's drivers, the 94% of college professors who rated their own teaching as above average, and so on.
  • Overprecision: Being more confident than warranted in the accuracy of your beliefs. Giving narrow confidence intervals around estimates that are actually much more uncertain than you think.

These three forms are psychologically related but not identical, and they can sometimes pull in opposite directions. For very difficult tasks, people tend to underestimate their absolute performance while still overplacing themselves relative to peers. In everyday practical contexts, however, all three tend to produce systematic overconfidence in ways that affect real decisions.
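Overprecision is the easiest of the three to see quantitatively. The toy simulation below, with made-up numbers rather than data from any of the studies cited here, shows how "90% confidence intervals" that are only a little too narrow produce hit rates far below 90%:

```python
import random

# A toy simulation of overprecision (illustrative numbers, not study data).
# Each trial: a forecaster states a "90% confidence interval" around a noisy
# point estimate, but makes the interval much narrower than the noise warrants.

random.seed(0)

def stated_90pct_interval(truth):
    guess = truth * random.gauss(1.0, 0.25)   # noisy estimate of the quantity
    half_width = 0.15 * abs(guess)            # overly tight +/-15% bounds
    return guess - half_width, guess + half_width

trials, hits = 10_000, 0
for _ in range(trials):
    truth = random.uniform(50, 150)
    low, high = stated_90pct_interval(truth)
    hits += low <= truth <= high

print(f"stated confidence: 90%, actual hit rate: {hits / trials:.0%}")
# The hit rate lands far below 90% -- the signature of overprecision.
```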

The Planning Fallacy

Perhaps the most costly manifestation of overconfidence in practical life is the planning fallacy: the systematic tendency to underestimate the time, costs, and risks of future tasks while overestimating their benefits. First described by Kahneman and Tversky in 1979 and developed extensively in later work, it explains why construction projects routinely run over budget and schedule, why software development timelines are almost always optimistic, and why personal projects take far longer than anticipated.

The empirical record is striking. Roger Buehler, Dale Griffin, and Michael Ross (1994) asked students to predict when they would complete a current project, at several levels of confidence. Even the completion dates students said they were 99% certain to meet were achieved only about 45% of the time. A study by Flyvbjerg, Skamris Holm, and Buhl (2003) of large transport infrastructure projects found that roughly nine in ten exceeded their cost estimates, with average overruns of about 20% for roads, 34% for bridges and tunnels, and 45% for rail. The Sydney Opera House, originally budgeted at AUD$7 million, ultimately cost AUD$102 million.

The planning fallacy arises from what Kahneman calls an "inside view" — planners focus on the specific features of their particular project, mentally simulating a smooth path to completion, rather than consulting the "outside view": the base rate of similar projects and their actual outcomes. When you think about your project in detail, you generate reasons it will succeed; when you look at how projects like yours typically go, you learn that they fail, run late, and cost more. Overconfidence drives the adoption of the inside view and the neglect of the outside view.
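The outside view can be made mechanical. The sketch below uses entirely hypothetical overrun ratios as its reference class and scales an inside-view cost estimate by a chosen percentile of how comparable past projects actually turned out:

```python
# Outside-view (reference class) adjustment of an inside-view estimate.
# The overrun ratios below are hypothetical placeholders for a real
# reference class of comparable past projects.

inside_view_cost = 1_000_000  # bottom-up estimate for this project

past_overruns = [0.95, 1.05, 1.10, 1.20, 1.25, 1.30, 1.35, 1.45, 1.50, 1.60]

def outside_view(estimate, overruns, percentile=0.8):
    """Scale the estimate by the chosen percentile of historical overruns."""
    ranked = sorted(overruns)
    index = min(int(percentile * len(ranked)), len(ranked) - 1)
    return estimate * ranked[index]

print(f"inside view:             {inside_view_cost:>12,.0f}")
print(f"outside view (80th pct): {outside_view(inside_view_cost, past_overruns):>12,.0f}")
```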

Expert Predictions and Calibration Failures

Philip Tetlock's landmark study, summarised in Expert Political Judgment (2005), tracked 82,361 predictions made by 284 experts in political science, economics, and related fields over nearly two decades. The results were damning: expert predictions performed barely better than chance, and in many cases no better than a simple base-rate rule of thumb. More significantly, experts were systematically overconfident: they assigned probabilities to predictions that were far too high given their actual hit rates.

Tetlock found that the more famous and confident the expert, the worse their calibration tended to be. High-status experts who appeared regularly in media were no more accurate than obscure academics — and were often less calibrated because public visibility selects for bold, confident predictions rather than accurate ones. Markets, media, and prestige reward certainty; they punish nuanced probabilistic statements that hedge appropriately but sound weak.

Calibration — the alignment between stated confidence and actual accuracy — can be improved. Tetlock's subsequent "Good Judgment Project" identified a class of "superforecasters" who were substantially better calibrated than average experts. The key qualities: actively seeking disconfirming evidence, updating beliefs fluidly in response to new information, thinking in probability ranges rather than yes/no, and avoiding overcommitment to specific predictions. These are deliberate cognitive practices that work against the natural pull of overconfidence.
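Calibration itself is simple to measure, provided you log predictions. A minimal sketch, using a hypothetical forecast log rather than Tetlock's data, groups forecasts by stated confidence and compares that to how often they actually came true:

```python
from collections import defaultdict

# Hypothetical forecast log: (stated probability, did it happen?)
forecasts = [
    (0.9, True), (0.9, False), (0.9, True), (0.9, False), (0.9, True),
    (0.7, True), (0.7, True), (0.7, False), (0.7, True),
    (0.6, False), (0.6, True), (0.6, False), (0.6, True),
]

by_confidence = defaultdict(list)
for stated, outcome in forecasts:
    by_confidence[stated].append(outcome)

for stated in sorted(by_confidence, reverse=True):
    outcomes = by_confidence[stated]
    actual = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> actual {actual:.0%}  (n={len(outcomes)})")

# Well-calibrated forecasters show actual rates close to stated ones;
# overconfidence shows up as stated probabilities well above actual hit rates.
```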

Financial Markets and the Overconfidence Premium

Overconfidence is widely considered one of the primary mechanisms driving excess trading in financial markets. Terrance Odean's analysis of retail brokerage accounts found that the most active traders — those who traded most frequently — consistently underperformed the market and underperformed less active investors. They were overconfident in their ability to identify winning investments, generating excessive transactions that produced costs (commissions, taxes, bid-ask spreads) without producing superior returns.

Research by Brad Barber and Terrance Odean (2001) found that men trade 45% more than women, and that trading reduced men's net returns by 2.65 percentage points a year, versus 1.72 for women. The gender gap in trading frequency tracks a gender gap in overconfidence documented across many domains: men tend to be more overconfident than women in competitive and financial contexts, though both show overconfidence relative to calibrated benchmarks.
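The size of that drag is easy to understate because it compounds. A back-of-the-envelope sketch, assuming a 10% gross annual return, a 30-year horizon, and a starting sum chosen purely for illustration, applies the per-year reductions quoted above:

```python
# Compounding the annual return reduction from excess trading.
# Gross return, horizon, and starting sum are illustrative assumptions;
# the per-year drags are the Barber & Odean figures quoted above.

gross_return = 0.10
years, start = 30, 10_000
drags = {"no trading drag": 0.0, "women (-1.72 pp/yr)": 0.0172, "men (-2.65 pp/yr)": 0.0265}

for label, drag in drags.items():
    final = start * (1 + gross_return - drag) ** years
    print(f"{label:<22} {final:>10,.0f}")
```

Even a gap of roughly one percentage point per year, left to compound over decades, translates into a final portfolio tens of percent smaller.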

At the macroeconomic level, overconfidence among investors and business leaders may amplify asset price bubbles. The dot-com boom of the late 1990s and the housing bubble of the mid-2000s both involved systematic overconfidence: investors, lenders, and ratings agencies consistently underestimated downside risks while overestimating the ability of new business models or financial instruments to eliminate risk. The 2008 financial crisis was, in part, an overconfidence crisis.

Medicine and the Expert Paradox

Medical overconfidence has documented, measurable consequences for patient outcomes. Studies of diagnostic accuracy consistently find that physician confidence in a diagnosis is poorly correlated with accuracy — confident diagnoses are wrong at roughly the same rate as uncertain ones. Yet confident diagnoses receive less scrutiny, less follow-up testing, and less revision when new information arrives.

A systematic review by Mark Graber, Nancy Franklin, and Ruthanna Gordon (2005) found that diagnostic errors affected approximately 10–15% of patients in medical settings, with overconfidence identified as a contributing factor in a substantial proportion of cases. Doctors who were confident their diagnosis was correct were less likely to order confirmatory tests or to revisit the diagnosis when new information arrived, and were therefore more likely to miss alternative diagnoses.

Surgical outcomes research shows a similar pattern: surgeons who overestimate their skills relative to peers have higher complication rates. A related insight from Dunning and Kruger applies here: the least skilled practitioners are also the least aware of their deficits, partly because assessing quality in a domain requires competence in that domain. See also: Dunning-Kruger effect.

Why Overconfidence Persists

If overconfidence is so consistently harmful, why hasn't it been selected away? Several explanations have been proposed:

  • Social benefits: Expressing confidence signals competence and attracts followers. In ancestral group contexts, confident individuals may have been better positioned for leadership roles regardless of their actual accuracy. The social reward for appearing confident may have outweighed the individual cost of being wrong.
  • Motivational benefits: Overconfidence may support persistence in difficult endeavours. An accurate assessment of the odds of startup success (around 10% survive 10 years) might rationally deter entrepreneurship that produces social value. Some overconfidence may function as a commitment device.
  • Information asymmetry: In many ancestral contexts, feedback on predictions was delayed, noisy, or ambiguous. Without clear calibration feedback, overconfidence was hard to correct. Modern environments provide cleaner feedback in some domains (financial markets, sports), and professionals in feedback-rich domains do tend to show less overconfidence.

Countering Overconfidence

The research literature suggests several interventions that genuinely improve calibration:

  • Reference class forecasting: Consult the base rate for your type of project, prediction, or decision before estimating. "How long do projects like this typically take?" forces outside-view thinking.
  • Pre-mortem analysis: Before committing to a plan, imagine it has failed. Ask: what went wrong? This surfaces risks that optimistic forward-thinking suppresses.
  • Seek disconfirming evidence: Actively look for reasons your prediction might be wrong, rather than accumulating reasons it is right. This counteracts the confirmation bias that fuels overconfidence.
  • Express uncertainty in ranges: Forcing yourself to give a range rather than a point estimate encourages honest engagement with uncertainty.
  • Track your predictions: Calibration is learnable, but only if you keep score. Reviewing your predictions against outcomes is the single most effective way to correct systematic overconfidence; a minimal prediction log and scoring rule, sketched below, is enough to start.
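A minimal version of that score-keeping, with hypothetical forecasts and the standard Brier score (lower is better; always answering "50%" scores 0.25), might look like this:

```python
# A tiny prediction log scored with the Brier score (illustrative entries).
# Each entry: (what was predicted, stated probability, what actually happened).
forecast_log = [
    ("feature ships by March 1",   0.80, False),
    ("candidate A wins the vote",  0.65, True),
    ("bug fix resolves the crash", 0.90, True),
    ("vendor delivers on time",    0.70, False),
]

brier = sum((p - actual) ** 2 for _, p, actual in forecast_log) / len(forecast_log)
print(f"Brier score over {len(forecast_log)} forecasts: {brier:.3f}")
# Confident forecasts that keep scoring worse than the 0.25 "always say 50%"
# baseline are a concrete signal of overconfidence; reviewing the log against
# outcomes is what makes the bias correctable.
```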

Sources & Further Reading

  • Svenson, O. "Are We All Less Risky and More Skillful Than Our Fellow Drivers?" Acta Psychologica 47, no. 2 (1981): 143–148.
  • Moore, D. A., & Healy, P. J. "The Trouble with Overconfidence." Psychological Review 115, no. 2 (2008): 502–517.
  • Kahneman, D., & Tversky, A. "Intuitive Prediction: Biases and Corrective Procedures." TIMS Studies in Management Sciences 12 (1979): 313–327.
  • Tetlock, P. E. Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press, 2005.
  • Flyvbjerg, B., Skamris Holm, M. K., & Buhl, S. L. "How Common and How Large Are Cost Overruns in Transport Infrastructure Projects?" Transport Reviews 23, no. 1 (2003): 71–88.
  • Barber, B. M., & Odean, T. "Boys Will Be Boys: Gender, Overconfidence, and Common Stock Investment." Quarterly Journal of Economics 116, no. 1 (2001): 261–292.
  • Wikipedia: Overconfidence effect
