Mar 30, 2026 · 2 min read

Multiple Comparisons Problem — When Logic Wears a Disguise

The statistical error of performing many tests without adjusting for the increased probability of false positives. With a significance level of 0.05 and 20 independent tests, there is a 64% chance of at least one false positive. Failure to correct for this inflates the apparent number of 'significant' findings.

Also known as: Look-Elsewhere Effect, Multiple Testing Problem, Multiplicity

How It Works

Each individual test looks legitimate: at the 0.05 level, it has only a 5% chance of producing a false positive. But those chances compound across the family of tests. With m independent tests, the probability of at least one false positive is 1 − (1 − 0.05)^m. The cumulative rate is counterintuitive because people think about each test in isolation rather than as part of a family of tests.
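The compounding is a one-line calculation. A minimal sketch in Python (the function name is mine, not from the article):

```python
def family_wise_error_rate(alpha: float, m: int) -> float:
    """Chance of at least one false positive across m independent
    tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** m

# The 20-test example from above: roughly a 64% chance of at least
# one spurious 'significant' result.
print(round(family_wise_error_rate(0.05, 20), 3))  # → 0.642
```

Note how fast it grows: at m = 100 the family-wise error rate is already above 99%.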

A Classic Example

A brain imaging study tests 100,000 voxels for activation. At p < 0.05, about 5,000 voxels will appear significant by chance alone, potentially producing spurious 'brain activation' maps.

More Examples

A nutrition researcher surveys 500 participants on 80 different dietary habits and tests each one for correlation with heart disease risk. At p < 0.05, roughly four associations will appear significant purely by chance. The researcher publishes the 'finding' that eating soup three times a week reduces risk, without correcting for multiple comparisons.
A social media company's data science team runs A/B tests on 200 minor interface variations in a single month, each evaluated at p < 0.05. Statistically, about 10 of those tests will show a 'significant' effect even if none of the changes actually influence user behavior, leading the team to roll out ineffective features confidently.
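Both examples are easy to reproduce by simulation. A sketch assuming the null hypothesis is true in every test, so that each p-value is uniform on [0, 1]; the seed and counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed, purely illustrative
alpha = 0.05

# 200 A/B tests where no interface variation has any real effect:
# some fraction will dip below 0.05 by luck alone.
p_values = rng.uniform(size=200)
print(f"'Significant' out of 200 null tests: {(p_values < alpha).sum()}")

# With many more null tests, the false-positive rate settles at alpha:
many = rng.uniform(size=100_000)
print(f"Long-run false-positive rate: {(many < alpha).mean():.3f}")
```

The exact count varies run to run, but its expected value is 200 × 0.05 = 10, matching the A/B testing example above.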

Where You See This in the Wild

Genomics, neuroimaging, clinical trials with multiple endpoints, and any large-scale data analysis.

How to Spot and Counter It

Apply multiple comparison corrections (Bonferroni, FDR, permutation testing). Pre-register hypotheses to distinguish confirmatory from exploratory analysis.
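The two most common corrections are small enough to implement directly. A minimal sketch of the Bonferroni and Benjamini-Hochberg (FDR) rejection rules; the function names and the example p-values are mine:

```python
def bonferroni_reject(pvals, alpha=0.05):
    """Reject H_i only if p_i <= alpha / m. Controls the family-wise
    error rate, at a considerable cost in power."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def benjamini_hochberg_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure. Controls the false
    discovery rate, typically rejecting more than Bonferroni."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha,
    # then reject every hypothesis at rank 1..k.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k_max
    return reject

p = [0.01, 0.02, 0.03, 0.04, 0.25]
print(sum(bonferroni_reject(p)))          # Bonferroni keeps only p <= 0.01
print(sum(benjamini_hochberg_reject(p)))  # BH keeps the first four
```

Permutation testing and pre-registration attack the same problem from different angles: the first estimates the null distribution of the whole family empirically, the second fixes the family of hypotheses before the data are seen.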

The Takeaway

The Multiple Comparisons Problem is one of those reasoning errors that sounds perfectly logical at first glance. That's what makes it dangerous: each individual test wears the costume of valid inference, while the sheer number of tests smuggles in broken conclusions. The best defense? Slow down and ask: how many comparisons were actually run, and does this finding survive correction for all of them?

Next time someone presents you with a result that "just reached significance," check the denominator. One significant finding from a single pre-registered test and one from a hundred exploratory tests are not the same evidence.
