Illusory Correlation: Seeing Patterns That Aren't There
For centuries, physicians, nurses, and patients alike reported a clear pattern: emergency rooms fill up on full-moon nights. Psychiatric wards get louder. Bizarre cases cluster around the lunar peak. The effect seems undeniable to those who work the shifts. Yet a comprehensive review of 37 studies and over 1.4 million hospital admissions found no relationship whatsoever between lunar phase and emergency room admissions, psychiatric episodes, or any of the other phenomena confidently attributed to the moon. The doctors and nurses weren't lying. They were experiencing illusory correlation — one of the most robust and consequential errors in human perception.
The Founding Research
The concept of illusory correlation was introduced by Loren and Jean Chapman in two landmark studies in 1967 and 1969. The Chapmans were puzzled by a clinical paradox: experienced psychologists consistently reported specific associations between patients' responses to projective tests (like the Draw-A-Person test or the Rorschach) and clinical diagnoses — associations that systematic statistical analysis consistently failed to confirm.
In their 1967 study, the Chapmans presented participants with word pairs (e.g., "lion–tiger", "blossoms–notebook") and later asked them to estimate how often each pairing had occurred. Participants consistently overestimated the frequency of pairs that were semantically associated (words that "went together") and underestimated pairs that didn't — even when all pairings had occurred with equal frequency. The mind perceived relationship where only co-occurrence existed, guided by pre-existing expectations about what should go together.
In their 1969 study, they showed clinicians patient profiles paired with clinical symptoms, constructed so that there was no actual correlation between any profile feature and any symptom. Clinicians consistently reported seeing correlations — specifically the ones they expected to see based on their training and intuitions. Experienced clinicians with years of practice were no better, and sometimes worse, than novices. What they "saw" was their prior expectation, not the data.
Why We See Patterns That Aren't There
Illusory correlation has two main sources:
1. Expectation-Based Illusory Correlation
When we already believe two things are related, we notice and remember instances that confirm this belief, and fail to register instances that disconfirm it. If you believe that full moons cause strange behaviour, you will notice and remember the difficult shift on the full-moon night, and not notice the equally difficult shift on a new-moon night. You will encode the confirming instances as data and the disconfirming instances as unremarkable coincidence. This is a joint product of confirmation bias and selective attention — the prior belief creates a perceptual filter that generates its own evidence.
2. Distinctiveness-Based Illusory Correlation
Even without prior expectations, we form illusory correlations when two distinctive or unusual events co-occur. Unusual events are memorable. When they happen together, the pairing is doubly memorable — and memorable events feel frequent. If you see one car accident involving a red car and one involving a blue car, but red cars are rare, the red-car accident will feel more salient and will contribute disproportionately to a perceived pattern. Small or minority groups are inherently more distinctive than majority groups; any negative event associated with a minority group member will be more memorable, and will more easily generate a perceived pattern.
This distinctiveness mechanism was identified by David Hamilton and Robert Gifford in a 1976 study that directly demonstrated how illusory correlations form between minority groups and negative behaviours — even in the absence of any actual association. Participants read sentences describing behaviours of two groups (Group A and Group B), with Group B having fewer members. Negative behaviours were less frequent than positive ones in both groups. But because both Group B and negative behaviours were "minority" items (less frequent), their co-occurrences were distinctive, memorable, and perceived as correlated. Group B was judged more negatively than the data warranted.
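The arithmetic behind the Hamilton–Gifford design is easy to verify. The sketch below uses the commonly reported stimulus counts (18 positive and 8 negative behaviours for Group A, 9 and 4 for Group B); it shows that the negative-behaviour rate is identical in both groups, so the true group–behaviour association is exactly zero:

```python
# Commonly reported stimulus counts from Hamilton & Gifford (1976):
group_a = {"positive": 18, "negative": 8}   # majority group (26 statements)
group_b = {"positive": 9, "negative": 4}    # minority group (13 statements)

for name, g in [("A", group_a), ("B", group_b)]:
    total = g["positive"] + g["negative"]
    # Both groups produce negative behaviour at the same rate (~0.308),
    # yet participants judged the minority group more negatively.
    print(f"Group {name}: negative rate = {g['negative'] / total:.3f}")
```

Because the ratios match, any perceived difference between the groups is supplied entirely by the observer's memory, not the data.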
Illusory Correlation in the Real World
Sugar and Hyperactivity
The belief that sugar causes hyperactivity in children is one of the most durable myths in popular health culture. It has been directly tested in numerous double-blind studies, and a meta-analysis of 23 of them by Mark Wolraich and colleagues, published in JAMA in 1995, found no effect of sugar on children's behaviour or cognitive performance — even in children diagnosed with ADHD and in children whose parents claimed they were "sugar-sensitive." Sugar does not cause hyperactivity.
The myth persists because of a perfect illusory correlation setup. Children are often given sugar at parties, celebrations, and events where they are also excited, running around, and stimulated — occasions that independently cause energetic behaviour. Parents, expecting a sugar effect, notice and remember the post-sugar exuberance and attribute it to the sugar. They do not track the same children's behaviour after sugar consumption in quiet settings, or the behaviour of equally excited children who consumed no sugar. The expectation creates the pattern.
The Full Moon Effect
Emergency room workers, police officers, and mental health professionals around the world share a deep conviction about full-moon nights. The belief is cross-cultural and persistent. It is also, by every statistical measure, false. Studies examining hospital admissions, psychiatric episodes, crime rates, traffic accidents, and birth rates across lunar phases consistently find no significant relationship. The illusory correlation is sustained by:
- The memorability of striking cases on notable nights
- Confirmation bias in attention and reporting
- Social reinforcement — colleagues share the belief, which amplifies the perception
- The psychological salience of the full moon itself, which makes associated events feel more significant
Stereotyping and Minority Groups
The Hamilton and Gifford distinctiveness mechanism provides a cognitive explanation for how stereotypes form and persist even in the absence of genuine group differences. When a minority group member commits a crime or behaves badly, the event is distinctive by both dimensions (minority + negative = doubly unusual), making it disproportionately memorable. The mind registers it as part of a pattern. When a majority group member commits the same act, it is less distinctive, and less likely to be encoded as representative of the group.
This is not the same as attributing all prejudice to cognitive bias — structural factors, historical context, and deliberate motivated reasoning all play roles. But the illusory correlation mechanism shows that stereotypes can form and be maintained through purely cognitive processes, without any underlying truth and without any malicious intent. The perceptual system generates the pattern automatically.
Clinical and Diagnostic Practice
The Chapmans' original concern was clinical psychology, and the problem has not gone away. Research consistently shows that clinicians perceive relationships between diagnostic indicators and client characteristics that aren't there — particularly when those relationships match what they were trained to expect, or what they remember reading about. This isn't limited to psychology: physicians see patterns in treatment-outcome relationships that randomised trials fail to confirm. The clinical mind is a pattern-recognition engine running in a noisy environment, and it is not well-calibrated for estimating genuine correlation.
Related to this is the phenomenon of clustering illusion — the tendency to see meaningful clusters in random data. Illusory correlation and clustering illusion are complementary errors: the former involves perceiving a link between two variables; the latter involves perceiving meaningful structure within a single stream of events. Both reflect the same underlying drive to find pattern in noise. See also apophenia for the broader category of perceiving meaningful connections in unrelated phenomena.
Measuring the Illusion
The formal measure of illusory correlation requires comparing perceived correlation with actual correlation calculated from the data. In the Chapmans' work, this was done directly: the experimenters knew what correlations were actually present (none), and compared this to clinicians' reported perceptions. The gap between perceived and actual correlation is the illusory component.
In naturalistic settings, this is harder to measure. People don't collect data on disconfirming instances. They don't track base rates. They don't calculate what frequency of association you'd expect by chance. This is precisely why the bias is so persistent: the information needed to refute it is systematically underrepresented in our mental data set.
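Given complete observations, the actual correlation between two binary events can be computed directly from their 2×2 contingency table. A minimal sketch using the phi coefficient, with hypothetical full-moon counts chosen so the "busy shift" rate is 40% on both kinds of night:

```python
from math import sqrt

def phi_coefficient(both: int, a_only: int, b_only: int, neither: int) -> float:
    """Association between two binary events, computed from all four
    cells of their 2x2 contingency table. Ranges from -1 to 1; 0 means
    no correlation at all."""
    denom = sqrt((both + a_only) * (b_only + neither)
                 * (both + b_only) * (a_only + neither))
    return (both * neither - a_only * b_only) / denom if denom else 0.0

# Hypothetical year of shift logs: 30 full-moon nights, 335 other nights.
# Busy shifts occur at the same 40% rate on both, so phi is exactly 0 --
# even though the observer remembers only the 12 busy full-moon nights.
print(phi_coefficient(both=12, a_only=18, b_only=134, neither=201))
```

The perceived correlation minus this computed value is the illusory component; informally, people only ever "compute" the first cell.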
Correcting for Illusory Correlation
The correction is systematic data collection and analysis — which is, after all, why we invented statistics. Specific strategies:
- Track all four cells. Genuine correlation requires knowing not just how often A and B occur together, but how often A occurs without B, B occurs without A, and neither occurs. Most illusory correlations survive because people only track the confirming cell (A and B together).
- Seek disconfirming instances actively. If you believe sugar causes hyperactivity, deliberately observe children who consumed sugar in calm settings, and children who did not consume sugar at chaotic events. The belief rarely survives this exercise.
- Use base rates. If 40% of patients would improve without treatment, seeing 50% of treated patients improve is weak evidence of treatment effect. Most informal correlations fail to account for the base rate of the outcome.
- Randomise and blind. Where possible, collect data in conditions that make it hard to selectively notice confirming instances. Randomised controlled trials exist precisely to defeat this bias.
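The base-rate point above can be made concrete with illustrative numbers. Suppose 10 of 20 treated patients improve (50%) against a 40% spontaneous-improvement base rate; the exact binomial tail probability shows how weak that evidence is:

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing at least
    k improvements if the treatment adds nothing beyond the base rate."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 10 of 20 treated patients improve; base rate of improvement is 40%.
p_value = binom_tail(n=20, k=10, p=0.40)
print(f"P(>=10 of 20 improve by chance) = {p_value:.3f}")
# Well above 0.05: this outcome is entirely unsurprising under the base rate.
```

An informal observer sees "half my patients got better" as a pattern; the calculation shows it is roughly what chance alone predicts.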
We are storytelling animals, and stories require patterns. Illusory correlation is the price we pay for pattern recognition — a system tuned so sensitively for finding signal that it regularly conjures signal out of pure noise. The moon does not fill the wards. The sugar does not cause the chaos. The minority group is not more dangerous. What we see is what we expected to see, confirmed by a memory system that keeps exactly the evidence we need to believe it.
Sources & Further Reading
- Chapman, L. J., & Chapman, J. P. "Genesis of Popular but Erroneous Psychodiagnostic Observations." Journal of Abnormal Psychology 72, no. 3 (1967): 193–204.
- Chapman, L. J., & Chapman, J. P. "Illusory Correlation as an Obstacle to the Use of Valid Psychodiagnostic Signs." Journal of Abnormal Psychology 74, no. 2 (1969): 271–280.
- Hamilton, D. L., & Gifford, R. K. "Illusory Correlation in Interpersonal Perception: A Cognitive Basis of Stereotypic Judgments." Journal of Experimental Social Psychology 12, no. 4 (1976): 392–407.
- Wolraich, M. L., Wilson, D. B., & White, J. W. "The Effect of Sugar on Behavior or Cognition in Children: A Meta-analysis." JAMA 274, no. 20 (1995): 1617–1621.
- Rotton, J., & Kelly, I. W. "Much Ado About the Full Moon: A Meta-Analysis of Lunar-Lunacy Research." Psychological Bulletin 97, no. 2 (1985): 286–306.
- Wikipedia: Illusory correlation