Cherry Picking (Suppressed Evidence): How Selective Facts Distort Reality
Imagine a prosecutor who only reads the jury the confessions and ignores every piece of alibi evidence. Or a pharmaceutical company that publishes its three successful drug trials and quietly shelves the twelve that showed no effect. Or a politician who cites one unemployment statistic while omitting six others that tell a completely different story. This is cherry picking — the deliberate or unconscious selection of evidence that supports a predetermined conclusion, while suppressing, ignoring, or dismissing everything that contradicts it.
The name comes from the image of someone picking only the ripest, reddest cherries from a tree while pretending the rest of the tree doesn't exist. In formal logic, it is known as the fallacy of incomplete evidence or suppressed evidence. In science, it appears as selective reporting, outcome switching, and publication bias. In everyday argument, it is simply the oldest trick in the rhetorical book.
The Structure of the Fallacy
Cherry picking follows a deceptively simple pattern:
- A body of evidence exists — studies, statistics, historical events, expert opinions.
- Some of that evidence supports a desired conclusion; some contradicts it.
- Only the supporting evidence is presented as if it were the whole picture.
- The audience, seeing only the curated selection, draws the intended conclusion.
The conclusion drawn may even be technically true in the narrow sense — each cherry really is red. The fallacy lies in the implication that the selected evidence is representative, when it is actually exceptional. The argument is not wrong because the evidence is fabricated; it is wrong because the full evidence base leads somewhere else entirely.
This makes cherry picking especially insidious: the arguer can always point to their sources and say "but this is real data!" The counter-argument requires the audience to know what's missing — which takes considerably more effort than making the original claim.
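The four-step pattern above can be made concrete with a toy simulation (entirely synthetic numbers, not real studies): draw 100 "study results" whose true average effect is zero, then report only the best five. Every reported number is genuine, yet the curated summary is wildly unrepresentative.

```python
import random

# Toy illustration: 100 synthetic "study results" drawn from a
# distribution whose true average effect is exactly zero.
random.seed(42)
studies = [random.gauss(0.0, 1.0) for _ in range(100)]

full_mean = sum(studies) / len(studies)        # honest summary
cherries = sorted(studies, reverse=True)[:5]   # keep only the 5 best
picked_mean = sum(cherries) / len(cherries)    # curated summary

print(f"Mean of all 100 results:         {full_mean:+.2f}")
print(f"Mean of 5 cherry-picked results: {picked_mean:+.2f}")
```

The cherry-picked mean lands far above the true effect of zero even though every selected value is real data. The deception lives entirely in the selection, not in the numbers themselves.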
Climate Denial: A Textbook Case
Climate change denial provides perhaps the most consequential example of institutional cherry picking. The global scientific consensus — based on tens of thousands of peer-reviewed studies — holds that human-caused climate change is real, accelerating, and dangerous. Denial arguments systematically sidestep this evidence by selecting particular data points: a cold winter in one region, a period of slower warming in a specific decade, studies from a tiny minority of dissenting researchers.
One famous example is the claim that global warming "paused" between 1998 and 2012. What that claim omitted was that 1998 was an exceptionally hot El Niño year, making it a misleading baseline; that ocean heat content continued rising throughout the period; and that every subsequent decade has been the hottest on record. The "pause" was real in the narrow sense: it appeared in one dataset, for one measure, over one carefully chosen window. The implication that warming had stopped was entirely false once the full picture was examined.
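A deliberately simplified sketch (synthetic numbers, not the actual temperature record) shows how the choice of start year does the work: seed a steady warming trend with a single hot spike in 1998, then fit a least-squares trend over the full record and over the cherry-picked 1998–2012 window.

```python
# Hypothetical series: steady warming of 0.018 C/year since 1980,
# plus a one-off El Nino-style spike of +0.25 C in 1998.
years = list(range(1980, 2021))
anomaly = [0.018 * (y - 1980) + (0.25 if y == 1998 else 0.0) for y in years]

def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

full_trend = ols_slope(years, anomaly)  # whole record, 1980-2020
window = [(y, a) for y, a in zip(years, anomaly) if 1998 <= y <= 2012]
pause_trend = ols_slope([y for y, _ in window], [a for _, a in window])

print(f"Trend 1980-2020: {full_trend:.4f} C/yr")
print(f"Trend 1998-2012: {pause_trend:.4f} C/yr")
```

Starting the window on the spike year drags the fitted slope down by roughly a third, even though the underlying warming rate never changed. Nothing in the data is fake; only the window is.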
Health Claims and the Science Illusion
The health and wellness industry runs largely on cherry picking. "Studies show that [food/supplement/lifestyle choice] [prevents cancer / boosts IQ / extends lifespan]" is the structural backbone of billions of dollars in marketing. What these claims rarely mention is the balance of the evidence — the studies that found no effect, the negative results, the retracted papers, the replications that failed.
The phenomenon has a formal name in medical research: publication bias. Studies with positive results are more likely to be published; studies with null or negative results tend to disappear into file drawers. A 2008 analysis of antidepressant trials submitted to the FDA found that, of 38 trials with positive results, 37 were published. Of 36 trials with negative results, only 14 were published — and some of those were published in misleading ways that made them appear positive. A doctor reading the published literature would see strong evidence for the drugs; a doctor who could see the full trial record would reach a much more cautious conclusion.
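The file-drawer dynamic can be sketched in a toy model (invented numbers, loosely inspired by the mechanism rather than any real trial record): simulate 200 small trials of a drug with a modest true benefit, "publish" only those whose estimate clears a nominal significance bar, and compare the published picture with the full one.

```python
import random

random.seed(7)
TRUE_EFFECT = 0.10  # the drug's real (modest) benefit
SE = 0.20           # sampling error of each small trial

# Each trial's estimated effect = true effect + sampling noise.
trials = [random.gauss(TRUE_EFFECT, SE) for _ in range(200)]

# File-drawer rule: only estimates clearing ~1.96 standard errors
# above zero count as "positive" and get published.
published = [t for t in trials if t > 1.96 * SE]

all_mean = sum(trials) / len(trials)
pub_mean = sum(published) / len(published)
print(f"True effect:                  {TRUE_EFFECT:.2f}")
print(f"Mean across all 200 trials:   {all_mean:.2f}")
print(f"Mean across published trials: {pub_mean:.2f}")
```

The published literature overstates the benefit severalfold, and no individual published trial is fraudulent. This is the doctor's dilemma from the Turner study in miniature: the visible evidence and the full evidence point to very different conclusions.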
The same mechanism applies to nutrition science, where the conflicting findings on eggs, coffee, red wine, saturated fat, and dozens of other foods often reflect not the complexity of nutrition but the selective reporting of studies that happened to support current dietary fashions or industry interests.
Political Spin and the Statistics Game
Politicians and their communications teams are professional cherry pickers. A government might simultaneously claim credit for falling unemployment (using one measure) while attributing rising inequality to global forces (using a different measure). An opposition party might cite a rising crime statistic (one type of crime, one region, one year) while ignoring a decade-long downward trend in the same category.
What makes political cherry picking particularly effective is that audiences tend to accept statistics from sources they already trust and reject statistics from sources they distrust — regardless of which statistics are actually more representative. This creates an information ecosystem where each tribe has its own curated body of facts, and where debate across tribal lines becomes nearly impossible because the disputants are effectively arguing from different evidence bases.
The Relationship to Confirmation Bias
Cherry picking can be deliberate (as in political spin or commercial advertising) or unconscious (as in the way ordinary people read the news). When it is unconscious, it usually reflects confirmation bias — the well-documented cognitive tendency to seek out, remember, and weight evidence that confirms our existing beliefs more heavily than evidence that challenges them.
Confirmation bias means that cherry picking is the default cognitive mode rather than a deliberate deception. We notice the articles that confirm our views; we skim or dismiss the ones that contradict them. We remember the anecdote that fits our narrative; we forget the ten anecdotes that didn't. This is why cherry picking persists even among intelligent, well-intentioned people: it requires active effort to counteract.
Related to this is the Texas Sharpshooter Fallacy, where patterns are drawn around data points after the fact — finding clusters in random noise and treating them as meaningful targets. Both Texas Sharpshooter and cherry picking share the same root: the data is real, but the selection and framing create a false impression of what the data shows.
Cherry Picking in Science: P-Hacking and Outcome Switching
Within academic research itself, cherry picking appears in several technical forms. P-hacking (also called data dredging) involves running many statistical tests on a dataset until a significant result emerges by chance, then reporting only that result. Outcome switching involves pre-registering a study with one primary outcome measure and then reporting a different measure once the results are in — because the pre-registered measure didn't show what the researchers hoped. HARKing (Hypothesizing After Results are Known) involves presenting a post-hoc hypothesis as if it had been predicted in advance.
These practices don't necessarily involve bad faith — researchers can convince themselves that the adjustments are legitimate. But they produce a scientific literature that is systematically skewed toward positive, surprising, and theory-confirming results. The replication crisis in psychology, medicine, and other fields is partly the accumulated debt of decades of these practices.
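P-hacking can be demonstrated in a few lines (a toy simulation using a simple normal-approximation two-sample test, not any real dataset): run enough tests on pure noise and "significant" findings appear on schedule.

```python
import math
import random

random.seed(1)

def two_sided_p(a, b):
    """Approximate two-sample p-value via the normal approximation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Dredge 1000 "outcome measures" where nothing is real: both groups
# are drawn from the same distribution every single time.
hits = 0
for _ in range(1000):
    group_a = [random.gauss(0, 1) for _ in range(50)]
    group_b = [random.gauss(0, 1) for _ in range(50)]
    if two_sided_p(group_a, group_b) < 0.05:
        hits += 1

print(f"'Significant' results among 1000 pure-noise tests: {hits}")
```

Roughly 5% of the null comparisons clear p < 0.05 by chance alone. Report only those, and a dataset containing nothing looks full of discoveries; this is cherry picking applied to one's own analyses.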
How to Spot Cherry Picking
Several diagnostic questions help identify cherry picking in arguments:
- What evidence exists on the other side? If an argument never acknowledges counter-evidence, that's a signal. Strong arguments address and explain opposing data.
- Is this a representative sample? One study, one statistic, one case — these are not the evidence base. Ask how many studies were done and what the overall pattern shows.
- Who benefits from this selection? When industries, politicians, or advocates present evidence, ask what incentive they have to present only favorable data.
- Look for what's missing. The absence of counter-evidence in a complex field is itself suspicious. Real evidence rarely points uniformly in one direction.
- Check meta-analyses and systematic reviews. These are specifically designed to counteract cherry picking by synthesizing all available evidence, not just the studies that support a given conclusion.
Related Concepts
Cherry picking overlaps significantly with Confirmation Bias (the cognitive tendency that enables unconscious cherry picking), Publication Bias (the systemic version in academic research), and the Texas Sharpshooter Fallacy (finding patterns in data after the fact). It is also closely related to Hasty Generalization — drawing broad conclusions from too-small or unrepresentative samples.
Summary
Cherry picking is the fallacy of presenting a carefully curated subset of evidence as if it were the whole picture. It is simultaneously one of the most common and most consequential reasoning errors in public life — the engine behind medical fraud, climate denial, political spin, and the daily distortions of the information environment. Its power comes precisely from the fact that the evidence presented is real: the cherries genuinely are red. Countering it requires not just evaluating what is presented, but asking what has been left out — a habit of mind that is both cognitively demanding and culturally undervalued.
Sources
- Turner, E. H., et al. (2008). Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine, 358(3), 252–260.
- Gøtzsche, P. C., & Jørgensen, A. W. (2011). Opening up data at the European Medicines Agency. BMJ, 342, d2686.
- Head, M. L., et al. (2015). The extent and consequences of p-hacking in science. PLOS Biology, 13(3), e1002106.
- Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.
- Cook, J., et al. (2013). Quantifying the consensus on anthropogenic global warming in the scientific literature. Environmental Research Letters, 8(2), 024024.
- Damer, T. E. (2008). Attacking Faulty Reasoning (6th ed.). Wadsworth.