The Fallacy Fallacy
Why you shouldn't go looking for faulty reasoning everywhere.

Have you ever wondered why people believe that the moon landing was faked, that vaccines secretly poison us, or that Mercury in retrograde can ruin your love life? Why does irrationality seem so pervasive? A popular answer, beloved by academics and educators alike, points to fallacies—certain types of arguments that are deeply flawed yet oddly seductive. Because people keep falling for these reasoning traps, they end up believing all sorts of crazy stuff. Still, the theory offers hope: if you memorize the classic fallacies—ad hominem, post hoc, straw man—you will inoculate yourself against them.
It’s a neat little story, and I used to believe it too. Not anymore. I’ve become a fallacy apostate.
Growing doubts
My doubts began when I was still in academia, teaching critical thinking to philosophy students and science majors alike. Fallacies are a favorite chapter in such courses. In some ways, they are ideal teaching material: they come in tidy lists and seem easy to apply. Many trace back to Aristotle and still parade under their Latin names—ad hominem, ad populum, ad ignorantiam, ad verecundiam (better known as the argument from authority), the slippery slope, affirming the consequent, and so on.
So I dutifully taught my students the standard laundry list and then challenged them to put theory into practice. Read a newspaper article or watch a political debate—and spot the fallacies!
After a few years, I abandoned the assignment. The problem? My students turned paranoid. They began to see fallacies everywhere. Instead of engaging with the substance of an argument, they hurled labels and considered the job done. Worse, most of the “fallacies” they identified did not survive closer scrutiny.
It would be too easy to blame my students. When I tried the exercise myself, I had to admit that I mostly came away empty-handed. Clear-cut fallacies are surprisingly hard to find in real life. So what do you do if your professor tells you to hunt for fallacies and you can’t find any? You lower the bar. To satisfy the assignment, you expand your definition.
The Fallacy Fork
In 2015, I published a paper in the journal Argumentation with two colleagues arguing that fallacy theory should be abandoned. Here is its crux: every so-called fallacy closely resembles forms of reasoning that are perfectly legitimate, depending on the context. In formal terms, good and bad arguments are often indistinguishable. Worse, there is almost always a continuum between strong and weak arguments. You cannot capture that gradient in a rigid formal scheme. As my friends Hugo Mercier and Dan Sperber succinctly put it in The Enigma of Reason: “most if not all fallacies on the list are fallacious except when they are not.”
In the paper, we make the case using a dilemma we call the Fallacy Fork, which forces you to choose between two unpalatable options:
(A) You define your fallacy using a neat scheme of deduction. In a deductive argument, the conclusion must follow inexorably from the premises. If the premises are true yet the conclusion could still be false, the argument is invalid—case closed. The trouble is that, in real life, you almost never encounter such clean blunders. You can invent textbook examples, sure, but flesh-and-blood cases are surprisingly rare.
(B) So you loosen things up and abandon the strict realm of deductive logic. You add some context and nuance. Now you can capture plenty of real-life arguments. Great. Except there’s a catch: your “fallacy” stops being fallacious. It turns into a perfectly ordinary move in everyday reasoning—sometimes weak, sometimes strong, but not obviously irrational.
Let’s see how some of the most famous fallacies fare when confronted with the Fallacy Fork.
Post hoc fallacy
As the saying goes: correlation does not imply causation. If you think otherwise, logic textbooks will tell you that you’re guilty of the fallacy known as post hoc ergo propter hoc. You can formalize it like this:
If B follows A, then A is the cause of B.
Clearly, this is false. Any event B is preceded by countless other events. If I suddenly get a headache, which of the myriad preceding events should I blame? That I had cornflakes for breakfast? That I wore blue socks? That my neighbor wore blue socks?
It’s easy to mock this fallacy—websites like Spurious Correlations offer graphs showing correlations between margarine consumption and divorce rates, or between the number of people who drowned by falling into a pool and the number of films Nicolas Cage appeared in that year.
The problem is that not even the most superstitious person really believes that just because A happened before B, A must have caused B. Sure, in strict deductive terms, post hoc ergo propter hoc is a fallacy—but real-life examples are almost nonexistent. That’s the first prong of the Fallacy Fork.
So what do real-life post hoc arguments actually look like? More like this: “If B follows shortly after A, and there’s some plausible causal mechanism linking A and B, then A is probably the cause of B.” Many such arguments are entirely plausible—or at least not obviously wrong. Context is everything.
Imagine you eat some mushrooms you picked in the forest. Half an hour later, you feel nauseated, so you put two and two together: “Ugh. That must have been the mushrooms.” Are you committing a fallacy? Yes, says your logic textbook. No, says common sense—at least if your inference is meant to be probabilistic.
Here, the inference is actually reasonable, assuming a few tacit things:
Some mushrooms are toxic.
It’s easy for a layperson to mistake a poisonous mushroom for a harmless one.
Nausea is a common symptom of food poisoning.
You don’t normally feel nauseated.
If you want, you can even spell this out in probabilistic terms. Consider the last premise—the base rate. If you usually have a healthy stomach, the mushroom is the most likely culprit. If, on the other hand, you frequently suffer from gastrointestinal problems, the post hoc inference becomes much weaker.
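Here is one way to do that: a minimal sketch in Python, with numbers I’ve invented purely for illustration, showing how Bayes’ rule turns the same observation into a strong inference for someone with a sturdy stomach and a much weaker one for someone who is often queasy anyway.

```python
# Toy Bayesian version of the mushroom inference. All probabilities below are
# invented for illustration; they are not real toxicology or medical figures.

def p_mushroom_given_nausea(prior_bad_mushroom, p_nausea_if_bad, base_rate_nausea):
    """P(toxic mushroom | nausea) via Bayes' rule.

    prior_bad_mushroom: prior probability the foraged mushroom was toxic
    p_nausea_if_bad:    probability of nausea given a toxic mushroom
    base_rate_nausea:   probability of feeling nauseated anyway (the base rate)
    """
    # Total probability of nausea: caused by the mushroom, or by ordinary causes.
    p_nausea = (prior_bad_mushroom * p_nausea_if_bad
                + (1 - prior_bad_mushroom) * base_rate_nausea)
    return prior_bad_mushroom * p_nausea_if_bad / p_nausea

# Healthy stomach: nausea is rare, so the mushroom becomes the prime suspect.
print(p_mushroom_given_nausea(0.05, 0.8, 0.01))  # ~0.81

# Frequent stomach trouble: the very same inference is now much weaker.
print(p_mushroom_given_nausea(0.05, 0.8, 0.30))  # ~0.12
```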
Almost all of our everyday knowledge about cause and effect comes from this kind of intuitive post hoc reasoning. My phone starts acting up after I drop it; someone unfriends me after I post an offensive joke; the fire alarm goes off right after I light a cigarette. As Randall Munroe, creator of the webcomic xkcd, once put it: “Correlation doesn’t imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing ‘look over there.’” The problem with astrology, homeopathy, and other forms of quack medicine lies in their background causal assumptions, not in the post hoc inferences themselves.
Ad hominem
Perhaps the most famous fallacy of all is the ad hominem. The principle seems simple: when assessing an argument, you should attack the argument, not the person. Play the man instead of the ball, and you’re guilty of ad hominem reasoning. But is it really that simple?
If your ad hominem argument is meant deductively, then yes—it’s invalid. For example: “This researcher is in the pocket of the pharmaceutical industry, therefore his study is flawed.” If “therefore” is intended in a strict deductive sense, the argument is clearly invalid: the conclusion doesn’t logically follow. But how often do we actually encounter such rigid ad hominems in real life?
Here’s a more reasonable, plausible version: “This researcher studying a new antidepressant was funded by the company that makes the drug. Therefore, we should take his results with a large grain of salt.” This is far more defensible, especially for ordinary people under real-life constraints. It’s what philosophers call a defeasible argument—one that’s provisional, open to revision, and inconclusive. Most real-world arguments work this way. Yes, it has an ad hominem structure, but does that mean we should dismiss it outright?
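One way to see why is to treat the funding source not as a refutation but as evidence about how much a positive result should move you. Here is a toy Bayesian sketch, again with numbers invented purely for illustration:

```python
# Toy model: how much should a positive study raise our credence that a new
# drug works, given who ran it? All numbers are invented for illustration.

def p_drug_works_given_positive(prior, p_positive_if_works, p_positive_if_not):
    """P(drug works | positive result) via Bayes' rule."""
    p_positive = (prior * p_positive_if_works
                  + (1 - prior) * p_positive_if_not)
    return prior * p_positive_if_works / p_positive

prior = 0.3  # credence that the drug works before seeing the study

# Independent lab: a false positive is fairly unlikely.
print(p_drug_works_given_positive(prior, 0.8, 0.1))  # ~0.77

# Manufacturer-funded lab: selective reporting and flexible analyses make a
# positive result more likely even if the drug does nothing.
print(p_drug_works_given_positive(prior, 0.9, 0.5))  # ~0.44
```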
Courts routinely rely on ad hominem reasoning. Judges can discount witnesses or experts because of bias, conflicts of interest, or hidden agendas. Sure, a biased witness might still tell the truth—but courts aren’t schools of formal logic. Politicians attack the character of their opponents too—and often for very good reason.
Even in science, despite its lofty rhetoric, personal reputation and status matter enormously. Peer review is anonymous in theory, but once you publish a paper, you’re staking your name and reputation on it. You also have to declare your affiliation, funding sources, and any conflicts of interest. Everyone understands why. A study claiming a link between vaccination and autism by someone who’s funded by anti-vaccination groups isn’t automatically invalid—but you’d be well-advised to take it with a truckload of salt.
The truth is, we can’t do without ad hominem reasoning, for the simple reason that human knowledge is deeply social. Almost everything we know comes from testimony; only an infinitesimal fraction do we verify ourselves. The rest is, literally, hearsay. No wonder we are so sensitive to the reputation and trustworthiness of our sources.
Fallacies galore
I could continue to dissect every other “fallacy” on the list, but you’d probably get bored. I’ll quickly run through a few more examples—then you can try it yourself.
The argument from ignorance (argumentum ad ignorantiam) is usually called a fallacy because of the classic dictum: “absence of evidence is not evidence of absence.” But in real life, it often is evidence, and people seem intuitively aware of this. For example: “Recovered memories about satanic cults sacrificing babies are probably the product of confabulation and suggestion, because we have never found any material traces of these atrocities.” That’s perfectly reasonable. The probabilistic premise is sound: if such cults had existed, we would have found baby corpses—or at least hundreds of missing infants.
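For readers who want the probabilistic premise spelled out: if widespread cults would almost certainly have left traces, then finding none should collapse our credence that they exist. A minimal sketch, once more with invented numbers:

```python
# Absence of evidence as evidence of absence. All numbers are invented.

def p_hypothesis_given_no_evidence(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | no evidence found) via Bayes' rule."""
    p_no_evidence = (prior * (1 - p_evidence_if_true)
                     + (1 - prior) * (1 - p_evidence_if_false))
    return prior * (1 - p_evidence_if_true) / p_no_evidence

# If large-scale satanic cults existed, we would almost certainly have found
# material traces by now (bodies, waves of missing infants).
print(p_hypothesis_given_no_evidence(prior=0.1,
                                     p_evidence_if_true=0.99,
                                     p_evidence_if_false=0.01))  # ~0.001
```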
The argument from authority (ad verecundiam) claims something is true because an expert said so. But as we’ve seen, almost all our knowledge is testimonial—based on authority. Rejecting arguments from authority entirely would lead to radical skepticism. Once again, context matters. Is the expert truly knowledgeable in this field? What’s their track record on similar claims? Do they have incentives to lie? Do others trust them? In the real world, these questions matter far more than any blanket rule about “ad verecundiam.”
The gambler’s fallacy occurs when you assume that if an event has happened less often than expected, it is somehow “due” next time around. At the roulette table, this belief will reliably lead you astray. But casinos are highly artificial environments designed to produce independent events. Out in the wild, things are different. Events cluster, streak, and correlate, and humans are remarkably good at detecting patterns. In natural settings, what looks like the gambler’s fallacy is often not fallacious at all.
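To see why the same hunch is foolish at the roulette table and sensible in the wild, here is a small simulation sketch (again, purely illustrative): in an independent process a streak tells you nothing about the next outcome, while in a persistent, weather-like process it genuinely does.

```python
import random

random.seed(0)

def p_streak_continues(outcomes, streak_len=3):
    """Among runs of `streak_len` identical outcomes, how often does the next
    outcome continue the run?"""
    hits = total = 0
    for i in range(streak_len, len(outcomes)):
        window = outcomes[i - streak_len:i]
        if all(x == window[0] for x in window):
            total += 1
            hits += outcomes[i] == window[0]
    return hits / total

# Roulette-like process: independent 50/50 outcomes.
iid = [random.random() < 0.5 for _ in range(100_000)]

# Weather-like process: each day tends to resemble the previous one.
persistent = [True]
for _ in range(100_000):
    stays_same = random.random() < 0.8
    persistent.append(persistent[-1] if stays_same else not persistent[-1])

print(p_streak_continues(iid))         # ~0.5: the streak carries no information
print(p_streak_continues(persistent))  # ~0.8: the streak genuinely predicts
```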
The fallacy of affirming the consequent is a staple of formal logic. From “If A, then B,” you cannot infer “B, therefore A.” The hackneyed textbook example goes like this: “If it rains, the street will be wet. The street is wet. Therefore, it rained.” Well, who knows? Perhaps a firetruck just hosed down your street? The problem is that one person’s “affirming the consequent” is another person’s inference to the best explanation. Anyone stepping outside onto a wet street will almost automatically conclude that it probably rained. That’s not a logical blunder; it’s ordinary causal reasoning. Without this kind of abductive inference, we would have no causal knowledge at all.
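If you want to see that difference in numbers, here is one last toy calculation (with frequencies I’ve made up): whether “the street is wet, so it probably rained” is a good bet depends entirely on how common rain is relative to the alternative explanations.

```python
# Wet street: rain versus a passing firetruck. Frequencies are invented, and the
# model assumes these are the only two (independent) ways the street gets wet.

def p_rain_given_wet(p_rain, p_firetruck):
    """P(it rained | the street is wet)."""
    p_wet = 1 - (1 - p_rain) * (1 - p_firetruck)  # wet by at least one cause
    return p_rain / p_wet

# Rainy climate, firetrucks rarely hose the street: the abduction is solid.
print(p_rain_given_wet(p_rain=0.3, p_firetruck=0.01))   # ~0.98

# Desert town next to an overzealous fire station: the same inference fails.
print(p_rain_given_wet(p_rain=0.01, p_firetruck=0.3))   # ~0.03
```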
A rare breed
Pure fallacies are a rare breed. You mostly spot them in logic textbooks or books about irrationality—not in real life. If you think you’ve caught one, chances are you missed something and the case isn’t so clear-cut. Maybe you exaggerated the force of the argument: turning a defeasible inference into a deductive one, glossing over its probabilistic assumptions, or stripping away the context so you could knock it down in the abstract. In other words, you may have built a straw man—another supposed “fallacy” that can’t be formally defined.
Humans love to pigeonhole things into neat little boxes, and the traditional taxonomy of fallacies—especially the ones with pretentious Latin names—is irresistible in that regard. Encounter an argument you don’t like? Just squeeze it into a pre-established category and blow the whistle: fallacy! Making causal inferences from a sequence of events? Post hoc ergo propter hoc! Relying on an expert for a point? Ad verecundiam! Challenging someone’s credibility? Ad hominem!
In my experience, fallacy theory is not just useless—it can be actively harmful. It encourages intellectual laziness, offering high-minded excuses to dismiss any argument you don’t like. It impoverishes debate by pretending that messy, real-world disagreements can be resolved with the blunt tools of formal logic. You end up like my students: slightly paranoid, dismissive, and, ironically, less critical than when you started.
Most of all, fallacy theory misleads us about the real sources of human irrationality. People do not believe crazy things because they keep tripping over invalid argument forms: most of us intuitively grasp the basic rules of logic and probability—we wouldn’t have survived otherwise. Human reason is a bag of tricks and heuristics, mostly accurate in the environments in which we evolved, but easily led astray in modern life. And it is strategic and self-serving, motivated to reach conclusions that serve our own interests.
Despite its flaws and foibles, though, human reasoning is far more sophisticated and subtle than the theory of “fallacies” suggests.
Maarten Boudry is a philosopher and independent scholar writing on human progress and human (ir)rationality. He is the author of The Betrayal of Enlightenment, forthcoming from Pitchstone Publishing.
A version of this article was originally published at Maarten Boudry’s Substack.