Picture yourself sitting at a coffee shop. At the table next to you are two friends chatting. One of them is explaining that she wants to buy a dog but cannot stand the prospect of dealing with white fur everywhere. Her friend responds simply: “Why not get a dog without white fur?”
“Don’t be ridiculous,” comes the reply. “All dogs have white fur.”
The friend hears this and is puzzled. “What are you talking about? Dogs come in all sorts of breeds, colors and patterns. You can have white, brown, black, puffy, spotted…”
“No, that’s silly,” the first insists. “All true dogs must have white fur. And I can prove it. The next dog we see will be white.”
Just then a stranger happens to stroll by the window with a big white dog in tow.
“There! A dog! And it is white! You admit it’s a dog, don’t you?”
“Yes, of course but–”
“Then why are you arguing with me? You’ve just seen with your own eyes that the creature outside has white fur and you admit it’s a dog. You clearly agree with me. Why are you making such a huge fuss? You are getting hung up on something that does not matter.”
If you or I were sitting nearby, we’d probably find the exchange maddening. Indeed, the whole scenario seems wildly unrealistic. But if we study it closely, we realize this is not merely a case of confirmation bias but a structural flaw in the logic itself. It is the mistake of assuming that because a predicted result occurs, the theory behind it must therefore be true. If we had read our Aristotle, we would have a name for this.
In formal logic, this is a classic case of affirming the consequent. The structure goes something like this:
If P, then Q.
Q is true.
Therefore, P is true.
When we plug in “all dogs have white fur” for P and “the next dog we see will be white” for Q, the flaw becomes obvious. It is the error of using the very phenomenon we expect to see as proof of the belief that produced it. It sounds silly, yet, if we’re honest, the same pattern often creeps unnoticed into our conversations on matters far more serious.
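To make the contrast explicit, it helps to set the fallacy beside its valid cousin, modus ponens. Here is a minimal sketch in standard notation; the substitution of the dog example for P and Q is offered purely as illustration:

\[
\text{Modus ponens (valid):} \quad P \to Q,\; P \;\therefore\; Q
\]
\[
\text{Affirming the consequent (invalid):} \quad P \to Q,\; Q \;\therefore\; P
\]

Let P be “all dogs have white fur” and Q be “the next dog we see is white.” Q can easily be true while P is false: one white dog strolling past the window says nothing about all the brown and spotted dogs that never walked by. The conditional runs forward from P to Q; nothing licenses us to travel back along it.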
To my mind, we encounter it most frequently in two forms: conspiracy theories and bulverism.
One of the great appeals of conspiratorial thinking is the endless speculation it invites. We’re handed a jumble of “facts” and coincidences and then invited to weave them into one grand, all-explaining narrative. But this kind of reasoning easily collapses into affirming the consequent (among other pitfalls). This is why conversations with conspiracists often feel like they go in circles: they think they already have proof. But their “proof” is merely that the predicted thing happened. Even clear opposing evidence is creatively absorbed into the grand theory (“That’s what they want you to think!”).
Now, sometimes a conspiracy theory actually does turn out to be correct. And when that happens, it can feel like a vindication of the conspiratorial mindset. But this is often no different, formally speaking, from occasionally spotting a white dog. White dogs certainly exist, and a particular observation may be true, but it does not necessarily validate the hypothesis that predicted that phenomenon.
The trouble lies not in the truth of the conclusion but in the manner by which it is reached. When the outcome is taken as proof of the very theory it was meant to test, the reasoner begins chasing his own tail. Yet time and again we find ourselves excusing such faulty logic simply because it occasionally delivers an answer we agree with. Permitting these intellectual shortcuts trains us to think poorly, exposing our minds to all manner of nonsense and danger.
This trap extends beyond tinfoil-hattery. It shows up in supposedly polite conversation as well, in a variant of what C.S. Lewis called bulverism. Bulverism works by this rule: instead of engaging with the substance of an argument, we assume in advance that the speaker is wrong and immediately focus on explaining why they would hold that belief.
Lewis illustrates it this way. Suppose we encounter a man who insists he has a million dollars in his bank account. We may be tempted to think him mad or delusional, but first we should check the sums ourselves. If he turns out to be correct, any attempt at psychoanalyzing why he believed he had a million dollars is not only premature but entirely irrelevant. This is why Lewis provides the simple rule: “You must show that a man is wrong before you start explaining why he is wrong.”
Of course, a person might genuinely be delusional, self-serving, tyrannical, or morally corrupt. That may even be the hypothesis we’re trying to test. But when we assume the conclusion from the start, we distort reality by interpreting every action as evidence confirming our bias, falling into the same cycle of flawed reasoning. This often manifests as: “If you’re an X-ist, you’d support Y policy. You support Y policy, therefore...” Here as well, the “evidence” is merely the predicted outcome, and support for an idea is taken as proof that it must stem from prejudice, envy, virtue-signalling, or other ulterior motives (see also: Reductio ad Hitlerum).
In everyday talk, speaking with a bulverist can actually be more draining than arguing with a conspiracy dabbler. This is because bulverism often presents itself under the guise of empathy and moral high ground: “I am only trying to understand where you are coming from.” That, of course, is sound and polite advice. But sometimes the courteous statement hides a prior judgment.
To the bulverist, “where you’re coming from” is already the position of the villain, so naturally, they may find your behavior “makes perfect sense” and pat themselves on the back that they have you figured out. But once that psychological profile is assigned, every word, every gesture, every attempt at clarification is absorbed into the conspiracy they themselves have constructed.
This may be what is meant by the counsel to “assume the best” or “believe all things.” It is not a license to excuse wrongdoing, nor to find something admirable in the reprehensible, but a guide for sound reasoning. Our task is to test our hypotheses rigorously, seeking out counterexamples to see whether they hold up. A practical rule of thumb for resisting bulverism is to pay special attention to those who defy expectations: to look for those who do not fit the psychological profile we imagine (“You are so ordinary, and yet seem to be collaborating with those who oppose you; why are you on the wrong side?”) and to judge our theory by the arguments and evidence they present.
Affirming the consequent occurs all around us. We may be unlikely to overhear the absurd conversation about dogs at a coffee shop, of course. Yet if we keep our ears and minds open, we will encounter the same error in other forms: in conversations with friends about the news, in podcasts, or even in our own psychological assessments of “the other side.”
It is all too easy to fall into this habit of poor reasoning, to wander in a circle with no clear way out. And I think a sure sign that we are off on the wrong track is when a theory or character analysis (our own or another’s) becomes completely immune to disproof or correction, rendering it unfalsifiable. Another sign, no less telling, is that those who live by such methods often make for rather tedious company.
In any case, we might still profit from consulting someone like Aristotle who, even after all this time, continues to instruct us in the disciplines of sound thinking and good friendship.
Daniel Goodman is the president of the Libertas Society. His work has been featured in Plough, Ad Fontes, and the London Lyceum. He writes from Louisville, KY.