GUEST POST: ChatGPT is Bullshit (Partly) Because People are Bullshitters
This is a guest post on the ChatGPT crisis by Jimmy Alfonso Licon, PhD.
Dr. Licon teaches Philosophy at Arizona State University. He earned his doctorate in philosophy from the University of Maryland in 2019, and he writes on issues in ethics, epistemology, and political economy. See his newsletter, Uncommon Wisdom.
“The bullshitter ignores these demands altogether. He does not reject the authority of the truth... He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.”
—Harry Frankfurt, On Bullshit (2005)
LLMs are not truth engines. They’re prediction engines. They model the statistical likelihood of a given string of text based on massive corpora of human language. And their outputs are startlingly fluent—so much so that many mistake fluency for understanding, and plausibility for truth.
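To make “prediction engine” concrete, here is a minimal toy sketch, an illustrative caricature rather than anything from the post or from any real model: a bigram model that completes a prompt by sampling whichever word most often followed the previous one in its training text. Nothing in the loop represents truth, only frequency.

```python
# Toy caricature of a "prediction engine": a bigram model that continues text
# by sampling whichever word most often followed the previous one in its corpus.
# Nothing here checks whether the output is true, only whether it is likely.
import random
from collections import Counter, defaultdict

corpus = "the model predicts the next word the model predicts text".split()

# Count, for each word, the words that follow it in the training text.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def complete(prompt_word, n_words=6):
    """Extend a one-word prompt with statistically probable continuations."""
    out = [prompt_word]
    for _ in range(n_words):
        counts = followers.get(out[-1])
        if not counts:  # dead end: the word never appears mid-corpus
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(complete("the"))  # fluent-looking, plausibility-ranked, truth-blind
```

Real LLMs replace word counts with neural networks trained on vastly larger corpora, but the objective is the same: produce the likeliest continuation, not the truest one.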
Recent philosophical work puts this in sharper relief. Hicks et al. (2024) argue that ChatGPT is not merely prone to hallucination or error. It systematically generates bullshit—truth-insensitive, plausible-sounding utterances that aim not at veracity but at surface coherence. They contend this is best understood not as lying, but as bullshitting in Frankfurt’s sense.
That claim is compelling. But there is another layer worth considering. ChatGPT doesn’t merely generate bullshit because of its architecture. It generates bullshit because it is trained on us. It predicts and completes strings of text based on data drawn from people—people with reputational concerns, tribal affiliations, self-serving biases, and incentives to fudge the truth when it suits them.
In that sense, ChatGPT is bullshit because people are bullshitters. This explanation, I argue, is both independent of and complementary to the design-based argument. Even if LLMs were retooled to better track truth, the distortions built into human discourse would still shape what they learn to say.
2. ChatGPT Is Bullshit
Frankfurt’s now-canonical analysis of bullshit distinguishes it from lying. The liar needs the truth: he must know it in order to evade it. The bullshitter, by contrast, is disengaged from the truth entirely. His goal is persuasion, performance, or face-saving, regardless of accuracy. ChatGPT fits this model remarkably well. It has no beliefs. It tracks no truth conditions. It aims only to complete prompts with statistically probable continuations. As Hicks et al. write, “the goal is to provide a normal-seeming response,” not a truthful one. If the truth happens to be “normal-seeming,” so much the better. But that is incidental.
Their argument is, at heart, a design argument. LLMs are structured to generate plausible continuations, not truthful reports. And since truth and plausibility diverge regularly in human discourse, especially online, LLMs inherit that divergence. So they are designed to produce persuasive bullshit that is often true. But design is not the whole story of why ChatGPT is bullshit. The training data matters too. And that data reflects not a neutral catalog of factual content but a species invested in self-presentation, motivated reasoning, and bullshit, because those habits help us gain the cooperation of others and thereby improve our chances of survival.
3. Why People Are Bullshitters
3.1 Reputational Incentives
We are a cooperative species. Our survival and flourishing depend on how others perceive us—on our reputations. But reputational incentives do not always track truth. They often track appearances: sincerity, virtue, competence. And so we learn, early and often, how to curate our image. That curation frequently involves bullshit.
Consider the lengths people go to avoid reputational damage. Research finds that people will endure substantial costs—legal, social, physical—to protect their moral standing. Children as young as five will forgo cheating if they believe their reputation is at stake. The impulse to self-protect through language begins early and runs deep.
When the truth is flattering, we share it. When it isn’t, we spin—psychologists call this social desirability bias. And that spin—truth-insensitive, audience-calibrated, socially palatable—is what Frankfurt called bullshit. We don’t always lie. Often, we say what sounds good and sidestep the rest.
3.2 Bullshit as Social Signal
There’s another angle here: signaling. Producing bullshit, it turns out, is cognitively demanding. It requires fluency, creativity, and situational awareness. And research finds that those who are better at generating satisfying bullshit are rated as more intelligent, and often are. This fits into a broader literature on signaling theory. Just as peacocks advertise genetic fitness through costly feathers, humans advertise intelligence through verbal performance. And bullshit—at least the convincing kind—is a costly signal. Not everyone can generate it on demand, especially under scrutiny.
In short, our epistemic practices are not always aimed at truth. They are often aimed at reputation. And that reputation is managed, in part, by deploying language that flatters, persuades, or confounds—regardless of whether it is true.
3.3 The Bullshit Market
The demand for bullshit does not end with production. It also fuels a market. Some people need bullshit to justify actions, to defend beliefs, or to maintain moral self-regard. Others are better at producing it. The result is a marketplace of rationalizations.
And this is where LLMs enter with full force. They drastically reduce the cost of bullshit. You no longer need verbal fluency or rhetorical skill. You just need a prompt. ChatGPT can generate an excuse, an argument, a love letter, or an apology in seconds. It supplies what people have always needed: language that performs well, sounds plausible, and protects reputations.
4. Using LLMs to Bullshit Better
The democratization of bullshit will have predictable effects. People will use LLMs to craft more persuasive resumes, more polished excuses, more palatable defenses. The same reputational games we’ve always played—now accelerated by predictive text.
There are risks, of course. If you're caught outsourcing your thinking or passing off LLM-generated content as your own, you risk damage to your cognitive and moral reputation. What was once a signal of competence becomes evidence of fakery. But that risk won’t eliminate the demand. People will use LLMs as rhetorical prosthetics—to sound smarter, warmer, more articulate than they are. And unless norms evolve quickly, much of that usage will go undetected.
This dynamic applies not only to interpersonal speech, but to intrapersonal thought. People will use LLMs to bolster their self-image, to generate affirming narratives, to avoid cognitive dissonance. They’ll use ChatGPT not only to bullshit others, but to bullshit themselves. Some already are. Reports of people forming emotional attachments to AI chatbots—preferring simulated empathy to the real stuff—suggest that we’re entering a new phase in the psychology of self-deception.
More speculative are cases where people might train LLMs to simulate themselves in digital dating markets, sending their proxy bots to interact with others’ proxies to gauge compatibility. But if those LLMs are trained on curated, self-flattering data, then the whole process becomes an echo chamber of second-order bullshit.
5. Conclusion
ChatGPT is bullshit. But the deeper point is that it is bullshit because we are. It reflects not just a flaw in model design, but a flaw in the epistemic habits of its creators. It mimics our discourse because it learns from us—and what it learns, too often, is spin, flattery, and reputational theater.
Even a more truth-sensitive LLM would remain hostage to the data it’s trained on. And that data reflects a human species skilled at rationalization, motivated cognition, and performance. As long as our incentives favor looking good over being right, we will produce bullshit. And so will the machines we train.
We often talk about aligning AI with human values. But perhaps the more urgent task is aligning our values with truth. Until we do, our machines will reflect our bullshit back at us, with increasing fluency—and decreasing friction.
Enjoyed this post? Read Professor Jimmy Alfonso Licon’s newsletter, Uncommon Wisdom, and his philosophy essays College Writing is Low Rent and Large Language Models Reduce Harm.
This guest post was brought to you by Toni Airaksinen.
As they say: garbage in, garbage out.