Much has changed since 1986, when the Princeton philosopher Harry Frankfurt published an essay titled On Bulls**t in an obscure journal, Raritan. Yet the essay, later republished as a slim best-seller, remains unnervingly relevant.
Frankfurt’s brilliant insight was that bulls**t lies outside the realm of truth and lies. A liar cares about the truth and wishes to obscure it. A bullsh**ter is indifferent to whether his statements are true: “He just picks them out, or makes them up, to suit his purpose.” Typically for a 20th-century writer, Frankfurt described the bullsh**ter as “he” rather than “she” or “they”. But now that it’s 2023, we may have to refer to the bullsh**ter as “it”, because a new generation of chatbots is poised to generate bulls**t on an undreamt-of scale.
Consider what happened when David Smerdon, an economist at the University of Queensland, asked the leading chatbot ChatGPT: “What is the most cited economics paper of all time?” ChatGPT said that it was A Theory of Economic History by Douglass North and Robert Thomas, published in the Journal of Economic History in 1969 and cited more than 30,000 times since. It added that the article is “considered a classic in the field of economic history”. A good answer, in some ways. In the most important way, though, not a good answer at all, because the paper does not exist.
Why did ChatGPT invent this article? Smerdon speculates as follows: the most cited economics papers often have “theory” and “economic” in their titles; if an article starts “a theory of economic ...” then “ ... history” is a likely continuation. Douglass North, Nobel laureate, is a heavily cited economic historian, and he wrote a book with Robert Thomas. In other words, the citation is magnificently plausible. What ChatGPT deals in is not truth; it is plausibility.
And how could it be otherwise? ChatGPT doesn’t have a model of the world. Instead it has a model of the kinds of things that people tend to write. This explains why it sounds so astonishingly believable. It also explains why the chatbot can find it challenging to deliver true answers to some fairly straightforward questions.
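To see what a “model of the kinds of things that people tend to write” means in practice, here is a minimal sketch in Python. It is an illustration only, not ChatGPT’s actual code, which is not public; it assumes the small, openly available GPT-2 model from the Hugging Face transformers library as a stand-in, and asks it which words most plausibly follow Smerdon’s fragment “A Theory of Economic”:

```python
# A minimal sketch of "plausible continuation" ranking, assuming the small,
# openly available GPT-2 model as a stand-in for ChatGPT's far larger one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Score every candidate next token after the prompt.
inputs = tokenizer("A Theory of Economic", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Turn the scores at the final position into probabilities and
# print the five continuations the model finds most plausible.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {p.item():.3f}")
```

Nothing in that loop consults the world; the ranking reflects only which word sequences were common in the training text. A title fragment can score highly whether or not any such paper exists.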
It’s not just ChatGPT. Meta’s short-lived Galactica bot was infamous for inventing citations. And it’s not just economics papers. I recently heard from the author Julie Lythcott-Haims, newly elected to Palo Alto’s city council. ChatGPT wrote a story about her victory. “It got so much right and was well written,” she told me. But Lythcott-Haims is black, and ChatGPT gushed about how she was the first black woman to be elected to the city council. Perfectly plausible, completely untrue.
Gary Marcus, author of Rebooting AI, explained on Ezra Klein’s podcast: “Everything it produces sounds plausible because it’s all derived from things that humans have said. But it doesn’t always know the connections between the things that it’s putting together.” Which prompted Klein’s question: “What does it mean to drive the cost of bulls**t to zero?”
Experts disagree over how serious the confabulation problem is. ChatGPT has made remarkable progress in a very short space of time. Perhaps the next generation, in a year or two, will not suffer from the problem. Marcus thinks otherwise. He argues that the pseudo-facts won’t go away without a fundamental rethink of the way these artificial intelligence systems are built.
I’m not qualified to speculate on that question, but one thing is clear enough: there is plenty of demand for bulls**t in the world and, if it’s cheap enough, it will be supplied in enormous quantities. Think about how assiduously we now need to defend ourselves against spam, noise and empty virality. And think about how much harder it will be when the online world is filled with interesting text that nobody ever wrote, or fascinating photographs of people and places that do not exist.
Consider the famous “fake news” problem, which originally referred to a group of Macedonian teenagers who made up sensational stories for the clicks and thus the advertising revenue. Deception was not their goal; their goal was attention. The Macedonian teens and ChatGPT demonstrate the same point. It’s a lot easier to generate interesting stories if you’re unconstrained by respect for the truth.
I wrote about the bulls**t problem in early 2016, before the Brexit referendum and the election of Donald Trump. It was bad then; it’s worse now. After Trump was challenged on Fox News about retweeting some false claim, he replied, “Hey, Bill, Bill, am I gonna check every statistic?” ChatGPT might say the same.
If you care about being right, then yes, you should check. But if you care about being noticed or being admired or being believed, then truth is incidental. ChatGPT says a lot of true things, but it says them only as a byproduct of learning to seem believable.
Chatbots have made huge leaps forward in the past couple of years, but even the crude chatbots of the 20th century were perfectly capable of absorbing human attention. MGonz passed the Turing test in 1989 by firing a stream of insults at an unwitting human, who fired a stream of insults back. ELIZA, the most famous early chatbot, would fascinate humans by appearing to listen to their troubles. “Tell me more,” it would say. “Why do you feel that way?”
These simple chatbots did enough to drag the humans down to their conversational level. That should be a warning not to let the chatbots choose the rules of engagement. Harry Frankfurt cautioned that the bullsh**ter does not oppose the truth, but “pays no attention to it at all. By virtue of this, bulls**t is a greater enemy of the truth than lies are.” Be warned: when it comes to bulls**t, quantity has a quality of its own. – Copyright The Financial Times Limited 2023