The inconvenient truth about artificial intelligence

Incurious scepticism is not an adequate response to the questions AI raises

A great many people who should be paying serious attention to artificial intelligence have been doing something else entirely. Photograph: Joel Saget/AFP via Getty Images

Journalism has always covered technology in cycles of hype, backlash and hype again. The AI cycle has produced something slightly different, though. The media’s coverage of artificial intelligence has split fairly cleanly in two. Business and technology journalists have tracked the hype curve. Cultural commentary has pushed in the opposite direction, deploying a vocabulary of scepticism designed to puncture the balloon.

And AI executives have been conducting their own theatre of hyperbole. The results have not been especially illuminating.

There is a scene near the beginning of Gideon Lewis-Kraus’s recent, lengthy New Yorker profile of the AI company Anthropic in which a mathematician on the company’s interpretability team describes what happens when new employees, drawn from all kinds of industries, spend their first two weeks in proximity to the company’s large language model, Claude. “After two weeks,” Joshua Batson says, “they’re like, ‘Oh shit, I had no idea.’”

That reaction of surprise, disorientation and a sudden sense that the ground has shifted is not confined to Anthropic’s induction programme. It is, or ought to be, the appropriate response to the accumulation of evidence now in front of anyone paying serious attention to artificial intelligence.

But a great many people who should be paying serious attention have been doing something else entirely: using AI as a canvas for pre-existing anxieties about capitalism, power, labour displacement, cultural homogenisation and environmental destruction. These anxieties are fully justified. But they have produced a critical discourse that is curiously uninformed about the technology it claims to be critiquing.

The phrase “stochastic parrot”, popularised by the linguist Emily Bender and her colleagues, captures a widely held belief: that these systems process statistical patterns in text rather than understanding it in any philosophically robust sense. Lewis-Kraus quotes Bender’s co-author, Alex Hanna, dismissing large language models as “mathy maths” and “a racist pile of linear algebra”. This satisfying demystification is starting to look less like rigorous analysis and more like incurious scepticism for scepticism’s sake.

The curmudgeonly camp has been protecting a concept it never properly examined. Terms like “thinking”, “understanding”, “intelligence” and “creativity” were never as well-defined as their defenders assumed.

We applied them to ourselves, and by loose extension to things that seemed sufficiently like us. The arrival of entities that do many things we associate with those words, but through processes entirely alien to biological cognition, has exposed a conceptual vagueness we simply hadn’t needed to confront before.

The interpretability researchers in Lewis-Kraus’s article are scrupulous about this. They don’t claim the models “really” think, but they suggest that perhaps we don’t have as firm a grip on the word “thinking” as we imagined.


Some of the implications are explored in Yascha Mounk’s recent Substack essay. Mounk, an author, podcaster and political scientist at Johns Hopkins University in the US, describes spending a single morning asking Claude to write a publishable academic paper in political theory, one of the disciplines most resistant to the idea that AI can do anything intellectually serious.

The result, a theoretically ambitious piece arguing that AI corporations represent the most complete realisation of what Tocqueville and Mill feared about concentrated control over the conditions of thought, was by Mounk’s own assessment good enough to appear in a serious journal with minor revisions. The meta-irony is heavy: a paper about AI’s power over how we think, produced by an AI, reaching a significant readership and contributing to the very discourse it analyses.

The time has come to move on from these party tricks – AI produces passable text, therefore AI is intelligent, therefore panic – and examine the specific nature of what was produced.

Political theory is not boilerplate corporate language. It requires the kind of synthetic, structurally layered argumentation that critics have long insisted is constitutively human.

Mounk is careful not to overstate his case. The paper has shortcomings. References need checking. It is not earth-shatteringly original. But “not earth-shatteringly original” describes the overwhelming majority of academic output, and that is Mounk’s point.


The whole institutional apparatus of humanities scholarship – the journals, the citation networks, the career ladders built on niche contributions – is what stands to be undermined. It is one thing to argue that AI cannot replicate human creativity at its highest. It is another to watch it replicate the median output of professional academia before breakfast.

As both pieces make clear in different ways, the irony is that the most interesting intellectual questions raised by AI are precisely the ones that humanities professors like Mounk and cultural commentators like Lewis-Kraus are supposed to address. What does it mean to have a self? Can something have functional emotions without having consciousness? What is the relationship between language and thought? What happens to democratic self-governance when the infrastructure of cognition is controlled by a handful of unaccountable corporations?

These are not engineering questions. They are philosophical, political, literary and psychological, the disciplines that have been most dismissive of the technology raising them.

The good news, if the phrase applies, is that this may be starting to change: we are beginning to see the sort of serious, philosophically informed engagement the subject deserves – neither credulous nor contemptuous, but genuinely curious about what cannot yet be known. The discourse is moving, and that is welcome. The technology is not hanging around waiting for the commentary to catch up.