People keep trying to make AI more human. Why?

For all the technology and computing power behind it, it seems the one thing AI lacks is a good dose of cop-on

As of the time of writing, Grok’s app, website and the browser version of X still allow non-paying users to ‘nudify’ images freely. Photograph: Leon Neal/Getty Images

Is there a week when AI isn’t dominating the headlines? In the past few days, we have had big deals and big fails. The big deals – OpenAI moving into audio, Google potentially doing a deal with Apple, and Meta buying Manus – were far less attention-grabbing than the controversies.

And it is probably no great shock that it was Grok, Elon Musk's "rebellious" chatbot, that was at the centre of it all. Let's recap: in recent weeks, social media platform X was hit with a deluge of requests to its built-in chatbot Grok to edit photos to put women – and a few men – in bikinis. The images were then shared on the platform, sometimes altered again and reshared, in a sort of non-consensual sexualised imagery doom loop.

But that wasn’t the worst of it. Some of the images generated using the @grok handle on X featured children or young teenagers.

Unsurprisingly, this caused widespread outrage, along with a general discomfort that the very technology pitched as the future has taken such a dark turn. This is the same Grok, by the way, that the US is apparently planning to integrate into its defence systems.

At an Oireachtas media committee hearing, Garda representatives said the force was currently investigating 200 reports involving child sexual abuse material, or material indicative of child sexual abuse.

With threats of investigations, legal repercussions and potential bans, what did X do? It issued a statement promising that anyone using or prompting Grok to make illegal content would face the same consequences as if they had uploaded illegal content themselves. When it became obvious that wasn't enough, X initially said it would restrict image creation on the platform to paid users. In recent days, it reportedly cut that off too, with Grok ignoring prompts to create the images.

Crucially, it did not say it would remove Grok's ability to create the images at all. As of the time of writing, Grok's app, website and the Grok tab in the browser version of X still allow non-paying users to edit images, which can then be shared on platforms that may or may not monitor for this kind of content. There is an unspecified daily limit on how many images a free account can create, and the easy route to distribution has gone, but the underlying ability to exploit the technology to create this content remains.

The entire incident is a wake-up call about the capabilities of a technology that is increasingly being integrated into every area of our lives.

But can anyone say they are really surprised by any of this? They shouldn't be. The giant red alert has been flashing for months, if not longer.


This isn’t the first time deepfakes have been weaponised against women. In 2024, an investigation by Channel 4 found doctored images of almost 4,000 famous people – women actors, TV figures, musicians and even YouTube stars – on a deepfake website. AI had been used to superimpose their faces on to explicit material.

It isn't even the first time people have raised the alarm about the proliferation of "nudify" apps. In Australia, there has been a clampdown on such services after it was discovered they were being used to generate deepfake explicit images of schoolchildren. Internet safety charities and organisations have been warning of their dangers for some time.

Meanwhile, the tech industry is pushing to make AI seem more human so the real-life humans don’t reject it and, by extension, the billions of dollars that have been pumped into developing the technology.

You would have to wonder why anyone would bother. Let's take, for example, Microsoft's early experiment with chatbots. The tech giant released the chatbot Tay on Twitter in 2016. The idea was that the bot, modelled on a 19-year-old American girl, would learn from its conversations with other users and interact with other accounts accordingly.

Within 16 hours, the company was forced to shut it down. What Tay "learned" from other Twitter users was how to spew replies that offended almost every section of society. That was the human element at work, in all its glory.


Coming back to the present situation, human nature is also behind the deluge of non-consensual sexualised AI-generated images. The technology may have enabled the creation of the images, and it certainly should not be let off the hook, but it was largely people who sent the prompts asking the chatbot to digitally undress an image. And it was people who, having created a technology that could be exploited in such a manner, decided the best solution was not to prevent it from happening but to limit it to paying users.

People are messy and do stupid things. People make mistakes. They are complex and hard to understand at times. They can be overconfident and unwilling to admit when they don’t know the answer to something. Is any of this sounding familiar?

For all the technology and computing power behind it, it seems the one thing AI struggles with is boundaries, and the one thing it lacks is a good dose of cop-on. Maybe that makes AI more human-like than we thought.