
ChatGPT-5: Maybe Sam Altman should cool the jets on new AI iterations

OpenAI’s latest upgrade of its AI tool is effectively being tested by 700m users while having the potential for adverse real-world impact

OpenAI boss Sam Altman. There didn't need to be a rush to launch GPT-5 before they got the product right. Photograph: Michael Santiago/Getty Images

The 1970 film Colossus: The Forbin Project could have been a run-of-the-mill warning of technology betraying its masters. Instead, it told a tale that feels all too familiar amid the reaction to GPT-5.

In the film, the US government hands over full control of its nuclear arsenal to a supercomputer, the aforementioned Colossus, as it can calculate more quickly than any human and act more rationally when the stakes are potentially catastrophic.

Its lead designer is one Charles Forbin, a brilliant scientific mind convinced of his system’s ability to serve humanity.

That is, of course, until its definition of how best to serve humanity differs from that of its creators. Forbin believes he can outwit it to take back control but is mistaken as his best-laid plans are defeated by Colossus.

Enter GPT-5, the latest iteration of the artificial intelligence tool ChatGPT, which has caught the imagination of its 700 million weekly users, for good and ill.

At that scale, ChatGPT and its assorted iterations, including GPT-5, are effectively a public utility without any of the safeguards or, really, the competencies expected of one.

At a regulatory level, the European Union has acted faster than it has with most technologies, but even it is still playing catch-up. The rest of the world’s performance in this regard is largely weaker. That means basic tenets such as fairness, reliability and accountability really aren’t in place.

All of which makes sense when you consider that these 700 million users are effectively part of an ongoing live test. Mass adoption has comfortably outpaced any form of structured oversight.

That should give Sam Altman, the co-founder and face of OpenAI, the company behind the project, pause for thought. His product is iterating so quickly that his trust in the process ought to be tempered.

Altman’s own hype hasn’t helped matters. Describing GPT-5 as offering PhD-level insight at its launch was brash, and it proved problematic when early flaws in spelling and geography were caught. Altman has rowed back somewhat on that description, but he still believes in the vision.

All of these early flaws have, as with previous versions, been greeted with the same excuse: it’s just how AI works, and the system needs time to adapt.

That would be fine were its testing phase enclosed to some degree. Instead, it is used widely in schools, businesses and public services. Using ChatGPT as a sounding board for ideas has become normalised.

With a fully polished product that needs only minor tweaks, that would not matter much. But when GPT-5 clearly still has all kinds of issues to iron out, it can needlessly affect the way all of us work and live.

Use of generative AI has become normalised more quickly than any other technology of note in living memory. Personal computers became commercially viable in the 1970s but they didn’t hit this level of usage until this century. Smartphones took 15 years from widespread release to reach 700 million users, while even Facebook needed seven years.

It has taken ChatGPT just two years to match this level of public use. Cultural acceptance is happening more quickly than risk assessment can manage.

There didn’t seem to be a need to rush GPT-5. OpenAI may be among the most talked-about businesses in the world, but its existing ChatGPT models were already doing enough to keep the public enraptured. There was time to breathe and think before rolling out GPT-5 on a mass scale. Time to work out what type of generative AI it ought to be, to consider the potential for errors large and small.

In simple terms, there was time to get the product right.

That leaves the question of who is in charge. OpenAI isn’t the only player in town when it comes to generative AI, but it is the one capturing the public interest. The power and vision of this technology are concentrated in one company, which regulators have yet to fully comprehend.

Regulation doesn’t exist to slow innovation. As our water and electricity services, or the professional services we use at work, show, it is in place to provide clear guardrails. Once something reaches a utility level of use, there needs to be greater oversight to ensure the safe and coherent provision of these services.

When a tool can shape the opinions and actions of hundreds of millions of people, it’s best not to have it unleashed complete with a heap of glitches.

Governments, businesses and citizens want generative AI to work. Yes, there are opponents of the very principle, but there is clear support for a well-shaped tool, one that can manage the mundane and make life easier for the rest of us. GPT-5 is clearly not that, at least not in the version that was released.

I’m not worried that GPT-5 will be given the nuclear codes. What concerns me is that Altman will trumpet to world leaders a version he believes capable of handling something important to society before he knows what it will do. “Whoops” won’t cut it when that happens.