Have you stopped answering the phone to unknown callers? Do you no longer click links in texts? Or have you, at some point, failed to convince a website’s anti-spam filter that you are in fact human?
Perhaps you answered “yes” to all three, “no” to all three, or a mixture. My vanity won’t let me give up the first: a withheld number might, after all, be an invitation to appear on television. I’ll take my chances with the fraudsters.
Others have their own bespoke arrangements – calling back unknown numbers, for example – to combat fraud or impersonation.
All these habits have the same cause. It is getting harder and harder to prove someone is who they say they are online. Both businesses and households must take ever-greater steps to prove that they, and the people they are dealing with, are really “people” at all.
Advances in computing power, generative AI and machine learning give companies and the state greater speed in responding to attacks of one kind or another – but they also hand more powerful tools to spammers, fraudsters and bad actors. Our trust in technology – in the texts we receive, the attachments we open, the forms we fill in – is slowly withering as a result.
I haven’t yet seen a fake news clip or AI-generated video that is good enough to fool a keen observer. (My favourite tells are the ones trained on data so lousy with sexualised images that they are immediately ridiculous. I recently saw a “war correspondent” supposedly reporting for “CNN” in a plunging négligée.)
But two years ago, ChatGPT wrote only about as well as a student struggling to earn a high school diploma. Now it seems to write at something closer to an undergraduate level. It won’t be long before generative AI can produce video that is indistinguishable from reality even to the most sophisticated viewer.
This same ever-improving AI means that things researchers thought impractical a year ago are now possible. Government, in particular, can use it to get faster and better at making decisions and handling data.
But the technology will bring casualties in its wake as well, and one of them may be ecommerce as we know it.
For financial transactions to be both safe and practical online, it is essential to be able to verify who you are and who you are doing business with. As machines get smarter than humans – or at least smart enough to fake being human – verification becomes harder and the fraudster’s job easier. The various tests to “prove you’re a human” online are getting more difficult precisely because computers are getting smarter.
Fooling
The problem is that the more barriers you have to erect, the more people grow used to working around them – and the better fraudsters and other bad actors become at defeating them.
Consumers will trust online and digital transactions less, and businesses will behave in riskier ways as cybersecurity asks more and more of their employees.
The costs involved can be very large. Marks & Spencer is unusual in being remarkably open about the hit from the cyber attack it suffered – both financial, in the shape of a £300 million blow to profits, and logistical, in that trading is still affected. Other businesses and organisations, without customers or shareholders to mollify, often treat these breaches as a private embarrassment.
And even where the problem does hit the headlines, the consequences linger long after the coverage fades. The British Library has not fully recovered from the ransomware attack it suffered two years ago. Hackney Council, in east London, is still feeling the effects of a cyber attack half a decade ago.
How can the problem be solved? Some in cybersecurity fear that the long-term answer is, “It can’t”.
History teaches us that the only way knowledge can be unlearnt is through a societal collapse of a kind no one should wish to live through. So simply prohibiting the use of new, smarter machines is a non-starter. The same technological advances that let us improve productivity in the regular economy also make us more vulnerable to cyber attack, more prone to impersonation, and less able to distinguish fake images and video from the real world.
What should we do instead? As smart machines do more and more work in research, bureaucracy and design, one solution to the “verification problem” may be that anything that requires peer-to-peer checking increasingly returns to face-to-face encounters. The next wave of jobs may not be as cyber experts, but as bank tellers. – Copyright The Financial Times Limited 2025