The ability of AI and generative AI to create highly convincing images and videos of real people can sometimes be quite amusing but is also extremely dangerous and harmful. At the more comical end of the scale were the deepfake images circulated last year of Pope Francis wearing a Balenciaga-style white puffer jacket and sporting a very bling crucifix. While some people were genuinely offended, most saw the joke.
At the other end of the spectrum were the fake pornographic images of Taylor Swift posted online at the beginning of this year. While Ms Swift was in a position to take immediate action to have the images taken down, the reports that followed of the experiences of other more vulnerable people were deeply disturbing. Teenage girls came forward to report on how their social media pictures had been manipulated to create fake nude images which were later used for attempted blackmail.
Closer to home, some fans of Daniel O’Donnell were duped into making donations to bogus charities following social media engagements with a fake but highly convincing version of the singer.
That was at the lower end of the financial scale. Earlier this year, an employee in the finance department of a major multinational corporation was tricked into making a transfer of $25 million to fraudsters who used deepfake technology to impersonate the company’s chief financial officer.
In a stark illustration of just how powerful the technology has become, the fraudsters were able to create a video conference call populated by deepfakes of colleagues who the unsuspecting worker knew and recognised.
It is little wonder, in those circumstances, that a key sticking point in last year’s Screen Actors Guild strike in America centred on payments for studios’ use of AI-generated avatars of real actors in future film productions. It is envisaged that the technology will soon reach the point where an actor can be hired for a day, filmed in various poses and performing certain actions, and AI will then use that data to create extraordinarily lifelike representations of the actor for insertion into film scenes, in much the same way as CGI works today.
“There is an ongoing arms race when it comes to deepfakes between those who can benefit from generating and publishing them and those who have an interest in avoiding them,” says Tim Morthorst, director of AI & Automation at EY Ireland. “While deepfakes have long been considered to be just fake video material, it is important to realise that this is now also spreading to voice and text-based media. In the early days deepfakes mostly damaged people’s reputations, but globally they are now used to impersonate people to commit crimes such as financial fraud. There have been examples, for instance, of people calling parents with imitations of their children’s voices to ask for money.”
And the scammers don’t need to be computer wizards. The AI looks after the coding and effectively brings mass production to online con trickery.
“New forms of AI and generative AI can generate highly realistic content such as text, images and so on and it can be very convincing,” says DCU professor of computing Alan Smeaton, who is also a member of the Government’s AI Advisory Council. “The technology is great at generating content from any kind of data it has been trained on, even DNA sequences. There might be some giveaways such as a person with six fingers but as the technology improves these errors will be eliminated and even automated detection will not be possible.”
The strategies employed by the scammers are also becoming more sophisticated, and they exploit the way social media algorithms work to target people with content they will be receptive to. “There will always be malicious bad actors who will come up with content that will keep us in our filter bubbles,” Smeaton notes. “All of the content that comes to us on social media is filtered to keep us happy. You will keep getting content that the algorithm knows you like. It’s not the fault of generative AI, it’s the fault of the algorithm.”
So, death metal fans are unlikely to be contacted by a fake Daniel O’Donnell asking for charitable donations. Instead, they will receive content designed to appeal to their particular tastes.
Ultimately, it will probably come down to our own uniquely human critical faculties to sort out the deepfakes from reality. If something sounds or looks too good to be true, it probably is.
“Be vigilant,” Smeaton advises. “Text scams and phone scams only started a few years ago. They are now part of society and part of what it is like to live in 2024. The same will apply to deepfakes. You need to have your wits about you.”