Upton Sinclair set out to improve the working conditions in American meatpacking plants at the start of the 20th century. He went undercover for seven weeks in one such plant and used the experience as the basis for his novel The Jungle, published in 1906.
Sinclair got what he wanted, but through a different kind of reaction. Once readers became aware of the conditions in which the food they were eating was being prepared, the response was swift. It largely led to the changes Sinclair sought, though not for the cause he had been championing to begin with.
Since the phenomenon of generative artificial intelligence (AI) took off around two years ago, regulators, activists and, often, columnists have stressed the need for regulation. A unifying theme among them has been the need for AI to be accurate and to handle data responsibly.
Whether the aim is to protect individual privacy, to deliver products that benefit the citizenry, or to ensure a general sense of fairness and accuracy for businesses and consumers, clear oversight of generative AI has been at the core of that push.
That roused some sections of the public, but recent events may have an impact similar to that of The Jungle. It’s one thing to be told what might happen if AI isn’t regulated well; it’s quite another to see its impact directly, especially when it comes to children.
OpenAI, the company behind ChatGPT, is facing a lawsuit from the parents of Adam Raine. The 16-year-old from California died by suicide in April, and his parents argue that the chat logs between him and ChatGPT show the service validated his suicidal thoughts.
While OpenAI had yet, at the time of publication, to formally respond to the suit, it issued a notice last month following the filing, saying its services will now direct people to actual, human-led resources for help rather than engaging with them on such topics.
It’s not just ChatGPT. Earlier this year Erin Egan, Meta’s chief privacy officer, told this publication that Europe risked becoming the “museum of the world” because of shifting regulatory requirements.
Speaking with Ciara O’Brien, Egan said: “We support protecting consumers through legal means. We support legal frameworks. But in the EU it’s such a challenge. The regulatory landscape has evolved, the puck keeps moving. We have the GDPR, it’s been interpreted in an evolving way; we now have the DMA [Digital Markets Act] that’s coming into force.
“We introduced a subscription model which the highest court of the land in Europe said was a legal model. It’s pay versus consent; it’s a model relied on by other players in Europe. And now we’re being told we have to do something else. This makes it difficult to launch innovative products in Europe and it hurts people and consumers.”
This all sounds very interesting, except that Egan was describing how a system of checks and balances works and complaining about it.
We didn’t have to wait long for Egan’s own employer to prove why the regulatory system has to keep evolving and adapting.
Earlier this month, Meta announced it would introduce more guardrails to its AI chatbots to stop them talking to teens about topics that include suicide, self-harm and eating disorders. Once again there was a preceding event that appeared to prompt the action.
Josh Hawley, a US senator, triggered an investigation into Meta after Reuters discovered an internal document that appeared to show the company’s rules permitted its chatbots to engage children in romantic or sensual conversations, as well as to provide false information.
If Egan doesn’t want the puck to keep moving, then she needs to look at the job her company and others in the AI space are doing at self-regulation.
At some point while reading this, I truly hope your brain went “why is AI talking to teenagers about suicide?” If so, you are a more rational being than those who write the code and make the rules around how AI operates.
This is why regulation and oversight are not only needed but must be agile enough to adapt. The sheer volume of potential problems makes it all too easy for real-world harm to occur.
No robot should be talking to a teenager about suicidal ideation, and no robot should ever have been allowed to. The reason the world can’t trust AI companies to regulate themselves is that it never has been, and never will be, in their best interest to do so.
They’re not alone; it’s a pretty standard issue across commerce. But big tech companies act like special flowers that must remain untouched. The sad truth is that if they don’t want the justifiably alarmed reaction most people have to these issues, they have to let the people they think are spoiling their fun do their jobs.