ChatGPT has pushed artificial intelligence into the public consciousness. Society is at a crossroads regarding AI’s role in our future, and the decisions we make now will determine whether we avoid the worst outcomes.
ChatGPT poses an existential danger, though not of the Terminator type. The threat is significant enough to demand regulation to ensure we use such powerful capabilities for societal benefit. Several efforts are under way to regulate this type of AI, but governments must act now: the window for effective regulation is relatively narrow.
Regulation will always lag behind innovation, especially in the tech sector. At present, there is no EU-wide regulation of AI-based technology across a wide range of products, such as self-driving cars, medical technology and surveillance systems. It may be early 2025 before the rules being debated by the European Parliament come into force. That is far too late for products based on large language models such as ChatGPT. The Italian privacy regulator recently banned ChatGPT over suspected breaches of GDPR, and Spain and France have raised similar concerns. But more needs to be done.
The developers of these technologies, including OpenAI, are aware of only some of the possible adverse outcomes. OpenAI hired a team of leading academics for a six-month period to assess the negative impacts GPT-4 might have. Among other findings, the team discovered that it could identify where to source the ingredients for chemical weapons, write hate speech and locate unlicensed guns for sale online. Based on this evaluation, OpenAI amended GPT-4 before release. But this is just one limited set of problems; a huge range of as-yet-undiscovered negative consequences clearly remains.
Is ChatGPT inherently dangerous? Let’s apply a literary notion of robotics to the question. Isaac Asimov, the science fiction author, devised a set of rules to be hard-wired into the software of robots, including a “First Law”: a robot may not injure a human being or, through inaction, allow a human being to come to harm. Does ChatGPT abide by this law?
Currently, it has no real power to harm a human directly. Nor has it any notion of right or wrong, or the ability to make moral decisions. It is a generative AI system trained on unlabelled data. It extrapolates from that data to generate responses to queries, extending beyond its training to produce plausible, but not necessarily accurate, answers.
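To see why “plausible but not necessarily accurate” is baked into this approach, here is a minimal sketch of generative text modelling in Python – a toy bigram model standing in for GPT-4’s billions of parameters, with an invented three-sentence corpus (all text here is illustrative only):

    import random
    from collections import defaultdict

    # Unlabelled training data: raw text, with no notion of truth attached.
    corpus = ("the regulator banned the system . "
              "the system answered the query . "
              "the query confused the regulator .").split()

    # Record which word follows which in the training text.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, length=8):
        words = [start]
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:
                break
            # Pick a statistically plausible next word; plausibility,
            # not accuracy, is the only criterion the model has.
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))  # e.g. "the system banned the regulator ..."

Even at this tiny scale, the output reads like the training text while asserting things the training text never said; scaling the same principle up adds fluency, not a truth check.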
ChatGPT is a data-driven system: it is built by synthesising data, and it is only as powerful as the data it is trained on. Powerful and accurate data-driven AI systems require well-defined tasks within constrained environments, using carefully chosen data. For example, in some areas of cancer diagnosis, data-driven AI systems outperform human experts and offer substantial societal benefit. By training such a system on a constrained task (detecting cancer in MRI images) with carefully chosen data – in this case, MRI images with and without cancer present – treatment outcomes can be improved.
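By way of contrast with ChatGPT, here is a minimal sketch of that constrained, data-driven setting in Python, using scikit-learn with synthetic feature vectors standing in for measurements extracted from real MRI scans (the data is generated, not clinical):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # One well-defined task: is cancer present in this scan or not?
    # Synthetic features stand in for real MRI-derived measurements.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Because the task is narrow and the data labelled, performance can be
    # measured against ground truth -- impossible for an open-ended chatbot.
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

The point is not the particular classifier but the shape of the problem: one question, curated data, and an objective score.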
ChatGPT satisfies none of these requirements: it is trained on an undefined task, within a totally unconstrained environment, on arbitrary data scraped from the web. Without further modification, it will remain a weak system of limited effectiveness.
Users can ask it any form of query, rather than a class of questions with well-defined answers – for example, “List all Thai restaurants in Galway”. It is already clear that ChatGPT makes errors, and that it can and does fabricate nonsense answers.
Critical to evaluating the danger of ChatGPT is that the system can interact only via text or voice; there is no inherent set of actions that a ChatGPT-based system may take. This distinguishes it from autonomous systems that act in the world, such as self-driving cars, which are trained using reinforcement learning: given input from the car’s sensors, the vehicle learns to act – for example, to drive straight or to avoid a pedestrian.
ChatGPT, by contrast, is trained using language-based deep-learning algorithms, with no actions involved. It has no notion of acting in the world.
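The difference shows up even in a toy reinforcement-learning sketch, where the thing being learned is a mapping from states to actions in the world. Everything below – the states, actions and rewards – is invented for illustration; a real driving system is vastly more complex:

    import random

    # A toy agent on a one-dimensional road learns, by trial and error,
    # to move right towards a goal at state 4.
    states, actions = range(5), ["left", "right"]
    Q = {(s, a): 0.0 for s in states for a in actions}

    for _ in range(300):                    # training episodes
        s = 0
        while s != 4:
            a = random.choice(actions)      # explore by acting at random
            s2 = min(s + 1, 4) if a == "right" else max(s - 1, 0)
            r = 1.0 if s2 == 4 else 0.0     # reward only at the goal
            # Q-learning update: immediate reward plus discounted future value.
            Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in actions)
                                - Q[(s, a)])
            s = s2

    # The result is a policy: for each state, an action to take in the world.
    print({s: max(actions, key=lambda a: Q[(s, a)]) for s in states})

ChatGPT’s training has no analogue of the action or the reward for acting: it only ever predicts the next word.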
Despite this, it is a tool that needs to be regulated. Contrast the virtually unregulated AI and software industry with other critical systems: the autopilot software flying commercial aircraft is highly regulated, and it is explainable, because the underlying mathematics – like the software implementing it – can be clearly demonstrated.
But ChatGPT is based on GPT-4, and no one – not even its developers – can fully understand this system or explain why it generates the responses it does. This lack of explainability is profoundly worrying, especially if ChatGPT is used for crucial societal decision-making.
The network on which ChatGPT is based is only as good as the data on which it is trained, and ChatGPT is trained on unregulated data from the internet. How is an AI supposed to acquire a notion of right and wrong, or the ability to make moral decisions, without explicit guidance? The training process is entirely unregulated, so the outcomes are unpredictable and can be dangerous. In addition, the data used may breach privacy laws, leading to decisions such as that of the Italian privacy regulator.
So how might ChatGPT, in its current incarnation, cause harm? Take its use as a search engine. Because we don’t know how ChatGPT creates its responses, there is no guarantee that its answers are accurate, or that they are not drawn from data that is abusive or hateful – unlike the Google search engine, which is built on the PageRank algorithm, whose mathematical foundations are well understood. The prospect of replacing search engines whose behaviour we understand with engines whose behaviour we do not is troubling.
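The contrast can be made concrete. The core of PageRank fits in a few transparent lines – power iteration on a link graph – whereas nothing comparable can be written down for GPT-4. The four pages and links below are invented for illustration:

    # Power iteration on a tiny, hand-made web of four pages.
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    damping = 0.85

    for _ in range(50):  # iterate until the ranks stabilise
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                # Each page shares its rank equally among its outgoing links.
                new[q] += damping * rank[p] / len(outs)
        rank = new

    # Every number is reproducible from the formula -- nothing is hidden.
    print({p: round(r, 3) for p, r in rank.items()})

Every step of that computation can be audited; no one can offer the same for why GPT-4 produced a given sentence.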
The entire AI software industry needs better regulation. We already know the damage that can flow from platforms such as Meta’s: the harm Instagram has done to teenage girls, and Cambridge Analytica’s exploitation of Facebook data. Deepfake technology poses further problems.
The window for regulation is narrow. Governments must act before it shuts completely.
Prof Gregory Provan is a professor of computer and information technology at University College Cork and leads a Lero research spoke on blended autonomous vehicles