Academic staff at Trinity College Dublin have been issued with advice on how to minimise the threat of cheating in assessments in light of a new artificial intelligence (AI) tool that can produce essays within seconds.
ChatGPT, released in November by the artificial intelligence lab OpenAI, generates accurate and nuanced text in response to short prompts. Videos are circulating online with millions of views that show students using it to write assignments.
On Friday all academic staff at Trinity were provided with suggestions on how to minimise the impact of ChatGPT on assessments during the new term, including a focus on in-class tests and oral presentations.
Staff have been asked to consider reviewing assessment formats, such as essays, to gauge the risk that AI tools could be used to complete them.
They have been asked to make explicit in their teaching the value of students writing assignments themselves, and to require disclosure where ChatGPT or similar tools have been used in support of an assignment.
Academics have also been asked to consider changing modes of assessment during the second semester, including opting for invigilated or in-person assessment. This could include more presentations, in-class tests or a random sample of additional vivas, oral exams in which students answer questions in speech rather than in writing.
The suggestions are contained in an email to staff from David Shepherd, TCD’s senior lecturer and dean of undergraduate studies, Martine Smith, dean of graduate studies, and Pauline Rooney, head of academic practice.
ChatGPT, whose name derives from “generative pre-trained transformer”, is part of a new wave of artificial intelligence. It is the first time such a powerful tool has been made available to the general public through a free and easy-to-use web interface.
An OpenAI representative has said the lab recognised the tool could be used to mislead people and was developing technology to help people identify text generated by ChatGPT.
Quality and Qualifications Ireland (QQI), the watchdog for standards in Irish higher education, said many higher education institutions had initiated full reviews of their policies on assessment and academic integrity.
“It is a matter for institutions to take time to explore the impact of these tools on the system and understand how they may harness new technological tools such as ChatGPT, while balancing out any potential risks to academic integrity,” a QQI spokeswoman said.
The National Academic Integrity Network, a group of Irish academics established by QQI, met last month to discuss ways to adapt assessments to minimise the threat of cheating, and to provide guidance for students about the risks and ethics of these tools.
Academics are hopeful that detectors may soon be available to root out cheating.
Turnitin, a plagiarism detection service widely used by colleges, said it would incorporate more features for identifying AI, including ChatGPT, later this year.
While some education institutions abroad have banned the technology, a QQI spokeswoman said Irish higher education institutions are free to draw up their own policies on how AI is used on campus.
“Artificial intelligence can be used as an educational tool and students will need to understand how to use AI technology legitimately. It is important that institutions clearly communicate to their students under what circumstances the use of artificial intelligence and other tools will be considered a threat to academic integrity,” she said.
She said students outsourcing their work to an AI system was just as problematic as outsourcing it to a person providing a contract cheating service or to a relative.
“At this stage, we understand that AI systems still have some limitations – for example, they can be weak on referencing. Understanding these weaknesses may provide institutions with a way of designing their assessments to combat the risk of cheating using artificial intelligence. Artificial intelligence may also lead to the prioritisation of higher-order learning that cannot be automated,” she said.
While flaws with the technology have been flagged, OpenAI is expected to soon release another tool, GPT-4, which it says will be better at generating text than previous versions. Google has built its own chatbot, LaMDA, and several other start-up companies are working on similar forms of generative AI tools.