'An important educational opportunity is lost in a luddite insistence on clinging to old ways'
Amika Piplapure for Varsity

Should AI make coursework a thing of the past? My answer: a definitive no. The flaws in relying exclusively on handwritten exams are well-trodden ground: the cruel luck of not feeling your best on exam day, the problems created for students with learning difficulties, and, perhaps most of all, the fact that none of us have legible handwriting anymore. But I believe there is a stronger case to be made against this idea by questioning its very premise. In my view, AI is not only a non-threat to the integrity of coursework assessment, but a valuable tool that deserves to be embraced.

Following ChatGPT’s launch in 2022, many a disappointed student learned that simply asking it to write their essay for them doesn’t return particularly impressive results. Even simple questions of logic revealed it for the glorified word-association program it really is. Mathematician Kit Yates has amassed a veritable collection of examples showing ChatGPT’s dismal ability to handle maths and logic.

“Using generative AI resourcefully is a skill”

Other LLMs (Large Language Models) have fared little better. Google’s ‘AI Overviews’ have been mercilessly mocked for, among other things, recommending that users “eat one small rock per day,” a consequence of having included the satirical news site The Onion in its training data. Though these particular matrix-glitching errors have since been resolved, the fundamental truth they expose is inescapable. As much as we may instinctively anthropomorphise AI chatbots (I am personally guilty of this: I can’t bring myself to drop my pleases and thank yous when making requests to ChatGPT), these examples break the illusion that any kind of critical intelligence exists behind AI-generated answers.

ChatGPT cannot formulate an original thought. In fact, it cannot formulate an unoriginal thought. It cannot think at all. Nor does it have any concept of truth – readily inventing citations and quotations to string into its answers like a desperate student who hasn’t done any revision. Any student using an LLM to generate large sections of their coursework cannot aspire to a remotely high grade, not on grounds of plagiarism but simply of quality. To the question of “how should we respond to the threat AI poses to the coursework system?” I say: “Threat? What threat?”

But LLMs are capable of far more than mediocre essay writing. A well-written prompt can return a straightforward explanation of a complex concept, a list of recommended reading in a niche field, or a summary of a particularly dense section of academic writing. Using AI to correct grammatical errors and tidy up style is an especially vital aid for students with learning difficulties and ESL speakers. Using generative AI resourcefully is a skill, and one that will find use in all careers in the future.

“Throwing out coursework and other forms of non-exam assessment for fear of AI is using a jackhammer to swat a fly”

Indeed, the future is now. AI is already widely used to write emails, references, and cover letters. Universities don’t exist in a vacuum: an important educational opportunity is lost in a luddite insistence on clinging to old ways whilst the world around us moves forward. If Student A lazily uses AI to write large portions of their coursework for them, it will be of poor quality anyway. But Student B, whose coursework bears discernible traces of ChatGPT’s ghostly hand in certain paragraphs, whose grammar has been cleaned up, or whose bibliography contains citations that Google could not have reached – well, why penalise them at all? They have simply made intelligent use of the tools of tomorrow.

And as for a hypothetical Student C, who makes use of no AI tools whatsoever, they are a gravely endangered species. A report by the Higher Education Policy Institute, published at the start of this year, found that 92% of UK university students make some use of AI. Skynet is here to stay, and that doesn’t have to be a bad thing.


Cambridge is uniquely well-placed to handle the AI revolution due to its focus on in-person supervisions. Even an exceptionally convincing piece of AI plagiarism will quickly be found out when a student cannot prove their knowledge or explain the roots of their argument face-to-face. And if a student becomes sufficiently fluent in explaining the argument and content of their writing that their supervisor cannot tell – well, isn’t that the whole point?

A combination of in-person exams and coursework assignments is the optimal approach to assessment. It offers students a way to prove their knowledge across different skill sets and gives everyone a fair chance. Throwing out coursework and other forms of non-exam assessment for fear of AI is using a jackhammer to swat a fly, and it leaves students unprepared to harness the tools of the 21st century.

Want to share your thoughts on this article? Send a letter to letters@varsity.co.uk.