Cambridge recorded its first cases of academic misconduct involving artificial intelligence (AI) in 2024, Freedom of Information requests reveal.

Between 25 November 2023 and 24 November 2024, examiners referred 49 cases of exam cheating to the Office of Student Conduct, Complaints and Appeals (OSCCA), the university’s central disciplinary body. Of these, three were linked to AI – the first time such cases have been formally recorded.

The data also shows an increase in overall academic misconduct cases in recent years. Between 2019 and 2023, the number of upheld cases ranged from four to 19 annually. However, the figure jumped to 33 in 2024, following changes to the Student Disciplinary Procedure in October 2023.

Under the revised rules, all academic misconduct cases must be reported to OSCCA, including those resolved within departments. Previously, only cases requiring formal investigation were reported centrally; the broader reporting requirement has contributed to the rise in recorded incidents.

These new cases of AI-related cheating in assessments come amid growing concerns about the use of generative AI models such as ChatGPT in education. Last year, the Human, Social and Political Sciences (HSPS) faculty announced handwritten exams would replace online assessments for first and second-year Sociology and Anthropology students, citing a rise in AI use in exams.

Dr Andrew Sanchez, Chair of the HSPS Tripos Management Committee, commented: “This decision responds to concerns around the use of AI in online examinations and follows discussion within the HSPS faculty and Tripos Management Committee last academic year. Throughout this process, the feedback of student members of the Tripos Management Committee has been sought, and those members have been briefed on any changes.”

A 2023 Varsity survey found that nearly half of Cambridge students had used AI for university work, with almost a fifth using it for assessed tasks like coursework. The university prohibits the use of AI in assessed work, classifying it as academic misconduct, but guidance for non-assessed work varies by department.

In March 2024, the HSPS faculty issued an open letter urging students not to use generative AI, warning it could “rob you of the opportunity to learn.” They emphasised that presenting AI-generated text as one’s own would constitute academic misconduct.

English students were told in Lent 2023 that AI could assist with tasks such as “sketching a bibliography” or “early stages of the research process,” provided it was done under supervisor guidance. Some first-year engineering students were also advised they could use ChatGPT to structure coursework, so long as they disclosed its use and included the prompts used.

In February 2023, before the University’s policy on AI was clarified, Cambridge’s pro-vice-chancellor for education, Bhaskar Vira, told Varsity that a ChatGPT ban was not “sensible” because “we have to recognise that this is a new tool that is available.”

That same year, Dr Vaughan Connolly and Dr Steve Watson from the Faculty of Education hosted a Q&A titled “ChatGPT (we need to talk)” on the Cambridge website. Watson warned that while “universities and schools must protect academic integrity,” over-regulation risks making institutions “unresponsive to change” as AI is widely adopted.

A university spokesperson said: “The University has strict guidelines on student conduct and academic integrity. These stress that students must be the authors of their own work.

“Content produced by AI platforms does not represent the student’s own original work so would be considered a form of academic misconduct to be dealt with under the University’s disciplinary procedures,” they added.