The challenges that generative AI poses for our assessment practices are not in themselves new, but the now widespread availability of these tools means that academic integrity concerns may arise on a greater scale than we have observed previously. A range of generative AI detectors currently claim to identify where such tools have been used, but these are not always reliable and should therefore be used with caution.
Any submitted assessment that is not a student’s own work, including work written by generative AI tools such as ChatGPT, is in breach of the University’s Code of Practice on Academic Integrity, which we have recently updated to include explicit reference to AI-generated content:
"1.5. Plagiarism can occur in all types of assessment when a Student claims as their own, intentionally or by omission, work which was not done by that Student. This may occur in a number of ways e.g. copying and pasting material, adapting material and self-plagiarism. Submitting work and assessments created by someone or something else, as if it was your own, is plagiarism and is a form of academic misconduct. This includes Artificial Intelligence (AI)-generated content and content written by a third party (e.g. a company, other person, or a friend or family member) and fabricating data. Other examples of what constitutes plagiarism are set out in Appendix A to this Code of Practice."
All academic members of staff should remind their students of the importance of maintaining academic integrity and of the penalties for failing to do so. Misuse of AI technologies in assessments by students should be dealt with in line with this Code of Practice, with advice first sought from your School’s Academic Integrity Officer.