Generative Artificial Intelligence and its Role Within Teaching, Learning and Assessment

Generative Artificial Intelligence (AI) describes algorithms, such as ChatGPT and Alphabet’s Bard, that can be used to create new content, including text, computer code, images and audio. Whilst the technologies themselves are not new (generative AI was first used in chatbots in the 1960s), recent advances in the field have led to a new era in which the way we approach content creation is fundamentally changing at a rapid pace.

Generative AI tools are becoming accessible to a much wider audience and so will increasingly impact our teaching, learning, assessment and support practices. These technologies offer the potential to support academic staff in the creation and assessment of course material, and new opportunities to engage students in problem solving, critical thinking, analysis and communication. But to use these technologies effectively, academic staff will need to understand how generative AI tools work within the context of their disciplines and higher education more widely. It will also be important that students appreciate the role of generative AI in the development of their graduate attributes, and that we as an institution provide our students with clear policies setting out our expectations for disclosing where such AI technologies have been used within their work.

Generative AI: Assessment and Academic Integrity

The challenges posed by generative AI for our assessment practices are not in themselves new, but the now widespread availability of these tools means there is the potential for academic integrity concerns to exist on a greater scale than we have observed previously. There are currently a range of generative AI detectors that claim to identify where such tools have been used, but these are not always reliable and so should be used with caution. 

Any assessment submitted that is not a student’s own work, including work written by generative AI tools such as ChatGPT, is in breach of the University’s Code of Practice on Academic Integrity, which we have recently updated to include explicit reference to AI-generated content:

"1.5. Plagiarism can occur in all types of assessment when a Student claims as their own, intentionally or by omission, work which was not done by that Student. This may occur in a number of ways e.g. copying and pasting material, adapting material and self-plagiarism. Submitting work and assessments created by someone or something else, as if it was your own, is plagiarism and is a form of academic misconduct. This includes Artificial Intelligence (AI)-generated content and content written by a third party (e.g. a company, other person, or a friend or family member) and fabricating data. Other examples of what constitutes plagiarism are set out in Appendix A to this Code of Practice."

All academic members of staff should remind their students of the importance of maintaining academic integrity and the penalties for failing to do so. The misuse of AI technologies in assessments by students should be dealt with in line with this Code of Practice, with advice first being sought from your School’s Academic Integrity Officer.

Next Steps: Further Information, Resources and Events

Below we include details of the next steps we are taking in response to the developments in generative AI, along with a series of frequently asked questions, including suggestions for assessment design, and links to further information, resources and events to support academic staff in their exploration of these technologies. This page will continue to be regularly updated with new policies, guidance and information as we explore the implications of the developments in generative AI for our teaching, learning, assessment and support practices.

  1. Development of principles on the use of generative AI technologies within teaching and learning. Along with the other universities in the Russell Group, we are currently working to define a set of common staff and student-facing principles on the use of generative AI technologies within teaching, learning, assessment and support. We expect to release the staff-facing principles, along with associated guidance and supporting resources, to colleagues by mid-July 2023, with the student-facing principles being available in early September 2023 ahead of the 2023/24 academic year. Our staff and student-facing guidance, along with the development of our institutional policies, will be informed by our University working group on generative AI. This will help us ensure our guidance and policies remain up to date, which will be important given the increasing pace at which generative AI is evolving.
  2. Community of practice: generative AI in teaching and learning. To help inform the working group we are also growing a wider community of practice that will explore the opportunities and implications of generative AI for teaching, learning, assessment and support as well as enabling individuals to come together to discuss issues, access advice and guidance, and share ideas and resources.

    This community of practice is facilitated by our Higher Education Futures institute (HEFi); it will host events and share resources and examples of practice. You can find out more, including details of how to contribute and become involved, via the Generative AI Community of Best Practice - Network (Teams joining code: bkalwgz).

Frequently Asked Questions

  1. How do generative AI technologies work?

    Generative AI technologies, such as ChatGPT, can best be thought of as ‘conversation prediction’ tools, very much like the prediction tool first seen on a smartphone keyboard. However, unlike early versions of such systems, whose predictions of the next word in a sentence were only coherent across a few words, ChatGPT is able to consider words and phrases that were written much earlier within the text. This allows it to maintain the context of the conversation for much longer. It is also trained on large amounts of data and broadly continues conversations in a way that matches the texts and conversations on which it was trained.
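
    To make this concrete, the sketch below shows a deliberately simplified next-word predictor built only from word-pair frequencies in a tiny training text. It is a toy illustration of the general idea, not ChatGPT’s actual method; the training text and all names in the code are invented for this example.

      # A toy next-word predictor, illustrating the general idea only
      # (real systems such as ChatGPT work very differently at scale).
      from collections import Counter, defaultdict

      training_text = (
          "the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog ."
      )

      # Count, for each word, how often each following word appears.
      follows = defaultdict(Counter)
      words = training_text.split()
      for current_word, next_word in zip(words, words[1:]):
          follows[current_word][next_word] += 1

      def predict_next(word):
          """Return the continuation most often seen in training."""
          if word not in follows:
              return "."
          return follows[word].most_common(1)[0][0]

      # Generate a short continuation from a one-word prompt.
      word = "the"
      output = [word]
      for _ in range(6):
          word = predict_next(word)
          output.append(word)
      print(" ".join(output))  # e.g. "the cat sat on the cat sat"

    Because this toy predictor only ever looks at the single preceding word, its output quickly loses coherence; the advance in modern systems lies precisely in their ability to take account of much longer stretches of the preceding text.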

  2. What can generative AI actually do?
    Generative AI has the ability to create quite detailed written responses on a particular topic by combining information from multiple sources. The key difference, however, is that rather than simply copying text from its training datasets verbatim, these tools combine elements of many related texts in different ways each time, dependent upon previous user inputs, thereby creating responses that appear unique and mimic those a human might make in response to similar prompts.

    Considering current developments in generative AI in the context of Bloom’s Taxonomy, and depending upon the material on which they have been trained, tools such as ChatGPT are generally able to replicate the lower-order thinking skills: recalling facts and basic concepts (Remember) and creating the impression of being able to explain ideas and concepts (Understand).

  3. What are the current limitations of generative AI?
    Generative AI models are only as good as the information on which they are trained. ChatGPT was trained using text databases from the internet, including data obtained from books, Wikipedia and online articles. But it is not connected to the internet, so it cannot train itself on new information or in real time. Its most recent training data is from September 2021, so it is operating on an outdated dataset and may not be able to provide accurate or up-to-date information on more recent events or developments. Generative AI can create variations on existing content, but will struggle to create accurate and realistic content where little or no existing information is available. These models can also struggle to repeat facts or quotations faithfully, and to differentiate between accurate references and fabricated content. They may generate material that appears plausible on the surface but, upon careful scrutiny by an expert, is clearly wrong.

    Again considering ChatGPT in the context of Bloom’s Taxonomy, it cannot replicate the higher-order thinking skills such as producing new or original work (Create), justifying a position, decision or argument (Evaluate), or drawing connections between different ideas (Analyse). 

  4. In the short term, how can I modify my assessments if I am concerned that generative AI technologies may impact upon student learning?
    • Assess the higher-order aspects of Bloom’s Taxonomy: Whilst students are completing an assignment, incorporate a reflective element that asks them to explain and justify (Evaluate) their ideas and approaches. For example, why did they take the approach they did? What other options or approaches did they consider? Why did they not pursue them?
    • Incorporate assessment tasks into the classroom: Rather than using in-class time for the delivery of new content, consider ‘flipping’ your approach so that students cover new content independently prior to the session. In-class time can be used to draft, develop or revise assessment tasks. This gives students the opportunity to discuss their work and ideas with you and their peers, and to address any questions that arise.
    • Apply a local, recent or personal context: Generative AI models are currently trained using a broad, but still limited, dataset. ChatGPT is based upon a training dataset from before September 2021 and so may not reflect more recent developments accurately. Framing assessments in terms of more recent or local events, for example using case studies, may be effective.
    • Use generative AI to personalise assessments: Rather than setting an assessment question for students to answer, use generative AI to present an answer to a question and ask the students to evaluate and improve the response. For example, this might involve providing students with AI-generated computer code or a mathematical proof and asking them to identify any mistakes or areas where it might be simplified or enhanced (see the illustrative sketch below).
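
    As an illustration of the final suggestion above, the short function below is the kind of plausible-but-flawed code an AI tool might produce and that students could be asked to critique; the example, its bug and the annotations are invented here rather than taken from any actual AI output.

      # A hypothetical, deliberately flawed snippet for students to critique.
      # Task: the function claims to compute the average of a list of numbers,
      # but contains a mistake. Find it, explain it, and suggest a fix.

      def average(values):
          """Return the mean of a list of numbers."""
          total = 0
          for v in values:
              total += v
          return total / (len(values) - 1)  # Bug: divides by n - 1, not n
          # (It also fails on an empty list; students might spot that too.)

      print(average([2, 4, 6]))  # Prints 6.0, but the correct mean is 4.0
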
  5. What are the longer-term ways that I can mitigate the potential impacts of generative AI technologies upon student learning?

    • Diversify assessment types: Some assessment types are more resistant to the effects of generative AI than others and can also help students develop, and evidence, their wider graduate attributes. Oral assessments, including assessed seminars and group discussions, might be appropriate and provide an opportunity for students to demonstrate their knowledge, understanding and even skills in persuasion to an examiner and/or their peers. Similarly, videos and podcasts allow students to demonstrate their skills in communication.
    • Consider a group-based approach: Make assessment tasks collaborative with work taking place during teaching sessions. Randomly allocated groups, where there is a natural peer-led review of work, can minimise the use of generative AI by students.
    • Stage assessment tasks: Stagger assessment tasks over multiple weeks or assignments so that students are required to submit these as a series of smaller components that, when combined, form a solution to a larger problem. Students can receive feedback on these smaller components, which they are then required to embed in future iterations, and this will allow you to observe the evolution of their work and ideas.
    • Encourage the use of AI: Allow students the choice of whether or not to use AI within an assessment. For example, if a student uses generative AI to develop an essay, they can demonstrate, through tracked changes and comments, how it has been developed and why. A similar approach can be used in mathematical or scientific disciplines where students are required to fully justify and explain their methods. Such an approach is also likely to help students better understand how they can engage with AI.
    • Consider Bloom’s Taxonomy: Assessments that require students to evaluate, analyse, or apply what they have learned will limit their ability to pass AI-generated work off as their own. Consider how this might be aligned with research-led teaching in your discipline. Similarly, advanced-level projects, where students are required to create new knowledge, are also more resistant to AI-generated content.
    • Assess synoptically: Explore how an assessment might integrate multiple ideas, concepts or approaches, perhaps spanning several modules within your discipline. This will reduce assessment load and encourage students to investigate and articulate how different disciplinary ideas are connected.
    • Require engagement with specific research-led literature: Require students to robustly cite external and, where appropriate, recent disciplinary research. This will help students better appreciate the quality and applicability of literature sources, and will also help enhance their skills in critical thinking and analysis.