Generative Artificial Intelligence and its Role Within Teaching, Learning and Assessment

Generative Artificial Intelligence (AI) describes algorithms, such as ChatGPT and Google’s Gemini, that can be used to create new content, including text, computer code, images, and audio. Whilst the technologies themselves are not new (generative AI was first used in chatbots in the 1960s), recent advances in the field have led to a new era in which the way we approach content creation is changing fundamentally and at a rapid pace.

Generative AI tools are becoming accessible to a much wider audience and will increasingly impact our teaching, learning, assessment, and support practices. These technologies offer the potential to support academic staff in the creation and assessment of course material, and new opportunities to engage students in problem solving, critical thinking, analysis, and communication. But to use these technologies effectively, academic staff will need to understand how generative AI tools work within the context of their disciplines and higher education more widely. It will also be important that students appreciate the role of generative AI in the development of their graduate attributes, and that we as an institution provide policies that give our students clear information on our expectations for disclosing where such AI technologies have been used within their work.

This guidance provides a framework for the implementation and use of generative AI models within teaching, learning, assessment, and support at the University of Birmingham.

Released in July 2023, first updated in January 2024, and subject to next review in July 2024, it will continue to evolve as generative AI technologies develop. The guidance is not intended to be prescriptive, but instead to provide a broad framework for implementation that can be tailored in conjunction with colleagues within your School, your Head of Education, and your College Director of Education.

Guiding Principles

In July 2023, we, along with the other 23 Russell Group universities, agreed to adopt a set of common principles that will shape our institutional and programme-level work to support the ethical and responsible use of generative AI.

The five principles recognise the risks and opportunities associated with generative AI in relation to teaching, learning, assessment, and support, and are designed to help staff and students become leaders in an increasingly AI-enabled world.

The principles can be downloaded, and will collectively guide our approach to generative AI as an institution:

  1. Universities will support students and staff to become AI-literate.
  2. Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.
  3. Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.
  4. Universities will ensure academic rigour and integrity is upheld.
  5. Universities will work collaboratively to share best practice as the technology and its application in education evolves.

Guiding Framework for the Introduction of Generative AI Within Teaching, Learning and Assessment

General

  1. Academic staff are not required to use generative AI tools within their teaching, learning, assessment, or support practices, but must consider their potential impact upon student learning and assessment.
  2. All students should, however, have opportunities to engage with generative AI tools at all levels throughout their programme of study.
  3. We have recently launched our Birmingham Standards in Generative AI. The Birmingham Standards define the principles that guide the use of generative AI tools within teaching, learning and assessment.
  4. In designing their approach, Schools should ensure that students:
    • understand the significance of generative AI for their studies and future careers.
    • recognise appropriate, and inappropriate, uses of generative AI in supporting learning and assessment.
    • appreciate the strengths and limitations of generative AI when used as part of the learning experience and in the context of the development of their graduate attributes.
    • develop the skills to use generative AI tools ethically and successfully, both to support their learning and to appraise their own educational gain.
  5. Academic staff, working with Year or Programme Directors and Heads of Education, should determine how generative AI can be incorporated into course design and learning and teaching activity based upon learning outcomes, pedagogic practices, the development of graduate attributes and skills, disciplinary conventions, individual interest, and accreditation requirements.
  6. The implementation of generative AI should be considered, and regularly reviewed, at programme, School, and College levels, with reporting taking place to School and College Education Committees. This will ensure consistency in the approach of academic staff, and in the messaging to students regarding its ethical use. It will also enable an ongoing response as generative AI tools evolve and our institutional good practice develops. 

Maintaining Academic Integrity

  1. Unless explicitly stated otherwise, students should assume that the use of generative AI within an assessment or assignment is not permitted.
  2. Any assessment submitted that is not a student’s own work, including work written by generative AI tools, is in breach of the University’s Code of Practice on Academic Integrity, which has been updated to include explicit reference to AI-generated content:

    "1.5. Plagiarism can occur in all types of assessment when a Student claims as their own, intentionally or by omission, work which was not done by that Student. This may occur in a number of ways e.g. copying and pasting material, adapting material and self-plagiarism. Submitting work and assessments created by someone or something else, as if it was your own, is plagiarism and is a form of academic misconduct. This includes Artificial Intelligence (AI)-generated content and content written by a third party (e.g. a company, other person, or a friend or family member) and fabricating data."
  3. Additional updates have been made to the Code of Practice on Academic Integrity to provide clarity on the potential role of generative AI as a proofreading tool within essays, projects and dissertations:

    "A1.6 Unacceptable proof-reading
    Rewriting or editing of text with the purpose of improving the Student’s research arguments or contributing new arguments or rewriting computer code is not acceptable, whether undertaken by a person, by generative AI or by any other means, and may be deemed to be plagiarism.

    In particular, generative AI or other editorial assistance must not be used to:
    • Alter text to clarify and/or develop the ideas, arguments, and explanations.
    • Correct the accuracy of the information.
    • Develop or change the ideas and arguments presented.
    • Translate text into, or from, the language being studied.
    • Or for the purpose of reducing the length of the submission so as to comply with a word limit requirement.
    Generative AI or other editorial assistance may only be used to offer advice and guidance on:
    • Correcting spelling and punctuation.
    • Ensuring text follows the conventions of grammar and syntax in written English.
    • Shortening long sentences or reducing long paragraphs without changes to the overall content.
    • Ensuring the consistency, formatting and ordering of page numbers, headers and footers, and footnotes and endnotes.
    • Improving the positioning of tables and figures and the clarity, grammar, spelling and punctuation of any text in table or figure legends.
    Exceptions to these restrictions and what is permitted may exist, e.g. for English language study programmes. The PAU will advise students on the exceptions for specific modules and/or Assessments."
  4. The misuse of AI technologies in assessments and assignments by students, including through improper referencing or non-acknowledgement, should be dealt with in line with this Code of Practice. Advice should first be sought from your School’s Academic Integrity Officer.
  5. Tools designed to detect the use of generative AI are currently known to produce both ‘false positives’ and ‘false negatives’. At present, the use of any such tools within the University is not permitted, and no student work should be uploaded to generative AI detection software.
  6. The University has institutional access to the Turnitin plagiarism detection software, which has released an AI writing detection capability. Like many other institutions across the higher education sector, we have not currently enabled this feature. There remains a need to better understand its effectiveness and to assess the privacy and data security considerations arising from its use.
  7. We will continue to review the developments associated with generative AI detection software and may allow its future use.

Use of Generative AI by Students and Staff

  1. Generative AI tools have the potential to be used by students to support and enhance their learning experience. Staff members should support and encourage such appropriate use. For example, they might be used by students to summarise or extend key ideas introduced or discussed within lectures or seminars, develop personalised study resources and revision materials, enhance their search techniques, or test their skills in critical thinking and analysis.
  2. However, the use of generative AI within any assessment or assignment is not permitted unless explicitly stated otherwise.
  3. When considering the use of generative AI within learning, teaching, assessment and support practices, academic staff should do so on the basis of how it will support or enhance student achievement of learning outcomes and/or the development of graduate attributes. Where generative AI tools are used, students should be made aware of the rationale for their use.
  4. Within all modules, academic staff should clearly articulate if, and to what extent, the use of generative AI tools is permitted within assessments or assignments by students:
    • This should be detailed within the course outline and all assessment and assignment briefs.
    • Students should also have the position verbally outlined during relevant teaching sessions, and set out on relevant module-specific Canvas pages and in course handbooks.
    • It should include a dedicated and well-signposted Canvas page outlining the nature and rationale for their use, and the extent of the allowable role of generative AI within each assessment and assignment.
  5. Students should first be introduced to the ethical use of generative AI ahead of any summative assessment or assignment where such tools might be used. This might form part of a formative assessment task where clear feedback on their use, and misuse, can be provided to students.
  6. Where generative AI is to be utilised by students as part of their programme of study, free, age-appropriate versions of such tools should be used to ensure equity of access. Free examples currently include OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Bing Copilot. Bing Copilot allows students to utilise GPT-4 for free and with extended functionality if they sign up for a free Microsoft account.
  7. All members of University staff now have institutional access to Microsoft Copilot within Edge, a generative AI-powered web chat tool that enables free access to GPT-4 and DALL-E 3 within a data-protected environment.
  8. Academic staff incorporating generative AI tools within their teaching or assessments should ensure:
    • they are familiar with their limitations and associated ethical issues, and that these are discussed with students. Examples include: privacy and data considerations; potential for bias; inaccuracy and misrepresentation of information; ethics codes; plagiarism; and exploitation.
    • they are familiar with the specific privacy policies or user agreements relating to their use. Students should be explicitly alerted to these policies whenever generative AI is to be used.
  9. Year and programme-level handbooks should be updated to include details of the University’s policy regarding the use of generative AI tools by students and its implementation within the School. This should be highlighted to students during their (re-)induction at the start of each academic year.
  10. Generative AI offers the potential for academic staff to enhance their learning and teaching materials and assessments, for example by allowing the creation of personalised or contextual materials such as case studies and simulations. Where generative AI tools are used by an academic member of staff to create course materials:
    • this should be clearly articulated within those learning materials or assessments.
    • academic staff are individually responsible for ensuring the factual accuracy and quality of any materials created using generative AI tools.
  11. Unless undertaken as part of a University-approved trial, generative AI tools should not be used to mark or grade student work. Marking and grading decisions should be undertaken by academic members of staff in line with the University’s Code of Practice on Taught Programme and Module Assessment and Feedback.
  12. The ownership and retention of work uploaded to generative AI tools are currently unclear. No student work should be submitted to generative AI tools, including Microsoft Copilot in Edge, for example for the purpose of obtaining feedback, without the written consent of the student or their ability to opt out without detriment.
  13. Further guidance on using generative AI to develop teaching materials and assessments will continue to be provided, along with case studies of practice. All academic staff are encouraged to seek support through our Generative AI Community of Best Practice.

Assessment Design

  1. Each assessment or assignment specification should clearly specify, as appropriate:
    • whether the use of generative AI tools is permitted.
    • how its use should be acknowledged by students.
  2. Within any assessment or assignment where the use of generative AI tools is explicitly permitted, students are required to confirm whether and how generative AI tools have been used. Examples might include:
    • Requiring students to include a pre-defined statement that explicitly indicates whether or not they have used generative AI tools.
    • Asking students to share prompts used, outputs or modifications.
    • Requiring students to upload a reflective component detailing how generative AI has been used and their experience of engaging with it.
    • Appropriate or enhanced referencing (see, for example, APA Style 7th edition, which includes guidance on referencing generative AI tools).
  3. Marking criteria and rubrics should be updated for all assessments and assignments. This should be undertaken irrespective of whether the explicit use of generative AI tools is allowed, as such changes form a mechanism for mitigating the effects of their inappropriate or unauthorised use. They should, as appropriate:
    • Reflect how the use of generative AI is being assessed.
    • Proportionately reward successful demonstration of the higher-order thinking skills of Bloom’s Taxonomy, which generative AI currently finds difficult to replicate.
  4. All academic staff have an individual responsibility to review their assessments and assignments to mitigate the effects of the inappropriate use of generative AI tools.
  5. One of the most effective ways of mitigating the effects of generative AI upon assessments is through assessment redesign and diversity.
  6. Some assessment types are more susceptible to the effects of generative AI than others. Examples include extended-time online examinations, essays based upon broad and well-known concepts, and online quizzes testing the factual recall of basic discipline knowledge. However, mitigation strategies exist including incorporating assessment tasks into the classroom, staging assessment tasks to sequentially build upon each other, and adding a local or specific context to assignments. Further guidance on assessment strategies for mitigating the effects of generative AI can be found below.
  7. As part of our institutional response to the rise of generative AI technologies, we are currently piloting and evaluating the use of a suite of online assessment tools in 2023/24. A series of Educational Excellence workpackages related to generative AI tools and their use within teaching, learning, assessment, and support are currently underway across the University, and consider the theme of assessment from both a staff and student perspective.

Sources of Support

  1. Our Higher Education Futures Institute (HEFi) will continue to provide advice, guidance, training and resources to support academic staff in relation to the effective and ethical use of generative AI tools within teaching, learning, assessment and support.
  2. Our Academic Skills Centre has developed student-focused resources outlining the role of generative AI within the context of their learning experience, and the opportunities and limitations of its use. Such resources will assist academic staff in discussing generative AI with their students and provide useful information for inclusion on programme and module Canvas pages.
  3. Our network of School Academic Integrity Leads can provide advice and guidance to academic staff on matters related to the potential misuse of generative AI within assessments.
  4. Support in implementing this framework can be accessed via our growing community of practice, which is exploring the opportunities and implications of generative AI for teaching, learning, and assessment, as well as enabling individuals to come together to discuss issues, access advice and guidance, and share ideas and resources. This community of practice is facilitated by HEFi. You can find out more, including details of how to contribute and become involved, via the Generative AI Community of Best Practice - Network (Teams joining code: bkalwgz).
  5. We will continue to review this guidance framework and make updates as appropriate as generative AI develops and our institutional response evolves.

Useful Resources and Links

  1. Our Birmingham Standards on the use of Generative AI within teaching, learning and assessment can be found here.
  2. Guidance for students on using generative AI tools within their studies can be found here.
  3. Example rubrics for programme and module-level handbooks and Canvas pages can be found here.
  4. Examples of how the use of generative AI tools should be acknowledged and cited by students within their assessments and assignments can be found here.
  5. We have adapted the Office for Students’ (OfS) Classification Descriptors for Level 6 Bachelor’s Degrees to include reference to generative AI tools. A copy of the modified descriptors is available here. Examples of how these might be embedded within individual marking and grading schemes are also available.
  6. PGT dissertations and generative AI: Guidance for Supervisors of dissertations in 2023/24.
  7. Details of the five Educational Excellence workpackages we have recently initiated related to generative AI can be found here, along with details of how University staff can become involved in their work.
  8. HEFi MicroCPD resources related to generative AI.
  9. The Quality Assurance Agency (QAA) curates a growing bank of resources related to generative AI technologies and their role within education.

Frequently Asked Questions

  1. How do generative AI technologies work?
    Generative AI technologies, such as ChatGPT, can best be thought of as ‘conversation prediction’ tools, very much like the predictive text tool first seen on smartphone keyboards. However, unlike early versions of such systems, whose predictions were only coherent across a few words, ChatGPT is able to consider words and phrases written much earlier within the text. This allows it to maintain the context of the conversation for much longer. It is also trained using large amounts of data and broadly continues conversations in a way that matches the previous texts and conversations on which it was trained.
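    As a loose illustration of this ‘next-word prediction’ idea (a toy sketch only, and not how tools such as ChatGPT are actually built, since they use large neural networks rather than simple word counts), the short Python example below records which word follows which in a piece of training text and then ‘continues’ a prompt by repeatedly sampling a plausible next word:

      import random
      from collections import defaultdict

      # Toy next-word predictor: record which words follow which in the
      # training text, then continue a prompt by sampling from those records.
      # Illustrative only; real generative AI models use neural networks
      # trained on vastly more data and much longer contexts.
      training_text = (
          "generative ai can create new content and "
          "generative ai can support learning and "
          "students can use generative ai to support learning"
      )

      follows = defaultdict(list)
      words = training_text.split()
      for current_word, next_word in zip(words, words[1:]):
          follows[current_word].append(next_word)

      def generate(start_word, length=8):
          """Continue a 'conversation' by repeatedly predicting the next word."""
          output = [start_word]
          for _ in range(length):
              candidates = follows.get(output[-1])
              if not candidates:  # no continuation seen in training; stop early
                  break
              output.append(random.choice(candidates))
          return " ".join(output)

      print(generate("generative"))  # e.g. "generative ai can support learning and ..."

    Because this toy example only ever looks at the single previous word, its output quickly loses coherence; modern large language models achieve their fluency by conditioning on thousands of preceding words.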

  2. What can generative AI actually do?
    Generative AI tools can create quite detailed written responses on a particular topic by combining information from multiple sources. The key difference, however, is that rather than simply copying text from their training datasets verbatim, they combine elements of many related texts in different ways each time, dependent upon previous user inputs, thereby creating responses that appear unique and mimic those a human might make in response to similar prompts.

    Considering current developments in generative AI in the context of Bloom’s Taxonomy, and dependent upon the material on which they have been trained, tools such as ChatGPT are generally able to replicate the lower-order thinking skills in terms of the recall of facts and basic concepts (Remember), and in creating the impression of being able to explain ideas and concepts (Understand).

  3. What are the current limitations of generative AI?
    Generative AI models are only as good as the information they are trained upon. ChatGPT was trained using text databases from the internet, including data obtained from books, Wikipedia, and online articles. But it is not connected to the internet, so it cannot train itself on new information or in real time. Its most recent training data is from September 2021, so it is operating on an outdated dataset and may not be able to provide accurate or up-to-date information on more recent events or developments. Generative AI can create variations on existing content, but will struggle to create accurate and realistic content where there is little or no existing information available. It can also struggle to reproduce facts or quotations accurately, and to differentiate between genuine references and fabricated content. It may generate material that appears plausible on the surface but, upon careful scrutiny by an expert, is clearly wrong.

    Again considering ChatGPT in the context of Bloom’s Taxonomy, it cannot replicate the higher-order thinking skills, such as producing new or original work (Create), justifying a position, decision or argument (Evaluate), or drawing connections between different ideas (Analyse).

  4. In the short term, are there any ways that I can modify my assessments because I am concerned that generative AI technologies may impact upon student learning?

    • Assess the higher order aspects of Bloom’s Taxonomy: Whilst students are completing an assignment, incorporate a reflective element that asks them to explain and justify (evaluate) their ideas and approaches. For example, why did they take the approach they did? What other options or approaches did they consider? Why did they not pursue them?
    • Incorporate assessment tasks into the classroom: Rather than using in-class time for the delivery of new content, consider ‘flipping’ your approach so that students cover new content independently prior to the session. In-class time can be used to draft, develop or revise assessment tasks. This allows the opportunity for students to discuss with you, and their peers, their work and ideas and answer any questions that might arise.
    • Apply a local, recent or personal context: Generative AI models are currently trained using broad, but still limited, datasets. ChatGPT is based upon a training dataset from before September 2021 and so may not be accurate on more recent developments. Framing assessments in terms of more recent or local events, for example using case studies, may be effective.
    • Use generative AI to personalise assessments: Rather than setting an assessment question for students to answer, use generative AI to present an answer to a question and ask the students to evaluate and improve the response. For example, this might involve providing students with AI-generated computer code or a mathematical proof and asking them to identify any mistakes or areas where it might be simplified or enhanced (a sketch of this kind of task follows this answer).
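    By way of illustration only (the function below is hypothetical and written for this guidance, not produced by any AI tool), students might be given a short piece of code in this style and asked to find the error, correct it, and explain how they verified their fix:

      # A short, 'AI-generated'-style Python function for students to critique.
      # It claims to compute the average of a list of numbers, but contains a
      # deliberate bug for students to identify.
      def average(numbers):
          """Return the mean of a list of numbers."""
          total = 0
          for n in numbers:
              total += n
          return total / (len(numbers) - 1)  # Bug: should divide by len(numbers)

      print(average([2, 4, 6]))  # Prints 6.0, but the correct mean is 4.0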
  5. What are the longer-term ways that I can mitigate against the potential impacts of generative AI technologies upon student learning?

    • Diversify assessment types: Some assessment types are more resistant to the effects of generative AI than others and can also help students develop, and evidence, their wider graduate attributes. Oral assessments, including assessed seminars and group discussions, might be appropriate and provide an opportunity for students to demonstrate their knowledge, understanding, and even skills in persuasion to an examiner and/or their peers. Similarly, videos and podcasts allow students to demonstrate their skills in communication.
    • Consider a group-based approach: Make assessment tasks collaborative with work taking place during teaching sessions. Randomly allocated groups, where there is a natural peer-led review of work, can minimise the use of generative AI by students.
    • Staging assessment tasks: Stagger assessment tasks over multiple weeks or assignments so that students are required to submit these as a series of smaller components that when combined form a solution to a larger problem. Students can receive feedback on these smaller components, which they are then required to embed in future iterations, and it will allow you to observe the evolution of their work and ideas.
    • Encourage the use of AI: Allow students the choice of whether or not to use AI within an assessment. For example, if a student uses generative AI to develop an essay, they can demonstrate, through tracked changes and comments, how it has been developed and why. A similar approach can be used in mathematical or scientific disciplines where students are required to fully justify and explain their methods. Such an approach is also likely to help students better understand how they can engage with AI.
    • Consider Bloom’s Taxonomy: Assessments that require students to evaluate, analyse, or apply what they have learned will limit their ability to pass AI-generated work off as their own. Consider how this might be aligned with research-led teaching in your discipline. Similarly, advanced-level projects, where students are required to create new knowledge, are also more resistant to AI-generated content.
    • Assess synoptically: Explore how an assessment might integrate multiple ideas, concepts or approaches, perhaps spanning several modules within your discipline. This will reduce assessment load and encourage students to investigate and articulate how different disciplinary ideas are connected.
    • Require engagement with specific research-led literature: Require students to robustly cite external and, where appropriate, recent disciplinary research. This will help students better appreciate the quality and applicability of literature sources, and will also help enhance their skills in critical thinking and analysis.