
AI Safety research experience and scholarship

Join us for an 8-week summer research experience programme in AI Safety and Security, and experience first‑hand the excellence that has placed the University of Birmingham among the country’s top research powerhouses.
Research experience projects in technical AI Safety
The University of Birmingham is at the forefront of developing technology for technical AI Safety. This summer, we offer a unique opportunity for taught students to study alongside leading researchers in AI Safety as part of a collaborative initiative with the University of Manchester, funded by the Advanced Research and Invention Agency (ARIA).
This project offers a generous scholarship worth up to £5,000, enabling participants to fully engage in an intensive research-based learning experience in AI Safety. Successful scholarship awardees will pay no programme fees and receive free accommodation at the University of Birmingham halls of residence.
Learn more about the scholarship and how to apply.
Project 1
Automated Manufacturing Design
Research team
Mirco Giacobbe, Leonardo Stella, Adam Szekely, Gokhan Tut
Research objective
Predictive modelling of biopharmaceutical stability remains one of the most significant bottlenecks in modern drug development. While traditional machine learning methods offer speed and flexibility, they often lack the physical grounding required to satisfy rigorous regulatory standards for shelf-life estimation.
In this project, you will explore the development of a framework for assessing biologic degradation by bridging the gap between high-dimensional structural data and fundamental chemical kinetics. The goal is to understand whether a physics-informed modelling framework can provide the level of safety, reliability, and interpretability required to support critical decision-making within the strict regulatory environment of biopharmaceutical manufacturing.
Project 2
Hardware-level Verification
Research team
Mirco Giacobbe, Edwin Hamel-de le Court, Edoardo Manino
Research objective
Hardware-level considerations are often overlooked when evaluating the safety of AI models. Yet details such as quantisation, sampling, and software implementation can substantially alter model behaviour during deployment. AI models, including neural networks, are not immune to these effects.
In this project, you will study research at the intersection of the digital and physical worlds while investigating several key questions: Do AI deployment pipelines truly preserve the intended behaviour of models? Are AI-based digital controllers robust and stable when interacting with a physical environment? Can we formally and automatically verify that AI decision-making systems remain safe when implemented on real hardware?
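As a toy illustration of the first question, the sketch below (with invented weights and inputs; nothing here comes from an actual deployment pipeline) shows how rounding a model's parameters to an int8-style grid, as quantisation does at deployment, can flip a decision for an input near the boundary:

```python
def quantize(weights, scale=127.0):
    # Round each weight to an int8-style grid, then map back to floats,
    # mimicking what a quantised deployment pipeline does to parameters.
    return [round(w * scale) / scale for w in weights]

def decide(weights, x):
    # Linear decision rule: positive score -> one action, otherwise the other.
    return sum(w * xi for w, xi in zip(weights, x)) > 0

weights = [0.3061, -0.1724]        # full-precision training-time parameters
q_weights = quantize(weights)      # parameters actually deployed

x = [1.0, 1.77436]                 # input close to the decision boundary
print(decide(weights, x), decide(q_weights, x))  # True False
```

The full-precision and quantised models disagree on this input, even though their weights differ by less than 0.001: exactly the kind of deployment-time behaviour change that hardware-level verification aims to catch.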
Project 3
Privacy-preserving Verification
Research team
Pascal Berrang, Mirco Giacobbe, Xiao Yang
Research objective
Formal verification of systems has focused for decades on the challenge of scalability. However, ensuring AI Safety at a global scale introduces an additional challenge: confidentiality. Existing verification technologies typically require access to a formal and detailed description of the system, which can create tension between providers’ desire to protect their intellectual property and regulators’ need to ensure safety.
This project investigates how one can convince an external verifier that a system is safe without revealing the system itself. Participants will explore novel questions at the intersection of formal verification, cryptography, and AI, studying new approaches to safety assurance in settings where confidentiality is essential.
Project 4
Formal Certification for AI Safety
Research team
Mrudula Balachander, Mirco Giacobbe, Grigory Neustroev, Diptarko Roy
Research objective
Formally verifying an AI system involves determining whether it satisfies a precise specification of its intended behaviour. Standard testing techniques are easy to implement but inherently non-exhaustive and cannot provide formal guarantees of safety, whether absolute or probabilistic.
In this project, participants will learn advanced techniques at the intersection of logic and AI while working with a team investigating formal specification languages for AI Safety and algorithms for verifying them. A central goal is to understand what constitutes a formal certificate of safety and how learning systems might eventually be enabled to generate proofs of their own safety.
Outstanding computer science students are invited to apply for our 8‑week AI Safety and Security research experience and the accompanying scholarship worth up to £5,000. This programme offers the chance to work directly on high‑impact research projects in AI within the Department of Computer Science.
Scholarship FAQs
What does this scholarship cover?
The scholarship covers programme fees and accommodation costs, removing the main financial barriers for students wishing to join the summer research experience.
Who is this scholarship intended for?
The scholarship is aimed at outstanding students with a strong interest in AI safety and security, ideally with some relevant academic background or foundational skills in AI‑related subjects.
How many scholarships are available?
We will select up to eight exceptional students for this opportunity; each will receive scholarship support worth up to £5,000 towards their AI Safety and Security research experience at the University of Birmingham.
Are there any additional costs to account for?
Yes. Students should budget for personal expenses such as visa costs, travel to and from the University of Birmingham, meals, local transport, personal insurance, and optional social activities.
How will the scholarship support me financially?
The scholarship will be awarded in the form of a tuition (programme) fee waiver and an accommodation cost waiver. Awardees will not receive a cash payment or stipend.
How do I apply for the programme and scholarship?
Follow these simple steps to apply for the AI Safety and Security summer research experience and the associated scholarship:
1. Explore the project titles
Review the available AI Safety and Security research projects and choose the one(s) that best match your interests.
2. Submit your summer research project application
Complete the project application form for your selected project. If you are applying from the University of Birmingham partner institution, please follow these steps to apply for the AI Safety research projects.
3. Complete the scholarship application form
Complete and submit the scholarship form to be considered for summer research experience funding.
4. Submit both applications within the same timeframe
You must submit your project application and scholarship form together to be eligible.
5. Attend the final interview (if shortlisted)
Shortlisted applicants will be invited to an interview with the host school or project supervisor. This interview will determine whether you receive an offer and scholarship.
6. Receive your offer letter
Successful candidates will be issued an unconditional offer letter following the interview stage.
When is the application deadline?
Please note that scholarship applications will only be reviewed when submitted together with an application to the AI Safety and Security Summer Research Experience Project.
The final deadline for both the Programme and Scholarship applications is 1 May 2026.