
Summer research in the School of Computer Science

Research experience in the School of Computer Science
The School of Computer Science is breaking new ground in the theory and practice of computational systems and their applications. It is a progressive, inclusive department, providing specialist teaching and conducting world-leading research in fundamental and applied computer science.
Summer research projects available for summer 2026
Project 1
Evaluating the shift in LLM thoughts in multi-agent systems.
Research lead - Dr Abhirup Ghosh
Research objective
This project aims to evaluate how the internal representations of Large Language Models (LLMs) evolve as multiple agents interact to achieve a task through collaboration. By examining how token embeddings and their relationships shift during multi-agent interaction, we will investigate when and why a collaboration gap arises in multi-agent systems. We will evaluate scenarios in which agents, configured using system prompts, differ in their initial beliefs and must converge to produce a correct final answer. Using tasks where agents possess correct but incomplete information, we will test the hypothesis that the token embeddings associated with key symbols will evolve to reflect changes in the inferred relationships among them.
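One way to quantify such an embedding shift, sketched below with synthetic vectors for illustration (in practice the embeddings would be read from the model's hidden states for a key token), is the cosine distance between snapshots of the token's embedding taken at successive interaction rounds:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings of the same key token captured before and after
# one round of multi-agent dialogue (synthetic stand-ins, not real model output).
rng = np.random.default_rng(0)
round_0 = rng.normal(size=16)
round_1 = round_0 + rng.normal(scale=0.3, size=16)  # drifted embedding

# Cosine distance as a simple measure of how far the representation moved.
shift = 1.0 - cosine_similarity(round_0, round_1)
print(f"embedding shift after one round: {shift:.3f}")
```

Tracking this quantity across rounds and agents would indicate when token representations begin to reflect the inferred relationships among the key symbols.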
Research prerequisites - Basic understanding of machine learning and large language models, and hands-on coding experience in Python.
You will have the opportunity to gain
- Hands-on understanding of multi-agent LLMs.
- An understanding of embedding spaces for LLMs.
Project 2
What is true and what is not in GraphRAG?
Research lead - Dr Anelia Kurteva
Research objective
In this project, we aim to investigate the sensitivity of Large Language Models (LLMs) to hallucinations and incorrect information by injecting knowledge graphs containing both correct and incorrect facts via GraphRAG. The research evaluates how models handle contradictory information and whether they can effectively identify and filter out factual inconsistencies. The project examines model behaviour in the presence of contradictory inputs, specifically assessing an LLM's capacity for conflict resolution and error detection within its retrieval‑augmented generation pipeline.
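As a minimal sketch of the injection step (the triples and serialisation below are illustrative assumptions, not the GraphRAG library's actual API), correct and deliberately contradictory facts can be serialised into a retrieval context before prompting the LLM:

```python
# Hypothetical knowledge-graph triples: one correct, one deliberately wrong.
correct = [("Birmingham", "locatedIn", "England")]
incorrect = [("Birmingham", "locatedIn", "France")]

def triples_to_context(triples):
    """Serialise (subject, predicate, object) triples into plain-text facts."""
    return "\n".join(f"{s} {p} {o}." for s, p, o in triples)

# A GraphRAG-style pipeline would retrieve and inject such facts as context.
context = triples_to_context(correct + incorrect)
prompt = f"Facts:\n{context}\n\nQuestion: Where is Birmingham located?"
print(prompt)
```

The model's answer to such a prompt, compared against ground truth, indicates whether it resolves the contradiction or propagates the injected error.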
Research prerequisites - Basic understanding of the technologies needed to carry out this project: LLMs, ontologies, and knowledge graphs.
You will have the opportunity to
- Study how LLMs handle contradictory knowledge graphs using GraphRAG.
- Learn techniques for detecting hallucinations, conflicts, and errors in retrieval‑augmented generation.
Project 3
Regulatory document summarisation with dynamic chunking and adaptive length control using LLMs.
Research lead - Dr Mubashir Ali
Research objective
This project explores how large language models (LLMs) can be used to summarise long and complex regulatory and legal documents in a reliable and controllable way. It investigates multiple summarisation strategies (whole‑document, truncation‑based, and hierarchical) and compares their effectiveness on long regulatory texts. The project also examines fixed‑size, semantic, and dynamic chunking strategies and how these interact with LLM‑based summarisation. A key focus is adaptive length control, ensuring that summaries meet required length and detail constraints while preserving legal meaning and structure. The outcome will be a comparative analysis and a prototype pipeline for accurate, scalable, and controllable summarisation of regulatory documents.
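As an illustration of the simplest of these strategies, fixed‑size chunking with overlap can be sketched as follows (window and overlap sizes here are arbitrary placeholders; a real pipeline would tune them and likely chunk by tokens rather than characters):

```python
def fixed_size_chunks(text, size=200, overlap=20):
    """Split text into windows of `size` characters, each sharing
    `overlap` characters of context with the previous window."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

# Stand-in for a long regulatory document.
doc = "Article 1. ... " * 100
chunks = fixed_size_chunks(doc)
print(f"{len(chunks)} chunks; first chunk has {len(chunks[0])} characters")
```

Semantic and dynamic chunking would instead place boundaries at sentence, clause, or article breaks, which this project compares against the fixed‑size baseline.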
You will have the opportunity to
- Explore LLM summarisation methods for long regulatory and legal documents.
- Learn adaptive length control and chunking techniques for accurate, controllable summaries.
Project 4
Active sensing for remote visits.
Research lead - Prof Eyal Ofek
Research objective
This project investigates how a mixed reality display can enable users to view a remote environment as if they were present at the site. We will build a world model of the remote site on the fly, combining live observations from multiple sensors at the site into a unified display for the user. Special attention will be paid to filling in missing information to generate a complete, plausible representation of the remote site (through extrapolation, past models, and rules), as well as to planning future capture based on the user's behaviour to minimise missing information.
You will have the opportunity to
- Build real‑time mixed‑reality world models from live multi‑sensor data.
- Learn methods to fill missing information and plan future capture based on user behaviour.