
Socio-technical Systems

The socio-technical systems (STS) theme brings together research that defines, evaluates, and influences the effects of advanced computing on society.
Our work spans several key computer science areas:
Distributed systems
A distributed system is a collection of independent computers, connected via a network, that functions as a single, unified system to achieve a common goal. Examples of topics we work on:
- Using sensor data for the prediction of environmental risks or sporting performance
- Quantitative analysis of the "Verifier's Dilemma" in Proof-of-Work public blockchains
- Distributed financial environments and technologies
Multi-agent Systems
Multi-agent systems are a framework in which multiple autonomous AI agents interact, collaborate, or compete to solve complex tasks that would be difficult for a single agent.
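As a minimal sketch of this idea, the toy example below shows two autonomous agents with different skills cooperating on a task neither could complete alone. The agent names, skills, and task structure are all illustrative, not drawn from any specific system we build.

```python
# Toy multi-agent sketch: agents with complementary skills divide a
# shared task between them. All names here are hypothetical.
class Agent:
    def __init__(self, name, skill):
        self.name, self.skill = name, skill

    def act(self, task):
        # An agent only handles the subtasks matching its own skill.
        return [sub for sub in task if sub[0] == self.skill]

task = [("translate", "doc1"), ("summarise", "doc1"), ("translate", "doc2")]
agents = [Agent("A", "translate"), Agent("B", "summarise")]

done = [sub for agent in agents for sub in agent.act(task)]
assert len(done) == len(task)  # together, the agents cover the whole task
```

Real multi-agent systems add negotiation, communication protocols, and conflicting goals; this sketch only captures the division-of-labour aspect.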
Federated Learning
Federated learning is a privacy-enhancing technology (PET) and a decentralised machine learning approach that trains AI models on edge devices (such as smartphones) or local servers without exchanging private data. Examples of topics we work on:
- Sharing of data to preserve privacy through Federated Learning
- Automated compliance verification of data sharing in decentralised settings (e.g. federated learning)
- Machine-readable policies for data sharing in decentralised (and centralised) settings
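To make the "no raw data leaves the device" idea concrete, here is a minimal sketch of federated averaging (FedAvg, the standard baseline algorithm, not necessarily the method used in our projects): each client fits a one-parameter model on its own data, and a server averages only the resulting model weights. The clients and data are illustrative.

```python
# Minimal federated averaging (FedAvg) sketch for a one-parameter
# linear model y = w * x. Clients share model weights, never data.

def local_train(w, data, lr=0.01, epochs=20):
    """One client's gradient-descent update on its private (x, y) pairs."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, client_datasets):
    """One round: clients train locally, the server averages the weights."""
    client_weights = [local_train(global_w, d) for d in client_datasets]
    return sum(client_weights) / len(client_weights)

# Two clients whose private data both follow y = 3x (the data never
# leaves each client; only the trained weight does).
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
# w has converged close to the true slope of 3.0
```

Production systems (secure aggregation, differential privacy, client sampling) add substantial machinery on top of this core loop.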
Knowledge Graphs
Knowledge graphs are data structures that represent a network of real-world entities (people, places, concepts, events) and define the relationships between them in a machine-readable format.
We investigate both knowledge graphs and property graphs as tools for responsible data management in AI (e.g. via Graph RAG).
We also work on ontologies and metadata to make data more Findable, Accessible, Interoperable, Reusable (FAIR) for AI.
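The simplest machine-readable form of a knowledge graph is a set of subject-predicate-object triples (the model behind standards such as RDF). The sketch below stores a tiny graph this way and answers pattern queries over it; the entities and relations are invented for illustration.

```python
# Toy knowledge graph as (subject, predicate, object) triples.
# Entity and relation names are illustrative, not from a real dataset.
triples = {
    ("Ada Lovelace", "bornIn", "London"),
    ("Ada Lovelace", "fieldOf", "Mathematics"),
    ("London", "capitalOf", "United Kingdom"),
}

def query(s=None, p=None, o=None):
    """Match a triple pattern over the graph; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Everything the graph knows about Ada Lovelace:
facts = query(s="Ada Lovelace")  # two matching triples
```

Property graphs extend this model by attaching key-value attributes to nodes and edges, and Graph RAG uses such structured facts to ground the outputs of a language model.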
Trustworthy and Responsible AI
We are dedicated to bridging the gap between technical innovation and ethics by building AI systems that are transparent, fair, and reliable. Our work spans topics such as algorithmic accountability, explainability of AI results, collaborative AI development, sustainability and regulatory compliance technology. Examples of topics we work on:
- Large Language Models and other AI methods to support defect detection in software engineering
- Augmenting LLMs via Graph RAG to mitigate hallucinations and improve trustworthiness
- Responsible data governance and management for AI
- Legal, Ethical, and Responsible uses of AI
Human-centered AI
We explore the intersection of machine intelligence and human cognition to create systems that amplify, rather than replace, human potential. Our work is rooted in collaborative intelligence, focusing on intuitive interfaces, explainable outputs, and adaptive learning environments. Through a multidisciplinary approach that combines computer science with behavioural insights, we design AI that respects human agency and addresses complex societal challenges through a user-first lens. Examples of topics we work on:
- Design and development of novel user interfaces, particularly through Augmented Reality or tangible user interfaces
- Empowering individuals by developing interfaces and technologies for greater data sovereignty online
- AI and data literacy initiatives
Our research crosses disciplinary boundaries to explore these issues: we work closely with colleagues who hold joint appointments between Computer Science and the Schools of Law, Government and Politics, and Psychology, as well as with other disciplines across the University.
Research Paper Spotlight
- Murphy, A., Danilowski, M., Chatterjee, S. and Ghosh, A., 2026. NEO: No-Optimization Test-Time Adaptation through Latent Re-Centering. In The Fourteenth International Conference on Learning Representations (ICLR’26)
- Zhang, Y., Kurteva, A., Merono Penuela, A. and Simperl, E., 2026. OntoChat Assistant for User Story Generation in Ontology Engineering. ACM Transactions on Intelligent Systems and Technology (pre-press)
- Al Ghanmi, H., Ahmadjee, S., Bahsoon, R. and Adeyemo, H.B., 2026. Exploring Human-Centric Dimensions in Blockchain Smart Contracts. ACM Computing Surveys, 58(7), Article 173 (May 2026), 43 pages
- Bakman, Y.F., Yaldiz, D.N., Kang, S., Zhang, T., Buyukates, B., Avestimehr, S. and Karimireddy, S.P., 2025, July. Reconsidering LLM uncertainty estimation methods in the wild. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 29531-29556)
- Soares, R.G. and Minku, L., 2025. Online ensemble model compression for nonstationary data stream learning. Neural Networks, 185, p.107151
- Webb, N., Huang, Z., Milivojevic, S., Baber, C. and Hunt, E.R., 2025, September. When Robots Say No: Temporal Trust Recovery Through Explanation. In International Conference on Social Robotics (pp. 424-436). Singapore: Springer Nature Singapore
- Smuseva, D., Marin, A., Rossi, S. and Van Moorsel, A., 2025. Verifier's Dilemma in Proof-of-Work Public Blockchains: A Quantitative Analysis. ACM Transactions on Modeling and Computer Simulation, 35(2), pp.1-24