The social science agenda for the AI revolution: From critic to architect
The Centre for AI in Government at Birmingham champions social sciences as co-architects of responsible AI.

The rapid advance of artificial intelligence represents the most significant opportunity in a generation for the social sciences to achieve societal impact and disciplinary renewal. In a recent panel discussion at the “Future of Social Sciences Symposium 2025” at the University of Warwick – a major event co-organised with the Academy of Social Sciences and the Economic and Social Research Council (ESRC) – Professor Slava Jankin, Director of the Centre for AI in Government at the University of Birmingham, laid out a forward-looking agenda for the discipline. The panel, “Social Sciences in an Age of Digitisation and AI,” explored a critical question: should the social sciences simply study the impacts of AI, or should they play an active role in shaping its development?
Professor Jankin argued that the traditional posture of post-hoc critique is no longer sufficient. To remain on the sidelines is to accept a future shaped by a narrow set of technical and commercial logics, where societal consequences are an afterthought. He emphasised that the speed and scale of AI demand that social scientists move from being critics of a future designed by others to being the co-creators and architects of a more responsible, just, and effective AI-powered world.
This thinking underpins a new, proactive agenda for the social sciences, built on the understanding that the discipline's theories, methods, and professionals are indispensable for shaping AI's core architecture.
Beyond Impact: A New Locus of Contribution
A core part of this new agenda is to shift social science engagement from simply studying AI's applications to helping build its methodological core. The integration of social science methods is already creating fundamentally better and safer AI systems.
A New Governance Paradigm: From Static Rules to Agile Trust
The traditional model of slow, top-down legislation is ill-suited for the pace of AI development. Professor Jankin highlighted that a more effective path lies in “agile governance” – an iterative and collaborative process that establishes “guardrails, not handcuffs,” to provide safety without stifling innovation.
The primary tool for this is the regulatory sandbox, a controlled environment where companies can test innovative AI products under regulatory supervision. This allows for real-world experimentation, enabling regulators to develop evidence-based rules and giving innovators the legal certainty to bring responsible products to market. This dynamic approach is essential for building public trust, which is the bedrock of successful AI implementation in government.
The New Professional: Cultivating Ambidextrous Talent
This ambitious agenda requires a new type of professional: one who is “ambidextrous,” possessing deep fluency in both social and computational sciences. The demand for these graduates is not speculative; it is a reality. Graduates from pioneering interdisciplinary programmes are being aggressively recruited into high-impact roles as data scientists, policy analysts, and machine learning engineers at leading technology firms such as Google and Meta, and in key government departments such as the Ministry of Defence and the Cabinet Office.
Professor Jankin noted that the University of Birmingham is at the forefront of training this next generation. Pioneering programmes are explicitly designed to cultivate this hybrid expertise, bridging the gap between technical possibility, ethical responsibility, and strategic implementation.
A Vision for the Future: The Social Science Stack for AI
This forward-looking perspective can be synthesised into a powerful conceptual model: “The Social Science Stack for AI.” This framework reframes the discipline’s role as an indispensable, multi-layered architecture for any responsible AI ecosystem.
The core message from the discussion is that each layer of this stack is essential. Without a foundation in social theory, AI development is blind. Without robust methods, it is brittle. Without accountable governance, it is unchecked. And without a new generation of interdisciplinary leaders, it cannot be sustained.
Building the Future at Birmingham
Putting these ideas into practice is the central mission of the Centre for AI in Government and the broader Institute for Data and AI at the University of Birmingham. The work requires deep institutional reform, new modes of collaboration with industry and government, and a renewed sense of purpose within the disciplines themselves.
We invite aspiring students who want to become the architects of a responsible technological future to explore our innovative programmes. Our BSc in AI and Public Policy and our cross-faculty MSc programmes in AI and Government and in AI and Sustainable Development are designed to equip the next generation of leaders with the ambidextrous skills needed to navigate this complex new world. Join us in building it right.