Crisis computing and the edge of uncertainty

Researchers from Birmingham and Tokyo are advancing human-centred AI, behavioural science, and immersive tech to boost global crisis preparedness and response.


Artificial intelligence (AI) has entered the command room of crisis governance. From humanitarian coordination and climate forecasting to peacekeeping and public health, computational models increasingly define how urgency, risk, and response are understood.

Yet as data becomes the dominant grammar of crisis, the very notion of judgement is being recalibrated. The question is no longer only how to predict crises more accurately, but how to think within systems that promise to anticipate them.

Recent academic dialogues, including an international symposium on AI-enabled crisis computing at Hitotsubashi University in Tokyo and a University of Birmingham-organised workshop at the UN AI for Good Summit in Geneva, reveal a field that is both maturing and unsettled. Across disciplines, researchers and practitioners now realise that crisis computing is not a technical discipline alone. It is a social, ethical, and epistemological experiment in how societies organise knowledge and action under conditions of radical uncertainty.

'The promise of foresight compresses time; deliberation risks becoming delay. Speed becomes virtue, and reaction masquerades as preparedness.' - Dr Martin Wählisch, University of Birmingham

Crisis computing has emerged from the belief that predictive analytics could tame volatility. Machine learning systems now forecast floods, model disease spread, map migration flows, and simulate conflicts in real time. They make the future legible, but at a cost.

What is gained in temporal precision may be lost in interpretive depth. Algorithms that classify events or prioritise interventions filter signals, establish thresholds, and define what counts as a crisis worthy of attention. In doing so, they influence political tempo and ethical tone long before a decision is made.

As Sumie Nakaya, Associate Professor at Hitotsubashi University’s Centre for Global Online Education and former UN staff member, observes through her research on Japan’s disaster-response systems, bottlenecks in technological innovation often lie in social behaviour rather than computational capability. Data infrastructures and predictive platforms are only as effective as the human systems that interpret them.

Crisis computing succeeds or fails in the human loop, and behavioural science becomes a new frontier: how decision-makers interpret algorithmic uncertainty, how automation shifts expectations of authority and accountability, and how response cultures adapt when 'real time' becomes the default. Measuring what machines do to human reasoning is emerging as an essential form of crisis analytics in itself.

In parallel, Dr Muhammad Imran, Principal Scientist and Lead of the Crisis Computing team at the Qatar Computing Research Institute, brings insight into how AI, machine learning, and big-data analytics can enhance decision-making in complex, time-critical contexts such as emergency response and social resilience.

Dr Imran's work demonstrates that the first responders in any crisis are often citizens themselves. Real-time data from digital volunteers and affected communities now complement, and at times outpace, official information systems, turning social networks into distributed early-warning systems. Yet this also introduces new challenges: whose voices are amplified, whose are ignored, and how do algorithms filter human distress into data signals?

Deliberation, simulation, and the human condition

At the University of Birmingham, research in crisis computing extends this human-centred perspective to the design of AI systems themselves - models that explore how AI can facilitate reflective dialogue rather than automate decisions. Our research helps teams weigh options, visualise uncertainty, and negotiate trade-offs when information is incomplete or conflicting.

By creating dynamic, data-driven environments that replicate crises defying existing experience, virtual and extended reality training allows policymakers, responders, and mediators to rehearse ethical and procedural choices in complex, evolving scenarios. These virtual laboratories test not only the capacity for action but also the resilience of judgement itself - preparing participants to reason when prediction fails.

Conditional stability and the Akiyama caution

Professor Nobumasa Akiyama, Professor at the School of International and Public Policy at Hitotsubashi University and Director of the Center for Disarmament, Science and Technology at the Japan Institute of International Affairs (JIIA), argues that automation in the nuclear domain can stabilise systems only under conditions of meaningful human control - avoiding a logic of escalation by design.

Applied to humanitarian and environmental crises, the insight is equally urgent. Crisis computing is a fragile equilibrium: its stabilising effects are contingent on oversight, ethics, and institutional learning. When the speed of computation outpaces the capacity for comprehension, the result is not efficiency but epistemic fragility.

Current practice privileges decision-makers yet marginalises those who experience crises directly. The next phase of crisis computing should prioritise technology co-produced with affected populations - reframing preparedness as a public good, not a bureaucratic service.

However, a deeper paradox persists. The more evidence-driven preparedness becomes, the more it risks overconfidence in its own models, creating a new form of vulnerability: dependence on systems whose assumptions are opaque yet authoritative. Responsible crisis computing must treat uncertainty not as a defect to be eliminated but as a structural feature to be engaged - allowing room for reflection, dissent, and adaptation.

The emerging discipline of crisis computing stands at a crossroads. It can either deepen the human capacity to navigate complexity or narrow it to what can be computed. We must ask not only what AI predicts but what it prescribes; not only how it accelerates action but how it transforms judgement. In that synthesis between anticipation and awareness lies the real promise of crisis computing: a system of intelligence allied to the human ability to act wisely when the world becomes uncertain.