Inter-agency sensemaking and the potential for AI for alternative thinking: overcoming interoperability challenges for more politically feasible and effective serious organised crime (SOC) strategies and operations

This project aims to better understand some of the interoperability challenges that stand in the way of improved multi-agency thinking and working.

While a growing body of evidence suggests we need to develop more problem-driven, politically feasible strategies and operations on SOC and corruption, there are often differences of opinion in multi-agency teams on what this means and what is needed. Some may translate this as a need to develop responses that reflect contextual realities on the ground and are politically feasible within that context, while others may believe it means that we need more political influence to convince or press local counterparts to focus on our priorities. 

Because of this, we are also likely to find framing challenges among the different agencies and teams involved, in terms of:

  • how they define the problem (for example, one of security, politics, society, economics and so on);
  • what they think of as the right starting point or solution (for example, military, law enforcement, conflict prevention, diplomacy, aid, civil society, social policy, psychology and so on);
  • where they see the ethical and moral parameters for strategy and action;
  • what assumptions and typical mental models they bring;
  • what the primary purpose of analytical products is (for example, to inform a short-term operational response versus developing a longer-term approach to tackle the underlying causes and drivers of particular threats); and so on.

In other words, there appears to be a typical interoperability challenge at play here.

There are well-known problems in human decision-making at both individual and group levels, and sensemaking is useful as a first pass in framing the situation. As our previous research has shown, ‘Sense-making happens when you experience a 'gap', or contradiction, in your understanding of the context in which you are currently acting; it is a means by which uncertainty or discomfort can be dealt with through the recruitment of prior experiences or new information’. We will apply Klein’s Data-Frame Model as a lens to explore how people working in multi-agency teams on SOC-related issues select ‘data’ (evidence that is available to them) and combine these into a ‘frame’ (an explanatory model) in terms of their prior knowledge, beliefs and expectations. As new data become available, so the frame can be elaborated or questioned. This offers a reasonable description of how experts deal with ambiguous and uncertain data.
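To make the data-frame cycle concrete, here is a minimal illustrative sketch in Python. The class, function and anomaly threshold are our own hypothetical constructions, not part of Klein’s model or this project’s methods: a frame carries expectations, data that fit elaborate the frame, and accumulating contradictions eventually force a reframe.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """An explanatory model: the analyst's current working explanation."""
    explanation: str
    expectations: set = field(default_factory=set)   # data the frame predicts
    supporting: list = field(default_factory=list)   # data that fitted the frame
    anomalies: list = field(default_factory=list)    # data the frame could not absorb

def sensemaking_step(frame: Frame, datum: str, reframe_threshold: int = 3) -> Frame:
    """One pass of the data-frame cycle: elaborate the frame, or question it."""
    if datum in frame.expectations:
        frame.supporting.append(datum)   # data fits: the frame is elaborated
    else:
        frame.anomalies.append(datum)    # a 'gap' or contradiction is noted
    if len(frame.anomalies) >= reframe_threshold:
        # Accumulated anomalies call the frame into question; a new frame is sought.
        return Frame(explanation=f"reframed in light of: {frame.anomalies}")
    return frame
```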

Drawing on semi-structured interviews and focus group discussions with SOC policymakers and practitioners from a range of agencies, we will adapt a model originally developed by Professor Baber for the National Cyber Security Centre that focused on single-agency work on intelligence and AI. The team will design a workshop in which participants in multi-agency groups reflect on 'critical incidents' (that is, situations from their experience in which things did not go as planned or required adaptation). Once groups have a 'scenario' to work from, the next stage involves producing a timeline of events within the scenario, including mapping the different actors, groups, organisations and so on.

Following this preliminary stage, participants will reflect on which organisations are responsible for specific events and information, and how these organisations share information. The result will be a set of high-level questions relating to information sharing, managing conflict or competition, and how different organisations might apply different interpretations to the same information. The final phase of the workshop will involve critically reviewing the scenarios, particularly in terms of communication bottlenecks and the implications of missing or ambiguous information, and proposing potential solutions to these problems.
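If the workshop outputs were later to feed an AI system, the timelines and actor mappings would need a machine-readable form. The sketch below is a hypothetical schema, with all organisation names and events invented for illustration; it shows one way of recording who was responsible for each event and who the information reached, which is the raw material for the questions about sharing and bottlenecks described above.

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: str             # position on the scenario timeline, e.g. "day 3"
    description: str
    responsible: str      # organisation responsible for the event
    shared_with: list     # organisations the information reached

# A toy critical-incident timeline; all content is invented for illustration.
scenario = [
    Event("day 1", "suspect shipment seized at port", "customs", ["police"]),
    Event("day 2", "linked accounts flagged", "financial intelligence unit", []),
    Event("day 5", "raid planned without financial input", "police", ["prosecutors"]),
]

# Flag candidate communication bottlenecks: events whose information went nowhere.
for event in scenario:
    if not event.shared_with:
        print(f"possible bottleneck at {event.time}: {event.description!r} "
              f"held by {event.responsible}, shared with no one")
```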

Expected impact 

While generating useful insights in its own right, this scoping phase will also inform the potential development of a Cooperative AI system that draws on multiple possible versions of workshop scenarios to create a 'many-worlds' perspective on problems: these range from a 'world' in which all information is correct and unambiguous but the different organisations' aims compete, to others in which aims align or information is ambiguous. The ambition for this Cooperative AI system is to help bring multi-agency teams together more effectively and to improve decision-making for future SOC strategies and interventions.
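As a rough sketch of what the 'many-worlds' idea could look like computationally, the snippet below simply enumerates scenario variants along the two dimensions named above, information quality and alignment of aims. The specific values and structure are our illustrative assumptions, not a design for the actual system.

```python
from itertools import product

# The two dimensions named in the text; the specific values are illustrative.
information_states = ["correct and unambiguous", "ambiguous", "partly missing"]
aim_states = ["aligned", "competing"]

# Each combination is one 'world' in which the same base scenario is replayed.
worlds = [
    {"information": info, "aims": aims}
    for info, aims in product(information_states, aim_states)
]

for i, world in enumerate(worlds):
    print(f"world {i}: information is {world['information']}, aims are {world['aims']}")
```

An actual Cooperative AI system would of course do far more than enumerate combinations, but even this simple grid shows how quickly the space of 'worlds' grows as further dimensions are added.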

Contact 

Professor Heather Marquette (heather.marquette@fcdo.gov.uk or h.a.marquette@bham.ac.uk)


Principal Investigator
Professor Christopher Baber, University of Birmingham
Co-Investigator
Professor Andrew Howes, University of Birmingham
Co-Investigator
Professor Heather Marquette, University of Birmingham