It is widely accepted that the introduction of more complex and capable systems to UxVs will place increasing pressure on operators as they attempt to process and interpret large volumes of data in a timely fashion. For the purpose of this demonstration, the team looked at Unmanned Underwater Vehicles (UUVs) equipped with multiple advanced sonars. Sonar displays tend to be one of two types: a “waterfall” display (a graphical representation of signal amplitude or strength across a fixed frequency range, varying with time), or, for hull-mounted sonars, a 360° plan position indicator display (a radar-like representation in polar coordinates of range and angle, with the origin representing the source of the signal). Current operational solutions are typically based on a single sonar or single sensor system feeding one display monitored by one operator. As more devices with significantly enhanced capabilities are added in the future, more operators and shipborne workstations may be required, with associated cost, space and human-system performance implications. To date, little attention has been paid to how best to engage the human operator within integrated UUV and support-vessel systems, or to how adaptive machine learning can be assimilated within changeable operating conditions.
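To make the two display geometries concrete, the sketch below renders both from synthetic data using matplotlib; everything here (the Rayleigh background noise, the tonal at one frequency bin, the contact at roughly 600 m and 60°) is invented for illustration and does not reflect any particular sonar system.

```python
# Minimal sketch, on synthetic data, of the two sonar display types described
# above: a "waterfall" (amplitude per frequency bin, scrolling with time) and
# a 360-degree plan position indicator (PPI: polar range/bearing, sensor at
# the origin). All values below are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# --- Waterfall: rows = time steps, columns = frequency bins ---
n_time, n_freq = 200, 256
waterfall = rng.rayleigh(scale=1.0, size=(n_time, n_freq))  # background noise
waterfall[:, 100] += 4.0  # a persistent narrowband tonal at one frequency bin

# --- PPI: amplitude on a polar grid of bearing (0-360 deg) and range ---
bearings = np.radians(np.arange(0, 361, 2))  # bearing in radians, 2-deg bins
ranges = np.linspace(0, 1000, 100)           # range in metres
ppi = rng.rayleigh(scale=1.0, size=(ranges.size, bearings.size))
ppi[60:64, 30:33] += 5.0  # a synthetic contact at ~600 m, bearing ~60 deg

fig = plt.figure(figsize=(10, 4))

ax1 = fig.add_subplot(1, 2, 1)
ax1.imshow(waterfall, aspect="auto", origin="lower", cmap="viridis")
ax1.set(title="Waterfall", xlabel="Frequency bin", ylabel="Time step")

ax2 = fig.add_subplot(1, 2, 2, projection="polar")
ax2.pcolormesh(bearings, ranges, ppi, cmap="viridis", shading="auto")
ax2.set_theta_zero_location("N")  # bearing 0 = dead ahead
ax2.set_theta_direction(-1)       # bearings increase clockwise
ax2.set_title("Plan position indicator")

plt.tight_layout()
plt.show()
```

The two panels carry the same kind of information (signal strength against two coordinates), which is what makes a single operator monitoring one display per sensor the current norm, and why adding sensors multiplies displays and workload rather than consolidating them.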