Robotics and Human Interface Technologies

The fusion of state-of-the-art artificial intelligence with robotic systems is opening up new opportunities in medical surgery, autonomous vehicles, manufacturing, and work in hazardous environments where humans cannot go, such as the decommissioning of nuclear facilities.


Real-life robots are still a world away from the movie versions, but the ones we’ve built – or helped to build – are still pretty amazing. From developing robots that can be used in the home as security guards or even carers, to world-class research in Virtual and Augmented Reality and ‘serious games’, Birmingham scientists are bringing robotic machines ‘to life’ like never before.

Robotics

Robots have been used in homes for some time (according to the International Federation of Robotics, two million were bought for domestic use in 2010, with a further 14 million units expected to be sold by 2014), but until now these automatons have been capable of carrying out only one specific task.

The goal, now, is to make robots ‘smarter’ so they can carry out multiple or complex activities. And that’s what we’re working on here in Birmingham – with high levels of success.

Led by Professor Jeremy Wyatt, Professor Ales Leonardis, Dr Michael Mistry and Dr Nick Hawes, and in collaboration with robotics labs across Europe, we have made significant progress in the development of a new generation of humanoid robots.

Let us introduce you to two of them:

Dora the Explorer

The culmination of a four-year project called CogX (Cognitive Systems that Self-Understand and Self-Extend), Dora is a task-driven robot, capable of finding objects in a building through its understanding of an environment. For example, ‘she’ is able to locate a box of cornflakes in a house by using ‘probabilistic reasoning and planning’ to exploit the programmed knowledge that cornflakes are usually found in kitchens, typically on a table or work surface.
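
To make that concrete, here is a minimal sketch of probability-guided search of the kind described above: rank the places to look by the prior probability that the object is there, and search in that order. The room names, surfaces and probabilities are illustrative assumptions, not values from the CogX system.

    # A minimal sketch of probability-guided search: rank (room, surface)
    # locations by the prior probability that the object is there, then
    # search most-likely-first. All names and numbers are illustrative.

    OBJECT_ROOM_PRIOR = {
        "cornflakes": {"kitchen": 0.80, "living room": 0.15, "bedroom": 0.05},
    }

    SURFACE_PRIOR = {
        "kitchen": {"work surface": 0.45, "table": 0.45, "floor": 0.10},
    }

    def search_plan(obj):
        """Return (probability, room, surface) triples, most likely first."""
        plan = []
        for room, p_room in OBJECT_ROOM_PRIOR[obj].items():
            surfaces = SURFACE_PRIOR.get(room, {"somewhere": 1.0})
            for surface, p_surface in surfaces.items():
                plan.append((p_room * p_surface, room, surface))
        return sorted(plan, reverse=True)

    for p, room, surface in search_plan("cornflakes"):
        print(f"try the {surface} in the {room} (prior {p:.2f})")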

Dora is able to reason about what she knows, what she doesn’t know and how what she knows changes under the actions she performs. For example, if you put the robot in a room and she doesn’t know what type of room it is, she is able to run a set of routines to find out. She can plan to do that knowing that it will change her knowledge state. This is called ‘epistemic learning’.
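
A toy illustration of that idea, under assumed action and state names of our own (this is not the CogX planner): the robot inserts a knowledge-gathering action into its plan whenever the goal depends on a fact it knows it does not yet know.

    # A minimal sketch of epistemic planning: the planner inspects its own
    # knowledge state and schedules actions whose effect is to change what
    # it knows. Action and state names are illustrative assumptions.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class KnowledgeState:
        room_type: Optional[str] = None   # None means "I don't know"

    def plan_find(obj, state):
        actions = []
        if state.room_type is None:
            # The robot knows that it doesn't know the room type, and that
            # running its classification routines will change that.
            actions.append("classify_current_room")
        actions.append(f"search_likely_surfaces_for_{obj}")
        return actions

    print(plan_find("cornflakes", KnowledgeState()))           # classify first
    print(plan_find("cornflakes", KnowledgeState("kitchen")))  # type known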

This intelligence – a combination of vision and laser-based mapping – incorporates ways of dealing with uncertainty, such as not knowing whether a CD is in a bedroom or a living room.

Dora is capable of completing several tasks, such as finding ‘all the people in the building’, as well as ‘imagining’ places she knows nothing about. For example, if human beings want to find a meeting room within a building, they have to imagine the existence of a meeting room before attempting to locate it. Likewise, Dora is able to hypothesise the existence of entities she doesn’t know anything about – and refine those hypotheses as she goes.

Dora’s abilities don’t end there: if the robot is asked to find an object that has been deliberately hidden, she can explain her failure to locate it. Not only that, she can come up with several possible explanations using her pre-programmed knowledge. So, for example, she might deduce that the magazine she’s failed to locate in the meeting room is inside a container in that room. She then has a new goal – to discover if that is correct. To do this, she can ask questions of humans.
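
As a rough sketch of that behaviour (with made-up rules rather than Dora’s actual knowledge base): on failure, candidate explanations are generated from background knowledge, and each one yields either a new search goal or a question for a human.

    # A minimal sketch of explaining a failed search: generate hypotheses
    # from background rules and turn each into a follow-up goal or a
    # question. The rules themselves are illustrative assumptions.

    BACKGROUND_RULES = [
        ("it is inside a container in this room", "search the containers here"),
        ("someone moved it to another room", "ask a human where it went"),
    ]

    def explain_failure(obj, room):
        print(f"I could not find the {obj} in the {room}.")
        for hypothesis, follow_up in BACKGROUND_RULES:
            print(f"  Perhaps {hypothesis} -> new goal: {follow_up}")

    explain_failure("magazine", "meeting room")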

Dora was seen by 12,000 people at the Robotville Festival at London’s Science Museum in December 2011, and was subsequently featured in national and international media.

Curious George

So called because ‘he’ is driven by curiosity, George represents a huge leap forward in the development of ‘cognitive robotics’.

When presented with a group of objects it doesn’t recognise, George is sophisticated enough to hold a dialogue, which ‘he’ drives, with a human researcher, so that the robot can resolve ambiguities of language. For instance, if asked to ‘pick up the mug’ on a table where there are two mugs, George will ask: ‘Which one?’ 
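
A minimal sketch of that clarification step, assuming a toy scene representation of our own invention: if a referring expression matches more than one object, the robot asks a question that distinguishes the candidates.

    # A minimal sketch of robot-driven disambiguation: if 'the mug' matches
    # several objects, ask which one is meant. Scene contents and the
    # object representation are illustrative assumptions.

    scene = [
        {"type": "mug", "colour": "red"},
        {"type": "mug", "colour": "blue"},
        {"type": "box", "colour": "blue"},
    ]

    def resolve(noun):
        matches = [o for o in scene if o["type"] == noun]
        if len(matches) == 1:
            return matches[0]
        if not matches:
            print(f"I can't see a {noun}. Can you tell me what it looks like?")
            return None
        options = " or ".join(f"the {o['colour']} one" for o in matches)
        print(f"Which one? Do you mean {options}?")
        return None

    resolve("mug")   # prints: Which one? Do you mean the red one or the blue one?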

Each time he is given additional information about the properties of an object – such as ‘the box is blue’ – George will store it to update his model system so that he can use it to recognise objects in future. He can also ask questions when baffled by something, such as ‘can you tell me the name of that new red drinks can?’ 
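
One way to picture that update step (a hypothetical data structure, not George’s actual model): each property statement is parsed and stored against the object, ready to be used for recognition later.

    # A minimal sketch of storing human-supplied properties against an
    # object model. The parsing and the model structure are illustrative.

    object_model = {}

    def learn(statement):
        """Handle simple statements of the form '<object> is <property>'."""
        obj, _, prop = statement.partition(" is ")
        object_model.setdefault(obj, set()).add(prop)

    learn("the box is blue")
    learn("the box is cardboard")
    print(object_model)   # {'the box': {'blue', 'cardboard'}}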

Lecturer in Intelligent Robotics Dr Nick Hawes comments: ‘By putting these advances together, we will one day produce robots that can learn and adapt to their environments naturally, much as humans do, using what they learn to do useful jobs for their owners. This will bring the vision of a robot in every home one step closer.’

Human Interface Technologies

Professor Bob Stone, School of Electronic, Electrical and Computer Engineering, leads the Birmingham Human Interface Technologies team to undertake world-class, high-impact, human-centred research and development in Virtual and Augmented Reality, ‘serious games’ and multimodal technologies for training and visualisation solutions in the fields of defence, healthcare and cultural heritage.

In the defence sector, for example, the SubSafe games-based simulation project culminated in an extensive experimental programme, exposing real naval recruits to a unique virtual submarine environment. The experiments generated significant results in favour of adopting interactive 3D technologies to enhance submarine safety awareness, and these results have since influenced the Royal Navy and its Australian and Canadian counterparts to adopt similar technologies for future submarine training.

Operators of CUTLASS, the advanced British bomb-disposal robot, based in the UK, Gibraltar and Cyprus, are also now benefiting from unique simulation technologies developed by us.


We have more than £3.5 million of research funding for projects either under way or due to start soon. Works-in-progress include:

  • PacMan: This takes robotic manipulation of objects a step further by programming robots to ‘know’ that objects are made up of parts, some of which are easier to grasp – and thus pick up – than others (see the sketch after this list). The aim is to enable robots to carry out tasks such as loading a dishwasher or serving a cup of tea.
  • CoDyCo: Whole-body Compliant and Dynamical Contacts in Cognitive Humanoids: Robotics lecturer Dr Mike Mistry is working on this project, which focuses on how to programme walking robots so that their ‘muscles’ have a certain amount of ‘floppiness’ in them – as human muscles do – to make them more dexterous.
  • STRANDS: Spatio-Temporal Representations and Activities for Cognitive Control in Long-Term Scenarios: Coordinated by Dr Nick Hawes in collaboration with Professor Jeremy Wyatt in the IRLab, this explores how a mobile robot can perform intelligent autonomous behaviour for long periods (up to four months) in human-populated environments. To do this, STRANDS robots extract quantitative and qualitative spatio-temporal structure from sensory experience, building representations that allow them to understand and exploit the dynamics of everyday activities.
  • Robotics at Birmingham is also part of a larger Centre for Computational Neuroscience and Cognitive Robotics. In this centre, neuroscientists and robotics researchers are working together to develop new models of robot control based on insights from neuroscience, to use robots to test predictions made by such models, and to use robots as rehabilitation devices to enable people to recover from traumatic brain injury.
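
As promised in the PacMan item above, here is a minimal sketch of part-based grasp selection, using made-up objects, parts and ‘graspability’ scores: represent each object as a set of parts, score how easy each part is to grasp, and grasp the easiest.

    # A minimal sketch of part-based grasping: objects are made up of
    # parts with different graspability, and the robot grasps the part
    # that is easiest to pick up. All names and scores are illustrative.

    OBJECT_PARTS = {
        "mug":   {"handle": 0.9, "body": 0.4, "rim": 0.2},
        "plate": {"rim": 0.7, "face": 0.1},
    }

    def choose_grasp(obj):
        parts = OBJECT_PARTS[obj]
        best = max(parts, key=parts.get)
        return best, parts[best]

    for obj in OBJECT_PARTS:
        part, score = choose_grasp(obj)
        print(f"grasp the {obj} by its {part} (graspability {score})")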