Current projects 

Our projects span from basic work on understanding brain function, through using advanced imaging, robotic devices and virtual reality environments to promote recovery in neurological patients, to applied work on developing intelligent robots.

CogX – Cognitive Systems that Self-Understand and Self-Extend

In this four-year project, led by Birmingham and involving six universities and 35 researchers, we developed algorithms and software that enable robots to monitor their own knowledge and to plan with it. The result was a series of robot systems. Dora mapped her world, planning to acquire new knowledge as necessary to perform her tasks. George conversed with a human to find out more about the world. Dexter grasped and manipulated objects. Dora and George both planned their activity in order to acquire new information, representing what they knew, what they did not know, and how what they knew would change as they acted.
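
As a rough illustration of what planning over knowledge states involves, the sketch below represents what the robot knows as a set of facts and searches for a sequence of actions whose knowledge gains reach a goal; the action names, their effects and the goal are invented for illustration and are not the CogX planner.

```python
# Minimal sketch of planning over knowledge states (illustrative only; not the
# CogX planner). A "belief" is the set of facts the robot currently knows, and
# each action requires some knowledge and adds more.

from collections import deque

# Hypothetical actions: name -> (knowledge required, knowledge gained)
ACTIONS = {
    "explore_room":    (set(),                {"room_mapped"}),
    "detect_objects":  ({"room_mapped"},      {"objects_located"}),
    "ask_human_label": ({"objects_located"},  {"objects_labelled"}),
    "fetch_object":    ({"objects_labelled"}, {"task_done"}),
}

def plan(known, goal):
    """Breadth-first search for a sequence of actions whose combined
    knowledge gain reaches the goal facts."""
    queue = deque([(frozenset(known), [])])
    seen = {frozenset(known)}
    while queue:
        state, actions = queue.popleft()
        if goal <= state:
            return actions
        for name, (needs, gains) in ACTIONS.items():
            if needs <= state:
                nxt = frozenset(state | gains)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, actions + [name]))
    return None

print(plan(set(), {"task_done"}))
# -> ['explore_room', 'detect_objects', 'ask_human_label', 'fetch_object']
```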

Participants: Jeremy Wyatt, Nick Hawes, Aaron Sloman, Richard Dearden

More information can be found on the CogX project website.

 

GeRT – Generalising Robot Manipulation Tasks

In this three-year project we worked on object manipulation tasks. In order to work naturally in human environments such as offices and homes, robots of the future will need to be much more flexible and robust in the face of novelty than those of today. In GeRT we developed new methods to cope with novelty in manipulation tasks.

Humans cope so seamlessly with novel objects that we do not think of grasping a new cup, or screwing the lid off a jar we haven't seen before, as challenging. But this kind of everyday novelty in manipulation tasks is hard for a robot. Currently the most advanced robots can perform a task such as making a drink, which involves grasping, pouring, and twisting the cap off a jar. But the rules for how to pick up every single object must be programmed, so the ability to manipulate fifty different objects means writing fifty different programs. Moreover, when the robot encounters an object it hasn't seen before, it cannot grasp it. Worse still, if one object in a task changes then the program for the whole task may need to be rewritten. This is because how we manipulate an object depends on the task and on the other objects involved. Suppose, for example, that we substituted a mug for a glass in a task that involved pouring liquid from it into another object. The pouring position would change, as would the grasping position: perhaps we would grasp the mug by the handle and then tip it sideways to pour.

All of this means that if robots are ever going to be useful in natural settings where manipulation is involved, they need ways of generalising on the fly to cope with novel objects, and perhaps novel tasks. Our approach is to take a small set of existing robot programs for a given manipulation task, such as serving a drink, and to give the robot the ability to adapt them to a novel version of the task. These programs constitute a database of prototypes representing that class of task. When confronted with a novel instance of the same task, the robot needs to establish appropriate correspondences between objects and actions in the prototypes and their counterparts in the novel scenario. In this way the robot can solve a task that is physically substantially different but similar at an abstract level. To achieve this we use a variety of techniques from machine perception, machine learning, and artificial intelligence, such as automated planning.
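
A minimal sketch of the correspondence idea, assuming invented object attributes, roles and a toy scoring rule (this is not the GeRT system itself): score every one-to-one assignment of scene objects to prototype roles and keep the best.

```python
# Illustrative sketch of matching objects in a novel scene to the roles used in
# a stored prototype program. Objects are described by crude attributes; the
# best-scoring assignment tells the robot which novel object should play which
# role in the adapted program.

from itertools import permutations

# Prototype for "pour a drink": each role with the attributes of the original object.
prototype = {
    "container_from": {"has_handle": 1, "open_top": 1, "graspable": 1},
    "container_to":   {"has_handle": 0, "open_top": 1, "graspable": 1},
}

# Novel scene: a jug and a bowl instead of the original mug and glass.
scene = {
    "jug":  {"has_handle": 1, "open_top": 1, "graspable": 1},
    "bowl": {"has_handle": 0, "open_top": 1, "graspable": 1},
}

def similarity(a, b):
    """Count attributes on which two object descriptions agree."""
    return sum(a[k] == b.get(k) for k in a)

def best_assignment(prototype, scene):
    """Try every one-to-one mapping of scene objects to prototype roles."""
    roles = list(prototype)
    best, best_score = None, -1
    for objs in permutations(scene, len(roles)):
        score = sum(similarity(prototype[r], scene[o]) for r, o in zip(roles, objs))
        if score > best_score:
            best, best_score = dict(zip(roles, objs)), score
    return best

print(best_assignment(prototype, scene))
# -> {'container_from': 'jug', 'container_to': 'bowl'}
```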

More information is available at the GeRT project website.

Participants: Jeremy Wyatt, Rustam Stolkin, Richard Dearden

PacMan - Probabilistic and Compositional Representations of Objects for Robot Manipulation

In this three-year project, led by Birmingham, we are developing new methods for object perception, representation and manipulation so that a robot is able to robustly grasp objects even when those objects are unfamiliar, and even though the robot has unreliable perception and action.

As an example, imagine a robot that must load a dishwasher with cookware and crockery. Some of the object instances in the task are familiar to the robot; others are novel in the sense that they are unfamiliar instances drawn from familiar object categories. For example, perhaps the robot has previously manipulated mugs, but not a water jug. This new object has a handle, and so the robot can try to adapt a grasp it already knows for the handle of a mug to the handle of the jug.

Our approach assumes that objects are made of parts, and those parts of smaller parts. We have already shown that this approach enables much more robust computer vision systems that can reliably recognise objects at the category level (dogs, cars, faces, bicycles, etc.). We are now applying it to object manipulation. We are also studying how the robot's fingers and cameras actively gather information to help manipulation.
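
The compositional idea can be sketched as a small part hierarchy in which object categories are defined by parts and parts by sub-parts, so that detectors for shared parts can be reused across categories; the categories and parts below are invented examples, not the learned PacMan representations.

```python
# Illustrative sketch of a compositional, part-based object representation.
# An object category is defined in terms of parts, and parts in terms of
# sub-parts, so a detector for "mug" can reuse the detectors for its parts.

composition = {
    "mug":            ["body", "handle"],
    "jug":            ["body", "handle", "spout"],
    "body":           ["curved_surface", "flat_base"],
    "handle":         ["curved_surface"],
    "spout":          ["curved_surface"],
    # primitives have no children
    "curved_surface": [],
    "flat_base":      [],
}

def primitives(part):
    """Recursively expand a part into the primitive shape elements it is made of."""
    children = composition[part]
    if not children:
        return [part]
    out = []
    for c in children:
        out.extend(primitives(c))
    return out

# Two categories share parts, so evidence for a handle supports both of them.
print(primitives("mug"))   # ['curved_surface', 'flat_base', 'curved_surface']
print(primitives("jug"))
```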

More information is available at the PacMan project website.

Participants: Jeremy Wyatt, Ales Leonardis

Strands – Spatio-temporal Representations and Activities for Cognitive Control in Long-Term Scenarios

STRANDS aims to enable a robot to achieve robust and intelligent behaviour in human environments through adaptation to, and the exploitation of, long-term experience. Our approach is based on understanding 3D space and how it changes over time, from milliseconds to months. We are developing novel approaches to extract spatio-temporal structure from sensor data gathered during months of autonomous operation. Extracted structure will include recurring 3D shapes, objects, people, and models of activity. We will also develop control mechanisms which exploit these structures to yield adaptive behaviour in highly demanding, real-world security and care scenarios.
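
As a simple illustration of one kind of spatio-temporal structure, the sketch below estimates from logged observations how the probability of a state (here, a door being open) varies with the hour of the day; the data and model are invented for illustration and are not the STRANDS models.

```python
# Illustrative sketch: learn how often a door is observed open at each hour of
# the day, then use that to decide when it is worth trying to pass through it.

from collections import defaultdict

# (hour_of_day, door_was_open) pairs gathered over many days of autonomous operation
observations = [(9, True), (9, True), (9, False), (13, True),
                (13, True), (20, False), (20, False), (20, False)]

counts = defaultdict(lambda: [0, 0])   # hour -> [times open, times observed]
for hour, was_open in observations:
    counts[hour][1] += 1
    if was_open:
        counts[hour][0] += 1

def p_open(hour):
    """Empirical probability the door is open at this hour (0.5 if never observed)."""
    opened, seen = counts[hour]
    return opened / seen if seen else 0.5

for h in (9, 13, 20):
    print(f"P(open at {h}:00) = {p_open(h):.2f}")
```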

By autonomously modelling spatio-temporal dynamics, our robots will be able to run for significantly longer than current systems (at least 120 days by the end of the project). Long runtimes provide previously unattainable opportunities for a robot to learn about its world. We are integrating our advances into complete robot systems to be deployed and evaluated at two end-user sites: a care home for the elderly in Austria, and an office environment patrolled by a security firm in the UK. The tasks these systems will perform are impossible without long-term adaptation to spatio-temporal dynamics.

STRANDS will produce a wide variety of results, from software components to an evaluation of robot assistants for care staff. These results will benefit society in a range of ways: researchers will be able to access our results as open-access papers, software and data; our methodology for creating long-running robots will encourage roboticists to tackle this unsolved problem in our field; industrialists will see how cognitive robots can play a key role in their businesses, and will have access to prototypes for their own use; and society will benefit as robots become more capable of assisting humans, a necessary advance given, for example, the demographic shifts in the healthcare sector.

Participants: Nick Hawes, Jeremy Wyatt

More information is available at the Strands project website.

CoDyCo – Whole Body Compliant Dynamical Contacts in Cognitive Humanoids

The aim of CoDyCo is to advance the current control and cognitive understanding of robust, goal-directed whole-body motion interaction with multiple contacts. CoDyCo is a four-year project running from March 2013. At the end of each year a scenario will be used to validate our theoretical advances on the iCub robot.

Methodologically, CoDyCo activities are divided into four deeply intertwined categories: control theory, machine learning, human behavioural experiments and software development. Each of these serves CoDyCo's objectives to: 1) develop a general software toolkit for whole-body dynamics computation with multiple external contacts, 2) conduct human behavioural studies to understand how humans use external contacts, including interpersonal cooperative contacts, in natural whole-body tasks, 3) develop a control architecture for whole-body coordination and regulation of whole-body compliance, and 4) leverage machine learning methods for acquiring models of compliant contact with the environment and of physical interactions with humans.

At Birmingham we are studying human behaviour and developing new methods for compliant control.

Participants: Michael Mistry, Chris Miall, Alan Wing

More information is available at the CoDyCo project website.

Rhythms in the Mind, Brain, and Action

Sensory information is often temporally structured, and our brain exploits this regularity for perception. We study how expectations influence our perception of rhythm, music, and the timing of stimuli.

The state of activity in the brain fluctuates regularly. The characteristics of these oscillations and of their synchronization are related to external stimulation and cognitive processing. Using physiological recordings we investigate the parameters and predictability of neurocognitive models.

The properties of the environment, as well as our internal states, influence how we move. We record repetitive actions, as in dance and music production, to investigate which stimulus properties, cognitive processes, and muscular actuations allow us to synchronize with external rhythms.

Principal Investigators

Dr Max Di Luca
Dr Mark Elliot
Dr Simon Hanslmayr
Professor Alan Wing 
Professor Kimron Shapiro
Professor Jane Raymond
Dr Maria Wimber

Funding

Marie Curie ‘TICS’

Focus

Temporal and Rhythmic Perception. Neural Oscillations. Sensorimotor Synchronization.

Techniques

Psychophysics to measure perception
Electroencephalography (EEG) to look at internal oscillations
Transcranial stimulation (tDCS, tACS, TMS) to perturb brain rhythms
Motion capture to record body movements
Computational modelling to integrate the three streams of research

Exploiting structure in human environments

This work uses qualitative and probabilistic models to learn, and then exploit, the structure and predictable dynamics of the appearance, configuration and human activities present in everyday environments. Examples include learning the typical spatial arrangements of objects on tabletops over time to support robot object search, or the fastest routes for a mobile robot to take through an environment at a particular time of day.
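
A minimal sketch of the object-search example, assuming invented object types and sighting counts (not the deployed system): rank the surfaces on which an object type has most often been seen and search them in that order.

```python
# Illustrative sketch of using learned spatial structure to order a robot's
# object search. The robot searches first the surfaces where this object type
# has most often been observed in the past.

from collections import Counter

# Past observations: (object_type, surface) pairs logged during patrols
sightings = [("mug", "kitchen_counter"), ("mug", "office_desk"),
             ("mug", "kitchen_counter"), ("keyboard", "office_desk"),
             ("mug", "meeting_table"), ("keyboard", "office_desk")]

def search_order(obj):
    """Surfaces ranked by how often this object type was seen there."""
    counts = Counter(place for o, place in sightings if o == obj)
    return [place for place, _ in counts.most_common()]

print(search_order("mug"))   # ['kitchen_counter', 'office_desk', 'meeting_table']
```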

Principal Investigators:

Dr Nick Hawes
Dr Jeremy Wyatt

Funding:

EU, EPSRC

Focus:

Learning semantic structure from human environments.

Techniques:

Mobile robots; probabilistic and/or qualitative modelling of space, time, activity or function; machine learning; AI planning; computational vision.

The Meta-Morphogenesis Project

Summary:

A long-term project attempting to understand the mechanisms and intermediate stages in the evolution of biological information processing, including precursors of human intelligence.
Further information:

www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html 

Investigators:

Professor Aaron Sloman

Funding:

Unfunded. There are no plans to apply for funding.

Focus:

An attempt to understand biological intelligent systems, and possible future intelligent robots with similar capabilities, by investigating the major transitions in biological information processing from pre-biotic materials onwards. The transitions include changes in the opportunities and problems posed by the environment, changes in sensory-motor morphology, changes in the information content acquired, stored, derived, and used, changes in forms of representation, changes in mechanisms for manipulating information, changes in the forms of information and mechanisms used in reproduction, changes in forms of learning, both across generations and within individuals, and changes in social/cultural forms of information processing and information transmission, including teaching and education.

Techniques:

Wild speculation tamed by philosophical analysis, study of the literature, critical analysis and, when possible, experiments on humans and other animals. Eventually, construction of working intelligent systems will be required, once ideas become sufficiently precise. There is already a vast amount of relevant literature from many disciplines, including ethology, developmental psychology, linguistics, AI, robotics and philosophy.

Training for robot-mediated motor learning and stroke rehabilitation

Summary:

This research is part of a project aiming to develop a novel robot-mediated training method to improve the reaching movements of stroke patients. In the training task, the patient holds a robotic arm that both records movement parameters and provides assistive and/or resistive forces to guide the impaired movement. A novel adaptive algorithm selects the next training movements based on recent movement performance and a general learning principle. Additional objectives of this research include a thorough characterization of upper-limb reaching performance and learning in healthy and stroke populations, and an evaluation of recovery-related neural reorganization.

Robotic devices map upper limb performance and adaptively schedule rehabilitation
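
As a rough illustration of adaptive scheduling, the sketch below picks the next training direction where recent performance is weakest, with occasional random choices so every direction keeps being sampled; the error scores and selection rule are invented and are not the project's algorithm.

```python
# Illustrative sketch of adaptively scheduling the next training movement from
# recent performance.

import random

# Recent mean reaching error (e.g. in cm) for each movement direction
recent_error = {"north": 1.2, "north_east": 3.4, "east": 2.1, "south_east": 0.8}

def next_target(errors, explore=0.1):
    """Usually train the direction with the largest recent error, but
    occasionally pick a random one so all directions keep being sampled."""
    if random.random() < explore:
        return random.choice(list(errors))
    return max(errors, key=errors.get)

print(next_target(recent_error))   # most often 'north_east'
```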

Investigators:

Professor Chris Miall
Professor Alan Wing
Dr Orna Rosenthal
Dr Jeremy Wyatt

Funding:

MRC


Focus:

Optimizing motor learning and recovery (of the upper-limb) based on general learning principles

Robot-mediated motor training

Kinematic (spatiotemporal) and kinetic (muscle and joint force) aspects of reaching movements across the workspace

Mechanisms underlying motor recovery

Techniques:

Psychophysics, kinetic analysis, robotics, EMG (muscle activity), motion analysis, MRI

Brain-Computer-Interface (BCI)

Summary:

A Brain-Computer Interface (BCI) connects the brain with an external device, e.g. a mouse pointer, allowing the device to be driven by EEG signals. We envisage a broad range of commercial and non-commercial applications for BCIs, such as gaming, stroke rehabilitation, real-time neurofeedback, and vehicle alertness monitoring. In a recent pilot study, we developed a simple BCI for a LEGO Mindstorms NXT robot arm. This project aims to build on this success and extend the current functionality of the LEGO-BCI.
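
A minimal sketch of the kind of pipeline involved, assuming a crude band-power feature, synthetic EEG windows and an off-the-shelf classifier (not the actual LEGO-BCI): features from short EEG windows are mapped to discrete device commands.

```python
# Illustrative sketch of an EEG-to-command pipeline. Power features in short
# windows are fed to a classifier whose output is mapped to robot-arm commands.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def band_power(window):
    """Crude feature: mean squared amplitude per channel."""
    return (window ** 2).mean(axis=1)

# Synthetic training data: 100 windows of 8 channels x 128 samples, two classes
X = np.array([band_power(rng.normal(scale=1 + label, size=(8, 128)))
              for label in (0, 1) for _ in range(50)])
y = np.repeat([0, 1], 50)

clf = LogisticRegression().fit(X, y)

COMMANDS = {0: "arm_left", 1: "arm_right"}
new_window = rng.normal(scale=2.0, size=(8, 128))
print(COMMANDS[int(clf.predict([band_power(new_window)])[0])])
```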

Investigators:

Dietmar Heinke
Saber Sami

Techniques:

Machine learning, EEG, MATLAB programming

Cognitive Neuroscience with LEGO Mindstorms NXT

Summary:

In a recent publication we showed that LEGO Mindstorms NXT can contribute to our understanding of the brain, especially the interactions between visual cognition and motor behaviour. Based on this work numerous projects are possible: 

  • There is a wealth of data on human reaching and grasping. However, these data have never been modelled in such a framework.
  • In recent years, more and more evidence supports the idea that humans utilize feed-forward models (well known from technical control theory) to guide reaching and grasping. Our LEGO robot constitutes an ideal environment to test this theory in a natural set-up (see the sketch after this list).
  • Recent progress in developmental psychology shows that toddlers' interactions with the environment foster their learning progression. With the LEGO robot, we can mimic these interactions (developmental robotics) and for instance, explore how the recognition of objects can be learnt in such a scenario.
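
As a flavour of the feed-forward (forward) model idea mentioned above, the sketch below predicts the sensory consequence of each motor command and compares it with the actual outcome; the linear plant and gains are assumptions for illustration, not a model of the LEGO robot.

```python
# Illustrative sketch of a forward model in motor control. The controller
# predicts the sensory consequence of each motor command, so it can detect
# errors before slow sensory feedback arrives.

def plant(position, command):
    """The real (here simulated) arm: its true gain differs from the model's."""
    return position + 0.9 * command

def forward_model(position, command, gain=1.0):
    """Internal prediction of where the command should take the arm."""
    return position + gain * command

position, target = 0.0, 10.0
for step in range(5):
    command = 0.5 * (target - position)            # simple proportional controller
    predicted = forward_model(position, command)   # predicted consequence
    position = plant(position, command)            # actual consequence
    error = position - predicted                   # prediction error drives adaptation
    print(f"step {step}: position={position:.2f}, prediction error={error:.2f}")
```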

Investigator:

Dietmar Heinke

Techniques:

Java, LEGO Mindstorms NXT, image processing.

Bayesian Brain

Summary:

Using data from experiments in perception, decision making and learning, we are developing advanced statistical models to compare human behaviour with optimal Bayesian observer models. We compare these models with data from brain imaging to learn how the brain performs these computations.
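
A minimal sketch of what comparing behaviour with an optimal Bayesian observer means in the simplest case, a Gaussian prior combined with one noisy measurement; the numbers are invented for illustration and this is not one of our fitted models.

```python
# Illustrative sketch of an optimal Bayesian observer for a single noisy
# measurement with a Gaussian prior. The posterior mean is a precision-weighted
# average of prior and data, which can be compared with participants' estimates.

prior_mean, prior_var = 0.0, 4.0       # what the observer expects
measurement, meas_var = 3.0, 1.0       # noisy sensory evidence

w = (1 / meas_var) / (1 / meas_var + 1 / prior_var)   # weight on the measurement
posterior_mean = w * measurement + (1 - w) * prior_mean
posterior_var = 1 / (1 / meas_var + 1 / prior_var)

print(f"optimal estimate = {posterior_mean:.2f} (variance {posterior_var:.2f})")
# A human response biased towards the prior by roughly this amount would be
# consistent with the Bayesian observer.
```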

Principal Investigators:

Dr Ulrik Beierholm
Dr Massimiliano Di Luca
Professor Andrew Howes
Professor Uta Noppeney

Funding:

EU, Marie Curie

Focus:

Models are being developed to improve our understanding of how the brain performs multi-sensory integration, perception, learning, and economic decision making.

Techniques:

Statistical model, Reinforcement learning models, psychophysics, EEG, fMRI

Combining machine and neural processing for object detection

Summary:

This research project aims to improve object detection by optimally combining the complementary strengths of human and machine perception. The basic idea is to provide machines not only with environmental inputs for object detection, but also with neural activity features (i.e. EEG signals) recorded from humans exposed to exactly those sensory signals during object detection. In a second step we will train humans to enhance these neural activity features and gate them into awareness, to improve human object detection performance.
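
A minimal sketch of the combination step, using synthetic data in place of real image and EEG features (not the project's actual features or classifiers): the two feature sets for the same stimulus are concatenated and a single classifier is trained on the joint representation.

```python
# Illustrative sketch of fusing machine and neural features for object detection.
# Image features and EEG features for the same stimulus are concatenated and a
# single classifier is trained on the joint representation.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 200
labels = rng.integers(0, 2, n)                          # object present / absent
image_feats = rng.normal(size=(n, 20)) + labels[:, None] * 0.5
eeg_feats = rng.normal(size=(n, 10)) + labels[:, None] * 0.5

X = np.hstack([image_feats, eeg_feats])                 # joint representation
clf = SVC().fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```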

Principal Investigators: 

Professor Ales Leonardis
Professor Uta Noppeney

Funding:

EU

Focus:

Understanding and enhancing cognition and performance;

Exploring the role of cognitive neuroscience in understanding, managing and optimizing human performance. 

Techniques:

Computational Vision, Machine Learning, EEG

The multisensory brain

Summary:

Psychophysics, neuroimaging and computational modelling are combined to investigate how the human brain integrates signals from multiple senses into a coherent percept of our environment. Computational operations are described using models of Bayesian inference and learning. Neural mechanisms are characterized by combining multimodal imaging (fMRI, EEG, TMS) with advanced neuroimaging analyses including multivariate decoding (e.g. support vector machines) and effective connectivity analyses (e.g. Dynamic Causal Modelling).
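
One standard computational picture of integration is reliability-weighted (maximum-likelihood) cue fusion, sketched below with invented auditory and visual estimates; this is a textbook formula rather than a specific model from the project.

```python
# Illustrative sketch of reliability-weighted fusion of two cues: each cue is
# weighted by its inverse variance, and the fused estimate is more reliable
# than either cue alone.

def fuse(est_a, var_a, est_v, var_v):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    fused = w_a * est_a + (1 - w_a) * est_v
    fused_var = 1 / (1 / var_a + 1 / var_v)
    return fused, fused_var

# The auditory location estimate is noisier than the visual one, so vision dominates.
print(fuse(est_a=10.0, var_a=4.0, est_v=6.0, var_v=1.0))  # -> (6.8, 0.8)
```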

Investigators:

Dr Ulrik Beierholm
Dr Massimiliano Di Luca
Dr Robert Gray
Professor Uta Noppeney
Professor Alan Wing

Funding:

ERC, EU, Marie Curie, BBSRC

Focus:

Understanding the computational operations and neural mechanisms of information integration

Techniques:

Psychophysics, multimodal imaging (fMRI, EEG, TMS), computational modelling (e.g. Bayesian inference)

Haptics

Summary:

Haptics (active touch) is the combination of tactile and proprioceptive information during contact and manipulation. The haptic modality is involved both in perceiving the environment and in manipulating it physically. The influence of sensory information on action, and the reverse influence of active movement on perception, are far from being understood. Our research in the field of haptics thus aims at building computational models of human behaviour and perception. Such models can be utilized in technical applications (e.g., robotic systems, force-feedback interfaces, telepresence, rehabilitation devices).

Investigators:

Dr Massimiliano Di Luca
Dr Satoshi Endo
Dr Robert Gray
Dr Ansgar Koene
Dr Hoi Fei Kwok
Dr Markus Rank
Dr Roberta Roberts
Professor Chris Miall
Professor Alan Wing
Professor Jeremy Wyatt

Funding:

The European Commission, EPSRC, NERC, The Royal Society, The Leverhulme Trust, The British Council, The Department of Trade and Industry, AWM, DAAD

Focus:

Role of active exploration in perception
Trade-off between exploration and exploitation
Rehabilitation for patients with motor deficits
Learning from human grasping to improve robotic grasping

Techniques:

Psychophysics, motion analysis, robotics, computational modelling

Neuropsychological screening and rehabilitation

Summary:

Developing clinically applicable techniques for cognitive screening after brain injury, examining cognitive deficits in relation to brain lesions, and using fMRI to develop neural-level models of neurorehabilitation.

Principal Investigators:

Professor J Riddoch with Professor G Humphreys, Dr P Rotshtein

Funding:

MRC and Stroke Association

Focus:

Predicting neural recovery from multi-modal brain imaging

Techniques:

Neuropsychology, MRI (structural and functional), computational modelling

Visual perception and the brain 

Summary:

Using advanced pattern classifier systems to distinguish the neural responses to different depth cues, in order to assess whether depth information is coded in a common manner. The project also uses psychophysical work to characterise how depth cues may be analysed in computer vision and used to create graphical surfaces.
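
A minimal sketch of the pattern-classification approach, using synthetic voxel patterns rather than real fMRI data: if a cross-validated classifier can tell which depth cue evoked a response pattern, the region carries cue-specific information.

```python
# Illustrative sketch of multi-voxel pattern classification. If a classifier can
# tell which depth cue evoked a response pattern, that region carries
# cue-specific information.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 50
cue = np.repeat([0, 1], n_trials // 2)                 # e.g. disparity vs motion cue
patterns = rng.normal(size=(n_trials, n_voxels)) + cue[:, None] * 0.4

acc = cross_val_score(LinearSVC(), patterns, cue, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f} (chance = 0.50)")
```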

Principal Investigators:

Professor Z Kourtzi, Dr P Tino and Dr A Welchman

Funding:

BBSRC, EPSRC, EU

Focus:

Understanding the neural basis of depth perception

Techniques:

Psychophysics, fMRI, computational modelling

Understanding neural networks through intervention and modelling

Summary:

Using fMRI with neuropsychological patients, combined with TMS, to show the necessary role of different regions within cortical networks, and using computational models to define the functional roles of those regions.

Principal Investigators:

Dr H Allen

Dr J Braithwaite

Professor G Humphreys

Professor Z Kourtzi

Dr C Mevorach

Professor U Noppeney

Funding:

BBSRC, MRC

Focus:

Using combined neuropsychology-fMRI and fMRI-TMS to decompose interacting neural networks; computational modelling.

Predictive perception and action

Summary:

The role of the cerebellum in the forward planning of action, studied through robotic systems in which force feedback is varied.

Principal Investigators:

Professor C Miall, Professor A Wing, Dr J Wyatt

Funding:

BBSRC, Wellcome, HFSPO, EU, DTi

Focus:

Perception and action

Techniques:

Human psychophysics, robotics, fMRI

The learning brain 

Summary:

Using and developing multi-modal imaging techniques and multi-voxel classifier systems to understand individual differences in perception and learning, and using these to guide procedures for maintaining plasticity across the age range. We are also modelling human action.

Principal Investigators:

Professor Uta Noppeney, Professor Z Kourtzi, Dr A Bagshaw, Dr H Allen, Dr A Welchman, Professor A Wing

Funding:

BBSRC, EPSRC, ESRC

Focus:

Understanding and exploiting neural plasticity

Techniques:

Multi-modal imaging, human psychophysics, computational modelling

The feeling brain

Summary

Examining the functional and neural mechanisms of sensory feedback in motor control and the role of active movement in sensory discrimination (haptic perception).

Principal Investigators:

Professor G Humphreys

Professor C Miall

Dr R Roberts

Professor A Wing

Dr A Welchman

Dr J Wyatt

Funding:

BBSRC, EU

Focus:

Understanding sensory-motor interactions in movement control and touch

Techniques:

Human psychophysics, robotic systems, motion analysis, fMRI 

Multi-modal imaging techniques 

Summary:

We are developing procedures for multi-modal imaging, combining the temporal resolution of EEG with the spatial resolution of MRI, and combining MRI with transcranial magnetic stimulation, to provide improved analyses of neurological disorders and ageing.

Principal Investigators:

Dr T Arvanitis

Dr A Bagshaw

Professor Z Kourtzi

Professor Uta Noppeney

Funding:

BBSRC, EPSRC, EU

Focus:

Developing techniques for multi-modal imaging, combining fMRI, EEG, diffusion tensor imaging and magnetic resonance spectroscopy (MRS), applied to understanding cognition after brain injury and in epilepsy, in both adults and children.

Cognitive robotics 

Summary:

Combining the development of robotic systems with human psychophysical studies in order to optimise robotic function while also learning about human systems by testing in robotic models. Also the application of robotic systems to rehabilitation.

Principal Investigators:

Dr J Wyatt with Dr R Dearden and Dr N Hawes, and Professor A Sloman, Professor G Humphreys, Professor C Miall

Funding:

EPSRC, EU, NERC

Focus:

Development of active robotic systems

Techniques:

Computer science, Artificial Intelligence, Computational modelling, Machine Learning

Agent-based modelling of human social behaviour

Summary:

An agent-based model (ABM) consists of independent agents interacting with each other and with the environment. The behaviour of each agent is defined by a set of rules specific to that agent. This type of model allows the researcher to observe how the individual behaviours of agents (the microscopic level) give rise to patterns at the macroscopic level (emergent behaviour). These properties make ABMs an ideal approach to modelling human social behaviour. For instance, the seminal Schelling model showed that social segregation can emerge from a minor preference to live in a neighbourhood populated by a similar social group. In collaboration with colleagues from the social psychology group I aim to apply ABMs to a broad range of social phenomena, e.g. help-seeking by ethnic minorities, social destigmatisation, and the social housing market.
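
A minimal version of the Schelling dynamic can be sketched in a few lines: agents of two groups move to a random empty cell whenever too few of their neighbours belong to their own group; the grid size, threshold and neighbourhood below are illustrative choices, not one of our research models.

```python
# Minimal sketch of a Schelling-style segregation model on a 1-D grid. Even a
# mild preference for similar neighbours produces clustered, segregated patterns.

import random

random.seed(0)
grid = [random.choice(["A", "B", None]) for _ in range(60)]   # None = empty cell
THRESHOLD = 0.5   # agent is unhappy if under half of its neighbours are like it

def unhappy(i):
    me = grid[i]
    neighbours = [grid[j] for j in (i - 1, i + 1) if 0 <= j < len(grid) and grid[j]]
    if me is None or not neighbours:
        return False
    return sum(n == me for n in neighbours) / len(neighbours) < THRESHOLD

for _ in range(2000):                      # asynchronous updates
    i = random.randrange(len(grid))
    if unhappy(i):
        empties = [j for j, c in enumerate(grid) if c is None]
        if empties:
            j = random.choice(empties)
            grid[j], grid[i] = grid[i], None

print("".join(c or "." for c in grid))     # clusters of A's and B's emerge
```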

Principal Investigators:

Dr Dietmar Heinke

Funding:

EPSRC, ESRC, BBSRC

Techniques:

MATLAB programming