Current projects 

Our projects range from basic research on understanding brain function, through the use of advanced imaging, robotic devices and virtual reality environments to promote recovery in neurological patients, to applied work on developing intelligent robots.

Rhythms in the Mind, Brain, and Action

Sensory information is often temporally structured, and our brain exploits this regularity for perception. We study how expectations influence our perception of rhythm, music, and the timing of stimuli.

The state of activity in the brain fluctuates rhythmically. The characteristics of these oscillations and their synchronization are related to external stimulation and cognitive processing. Using physiological recordings, we investigate these parameters and test the predictions of neurocognitive models.

The properties of the environment, as well as our internal states, influence how we move. We record repetitive actions, as in dance and music production, to investigate which stimulus properties, cognitive processes, and muscular actuations allow us to synchronize with external rhythms.
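
To make the modelling stream concrete, here is a minimal sketch (in Python) of a linear phase-correction model of tapping to a metronome, a standard model class in this literature; the parameter values are illustrative, not the lab's fitted estimates:

    import numpy as np

    # Minimal linear phase-correction model of tapping to a metronome.
    # Each asynchrony between tap and beat is corrected by a fraction
    # (alpha) of the preceding asynchrony, plus timekeeper/motor noise.
    # Illustrative only: parameter values are invented for this sketch.

    rng = np.random.default_rng(0)
    n_taps = 200
    alpha = 0.3             # phase-correction gain (0 = no correction)
    motor_sd = 10.0         # timing noise (ms)

    asynchrony = np.zeros(n_taps)   # tap time minus beat time (ms)
    for n in range(1, n_taps):
        # next asynchrony: uncorrected carry-over minus the correction,
        # plus fresh timing noise
        asynchrony[n] = (1 - alpha) * asynchrony[n - 1] + rng.normal(0.0, motor_sd)

    print(f"mean asynchrony: {asynchrony.mean():.1f} ms, "
          f"sd: {asynchrony.std():.1f} ms")
    # With 0 < alpha < 2 the process is stable: taps stay locked to the beat.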

Principal Investigators

Dr Max Di Luca
Dr Simon Hanslmayr
Professor Alan Wing 
Professor Kimron Shapiro
Professor Jane Raymond
Dr Maria Wimber

Funding

Marie Curie ‘TICS’

Focus

Temporal and Rhythmic Perception. Neural Oscillations. Sensorimotor Synchronization.

Techniques

Psychophysics to measure perception
Electroencephalography (EEG) to look at internal oscillations
Transcranial stimulation (tDCS, tACS, TMS) to perturb brain rhythms
Motion capture to record body movements
Computational modelling to integrate the three streams of research

The Meta-Morphogenesis Project

Summary:

A long-term project attempting to understand the mechanisms and intermediate stages in the evolution of biological information processing, including precursors of human intelligence.
Further information:

www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html 

Investigators:

Professor Aaron Sloman

Funding:

Unfunded. There are no plans to apply for funding.

Focus:

Attempt to understand biological intelligent systems, and possible future intelligent robots with similar capabilities, by investigating the major transitions in biological information processing from pre-biotic materials onwards. The transitions include:

  • changes in the opportunities and problems posed by the environment;
  • changes in sensory-motor morphology;
  • changes in the information content acquired, stored, derived, and used;
  • changes in forms of representation;
  • changes in mechanisms for manipulating information;
  • changes in the forms of information and mechanisms used in reproduction;
  • changes in forms of learning, both across generations and within individuals;
  • changes in social/cultural forms of information processing and information transmission, including teaching and education.

Techniques:

Wild speculation tamed by philosophical analysis, study of the literature, critical analysis and, when possible, experiments on humans and other animals. Eventually, construction of working intelligent systems will be required, once ideas become sufficiently precise. There is already a vast amount of relevant literature from many disciplines, including ethology, developmental psychology, linguistics, AI, robotics and philosophy.

Training for robot-mediated motor learning and stroke rehabilitation

Summary:

This research project is part of a programme aiming to develop a novel robot-mediated training method to improve the reaching movements of stroke patients. In the training task, the patient holds a robotic arm that both records movement parameters and provides assistive and/or resistive forces to guide the impaired movement. A novel adaptive algorithm selects the next training movements based on recent movement performance and a general learning principle. Additional objectives of this research include a thorough characterization of upper-limb reaching performance and learning in healthy and stroke populations, and evaluation of recovery-related neural reorganization.
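
For illustration only, the general principle of performance-driven scheduling might look like the following sketch, which samples the next reach direction in proportion to recent error; this is a toy version of the idea, not the project's actual algorithm:

    import numpy as np

    # Illustrative sketch of adaptive scheduling of training movements:
    # directions where recent error is high are sampled more often, so
    # practice concentrates on the weakest movements. This is NOT the
    # project's algorithm, just a toy version of the general principle.

    rng = np.random.default_rng(1)
    directions = np.linspace(0, 2 * np.pi, 8, endpoint=False)  # 8 reach targets
    recent_error = np.ones(len(directions))  # running error per direction (cm)

    def next_target():
        """Sample the next training direction, weighted by recent error."""
        weights = recent_error / recent_error.sum()
        return rng.choice(len(directions), p=weights)

    def record_trial(target, error, smoothing=0.8):
        """Update the running error estimate for the practised direction."""
        recent_error[target] = smoothing * recent_error[target] + (1 - smoothing) * error

    for trial in range(100):
        t = next_target()
        simulated_error = max(0.1, rng.normal(2.0, 0.5))  # stand-in for measured error
        record_trial(t, simulated_error)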

Robotic devices map upper limb performance and adaptively schedule rehabilitation

Investigators:

Professor Chris Miall
Professor Alan Wing
Dr Orna Rosenthal

Funding:

MRC


Focus:

Optimizing motor learning and recovery (of the upper-limb) based on general learning principles

Robot-mediated motor training

Kinematic (spatiotemporal) and kinetic (muscle and joint force) aspects of reaching movements across the workspace

Mechanisms underlying motor recovery

Techniques:

Psychophysics, kinetic analysis, robotics, EMG (muscle activity), motion analysis, MRI

Bayesian Brain

Summary:

Using data from experiments in perception, decision making and learning we are developing advanced statistical models to compare human behaviour to optimal Bayesian observer models. We compare these models to data from brain imaging to learn how the brain performs these computations.
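
As a flavour of the approach, the sketch below implements a minimal Bayesian observer for a two-alternative perceptual decision, whose choices can be compared with human data; the stimulus values and noise levels are invented for the example (uses NumPy and SciPy):

    import numpy as np
    from scipy.stats import norm

    # Minimal Bayesian observer for a two-alternative decision:
    # which of two stimulus values (s0, s1) generated a noisy measurement m?
    # Synthetic sketch; real models fit parameters to behavioural data.

    s0, s1 = -1.0, 1.0           # the two candidate stimuli
    sigma = 1.5                  # sensory noise standard deviation
    prior_s1 = 0.7               # prior probability of s1

    def posterior_s1(m):
        """Posterior probability that the stimulus was s1 given measurement m."""
        like0 = norm.pdf(m, loc=s0, scale=sigma) * (1 - prior_s1)
        like1 = norm.pdf(m, loc=s1, scale=sigma) * prior_s1
        return like1 / (like0 + like1)

    # The optimal observer reports s1 whenever the posterior exceeds 0.5;
    # human choice curves can then be compared against this benchmark.
    for m in (-2.0, 0.0, 2.0):
        print(f"m = {m:+.1f}  ->  P(s1 | m) = {posterior_s1(m):.2f}")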

Principal Investigators:

Dr Ulrik Beierholm
Dr Massimiliano Di Luca
Professor Andrew Howes

Funding:

EU, Marie Curie

Focus:

Models are being developed to improve our understanding of how the brain performs multi-sensory integration, perception, learning, and economic decision making.

Techniques:

Statistical models, reinforcement learning models, psychophysics, EEG, fMRI

Combining machine and neural processing for object detection

Summary:

This research project aims to improve object detection by optimally combining the complementary strengths of human and machine perception. The basic idea is to provide machines not only with environmental inputs for object detection, but also with neural activity features (e.g., EEG signals) recorded in humans exposed to exactly those sensory signals during object detection. In a second step, we will train humans to enhance these neural activity features and gate them into awareness to improve human object detection performance.

Principal Investigators: 

Professor Ales Leonardis

Funding:

EU

Focus:

Understanding and enhancing cognition and performance;

Exploring the role of cognitive neuroscience in understanding, managing and optimizing human performance. 

Techniques:

Computational Vision, Machine Learning, EEG

The multisensory brain

Summary:

Psychophysics, neuroimaging and computational modelling are combined to investigate how the human brain integrates signals from multiple senses into a coherent percept of our environment. Computational operations are described using models of Bayesian inference and learning. Neural mechanisms are characterized by combining multimodal imaging (fMRI, EEG, TMS) with advanced neuroimaging analyses including multivariate decoding (e.g. support vector machines) and effective connectivity analyses (e.g. Dynamic Causal Modelling).
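
To illustrate the decoding step, the sketch below trains a linear support vector machine to classify condition labels from synthetic voxel patterns; real analyses run on measured fMRI data with study-specific cross-validation schemes:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Toy multivariate decoding: can a linear SVM read out the stimulus
    # condition (auditory vs. audiovisual, say) from voxel patterns?
    # Synthetic data stand in for beta estimates from an fMRI experiment.

    rng = np.random.default_rng(2)
    n_trials, n_voxels = 80, 100
    labels = np.repeat([0, 1], n_trials // 2)
    patterns = rng.normal(size=(n_trials, n_voxels))
    patterns[labels == 1, :10] += 0.5   # weak condition signal in 10 voxels

    clf = SVC(kernel="linear")
    scores = cross_val_score(clf, patterns, labels, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")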

Investigators:

Dr Ulrik Beierholm
Dr Massimiliano Di Luca
Dr Robert Gray
Professor Alan Wing

Funding:

ERC, EU, Marie Curie, BBSRC

Focus:

Understanding the computational operations and neural mechanisms of information integration

Techniques:

Psychophysics, multimodal imaging (fMRI, EEG, TMS), computational modelling (e.g. Bayesian inference)

Haptics

Summary:

Haptics (active touch) is the combination of tactile and proprioceptive information during contact and manipulation. The haptic modality is involved both in perceiving the environment and in manipulating it physically. The influence of sensory information on action, and the reverse influence of active movement on perception, are far from understood. Our research in the field of haptics therefore aims at building computational models of human behaviour and perception. Such models can be utilized in technical applications (e.g., robotic systems, force-feedback interfaces, telepresence, rehabilitation devices).

Investigators:

Dr Massimiliano Di Luca
Dr Robert Gray
Dr Hoi Fei Kwok
Dr Roberta Roberts
Professor Chris Miall
Professor Alan Wing

Funding:

The European Commission, EPSRC, NERC, The Royal Society, The Leverhulme Trust, The British Council, The Department of Trade and Industry, AWM, DAAD

Focus:

Role of active exploration in perception
Trade-off between exploration and exploitation
Rehabilitation for patients with motor deficits
Learning from human grasping to improve robotic grasping

Techniques:

Psychophysics, motion analysis, robotics, computational modelling

Neuropsychological screening and rehabilitation

Summary:

Developing clinically applicable techniques for cognitive screening after brain injury, examining cognitive deficits in relation to brain lesions, and using fMRI to develop neural-level models of neurorehabilitation.

Principal Investigators:

Dr P Rotshtein

Funding:

MRC and Stroke Association

Focus:

Predicting neural recovery from multi-modal brain imaging

Techniques:

Neuropsychology, MRI (structural and functional), computational modelling

Visual perception and the brain 

Summary:

Using advanced pattern classifier systems to distinguish the neural responses to different depth cues, to assess whether depth information is coded in a common manner. The project also uses psychophysical work to characterise how depth cues may be analysed in computer vision and used to create graphic surfaces.

Principal Investigators:

Dr P Tino

Funding:

BBSRC, EPSRC, EU

Focus:

Understanding the neural basis of depth perception

Techniques:

Psychophysics, fMRI, computational modelling

Understanding neural networks through intervention and modelling

Summary:

Using fMRI with neuropsychological patients, combined with TMS, to show the necessary role of different regions within cortical networks, and using computational models to define the functional roles of those regions.

Principal Investigators:

Dr C Mevorach

Funding:

BBSRC, MRC

Focus:

Using combined neuropsychology-fMRI and fMRI-TMS to decompose interacting neural networks; computational modelling.

Predictive perception and action

Summary:

The role of the cerebellum in forward planning of action studied through robotic systems where force feedback is varied.

Principal Investigators:

Professor C Miall, Professor A Wing

Funding:

BBSRC, Wellcome, HFSPO, EU, DTi

Focus:

Perception and action

Techniques:

Human psychophysics, robotics, fMRI

The learning brain 

Summary:

Using and developing multi-modal imaging techniques and multi-voxel classifier systems to understand individual differences in perception and learning, and using this to guide procedures for maintaining plasticity across the age range. We are also modelling human action.

Principal Investigators:

Dr A Bagshaw, Professor A Wing

Funding:

BBSRC, EPSRC, ESRC

Focus:

Understanding and exploiting neural plasticity

Techniques:

Multi-modal imaging, human psychophysics, computational modelling

Multi-modal imaging techniques 

Summary:

We are developing procedures for multi-modal imaging, combining the temporal resolution of EEG with the spatial resolution of MRI, and combining MRI with transcranial magnetic stimulation, to provide improved analyses of neurological disorders and ageing.

Principal Investigators:

Dr T Arvanitis

Dr A Bagshaw

Funding:

BBSRC, EPSRC, EU

Focus:

Developing techniques for multi-modal imaging, combining fMRI, EEG, diffusion tensor imaging and magnetic resonance spectroscopy (MRS), applied to understanding cognition after brain injury and in epilepsy, in both adults and children.

Characterising Creativity in Tool-using Activities

How are tools (by which I mean physical objects) used in creative practices (like painting, drawing, sculpting)? I am interested in what creative practitioners do with their tools and how this practice relates to cognition.  A recent paper raised some of the issues of interest (https://link.springer.com/article/10.1007/s13347-017-0292-0). There are several types of project that could be considered in relation to this:

  • Data from sensors (on tools or on the person) allow us to characterise discrete actions, but how can we better understand sequences of movements? In other words, how can time-series analysis help us understand complex movement? (A toy sketch follows this list.)
  • Is there a relationship between cognitive aspects of creativity and physical actions (as implied in the paper)? In other words, what does an embodied theory of creativity look like?
  • Do different creative practices have similar underlying phenomena or are there distinct phenomena surrounding each practice?
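
As one example of what the time-series work could look like, the sketch below segments a synthetic accelerometer trace into discrete "strokes" by thresholding its moving RMS; the signal and threshold are invented for illustration:

    import numpy as np

    # Toy example of turning raw movement sensor data into discrete actions:
    # threshold the moving RMS of an accelerometer signal to find "strokes".
    # Synthetic signal; real analyses would use data from tools or the body.

    rng = np.random.default_rng(3)
    fs = 100                                   # sample rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    signal = rng.normal(0, 0.05, t.size)       # sensor noise
    for start in (1.0, 4.0, 7.0):              # three simulated brush strokes
        idx = (t >= start) & (t < start + 1.0)
        signal[idx] += np.sin(2 * np.pi * 3 * t[idx])

    window = fs // 4
    rms = np.sqrt(np.convolve(signal**2, np.ones(window) / window, mode="same"))
    moving = rms > 0.2                         # movement vs. rest
    onsets = np.flatnonzero(np.diff(moving.astype(int)) == 1)
    print("detected stroke onsets (s):", np.round(t[onsets], 2))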

Techniques:

The projects could involve some of the following (but we can work around most of these if you don’t have the prior knowledge): signal processing (with R or MatLab), basic electronics (making things with sensors and Arduinos), data collection in the field (at the workplaces of practising artists), and phenomenological interviews (or other forms of in-depth interviewing to gain insight into experiential aspects of creative practice).

Investigator:

Chris Baber

Sensemaking with All-source Intelligence

The ability to draw inferences from a collection of disparate information sources is key to many intelligence analysis professions.  I’ve been looking into how ‘uniformed’ professions might go about making sense of complex data.  Here’s a fairly recent piece of work on this: https://dl.acm.org/citation.cfm?id=2870212. Projects that relate to this area could involve the following:

  • How do people create a ‘gist’ (or working model) of inferences when dealing with lots of different sources of data?  Does this involve the creation of narratives (as work on jury decision making suggests) or does it involve the creation of diagrams (or sketches)?
  • How do people challenge their own (and other people’s) inferences during sensemaking?
  • How can technology help or hinder people’s abilities to make sense of multiple information sources?

Techniques:

It would be quite nice to make a software prototype for this (using one of the many Javascript toolkits, like d3), but it is equally possible to make a paper-based set of materials for experiments (this is what I’ve done in the North x Southwest Exercise reported in the paper linked above). It would be good to run an experiment, with individuals or small groups working together. I’d also quite like to run a study using Agent-Based Models to see how information sharing affects sensemaking.
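
A deliberately simple starting point for such an agent-based study is sketched below: agents hold a numerical belief and repeatedly average it with their neighbours' (a DeGroot-style toy, not a committed model of sensemaking):

    import numpy as np

    # Toy agent-based model of information sharing: each agent holds a
    # numerical belief and repeatedly averages it with its neighbours'.
    # Varying who talks to whom shows how sharing shapes group consensus.

    rng = np.random.default_rng(4)
    n_agents = 20
    beliefs = rng.normal(0, 1, n_agents)       # initial private estimates

    # ring network: each agent shares with its two immediate neighbours
    neighbours = {i: [(i - 1) % n_agents, (i + 1) % n_agents]
                  for i in range(n_agents)}

    for step in range(50):
        updated = beliefs.copy()
        for i, nbrs in neighbours.items():
            updated[i] = np.mean([beliefs[i], *[beliefs[j] for j in nbrs]])
        beliefs = updated

    print(f"belief spread after sharing: {beliefs.std():.3f}")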

Investigator:

Chris Baber

Reinforcement-based motor learning

We have shown in several recent publications that participants’ ability to learn a new motor action through reinforcement-based (reward) feedback is highly variable. Specifically, while 66% of people are able to learn, 33% show a complete inability to learn. Our goal moving forward is to examine the differences between learners and non-learners in more detail and across a range of tasks: simple reaching, force-field adaptation, effort production, and chord learning (similar to learning piano chords). With help from people in the lab, this will involve programming a novel task using one of several possible pieces of equipment (KINARM robotic manipulandum, motion capture equipment, force capture equipment) with either Matlab or Simulink (graphical Matlab). Participants will then be recruited and tested, and the data analysed, from basic kinematic analysis through to the development of computational models.
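
For intuition, the sketch below simulates a minimal reward-based learner that receives only binary feedback and explores after misses; it is a toy in the spirit of this literature, not the lab's task code:

    import numpy as np

    # Toy reinforcement-based motor learning: the learner only receives
    # binary reward (hit/miss) and must find a hidden target angle by
    # exploring after misses and staying after hits. Illustrative only.

    rng = np.random.default_rng(5)
    target = 10.0          # hidden target reach angle (deg)
    tolerance = 3.0        # reward zone half-width (deg)
    motor_noise = 1.0      # execution noise (deg)
    explore_sd = 4.0       # extra exploration after a miss (deg)

    aim = 0.0
    for trial in range(200):
        reach = aim + rng.normal(0, motor_noise)
        rewarded = abs(reach - target) < tolerance
        if rewarded:
            aim = reach                              # keep what worked
        else:
            aim = reach + rng.normal(0, explore_sd)  # explore after failure

    print(f"final aim: {aim:.1f} deg (target {target} deg)")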

Relevant publications:

  • The contribution of explicit processes to reinforcement-based motor learning.
    Holland PJ, Codol O, Galea JM.
    J Neurophysiol. 2018 Mar 14. doi: 10.1152/jn.00901.2017. [Epub ahead of print]
  • Predicting explorative motor learning using decision-making and motor noise.
    Chen X, Mohr K, Galea JM.
    PLoS Comput Biol. 2017 Apr 24;13(4):e1005503. doi: 10.1371/journal.pcbi.1005503. eCollection 2017 Apr.

Techniques

Matlab, Simulink

Computational Psychology Lab (CPL)

  • Agent-based modelling of human social behaviour
  • Cognitive Neuroscience with LEGO Mindstorms NXT
  • Modelling of EEG-data
  • What is wrong with Deep Neural Networks?
  • Computational modelling of tool innovation

Agent-based modelling of human social behaviour

Summary:

An agent-based model (ABM) consists of independent agents interacting with each other and with the environment. The behaviour of each agent is defined by a set of rules known only to that agent. This type of model allows the researcher to observe how the individual behaviours of agents (the microscopic level) produce patterns at the macroscopic level (emergent behaviour). These properties make ABMs an ideal approach to modelling human social behaviour. For instance, the seminal Schelling model showed that social segregation can emerge from a minor preference to live in a neighbourhood populated by a similar social group. In collaboration with colleagues from the social psychology group, I aim to apply ABMs to a broad range of social phenomena, e.g. help-seeking by ethnic minorities, social destigmatisation, the social housing market, etc.
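
For concreteness, a minimal Schelling-style model is sketched below (a toy reimplementation in Python; the lab's models are typically written in MatLab, and the 30% threshold here is illustrative):

    import numpy as np

    # Minimal Schelling segregation model: agents of two groups move to a
    # random empty cell whenever fewer than 30% of their neighbours are of
    # their own group. Even this mild preference produces segregation.

    rng = np.random.default_rng(6)
    size, threshold = 30, 0.3
    grid = rng.choice([0, 1, 2], size=(size, size), p=[0.1, 0.45, 0.45])  # 0 = empty

    def unhappy(r, c):
        group = grid[r, c]
        nbrs = grid[max(0, r - 1):r + 2, max(0, c - 1):c + 2].ravel()
        same = np.count_nonzero(nbrs == group) - 1      # exclude self
        occupied = np.count_nonzero(nbrs != 0) - 1      # exclude self
        return occupied > 0 and same / occupied < threshold

    for sweep in range(20):
        for r, c in np.argwhere(grid != 0):
            if grid[r, c] != 0 and unhappy(r, c):
                empties = np.argwhere(grid == 0)
                er, ec = empties[rng.integers(len(empties))]
                grid[er, ec], grid[r, c] = grid[r, c], 0

    unhappy_now = sum(unhappy(r, c) for r, c in np.argwhere(grid != 0))
    print(f"agents still unhappy after 20 sweeps: {unhappy_now}")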

Principal Investigators:

Dr Dietmar Heinke

Funding:

EPSRC, ESRC, BBSRC

Techniques:

MatLab-programming

Cognitive Neuroscience with LEGO Mindstorms NXT

Summary:

In a recent publication we showed that LEGO Mindstorms NXT can contribute to our understanding of the brain, especially the interactions between visual cognition and motor behaviour. Based on this work numerous projects are possible: 

  • There is a wealth of data on human reaching and grasping. However, these data have never been modelled in such a framework.
  • In recent years, more and more evidence supports the idea that humans utilize feed-forward models (well known from technical control theory) to guide reaching and grasping. Our LEGO robot constitutes an ideal environment to test this theory in a natural set-up (a toy sketch of the idea follows this list).
  • Recent progress in developmental psychology shows that toddlers' interactions with the environment foster their learning progression. With the LEGO robot, we can mimic these interactions (developmental robotics) and for instance, explore how the recognition of objects can be learnt in such a scenario.
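
The sketch below illustrates the feed-forward (forward-model) idea in a one-dimensional reach: the controller plans against its own prediction rather than waiting for slow feedback; the gains and dynamics are invented, and the LEGO robot's controller differs in detail:

    # Toy forward-model control of a 1-D reach: the controller plans against
    # its *predicted* position (fast, internal) rather than waiting for slow
    # sensory feedback, and the prediction is nudged toward actual feedback.
    # Gains and dynamics are illustrative, not the LEGO robot's controller.

    target = 10.0
    position = 0.0        # actual limb/robot position
    predicted = 0.0       # forward model's estimate of position
    model_gain = 0.8      # assumed command-to-movement gain
    true_gain = 0.7       # actual plant gain (the model is slightly wrong)

    for step in range(30):
        command = 0.5 * (target - predicted)        # plan using the prediction
        position += true_gain * command             # what the plant really does
        predicted += model_gain * command           # what the model expects
        predicted += 0.2 * (position - predicted)   # slow feedback correction

    print(f"final position: {position:.2f} (target {target})")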

Investigator:

Dietmar Heinke

Techniques:

Java, LEGO Mindstorms NXT, image processing.

Modelling of EEG-data

Contemporary EEG techniques focus on a range of statistical methods, such as fitting general linear models, correlations, time-frequency analysis, dynamic causal models, etc. However, there has been little work on connecting EEG signals directly to computational models of cognitive abilities. Such models are similar to methods in artificial intelligence (AI) which aim at mimicking human cognitive abilities (e.g. playing and winning the ancient board game Go). The aim of this project is to develop a novel method that tests such computational models by benchmarking them against EEG signals. In other words, this novel method will allow us to establish more direct links between human cognition and EEG signals than contemporary EEG methods do. Consequently, we will be able to advance our understanding of the neural mechanisms underlying cognitive abilities.

This project will focus on evidence from visual search tasks as a test case for this novel approach. Visual search tasks aim to understand how humans analyse complex visual scenes and find behaviourally relevant objects in them.
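
One simple way to benchmark models against EEG is sketched below: correlate each candidate model's predicted time course with the measured signal, trial by trial. The data are synthetic and the comparison metric is illustrative, not the project's final method:

    import numpy as np

    # Toy benchmark of computational models against EEG: correlate each
    # model's predicted activation time course with the measured signal,
    # trial by trial, and compare candidate models by fit.

    rng = np.random.default_rng(7)
    n_trials, n_samples = 50, 300

    # pretend "EEG": a ramp-like signal buried in noise
    ramp = np.linspace(0, 1, n_samples)
    eeg = ramp + rng.normal(0, 1.0, size=(n_trials, n_samples))

    # two candidate model predictions for the same epoch
    model_a = ramp                          # gradual evidence accumulation
    model_b = (ramp > 0.5).astype(float)    # discrete step at mid-epoch

    for name, pred in [("accumulator", model_a), ("step", model_b)]:
        r = np.mean([np.corrcoef(trial, pred)[0, 1] for trial in eeg])
        print(f"{name} model: mean trial-wise correlation r = {r:.2f}")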

Techniques

Behavioural experiments, EEG experiments, MatLab, Computational modelling

Investigator:

Dietmar Heinke, Dominic Standage

References

Narbutas, V., Lin, Y.-S., Kristan, M., & Heinke, D. (2017) Serial versus parallel search: A model comparison approach based on reaction time distributions. Visual Cognition, 1-3, 306-325. https://doi.org/10.1080/13506285.2017.1352055

Lin, Y., Heinke, D. & Humphreys, G. W. (2015) Modeling visual search using three-parameter probability functions in a hierarchical Bayesian framework. Attention, Perception, & Psychophysics, 77, 3, 985-1010.

Heinke, D. & Backhaus, A. (2011) Modeling visual search with the Selective Attention for Identification model (VS-SAIM): A novel explanation for visual search asymmetries. Cognitive Computation, 3(1), 185-205.

What is wrong with Deep Neural Networks?

Deep neural networks (DNNs) have been behind recent headline-grabbing successes for artificial intelligence, such as AlphaGo beating the world's best player in the board game Go. Nevertheless, in many areas DNNs have not led to technical solutions that match human abilities, especially with regard to object recognition. Perhaps the most obvious reason for such failures is that DNNs employ very different processing mechanisms from those found in humans (e.g. Lake et al., 2016).

This project aims to compare DNNs’ abilities with human abilities and ascertain how the two differ. Based on this understanding, we aim to develop novel approaches to automated object recognition.
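
A concrete probe of one such difference is sketched below: check whether a pretrained network keeps its top-1 label under mild image noise that barely affects human observers. This assumes TensorFlow is installed, downloads pretrained MobileNetV2 weights, and uses 'example.jpg' as a hypothetical local image:

    import numpy as np
    import tensorflow as tf

    # Toy probe of DNN robustness: does a pretrained network keep its top-1
    # label when mild Gaussian noise is added to the image? Humans are
    # barely affected by such noise; large prediction flips are one concrete
    # human/DNN difference. 'example.jpg' is a hypothetical local file.

    model = tf.keras.applications.MobileNetV2(weights="imagenet")
    prep = tf.keras.applications.mobilenet_v2.preprocess_input
    decode = tf.keras.applications.mobilenet_v2.decode_predictions

    img = tf.keras.utils.load_img("example.jpg", target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[None, ...]

    for noise_sd in (0.0, 10.0, 30.0):
        noisy = x + np.random.default_rng(8).normal(0, noise_sd, x.shape)
        preds = model.predict(prep(noisy.copy()), verbose=0)
        label = decode(preds, top=1)[0][0][1]
        print(f"noise sd {noise_sd:5.1f}: top-1 = {label}")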

 

Techniques

Tensorflow, Python

Investigator:

Dietmar Heinke

References

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2016). Building machines that learn and think like people. Behavioral and Brain Sciences, 1-101.

Computational modelling of tool innovation

Humans’ ability to use tools has drastically transformed our planet, and is a skill we use in our daily lives. Building autonomous agents that possess this ability is crucial to allowing them to function in the world we live in. Recent developments in machine learning, such as deep reinforcement learning (DRL), have resulted in numerous successes in creating software and robot agents that are able to use man-made tools effectively. However, the ability to innovate, such as inventing new tools or new uses for existing tools, has received relatively little attention so far in artificial intelligence (AI) research.

This project aims to develop an algorithm for inventing tools in the very simple scenario of the LEGO task. The LEGO task uses pairs of novel objects built from LEGO blocks. Participants are asked to determine how one object can transport (either lift or shift) the other object, and have three minutes to complete the task (see https://tinyurl.com/yck2rgfj and https://tinyurl.com/y9ug8snt for two example videos). The aim of the project is to mimic human behaviour in these tasks.
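
A naive starting point is sketched below: treat tool invention as generate-and-test search over combinations of primitive parts, scored by a crude task heuristic. The part set and the scoring rule are invented for illustration and are far simpler than the LEGO task itself:

    import itertools

    # Naive generate-and-test sketch of tool "invention": enumerate small
    # combinations of primitive parts and keep the ones a crude heuristic
    # scores as able to lift the target. All representations and scores
    # here are invented for illustration.

    parts = {"long_beam": 8, "short_beam": 4, "hook": 3, "plate": 2}  # lengths

    def can_lift(tool, target_depth=10):
        """Toy test: total reach must exceed the target depth and the
        last part must be a hook to grip the object."""
        reach = sum(parts[p] for p in tool)
        return reach >= target_depth and tool[-1] == "hook"

    candidates = []
    for n in (2, 3):
        for tool in itertools.product(parts, repeat=n):
            if can_lift(tool):
                candidates.append(tool)

    print(f"{len(candidates)} workable tools, e.g. {candidates[0]}")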

Techniques

MatLab

Investigator:

Dietmar Heinke

References

Osiurak, F. (2014). What neuropsychology tells us about human tool use? The four constraints theory (4CT): mechanics, space, time, and effort. Neuropsychol. Rev., 24 (2), 88-115.

Osiurak, F., & Heinke, D. (2018) Looking for Intoolligence: A unified framework for the cognitive study of human tool use and technology. American Psychologist, 73(2), 169-185. http://dx.doi.org/10.1037/amp0000162.

Intelligent Robotics Lab (IRLab)

  • Tell me why: Explainable Representation and Reasoning in Robotics
  • Theory of Intentions and Affordances for Assistive Robots

Tell me why: Explainable Representation and Reasoning in Robotics

Mobile robots assisting humans in homes or warehouses must work with descriptions of domain knowledge that are incomplete and uncertain. Information about the domain often includes commonsense knowledge (e.g., "textbooks are usually in the library") and quantitative descriptions extracted from sensor inputs (e.g., "I am 90% certain I saw the robotics book on the table"). We have developed an architecture that combines the complementary strengths of non-monotonic logical reasoning and probabilistic reasoning to represent and reason with such descriptions (a toy illustration appears after the questions below). We can now pose interesting questions such as:

1. How to enable a robot to incrementally and interactively acquire domain knowledge from sensor inputs and high-level human feedback?

2. How to enable a robot to adapt its representation and reasoning at different resolutions to provide human-understandable explanations of beliefs, experiences and decisions?

Students will develop algorithms to address these questions in specific application domains, and evaluate these algorithms in simulation and on physical robot platforms.
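
For intuition only, the sketch below combines a commonsense default ("textbooks are usually in the library") with a probabilistic update from a 90%-reliable sensor report; it is a toy illustration of why both kinds of description matter, not the architecture itself:

    # Toy combination of a commonsense default with probabilistic evidence
    # for the book-finding example: the default rule sets a skewed prior,
    # and a noisy observation updates it. A stand-in for intuition only;
    # the architecture's logical and probabilistic machinery is far richer.

    rooms = ["library", "office", "kitchen"]

    # default: "textbooks are usually in the library" -> skewed prior
    belief = {"library": 0.8, "office": 0.15, "kitchen": 0.05}

    def observe(room_seen, reliability=0.9):
        """Bayes update after a sensor report 'book seen in room_seen'."""
        miss = (1 - reliability) / (len(rooms) - 1)
        for room in rooms:
            likelihood = reliability if room == room_seen else miss
            belief[room] *= likelihood
        total = sum(belief.values())
        for room in rooms:
            belief[room] /= total

    observe("office")   # "I am 90% certain I saw the book in the office"
    print({room: round(p, 3) for room, p in belief.items()})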

Requirements:

  • (Essential) Proficiency in probability theory, statistics, linear algebra and calculus.
  • (Essential) Proficiency in an object-oriented language or scripted programming language of your choice.
  • (Optional) Prior knowledge of logic or machine learning, and their use in robotics.
  • (Optional) Prior experience of working with physical robots.

Investigator:

Dr. Mohan Sridharan

 

Theory of Intentions and Affordances for Assistive Robots

Mobile robots are increasingly being used to assist humans in complex domains characterized by partial observability and non-deterministic action outcomes. To truly collaborate with humans in such domains, a robot has to anticipate the intentions of its human collaborator, estimate the action capabilities (also known as affordances) of the collaborator, and offer assistance when the human's capabilities are likely to be insufficient to achieve the desired intention. We have developed theoretical claims about how to represent, reason with and learn these intentions and affordances (a toy sketch appears after the questions below). We can now pose interesting questions such as:

1. How to instantiate these theoretical claims in software architectures in the context of a robot collaborating with humans in complex domains?

2. How to exploit these theoretical claims to support reliable, computationally efficient, and explainable decision making?

Students will develop algorithms to address these questions in specific application domains, and evaluate these algorithms in simulation and on physical robot platforms.
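
For intuition, the sketch below anticipates a collaborator's goal by Bayesian inference over candidate goal locations from a noisy, partially observed trajectory; the likelihood model is invented for illustration and is not the developed theory:

    import numpy as np

    # Toy intention anticipation: infer which of several goal locations a
    # human is heading for, from a noisy partial trajectory, by Bayesian
    # inference over goals. Likelihoods are illustrative only.

    rng = np.random.default_rng(9)
    goals = np.array([[5.0, 0.0], [5.0, 5.0], [0.0, 5.0]])  # candidate goals
    posterior = np.ones(len(goals)) / len(goals)            # uniform prior

    pos = np.zeros(2)
    true_goal = goals[1]
    for step in range(10):
        heading = (true_goal - pos) / np.linalg.norm(true_goal - pos)
        observed = heading + rng.normal(0, 0.3, size=2)      # noisy heading
        for g, goal in enumerate(goals):
            expected = (goal - pos) / np.linalg.norm(goal - pos)
            # likelihood: observed heading close to the direction of goal g
            posterior[g] *= np.exp(-np.sum((observed - expected) ** 2) / 0.18)
        posterior /= posterior.sum()
        pos += 0.4 * heading                                 # human keeps moving

    print({f"goal{g}": round(p, 3) for g, p in enumerate(posterior)})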

Requirements:

  • (Essential) Proficiency in probability theory, statistics, linear algebra and calculus.
  • (Essential) Proficiency in an object-oriented language or scripted programming language of your choice.
  • (Optional) Prior knowledge of logic or machine learning, and their use in robotics.
  • (Optional) Prior experience of working with physical robots.

Investigator:

Dr. Mohan Sridharan