Current projects 

Our projects range from basic work on understanding brain function, through the use of advanced imaging, robotic devices and virtual reality environments to promote recovery in neurological patients, to applied work on developing intelligent robots.

Rhythms in the Mind, Brain, and Action

Sensory information is often temporally structured, and our brain exploits this regularity for perception. We study how expectations influence our perception of rhythm, music, and the timing of stimuli.

The state of activity in the brain fluctuates rhythmically. The characteristics of these oscillations and their synchronization are related to external stimulation and cognitive processing. Using physiological recordings, we investigate the parameters and predictions of neurocognitive models.

The properties of the environment, as well as our internal states, influence how we move. We record repetitive actions, as in dance and music production, to investigate the stimulus properties, cognitive processes, and muscular actuation that allow us to synchronize with external rhythms.
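
As an illustrative aside (not the lab's actual model), the sketch below simulates a standard linear phase-correction account of synchronized tapping: each inter-tap interval corrects a fraction of the previous tap-to-metronome asynchrony. All parameter values (`alpha`, the noise levels) are assumptions chosen for illustration.

```python
import numpy as np

def simulate_tapping(n_taps=200, metronome=0.5, alpha=0.3,
                     timer_sd=0.02, motor_sd=0.01, seed=0):
    """Simulate tapping to a metronome with linear phase correction.

    Each produced interval corrects a fraction `alpha` of the previous
    tap-to-tone asynchrony (all parameter values are illustrative).
    """
    rng = np.random.default_rng(seed)
    asynchronies = np.zeros(n_taps)
    for n in range(1, n_taps):
        # Produced interval: intended period, minus phase correction, plus noise.
        interval = (metronome - alpha * asynchronies[n - 1]
                    + rng.normal(0.0, timer_sd) + rng.normal(0.0, motor_sd))
        # Next asynchrony = previous asynchrony + produced interval - period.
        asynchronies[n] = asynchronies[n - 1] + interval - metronome
    return asynchronies

print(simulate_tapping()[:5])
```

For 0 < alpha < 2 the simulated asynchronies stay bounded, which is why a correction gain in this range is the usual diagnostic of successful synchronization.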

Principal Investigators

Dr Max Di Luca
Professor Alan Wing 
Professor Kimron Shapiro
Professor Jane Raymond

Funding

Marie Curie ‘TICS’

Focus

Temporal and Rhythmic Perception. Neural Oscillations. Sensorimotor Synchronization.

Techniques

Psychophysics to measure perception
Electroencephalography (EEG) to record internal oscillations
Transcranial stimulation (tDCS, tACS, TMS) to perturb brain rhythms
Motion capture to record body movements
Computational modelling to integrate the three streams of research

The Meta-Morphogenesis Project

Summary:

A long-term project attempting to understand the mechanisms and intermediate stages in the evolution of biological information processing, including precursors of human intelligence.
Further information:

www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html 

Investigators:

Professor Aaron Sloman

Funding:

Unfunded. There are no plans to apply for funding.

Focus:

Attempt to understand biological intelligent systems, and possible future intelligent robots with similar capabilities, by investigating the major transitions in biological information processing from pre-biotic materials onwards. The transitions include:

  • changes in opportunities and problems posed by the environment;
  • changes in sensory-motor morphology;
  • changes in the information content acquired, stored, derived, and used;
  • changes in forms of representation;
  • changes in mechanisms for manipulating information;
  • changes in the forms of information and mechanisms used in reproduction;
  • changes in forms of learning, both across generations and within individuals;
  • changes in social/cultural forms of information processing and information transmission, including teaching and education.

Techniques:

Wild speculation tamed by philosophical analysis, study of the literature, critical analysis and, when possible, experiments on humans and other animals. Eventually, when ideas become sufficiently precise, construction of working intelligent systems will be required. There is already a vast amount of relevant literature from many disciplines, including ethology, developmental psychology, linguistics, AI, robotics and philosophy.

Training for robot-mediated motor learning and stroke rehabilitation

Summary:

This research is part of a project aiming to develop a novel robot-mediated training method to improve the reaching movements of stroke patients. In the training task, the patient holds a robotic arm that both records movement parameters and provides assistive and/or resistive forces to guide the impaired movement. The novel adaptive algorithm selects the next training movements based on recent movement performance parameters and a general learning principle. Additional objectives include thorough characterization of upper-limb reaching performance and learning in healthy and stroke populations, and evaluation of recovery-related neural reorganization.

Robotic devices map upper limb performance and adaptively schedule rehabilitation
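
A deliberately simplified sketch of the kind of adaptive scheduling described above; the project's actual algorithm is not specified here, so a "train the direction with the largest recent error" rule and the `assist_gain` parameter are stand-in assumptions.

```python
def next_training_movement(recent_errors, assist_gain=0.5):
    """Pick the next reach direction and robot assistance level.

    `recent_errors` maps candidate reach directions (degrees) to mean
    recent movement error. A simple 'train the weakest direction'
    principle stands in for the project's general learning principle.
    """
    direction = max(recent_errors, key=recent_errors.get)
    # More assistance for larger errors, capped at full assistance.
    assistance = min(1.0, assist_gain * recent_errors[direction])
    return direction, assistance

errors = {0: 0.8, 45: 1.6, 90: 0.4, 135: 1.1}   # illustrative error scores
print(next_training_movement(errors))            # -> (45, 0.8)
```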

Investigators:

Professor Chris Miall
Professor Alan Wing
Dr Orna Rosenthal

Funding:

MRC


Focus:

Optimizing motor learning and recovery (of the upper-limb) based on general learning principles

Robot-mediated motor training

Kinematic (spatiotemporal) and kinetic (muscle and joint force) aspects of reaching movements across the workspace

Mechanisms underlying motor recovery

Techniques:

Psychophysics, kinetic analysis, robotics, EMG (muscle activity), motion analysis, MRI

Bayesian Brain

Summary:

Using data from experiments in perception, decision making, and learning, we are developing advanced statistical models to compare human behaviour with optimal Bayesian observer models. We compare these models to brain imaging data to learn how the brain performs these computations.
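
As a minimal sketch of what an "optimal Bayesian observer" means in this context: for a Gaussian prior and Gaussian sensory noise, the ideal estimate is a reliability-weighted average of observation and prior. The stimulus distribution and noise levels below are illustrative assumptions, not the lab's experimental values.

```python
import numpy as np

def bayes_estimate(observation, obs_sd, prior_mean, prior_sd):
    """Posterior mean for a Gaussian prior and Gaussian likelihood.

    The optimal estimate is a reliability-weighted average of the
    observation and the prior mean (standard conjugate-Gaussian result).
    """
    w = prior_sd**2 / (prior_sd**2 + obs_sd**2)   # weight on the observation
    return w * observation + (1 - w) * prior_mean

# Illustrative comparison of the ideal observer to the true stimuli.
rng = np.random.default_rng(1)
stimuli = rng.normal(10.0, 2.0, size=100)            # true stimulus values
observations = stimuli + rng.normal(0.0, 1.0, 100)   # noisy sensory samples
ideal = bayes_estimate(observations, obs_sd=1.0, prior_mean=10.0, prior_sd=2.0)
print("ideal-observer RMSE:", np.sqrt(np.mean((ideal - stimuli) ** 2)))
```

Human estimates collected in an experiment can be scored the same way, so departures from the ideal observer show up directly as extra error or as mis-weighting of prior and observation.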

Principal Investigators:

Dr Ulrik Beierholm
Dr Massimiliano Di Luca
Professor Andrew Howes

Funding:

EU, Marie Curie

Focus:

Models are being developed to improve our understanding of how the brain performs multi-sensory integration, perception, learning, and economic decision making.

Techniques:

Statistical models, reinforcement learning models, psychophysics, EEG, fMRI

Combining machine and neural processing for object detection

Summary:

This research project aims to improve object detection by optimally combining the complementary strengths of human and machine perception. The basic idea is to provide machines not only with environmental inputs for object detection, but also with neural activity features (i.e., EEG signals) recorded from humans exposed to exactly those sensory signals during object detection. In a second step, we will train humans to enhance these neural activity features and gate them into awareness, to improve human object detection performance.
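
One simple way to realise the "machine inputs plus neural features" idea is feature-level fusion, sketched below under strong simplifying assumptions: the random arrays stand in for real image descriptors and EEG features, and logistic regression stands in for whatever detector the project actually uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative stand-ins: image descriptors and EEG features per trial.
rng = np.random.default_rng(0)
n_trials = 200
image_features = rng.normal(size=(n_trials, 50))
eeg_features = rng.normal(size=(n_trials, 20))
labels = rng.integers(0, 2, size=n_trials)   # object present / absent

# Fusion baseline: concatenate the two feature streams and train a
# single detector on the combined representation.
combined = np.hstack([image_features, eeg_features])
detector = LogisticRegression(max_iter=1000).fit(combined, labels)
print("training accuracy:", detector.score(combined, labels))
```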

Principal Investigators: 

Professor Ales Leonardis

Funding:

EU

Focus:

Understanding and enhancing cognition and performance;

Exploring the role of cognitive neuroscience in understanding, managing and optimizing human performance. 

Techniques:

Computational Vision, Machine Learning, EEG

The multisensory brain

Summary:

Psychophysics, neuroimaging and computational modelling are combined to investigate how the human brain integrates signals from multiple senses into a coherent percept of our environment. Computational operations are described using models of Bayesian inference and learning. Neural mechanisms are characterized by combining multimodal imaging (fMRI, EEG, TMS) with advanced neuroimaging analyses including multivariate decoding (e.g. support vector machines) and effective connectivity analyses (e.g. Dynamic Causal Modelling).
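
A minimal sketch of the standard maximum-likelihood model of cue integration that such Bayesian accounts build on: each cue is weighted by its reliability (inverse variance), and the fused estimate is more reliable than either cue alone. The example numbers are purely illustrative.

```python
def integrate_cues(est_a, sd_a, est_b, sd_b):
    """Reliability-weighted fusion of two sensory estimates.

    Standard maximum-likelihood integration: each cue is weighted by
    its inverse variance, and the fused variance is smaller than either
    single-cue variance.
    """
    w_a = (1 / sd_a**2) / (1 / sd_a**2 + 1 / sd_b**2)
    fused = w_a * est_a + (1 - w_a) * est_b
    fused_var = 1 / (1 / sd_a**2 + 1 / sd_b**2)
    return fused, fused_var

# e.g. visual size estimate (5.0 cm, sd 0.5) + haptic estimate (5.6 cm, sd 1.0)
print(integrate_cues(5.0, 0.5, 5.6, 1.0))   # fused estimate near the visual cue
```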

Investigators:

Dr Ulrik Beierholm
Dr Massimiliano Di Luca
Dr Robert Gray
Professor Alan Wing

Funding:

ERC, EU, Marie Curie, BBSRC

Focus:

Understanding the computational operations and neural mechanisms of information integration

Techniques:

Psychophysics, multimodal imaging (fMRI, EEG, TMS), computational modelling (e.g. Bayesian inference)

Haptics

Summary:

Haptics (i.e., active touch) is the combination of tactile and proprioceptive information during contact and manipulation. The haptic modality is involved both in perceiving the environment and in manipulating it physically. The influence of sensory information on action, and the reverse influence of active movement on perception, are far from understood. Our research in haptics thus aims at building computational models of human behaviour and perception. Such models can be utilized in technical applications (e.g., robotic systems, force-feedback interfaces, telepresence, rehabilitation devices).

Investigators:

Dr Massimiliano Di Luca
Dr Robert Gray
Dr Hoi Fei Kwok
Dr Roberta Roberts
Professor Chris Miall
Professor Alan Wing

Funding:

The European Commission, EPSRC, NERC, The Royal Society, The Leverhulme Trust, The British Council, The Department of Trade and Industry, AWM, DAAD

Focus:

Role of active exploration in perception
Trade-off between exploration and exploitation
Rehabilitation for patients with motor deficits
Learning from human grasping to improve robotic grasping

Techniques:

Psychophysics, motion analysis, robotics, computational modelling

Neuropsychological screening and rehabilitation

Summary:

Developing clinically applicable techniques for cognitive screening after brain injury, examining cognitive deficits in relation to brain lesions, and using fMRI to develop neural-level models of neurorehabilitation.

Principal Investigators:

Dr Pia Rotshtein

Funding:

MRC and Stroke Association

Focus:

Predicting neural recovery from multi-modal brain imaging

Techniques:

Neuropsychology, MRI (structural and functional), computational modelling

Visual perception and the brain 

Summary:

Using advanced pattern classifier systems to distinguish the neural responses to different depth cues, to assess whether depth information is coded in a common manner. The project also uses psychophysical work to characterise how depth cues may be analysed in computer vision and used to create graphic surfaces.
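
A hedged sketch of the pattern-classification step, using scikit-learn rather than whatever pipeline the project actually uses; the random arrays are placeholders for real trial-by-voxel response patterns and depth-cue labels.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder voxel patterns: trials x voxels, one label per depth cue.
rng = np.random.default_rng(0)
patterns = rng.normal(size=(120, 300))          # e.g. 120 trials, 300 voxels
cue_labels = np.repeat([0, 1], 60)              # e.g. disparity vs. motion cue

# Cross-validated decoding: above-chance accuracy suggests the voxel
# patterns carry information that distinguishes the two depth cues.
scores = cross_val_score(SVC(kernel="linear"), patterns, cue_labels, cv=5)
print("decoding accuracy:", scores.mean())
```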

Principal Investigators:

Dr P Tino

Funding:

BBSRC, EPSRC, EU

Focus:

Understanding the neural basis of depth perception

Techniques:

Psychophysics, fMRI, computational modelling

Understanding neural networks through intervention and modelling

Summary:

Using fMRI with neuropsychological patients, combined with TMS, to show the necessary role of different regions within cortical networks, and using computational models to define the functional roles of different regions.

Principal Investigators:

Dr C Mevorach

Funding:

BBSRC, MRC

Focus:

Using neuropsychological fMRI and fMRI-TMS to decompose interacting neural networks, together with computational modelling.

Predictive perception and action

Summary:

The role of the cerebellum in forward planning of action, studied through robotic systems in which force feedback is varied.

Principal Investigators:

Professor C Miall, Professor A Wing

Funding:

BBSRC, Wellcome, HFSPO, EU, DTi

Focus:

Perception and action

Techniques:

Human psychophysics, robotics, fMRI

The learning brain 

Summary:

Using and developing multi-modal imaging techniques and multi-voxel classifier systems to understand individual differences in perception and learning, and using these to guide procedures for maintaining plasticity across the age range. We are also modelling human action.

Principal Investigators:

Dr A Bagshaw, Professor A Wing

Funding:

BBSRC, EPSRC, ESRC

Focus:

Understanding and exploiting neural plasticity

Techniques:

Multi-modal imaging, human psychophysics, computational modelling

Multi-modal imaging techniques 

Summary:

We are developing procedures for multi-modal imaging, combining the temporal resolution of EEG with the spatial resolution of MRI, and combining MRI with transcranial magnetic stimulation (TMS), to provide improved analyses of neurological disorders and ageing.

Principal Investigators:

Dr T Arvanitis

Dr A Bagshaw

Funding:

BBSRC, EPSRC, EU

Focus:

Developing techniques for multi-modal imaging, combining fMRI, EEG, diffusion tensor imaging and magnetic resonance spectroscopy (MRS), applied to understanding cognition after brain injury and in epilepsy, in adults and children.

Characterising Creativity in Tool-using Activities

How are tools (by which I mean physical objects) used in creative practices (like painting, drawing, sculpting)? I am interested in what creative practitioners do with their tools and how this practice relates to cognition.  A recent paper raised some of the issues of interest (https://link.springer.com/article/10.1007/s13347-017-0292-0). There are several types of project that could be considered in relation to this:

  • Data from sensors (on tools or on the person) allow us to characterise discrete actions, but how can we better understand sequences of movements? In other words, how can time-series analysis help understand complex movement? (A minimal sketch follows this list.)
  • Is there a relationship between cognitive aspects of creativity and physical actions (as implied in the paper)? In other words, what does an embodied theory of creativity look like?
  • Do different creative practices have similar underlying phenomena or are there distinct phenomena surrounding each practice?
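
As a minimal sketch of the first question, under the assumption that movement speed is a useful summary of the sensor data, the code below segments a speed time series into candidate strokes at local minima. It is written in Python for illustration, though the project lists R or MatLab; the sampling rate and pause threshold are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

def segment_strokes(speed, fs=100, min_pause=0.1):
    """Split a movement-speed time series into candidate strokes.

    Strokes are taken to start and end at local minima of speed; the
    thresholds are illustrative and would need tuning per sensor.
    """
    # Minima of speed = peaks of negated speed, at least `min_pause` apart.
    minima, _ = find_peaks(-speed, distance=int(min_pause * fs))
    return [(s, e) for s, e in zip(minima[:-1], minima[1:])]

t = np.linspace(0, 2, 200)
speed = np.abs(np.sin(2 * np.pi * 3 * t))   # toy speed profile, 3 strokes/s
print(segment_strokes(speed))               # (start, end) sample indices
```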

Techniques:

The projects could involve some of the following (but we can work around most of these if you don’t have the prior knowledge):

  • Signal processing (with R or MatLab)
  • Basic electronics (making things with sensors and Arduinos)
  • Data collection in the field (at the workplaces of practising artists)
  • Phenomenological interviews (or other forms of in-depth interviewing to gain insight into experiential aspects of creative practice)

Investigator:

Chris Baber

Sensemaking with All-source Intelligence

The ability to draw inferences from a collection of disparate information sources is key to many intelligence analysis professions.  I’ve been looking into how ‘uniformed’ professions might go about making sense of complex data.  Here’s a fairly recent piece of work on this: https://dl.acm.org/citation.cfm?id=2870212. Projects that relate to this area could involve the following:

  • How do people create a ‘gist’ (or working model) of inferences when dealing with lots of different sources of data?  Does this involve the creation of narratives (as work on jury decision making suggests) or does it involve the creation of diagrams (or sketches)?
  • How do people challenge their own (and other people’s) inferences during sensemaking?
  • How can technology help or hinder people’s abilities to make sense of multiple information sources?

Techniques:

It would be quite nice to make a software prototype for this (using one of the many Javascript toolkits, like d3), but it is equally possible to make a paper-based set of material for experiments (this is what I’ve done in the North x Southwest Exercise that is reported in the paper linked above).  It would be good to run an experiment, with individuals or small groups working together.  I’d quite like to run a study using Agent-Based Models to see how information sharing has an impact on sensemaking.
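
A minimal sketch of the kind of agent-based model mentioned above, with knowledge represented simply as sets of "facts" that agents may pool when they meet; the number of agents, facts, and the sharing probability are all illustrative assumptions.

```python
import random

def share_information(n_agents=20, n_facts=10, rounds=300, p_share=0.5, seed=0):
    """Toy agent-based model of information sharing during sensemaking.

    Each agent starts with one fact; on each round a random pair may
    share what they know. Returns mean facts known per agent over time.
    """
    random.seed(seed)
    agents = [{random.randrange(n_facts)} for _ in range(n_agents)]
    coverage = []
    for _ in range(rounds):
        a, b = random.sample(range(n_agents), 2)
        if random.random() < p_share:
            agents[a] |= agents[b]      # pooled knowledge after sharing
            agents[b] |= agents[a]
        coverage.append(sum(len(k) for k in agents) / n_agents)
    return coverage

print(share_information()[-1])   # average facts known after all rounds
```

Varying `p_share` or restricting who can meet whom gives a first handle on how sharing structure affects how quickly a shared picture emerges.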

Investigator:

Chris Baber

Reinforcement-based motor learning

We have shown in several recent publications that participants’ ability to learn a new motor action through reinforcement-based (reward) feedback is highly variable: while 66% of people are able to learn, 33% show a complete inability to learn. Our goal moving forward is to examine the differences between learners and non-learners in more detail and across a range of tasks: simple reaching, force-field adaptation, effort production, and chord learning (similar to learning the piano). With help from people in the lab, this will involve programming a novel task using one of several possible pieces of equipment (KINARM robotic manipulandum, motion-capture equipment, force-capture equipment) with either Matlab or Simulink (graphical Matlab). Participants will be recruited and tested, and the resulting data analysed, from basic kinematic analysis through to the development of computational models.
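
A toy model of reinforcement-based motor learning, written in Python for illustration although the lab's tasks use Matlab or Simulink. The binary "closer/further" reward rule and all parameter values are assumptions for the sketch, not the papers' models.

```python
import numpy as np

def simulate_reward_learning(target=20.0, n_trials=200, lr=0.5,
                             explore_sd=3.0, seed=0):
    """Toy reinforcement-based motor learning with binary feedback.

    The learner holds an aim angle, explores around it, and shifts the
    aim toward any movement that lands closer to the (hidden) target
    than the current aim does. All values are illustrative.
    """
    rng = np.random.default_rng(seed)
    aim = 0.0
    for _ in range(n_trials):
        movement = aim + rng.normal(0.0, explore_sd)
        if abs(movement - target) < abs(aim - target):   # 'rewarded' trial
            aim += lr * (movement - aim)                 # reinforce it
    return aim

print(simulate_reward_learning())   # aim should end near the target of 20
```

In a model like this, shrinking `explore_sd` is one simple way to mimic a "non-learner": without exploration, rewarded movements become too rare to drive the aim toward the target.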

Relevant publications:

  • The contribution of explicit processes to reinforcement-based motor learning.
    Holland PJ, Codol O, Galea JM.
    J Neurophysiol. 2018 Mar 14. doi: 10.1152/jn.00901.2017. [Epub ahead of print]
  • Predicting explorative motor learning using decision-making and motor noise.
    Chen X, Mohr K, Galea JM.
    PLoS Comput Biol. 2017 Apr 24;13(4):e1005503. doi: 10.1371/journal.pcbi.1005503. eCollection 2017 Apr.

Techniques

Matlab, Simulink

Computational Psychology Lab (CPL)

  • Modelling interactions between visual cognition and motor behaviour
  • Modelling behavioural experiments
  • What is wrong with Deep Neural Networks?
  • Computational modelling of tool innovation

 

Modelling interactions between reaching and visual processing

Strauss et al. (2015) developed a model of the interaction between visual cognition and motor behaviour. This work has laid the foundations for a collaboration with Joo-Hyun Song at Brown University, funded by the UK-ESRC. We are currently working on novel models that integrate standard perceptual decision-making models with standard models of motor behaviour (see abstract). MSc students can get involved in this work.
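
As a hedged illustration of the perceptual decision-making side, here is a standard drift-diffusion trial simulation (in Python for illustration, though the lab's work uses MatLab; this is not the CoRLEGO model itself). Evidence accumulates noisily to a bound, yielding a choice and a reaction time that, in the combined models, could also drive the motor system before the bound is reached.

```python
import numpy as np

def drift_diffusion_trial(drift=0.3, bound=1.0, dt=0.001, noise=1.0, seed=None):
    """Single trial of a standard drift-diffusion decision model.

    Evidence accumulates with drift plus Gaussian noise until it hits
    +bound (choice 1) or -bound (choice 0); returns (choice, RT in s).
    """
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x > 0 else 0), t

print([drift_diffusion_trial(seed=s) for s in range(3)])
```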

Investigator:

Dietmar Heinke

Techniques:

MatLab, Quantitative model fitting, stochastic modelling, non-linear differential equations.

References

Strauss, S., Woodgate, P.J.W., Sami, S. A., & Heinke, D. (2015) Choice reaching with a LEGO arm robot (CoRLEGO): The motor system guides visual attention to movement-relevant information. Neural Networks, 72, 3-12. http://dx.doi.org/10.1016/j.neunet.2015.10.005

Strauss, S. & Heinke, D. (2012) A robotics-based approach to modeling of choice reaching experiments on visual attention. Front. Psychology, 3:105. http://dx.doi.org/10.3389/fpsyg.2012.00105

 

Modelling behavioural experiments

Narbutas et al. (2017) developed a computational model of visual search experiments. Critical to the success of this work was that they were able to fit the model to existing data. Such quantitative fitting of models allows us to gain deeper insights into the processes underlying perceptual decision making. Future work aims to utilize this novel technique and apply it to other experimental paradigms, such as the Eriksen flanker task.
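
A sketch of what quantitative fitting to reaction-time distributions can look like, assuming an ex-Gaussian RT model and maximum-likelihood fitting via SciPy; the papers use their own models and fitting machinery, and the simulated RTs below are placeholders for real data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import exponnorm

# Illustrative reaction times (seconds); real data would replace these.
rts = exponnorm.rvs(2.0, loc=0.4, scale=0.05, size=500, random_state=0)

def neg_log_likelihood(params):
    k, loc, scale = params
    if k <= 0 or scale <= 0:
        return np.inf                      # keep the optimizer in bounds
    return -np.sum(exponnorm.logpdf(rts, k, loc=loc, scale=scale))

fit = minimize(neg_log_likelihood, x0=[1.0, 0.3, 0.1], method="Nelder-Mead")
print("fitted ex-Gaussian parameters:", fit.x)
```

Comparing maximised likelihoods (or information criteria) across candidate models is what lets whole RT distributions, rather than just mean RTs, arbitrate between serial and parallel search accounts.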

Investigator:

Dietmar Heinke

Techniques:

MatLab, Quantitative model fitting, stochastic modelling, non-linear differential equations.

References

Narbutas, V., Lin, Y.-S., Kristan, M., & Heinke, D. (2017) Serial versus parallel search: A model comparison approach based on reaction time distributions. Visual Cognition, 1-3, 306-325. https://doi.org/10.1080/13506285.2017.1352055

Lin, Y., Heinke, D. & Humphreys, G. W. (2015) Modeling visual search using three-parameter probability functions in a hierarchical Bayesian framework. Attention, Perception, & Psychophysics, 77, 3, 985-1010.

 

What is wrong with Deep Neural Networks?

Deep neural networks (DNNs) have been behind recent headline-grabbing successes for artificial intelligence, such as AlphaGo beating the world's best player in the board game Go. Nevertheless, in many areas DNNs have not led to technical solutions that match human abilities, especially with regard to object recognition. Perhaps the most obvious reason for such failures stems from the fact that DNNs employ very different processing mechanisms from those found in humans (e.g. Lake et al. 2016).

This project aims to compare DNNs’ abilities with human abilities and ascertain how the two differ. The project is a collaboration with Charles Leek at Liverpool University (see here for more details).

Techniques

PyTorch, Python

Investigator:

Dietmar Heinke

References

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2016). Building machines that learn and think like people. Behavioral and Brain Sciences, 1-101.

 

Computational modelling of tool innovation

Humans’ ability to use tools has drastically transformed our planet, and tool use is a skill we rely on in our daily lives. However, so far no machine learning method matches our tool-use abilities. In recent work we developed a deep reinforcement learning (DRL) method based on Duelling Double Deep Q-learning (van Hasselt et al., 2015), which successfully learns to use simple tools in a very simple environment. This MSc project aims to extend these results to a more complex environment.
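
A minimal sketch of the Double Q-learning target from van Hasselt et al. (2015), one component of the method named above; the online/target networks and batch tensors are assumed to exist elsewhere, and the duelling network architecture is not shown.

```python
import torch

def double_dqn_target(online_net, target_net, rewards, next_states,
                      dones, gamma=0.99):
    """Double Q-learning target (van Hasselt et al., 2015).

    The online network selects the greedy next action; the target
    network evaluates it, which reduces the overestimation bias of
    standard Q-learning targets.
    """
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q
```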

Techniques

Python, Pygame, PyTorch

Investigator:

Dietmar Heinke

References

Osiurak, F., & Heinke, D. (2018) Looking for Intoolligence: A unified framework for the cognitive study of human tool use and technology. American Psychologist, 73(2), 169-185. http://dx.doi.org/10.1037/amp0000162.

Pygame Community. (2020). Pygame. Pygame.Org. https://www.pygame.org/

van Hasselt, H., Guez, A., & Silver, D. (2015). Deep reinforcement learning with double Q-learning. arXiv. https://arxiv.org/abs/1509.06461

Intelligent Robotics Lab (IRLab)

  • Tell me why: Explainable Representation and Reasoning in Robotics
  • Theory of Intentions and Affordances for Assistive Robots

Tell me why: Explainable Representation and Reasoning in Robotics

Mobile robots assisting humans in homes or warehouses have to work with different descriptions of incomplete domain knowledge and uncertainty. Information about the domain often includes commonsense knowledge (e.g., "textbooks are usually in the library") and quantitative descriptions extracted from sensor inputs (e.g., "I am 90% certain I saw the robotics book on the table"). We have developed an architecture that combines the complementary strengths of non-monotonic logical reasoning and probabilistic reasoning to represent and reason with such descriptions (a toy sketch of this kind of fusion follows at the end of this section). We can now pose interesting questions such as:

1. How can a robot incrementally and interactively acquire domain knowledge from sensor inputs and high-level human feedback?

2. How can a robot adapt its representation and reasoning at different resolutions to provide human-understandable explanations of its beliefs, experiences, and decisions?

Students will develop algorithms to address these questions in specific application domains, and evaluate these algorithms in simulation and on physical robot platforms.
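
The architecture itself combines much richer machinery than this, but the toy Bayes update below illustrates why the two kinds of knowledge must be combined: a commonsense default ("textbooks are usually in the library") supplies a prior that a noisy sensor report then revises. All numbers are illustrative.

```python
def fuse_default_with_sensor(prior_in_library=0.8, sensor_says_table=True,
                             sensor_accuracy=0.9):
    """Toy fusion of a commonsense default with a probabilistic observation.

    Default: the book is in the library with prior 0.8 (so 0.2 for the
    table). A sensor report about the table updates this belief by
    Bayes' rule; all numbers are illustrative.
    """
    p_table = 1.0 - prior_in_library
    like_table = sensor_accuracy if sensor_says_table else 1 - sensor_accuracy
    like_library = 1 - like_table
    posterior_table = (like_table * p_table) / (
        like_table * p_table + like_library * prior_in_library)
    return posterior_table

print("P(book on table | sensor report):", fuse_default_with_sensor())
```

Here a 90%-reliable sighting raises the probability that the book is on the table from 0.2 to about 0.69: the default is overridden but not discarded, which is exactly the interaction the combined logical-probabilistic architecture has to manage at scale.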

Requirements:

  • (Essential) Proficiency in probability theory, statistics, linear algebra and calculus.
  • (Essential) Proficiency in an object-oriented language or scripted programming language of your choice.
  • (Optional) Prior knowledge of logic or machine learning, and their use in robotics.
  • (Optional) Prior experience of working with physical robots.

Investigator:

Dr. Mohan Sridharan

 

Theory of Intentions and Affordances for Assistive Robots

Mobile robots are increasingly being used to assist humans in complex domains characterized by partial observability and non-deterministic action outcomes. To truly collaborate with humans in such domains, a robot has to anticipate the intentions of its human collaborator, estimate the action capabilities (also known as affordances) of the collaborator, and offer assistance when the human's capabilities are likely to be insufficient to achieve the desired intention. We have developed theoretical claims about how to represent, reason with and learn these intentions and affordances. We can now pose interesting questions such as:

1. How can these theoretical claims be instantiated in software architectures, in the context of a robot collaborating with humans in complex domains?

2. How can these theoretical claims be exploited to support reliable, computationally efficient, and explainable decision making?

Students will develop algorithms to address these questions in specific application domains, and evaluate these algorithms in simulation and on physical robot platforms.

Requirements:

  • (Essential) Proficiency in probability theory, statistics, linear algebra and calculus.
  • (Essential) Proficiency in an object-oriented language or scripted programming language of your choice.
  • (Optional) Prior knowledge of logic or machine learning, and their use in robotics.
  • (Optional) Prior experience of working with physical robots.

Investigator:

Dr. Mohan Sridharan