Computational Cognitive Neuroimaging

The goal of the Computational Cognitive Neuroimaging Group is to better understand the neural systems that allow us to acquire, represent and retrieve knowledge about our multisensory environment. We address this central research question from three complementary perspectives: (1) Multisensory Integration, (2) Language and (3) Concept Learning.

Complex cognitive functions emerge from interactions amongst many different brain areas within large-scale neural systems. Therefore, our approach is to characterize the response properties of individual regions and to establish the functional and effective connectivity between regions using neurophysiologically plausible observation models.
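
One widely used framework for effective connectivity is dynamic causal modelling (DCM; Friston et al., 2003). The Python sketch below is a minimal, purely illustrative simulation of DCM's bilinear neuronal state equation, not our actual analysis pipeline; the connection strengths and input are hypothetical, and the haemodynamic observation model that maps neuronal states to the measured BOLD signal is omitted.

    import numpy as np

    # Bilinear neuronal state equation of dynamic causal modelling (DCM):
    #   dx/dt = (A + sum_j u_j(t) * B_j) x + C u(t)
    # A: fixed effective connectivity between regions, B: modulation of
    # connections by the experimental input, C: driving inputs.
    # All values are hypothetical.
    A = np.array([[-1.0, 0.2],    # self-decay and 2->1 influence
                  [0.4, -1.0]])   # 1->2 influence and self-decay
    B = np.array([[0.0, 0.0],
                  [0.3, 0.0]])    # input strengthens the 1->2 connection
    C = np.array([1.0, 0.0])      # input drives region 1 directly

    def u(t):
        """Boxcar experimental input, on between 2 s and 4 s."""
        return 1.0 if 2.0 <= t <= 4.0 else 0.0

    dt, T = 0.01, 10.0
    x = np.zeros(2)                          # neuronal states of two regions
    trace = np.empty((int(T / dt), 2))
    for k in range(trace.shape[0]):
        t = k * dt
        dx = (A + u(t) * B) @ x + C * u(t)   # bilinear dynamics
        x = x + dt * dx                      # forward Euler step
        trace[k] = x

    print("peak neuronal activity per region:", trace.max(axis=0))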

From a methodological perspective, understanding the neural basis of cognitive functions requires an approach that utilises multiple techniques. The group therefore combines the complementary strengths of psychophysics, functional imaging (fMRI, M/EEG), perturbation approaches such as concurrent TMS-fMRI and neuropsychological studies in patients.

To gain a more informed perspective on the underlying computational and neural mechanisms, we combine functional imaging with models of Bayesian inference and learning.
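
As a minimal illustration of this modelling style (a sketch with hypothetical stimulus values, not our actual models), the Python code below implements forced-fusion Bayesian cue integration, in which two noisy Gaussian estimates are combined by weighting each cue by its reliability, i.e. its inverse variance:

    import numpy as np

    def fuse_cues(mu_a, sigma_a, mu_v, sigma_v):
        """Fuse two Gaussian cues; returns mean and s.d. of the percept."""
        r_a, r_v = 1 / sigma_a**2, 1 / sigma_v**2   # cue reliabilities
        w_a = r_a / (r_a + r_v)                     # auditory weight
        mu = w_a * mu_a + (1 - w_a) * mu_v          # reliability-weighted mean
        sigma = np.sqrt(1 / (r_a + r_v))            # more precise than either cue
        return mu, sigma

    # Hypothetical example: a sharp auditory cue at 0 deg and a blurry
    # visual cue at 10 deg; the percept is drawn towards the reliable cue.
    mu, sigma = fuse_cues(mu_a=0.0, sigma_a=1.0, mu_v=10.0, sigma_v=3.0)
    print(f"fused estimate: {mu:.2f} deg, s.d. {sigma:.2f} deg")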

Multisensory Integration

Information integration is critical for the brain to interact effectively with our multisensory environment. Sensory signals in our natural world are inherently ambiguous. Our central aim is to understand how the human brain integrates information from multiple senses with prior knowledge to form a coherent and more reliable percept of its environment.

Within the cortical hierarchy, multisensory perception emerges in an interactive process in which top-down prior information constrains the interpretation of the incoming sensory signals. To characterize the hierarchical structure of multisensory perception, we selectively manipulate the statistics of the bottom-up sensory inputs and participants’ top-down prior knowledge and experience.

Currently, our research focuses on the following questions:

  • Where and how are different types of sensory features combined at multiple levels of the cortical hierarchy?
  • How is the relative timing (e.g. synchrony) of multiple inputs coded across sensory modalities?
  • To what extent does multisensory integration depend on higher cognitive resources such as attention?
  • Which factors determine inter-trial and inter-subject variability in multisensory integration?
  • How does the brain adjust the parameters that govern multisensory integration to the changing statistical properties of the sensory inputs at multiple timescales? (See the model sketch after this list.)
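
The last of these questions is closely related to Bayesian causal inference (Körding et al., 2007), in which the observer first infers whether two signals share a common cause and only then decides how strongly to integrate them. The Python sketch below illustrates that model for audiovisual localization; all parameter values are hypothetical:

    import numpy as np

    def causal_inference(x_a, x_v, sig_a=2.0, sig_v=1.0, sig_p=10.0, p_c=0.5):
        """Model-averaged auditory location estimate and p(common cause)."""
        va, vv, vp = sig_a**2, sig_v**2, sig_p**2

        # Likelihood of the internal signals under each causal structure,
        # with a central spatial prior N(0, sig_p**2).
        var1 = va * vv + va * vp + vv * vp
        like_c1 = (np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv
                                  + x_v**2 * va) / var1)
                   / (2 * np.pi * np.sqrt(var1)))
        like_c2 = (np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp)))
                   / (2 * np.pi * np.sqrt((va + vp) * (vv + vp))))

        # Posterior probability that both signals share one cause.
        post_c1 = like_c1 * p_c / (like_c1 * p_c + like_c2 * (1 - p_c))

        # Location estimates: fused if common cause, auditory alone if not
        # (both are precision-weighted means that include the prior).
        s_fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
        s_aud = (x_a / va) / (1 / va + 1 / vp)

        # Model averaging over the two causal structures.
        return post_c1 * s_fused + (1 - post_c1) * s_aud, post_c1

    est, p_c1 = causal_inference(x_a=5.0, x_v=6.0)
    print(f"auditory estimate: {est:.2f} deg, p(common cause) = {p_c1:.2f}")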

Language

The tripartite architecture of language encompasses phonology, semantics and syntax. One of our general aims is to understand how the language system emerges from and influences non-verbal audio-visual processing. Semantics may form a natural interface between verbal and non-verbal processing. Semantic concepts are a prerequisite for thought and language. Object concepts can be characterized by various sensory features (e.g. auditory, visual) and referred to by an arbitrary linguistic label. In a series of studies, we have sought to identify the organizational principles of semantic memory. Together with work from other groups, our studies suggest that deep semantic processing may involve not only a fronto-temporal core semantic system, but also sensory-motor regions. For instance, processing of tool and action concepts has been shown to rely on re-activation of action representations in the visuo-motor system via top-down modulation.

Concept Learning

Throughout life, humans are able to form, learn and adjust concepts in response to experience and task demands. The group has started to investigate how the human brain learns novel categories and their linguistic labels from structured input. Humans have the remarkable ability to infer the approximate extensions of concepts from only a few positive examples. These abilities of inductive generalization and categorization enable them to predict unobserved physical features or behaviour from a few observed data points. Currently, we are investigating whether prior unsupervised training that allows subjects to estimate the underlying stimulus distributions can modulate the difficulty of subsequent supervised category learning.
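
One standard formalization of generalization from a few positive examples is Bayesian inference with the size principle (Tenenbaum & Griffiths, 2001): smaller hypotheses consistent with the examples are preferred, because each example is more likely to have been sampled from them. The Python sketch below applies this idea to interval-shaped concepts on a one-dimensional stimulus dimension; the grid and example values are hypothetical:

    import numpy as np

    grid = np.linspace(0, 100, 101)          # 1-D stimulus dimension
    examples = np.array([40.0, 45.0, 52.0])  # a few positive examples

    # Hypothesis space: all intervals [lo, hi] on the grid, uniform prior.
    lo, hi = np.meshgrid(grid, grid, indexing="ij")
    valid = lo < hi
    size = np.where(valid, hi - lo, np.inf)

    # Size principle: each example is sampled uniformly from the true
    # interval, so p(examples | h) = size(h)**-n if h contains them all.
    inside = np.all((examples[:, None, None] >= lo)
                    & (examples[:, None, None] <= hi), axis=0)
    likelihood = np.where(inside & valid, size**(-len(examples)), 0.0)
    posterior = likelihood / likelihood.sum()

    def p_in_concept(y):
        """Posterior probability that stimulus y belongs to the concept."""
        return (posterior * ((y >= lo) & (y <= hi))).sum()

    for y in (45, 55, 70):
        print(f"p({y} in concept) = {p_in_concept(y):.2f}")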

Funding

ERC, BMBF Bernstein, MRC-ARUK, EU Marie Curie