Research in this area is divided into four themes:
Stereoscopic Vision (Dr Andrew Welchman)
Dr Welchman’s research focuses on how the brain uses the different sources of depth information (depth cues) available from the eyes, and how it combines these cues to estimate the layout of the world around us.
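One common textbook model of such cue combination (a standard account in the literature, not necessarily the specific model used in this research) is reliability-weighted, maximum-likelihood integration: each cue's estimate is weighted by the inverse of its variance. The numbers below are purely illustrative:

```python
# Textbook sketch of reliability-weighted (maximum-likelihood) cue
# combination; the model choice and all values are illustrative assumptions.

def combine_cues(estimates, variances):
    """Combine independent depth estimates, weighting each by the inverse
    of its variance; returns the combined estimate and its variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    return combined, 1.0 / total

# A reliable cue (low variance) dominates a noisier one, and the combined
# estimate is more reliable than either cue alone:
depth, var = combine_cues(estimates=[1.0, 2.0], variances=[0.1, 0.4])
```

Here the combined estimate (1.2) sits closer to the low-variance cue, and the combined variance (0.08) is smaller than either input variance.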
Dr Welchman has considered the advantage we get from having two eyes (binocular vision), focusing on the use of binocular disparities (differences between the images formed in the left and right eyes) and information provided by our eye muscles about the position of the eyes in the head (extra-retinal cues). Current projects examine the role of natural constraints imposed by the geometry of binocular vision and the brain's plasticity in using and combining different depth cues.
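The geometric constraint underlying binocular disparity can be sketched with the simplified pinhole-stereo relation Z = f·B/d, where B is the interocular baseline, f the focal length, and d the horizontal disparity. This is a standard camera-geometry idealization, not the lab's model; the parameter values are illustrative:

```python
# Simplified pinhole-stereo sketch: distance from binocular disparity.
# Baseline and focal length are illustrative values, not from the text.

def depth_from_disparity(disparity, baseline=0.065, focal_length=0.017):
    """Estimate distance Z (metres) from horizontal disparity (metres
    at the image plane), using the standard relation Z = f * B / d."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length * baseline / disparity

# Larger disparities correspond to nearer points:
near = depth_from_disparity(0.001)    # 1 mm disparity
far = depth_from_disparity(0.0005)    # 0.5 mm disparity
```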
Optic Flow (Dr Mike Harris)
Movement through the world produces a smooth transformation of the retinal image that carries a great deal of information, potentially useful for guiding and timing our actions and for recovering the 3D layout of visible surfaces. Our research focuses on whether and how this information is used in everyday tasks such as recognizing and manipulating objects, and in walking or driving about the world.
We are interested in all aspects of these problems, from the properties of the biological mechanisms that analyse and combine motion information to the strategies that exploit motion in tasks such as steering and braking a car. We rely primarily on psychophysical and computational techniques, supplemented by some simple robot-based methods. Current topics include identifying the crucial cues for navigating curved trajectories, the role of non-visual cues in driving, and the extent to which the visual system achieves its robust performance by being able to solve the same problem in a variety of different ways.
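One well-known example of using optic flow to time actions such as braking is the "tau" heuristic: time-to-contact can be estimated purely from the image, as the ratio of an object's angular size to its rate of angular expansion, without knowing distance or speed. The sketch below (with made-up values) compares that estimate against ground truth:

```python
# Sketch of the "tau" time-to-contact heuristic from the optic-flow
# literature; the scene parameters below are illustrative assumptions.
import math

def time_to_contact(distance, speed, size):
    """Return (ground-truth time-to-contact, tau estimate) for an object
    of physical width `size` approached head-on at constant `speed`."""
    theta = 2 * math.atan(size / (2 * distance))           # angular size (rad)
    # rate of expansion d(theta)/dt, by the chain rule on theta(distance):
    dtheta = (size * speed) / (distance**2 + (size / 2)**2)
    return distance / speed, theta / dtheta                # truth, tau

truth, tau = time_to_contact(distance=50.0, speed=10.0, size=1.5)
```

For small angular sizes the tau estimate closely tracks the true time-to-contact (here, both are about 5 seconds), which is what makes it an attractive candidate cue for timing braking.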
Pictorial Depth Cues and Object Segmentation (Dr Andrew Schofield)
This work focuses on the low level interactions between cues to surface shape such as shading and texture.
When a textured surface is corrugated and illuminated, the corrugations produce luminance variations that can be used to interpret surface shape (shape from shading). This shading is accompanied by changes in the local amplitude of the texture, and we have now shown that the relationship between these cues is key to the interpretation of surface shape. We have studied these interactions using traditional psychophysical methods. These studies indicate that luminance and texture amplitude are detected separately, although cross-adaptation does exist between the cues.
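A stimulus of the general kind described above can be sketched as a noise "texture" carrier whose local amplitude is modulated (AM) in step with a sinusoidal luminance modulation (LM). This construction and all parameter values are illustrative assumptions, not the exact stimuli used in these studies:

```python
# Illustrative 1D sketch of a luminance-modulated (LM) plus amplitude-
# modulated (AM) noise stimulus; modulation depths are made-up values.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 256)
noise = rng.uniform(-1, 1, size=x.shape)       # carrier texture

lm_depth, am_depth = 0.3, 0.3
envelope = np.sin(x)                           # shared spatial modulation
luminance = 1 + lm_depth * envelope            # luminance modulation (LM)
texture = noise * (1 + am_depth * envelope)    # amplitude modulation (AM)
stimulus = luminance + 0.2 * texture           # mean luminance + texture
```

When LM and AM are in phase like this, the texture contrast is highest where the surface is brightest, the correlation that shading of an illuminated corrugated texture tends to produce.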
We have also studied the interactions between texture amplitude and orientation, again finding little interaction at the point of detection but some cross-over of adaptation. In contrast, there is little evidence for a transfer of adaptation between texture cues and stereoscopic depth. More recently we have introduced a haptic surface-matching task in which the observer matches the properties of a virtual haptic surface to the depth profile perceived from pictorial depth cues. This method lets us investigate the higher-level interactions between cues that occur as the visual system extracts shape information from the primary cues. (This work is conducted in collaboration with Prof Mark Georgeson.)
In a separate strand we have been studying the perception of transparency, with results suggesting that humans match two transparencies placed over different backgrounds using both mean luminance and contrast as a guide (this work is in collaboration with Rick Gurnsey and Fred Kingdom).
Functional Imaging of Shape Perception (Prof Zoe Kourtzi)
This research team uses functional imaging and psychophysical methods to investigate the neural mechanisms the brain employs to solve the puzzle of unified perception and to support higher cognitive processes such as visual recognition and categorization.
The main goals are -
to investigate how local image features are integrated into global coherent shapes
to examine higher cognitive processes that contribute to the recognition and categorization of natural objects and scenes
to understand the learning-dependent and developmental plasticity of the neural mechanisms that underlie these processes
Our early work in this area (carried out at the Max Planck Institute in Tübingen) showed that the human Lateral Occipital Complex (LOC), an area in the lateral occipital cortex that extends anteriorly into the temporal cortex, is involved in the analysis of perceived object shape. Further studies have shown that shape information is processed not only in ventral but also in dorsal areas known to be involved primarily in motion processing, suggesting interactions between form and motion analysis. More recent fMRI studies have shown that unified shape perception involves both early and higher visual areas, which integrate local elements into global shapes at different spatial scales. We have also investigated the processing of coherent shapes in complex visual scenes, the temporal characteristics of unified shape perception, and its experience-based neural plasticity and developmental time-course.
Our ongoing work investigates -
the combination of different cues for the perception of 3D dynamic objects and natural scenes
the effects of perceptual learning and categorization on object perception
the development of coherent visual perception and categorization.
In summary, our research aims to advance our understanding of the neural mechanisms that the human brain employs to achieve visual awareness of a unified world and to support successful interactions with our complex environments.