Speaker: Prof. Christian Wallraven, Korea University
The processing of shape is one of the most fundamental abilities of the human brain, enabling us to efficiently recognize and interact with objects in our environment. Given the importance of touch and interaction, both in early human development and in general, I argue that shape and object representations should be regarded as multisensory, dynamic entities. To date, however, the vast majority of research on object processing has been conducted in the visual modality with static, non-interactive images. Spurred by new developments in multisensory rendering, rapid prototyping, and interactive technologies, I will present results from several experiments investigating the active and multisensory components of object processing. A first line of experiments demonstrates that humans can represent complex shape spaces surprisingly well by touch alone. Additional experiments show that the integration of visual and haptic representations even allows haptic recognition of visually learned faces (and vice versa). Experiments with novel objects on a tablet interface further show that object representations become significantly more detailed when objects are explored actively rather than passively. Finally, I will discuss how these results can be transferred to a humanoid robot in order to endow it with multisensory perceptual skills that help it learn and recognize objects more efficiently.