The aim of this research is to understand how the brain achieves the temporal and spatial accuracy required to manually intercept a moving object by coordinating eye and hand movements.
Object interception tasks, such as catching a gently thrown ball or grasping another person's hand in a handshake, seem to be trivial examples of human sensorimotor skill, so simple that they hardly appear worth studying. Yet reproducing these skills in a robot is an extremely hard problem that no modern advanced robot can fully solve.
By taking a close look at our eye and hand movement patterns during object interception, we can uncover the brain's sophisticated strategy for relating sensory information (eye movements) to motor output (hand movements) to achieve temporally and spatially accurate movement. For example, during ball catching the eyes use two distinct strategies, called catch-up saccades and smooth pursuit, and these are closely synchronised with the corresponding hand movement strategies, open-loop and closed-loop control.
We are building a probabilistic computational model to simulate these behaviours, and we hope that the model will lead us to a deeper understanding of the brain.
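As a rough illustration of the kind of model in question, the following toy simulation couples a gaze controller that alternates between smooth pursuit and catch-up saccades with a hand controller that switches from an open-loop (ballistic) reach to closed-loop, gaze-driven corrections. This is a hypothetical sketch, not the actual model: the 1-D constant-velocity ball, the latency, gains, and thresholds are all invented for illustration.

```python
import numpy as np

def simulate_interception(T=1.0, dt=0.01, seed=0):
    """Toy 1-D interception model (all parameters are illustrative guesses).

    The ball moves at constant velocity. Gaze tracks it with smooth
    pursuit, firing a catch-up saccade whenever the position error
    exceeds a threshold. The hand reaches open-loop for the first half
    of the movement, then closes the loop on the gaze signal.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    ball_v = 1.0                       # ball speed (arbitrary units / s)
    ball = np.arange(n) * dt * ball_v  # ball position over time

    gaze = np.zeros(n)
    hand = np.zeros(n)
    saccades = 0
    latency = 0.1             # visual latency before gaze starts tracking (s)
    pursuit_gain = 0.9        # fraction of position error corrected per step
    saccade_threshold = 0.05  # error that triggers a catch-up saccade

    for t in range(1, n):
        # --- gaze controller: smooth pursuit + catch-up saccades ---
        if t * dt < latency:
            gaze[t] = gaze[t - 1]                       # no visual input yet
        elif abs(ball[t] - gaze[t - 1]) > saccade_threshold:
            gaze[t] = ball[t] + rng.normal(0.0, 0.005)  # catch-up saccade
            saccades += 1
        else:
            gaze[t] = gaze[t - 1] + pursuit_gain * (ball[t] - gaze[t - 1])

        # --- hand controller: open-loop reach, then closed-loop correction ---
        if t * dt < 0.5 * T:
            # ballistic phase: head for the predicted interception point
            remaining = T - t * dt + dt
            hand[t] = hand[t - 1] + dt * (ball_v * T - hand[t - 1]) / remaining
        else:
            # feedback phase: corrections driven by the gaze-based estimate
            hand[t] = hand[t - 1] + 0.8 * (gaze[t] - hand[t - 1])

    return abs(hand[-1] - ball[-1]), saccades
```

With the default parameters, the gaze makes one initial catch-up saccade after the visual latency and then pursues smoothly, while the hand lands within a small terminal error of the ball, qualitatively mirroring the synchronised eye-hand strategies described above.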
What will be the outcomes of this research?
We will build a computational model of brain-eye-hand coordination in object interception that can be applied widely in related areas, including humanoid robotics, realistic character animation, sport and coaching science, and neuro-motor rehabilitation.