Media Gallery

Common Sense Reasoning

These videos demonstrate work described in our 2019 JAIR article on a refinement-based architecture for representing and reasoning with common sense knowledge on robots (project funded by the US ONR).

Commonsense Reasoning (Part 1 of 3)

Robot Waiter - Planning and Diagnosis demonstration

Commonsense Reasoning (Part 2 of 3)

Robot Waiter demonstration

Commonsense Reasoning (Part 3 of 3)

Planning demonstration

Real-Time Gaze Estimation

The following video demonstrates work described in our 2018 ECCV paper on ground-truth gaze estimation in natural settings; the work also produced a challenging new dataset for gaze estimation.

Dataset generation.

Variable Impedance Manipulation

This video demonstrates work described in our 2019 Humanoids paper on using incrementally learned feed-forward models and a hybrid force-motion controller for variable impedance (compliant) control of manipulation in continuous contact tasks (collaboration with Honda Research Institute Europe).

Online Learning of Feed-Forward Models for Task-Space Variable Impedance (all tasks)
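
To give a flavour of the kind of controller involved, here is a minimal, hypothetical sketch of a standard task-space impedance law with a feed-forward force term; the function, variable names and gain values are all illustrative, and the hybrid force-motion controller in the paper is more involved than this.

    import numpy as np

    def impedance_command(x, x_dot, x_ref, x_dot_ref, K, D, f_ff):
        """Generic task-space impedance law with a feed-forward force term.
        Lower stiffness K and damping D make the arm more compliant; the
        learned feed-forward force f_ff compensates for predictable contact
        forces so that low gains can still track the task."""
        return K @ (x_ref - x) + D @ (x_dot_ref - x_dot) + f_ff

    # Illustrative compliant gains for a 3-D Cartesian task, softer along z (the contact normal).
    K = np.diag([300.0, 300.0, 100.0])   # stiffness, N/m
    D = np.diag([30.0, 30.0, 15.0])      # damping, Ns/m
    f_ff = np.array([0.0, 0.0, -5.0])    # feed-forward force predicted by a learned model, N

    x = np.array([0.40, 0.00, 0.20]); x_dot = np.zeros(3)
    x_ref = np.array([0.42, 0.00, 0.19]); x_dot_ref = np.zeros(3)
    print(impedance_command(x, x_dot, x_ref, x_dot_ref, K, D, f_ff))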

Motion Retargeting by Disentangling Pose and Movement

This video demonstrates work described in our 2019 BMVC paper, which introduced a deep learning framework for unsupervised motion retargeting that separately learns frame-by-frame poses and overall movement.

Motion Retargeting by Disentangling Pose and Movement
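
As a rough, hypothetical illustration of the disentangling idea (not the architecture from the BMVC paper), the toy PyTorch sketch below encodes per-frame poses with one encoder and the overall movement of a whole sequence with another, then recombines the two so that movement from one sequence can drive poses from another; all layer choices, sizes and names are made up for illustration.

    import torch
    import torch.nn as nn

    class ToyRetargetNet(nn.Module):
        """Toy pose/movement disentanglement: per-frame pose codes from one
        encoder, a single sequence-level movement code from another, and a
        decoder that recombines them."""
        def __init__(self, joint_dim, pose_dim=32, move_dim=32):
            super().__init__()
            self.pose_enc = nn.Linear(joint_dim, pose_dim)                 # frame-by-frame pose
            self.move_enc = nn.GRU(joint_dim, move_dim, batch_first=True)  # whole-sequence movement
            self.decoder = nn.Linear(pose_dim + move_dim, joint_dim)

        def forward(self, pose_frames, movement_frames):
            z_pose = self.pose_enc(pose_frames)          # (batch, time, pose_dim)
            _, h = self.move_enc(movement_frames)        # final hidden state summarises movement
            z_move = h[-1].unsqueeze(1).expand(-1, z_pose.size(1), -1)
            return self.decoder(torch.cat([z_pose, z_move], dim=-1))

    # Drive one sequence's poses with another sequence's movement (random data, shape check only).
    net = ToyRetargetNet(joint_dim=34)                   # e.g. 17 joints x (x, y)
    out = net(torch.randn(1, 60, 34), torch.randn(1, 60, 34))   # (1, 60, 34)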

Dora the Explorer

This video describes our 2016 AIJ paper on robot task planning and explanation in open and uncertain worlds (work from the CogX project).

This video describes our robot Dora, who is able to plan and act in new environments. Dora solves the 40-year-old problem of open-world planning in robots: planning in worlds where you don't know everything you need to know to perform your task, so you have to plan to find things out as you go. Dora does this by using models of what she knows and doesn't know. The robot also needs two things: common sense knowledge of the world, and the ability to make assumptions about the world. Common sense knowledge plus the ability to make assumptions is what allows the robot to plan in open worlds. More than this, if you add diagnostic knowledge (a kind of common sense knowledge about how things can go wrong), then Dora can explain her failures.

Robot Task Planning and Explanation in Open and Uncertain Worlds

The robot is described in more detail in our Artificial Intelligence Journal paper.
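
As a rough, hypothetical sketch of the loop described above (assume common sense defaults for unknown facts, plan, monitor, and use diagnostic knowledge to explain violated assumptions), the Python below shows the control flow only; the helpers passed in as arguments are placeholders, not the planner from the AIJ paper.

    def open_world_plan_and_act(goal_test, relevant_facts, beliefs, defaults,
                                make_plan, execute, diagnose):
        """Illustrative open-world planning loop, not the Dora/AIJ planner.
        Facts the planner needs but does not know are assumed from
        common-sense defaults; if acting reveals that an assumption was
        wrong, diagnostic knowledge is used to explain it and planning
        resumes with the updated beliefs."""
        while not goal_test(beliefs):
            # Fill gaps in the robot's knowledge with common-sense defaults.
            assumptions = {f: defaults[f] for f in relevant_facts if f not in beliefs}
            plan = make_plan(goal_test, {**beliefs, **assumptions})
            for action in plan:
                beliefs.update(execute(action))            # act and observe
                violated = [f for f, v in assumptions.items()
                            if f in beliefs and beliefs[f] != v]
                if violated:
                    print(diagnose(violated, beliefs))     # explain the failure
                    break                                  # replan with what was learned
            else:
                return beliefs                             # plan ran to completion without surprises
        return beliefs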

One Shot Learning of Dexterous Grasps for Novel Objects

This video describes our 2016 IJRR paper. The method allows dexterous grasps to be learned in one shot per grasp type, and it generalises to novel objects (work from the PaCMan project).

This work is part of the PaCMan project. It was carried out primarily by Marek Kopicki, Jeremy Wyatt and Renaud Detry, with great contributions also from Maxime Adjigble (designer of the Boris Robot Platform) and Prof Ales Leonardis. The idea is simple: a dexterous grasp type is learned from a single example. When presented with an object of quite different global shape, each learned grasp type is adapted to the new object, and the method selects among these adapted grasps. It can generate, refine and rank about 2500 grasps in 30 seconds on an i7.

One Shot Grasp Learning, Selection and Adaptation for Novel Objects

A technical report underpinning our journal submission is also available.
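
To give a flavour of the generate-refine-rank loop described above, here is a hypothetical Python sketch of the control flow only; sample, refine and score stand in for the contact and hand-configuration models that the paper learns from a single demonstration, and are not its actual API.

    def select_grasp(object_cloud, grasp_models, sample, refine, score,
                     n_candidates=2500):
        """Illustrative one-shot grasp transfer loop (placeholder functions,
        not the paper's implementation): each grasp type learned from a single
        example is adapted to the novel object, candidates are locally refined,
        and the highest-scoring grasp is returned."""
        per_type = max(1, n_candidates // len(grasp_models))
        candidates = []
        for model in grasp_models:                        # one model per demonstrated grasp type
            for _ in range(per_type):
                g = sample(model, object_cloud)           # adapt the grasp type to the new shape
                g = refine(g, model, object_cloud)        # local optimisation of the candidate
                candidates.append((score(g, model, object_cloud), g))
        candidates.sort(key=lambda c: c[0], reverse=True) # rank by score
        return candidates[0][1]                           # best-ranked grasp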