A full-day workshop on 1 October 2018 at the International Conference on Intelligent Robots and Systems (IROS)


IMPORTANT: the talks at this workshop will be LIVE STREAMED. Please check the YouTube channel here!

This workshop is supported by IEEE/RAS Technical Committees on:

  • Robotic Hand, Grasping and Manipulation,
  • Mobile Manipulation,
  • Cognitive Robotics, and
  • Robot Learning.

If you require further information on support/endorsement for this workshop, please email: a.ghalamzanesfahani@bham.ac.uk

Task-Informed Grasping (TIG) for rigid and deformable object manipulation


Smart grasping and manipulation (i.e. making stable contacts on an object's surface and acting smartly on it) are crucial for robots functioning in our society. Recent advances in robotic grasping have shown promising results; however, for robots to see, perceive, decide and act the way a human or a primate does, many challenges still need to be addressed. Cognitive science has revealed that primates anticipate the outcome of grasping actions.

This anticipation permits successful manipulation of both deformable and rigid-body objects. For instance, to tie a knot, or to simultaneously manipulate and cut deformable tissue during a medical procedure, a human predicts which grasps on the deformable object will enable successful task completion. Indeed, scientists have shown that this smart anticipatory grasping and object coordination, i.e. Task-Informed Grasping (TIG), is mostly performed based on (i) optimal affordance, (ii) minimum energy expenditure, and (iii) maximum reachability along an intended trajectory.

To better integrate robots into our society, TIG must be properly understood and addressed so that robots can mimic human predictive grasp planning.

Topics of interest include (but are not limited to):

  • Deep learning for task-informed grasping
  • Affordance-informed grasping
  • Safety-informed grasping
  • Grasping for minimum-energy manipulation
  • Grasping that enables collision-free manipulative motions
  • Task-informed human grasping: challenges of rigid-body and deformable object manipulation
  • Grasping and manipulation in robotic surgery
  • Challenges of human-robot collaborative manipulation
  • TIG suitable for human-robot collaborative manipulation
  • Challenges of TIG in the context of teleoperation and/or mixed/shared control
  • Challenges of grasp learning and computing grasp quality metrics suitable for TIG
  • TIG and manipulator kinematics/dynamics (KD), e.g. how do the KD of manipulative actions affect TIG?
  • TIG in the context of soft-tissue (deformable object) manipulation
  • Challenges of grasp planning for manipulating deformable objects
  • TIG by soft/continuum robots


Amir Ghalamzan  09:00 - 09:05  Welcome
Aude Billard  09:05 - 09:45  Modelling skillful bimanual manipulation in humans
Jeremy Wyatt  09:45 - 10:15  Data efficient grasp learning
Edward Johns  10:15 - 10:35  Deep Learning for Grasping via Simulation
--  10:35 - 11:00  Lightning talks of posters (each author presents their paper in 5 minutes; the order of presentations follows the paper numbers)
--  11:00 - 11:30  Coffee break
--  11:30 - 12:40  Interactive poster session
Alberto Rodriguez  11:40 - 12:10  A vision for tactile dexterity
David Navarro-Alarcon  12:10 - 12:30  Shape servoing of deformable objects
Tamim Asfour  12:30 - 12:55  Progress in Humanoid Grasping and Manipulation in the Real World
Maximo A. Roa  12:55 - 13:30  The role of variable stiffness in grasping tasks
--  13:30 - 14:30  Lunch
Oliver Brock  14:30 - 15:05  The Benefits of Staying in Touch
Robert Platt  15:05 - 15:35  Deictic abstractions for robotic manipulation
Sami Haddadin  15:35 - 16:10  The Art of Manipulation: Learning to Manipulate Blindly
Dmitry Berenson (Dale McConachie)  16:10 - 16:30  What matters for deformable object manipulation
--  16:30 - 17:00  Coffee break
Fanny Ficuciello  17:00 - 17:30  Grasping and manipulation in surgical tasks
--  17:30 - 18:30  Panel discussion

Invited Speakers


Professor Aude Billard: Director of Learning Algorithm and Systems Laboratory (LASA) EPFL, Switzerland 

Talk: Modelling skillful bimanual manipulation in humans 


Abstract: This talk describes efforts to model the acquisition of fine bimanual manipulation skills in watchmaking and to transfer these skills to corresponding competences for robots.

Bio: TBD



Professor Oliver Brock: Director of Robotic and Biology Lab at TU Berlin, Germany

Talk: The Benefits of Staying in Touch: From Soft Manipulation to In-Hand Manipulation


Abstract: TBD

Bio: Oliver Brock is the Alexander-von-Humboldt Professor of Robotics in the School of Electrical Engineering and Computer Science at the Technische Universität Berlin in Germany. He received his Diploma in Computer Science in 1993 from the Technische Universität Berlin and his Master's and Ph.D. in Computer Science from Stanford University in 1994 and 2000, respectively. He also held post-doctoral positions at Rice University and Stanford University. Starting in 2002, he was an Assistant Professor and Associate Professor in the Department of Computer Science at the University of Massachusetts Amherst, before moving back to the Technische Universität Berlin in 2009. The research of Brock's lab, the Robotics and Biology Laboratory, focuses on mobile manipulation, interactive perception, grasping, manipulation, soft material robotics, interactive machine learning, deep learning, motion generation, and the application of algorithms and concepts from robotics to computational problems in structural molecular biology. He is the president of the Robotics: Science and Systems foundation.




Professor Jeremy Wyatt, Professor of Computer Science and member of the Centre for Computational Neuroscience and Cognitive Robotics (CNCR) at the University of Birmingham.

Talk: Data efficient grasp learning


ABSTRACT: In this talk, I'll cover methods for learning grasps from small numbers of demonstrated grasps. The core is an algorithm based on a product-of-experts formulation that generalises well to novel objects. I will also present extensions tailored to grasping from a single view and to grasp evaluation using physics simulation. This allows data-intensive training of grasps without recourse to many real grasp trials.

Bio: Jeremy was the Project Coordinator for the FP7-funded project PacMan on robot manipulation and for the CogX project on robots that plan and learn in the face of knowledge gaps. He worked on the Strands project on long-term autonomy and spatio-temporal mapping, and he was part of the GeRT project on robot manipulation and the CoSy project on cognitive robotics.



Dmitry Berenson, Assistant Professor and Director of the Autonomous Robotic Manipulation Laboratory at the University of Michigan, USA

Talk: What Matters for Deformable Object Manipulation?



Bio: Dmitry Berenson received a BS in Electrical Engineering from Cornell University in 2005 and received his Ph.D. degree from the Robotics Institute at Carnegie Mellon University in 2011, where he was supported by an Intel PhD Fellowship. He completed a post-doc at UC Berkeley in 2012 and was an Assistant Professor at WPI 2012-2016. He started as an Assistant Professor in the EECS Department and Robotics Institute at the University of Michigan in 2016. He has received the IEEE RAS Early Career award and the NSF CAREER award.




Robert Platt Jr., Assistant Professor and Director of the Helping Hands Laboratory at Northeastern University, USA

Talk: Deictic Abstractions for Robotic Manipulation


ABSTRACT: In applications of deep reinforcement learning to robotics, it is often the case that we want to learn pose invariant policies: policies that are invariant to changes in the position and orientation of objects in the world. For example, consider a peg-in-hole insertion task. If the agent learns to insert a peg into one hole, we would like that policy to generalize to holes presented in different poses. Unfortunately, this is a challenge using conventional methods. In this talk, I will describe a novel state and action abstraction that is invariant to pose shifts called "deictic image maps" that can be used with deep reinforcement learning. I will provide broad conditions under which optimal abstract policies are optimal for the underlying system. Finally, I will show that the method can help solve challenging robotic manipulation problems.

Bio: Dr. Robert Platt is an Assistant Professor of Computer Science at Northeastern University. Prior to coming to Northeastern, he was a Research Scientist at MIT and a technical lead at NASA Johnson Space Center, where he helped develop the control and autonomy subsystems for Robonaut 2, the first humanoid robot in space.



Dr. David Navarro-Alarcon, Assistant Professor of Robotics, Department of Mechanical Engineering, Hong Kong Polytechnic University, Hong Kong

Talk: Shape Servoing of Deformable Objects: Modelling, Online Estimation, and Control


ABSTRACT: Over the past years, there has been increasing interest in the design of sensor-guided methods for controlling the shape of deformable objects with robot manipulators. This shape control problem has many potential applications in growing fields such as surgical robotics, automated food processing, the garment industry, home robotics, etc. I refer to these types of feedback tasks as (visual) shape servoing, an approach that contrasts with standard (eye-in-hand) visual servoing (viz. à la Chaumette) in that the servo loop is formulated in terms of the object's deformable shape and not in terms of the rigid pose of the robot/object. My aim in this talk is to present the basic formulation of this new type of sensor-guided manipulation task. To tackle this challenging (and still open) manipulation problem, in the past few years we have developed a new vision-based methodology that allows us to characterise the object's infinite-dimensional shape with a compact vector of feedback parameters, to estimate/approximate online the deformation properties of an unknown manipulated soft body, and to explicitly servo-control the shape/deformations of the object with active robot motions. I will introduce our recent work on this problem. Examples of our vision-based methods, algorithms and estimators will be demonstrated; open problems, challenges, and opportunities will also be discussed.

Bio: Dr David Navarro-Alarcon received his PhD degree in mechanical and automation engineering in 2014 from the Chinese University of Hong Kong (CUHK), where he afterwards worked as a Postdoctoral Fellow in soft object manipulation, and then as a Research Assistant Professor with the Medical Robotics Group of the T Stone Robotics Institute. Since July 2017, he has been an Assistant Professor of Robotics at the Hong Kong Polytechnic University (PolyU). His current research interests include shape/deformation servoing of soft objects, uncalibrated algorithms for sensor-guided robots, and multimodal methodologies for servo-control. Dr Navarro-Alarcon is an Associate Editor of the journal Frontiers in Robotics and AI, Specialty Section on Soft Robotics, the co-organiser of two previous IROS workshops (2016 and 2017) on "Multimodal Sensor-Based Robot Control", and is currently the organiser of a new IROS 2018 Special Session on "Methods and Algorithms for Manipulation of Deformable Objects". He is a member of the IEEE and the Robotics and Automation Society (RAS).





Dr. Fanny Ficuciello, Associate Professor, senior member of PRISMA Lab (Projects of Robotics for Industry and Services, Mechatronics and Automation), Università di Napoli Federico II, Italy

Talk: Grasping and Manipulation in Surgical Tasks


ABSTRACT: Besides humanoid robots and prosthetic applications, other areas such as minimally invasive laparoscopic surgery could benefit from the use of suitably designed hands able to enter the patient's body through the trocar and to replace the hands of the surgeon while equaling their dexterity and sensory ability. Common forceps used in robotic systems such as the daVinci robot have limited dexterity and lack sensors to measure interaction forces. Therefore, laparoscopic surgery needs new instruments to facilitate surgical manoeuvres. In this work, we take a step towards robotic solutions that improve the manipulation capabilities of surgical instruments, up to the adoption of artificial hands in the surgical field, where the use of anthropomorphic prehensile devices can make the difference.

Bio: Fanny Ficuciello received the Laurea degree magna cum laude in Mechanical Engineering and the Ph.D. degree in Computer and Automation Engineering from the University of Naples Federico II in 2007 and 2010, respectively. From September 2009 to March 2010 she was a visiting scholar in the Control Engineering Group at the University of Twente, The Netherlands. Currently, she is Assistant Professor of Industrial Bioengineering at the University of Naples Federico II. Her research activities focus on biomechanical design and bio-aware control strategies for anthropomorphic artificial hands, grasping and manipulation with hand/arm and dual-arm robotic systems, human-robot interaction control, variable impedance control, and redundancy resolution strategies. Recently, she has been involved in surgical robotics research projects as a member of the ICAROS centre (Interdepartmental Center for Advances in Robotic Surgery) of the University of Naples Federico II. She is the recipient of a National Grant within the "Programma STAR Linea 1", under which she is the PI of the MUSHA project. She is responsible for the research objective "MRI-TRUS fusion algorithms and control strategies for a robot-assisted biopsy" within the national project "Bioptic Advanced Robotic Technologies in OncoLOgy - B.A.R.T.O.LO". She has been a Member of the IEEE Robotics and Automation Society since 2008, and a Senior Member since 2017. Since 2018 she has served on the Technology Committee of the European Association of Endoscopic Surgery (EAES). She is involved in the organization of international conferences and workshops, and currently serves as Associate Editor of the Journal of Intelligent Service Robotics (JIST).


Dr. Maximo A. Roa, Research Scientist at the Institut für Robotik und Mechatronik, German Aerospace Center (DLR), Germany

Talk: The role of variable stiffness in grasping tasks



ABSTRACT: This talk presents current work on the development of low-cost antagonistic hands that provide inherent mechanical intelligence for adapting their shape without increasing control complexity, plus the possibility of changing the finger stiffness to perform different tasks. Combined with suitable grasp planners that make smart use of the stiff environment, these hands enable safe and reliable grasping of challenging objects, as demonstrated in picking heavy and delicate objects such as fruits and vegetables.

Bio: Dr. Máximo A. Roa, PMP, is a Group Leader at the Institute of Robotics and Mechatronics in the German Aerospace Center (DLR), where he has worked since 2010. Since 2015 he has also worked part-time for the DLR spin-off company Roboception, where he is an Expert in Robotic Applications and manager of strategic alliances and research projects. Since 2013, Dr. Roa has served as co-chair of the IEEE-RAS Technical Committee on Mobile Manipulation. His main research areas include grasping, manipulation, motion planning, and humanoid robots.



Prof. Dr. Tamim Asfour is full Professor at the Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), where he holds the chair of Humanoid Robotics Systems and heads the High Performance Humanoid Technologies Lab (H2T).

Talk: Progress in Humanoid Grasping and Manipulation in the Real World 


ABSTRACT: The ability to grasp and manipulate objects is key for robotics. Recently, progress has been made in this area. However, where are the robots that can grasp any object in the real world? I will present our progress towards humanoid robots able to perform task-specific grasps and execute complex bimanual manipulation tasks in a kitchen environment and in industrial maintenance setups based on combining vision, force and haptics. I will discuss lessons learned from the past, limitations of the current state of the art and promising research directions.

BIO: Tamim Asfour is full Professor of Humanoid Robotics at the Institute for Anthropomatics and Robotics, High-Performance Humanoid Technologies at the Karlsruhe Institute of Technology (KIT). His research focuses on the engineering of high performance 24/7 humanoid robotics as well as on the mechano-informatics of humanoids as the synergetic integration of mechatronics, informatics and artificial intelligence methods into humanoid robot systems, which are able to predict, act and interact in the real world. Tamim is the developer of the ARMAR humanoid robot family. In his research, he is reaching out and connecting to neighbouring areas in large-scale national and European interdisciplinary projects in the area of robotics in combination with machine learning and computer vision.

Professor Dr.-Ing. Sami Haddadin, Director of the Munich School of Robotics and Machine Intelligence, Technical University of Munich, Germany

Talk: The Art of Manipulation: Learning to Manipulate Blindly 


ABSTRACT: Performing skilful manipulation is a very challenging task for robots. So far, even experts could barely program them to, e.g., perform the well-known peg-in-hole task in the real world. Autonomously acquiring such skills, let alone generalizing them to new tasks, is still a major challenge. Typically, manipulation learning is approached with the help of large computation power, very long learning times, or both. However, the performance achieved so far is still far from human performance. We show the results of our new paradigm of robot manipulation, which bridges and unifies basic motor control, simple and complex manipulation strategies, and high-level manipulation planning. The robots show autonomous skill learning and intra- and inter-class generalization of insertion skills at human-level performance.

Bio: TBD



Edward Johns is a Lecturer (Assistant Professor) at Imperial College London, where he holds a Royal Academy of Engineering Research Fellowship, and leads the Robot Learning Lab.

Talk: Deep Learning for Grasping via Simulation


ABSTRACT: In recent years, deep learning has made a significant impact on computer vision, by enabling task-specific representations to be learned directly from raw image pixels. It is natural for us now to consider whether this can be extended to the learning of physical behaviours, such as controlling a robot arm to grasp objects, by studying image representations which express not only perception but also dynamics. However, due to the reliance of deep learning on huge, manually-labelled datasets, it is impractical to train such a controller with physical robots, particularly if we desire complex and flexible behaviour which generalises to novel objects and environments. In this talk, I will demonstrate the use of simulation to scale up data diversity in pursuit of this generalisation, and I will present a number of ways to leverage the benefits of simulated data and to transfer robot controllers from simulation to the real world.

Bio: Edward Johns is a Lecturer (Assistant Professor) at Imperial College London, where he holds a Royal Academy of Engineering Research Fellowship and leads the Robot Learning Lab. His research interests lie at the intersection of computer vision, robotics, and machine learning, with a particular emphasis on developing deep learning and reinforcement learning methods for robot manipulation. He received a BA and MEng in 2006 and 2007 from Cambridge University in Electrical and Information Engineering, and a PhD in 2014 from Imperial College London in visual localisation for mobile robots. In 2017 he was then awarded a Royal Academy of Engineering Research Fellowship for his project "Empowering Next-Generation Robots with Dexterous Manipulation: Deep Learning via Simulation".


Alberto Rodriguez, Assistant Professor and Director of the Manipulation and Mechanisms Laboratory (MCube) at the Massachusetts Institute of Technology (MIT), USA

Talk: A vision for tactile dexterity

Unfortunately, Alberto could not give his talk due to illness.



Bio: TBD



Submissions are welcome in either of two categories:

            1. Extended abstract (maximum 2 pages): new ideas on task-informed grasping and/or late-breaking results;
            2. Full paper (maximum 6 pages): accepted based on quality, originality, and relevance to the workshop. Authors of selected papers may be asked to submit extended versions of their papers for an RA-L special issue. Submitted papers should not be under consideration for publication anywhere else.

Paper submission and the review process will be handled through the EasyChair conference management system. Submissions should follow the IROS format. Please submit your contributed paper or extended abstract via EasyChair and also email it to workshop.tig@gmail.com by the deadline below.

Important dates: 

          • Paper submission deadline: July 15th, 2018 -> August 15th, 2018
          • Acceptance Notification: July 30th, 2018 -> August 30th, 2018
          • Camera Ready deadline: September 1st, 2018 -> September 15th, 2018
          • Registration deadline: see IROS web site
          • Workshop date: October 1st, 2018

Accepted papers:

  1. Hao Tian, Changbo Wang, Dinesh Manocha and Xinyu Zhang, "Interactive Grasping for High-genus Objects using Autonomous Learning". (pdf)
  2. Changjoo Nam, Jinhwi Lee and Changhwan Kim, "Planning for efficient rearrangement of obstacles for grasping objects in cluttered environments". (pdf)
  3. Luca Monorchio, Daniele Evangelista, Marco Imperoli and Alberto Pretto, "Learning from Successes and Failures to Grasp Objects with a Vacuum Gripper". (pdf)
  4. Matthew Broadway, Jeremy L. Wyatt, Michael J. Mathew and Ermano Arruda, "Data Efficient Direct Policy Search Using Bayesian Optimisation". (pdf)
  5. Tommaso Pardi, Rustam Stolkin and Amir Ghalamzan, "Grasp planning according to post-grasp objectives". (pdf)

Main Organiser

Amir Masoud Ghalamzan Esfahani, Ph.D. 
Research Fellow at Extreme Robotic Lab, Elms Rd, School of Metallurgy and Materials
University of Birmingham, United Kingdom


Farshid Alambeigi, PhD candidate 
Johns Hopkins University, USA

Sahba Aghajani Pedram, PhD candidate 
Bionic Lab
University of California, USA 

Renaud Detry, Ph.D.
Research Scientist, Jet Propulsion Laboratory (JPL), NASA
M/S 198-219 
4800 Oak Grove Drive 
Pasadena, CA 91109, USA

Veronica J. Santos, Ph.D. 
Associate Professor
Director of Biomechatronic Lab
University of California, Los Angeles, USA

Rustam Stolkin, Ph.D.
Professor, Director of Extreme Robotic Lab 
University of Birmingham, United Kingdom

Program Committee

Tommaso Pardi

Dr Christopher Paxton

Dr Manolis Chiou

Mission successfully completed:

We had a great lineup of featured speakers and more than 100 registered attendees.

More than 100 people attended our workshop; at times attendees had to stand or sit on the floor.


Finally, we concluded with a fruitful panel discussion, hosting experts from academia and industry, including representatives of Amazon and Shadow Robotics.