Research activities within the Institute of Robotics

Manipulation and grasping

Hand and object interaction modelling

Contact: Dr Hyung Jin Chang

Summary

The estimation of hand and object pose and shape during interactions is critical in augmented and virtual reality applications. However, current methods rely on explicit physical constraints and known objects, which limits their scope. Understanding hand-object manipulation requires the ability to reason about the physical contacts between them. Our research aims to develop algorithms that are not constrained by object models and can learn the physical rules governing hand-object interaction. Our experiments, using widely-used benchmarks, demonstrate that our frameworks achieve state-of-the-art accuracy in 3D pose estimation, and accurately recover dense 3D hand and object shapes. Moreover, incorporating a contact map to guide hand-object interactions further improves the accuracy of the reconstructions.

Figures: camera and hand views; hand positions when using a robotic glove.

Selected publications

Tze Ho Elden Tse, Kwang In Kim, Ales Leonardis, Hyung Jin Chang, ‘Collaborative Learning for Hand and Object Reconstruction with Attention-guided Graph Convolution’, IEEE Proc. Computer Vision and Pattern Recognition (CVPR), June 2022.

Tze Ho Elden Tse*, Zhongqun Zhang*, Kwang In Kim, Ales Leonardis, Feng Zheng, Hyung Jin Chang, ‘S²Contact: Graph-based Network for 3D Hand-Object Contact Estimation with Semi-Supervised Learning’, European Conference on Computer Vision (ECCV), October 2022.

John Yang*, Hyung Jin Chang*, Seungeui Lee, Nojun Kwak, ‘SeqHAND: RGB-Sequence-Based 3D Hand Pose and Shape Estimation’, European Conference on Computer Vision (ECCV), August 2020. (* indicates equal contribution)

Smart Gripper (bioGrasp)

Contacts: Professor Samia Nefti-Meziani and Dr Steve Davis

Summary

The problem addressed is an active debris removal (ADR) mission in which a service spacecraft captures an uncontrolled spacecraft using a robotic manipulator. The primary objective is to capture and rigidise the uncontrolled spacecraft without generating additional debris, whether microscopic or macroscopic. Achieving this requires a robotic manipulator that is efficient in weight and cost and robust, as shown in Fig. 1. This poses several challenges, such as maintaining the manipulator's stability, minimising errors in the force-control algorithm, and predicting the mechanical properties of the debris to be grasped.

First, a novel hybrid soft-rigid design is introduced to develop a bioinspired, dexterous four-finger gripper capable of pinching and grasping simultaneously. The design is directly inspired by the motion and architecture of a biological hand, which consists of rigid bones (carbon fibre in our design) enveloped in muscle and cartilage (thermoplastic polyurethane in our design), as shown in Fig. 2.

The second significant novelty of the design is its ability to grasp both delicate and hard/heavy objects without slippage, thanks to its hybrid soft/rigid structure. Furthermore, its novel dual-tendon actuation system enables a pinching action without an extra actuator, optimising weight, resilience, and dynamics. Fig. 3 shows the reconfigurable dual-tendon mechanism of the bioGrasp gripper fingers, which supports both pinching and flat-surface grasping. A video demonstrating the gripper's actuation is attached.

The third innovation of the smart gripper is its ability to detect and avoid slippage. This enables stable grasping and manipulation of objects/debris while preventing further breakdown of the debris, one of the main objectives of ADR. Slip avoidance is achieved by a computationally efficient algorithm based on luminance differencing: the slightest movement of an object held by the gripper is precisely detected and fed back to the control system to be countered. The algorithm's efficiency allows the gripper to be deployed and controlled autonomously on a single ARM-based processor.
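To make the slip-detection idea concrete, here is a minimal sketch of luminance differencing between consecutive camera frames, assuming a grayscale image stream; the threshold values and function name are illustrative, not the actual bioGrasp implementation.

```python
import numpy as np

def detect_slip(prev_frame: np.ndarray, curr_frame: np.ndarray,
                pixel_thresh: int = 12, area_thresh: float = 0.01) -> bool:
    """Flag slip when enough pixels change luminance between frames.

    prev_frame, curr_frame: 2D uint8 grayscale views of the grasped object.
    pixel_thresh: minimum per-pixel luminance change counted as motion.
    area_thresh: fraction of moving pixels above which slip is declared.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = diff > pixel_thresh
    return bool(moving.mean() > area_thresh)
```

On detection, the controller would raise tendon tension until the differencing signal settles; keeping the arithmetic to one subtraction and one threshold per pixel is what makes this cheap enough for a single ARM-based processor.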

Fig. 1: Computer-aided design of the smart gripper and its actual manufactured version (prototype)

Fig. 2: Bioinspired bone-cartilage structure of smart gripper’s finger prototypes.

Fig. 3: Dual-tendon mechanism implemented on the smart gripper fingers for enhanced dexterity.

Fig. 4: Top and bottom view of the slip detector system embedded on the palm of the gripper.

Autonomous systems and AI

Connected and autonomous systems for electrified vehicles

Contact: Dr Quan Zhou

Summary

Dr Zhou aspires to harness the emerging power of AI to reshape vehicle design and control, helping attain a more sustainable society. His research interests include fuzzy inference, evolutionary computation, deep and reinforcement learning, and their applications in automotive engineering. With a track record of more than 70 research papers published in international journals (e.g., IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Industrial Informatics) and conference proceedings, and 9 patented inventions, Dr Zhou has gained recognition from industry and academia. He collaborates closely with several world-leading research institutions, e.g., the EU Joint Research Centre, Nanyang Technological University, Tsinghua University, and RWTH Aachen.

Future vehicles will be connected, automated, shared, and electrified (CASE) through the emerging Internet of Vehicles. This research aims to make a timely contribution to zero-emission transport through advanced modelling and control of large-scale electric vehicle platoons. The research is conducted on a state-of-the-art X-in-the-loop testing facility (e.g., AVL Testbed.Connect, AVL PUMA, ETAS LABCAR, IPG CarMaker), and the outcomes will include an open-source 3D driving environment and onboard control software for real-time multi-objective (energy economy, durability) optimal control.

At the core of this research is the development of a multi-agent reinforcement learning algorithm capable of environment perception and decision-making along two main dimensions: a deep inspection of the vehicle's powertrain systems (e.g., the battery or fuel-cell system) and global interaction with the traffic infrastructure and surrounding vehicles. We will examine the robustness, reliability, and feasibility of the developed models and algorithms with industry partners.
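As a rough illustration of the two perception dimensions, the sketch below fuses a powertrain feature stream and a traffic feature stream in a toy value network; the feature choices, layer sizes, and random weights are assumptions for illustration, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features: 3 powertrain values and 3 traffic values per vehicle.
W_power   = rng.normal(size=(8, 3))   # encodes battery / fuel-cell state
W_traffic = rng.normal(size=(8, 3))   # encodes gap, leader speed, signal phase
W_head    = rng.normal(size=(5, 8))   # fused features -> 5 torque actions

def q_values(powertrain, traffic):
    """Two-stream fusion: each perception dimension is encoded separately,
    then summed before the action-value head."""
    h = np.tanh(W_power @ powertrain) + np.tanh(W_traffic @ traffic)
    return W_head @ h

# e.g. SoC = 0.8, battery temp = 30 C, motor power = 12 kW;
# gap = 25 m, leader speed = 14 m/s, signal phase = green (1).
q = q_values(np.array([0.8, 30.0, 12.0]), np.array([25.0, 14.0, 1.0]))
action = int(np.argmax(q))  # greedy torque command for this vehicle
```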

Cognitive computation and AI

Self-Supervised Multi-modal Machine Learning

Contact: Dr Jianbo Jiao

Summary

Jianbo’s research addresses general problems in computer vision, machine learning, and healthcare. His current interests focus on learning from limited supervision (e.g., self-supervised, weakly-supervised, and semi-supervised learning, and transfer learning) and from multi-modal sensory data (e.g., image/video, speech/audio, text/NLP, depth/stereo, gaze/saliency, motion). For an up-to-date publication list, please see Dr Jianbo Jiao's Google Scholar profile.

Selected publications

Wang, J., Jiao, J., Bao, L., He, S., Liu, W., & Liu, Y. H. (2021). Self-supervised video representation learning by uncovering spatio-temporal statistics. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(7), 3791-3806.

Jiao, J., & Henriques, J. F. (2021). Quantised Transforming Auto-Encoders: Achieving Equivariance to Arbitrary Transformations in Deep Networks. British Machine Vision Conference (BMVC), 2021.

Jiao, J., Cai, Y., Alsharid, M., Drukker, L., Papageorghiou, A. T., & Noble, J. A. (2020). Self-supervised contrastive video-speech representation learning for ultrasound. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part III 23 (pp. 534-543). Springer International Publishing.

Wang, J., Jiao, J., & Liu, Y. H. (2020). Self-supervised video representation learning by pace prediction. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVII 16 (pp. 504-521). Springer International Publishing.

Jiao, J., Cao, Y., Song, Y., & Lau, R. (2018). Look deeper into depth: Monocular depth estimation with semantic booster and attention-driven loss. In Proceedings of the European conference on computer vision (ECCV) (pp. 53-69).

Jiao, J., Wei, Y., Jie, Z., Shi, H., Lau, R. W., & Huang, T. S. (2019). Geometry-aware distillation for indoor semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2869-2878).

Medical and surgical

Medical Robotics: Robot-assisted Rehabilitation of Lower Limb for Post-stroke Patients

Contact: Dr Mozafar Saadat

Summary

Every year approximately 150,000 people in the UK have a stroke and 90% of stroke survivors are left with significant impairment. Gait impairment is a large contributor to long-term disability where many patients lose the ability to walk independently, and a large proportion do not regain their normal walking speeds following a stroke.

As the majority of stroke survivors need intensive gait rehabilitation therapy, robotic systems have gained remarkable attention in recent years as a tool to decrease the strain on physical therapists while increasing the precision and repeatability of the exercise. However, although some current methods for robot-assisted rehabilitation have produced positive and promising outcomes, there is only moderate evidence of improvement in walking and motor recovery with existing robotic devices compared to traditional practice.

We aim to develop a new robot-assisted gait rehabilitation system for neurologically impaired individuals. We have developed a range of parallel kinematic machines as passive/active robotic platforms that allow the user to perform a variety of lower-limb mobility tasks as therapeutic exercises.

The group’s current focus is to design and implement a robotic device that integrates several force sensors with an AI-based control scheme, enabling haptic rendering of a virtual floor. This will allow users to walk naturally as they shift their weight from one limb to the other and control the position of their body's centre of mass. The strategy accommodates the patient's failures in performing exercises while fostering error-correction mechanisms and changes in intra-limb coordination.
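Haptic rendering of a virtual floor is commonly done with a penalty-based spring-damper law; the sketch below shows that general technique (with illustrative stiffness and damping values), not the group's specific controller.

```python
def virtual_floor_force(z, vz, z_floor=0.0, k=8000.0, b=60.0):
    """Penalty-based haptic rendering of a rigid floor.

    z, vz:   vertical position (m) and velocity (m/s) of the foot plate
    z_floor: height of the virtual ground
    k, b:    virtual stiffness (N/m) and damping (N*s/m); tuning values
             chosen here for illustration only

    Returns the upward force the robot should render; zero when the
    foot is above the virtual surface.
    """
    penetration = z_floor - z
    if penetration <= 0.0:
        return 0.0
    return k * penetration - b * vz
```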

The overall aim of our research is to develop a progressive, robot-assisted lower limb rehabilitation therapy towards an effective and accelerated recovery path for post-stroke patients.

Figures: a sensor foot stand; a person on a treadmill.

Selected publications

Maddalena, M., & Saadat, M. (2023), Efficient Observer Design for Ambulatory Estimation of Body Centre of Mass Position, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Online.

Scone, T., Saadat, M., Barton, H., & Rastegarpanah, A. (2023), Effects of Variations in Hemiparetic Gait Patterns on Improvements in Walking Speed, IRBM, 44(1), 100733.

Maddalena, M., & Saadat, M. (2021), Simulated muscle activity in locomotion: implications of co-occurrence between effort minimisation and gait modularity for robot-assisted rehabilitation therapy, Computer Methods in Biomechanics and Biomedical Engineering, 24(12), 1380-1392.

Rastegarpanah, A., Scone, T., Saadat, M., Rastegarpanah, M., Taylor, S. J., & Sadeghein, N. (2018), Targeting effect on gait parameters in healthy individuals and post-stroke hemiparetic individuals, Journal of Rehabilitation and Assistive Technologies Engineering, 5, 2055668318766710.

Rastegarpanah, A., & Saadat, M. (2016), Lower limb rehabilitation using patient data, Applied Bionics and Biomechanics, 2016.

Medical robotics: automated intracytoplasmic sperm injection (ICSI)

Contact: Dr Mozafar Saadat

Summary

Our research is focused on developing a robotic system utilising vision recognition and AI to minimise the risks of inconsistencies and damage potential of manual egg manipulation and sperm injection in intracytoplasmic sperm injection (ICSI), a popular infertility treatment in IVF clinics.

Currently, ICSI is performed manually by an embryologist through visual observation under a microscope and manual manipulation of the egg and sperm using joystick-operated micromanipulators. The operation is inconsistent, as it is highly dependent on the operator's experience and judgement, and typically results in a low success rate. Automating ICSI will help to significantly improve fertility success rates, increase accessibility for couples, and increase profitability for clinics.

We are developing micron-accuracy fully automated egg manipulation for positioning and orientation with no cell damage, whilst selecting the best available sperm, prior to automated injection under full environmental control.

Selecting a healthy sperm based on its morphology and motility is a critical aspect of ICSI. We are developing automatic sperm sorting, single-sperm isolation, immobilisation, and delivery of a healthy sperm ready for injection. Our research on sperm sorting focuses on two distinct but complementary areas:

  1. development of an efficient and optimised manufacturing method for a 3D microfluidic device that is functional in a clinical environment;
  2. development of an automated, AI-based sperm sorting solution based on vision sensing of sperm cells within a microfluidic environment.

The visual sensing system being developed will analyse sperm and measure their key quality characteristics, giving clinics the opportunity to take a data-driven approach to the ICSI process.

Figures: eye positional graphic; a microfluidic device.

Selected publications

Sadak, F., Saadat, M., & Hajiyavand, A. M. (2020). Real-time deep learning-based image recognition for applications in automated positioning and injection of biological cells. Computers in Biology and Medicine, 125, 103976.

Saadat, M., Taylor, M., Hughes, A., & Hajiyavand, A. M. (2020). Rapid prototyping method for 3D PDMS microfluidic devices using a red femtosecond laser. Advances in Mechanical Engineering, 12(12), 1687814020982713.

Sadak, F., Saadat, M., & Hajiyavand, A. M. (2019). Three dimensional auto-alignment of the ICSI pipette. IEEE Access, 7, 99360-99370.

Hajiyavand, A. M., Saadat, M., Abena, A., Sadak, F., & Sun, X. (2019). Effect of injection speed on oocyte deformation in ICSI. Micromachines, 10(4), 226.

Saadat, M., Hajiyavand, A. M., & Singh Bedi, A. P. (2018). Oocyte positional recognition for automatic manipulation in ICSI. Micromachines, 9(9), 429.

Human-robot interaction and collaboration

Hybrid planning

Contact: Dr Masoumeh Mansouri

Summary

My research is primarily concerned with developing hybrid robot planning methods for unstructured environments shared with humans. In particular, I focus on methods that integrate automated task, motion, and coverage planning, scheduling, and temporal and spatial reasoning. The objectives are to:

  1. combine heterogeneous knowledge representations, e.g., discrete and continuous planning domains, that can express the many nuances of real-world robotics problems;
  2. develop efficient methods for reasoning with these hybrid representations; and
  3. include contextual models of human behaviour in these hybrid methods.

Over the past few years, I have gained extensive experience in developing such hybrid planning methods for applications where intelligent, human-aware multi-robot systems are essential, such as construction, warehouse/factory automation, and mining.

Selected publications

Salvado, J., Mansouri, M., and Pecora, F., 2022. DiMOpt: a Distributed Multi-robot Trajectory Optimization Algorithm. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).


Mansouri, M., Pecora, F. & Schüller, P., 2021. Combining task and motion planning: challenges and guidelines. Frontiers in Robotics and AI, 8, 637888.


Surma, F., Kucner, T.P. and Mansouri, M., 2021. Multiple Robots Avoid Humans To Get the Jobs Done: An Approach to Human-aware Task Allocation. In 2021 European Conference on Mobile Robots (ECMR) pp. 1-6. IEEE.


Behrens, J. K., Lange, R. & Mansouri, M., 2019. A constraint programming approach to simultaneous task allocation and motion scheduling for industrial dual-arm manipulation tasks. In the 2019 International Conference on Robotics and Automation (ICRA). IEEE Computer Society P.


Mansouri, M., Lacerda, B., Hawes, N. & Pecora, F., 2019. Multi-robot planning under uncertain travel times and safety constraints. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). Kraus, S. (ed.), pp.478-484. 

Critical cultural robotics

Contact: Dr Masoumeh Mansouri

Summary

There is a growing trend to attempt to introduce social and cultural behaviours and practices into the deployment and daily engagements of robotics. The main driver of this trend is a push to enhance robots’ likeability and trustworthiness, and the comfort of their users. However, the vast majority of approaches to introducing cultural factors into human-robot interactions rely on imaginations of “national cultures.” It has been shown that the oversimplified confounding of culture and nationality results in implicit support for conservative social policies, the reproduction of lazy and/or harmful cultural stereotypes, and, more pragmatically, simply inaccurate models of the wants or needs of an identified user base.

To avoid ineffective and actively harmful social robots, and to produce robot behaviour that better conforms to human cultural expectations, we need to develop a more accurate understanding of the diversity of relevant cultural needs, as well as be able to account for the different interpretations of “culture” in current and historical AI methods.

In my research, together with a group of multidisciplinary researchers, I attempt to:

  1. explore the relatively new field of study dedicated to the effects of the integration of cultural models into robots;
  2. develop cultural robotics which starts from a critical perspective; and
  3. call for a better understanding of the new forms of culture created in human-robot interactions, the impacts of these cultures on participants, on the contexts within which interactions take place, and on wider society.

Critical Cultural Robotics Network web page.

Selected publications

Candiotto, L. and Mansouri, M., 2022. Can we have cultural robotics without emotions? Proceedings of Robophilosophy, pp. 259-266.


Ornelas, M.L., Smith, G.B. and Mansouri, M., 2022. Redefining culture in cultural robotics. AI & SOCIETY, pp.1-12.


Mansouri, M., 2022. A call for epistemic analysis of cultural theories for AI methods. AI & SOCIETY, pp.1-3.


Brandao, M., Mansouri, M. & Magnusson, M., 2022. Editorial: Responsible Robotics. Frontiers in Robotics and AI, 9, 937612.

Satisficing Trust in Human Robot Teams

Contact: Professor Chris Baber

Summary

In this project, we design and develop Human-Robot Teams (using experiments with real robots and modelling with Reinforcement Learning) to conduct urban search and related activity.

A team will consist of 1-3 human operators and 2-6 robots. We extend the definition of a 'team' beyond robots and humans on the ground. Drawing an analogy with the management of major incidents (in UK Emergency Services), operational activity is performed at the 'bronze' level, i.e., by the local human-robot team, which is overseen by tactical coordinators at the 'silver' level (e.g., providing guidance on legal or other constraints) and answers to high-level strategic command at the 'gold' level (e.g., redefining goals for the mission).

In this way, the 'team' is more than local coordination, and trust applies through the command hierarchy as well as horizontally across each level. Communication may be intermittent, and the mission's goals and constraints might change during the mission. This is a further driver of variation in trust, along with mission, activity, and situation.

Each team member, human or robot, will be allocated tasks within the team and perform these in an autonomous manner. Key to team performance will be the ability to acquire and maintain Distributed Situation Awareness, i.e., team members will have their own interpretation of the situation as they see it, and their own interpretation of the behaviour of their teammates.

Teammate behaviour can be inferred from observing what teammates are doing in a given situation and whether this is to be expected; this yields behavioural markers of trust. We also consider the confidence with which teammates express their Situation Awareness, e.g., in terms of their interpretation of the data they perceive in the situation. From the interpretation of teammate behaviour, we explore appropriately scaled trust, using the concept of a 'ladder of trust' on which trust moves up and down depending on the quality of situation awareness, the behaviour of teammates, and the threat posed by the situation.
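As a toy illustration of how a ladder of trust might be operationalised, the sketch below moves a discrete trust level up or down one rung per observation; the rung labels and update rules are illustrative assumptions, not the project's model.

```python
class TrustLadder:
    """Discrete trust levels that move up or down one rung per observation.

    The rung labels and the update rule here are placeholders; the project
    derives its updates from situation awareness quality, teammate
    behaviour, and situational threat.
    """
    RUNGS = ["no reliance", "verify everything", "spot-check", "full reliance"]

    def __init__(self):
        self.level = 1  # start at "verify everything"

    def update(self, behaviour_as_expected: bool, high_threat: bool) -> str:
        if not behaviour_as_expected:
            self.level = max(self.level - 1, 0)       # unexpected: step down
        elif not high_threat:
            self.level = min(self.level + 1, len(self.RUNGS) - 1)  # step up
        return self.RUNGS[self.level]

# e.g. TrustLadder().update(behaviour_as_expected=True, high_threat=False)
```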

From the Distributed Situation Awareness, we also explore counter-factual ('what-if') reasoning to cope with uncertain and ambiguous situations (where ambiguity might relate to permissions and rights to perform tasks, or to the consequences of an action, as well as Situation Awareness).

Manufacturing and remanufacturing

Robotic Assembly/Disassembly of Complex Products

Contact: Dr Mozafar Saadat

Summary

Robotic disassembly is the main step in dealing with end-of-life (EOL) products. It is a key enabling technology for autonomous remanufacturing, in which the core components of a product at the end of its life are retrieved and remanufactured to produce a new product with the original warranty. It sits centrally within the concept of Industry 4.0 and supports the objectives of a net-zero circular economy.

Robotic disassembly has proved to be an efficient way to reduce process costs, and a significant recovery value can be achieved using a robotic disassembly system. However, the nature of the disassembly process and its high level of uncertainty make automation challenging.

In this project, our research focuses on the following areas:

  • Developing intelligent robotic disassembly planning approaches and new automation methods based on artificial intelligence.

  • Investigating new methods for an efficient and reliable human-robot collaborative disassembly process.

  • Robotic and automated assembly/disassembly operations within the concept of Factory-in-a-Box (FIAB).

Selected publications

Zhang, Z., & Saadat, M. (2022). Multi-objective grasp pose optimisation for robotic 3D pipe assembly manipulation. Robotics and Computer-Integrated Manufacturing, 76, 102326.


Parsa, S., & Saadat, M. (2021). Human-robot collaboration disassembly planning for end-of-life product disassembly process. Robotics and Computer-Integrated Manufacturing, 71, 102170.


Parsa, S., & Saadat, M. (2019). Intelligent selective disassembly planning based on disassemblability characteristics of product components. The International Journal of Advanced Manufacturing Technology, 104, 1769-1783.


Parsa, S., & Saadat, M. (2018). Intelligent planning using genetic algorithm for automated disassembly. In Advances in Manufacturing Technology XXXII (pp. 189-194). IOS Press.

Robotic Disassembly/Industrial automation

Contact: Dr Yongjing Wang

Summary

The existing procedure and state-of-the-art techniques for disassembly automation usually require a comprehensive analysis of the disassembly task, correct design of sensing and compliance facilities, efficient task plans, and reliable system integration, making the implementation of a robotic disassembly system a complex, expensive and time-consuming process. This project will develop a self-learning mechanism that allows robots to learn disassembly tasks and the corresponding control strategies autonomously, by combining multidimensional sensing and machine learning techniques. This capability will help build more plug-and-play disassembly automation systems and reduce the technical difficulty and implementation cost of disassembly automation. It is expected that the next generation of industrial robots can then be adopted for more complex and uncertain tasks such as maintenance, cleaning, repair, remanufacturing and recycling, where many processes are contact-rich. Disassembly is a typical contact-rich task. The PI envisages that self-learning robotic disassembly will provide key understanding and technologies that can be applied to the automation of other contact-rich tasks in the future, encouraging a wider adoption of robots in UK industry.

Robotic Disassembly for Autonomous Remanufacturing

Contact: Professor Duc Truong Pham

Summary

Remanufacturing is the process of “returning a product to at least its original performance with a warranty that is equivalent (to) or better than that of the newly manufactured product.” (British Standard BS 8887 – Part 2).


Remanufacturing can be more sustainable than manufacturing from new “because it can be profitable and less harmful to the environment …” (Matsumoto and Ijomah 2013). Several industry sectors have reported substantial energy savings and CO2 emission reductions – up to 83% and 87%, respectively, in the automotive sector (Ortegon et al 2014). In addition to benefits relating to energy consumption and the environment, there are other practical reasons for a company to remanufacture its products: high demand for spare parts, brand protection from independent operators, and long lead times for new components (Seitz 2007).


Disassembly, a key step in remanufacturing, has proved challenging to robotise due to variability in the condition of the product to be taken apart. Our research aims to develop strategies for robotic disassembly and create autonomous disassembly systems underpinned by a fundamental understanding of disassembly processes to increase resilience to uncertainties.


The objectives of our work are:

  1. To conduct a fundamental investigation into disassembly science. This will involve developing analytical models validated through experiments of a range of common basic disassembly tasks.

  2. To devise and test autonomous strategies for a range of basic disassembly operations using the understanding obtained through achieving Objective 1. Two kinds of strategies, passive accommodation and active impedance control, will be investigated to enable the disassembly system to adapt to external changes.

  3. To build the infrastructure needed to integrate and implement the strategies developed through attaining Objective 2. This includes autonomous planning and collaboration systems for complex disassembly tasks.

  4. To demonstrate the robustness to uncertainties of the disassembly strategies, plans and autonomous systems created in pursuing Objectives 2 and 3 on tasks involving real products. Examples suggested by the industrial partners will be analysed and suitable products selected for use in the demonstrations.

Figure: illustrative production-line workers.


Selected publications

M. Qu, Y. Wang and D. T. Pham, "Robotic Disassembly Task Training and Skill Transfer Using Reinforcement Learning," in IEEE Transactions on Industrial Informatics.


M. Kerin, N. Hartono and D. T. Pham, “Optimising remanufacturing decision-making using the bees algorithm in product digital twins”, Sci Rep 13, 701 (2023).


J. Lim and D. T. Pham, “A Passive Compliant Gough-Whitehall-Stewart Mechanism for Peg-Hole Disassembly,” in Kim, KY., Monplaisir, L., Rickli, J. (eds) Flexible Automation and Intelligent Manufacturing: The Human-Data-Technology Nexus. FAIM 2022. Lecture Notes in Mechanical Engineering. Springer, Cham. 


J. Huang, D. T. Pham, R. Li, M. Qu, Y. Wang, M. Kerin, S. Su, C. Ji, O. Mahomed, R. Khalil, D. Stockton, W. Xu, Q. Liu and Z. Zhou, “An experimental human-robot collaborative disassembly cell”, Computers and Industrial Engineering, (2021), Volume 143, ISSN 0360-8352.


R. Li, D. T. Pham, J. Huang, Y. Tan, M. Qu, Y. Wang, M. Kerin, K. Jiang, S. Su, C. Ji, Q. Liu and Z. Zhou, “Unfastening of Hexagonal Headed Screws by a Collaborative Robot”, IEEE Transactions on Automation Science and Engineering, (2020).


Y. Zhang, H. Lu, D. T. Pham, Y. Wang, M. Qu, J. Lim and S. Su – “Peg–hole disassembly using active compliance”, Royal Society Open Science, (2019), 6 (8).

Sensing and computer vision

Human Eye Gaze Tracking

Contact: Dr Hyung Jin Chang

Summary

Eye gaze is an important functional component in various applications, as it indicates human attentiveness and can thus be used to study people's intentions and to understand social interactions. For these reasons, accurately estimating gaze is an active research topic in computer vision, with applications in affect analysis, saliency detection and action recognition, to name a few. Gaze estimation has also been applied in domains beyond computer vision, such as navigation for eye-gaze-controlled wheelchairs, detection of drivers' non-verbal behaviours, and inferring the object of interest in human-robot interactions.

Figure: eye-tracking glasses.

Selected publications

Tobias Fischer, Hyung Jin Chang, Yiannis Demiris, ‘RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments’, European Conference on Computer Vision (ECCV), September 2018.


Jun O Oh, Hyung Jin Chang, Sang-Il Choi, ‘Self-Attention with Convolution and Deconvolution for Efficient Eye Gaze Estimation from a Full Face Image’, IEEE Proc. Computer Vision and Pattern Recognition Workshop (CVPRW) / GAZE2022, June 2022.

Human Body Pose Estimation

Contact: Dr Hyung Jin Chang

Summary

Human pose estimation has long been a nontrivial and fundamental problem in the computer vision community. The goal is to localise anatomical keypoints (e.g., nose, ankle) of human bodies in images or videos. As ever more video is recorded, video-based human pose estimation is in high demand for numerous applications, including live streaming, augmented reality, surveillance, and movement tracking.
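Most modern pose estimators regress one heatmap per keypoint and read off the peak; the sketch below shows that standard decoding step, which is background to (not taken from) the methods cited below.

```python
import numpy as np

def decode_heatmaps(heatmaps: np.ndarray) -> np.ndarray:
    """Convert per-keypoint heatmaps (K, H, W) to (x, y, confidence) rows.

    Standard decoding used by heatmap-based pose estimators: the keypoint
    location is the argmax of its heatmap, and the peak value serves as a
    confidence score.
    """
    K, H, W = heatmaps.shape
    flat = heatmaps.reshape(K, -1)
    idx = flat.argmax(axis=1)
    ys, xs = np.unravel_index(idx, (H, W))
    conf = flat.max(axis=1)
    return np.stack([xs, ys, conf], axis=1)

# e.g. decode_heatmaps(np.random.rand(17, 64, 48)) -> 17 COCO-style keypoints
```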

Selected publications

Runyang Feng, Yixing Gao, Xueqing Ma, Tze Ho Elden Tse, Hyung Jin Chang, Mutual Information-Based Temporal Difference Learning for Human Pose Estimation in Video, IEEE Proc. Computer Vision and Pattern Recognition (CVPR), June 2023.


Boeun Kim, Hyung Jin Chang, Jungho Kim, Jin Young Choi, Global-local Motion Transformer for Unsupervised Skeleton-based Action Learning, European Conference on Computer Vision (ECCV), October 2022.


Joseph Ramsay, Hyung Jin Chang, Body Pose Sonification for a View-Independent Auditory Aid to Blind Rock Climbers, IEEE Winter Conference on Applications of Computer Vision (WACV), March 2020.

Visual Object Tracking

Contact: Dr Hyung Jin Chang

Summary

Key to realizing the vision of human-centred computing is the ability for machines to recognize people, so that spaces and devices can become truly personalized. However, the unpredictability of real-world environments impacts robust recognition, limiting usability. In real conditions, human identification systems have to handle issues such as out-of-set subjects and domain deviations, where conventional supervised learning approaches for training and inference are poorly suited.


With the rapid development of the Internet of Things (IoT), we advocate a new labelling method that exploits signals of opportunity hidden in heterogeneous IoT data. The key insight is that one sensor modality can leverage the signals measured by other co-located sensor modalities to improve its own labelling performance. If identity associations between heterogeneous sensor data can be discovered, it is possible to label data automatically, leading to more robust human recognition without manual labelling or enrolment. On the other side of the coin, we also study the privacy implications of such cross-modal identity association.
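A minimal sketch of the cross-modal labelling idea: count co-occurrences between camera tracks and identifiers from a co-located sensor, then greedily pair the strongest matches. The event format and the greedy matching are illustrative assumptions, not the actual method.

```python
from collections import Counter

def associate_identities(vision_events, radio_events, window=2.0):
    """Vote for (track, device) pairs that co-occur within a time window,
    then greedily keep the most frequent one-to-one pairings.

    vision_events: list of (timestamp, track_id) from a camera tracker
    radio_events:  list of (timestamp, device_id) from a co-located sensor
    """
    votes = Counter()
    for t_v, track in vision_events:
        for t_r, device in radio_events:
            if abs(t_v - t_r) <= window:
                votes[(track, device)] += 1

    pairs, used_tracks, used_devices = {}, set(), set()
    for (track, device), _ in votes.most_common():
        if track not in used_tracks and device not in used_devices:
            pairs[track] = device
            used_tracks.add(track)
            used_devices.add(device)
    return pairs  # auto-generated identity labels for the vision modality
```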

Figures: selection positioning; object identification.


Selected publications

Jinyu Yang, Zhongqun Zhang, Zhe Li, Hyung Jin Chang, Ales Leonardis, Feng Zheng, Towards Generic 3D Tracking in RGBD Videos: Benchmark and Baseline, European Conference on Computer Vision (ECCV), October 2022.


Jongwon Choi, Hyung Jin Chang, Tobias Fischer, Sangdoo Yun, Kyuewang Lee, Jiyeoup Jeong, Yiannis Demiris, Jin Young Choi, Context-aware Deep Feature Compression for High-speed Visual Tracking, IEEE Proc. Computer Vision and Pattern Recognition (CVPR), June 2018.


Jongwon Choi, Hyung Jin Chang, Sangdoo Yun, Tobias Fischer, Yiannis Demiris, Jin Young Choi, Attentional Correlation Filter Network for Adaptive Visual Tracking, IEEE Proc. Computer Vision and Pattern Recognition (CVPR), July 2017.


Jongwon Choi, Hyung Jin Chang, Jiyeoup Jeong, Yiannis Demiris, Jin Young Choi, Visual Tracking Using Attention-Modulated Disintegration and Integration, IEEE Proc. Computer Vision and Pattern Recognition (CVPR), June 2016.

Interaction between visual processing and movement execution in humans

Computational Psychology lab (CPL): Interaction between visual processing and movement execution in humans

Contact: Dr Dietmar Heinke

Summary

This research focuses on how humans generate reach movements towards behaviourally relevant objects in complex environments, e.g., reaching for your cup of coffee on the breakfast table while ignoring the beer glass from last night. In cognitive neuroscience this constitutes a relatively novel research question, as movement control and visual processing are typically examined separately; the novel approach here is to examine both mechanisms in the same experimental set-up (e.g., a choice reaching task). This empirical work is accompanied by the development of control architectures for robot arms that aim to mimic human reaching (robotics models). A recent finding of this work is that in humans the motor stage and the visual processing stage can operate in parallel: rather than first establishing the location of the reach target and then executing the reach movement (the serial operation mode), humans can begin reach movements before the locations of the reach targets are fully established.
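A toy model of the parallel operation mode: the hand starts moving toward a belief-weighted average of the candidate targets while evidence is still accumulating, producing curved, early-onset reaches. All numbers are illustrative; this is not the lab's published model.

```python
import numpy as np

def parallel_reach(targets, evidence_rate=0.15, steps=40):
    """Hand position updates in parallel with target-evidence accumulation,
    so early motion heads toward the average of the candidates and bends
    toward the winner as its evidence grows."""
    belief = np.ones(len(targets)) / len(targets)
    hand = np.zeros(2)
    for _ in range(steps):
        belief[0] += evidence_rate * belief[0] * (1 - belief[0])  # target 0 wins
        belief /= belief.sum()
        goal = belief @ targets          # belief-weighted goal position
        hand += 0.1 * (goal - hand)      # simple proportional reach update
    return hand

targets = np.array([[1.0, 1.0], [-1.0, 1.0]])  # e.g. coffee cup vs beer glass
print(parallel_reach(targets))  # ends near the cup, after a curved path
```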

Selected publications

Makwana, M., Zhang, F., Heinke, D., & Song, J. (2022). Continuous action with a neurobiologically inspired computational approach reveals the dynamics of selection history. PsyArXiv.


Strauss, S., Woodgate, P.J.W., Sami, S. A., & Heinke, D. (2015). Choice reaching with a LEGO arm robot (CoRLEGO): The motor system guides visual attention to movement-relevant information. Neural Networks, 72, 3-12.


Woodgate, P.J.W., Strauss, S., Sami, S. A., & Heinke, D. (2015) Motor cortex guides selection of predictable movement targets. Behavioural Brain Research, 287, 238-246.


Strauss, S. & Heinke, D. (2012) A robotics-based approach to modelling of choice reaching experiments on visual attention. Frontiers in Psychology, 3, 105.

Computational Psychology lab (CPL): Affordances & tool use

Contact: Dr Dietmar Heinke

Summary

In this theme, CPL examines how humans extract action possibilities (e.g., grasping, turning, and hammering) from visual objects. It particularly focuses on how humans achieve this extraction without using semantic information (i.e., without recognising objects), relying instead on geometrical features, e.g., handles. In recent years, CPL has extended this theme to how we are able to use tools. Of particular interest is how humans decide that a pocketknife can be used to turn a screw, i.e., unusual tool use. Here CPL focuses not only on affordances but also on the involvement of problem solving.

Selected publications

Xu, S., Liu, X., Almeida, J. & Heinke, D. (2021). The contributions of the ventral and the dorsal visual streams to the automatic processing of action relations of familiar and unfamiliar object pairs. NeuroImage, 245, 118629.


Osiurak, F., & Heinke, D. (2018). Looking for Intoolligence: A unified framework for the cognitive study of human tool use and technology. American Psychologist, 73(2), 169-185.


Xu, S., & Heinke, D. (2017). Implied between-object actions affect response selection without knowledge about object functionality. Visual Cognition, 1-3, 152-168.


Xu, S., Humphreys, G. W., Mevorach, C., & Heinke, D. (2017) The involvement of the dorsal stream in processing implied actions between object pairs: a TMS study. Neuropsychologia, 95, 240-249.


Xu, S., Humphreys, G. W., & Heinke, D. (2015) Implied actions between paired objects lead to affordance selection by inhibition. Journal of Experimental Psychology: Human Perception and Performance, 41(4), 1021-1036.


Yoon, E. Y., Heinke, D., & Humphreys, G. W. (2002). Modelling direct perceptual constraints on action selection: The Naming and Action model (NAM). Visual Cognition, 9(4/5), 615-661.

Soft robotics

Smart Soft Glove

Contacts: Professor Samia Nefti-Meziani and Dr Steve Davis

Summary

The ultimate objective of our smart soft glove is to provide tactile assistance to astronauts by augmenting forces and assisting their finger movements in extreme conditions. Numerous challenges must be addressed to achieve this, such as maintaining the sense of touch, precision, safety, and tele-operability. Astronauts almost inevitably lose their sense of touch while wearing bulky suits and gloves, which also compromises precision. Furthermore, ensuring astronauts' safety through ground-based teleoperation is of utmost importance, especially in professions where candidates are high-value assets. Addressing these challenges is therefore crucial to accomplishing complex space missions that require precise, human-level object manipulation.

To address the above problem statement, precise haptic feedback (sense of touch) is first provided using a combination of soft conductive fabric and an array of vibrating points (micro disk motors). This is achieved by reading resistance variations across the conductive fabric sewn across the palm of the glove, shown in Fig. 2(a). The detected pressure is then converted into vibrations of the corresponding disk motors, also fixed to the palm of the glove, shown in Fig. 2(b). Hence, when the operator manipulates an object, they feel subtle vibrations at the points where the object makes contact with the hand. This haptic experience is made more realistic by tuning the level of vibration to match the intensity of the pressure applied to the object.
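A minimal sketch of the resistance-to-vibration mapping, assuming illustrative calibration values for the fabric's unloaded and fully loaded resistance; the real glove's calibration and motor driver will differ.

```python
import numpy as np

def pressure_to_vibration(resistances, r_rest=1000.0, r_min=200.0):
    """Map conductive-fabric resistance readings to disk-motor PWM duties.

    resistances: per-taxel resistance (ohms); pressing the fabric lowers
    its resistance, so lower readings mean higher contact pressure.
    r_rest, r_min: unloaded and fully loaded resistances; calibration
    values assumed here for illustration.

    Returns duty cycles in [0, 1], one per vibration motor, so vibration
    intensity tracks contact pressure at each point on the palm.
    """
    pressure = (r_rest - np.asarray(resistances)) / (r_rest - r_min)
    return np.clip(pressure, 0.0, 1.0)  # send to the PWM motor driver

# e.g. pressure_to_vibration([950.0, 600.0, 250.0]) -> weak, medium, strong
```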

Safe and precise soft pneumatic actuators are used to augment the radial force applied by the astronaut's fingers. Pneumatic muscle actuators (PMAs) are fabricated by covering a rubber tube of suitable size with a braided sleeve and blocking the two ends with stoppers, each having a single hole for air input. These actuators are classified as contractor PMAs, as their braid angle is less than 54.7°. A parallel neural network proportional (PNNP) controller controls the movements of the PMAs. The network consists of 9 neurons in one hidden layer, with 3 delayed plant inputs and 2 delayed plant outputs, and is trained with trainlm (Levenberg-Marquardt) for 100 epochs. Fig. 3(a) shows the PMAs sewn onto the glove; a video demonstrating the actuators' operation is attached.
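The network shape quoted above (3 delayed inputs, 2 delayed outputs, 9 hidden neurons) corresponds to the NARX-style forward pass sketched below; the weights are random placeholders standing in for values fitted with Levenberg-Marquardt (trainlm), and the parallel proportional branch is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Structure from the text: 3 delayed plant inputs + 2 delayed plant outputs
# feed one hidden layer of 9 neurons. These weights are placeholders, not
# trained values.
W1 = rng.normal(size=(9, 5))
b1 = np.zeros(9)
W2 = rng.normal(size=(1, 9))
b2 = np.zeros(1)

def pnnp_step(u_hist, y_hist):
    """One forward pass of the neural branch of the PNNP controller.

    u_hist: last 3 control inputs to the PMA (e.g. valve commands)
    y_hist: last 2 measured PMA contractions
    """
    x = np.concatenate([u_hist, y_hist])   # 3 + 2 = 5 regressors
    h = np.tanh(W1 @ x + b1)               # 9 hidden neurons
    return float(W2 @ h + b2)              # correction added to the
                                           # parallel proportional term

u = pnnp_step(np.array([0.20, 0.25, 0.30]), np.array([0.010, 0.012]))
```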

Teleoperation, or object manipulation over long distances (Earth to space), can be achieved through a number of methods. One of the most efficient and low-cost is 3D visual mapping. Our setup uses a Leap Motion sensor to detect the operator's hand movements in real time and transmits the recorded kinematics to another station over a secure protocol, simulating the motions with minimal latency, as shown in Fig. 3(b). In the final part of this project, the operator's hand movements will be translated into the finger movements of a robotic gripper installed at the receiving station, performing the operator's tasks remotely.

Fig. 1: (a) Illustrating the pneumatic muscle actuators fixed on our smart soft glove, (b) Simulation of the tele-operating hands in Unity.

Fig. 2: (a) Array of pressure points on the conductive fabric, (b) Disk motors providing haptic feedback.

Fig. 3: (a) Pneumatic muscle actuators (PMAs) fixed on the smart soft glove (video attached), (b) Integration of the soft glove in the teleoperation setup.

Fabrication tool for shape-changing interfaces

Contact: Dr Hyunyoung Kim

Summary

Toolkits for shape-changing interfaces (SCIs) enable designers and researchers to explore the broad design space of SCIs easily. However, despite their utility, existing approaches are often limited in the number of shape-change features they can express. This project introduces MorpheesPlug, a toolkit for creating SCIs that covers seven of the eleven shape-change features identified in the literature. MorpheesPlug comprises (1) a set of six standardised widgets that express the shape-change features with user-definable parameters; (2) software for 3D-modelling the widgets to create 3D-printable pneumatic SCIs; and (3) a hardware platform to control the widgets. To evaluate MorpheesPlug, we carried out ten open-ended interviews with novice and expert designers who were asked to design an SCI using our software. Participants highlighted the ease of use and expressivity of MorpheesPlug.

Figure: graphic of a hanging mobile shape.

Virtual reality

Perception with immersive technology

Contact: Dr Massimiliano (Max) Di Luca

Summary

Massimiliano Di Luca (Max) performs fundamental and applied research to understand how humans perceive and interact with their environment. He uses immersive technologies and haptic devices to produce simulated environments where he conducts psychophysical and neuroimaging experiments to capture how the brain employs multiple sources of sensory information. He employs signal processing and machine learning to discover patterns in the interaction and user’s movements related to perception.

The leitmotiv of his research is to create computational models that constitute quantitative, testable theories about the underlying cognitive and neural processes. Such models can be used for simulation (e.g., implemented in robots), rendering (e.g., in haptic devices), and prediction of the user's movements, responses, and states (e.g., to optimise the generation of sensory cues in VR systems using perceptual metrics).

Figures: a robot-arm graphic; a VR experiment; a VR experimentation lab; a VR depiction of a robot arm.

Selected publications

V. Ortenzi, M. Filipovica, D. Abdlkarim, T. Pardi, A. M. Wing, M. Di Luca, K. J. Kuchenbecker (2022) Robot, Pass Me the Tool: Handle Visibility Facilitates Task-oriented Handovers, 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Sapporo, Japan, 2022, pp. 256-264.


M. Sarac, T. M. Huh, H. Choi, M. R. Cutkosky, M. Di Luca, A. M. Okamura (2022) Perceived Intensities of Normal and Shear Skin Stimuli Using a Wearable Haptic Bracelet, IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 6099-6106.


D. Abdlkarim, V. Ortenzi, T. Pardi, M. Filipovica, A. M. Wing, K. J. Kuchenbecker, M. Di Luca (2021) PrendoSim: Proxy-Hand-Based Robot Grasp Generator, ICINCO 2021.


R. Canales, A. Normoyle, Y. Sun, Y. Ye, M. Di Luca, S. Jörg (2019) Virtual Grasping Feedback and Virtual Hand Ownership. In ACM Symposium on Applied Perception 2019 (SAP '19). Association for Computing Machinery, New York, NY, USA, Article 4, 1–9.


M. Di Luca, B. Knoerlein, M. O. Ernst, M. Harders (2010) Effects of visual–haptic asynchronies and loading–unloading movements on compliance perception, Brain Research Bulletin.

Human Sensorimotor Robotics

Contact: Dr Sang-Hoon Yeo

Summary

Dr Yeo uses techniques from robotics and virtual reality to understand how the brain works with the eyes, muscles and bones to coordinate our sensation and movement, and he is keen to apply the outcomes of this research to diagnosing and rehabilitating motor diseases and injuries. His current research interests lie in hand-eye coordination, models of sensorimotor learning and adaptation, and the development of a new muscle mechanics model.

Selected publications

Deane, O., Toth, E., & Yeo, S. H. (2022). Deep-SAGA: a deep-learning-based system for automatic gaze annotation from eye-tracking data. Behavior Research Methods, 1-20.


Wang, Y., Verheul, J., Yeo, S. H., Kalantari, N. K., & Sueda, S. (2022). Differentiable Simulation of Inertial Musculotendons. ACM Transactions on Graphics (TOG), 41(6), 1-11.


Kim, S., Kwon, J., Kim, J. M., Park, F. C., & Yeo, S. H. (2021). On the encoding capacity of human motor adaptation. Journal of Neurophysiology, 126(1), 123-139.


Yeo, S. H., Franklin, D. W., & Wolpert, D. M. (2016). When optimal feedback control is not enough: Feedforward strategies are required for optimal control with active sensing. PLoS Computational Biology, 12(12), e1005190.