Generic Learning from Demonstration Approaches for Programming Collaborative Robots

  • Shirine El Zaatari

Student thesis: Doctoral Thesis, Doctor of Philosophy


The Industry 4.0 revolution is transforming the manufacturing scene to cater for new mass-customisation demands and competitive production requirements. One of the ways Industry 4.0 aims to improve manufacturing is by adopting human-robot collaboration (HRC), owing to its benefits in improving ergonomics, speeding up production and merging human dexterity with robotic precision. Collaborative robots (cobots) are a stepping stone towards enabling HRC due to their intuitive programming interfaces, intrinsic safety features and agility. Cobots are marketed as “easily deployable” and “intuitively programmed”. However, this is only true for a limited range of HRC applications in which the tasks of the human and the cobot are independent and the cobot follows a fixed path.

In order to expand the applicability of cobots to more complex HRC tasks, in which the cobot has to be human-aware and exhibit flexible behaviour, we explored a new programming technique, namely task-parametrized learning from demonstration (TP-LfD). In TP-LfD, the cobot is shown a few demonstrations of a task conducted under changing circumstances, and a regressed model of the task is learnt to support new, previously unseen circumstances. Using TP-LfD requires the human teacher to specify the task-relevant objects and the methods of detecting and localising those objects. This is a time-consuming process that is prone to human error. To address this challenge, in this thesis we present a generic solution that automatically detects and optimises the choice of task parameters for TP-LfD. We evaluated our solution in multiple simulated industrial tasks in which the positions of objects vary in the scene.
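To illustrate the idea behind task parameters, the following is a minimal sketch (not the thesis's implementation; all frames, means and covariances are illustrative) of how a task-parametrized model reproduces a point: each task parameter is a frame of reference, each frame contributes a Gaussian prediction expressed in its own coordinates, and the predictions are fused in the world frame by a product of Gaussians, so confident frames (small covariance) dominate.

```python
import numpy as np

def to_world(mu_local, sigma_local, A, b):
    """Map a Gaussian from a task-parameter frame (rotation A, origin b)
    into world coordinates."""
    return A @ mu_local + b, A @ sigma_local @ A.T

def gaussian_product(mus, sigmas):
    """Fuse per-frame Gaussians into one world-frame Gaussian.
    Each frame is weighted by its precision (inverse covariance)."""
    precisions = [np.linalg.inv(s) for s in sigmas]
    sigma_c = np.linalg.inv(sum(precisions))
    mu_c = sigma_c @ sum(p @ m for p, m in zip(precisions, mus))
    return mu_c, sigma_c

# Two hypothetical task parameters, e.g. a pick frame and a place frame.
A1, b1 = np.eye(2), np.array([0.0, 0.0])
A2, b2 = np.eye(2), np.array([1.0, 1.0])

# Per-frame predictions for one point on the path; the first frame is
# far more confident (smaller covariance) than the second.
mu1, s1 = np.array([0.1, 0.0]), 0.01 * np.eye(2)
mu2, s2 = np.array([-0.1, 0.0]), 0.5 * np.eye(2)

mw1, sw1 = to_world(mu1, s1, A1, b1)
mw2, sw2 = to_world(mu2, s2, A2, b2)
mu, sigma = gaussian_product([mw1, mw2], [sw1, sw2])
# The fused point lies close to the confident frame's prediction.
```

Moving a frame (changing `A` or `b`) shifts that frame's Gaussian in the world, which is how the learnt model adapts to new object positions without reprogramming.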

In this thesis, Gaussian mixture models (GMMs) are used to model tasks using TP-LfD. A GMM is generated to model the path with respect to each task parameter. Each task parameter is a frame of reference characterised by a position and orientation. GMMs intrinsically vary when the orientation of a frame changes. However, in some cases, such as pick-and-place, the orientation of a frame of reference is irrelevant to the task. Therefore, in this thesis, we designed a new model, called the ring Gaussian, which models paths with respect to frames whose orientation is task-irrelevant. Our model generated more efficient and successful paths compared to the traditional GMM model. These two contributions were integrated into one end-to-end algorithm to program cobots for a wide range of HRC industrial tasks, including pick-and-place and handover. The performance of the robot was more efficient and accurate than when either of the contributions was not used. In addition, case studies in this thesis show how the integrated algorithm can help overcome common problems such as partial occlusion and obstacle avoidance. The outcomes of this thesis were presented in four peer-reviewed journal papers and an open-source codebase.
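A density with the orientation-invariance property the ring Gaussian is described as having can be sketched as a one-dimensional Gaussian over the distance from the frame origin; this is an illustrative construction, not the thesis's exact formulation. Rotating the frame about its origin changes a point's direction but not its distance, so the density is unchanged:

```python
import numpy as np

def ring_gaussian_logpdf(x, center, radius, sigma):
    """Log-density of an illustrative ring Gaussian: a 1-D Gaussian over
    the distance from the frame origin. Because it depends only on that
    distance, it is invariant to the frame's orientation."""
    d = np.linalg.norm(np.asarray(x) - np.asarray(center))
    return -0.5 * ((d - radius) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

center = np.array([0.0, 0.0])
# Two points at the same distance from the origin but in different
# directions score identically, unlike a standard multivariate Gaussian.
p_east = np.array([1.0, 0.0])
p_north = np.array([0.0, 1.0])
lp_east = ring_gaussian_logpdf(p_east, center, radius=1.0, sigma=0.1)
lp_north = ring_gaussian_logpdf(p_north, center, radius=1.0, sigma=0.1)
```

For a task like pick-and-place, where only the distance of an approach point from the object matters, such a model avoids spurious path variation when the object's detected orientation changes.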

    To conclude, this thesis contributes towards the intuitive programming of cobots for flexible manufacturing tasks. Simulated industrial tasks and case studies in this thesis demonstrated the successful use of the developed algorithm to program cobots for tasks with changing object position and human involvement, without the need for programming expertise.
Date of Award: 2022
Original language: English
Awarding Institution
• Coventry University
Supervisors: Zahid Usman, Nazaraf Shah & Weidong Li


    • Human-robot collaboration
    • Industrial robots
    • Learning from demonstration
• Task-parametrized Gaussian mixture models
