iTP-LfD: Improved Task Parametrised Learning from Demonstration for Adaptive Path Generation of Cobot

Shirine El Zaatari, Yuqi Wang, Weidong Li, Yiqun Peng

    Research output: Contribution to journal › Article › peer-review

    6 Citations (Scopus)
    26 Downloads (Pure)

    Abstract

    Task-Parameterised Learning from Demonstration (TP-LfD) aims at automatically adapting the movements of collaborative robots (cobots) to new settings using knowledge learnt from demonstrated paths. The approach is suitable for encoding complex relations between a cobot and its surroundings, i.e., task-relevant objects. However, further efforts are still required to enhance the intelligence and adaptability of TP-LfD for dynamic tasks. With this aim, this paper presents an improved TP-LfD (iTP-LfD) approach to program cobots adaptively for a variety of industrial tasks. iTP-LfD comprises three main improvements over previously developed TP-LfD approaches: 1) detecting generic visual features for frames of reference (frames) in demonstrations, enabling path reproduction in new settings without complex computer vision algorithms, 2) minimising redundant frames that belong to the same object in demonstrations using a statistical algorithm, and 3) designing a reinforcement learning algorithm to eliminate irrelevant frames. The distinguishing characteristic of the iTP-LfD approach is that optimal frames are identified from demonstrations, simplifying computational complexity, overcoming occlusions in new settings, and boosting the overall performance. Case studies for a variety of industrial tasks involving different objects and scenarios highlight the adaptability and robustness of the iTP-LfD approach.
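    To make the abstract's core idea concrete, the sketch below illustrates the general task-parameterised principle: a demonstrated path is encoded relative to several frames of reference, then reproduced when those frames move in a new setting. This is a minimal, hypothetical 2-D illustration with hand-picked frame names and linear blending weights; it is not the paper's iTP-LfD method, which additionally detects frames visually, prunes redundant frames statistically, and selects relevant frames via reinforcement learning.

    ```python
    import numpy as np

    # A frame is an (A, b) pair: rotation matrix A and origin b.
    def to_local(path, A, b):
        """Express a global path (N x 2) in a frame's local coordinates."""
        return (np.linalg.inv(A) @ (path - b).T).T

    def to_global(local_path, A, b):
        """Map a local path back into global coordinates."""
        return (A @ local_path.T).T + b

    # One demonstration: a straight line from a 'start' frame to a 'goal' frame.
    start_demo = (np.eye(2), np.array([0.0, 0.0]))
    goal_demo = (np.eye(2), np.array([4.0, 0.0]))
    t = np.linspace(0.0, 1.0, 5)[:, None]
    demo = (1 - t) * start_demo[1] + t * goal_demo[1]  # (5, 2) global path

    # Encode the demonstration relative to each frame.
    local_start = to_local(demo, *start_demo)
    local_goal = to_local(demo, *goal_demo)

    # New setting: the frames have moved. Reproduce the path by mapping each
    # frame-local model back to global coordinates and blending them (linear
    # weights here stand in for the Gaussian-product fusion used in
    # TP-GMM-style methods).
    start_new = (np.eye(2), np.array([1.0, 1.0]))
    goal_new = (np.eye(2), np.array([3.0, 5.0]))
    w = t  # weight shifts from the start frame to the goal frame along the path
    reproduced = (1 - w) * to_global(local_start, *start_new) \
                 + w * to_global(local_goal, *goal_new)

    print(reproduced[0])   # path begins at the new start frame origin
    print(reproduced[-1])  # path ends at the new goal frame origin
    ```

    The reproduced path starts at the relocated start frame and ends at the relocated goal frame, which is the adaptive behaviour the abstract refers to; selecting which frames matter for a given task is precisely where the paper's statistical and reinforcement learning improvements apply.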
    Original language: English
    Article number: 102109
    Journal: Robotics and Computer-Integrated Manufacturing
    Volume: 69
    Early online date: 17 Dec 2020
    DOI: 10.1016/j.rcim.2020.102109
    Publication status: Published - Jun 2021

    Bibliographical note

    NOTICE: this is the author’s version of a work that was accepted for publication in Robotics and Computer-Integrated Manufacturing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Robotics and Computer-Integrated Manufacturing, 69, (2021) DOI: 10.1016/j.rcim.2020.102109

    © 2020, Elsevier. Licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International licence: http://creativecommons.org/licenses/by-nc-nd/4.0/ DOI: 10.1016/j.rcim.2020.102109

    Funder

    This work is funded by the PhD studentships from Coventry University, the Unipart Powertrain Application Ltd. (U.K.), the Institute of Digital Engineering, U.K., and a research project sponsored by the National Natural Science Foundation of China (Project No. 51975444).

    Keywords

    • Collaborative robots (Cobots)
    • Intuitive programming
    • Learning from demonstration
    • Reinforcement learning

    ASJC Scopus subject areas

    • Control and Systems Engineering
    • Software
    • Mathematics (all)
    • Computer Science Applications
    • Industrial and Manufacturing Engineering
