Compositional Servoing by Recombining Demonstrations

Max Argus*, Abhijeet Nayak*, Martin Büchner, Silvio Galesso, Abhinav Valada, and Thomas Brox



Abstract

Learning-based manipulation policies trained on image inputs often show weak task transfer capabilities. In contrast, visual servoing methods allow efficient task transfer in high-precision scenarios while requiring only a few demonstrations. In this work, we present a framework that formulates the visual servoing task as graph traversal. Our method not only extends the robustness of visual servoing but also enables multi-task capability based on a few task-specific demonstrations. We construct demonstration graphs by splitting existing demonstrations and recombining them. To traverse the demonstration graph at inference time, we utilize a similarity function that helps select the best demonstration for a specific task, which enables us to compute the shortest path through the graph. Ultimately, we show that recombining demonstrations leads to higher task-specific success rates. We present extensive simulation and real-world experimental results that demonstrate the efficacy of our approach.
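
To illustrate the graph-traversal idea, here is a minimal sketch (not the authors' implementation): demonstration segments become graph nodes, a similarity function between segment boundary frames weights the edges, and a shortest-path search selects the sequence of segments to servo along. The `embed` function, the cost definition, and the toy segment data are hypothetical placeholders.

```python
# Hedged sketch of servoing as graph traversal over recombined demonstrations.
import networkx as nx
import numpy as np

def embed(frame: np.ndarray) -> np.ndarray:
    """Hypothetical image embedding; stands in for learned visual features."""
    return frame.mean(axis=(0, 1))  # toy per-channel descriptor

def similarity_cost(a: np.ndarray, b: np.ndarray) -> float:
    """Lower cost = more similar frames, so recombining segments is seamless."""
    return float(np.linalg.norm(embed(a) - embed(b)))

def build_demo_graph(segments):
    """segments: list of (first_frame, last_frame) pairs from split demonstrations."""
    g = nx.DiGraph()
    g.add_nodes_from(range(len(segments)))
    for i, (_, last_i) in enumerate(segments):
        for j, (first_j, _) in enumerate(segments):
            if i != j:
                # Edge i -> j: how well segment j's start matches segment i's end.
                g.add_edge(i, j, weight=similarity_cost(last_i, first_j))
    return g

# Usage: choose start/goal nodes (e.g., by similarity to the live camera frame),
# then follow the cheapest segment sequence through the graph.
segments = [(np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)) for _ in range(5)]
graph = build_demo_graph(segments)
path = nx.dijkstra_path(graph, source=0, target=4, weight="weight")
print("segment sequence to servo along:", path)
```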

Video

Authors

Max Argus

Computer Vision Group
University of Freiburg

Abhijeet Nayak

Robot Learning Lab
University of Freiburg

Martin Büchner

Robot Learning Lab
University of Freiburg

Silvio Galesso

Computer Vision Group
University of Freiburg

Abhinav Valada

Robot Learning Lab
University of Freiburg

Thomas Brox

Computer Vision Group
University of Freiburg