Robot Task Learning from Human Demonstration
2007 (English). Doctoral thesis, monograph (Other scientific)
Today, most robots used in industry are preprogrammed and require a well-defined and controlled environment. Reprogramming such robots is often a costly process requiring an expert. By enabling robots to learn tasks from human demonstration, robot installation and task reprogramming are simplified. In a longer time perspective, the vision is that robots will move out of factories into our homes and offices. Robots should be able to learn how to set a table or how to fill the dishwasher. Clearly, robot learning mechanisms are required to enable robots to adapt and operate in a dynamic environment, in contrast to the well-defined factory assembly line.
This thesis presents contributions in the field of robot task learning. A distinction is made between direct and indirect learning. Using direct learning, the robot learns tasks while being directly controlled by a human, for example in a teleoperative setting. Indirect learning, in contrast, allows the robot to learn tasks by observing a human performing them. A challenging and realistic assumption, decisive for the indirect learning approach, is that the task-relevant objects are not necessarily at the same locations at execution time as when the learning took place. Thus, it is not sufficient to learn movement trajectories and absolute coordinates; different methods are required for a robot that is to learn tasks in a dynamic home or office environment. This thesis presents contributions to several of these enabling technologies. Object detection and recognition are used together with pose estimation in a Programming by Demonstration scenario. The vision system is integrated with a localization module, which enables the robot to learn mobile tasks. The robot is able to recognize human grasp types, map human grasps to its own hand, and evaluate suitable grasps before grasping an object. The robot can learn tasks from a single demonstration, but it also has the ability to adapt and refine its knowledge as more demonstrations are given. Here, the ability to generalize over multiple demonstrations is important, and we investigate a method for automatically identifying the underlying constraints of the tasks.
The majority of the methods have been implemented on a real, mobile robot featuring a camera, an arm for manipulation, and a parallel-jaw gripper. The experiments were conducted in an everyday environment with real, textured objects of various shapes, sizes, and colors.
Place, publisher, year, edition, pages
Stockholm: KTH, 2007. vii, 136 p.
Trita-CSC-A, ISSN 1653-5723 ; 2007:01
Robotics, Machine Learning, Artificial Intelligence, Computer Vision, Programming by Demonstration, Grasp Mapping, Grasp Recognition, Robot Grasping, Planning, Autonomous Robots
Identifiers
URN: urn:nbn:se:kth:diva-4279
ISBN: 978-91-7178-570-1
OAI: oai:DiVA.org:kth-4279
DiVA: diva2:11592
2007-02-23, E2, E-huset, Lindstedtsvägen 3, Stockholm, 10:00
Vincze, Marcus, Prof.
QC 20100706. Available from: 2007-02-15. Created: 2007-02-15. Last updated: 2010-07-06. Bibliographically approved.