Robot Task Learning from Human Demonstration
Ekvall, Staffan
KTH, School of Computer Science and Communication (CSC).
2007 (English). Doctoral thesis, monograph (Other scientific).
Abstract [en]

Today, most robots used in industry are preprogrammed and require a well-defined and controlled environment. Reprogramming such robots is often a costly process requiring an expert. By enabling robots to learn tasks from human demonstration, robot installation and task reprogramming are simplified. In the longer term, the vision is that robots will move out of the factories and into our homes and offices. Robots should be able to learn how to set a table or how to fill the dishwasher. Clearly, robot learning mechanisms are required if robots are to adapt and operate in a dynamic environment, in contrast to the well-defined factory assembly line.

This thesis presents contributions in the field of robot task learning. A distinction is made between direct and indirect learning. With direct learning, the robot learns tasks while being directly controlled by a human, for example in a teleoperation setting. With indirect learning, the robot instead learns tasks by observing a human performing them. A challenging and realistic assumption, decisive for the indirect learning approach, is that the task-relevant objects are not necessarily at the same locations at execution time as when the learning took place. Thus, it is not sufficient to learn movement trajectories and absolute coordinates; different methods are required for a robot that is to learn tasks in a dynamic home or office environment. The thesis contributes to several of these enabling technologies. Object detection and recognition are used together with pose estimation in a Programming by Demonstration scenario. The vision system is integrated with a localization module, which enables the robot to learn mobile tasks. The robot is able to recognize human grasp types, map human grasps to its own hand, and evaluate suitable grasps before grasping an object. The robot can learn a task from a single demonstration, but it can also adapt and refine its knowledge as more demonstrations are given. Here, the ability to generalize over multiple demonstrations is important, and we investigate a method for automatically identifying the underlying constraints of a task.
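
To illustrate the generalization idea described above, the following is a minimal sketch in Python, assuming a simple variance-based heuristic: features that vary little across demonstrations are treated as task constraints, while freely varying features are left unconstrained. The function name, feature names, and threshold are illustrative assumptions, not the specific method investigated in the thesis.

# Minimal sketch (assumption, not the thesis's actual method): identify which
# features of a demonstrated task are constrained by comparing their spread
# across several demonstrations.
import numpy as np

def find_constraints(demos, feature_names, rel_threshold=0.05):
    """demos: array of shape (n_demos, n_features), one row per demonstration.
    A feature whose spread across demonstrations is small relative to the
    largest observed spread is assumed to be a task constraint."""
    demos = np.asarray(demos, dtype=float)
    mean = demos.mean(axis=0)
    std = demos.std(axis=0)
    span = demos.max(axis=0) - demos.min(axis=0) + 1e-9   # avoid divide-by-zero
    constrained = std / span.max() < rel_threshold
    # Constrained features keep their mean value; free features map to None.
    return {name: (m if c else None)
            for name, m, c in zip(feature_names, mean, constrained)}

# Example: the cup-above-saucer offset (dx, dy, dz) is nearly identical in
# every demonstration (a constraint), while the absolute table position varies.
demos = [
    [0.02, 0.00, 0.10, 1.20, 0.40],   # [dx, dy, dz, table_x, table_y]
    [0.02, 0.01, 0.10, 0.60, 0.90],
    [0.03, 0.00, 0.11, 1.80, 0.20],
]
print(find_constraints(demos, ["dx", "dy", "dz", "table_x", "table_y"]))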

The majority of the methods have been implemented on a real mobile robot equipped with a camera, a manipulator arm, and a parallel-jaw gripper. The experiments were conducted in an everyday environment with real, textured objects of various shapes, sizes, and colors.

Place, publisher, year, edition, pages
Stockholm: KTH, 2007. vii, 136 p.
Series
Trita-CSC-A, ISSN 1653-5723; 2007:01
Keyword [en]
Robotics, Machine Learning, Artificial Intelligence, Computer Vision, Programming by Demonstration, Grasp Mapping, Grasp Recognition, Robot Grasping, Planning, Autonomous Robots
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-4279
ISBN: 978-91-7178-570-1 (print)
OAI: oai:DiVA.org:kth-4279
DiVA: diva2:11592
Public defence
2007-02-23, E2, E-huset, Lindstedtsvägen 3, Stockholm, 10:00
Note
QC 20100706. Available from: 2007-02-15. Created: 2007-02-15. Last updated: 2010-07-06. Bibliographically approved.

Open Access in DiVA

fulltext (6608 kB), 997 downloads
File information
File name: FULLTEXT01.pdf
File size: 6608 kB
Type: fulltext
Mimetype: application/pdf
