Learning Task Models from Multiple Human Demonstrations
Ekvall, Stefan; Kragic, Danica
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0003-2965-2953
2006 (English). In: The 15th IEEE International Symposium on Robot and Human Interactive Communication (ROMAN 2006), 6-8 Sept. 2006, pp. 358-363. Conference paper (refereed).
Abstract [en]

In this paper, we present a novel method for learning robot tasks from multiple demonstrations. Each demonstrated task is decomposed into subtasks that allow for segmentation and classification of the input data. The demonstrated tasks are then merged into a flexible task model describing the task goal and its constraints. The two main contributions of the paper are the state generation and constraint identification methods. We also present a task-level planner that is used to assemble a task plan at run-time, allowing the robot to choose the best strategy depending on the current world state.
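The record contains no implementation details beyond the abstract, so the following is only an illustrative sketch, not the authors' method. It assumes demonstrations arrive as ordered lists of already-segmented subtask labels, derives the precedence constraints that hold in every demonstration (a simplistic stand-in for the paper's constraint identification step), and greedily assembles a plan at run time. All names and the pairwise-constraint representation are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class TaskModel:
    # Hypothetical representation: the paper's "flexible task model" is
    # reduced here to a set of subtask labels plus pairwise precedence
    # constraints (a, b) meaning "a must occur before b".
    subtasks: set = field(default_factory=set)
    constraints: set = field(default_factory=set)

def identify_constraints(demonstrations):
    """Merge demonstrations, keeping only the precedence relations that
    hold in *every* demonstration (an assumed stand-in for the paper's
    constraint identification method)."""
    model, candidate = TaskModel(), None
    for demo in demonstrations:
        model.subtasks.update(demo)
        pairs = {(a, b) for i, a in enumerate(demo) for b in demo[i + 1:]}
        candidate = pairs if candidate is None else candidate & pairs
    model.constraints = candidate or set()
    return model

def plan(model, done=()):
    """Greedy run-time planner: repeatedly execute any subtask whose
    predecessors are all satisfied in the current world state."""
    done, order = set(done), []
    remaining = model.subtasks - done
    while remaining:
        ready = [t for t in remaining
                 if all(a in done for (a, b) in model.constraints if b == t)]
        if not ready:
            raise RuntimeError("constraints are cyclic or unsatisfiable")
        choice = sorted(ready)[0]  # any selection policy could slot in here
        order.append(choice)
        done.add(choice)
        remaining.remove(choice)
    return order

if __name__ == "__main__":
    demos = [
        ["reach", "grasp", "lift", "place"],
        ["reach", "grasp", "place", "lift"],  # lift/place order varies
    ]
    model = identify_constraints(demos)
    print(plan(model))  # e.g. ['reach', 'grasp', 'lift', 'place']
```

Running the example prints one ordering consistent with the surviving constraints; where the demonstrations disagree (lift vs. place), no constraint survives the merge and the planner is free to choose either order at run time, matching the abstract's notion of selecting a strategy based on the current world state.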

Place, publisher, year, edition, pages
2006, pp. 358-363.
National Category
Computer and Information Science
URN: urn:nbn:se:kth:diva-82411
DOI: 10.1109/ROMAN.2006.314460
ScopusID: 2-s2.0-34948873779
OAI: diva2:498213
Conference: IEEE International Symposium on Robot and Human Interactive Communication, 6-8 September 2006, University of Hertfordshire, Hatfield, United Kingdom
QC 20120305. Available from: 2012-02-11. Created: 2012-02-11. Last updated: 2012-03-05. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

