Integrating object and grasp recognition for dynamic scene interpretation
2005 (English). In: 2005 12th International Conference on Advanced Robotics, New York, NY: IEEE, 2005, pp. 331-336. Conference paper (refereed)
Understanding and interpreting dynamic scenes and activities is a very challenging problem. In this paper we present a system capable of learning robot tasks from demonstration. Classical robot task programming requires an experienced programmer and a great deal of tedious work. In contrast, Programming by Demonstration is a flexible framework that reduces the complexity of programming robot tasks and allows end-users to demonstrate tasks instead of writing code. We present our recent steps toward this goal: a system for learning pick-and-place tasks from manual demonstrations. Each demonstrated task is described by an abstract model capturing a set of simple attributes: which object is moved, where it is moved, and which grasp type is used to move it.
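The abstract task model described above can be sketched as a small data structure. This is a hypothetical illustration under assumed names (`PickAndPlaceTask`, `describe`), not the authors' actual representation:

```python
from dataclasses import dataclass

@dataclass
class PickAndPlaceTask:
    """Abstract model of one demonstrated pick-and-place task
    (hypothetical sketch; field names are assumptions)."""
    object_id: str        # which object is moved
    target_location: str  # where it is moved
    grasp_type: str       # which grasp type was used to move it

def describe(task: PickAndPlaceTask) -> str:
    """Render a demonstrated task as a human-readable summary."""
    return (f"move {task.object_id} to {task.target_location} "
            f"using a {task.grasp_type} grasp")

# Example: a demonstration recognized as moving a cup to a shelf
# with a power grasp.
demo = PickAndPlaceTask("cup", "shelf", "power")
print(describe(demo))  # -> move cup to shelf using a power grasp
```

A symbolic model like this is what makes the approach flexible: the robot reproduces the task from the abstract description rather than replaying raw trajectories.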
Identifiers: URN: urn:nbn:se:kth:diva-43126; DOI: 10.1109/ICAR.2005.1507432; ISI: 000234272400050; Scopus ID: 2-s2.0-33749072001; ISBN: 0-7803-9177-2; OAI: oai:DiVA.org:kth-43126; DiVA: diva2:448528
12th International Conference on Advanced Robotics, Seattle, WA, Jul 17-20, 2005
QC 20111017. Available from: 2011-10-17, created: 2011-10-13, last updated: 2011-10-17. Bibliographically approved