A Human-Assisted Approach for a Mobile Robot to Learn 3D Object Models using Active Vision
2010 (English). In: Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (ROMAN 2010), IEEE, 2010, pp. 397-403. Conference paper (refereed).
In this paper we present an algorithm that allows a human to naturally and easily teach a mobile robot how to recognize objects in its environment. The human selects the object by pointing at it with a laser pointer. The robot detects the laser reflections with its cameras and uses this data to generate an initial 2D segmentation of the object. The 3D positions of SURF feature points are then extracted from the designated area using stereo vision. As the robot moves around the object, new views are obtained from which additional feature points are extracted. These features are filtered using active vision. The complete object representation consists of feature points registered with 3D pose data. We describe the method and show that it works well in experiments on real-world data collected with our robot, using an extensive dataset of 21 objects differing in size, shape, and texture.
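The first step of the pipeline, detecting the laser reflection in the camera image, can be illustrated with a minimal sketch. This is not the authors' implementation; it simply assumes the laser spot appears as a cluster of pixels much brighter than its surroundings, thresholds on intensity, and returns the centroid of the bright pixels as the seed for segmentation.

```python
# Minimal sketch (not the paper's actual method): locate a laser
# reflection in a grayscale image by intensity thresholding and
# taking the centroid of the bright pixels. The image is a list of
# rows of intensities in 0-255; the threshold value is an assumption.

def laser_spot_centroid(image, threshold=200):
    """Return the (row, col) centroid of pixels brighter than
    `threshold`, or None if no pixel exceeds it."""
    count = row_sum = col_sum = 0
    for r, row in enumerate(image):
        for c, val in enumerate(row):
            if val > threshold:
                count += 1
                row_sum += r
                col_sum += c
    if count == 0:
        return None
    return (row_sum / count, col_sum / count)

# Example: a 5x5 dark image with a bright 2x2 "laser reflection".
img = [[10] * 5 for _ in range(5)]
img[1][2] = img[1][3] = img[2][2] = img[2][3] = 250
print(laser_spot_centroid(img))  # → (1.5, 2.5)
```

In the paper the resulting image location is combined with stereo vision to obtain a 3D seed point; here the sketch stops at the 2D centroid, which is the input the segmentation step would consume.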
Place, publisher, year, edition, pages
IEEE, 2010, pp. 397-403.
Keywords
Object Recognition, Human-Robot Interaction, 3D Visual Perception
Identifiers
URN: urn:nbn:se:kth:diva-47183
DOI: 10.1109/ROMAN.2010.5598696
Scopus ID: 2-s2.0-78649888516
ISBN: 978-1-4244-7991-7
OAI: oai:DiVA.org:kth-47183
DiVA: diva2:454510
Conference
IEEE International Symposium on Robot and Human Interactive Communication (ROMAN 2010)
© 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
QC 20111110. Available from: 2011-11-10. Created: 2011-11-07. Last updated: 2011-11-10. Bibliographically approved.