A Human-Assisted Approach for a Mobile Robot to Learn 3D Object Models using Active Vision
Technical University of Eindhoven, The Netherlands.
The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA.
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. (Center for Autonomous Systems)
2010 (English). In: Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (ROMAN 2010), IEEE, 2010, pp. 397-403. Conference paper, published paper (refereed).
Abstract [en]

In this paper we present an algorithm that allows a human to naturally and easily teach a mobile robot how to recognize objects in its environment. The human selects the object by pointing at it with a laser pointer. The robot detects the laser reflections with its cameras and uses this data to generate an initial 2D segmentation of the object. The 3D positions of SURF feature points are extracted from the designated area using stereo vision. As the robot moves around the object, new views of the object are obtained from which feature points are extracted. These features are filtered using active vision. The complete object representation consists of feature points registered with 3D pose data. We describe the method and show that it works well by performing experiments on real-world data collected with our robot. We use an extensive dataset of 21 objects, differing in size, shape and texture.
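The first stage of the pipeline described in the abstract (laser-pointer segmentation of the object, then SURF features lifted to 3D with stereo vision) can be sketched roughly as below. This is an illustrative sketch only, not the authors' implementation: it assumes OpenCV built with the contrib xfeatures2d module for SURF, a rectified and calibrated stereo pair with a known disparity-to-depth reprojection matrix Q, and the laser-dot heuristic (brightest red-dominant pixel) is a hypothetical stand-in for the robot's actual laser-reflection detection.

```python
# Illustrative sketch of the segmentation + 3D-feature step described in the
# abstract -- not the authors' code. Assumes OpenCV with the contrib
# xfeatures2d module (for SURF) and a rectified, calibrated stereo pair
# whose reprojection matrix Q is known.
import cv2
import numpy as np

def segment_from_laser(img_bgr, pad=80):
    """Hypothetical stand-in for the laser-pointer step: take the brightest
    red-dominant pixel as the laser reflection and grow a box around it."""
    redness = cv2.subtract(img_bgr[:, :, 2], img_bgr[:, :, 1])
    _, _, _, (x, y) = cv2.minMaxLoc(redness)
    h, w = redness.shape
    return (max(x - pad, 0), max(y - pad, 0), min(x + pad, w), min(y + pad, h))

def surf_features_3d(img_left, img_right, box, Q):
    """Detect SURF keypoints inside the designated box in the left image and
    lift them to 3D camera coordinates via stereo disparity."""
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)

    # SURF restricted to the segmented region.
    x0, y0, x1, y1 = box
    mask = np.zeros(gray_l.shape, dtype=np.uint8)
    mask[y0:y1, x0:x1] = 255
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    keypoints, descriptors = surf.detectAndCompute(gray_l, mask)
    if descriptors is None:
        return []

    # Dense disparity, then reprojection to 3D (units follow Q's calibration).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0
    cloud = cv2.reprojectImageTo3D(disparity, Q)

    features = []
    for kp, desc in zip(keypoints, descriptors):
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if disparity[v, u] > 0 and np.isfinite(cloud[v, u]).all():
            features.append((cloud[v, u], desc))  # (3D point, descriptor)
    return features
```

The later steps described in the abstract, in which features from new views are filtered with active vision as the robot circles the object and registered with 3D pose data, are not shown in this sketch.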

Place, publisher, year, edition, pages
IEEE, 2010. pp. 397-403.
Keywords [en]
Object Recognition, Human-Robot Interaction, 3D Visual Perception
National Category
Robotics
Identifiers
URN: urn:nbn:se:kth:diva-47183
DOI: 10.1109/ROMAN.2010.5598696
Scopus ID: 2-s2.0-78649888516
ISBN: 978-1-4244-7991-7 (print)
OAI: oai:DiVA.org:kth-47183
DiVA: diva2:454510
Conference
IEEE International Symposium on Robot and Human Interactive Communication (ROMAN 2010)
Note
© 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. QC 20111110. Available from: 2011-11-10. Created: 2011-11-07. Last updated: 2011-11-10. Bibliographically approved.

Open Access in DiVA

zwinderman10roman.pdf (1978 kB), 722 downloads
File information
File name: FULLTEXT01.pdf
File size: 1978 kB
Checksum (SHA-512): c3e610ea33fad0eb9742b93cc07bcdc592a410ba29e8a3cbfe7ad5a41fa0f8143969bd701dad09868b22e465685eb95701c52ede868f47d46700e897c7f13e8e
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus
Published version

Search in DiVA

By author/editor
Kootstra, Gert
By organisation
Computer Vision and Active Perception, CVAP
Robotics
