kth.se Publications
Tracking Unobservable Rotations by Cue Integration
Kyrki, Ville (Lappeenranta University of Technology)
Kragic, Danica (KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS). ORCID iD: 0000-0003-2965-2953
2006 (English). In: 2006 IEEE International Conference on Robotics and Automation (ICRA), 2006, p. 2744-2750. Conference paper, Published paper (Refereed)
Abstract [en]

Model-based object tracking has gained significant importance in areas such as augmented reality, surveillance, visual servoing, and robotic object manipulation and grasping. Although it is an active research area, there are still few systems that perform robustly in realistic settings. The key obstacles to robust and precise object tracking are outliers caused by occlusion, self-occlusion, cluttered background, and reflections. The two most common solutions to these problems have been the use of robust estimators and the integration of visual cues. The tracking system considered in this paper achieves robustness by integrating model-based and model-free cues. As model-based cues, we consider a CAD model of the object known a priori; as model-free cues, automatically generated corner features are used. The main idea is to account for relative object motion between consecutive frames by integrating the two cues. The particular contribution of this work is an integration framework that is not limited to polyhedral objects. In particular, we deal with spherical, cylindrical, and conical objects, for which the complete pose cannot be estimated using only CAD-like models. Using the integration with the model-free features, we show how a full pose estimate can be obtained. Experimental evaluation demonstrates robust system performance in realistic settings with highly textured objects.
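The abstract's central idea is that for a rotationally symmetric object (sphere, cylinder, cone), the model-based cue fixes the pose only up to a rotation about the symmetry axis; that remaining unobservable degree of freedom can be recovered from tracked model-free features. The sketch below illustrates this principle only; it is not the paper's implementation, and the helper `axis_rotation_from_features` and its per-point angle averaging are assumptions made for the illustration.

```python
import numpy as np

def axis_rotation_from_features(axis, center, pts_prev, pts_curr):
    """Estimate the rotation angle about a known symmetry axis from
    tracked 3-D feature points on the object (hypothetical helper,
    not the paper's exact estimator).

    axis:   unit 3-vector along the symmetry axis
    center: a point on the axis
    pts_prev, pts_curr: (N, 3) arrays of corresponding feature points
                        in consecutive frames
    """
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)

    def project(pts):
        # Project points into the plane perpendicular to the axis,
        # so only the rotation about the axis remains visible.
        d = np.asarray(pts, dtype=float) - center
        return d - np.outer(d @ axis, axis)

    a, b = project(pts_prev), project(pts_curr)
    # Signed angle per point pair: atan2 of the axial component of the
    # cross product against the dot product, then averaged over points.
    sin_terms = np.cross(a, b) @ axis
    cos_terms = np.sum(a * b, axis=1)
    return float(np.mean(np.arctan2(sin_terms, cos_terms)))
```

Combining this angle with the partial pose from the model-based cue would then yield the full pose estimate; a real system would additionally use robust estimation to reject outlier feature tracks.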

Place, publisher, year, edition, pages
2006. p. 2744-2750
Series
Proceedings - IEEE International Conference on Robotics and Automation, ISSN 1050-4729 ; 2006
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-63797
DOI: 10.1109/ROBOT.2006.1642116
ISI: 000240886906005
Scopus ID: 2-s2.0-33845675421
ISBN: 0-7803-9505-0 (print)
OAI: oai:DiVA.org:kth-63797
DiVA, id: diva2:482640
Conference
2006 IEEE International Conference on Robotics and Automation, ICRA 2006; Orlando, FL; 15 May 2006 through 19 May 2006
Note
QC 20120130. Available from: 2012-01-24. Created: 2012-01-24. Last updated: 2022-06-24. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Kragic, Danica

Search in DiVA

By author/editor
Kyrki, Ville; Kragic, Danica
By organisation
Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS
