Robust 3D tracking of unknown objects
Pieropan, Alessandro: KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0003-2314-2880
Bergström, Niklas: KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS; KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
Kjellström, Hedvig: KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0002-5750-9655
2015 (English). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2015, no. June, pp. 2410-2417. Conference paper, published paper (refereed).
Abstract [en]

Visual tracking of unknown objects is an essential task in robotic perception, of importance to a wide range of applications. In the general scenario, the robot has no full 3D model of the object beforehand, only the partial view of the object visible in the first video frame. A tracker with only this information will inevitably lose track of the object after occlusions or large out-of-plane rotations. The way to overcome this is to incrementally learn the appearances of new views of the object. However, this bootstrapping approach is prone to drift due to occasional inclusion of the background into the model. In this paper we propose a method that exploits 3D point coherence between views to avoid learning the background, by learning appearances only at the faces of an inscribed cuboid. This is closely related to the popular idea of 2D object tracking using bounding boxes, with the additional benefits of recovering the full 3D pose of the object and learning its full appearance from all viewpoints. We show quantitatively that using an inscribed cuboid to guide the learning leads to significantly more robust tracking than other state-of-the-art methods. We show that our tracker copes with 360-degree out-of-plane rotation, large occlusion, and fast motion.
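The core idea in the abstract — restrict appearance learning to pixels covered by the projected inscribed cuboid, so background never enters the model — can be sketched as follows. This is not the authors' implementation: the pinhole intrinsics, the convex-hull approximation of the projected faces, and the running-average appearance model are all illustrative assumptions.

```python
import numpy as np

def project(points, K):
    """Pinhole projection of (N, 3) camera-frame points to (N, 2) pixels."""
    uv = (K @ points.T).T
    return uv[:, :2] / uv[:, 2:3]

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_mask(hull, shape):
    """Boolean image mask of pixels inside a CCW convex polygon of (u, v) points."""
    h, w = shape
    vs, us = np.mgrid[0:h, 0:w]          # vs = rows (v), us = columns (u)
    mask = np.ones(shape, dtype=bool)
    n = len(hull)
    for i in range(n):
        (au, av), (bu, bv) = hull[i], hull[(i + 1) % n]
        # Inside = on the left of every CCW edge.
        mask &= (bu - au) * (vs - av) - (bv - av) * (us - au) >= 0
    return mask

def update_appearance(model, frame, mask, lr=0.2):
    """Running-average appearance update, applied only to cuboid pixels
    so background never leaks into the learned model."""
    out = model.copy()
    out[mask] = (1 - lr) * model[mask] + lr * frame[mask]
    return out

# Toy usage: a unit cuboid 4 m in front of a made-up 128x128 camera.
K = np.array([[300.0, 0.0, 64.0],
              [0.0, 300.0, 64.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[x, y, z]
                    for x in (-0.5, 0.5)
                    for y in (-0.5, 0.5)
                    for z in (3.5, 4.5)])
mask = hull_mask(convex_hull(project(corners, K)), (128, 128))
model = update_appearance(np.zeros((128, 128)), np.ones((128, 128)), mask)
```

The paper learns appearance per cuboid face and recovers full 3D pose; the single convex-hull mask here is a deliberate simplification that only shows where the background-exclusion benefit comes from.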

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015, no. June, pp. 2410-2417.
Keyword [en]
Tracking (position), Fast motions, Large occlusion, Out-of-plane rotation, Partial views, Robust tracking, State-of-the-art methods, Unknown objects, Visual Tracking, Robotics
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-176136
DOI: 10.1109/ICRA.2015.7139520
ISI: 000370974902060
Scopus ID: 2-s2.0-84938249572
OAI: oai:DiVA.org:kth-176136
DiVA: diva2:875890
Conference
2015 IEEE International Conference on Robotics and Automation, ICRA 2015, 26 May 2015 through 30 May 2015
Note

QC 20151202. QC 20160411

Available from: 2015-12-02. Created: 2015-11-02. Last updated: 2016-04-11. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text | Scopus

Authority records BETA

Pieropan, Alessandro; Kjellström, Hedvig

Search in DiVA

By author/editor
Pieropan, Alessandro; Bergström, Niklas; Kjellström, Hedvig
By organisation
Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS
Electrical Engineering, Electronic Engineering, Information Engineering
