Robust 3D tracking of unknown objects
Pieropan, Alessandro: KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. ORCID iD: 0000-0003-2314-2880
Bergström, Niklas: KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS; KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP
Kjellström, Hedvig: KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. ORCID iD: 0000-0002-5750-9655
2015 (English). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2015, no. June, pp. 2410-2417. Conference paper, Published paper (Refereed)
Abstract [en]

Visual tracking of unknown objects is an essential task in robotic perception, of importance to a wide range of applications. In the general scenario, the robot has no full 3D model of the object beforehand, just the partial view of the object visible in the first video frame. A tracker with this information only will inevitably lose track of the object after occlusions or large out-of-plane rotations. The way to overcome this is to incrementally learn the appearances of new views of the object. However, this bootstrapping approach is sensitive to drifting due to occasional inclusion of the background into the model. In this paper we propose a method that exploits 3D point coherence between views to overcome the risk of learning the background, by only learning the appearances at the faces of an inscribed cuboid. This is closely related to the popular idea of 2D object tracking using bounding boxes, with the additional benefit of recovering the full 3D pose of the object as well as learning its full appearance from all viewpoints. We show quantitatively that the use of an inscribed cuboid to guide the learning leads to significantly more robust tracking than with other state-of-the-art methods. We show that our tracker is able to cope with 360 degree out-of-plane rotation, large occlusion and fast motion.
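The key mechanism described above is to restrict appearance learning to 3D points that lie on or inside an inscribed cuboid, so that background is never absorbed into the object model. The following is a minimal sketch of that gating step only, under assumed data structures (a world-frame point cloud, per-point descriptors, and a cuboid pose given as center, rotation and half-extents); the helper names inside_cuboid and update_appearance_model are hypothetical and this is not the authors' implementation.

    # Illustrative sketch only (not the paper's code): gate appearance updates by an
    # inscribed cuboid so that background points are never added to the object model.
    import numpy as np

    def inside_cuboid(points_world, center, rotation, half_extents, margin=0.0):
        """Boolean mask for 3D points lying inside an oriented cuboid.

        points_world : (N, 3) points in the world/camera frame.
        center       : (3,) cuboid center.
        rotation     : (3, 3) rotation matrix mapping world axes to cuboid axes.
        half_extents : (3,) half side lengths along the cuboid axes.
        margin       : optional shrink to keep points away from the faces.
        """
        local = (points_world - center) @ rotation.T      # express points in the cuboid frame
        return np.all(np.abs(local) <= (half_extents - margin), axis=1)

    def update_appearance_model(model, points_world, descriptors, cuboid_pose):
        """Add only those point descriptors that fall inside the inscribed cuboid."""
        center, rotation, half_extents = cuboid_pose
        mask = inside_cuboid(points_world, center, rotation, half_extents)
        # Hypothetical model structure: a list of (3D point, descriptor) pairs.
        model.extend(zip(points_world[mask], descriptors[mask]))
        return model

    # Example with synthetic data: one point inside the cuboid, one outside.
    points = np.array([[0.1, 0.0, 0.0], [1.5, 0.0, 0.0]])
    descs = np.array([[0.2, 0.8], [0.9, 0.1]])            # placeholder descriptors
    pose = (np.zeros(3), np.eye(3), np.array([0.5, 0.5, 0.5]))
    model = update_appearance_model([], points, descs, pose)
    print(len(model))                                     # -> 1: only the inlier point is learned

In this sketch, a descriptor is learned only if its reconstructed 3D position is consistent with the cuboid, which mirrors the abstract's motivation for avoiding drift from background contamination.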

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015. No. June, pp. 2410-2417
Keywords [en]
Tracking (position), Fast motions, Large occlusion, Out-of-plane rotation, Partial views, Robust tracking, State-of-the-art methods, Unknown objects, Visual Tracking, Robotics
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-176136
DOI: 10.1109/ICRA.2015.7139520
ISI: 000370974902060
Scopus ID: 2-s2.0-84938249572
OAI: oai:DiVA.org:kth-176136
DiVA id: diva2:875890
Conference
2015 IEEE International Conference on Robotics and Automation (ICRA 2015), 26-30 May 2015
Note

QC 20151202. QC 20160411

Available from: 2015-12-02. Created: 2015-11-02. Last updated: 2016-04-11. Bibliographically approved.

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text | Scopus
