SimTrack: A Simulation-based Framework for Scalable Real-time Object Pose Detection and Tracking
Pauwels, Karl. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. ORCID iD: 0000-0003-3731-0582
Kragic, Danica. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. ORCID iD: 0000-0003-2965-2953
2015 (English). In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015, p. 1300-1307. Conference paper, Published paper (Refereed)
Resource type
Text
Abstract [en]

We propose a novel approach for real-time object pose detection and tracking that is highly scalable in terms of the number of objects tracked and the number of cameras observing the scene. Key to this scalability is a high degree of parallelism in the algorithms employed. The method maintains a single 3D simulated model of the scene consisting of multiple objects together with a robot operating on them. This allows for rapid synthesis of appearance, depth, and occlusion information from each camera viewpoint. This information is used both for updating the pose estimates and for extracting the low-level visual cues. The visual cues obtained from each camera are efficiently fused back into the single consistent scene representation using a constrained optimization method. The centralized scene representation, together with the reliability measures it enables, simplify the interaction between pose tracking and pose detection across multiple cameras. We demonstrate the robustness of our approach in a realistic manipulation scenario. We publicly release this work as a part of a general ROS software framework for real-time pose estimation, SimTrack, that can be integrated easily for different robotic applications.
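The following is a minimal, hypothetical Python sketch of the centralized-scene idea outlined in the abstract; it is not SimTrack's actual ROS interface, and all class, function, and camera names are invented for illustration. It only shows the structure of one tracking iteration: render the shared scene model per camera, extract low-level cues per view, and fuse the per-camera corrections back into the single scene representation.

# Illustrative sketch (hypothetical, not the SimTrack ROS API) of a single
# shared scene model rendered per camera, with per-camera pose corrections
# fused back into the one consistent representation.
import numpy as np

class SceneModel:
    """Single simulated 3D scene holding one pose estimate per tracked object."""
    def __init__(self, object_ids):
        # Poses stored as 4x4 homogeneous transforms (identity to start).
        self.poses = {oid: np.eye(4) for oid in object_ids}

    def render(self, camera_id):
        # Stand-in for rapid synthesis of appearance, depth, and occlusion
        # information from this camera's viewpoint (GPU-rendered in the paper).
        return {"camera": camera_id, "poses": self.poses}

def extract_cues(image, rendered_view):
    # Stand-in for low-level visual cue extraction; returns a 6-DoF pose
    # correction (twist) per object. Zero here, since there is no real input.
    return {oid: np.zeros(6) for oid in rendered_view["poses"]}

def fuse(cues_per_camera):
    # Stand-in for the constrained optimization in the paper: here the
    # per-camera twist corrections are simply averaged per object.
    collected = {}
    for cues in cues_per_camera:
        for oid, twist in cues.items():
            collected.setdefault(oid, []).append(twist)
    return {oid: np.mean(twists, axis=0) for oid, twists in collected.items()}

# One tracking iteration over two (hypothetical) cameras.
scene = SceneModel(object_ids=["mug", "drill"])
cameras = ["cam_left", "cam_right"]
frames = {c: None for c in cameras}  # placeholders for live camera images

cues = [extract_cues(frames[c], scene.render(c)) for c in cameras]
for oid, twist in fuse(cues).items():
    # Apply only the translational part of the fused correction; a full
    # implementation would exponentiate the twist into an SE(3) update.
    scene.poses[oid][:3, 3] += twist[:3]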

Place, publisher, year, edition, pages
IEEE , 2015. p. 1300-1307
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-185105
DOI: 10.1109/IROS.2015.7353536
ISI: 000371885401067
Scopus ID: 2-s2.0-84958156400
ISBN: 978-1-4799-9994-1 (print)
OAI: oai:DiVA.org:kth-185105
DiVA, id: diva2:919039
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 28-October 2, 2015, Hamburg, Germany
Note

QC 20160412

Available from: 2016-04-12. Created: 2016-04-11. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Pauwels, Karl; Kragic, Danica
