Real-Time Semantic Stereo Matching
Marti, Miquel; Kjellström, Hedvig; et al.
2020 (English). In: Proceedings - IEEE International Conference on Robotics and Automation, Institute of Electrical and Electronics Engineers Inc., 2020, p. 10780-10787. Conference paper, Published paper (Refereed)
Abstract [en]

Scene understanding is paramount in robotics, self-navigation, augmented reality, and many other fields. To fully accomplish this task, an autonomous agent has to infer the 3D structure of the sensed scene (to know where it is looking) and its content (to know what it sees). To tackle these two tasks, deep neural networks trained to infer semantic segmentation and depth from stereo images are often the preferred choice. Specifically, semantic stereo matching can be tackled either by standalone models trained for the two tasks independently or by joint end-to-end architectures. Nonetheless, as proposed so far, both solutions are inefficient: the former requires two forward passes, while the latter suffers from the complexity of a single large network, even though jointly tackling both tasks is usually beneficial in terms of accuracy. In this paper, we propose a single compact and lightweight architecture for real-time semantic stereo matching. Our framework relies on coarse-to-fine estimations in a multi-stage fashion, allowing: i) very fast inference, even on embedded devices, with marginal drops in accuracy compared to state-of-the-art networks; ii) trading accuracy for speed, according to the specific application requirements. Experimental results on high-end GPUs as well as on an embedded Jetson TX2 confirm the superiority of semantic stereo matching over standalone tasks and highlight the versatility of our framework on any hardware and for any application.
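
The abstract's key idea, a multi-stage coarse-to-fine network whose inference can stop after any stage to trade accuracy for speed, can be illustrated with a short sketch. The PyTorch code below shows that general pattern only, not the paper's actual architecture: all module names, channel widths, and the number of stages are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Stage(nn.Module):
    # One refinement stage: predicts a disparity residual and semantic logits
    # from the features at its scale plus the (upsampled) coarser disparity.
    def __init__(self, feat_ch, num_classes):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(feat_ch + 1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.disp_head = nn.Conv2d(32, 1, 3, padding=1)
        self.sem_head = nn.Conv2d(32, num_classes, 3, padding=1)

    def forward(self, feat, disp_in):
        x = self.body(torch.cat([feat, disp_in], dim=1))
        return disp_in + self.disp_head(x), self.sem_head(x)

class AnytimeSemanticStereo(nn.Module):
    # A shared encoder builds a feature pyramid from the stacked stereo pair;
    # stages then run coarsest-first, and inference may stop after any stage.
    def __init__(self, num_classes=19, num_stages=3, feat_ch=32):
        super().__init__()
        downs, in_ch = [], 6  # 6 = left + right RGB images stacked on channels
        for _ in range(num_stages):
            downs.append(nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1),
                nn.ReLU(inplace=True)))
            in_ch = feat_ch
        self.down = nn.ModuleList(downs)
        self.stages = nn.ModuleList(
            Stage(feat_ch, num_classes) for _ in range(num_stages))

    def forward(self, left, right, max_stage=None):
        pyramid, x = [], torch.cat([left, right], dim=1)
        for d in self.down:          # feature pyramid, finest to coarsest
            x = d(x)
            pyramid.append(x)
        disp = torch.zeros_like(pyramid[-1][:, :1])  # start flat at coarsest scale
        outputs = []
        for i, (feat, stage) in enumerate(zip(reversed(pyramid), self.stages)):
            if i > 0:  # upsample the coarser disparity and rescale its values
                disp = 2.0 * F.interpolate(disp, size=feat.shape[-2:],
                                           mode='bilinear', align_corners=False)
            disp, sem = stage(feat, disp)
            outputs.append((disp, sem))
            if max_stage is not None and i + 1 >= max_stage:
                break  # early exit: coarser but much faster prediction
        return outputs

With hypothetical inputs left and right of shape (1, 3, H, W), net(left, right, max_stage=1) returns only the coarsest, fastest estimate, while max_stage=None runs every stage for the most refined disparity map and segmentation; this is the knob the abstract describes for trading accuracy against speed.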

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2020, p. 10780-10787
Keywords [en]
Agricultural robots, Augmented reality, Autonomous agents, Deep neural networks, Image segmentation, Network architecture, Program processors, Robotics, Semantics, Application requirements, Depth from stereo, Lightweight architecture, Real-time semantics, Scene understanding, Semantic segmentation, State of the art, Stereo matching, Stereo image processing
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-291313
DOI: 10.1109/ICRA40945.2020.9196784
Scopus ID: 2-s2.0-85092690739
OAI: oai:DiVA.org:kth-291313
DiVA, id: diva2:1537380
Conference
2020 IEEE International Conference on Robotics and Automation, ICRA 2020, 31 May 2020 through 31 August 2020
Note

QC 20210315

Available from: 2021-03-15. Created: 2021-03-15. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Marti, Miquel; Kjellström, Hedvig

Search in DiVA

By author/editor
Marti, Miquel; Kjellström, Hedvig
By organisation
Robotics, Perception and Learning, RPL
Computer graphics and computer vision
