Non-Parametric Spatial Context Structure Learning for Autonomous Understanding of Human Environments
Thippur, Akshaya. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. ORCID iD: 0000-0003-0448-3786
Stork, Johannes A. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
Jensfelt, Patric. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. ORCID iD: 0000-0002-1170-7162
2017 (English). In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) / [ed] Howard, A.; Suzuki, K.; Zollo, L. IEEE, 2017, p. 1317-1324. Conference paper, Published paper (Refereed)
Abstract [en]

Autonomous scene understanding through object classification today depends crucially on the accuracy of appearance-based robotic perception. This, however, is prone to object-detection failures arising from unfavourable lighting conditions and vision-unfriendly object properties. In our work, we propose a spatial-context-based system that infers object classes using solely structural information captured from the scene, to aid traditional perception systems. Our system operates on novel spatial features (IFRC) that are robust to noisy object detections; it also supports on-the-fly modification of learned knowledge, improving performance with practice. IFRC features are aligned with how humans express 3D space, thereby facilitating easy human-robot interaction (HRI) and hence simpler supervised learning. We tested our spatial-context-based system and conclude that it captures spatio-structural information for joint object classification well enough not only to act as a vision aid, but sometimes even to perform on par with appearance-based robotic vision.
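The abstract's core idea, classifying an object from its spatial relations to context objects with a lazy (nearest-neighbour-style) learner, can be sketched in miniature. The object names, coordinates, and pairwise feature below are illustrative assumptions only; they are not the paper's IFRC features or its data:

```python
import math

# Hypothetical training scene: (class label, (x, y, z) centroid) pairs.
# These objects and coordinates are illustrative, not data from the paper.
train = [
    ("monitor",  (0.0, 0.5, 0.4)),
    ("keyboard", (0.0, 0.2, 0.0)),
    ("mug",      (0.3, 0.1, 0.0)),
]

def spatial_feature(p, q):
    """A toy pairwise spatial feature: horizontal distance and height offset.
    (A stand-in for IFRC features, which are not specified in this record.)"""
    dx, dy, dz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
    return (math.hypot(dx, dy), dz)

def classify(unknown_pos, context):
    """Lazy classification: score each candidate class by how closely the
    unknown object's relations to the context objects match the relations
    that class has in the stored training scene; return the best match."""
    positions = dict(train)
    best_label, best_cost = None, float("inf")
    for label, pos in train:
        cost = 0.0
        for ctx_label, ctx_pos in context:
            f_unknown = spatial_feature(ctx_pos, unknown_pos)
            f_train = spatial_feature(positions[ctx_label], pos)
            cost += sum((a - b) ** 2 for a, b in zip(f_unknown, f_train))
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label

# An unknown object sitting where the mug sat, given the other two as context:
context = [("monitor", (0.0, 0.5, 0.4)), ("keyboard", (0.0, 0.2, 0.0))]
print(classify((0.3, 0.1, 0.0), context))  # prints "mug"
```

Because the learner is lazy, adding a newly taught scene is just appending to `train`, which matches the abstract's claim of on-the-fly knowledge modification.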

Place, publisher, year, edition, pages
IEEE, 2017. p. 1317-1324
Series
IEEE RO-MAN, ISSN 1944-9445
Keywords [en]
structure learning, spatial relationships, lazy learners, autonomous scene understanding
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-225236
ISI: 000427262400205
Scopus ID: 2-s2.0-85045741190
ISBN: 978-1-5386-3518-6 (print)
OAI: oai:DiVA.org:kth-225236
DiVA id: diva2:1194604
Conference
26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), AUG 28-SEP 01, 2017, Lisbon, PORTUGAL
Funder
EU, FP7, Seventh Framework Programme, 600623
Swedish Research Council, C0475401
Note

QC 20180403

Available from: 2018-04-03. Created: 2018-04-03. Last updated: 2018-04-11. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Thippur, Akshaya; Stork, Johannes A.; Jensfelt, Patric
