Relational approaches for joint object classification and scene similarity measurement in indoor environments
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Chemical Science and Engineering (CHE). (CVAP/CAS/CSC) ORCID iD: 0000-0002-1170-7162
KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. (CAS/CVAP/CSC) ORCID iD: 0000-0002-7796-1438
2014 (English). In: Proc. of 2014 AAAI Spring Symposium Qualitative Representations for Robots 2014, Palo Alto, California: The AAAI Press, 2014. Conference paper, Published paper (Refereed)
Abstract [en]

The qualitative structure of objects and their spatial distribution, to a large extent, define an indoor human environment scene. This paper presents an approach for indoor scene similarity measurement based on the spatial characteristics and arrangement of the objects in the scene. For this purpose, two main sets of spatial features are computed, from single objects and object pairs. A Gaussian Mixture Model is applied both on the single object features and the object pair features, to learn object class models and relationships of the object pairs, respectively. Given an unknown scene, the object classes are predicted using the probabilistic framework on the learned object class models. From the predicted object classes, object pair features are extracted. A final scene similarity score is obtained using the learned probabilistic models of object pair relationships. Our method is tested on a real world 3D database of desk scenes, using a leave-one-out cross-validation framework. To evaluate the effect of varying conditions on the scene similarity score, we apply our method on mock scenes, generated by removing objects of different categories in the test scenes.
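The pipeline the abstract describes (per-class GMMs on single-object features, a GMM on object-pair features, then likelihood-based classification and scene scoring) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 3-D feature vectors, class names, and displacement-based pair features are all hypothetical stand-ins for the paper's actual spatial features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Training data: toy single-object spatial features (hypothetical), one
# array of samples per object class.
train = {
    "monitor": rng.normal([0.5, 0.4, 0.3], 0.05, size=(50, 3)),
    "keyboard": rng.normal([0.3, 0.1, 0.05], 0.05, size=(50, 3)),
}

# One GMM per object class, learned from single-object features.
class_models = {
    cls: GaussianMixture(n_components=1, random_state=0).fit(X)
    for cls, X in train.items()
}

def predict_class(x):
    """Predict the class whose learned model gives x the highest log-likelihood."""
    scores = {cls: m.score_samples(x[None])[0] for cls, m in class_models.items()}
    return max(scores, key=scores.get)

# Pair model: a GMM over object-pair features. Here the pair feature is
# simply the displacement between two objects (an assumption).
pair_features = train["monitor"] - train["keyboard"]
pair_model = GaussianMixture(n_components=1, random_state=0).fit(pair_features)

def scene_similarity(objects):
    """Score a scene by the mean pair log-likelihood over all object pairs."""
    feats = [a - b for i, a in enumerate(objects) for b in objects[i + 1:]]
    return float(np.mean(pair_model.score_samples(np.array(feats))))

# Unknown scene: first predict object classes, then score the arrangement.
scene = [np.array([0.52, 0.41, 0.29]), np.array([0.31, 0.12, 0.06])]
labels = [predict_class(o) for o in scene]
score = scene_similarity(scene)
```

A scene whose pairwise arrangement matches the training scenes receives a high (less negative) mean log-likelihood; removing or displacing objects, as in the paper's mock-scene experiments, lowers the score.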

Place, publisher, year, edition, pages
Palo Alto, California: The AAAI Press, 2014.
National Category
Robotics
Identifiers
URN: urn:nbn:se:kth:diva-156596
Scopus ID: 2-s2.0-84904916968
OAI: oai:DiVA.org:kth-156596
DiVA: diva2:767253
Conference
AAAI Spring Symposium Qualitative Representations for Robots March 24–26 2014, Palo Alto, USA
Projects
STRANDS
Note

QC 20141208

Available from: 2014-12-01 Created: 2014-12-01 Last updated: 2015-06-08 Bibliographically approved

Open Access in DiVA

fulltext (1810 kB), 1160 downloads
File information
File name: FULLTEXT01.pdf
File size: 1810 kB
Checksum: SHA-512
2d50c11062c41d7a81cd50c9ca6b26916d01e739764c9140591c521b0fb57a656dc2df5b20177a230938736cf75ade102cefe7a30221f5d0d9950064b81e8bf7
Type: fulltext
Mimetype: application/pdf

Other links

Scopus
http://www.aaai.org/ocs/index.php/SSS/SSS14/paper/view/7709

Authority records BETA

Jensfelt, Patric
Folkesson, John

Search in DiVA

By author/editor
Alberti, Marina
Jensfelt, Patric
Folkesson, John
By organisation
Computer Vision and Active Perception, CVAP
Centre for Autonomous Systems, CAS
School of Chemical Science and Engineering (CHE)
Robotics

Search outside of DiVA

Google
Google Scholar
Total: 1160 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 1097 hits