Multi-modal Semantic Place Classification
2010 (English). In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 29, no. 2-3, pp. 298-320. Article in journal (Refereed). Published.
The ability to represent knowledge about space and the robot's own position within it is crucial for a mobile robot. To this end, topological and semantic descriptions are gaining popularity for augmenting purely metric space representations. In this paper we present a multi-modal place classification system that allows a mobile robot to identify places and recognize semantic categories in an indoor environment. The system effectively utilizes information from different robotic sensors by fusing multiple visual cues and laser range data. This is achieved using a high-level cue integration scheme based on a Support Vector Machine (SVM) that learns how to optimally combine and weight each cue. Our multi-modal place classification approach can be used to obtain a real-time semantic space labeling system which integrates information over time and space. We perform an extensive experimental evaluation of the method for two different platforms and environments, on a realistic off-line database and in a live experiment on an autonomous robot. The results clearly demonstrate the effectiveness of our cue integration scheme and its value for robust place classification under varying conditions.
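The high-level cue integration described in the abstract (per-cue classifiers whose outputs are combined and weighted by a second SVM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic data, feature dimensions, and kernel choices are assumptions made for the example.

```python
# Sketch of high-level SVM-based cue integration:
# stage 1 trains one SVM per sensory cue; stage 2 trains a linear SVM
# on the stacked per-cue margins, learning how to weight each cue.
import numpy as np
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for two cues, e.g. a visual descriptor and
# laser-range features (dimensions and noise levels are arbitrary).
y = rng.integers(0, 2, 200)                       # two place classes
vision = rng.normal(y[:, None], 1.0, (200, 10))   # visual cue features
laser = rng.normal(y[:, None], 2.0, (200, 5))     # laser cue (noisier)

# Stage 1: an SVM per cue, each producing a margin (decision value).
svm_vision = SVC(kernel="rbf").fit(vision, y)
svm_laser = SVC(kernel="rbf").fit(laser, y)
margins = np.column_stack([
    svm_vision.decision_function(vision),
    svm_laser.decision_function(laser),
])

# Stage 2: a linear SVM learns to combine and weight the cue margins.
combiner = LinearSVC().fit(margins, y)
pred = combiner.predict(margins)
acc = (pred == y).mean()
print(f"training accuracy of combined classifier: {acc:.2f}")
```

Because the combiner sees only the per-cue margins, a weak or noisy cue can be down-weighted automatically rather than hand-tuned.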
Place, publisher, year, edition, pages
2010. Vol. 29, no. 2-3, pp. 298-320.
Keywords: recognition, sensor fusion, localization, multi-modal place classification, sensor and cue integration, semantic annotation of space, image representations, vision
Computer and Information Science
Identifiers: URN: urn:nbn:se:kth:diva-19266; DOI: 10.1177/0278364909356483; ISI: 000275038200010; Scopus ID: 2-s2.0-77949376736; OAI: oai:DiVA.org:kth-19266; DiVA: diva2:337313
Funder: Swedish Research Council, 2005-3600-Complex
QC 20100525. Available from: 2010-08-05. Created: 2010-08-05. Last updated: 2011-12-09. Bibliographically approved.