Semantic modelling of space
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0002-1396-0102
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0002-1170-7162
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
2010 (English). In: Cognitive Systems (Cognitive Systems Monographs, vol. 8) / [ed] H. I. Christensen, G.-J. M. Kruijff, J. L. Wyatt, Springer Berlin/Heidelberg, 2010, pp. 165-221. Chapter in book (Refereed).
Abstract [en]

A cornerstone for robotic assistants is their understanding of the space they are to operate in: an environment built by people, for people to live and work in. The research questions addressed in this chapter concern spatial understanding and its connection to acting and interacting in indoor environments. Comparing the way robots typically perceive and represent the world with findings from cognitive psychology about how humans do it reveals a large discrepancy. If robots are to understand humans and vice versa, robots need to use the same concepts a person would use to refer to things and phenomena. Bridging the gap between human and robot spatial representations is thus of paramount importance.

A spatial knowledge representation for robotic assistants must address the issues of human-robot communication. It must also provide a basis for spatial reasoning and efficient planning, and it must ensure safe and reliable navigation control. Only then can robots be deployed in semi-structured environments, such as offices, where they have to interact with humans in everyday situations.

To meet these requirements, i.e. robust robot control and human-like conceptualization, in CoSy we adopted a spatial representation that contains maps at different levels of abstraction. This stepwise abstraction from raw sensory input not only produces maps that are suitable for reliable robot navigation, but also yields a level of representation similar to a human conceptualization of spatial organization. Furthermore, this model provides a richer semantic view of an environment, permitting the robot to perform spatial categorization rather than only instantiation.

This approach is at the heart of the Explorer demonstrator, a mobile robot capable of creating a conceptual spatial map of an indoor environment. In the present chapter, we describe how we use multi-modal sensory input provided by a laser range finder and a camera to build increasingly abstract spatial representations.
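The layered representation described in the abstract can be illustrated with a minimal sketch. All class and method names below are hypothetical, chosen for illustration only; the chapter itself defines its own layers and terminology. The sketch shows the general idea of stepwise abstraction: a metric layer of raw geometry, a navigation layer of discrete places, and a conceptual layer that assigns human-level categories to groups of places.

```python
from dataclasses import dataclass, field

@dataclass
class MetricMap:
    """Lowest layer: raw geometry from the laser range finder (e.g. wall segments)."""
    features: list = field(default_factory=list)

@dataclass
class Place:
    """Navigation layer: a discrete node anchored in the metric frame."""
    place_id: int
    position: tuple  # (x, y) in the metric frame

@dataclass
class Area:
    """Conceptual layer: a group of places with a human-level category."""
    name: str        # e.g. "kitchen A" -- a label a person would use
    category: str    # categorical concept, not just an instance
    place_ids: list = field(default_factory=list)

class ConceptualSpatialMap:
    """Ties the layers together, mirroring the stepwise-abstraction idea."""

    def __init__(self):
        self.metric = MetricMap()
        self.places = {}
        self.areas = []

    def add_place(self, place_id, position):
        self.places[place_id] = Place(place_id, position)

    def categorize(self, name, category, place_ids):
        # Assign a human-interpretable category to a set of places, enabling
        # categorization ("this is a kitchen") rather than only instantiation
        # ("this is place 7").
        self.areas.append(Area(name, category, list(place_ids)))

    def concept_at(self, place_id):
        for area in self.areas:
            if place_id in area.place_ids:
                return area.category
        return "unknown"

# Example: two places grouped and categorized at the conceptual level.
m = ConceptualSpatialMap()
m.add_place(1, (0.0, 0.0))
m.add_place(2, (1.5, 0.2))
m.categorize("kitchen A", "kitchen", [1, 2])
```

With this structure, a query such as `m.concept_at(1)` returns `"kitchen"`, the kind of human-level answer the abstract argues a robotic assistant needs for communication and reasoning.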

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2010, vol. 8, pp. 165-221.
National Category
Computer Vision and Robotics (Autonomous Systems)
URN: urn:nbn:se:kth:diva-67502
DOI: 10.1007/978-3-642-11694-0_5
ISBN: 978-3-642-11694-0
OAI: diva2:485889
QC 20120130. Available from: 2012-01-27. Created: 2012-01-27. Last updated: 2012-02-21. Bibliographically approved.

Open Access in DiVA

No full text


By author/editor
Pronobis, Andrzej; Jensfelt, Patric; Sjöö, Kristoffer; Zender, Hendrik; Burgard, Wolfram
By organisation
Computer Vision and Active Perception, CVAP
