The explorer system
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
ORCID iD: 0000-0002-1170-7162
2010 (English). In: Cognitive Systems Monographs: Cognitive Systems / [ed] H. I. Christensen, G.-J. M. Kruijff, J. L. Wyatt. Springer Berlin/Heidelberg, 2010, vol. 8, pp. 395-421. Chapter in book (Refereed)
Abstract [en]

In the Explorer scenario we deal with the problems of modeling space, acting in this space, and reasoning about it. Spatial models are built using input from sensors such as laser scanners and cameras but, equally importantly, also based on human input. It is this combination that enables the creation of a spatial model that can support low-level tasks such as navigation, as well as interaction. Even combined, the inputs only provide a partial description of the world. By combining this knowledge with a reasoning system and a common-sense ontology, further information can be inferred to make the description of the world more complete. Unlike in the PlayMate system, not all the information needed to build the spatial models is available to the sensors at all times. The Explorer needs to move around, i.e., explore space, to gather information and integrate it into the spatial models. Two main modes for this exploration of space have been investigated within the Explorer scenario. In the first mode the robot explores space together with a user in a home-tour fashion. That is, the user shows the robot around their shared environment. This is what we call the Human Augmented Mapping paradigm. The second mode is fully autonomous exploration, where the robot moves with the purpose of covering space. In practice the two modes would be used interchangeably to get the best trade-off between autonomy, shared representation, and speed. The focus in the Explorer is not on performing a particular task to perfection, but rather on acting within a flexible framework that alleviates the need for scripting and hardwiring. We want to investigate two problems within this context: what information must be exchanged by different parts of the system to make this possible, and how the current state of the world should be represented during such exchanges. One particular interaction which encompasses many of the aforementioned issues is giving the robot the ability to talk about space.
This interaction raises questions such as:  how can we design models that allow the robot and human to talk about where things are, and how do we link the dialogue and the mapping systems?
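The abstract's two exploration modes and its combination of sensor input with human labels can be illustrated with a minimal sketch. Everything here is hypothetical (the class and function names `SpatialModel`, `explore`, `where_is` are invented for illustration and do not come from the chapter); it only shows the idea that guided "home-tour" steps attach human semantics to places, while autonomous steps add coverage without labels, and that labels let the robot "talk about where things are":

```python
from dataclasses import dataclass, field

@dataclass
class SpatialModel:
    """Toy spatial model: metric places from sensors, semantics from a human."""
    places: dict = field(default_factory=dict)  # place_id -> metric pose
    labels: dict = field(default_factory=dict)  # place_id -> human-given label

    def add_place(self, place_id, pose):
        # Sensor-derived entry (e.g. from laser/camera-based mapping).
        self.places[place_id] = pose

    def label_place(self, place_id, label):
        # Human input ("This is the kitchen") attaches semantics to a place.
        if place_id in self.places:
            self.labels[place_id] = label

    def where_is(self, label):
        # Supports talking about space: resolve a label to a metric pose.
        for pid, lab in self.labels.items():
            if lab == label:
                return self.places[pid]
        return None

def explore(model, tour_steps, autonomous_steps):
    """Interleave human-guided ('home tour') and autonomous exploration."""
    for place_id, pose, label in tour_steps:    # guided: human names places
        model.add_place(place_id, pose)
        model.label_place(place_id, label)
    for place_id, pose in autonomous_steps:     # autonomous: coverage only
        model.add_place(place_id, pose)

model = SpatialModel()
explore(model,
        tour_steps=[("p1", (0.0, 0.0), "kitchen")],
        autonomous_steps=[("p2", (3.5, 1.2))])
print(model.where_is("kitchen"))  # -> (0.0, 0.0)
```

In the actual system the dialogue and mapping components are far richer, but the sketch captures the trade-off the abstract describes: guided exploration yields a shared, nameable representation, while autonomous exploration yields coverage.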

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2010, vol. 8, pp. 395-421.
National Category
Computer Vision and Robotics (Autonomous Systems)
URN: urn:nbn:se:kth:diva-67465
DOI: 10.1007/978-3-642-11694-0_10
ISBN: 978-3-642-11694-0
OAI: diva2:485054
QC 20120130. Available from: 2012-01-27. Created: 2012-01-27. Last updated: 2012-02-21. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text


By author/editor
Sjöö, Kristoffer; Zender, Hendrik; Jensfelt, Patric; Pronobis, Andrzej
By organisation
Computer Vision and Active Perception, CVAP

