101 - 136 of 136
  • 101.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Caputo, Barbara
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A discriminative approach to robust visual place recognition, 2006. In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, 2006, p. 3829-3836. Conference paper (Refereed)
    Abstract [en]

    An important competence for a mobile robot system is the ability to localize and perform context interpretation. This is required to perform basic navigation and to facilitate local specific services. Usually localization is performed based on a purely geometric model. Through use of vision and place recognition a number of opportunities open up in terms of flexibility and association of semantics to the model. To achieve this the present paper presents an appearance based method for place recognition. The method is based on a large margin classifier in combination with a rich global image descriptor. The method is robust to variations in illumination and minor scene changes. The method is evaluated across several different cameras, changes in time-of-day and weather conditions. The results clearly demonstrate the value of the approach.

  • 102.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hierarchical Multi-modal Place Categorization, 2011. In: Proc. of the European Conference on Mobile Robotics (ECMR'11), 2011. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a hierarchical approach to place categorization. Low-level sensory data is processed into more abstract concepts, named properties of space. The framework allows for fusing information from heterogeneous sensory modalities and a range of derivatives of their data. Place categories are defined based on the properties, which decouples them from the low-level sensory data. This allows for better scalability, both in terms of memory and computation. The probabilistic inference is performed in a chain graph which supports incremental learning of the room category models. Experimental results are presented where the shape, size and appearance of the rooms are used as properties along with the number of objects of certain classes and the topology of space.

  • 103.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Large-scale semantic mapping and reasoning with heterogeneous modalities, 2012. In: 2012 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2012, p. 3515-3522. Conference paper (Refereed)
    Abstract [en]

    This paper presents a probabilistic framework combining heterogeneous, uncertain information such as object observations, the shape, size and appearance of rooms, and human input for semantic mapping. It abstracts multi-modal sensory information and integrates it with conceptual common-sense knowledge in a fully probabilistic fashion. It relies on the concept of spatial properties, which make the semantic map more descriptive, and the system more scalable and better adapted for human interaction. A probabilistic graphical model, a chain graph, is used to represent the conceptual information and perform spatial reasoning. Experimental results from online system tests in a large unstructured office environment highlight the system's ability to infer semantic room categories, predict the existence of objects and the values of other spatial properties, as well as reason about unexplored space.

  • 104.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Understanding the Real World: Combining Objects, Appearance, Geometry and Topology for Semantic Mapping, 2011. Report (Other academic)
    Abstract [en]

    A cornerstone for mobile robots operating in man-made environments and interacting with humans is representing and understanding the human semantic concepts of space. In this report, we present a multi-layered semantic mapping algorithm able to combine information about the existence of objects in the environment with knowledge about the topology and semantic properties of space such as room size, shape and general appearance. We use it to infer semantic categories of rooms and predict existence of objects and values of other spatial properties. We perform experiments offline and online on a mobile robot showing the efficiency and usefulness of our system.

  • 105.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Zender, Hendrik
    Kruijff, Geert-Jan M.
    Mozos, O. M.
    Burgard, Wolfram
    Semantic modelling of space, 2010. In: Cognitive Systems Monographs: Cognitive Systems / [ed] H. I. Christensen, G.-J. M. Kruijff, J. L. Wyatt, Springer Berlin/Heidelberg, 2010, 8, p. 165-221. Chapter in book (Refereed)
    Abstract [en]

    A cornerstone for robotic assistants is their understanding of the space they are to be operating in: an environment built by people for people to live and work in. The research questions we are interested in, in this chapter, concern spatial understanding, and its connection to acting and interacting in indoor environments. Comparing the way robots typically perceive and represent the world with findings from cognitive psychology about how humans do it, it is evident that there is a large discrepancy. If robots are to understand humans and vice versa, robots need to make use of the same concepts to refer to things and phenomena as a person would do. Bridging the gap between human and robot spatial representations is thus of paramount importance. A spatial knowledge representation for robotic assistants must address the issues of human-robot communication. However, it must also provide a basis for spatial reasoning and efficient planning. Finally, it must ensure safe and reliable navigation control. Only then can robots be deployed in semi-structured environments, such as offices, where they have to interact with humans in everyday situations. In order to meet the aforementioned requirements, i.e. robust robot control and human-like conceptualization, in CoSy we adopted a spatial representation that contains maps at different levels of abstraction. This stepwise abstraction from raw sensory input not only produces maps that are suitable for reliable robot navigation, but also yields a level of representation that is similar to a human conceptualization of spatial organization. Furthermore, this model provides a richer semantic view of an environment that permits the robot to do spatial categorization rather than only instantiation. This approach is at the heart of the Explorer demonstrator, which is a mobile robot capable of creating a conceptual spatial map of an indoor environment. In the present chapter, we describe how we use multi-modal sensory input provided by a laser range finder and a camera in order to build more and more abstract spatial representations.

  • 106.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Mozos, O. Martinez
    Caputo, B.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Multi-modal Semantic Place Classification, 2010. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 29, no 2-3, p. 298-320. Article in journal (Refereed)
    Abstract [en]

    The ability to represent knowledge about space and its position therein is crucial for a mobile robot. To this end, topological and semantic descriptions are gaining popularity for augmenting purely metric space representations. In this paper we present a multi-modal place classification system that allows a mobile robot to identify places and recognize semantic categories in an indoor environment. The system effectively utilizes information from different robotic sensors by fusing multiple visual cues and laser range data. This is achieved using a high-level cue integration scheme based on a Support Vector Machine (SVM) that learns how to optimally combine and weight each cue. Our multi-modal place classification approach can be used to obtain a real-time semantic space labeling system which integrates information over time and space. We perform an extensive experimental evaluation of the method for two different platforms and environments, on a realistic off-line database and in a live experiment on an autonomous robot. The results clearly demonstrate the effectiveness of our cue integration scheme and its value for robust place classification under varying conditions.
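The high-level cue integration described above can be illustrated with a small sketch (this is not the paper's code): per-cue SVMs are trained first, and a second SVM on their stacked decision values learns how to combine and weight the cues. The cue names, feature sizes and random data below are placeholder assumptions.

```python
# Hedged sketch of high-level cue integration for place classification.
# Not the paper's implementation: cues, feature sizes and labels are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_classes = 300, 3
cues = {                                   # stand-in features for three cues
    "global_visual": rng.normal(size=(n_samples, 32)),
    "local_visual":  rng.normal(size=(n_samples, 64)),
    "laser_range":   rng.normal(size=(n_samples, 16)),
}
labels = rng.integers(0, n_classes, size=n_samples)

# Stage 1: train one SVM per cue.
cue_svms = {name: SVC(kernel="rbf", decision_function_shape="ovr").fit(X, labels)
            for name, X in cues.items()}

# Stage 2: stack the per-cue decision values and train a second SVM that
# learns how to combine and weight the cues (high-level cue integration).
stacked = np.hstack([svm.decision_function(cues[name])
                     for name, svm in cue_svms.items()])
integrator = SVC(kernel="linear").fit(stacked, labels)

print("training accuracy of the integrated classifier:",
      integrator.score(stacked, labels))
```

In practice the second stage would be trained on held-out outputs of the first stage rather than on the same training data.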

  • 107.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A Framework for Robust Cognitive Spatial Mapping, 2009. In: 2009 International Conference on Advanced Robotics, ICAR 2009, IEEE, 2009, p. 686-693. Conference paper (Refereed)
    Abstract [en]

    Spatial knowledge constitutes a fundamental component of the knowledge base of a cognitive, mobile agent. This paper introduces a rigorously defined framework for building a cognitive spatial map that permits high level reasoning about space along with robust navigation and localization. Our framework builds on the concepts of places and scenes expressed in terms of arbitrary, possibly complex features as well as local spatial relations. The resulting map is topological and discrete, robocentric and specific to the agent's perception. We analyze spatial mapping design mechanics in order to obtain rules for how to define the map components and attempt to prove that if certain design rules are obeyed then certain map properties are guaranteed to be realized. The idea of this paper is to take a step back from existing algorithms and literature and see how a rigorous formal treatment can lead the way towards a powerful spatial representation for localization and navigation. We illustrate the power of our analysis and motivate our cognitive mapping characteristics with some illustrative examples.

  • 108.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bishop, Adrian N.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Representing spatial knowledge in mobile cognitive systems, 2010. In: Intelligent Autonomous Systems 11, IAS 2010, 2010, p. 133-142. Conference paper (Refereed)
    Abstract [en]

    A cornerstone for cognitive mobile agents is to represent the vast body of knowledge about the space in which they operate. In order to be robust and efficient, such a representation must address requirements imposed on the integrated system as a whole, but also those resulting from the properties of its components. In this paper, we carefully analyze the problem and design the structure of a spatial knowledge representation for a cognitive mobile system. Our representation is layered and represents knowledge at different levels of abstraction. It deals with complex, cross-modal, spatial knowledge that is inherently uncertain and dynamic. Furthermore, it incorporates discrete symbols that facilitate communication with the user and with the components of a cognitive system. We present the structure of the representation and propose concrete instantiations.

  • 109. Seiz, M.
    et al.
    Jensfelt, Patric
    Autonomous System Laboratory, Swiss Federal Inst. of Technology, Switzerland.
    Christensen, H.I.
    Active exploration for feature based global localization, 2000. In: 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2000, p. 281-287. Conference paper (Refereed)
    Abstract [en]

    This paper presents an algorithm for a mobile robot to actively explore the environment in order to achieve global localization. During the localization process interesting regions for future exploration are selected based on the detected features and on the hypotheses generated by the localization algorithm. The localization process is improved by presenting unique features to the robot's sensors. The proposed algorithm provides highly robust global localization in real world environments with very low computational effort spent in finding exploration goal points. Experimental results are given demonstrating the effectiveness of the algorithm in a number of different situations.

  • 110.
    Selin, Magnus
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. Linkoping Univ, Dept Comp & Informat Sci, S-58183 Linkoping, Sweden.
    Tiger, Maths
    Linkoping Univ, Dept Comp & Informat Sci, S-58183 Linkoping, Sweden..
    Duberg, Daniel
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Heintz, Fredrik
    Linkoping Univ, Dept Comp & Informat Sci, S-58183 Linkoping, Sweden..
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Efficient Autonomous Exploration Planning of Large-Scale 3-D Environments, 2019. In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no 2, p. 1699-1706. Article in journal (Refereed)
    Abstract [en]

    Exploration is an important aspect of robotics, whether it is for mapping, rescue missions, or path planning in an unknown environment. Frontier Exploration planning (FEP) and Receding Horizon Next-Best-View planning (RH-NBVP) are two different approaches with different strengths and weaknesses. FEP explores a large environment consisting of separate regions with ease, but is slow at reaching full exploration due to moving back and forth between regions. RH-NBVP shows great potential and efficiently explores individual regions, but has the disadvantage that it can get stuck in large environments not exploring all regions. In this letter, we present a method that combines both approaches, with FEP as a global exploration planner and RH-NBVP for local exploration. We also present techniques to estimate potential information gain faster, to cache previously estimated gains and to exploit these to efficiently estimate new queries.
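A minimal sketch of the hybrid strategy summarized above, under stated assumptions: a receding-horizon next-best-view step scores local candidates by cached information gain, and the planner falls back to the nearest frontier when nothing nearby is promising. The gain model, thresholds and the count_unknown_voxels_visible_from helper are hypothetical, not the letter's implementation.

```python
# Hedged sketch: local next-best-view planning with a gain cache, falling back
# to global frontier exploration. All helpers and parameters are illustrative.
import math

gain_cache = {}     # candidate position (tuple) -> cached information gain estimate

def estimated_gain(candidate, volumetric_map):
    """Lazily estimate and cache the information gain of a view candidate."""
    if candidate not in gain_cache:
        # Hypothetical helper: number of unknown voxels visible from the candidate.
        gain_cache[candidate] = volumetric_map.count_unknown_voxels_visible_from(candidate)
    return gain_cache[candidate]

def plan_next_goal(robot_pos, local_candidates, frontiers, volumetric_map,
                   gain_threshold=10.0, decay=0.5):
    """Local NBV step first; fall back to the nearest global frontier."""
    best, best_score = None, -math.inf
    for cand in local_candidates:
        # RH-NBVP flavour: gain discounted by the cost of travelling there.
        score = estimated_gain(cand, volumetric_map) * math.exp(-decay * math.dist(robot_pos, cand))
        if score > best_score:
            best, best_score = cand, score
    if best is not None and estimated_gain(best, volumetric_map) > gain_threshold:
        return best
    # FEP flavour: local exploration exhausted, move to the closest frontier.
    return min(frontiers, key=lambda f: math.dist(robot_pos, f), default=None)
```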

  • 111.
    Sjöö, Kristoffer
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Functional topological relations for qualitative spatial representation, 2011. Conference paper (Refereed)
    Abstract [en]

    In this paper, a framework is proposed for representing knowledge about 3-D space in terms of the functional support and containment relationships, corresponding approximately to the prepositions "on" and "in". A perceptual model is presented which allows for appraising these qualitative relations given the geometries of objects; also, an axiomatic system for reasoning with the relations is put forward. We implement the system on a mobile robot and show how it can use uncertain visual input to infer a coherent qualitative evaluation of a scene, in terms of these functional relations.

  • 112.
    Sjöö, Kristoffer
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Topological spatial relations for active visual search, 2012. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no 9, p. 1093-1107. Article in journal (Refereed)
    Abstract [en]

    If robots are to assume their long anticipated place by humanity's side and be of help to us in our partially structured environments, we believe that adopting human-like cognitive patterns will be valuable. Such environments are the products of human preferences, activity and thought; they are imbued with semantic meaning. In this paper we investigate qualitative spatial relations with the aim of both perceiving those semantics, and of using semantics to perceive. More specifically, in this paper we introduce general perceptual measures for two common topological spatial relations, "on" and "in", that allow a robot to evaluate object configurations, possible or actual, in terms of those relations. We also show how these spatial relations can be used as a way of guiding visual object search. We do this by providing a principled approach for indirect search in which the robot can make use of known or assumed spatial relations between objects, significantly increasing the efficiency of search by first looking for an intermediate object that is easier to find. We explain our design, implementation and experimental setup and provide extensive experimental results to back up our thesis.

  • 113.
    Sjöö, Kristoffer
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Mörwald, Thomas
    Zhou, Kai
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Mechanical support as a spatial abstraction for mobile robots, 2010. In: IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010), 2010, p. 4894-4900. Conference paper (Refereed)
    Abstract [en]

    Motivated by functional interpretations of spatial language terms, and the need for cognitively plausible and practical abstractions for mobile service robots, we present a spatial representation based on the physical support of one object by another, corresponding to the preposition "on". A perceptual model for evaluating this relation is suggested, and experiments, simulated as well as using a real robot, are presented. We indicate how this model can be used for important tasks such as communication of spatial knowledge, abstract reasoning and learning, taking as an example direct and indirect visual search. We also demonstrate the model experimentally and show that it produces intuitively feasible results from visual scene analysis as well as synthetic distributions that can be put to a number of uses.

  • 114.
    Sjöö, Kristoffer
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gálvez López, Dorian
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Paul, Chandana
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Object Search and Localization for an Indoor Mobile Robot, 2009. In: Journal of Computing and Information Technology, ISSN 1330-1136, E-ISSN 1846-3908, Vol. 17, no 1, p. 67-80. Article in journal (Refereed)
    Abstract [en]

    In this paper we present a method for search and localization of objects with a mobile robot using a monocular camera with zoom capabilities. We show how to overcome the limitations of low resolution images in object recognition by utilizing a combination of an attention mechanism and zooming as the first steps in the recognition process. The attention mechanism is based on receptive field co-occurrence histograms and the object recognition on SIFT feature matching. We present two methods for estimating the distance to the objects, which serve both as the input to the control of the zoom and the final object localization. Through extensive experiments in a realistic environment, we highlight the strengths and weaknesses of both methods. To evaluate the usefulness of the method we also present results from experiments with an integrated system where a global sensing plan is generated based on view planning to let the camera cover the space on a per room basis.

  • 115.
    Sjöö, Kristoffer
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Learning spatial relations from functional simulation, 2011. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011), 2011, p. 1513-1519. Conference paper (Refereed)
    Abstract [en]

    Robots acting in complex environments need not only be aware of objects, but also of the relationships objects have with each other. This paper suggests a conceptualization of these relationships in terms of task-relevant functional distinctions, such as support, location control, protection and confinement. Being able to discern such relations in a scene will be important for robots in practical tasks; accordingly, it is demonstrated how predictive models can be trained using data from physics simulations. The resulting models are shown to be both highly predictive and intuitively reasonable.

  • 116.
    Sjöö, Kristoffer
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Paul, Chandana
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Object Localization using Bearing Only Visual Detection, 2008. In: Intelligent Autonomous Systems 10 (IAS-10) / [ed] Burgard W; Dillmann R; Plagemann C; Vahrenkamp N, Amsterdam: IOS Press, 2008, p. 254-263. Conference paper (Refereed)
    Abstract [en]

    This work demonstrates how an autonomous robotic platform can use intrinsically noisy, coarse-scale visual methods lacking range information to produce good estimates of the location of objects, by using a map-space representation for weighting together multiple observations from different vantage points. As the robot moves through the environment it acquires visual images which are processed by means of a fast but noisy visual detection algorithm that gives bearing only information. The results from the detection are projected from image space into map space, where data from multiple viewpoints can intrinsically combine to yield an increasingly accurate picture of the location of objects. This method has been implemented and shown to work for object localization on a real robot. It has also been tested extensively in simulation, with systematically varied false positive and false negative detection rates. The results demonstrate that this is a viable method for object localization, even under a wide range of sensor uncertainties.
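The map-space fusion of bearing-only detections can be sketched as accumulating evidence along detection rays in a 2-D grid; cells where rays from different vantage points overlap stand out. Grid size, weights and the example detections are illustrative assumptions rather than the paper's parameters.

```python
# Hedged sketch: fusing bearing-only detections from several viewpoints by
# accumulating evidence along each detection ray in a 2-D grid (map space).
import numpy as np

GRID, RES = 100, 0.1                 # 100x100 cells, 0.1 m per cell (assumed)
evidence = np.zeros((GRID, GRID))

def add_bearing_observation(robot_xy, bearing, weight=0.4, max_range=5.0):
    """Add evidence to every cell along the ray from robot_xy in direction bearing."""
    for r in np.arange(0.2, max_range, RES):
        x = robot_xy[0] + r * np.cos(bearing)
        y = robot_xy[1] + r * np.sin(bearing)
        i, j = int(round(x / RES)), int(round(y / RES))
        if 0 <= i < GRID and 0 <= j < GRID:
            evidence[i, j] += weight

# Two noisy, range-less detections of the same object from different poses:
add_bearing_observation((2.0, 2.0), np.deg2rad(0))     # looking along +x
add_bearing_observation((5.0, 0.5), np.deg2rad(90))    # looking along +y

# Cells where detection rays from different viewpoints reinforce each other
# accumulate the most evidence; that cell is the object location estimate.
i, j = np.unravel_index(np.argmax(evidence), evidence.shape)
print("most likely object cell:", (i, j), "approx. position:", (i * RES, j * RES))
```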

  • 117.
    Sjöö, Kristoffer
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Zender, Hendrik
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kruijff, Geert-Jan M.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Hawes, Nick
    Brenner, Michael
    The explorer system, 2010. In: Cognitive Systems Monographs: Cognitive Systems / [ed] H. I. Christensen, G.-J. M. Kruijff, J. L. Wyatt, Springer Berlin/Heidelberg, 2010, 8, p. 395-421. Chapter in book (Refereed)
    Abstract [en]

    In the Explorer scenario we deal with the problems of modeling space, acting in this space and reasoning about it. Spatial models are built using input from sensors such as laser scanners and cameras but, equally importantly, also based on human input. It is this combination that enables the creation of a spatial model that can support low level tasks such as navigation, as well as interaction. Even combined, the inputs only provide a partial description of the world. By combining this knowledge with a reasoning system and a common sense ontology, further information can be inferred to make the description of the world more complete. Unlike in the PlayMate system, not all the information that is needed to build the spatial models is available to the sensors at all times. The Explorer needs to move around, i.e. explore space, to gather information and integrate it into the spatial models. Two main modes for this exploration of space have been investigated within the Explorer scenario. In the first mode the robot explores space together with a user in a home tour fashion; that is, the user shows the robot around their shared environment. This is what we call the Human Augmented Mapping paradigm. The second mode is fully autonomous exploration, where the robot moves with the purpose of covering space. In practice the two modes would be used interchangeably to get the best trade-off between autonomy, shared representation and speed. The focus in the Explorer is not on performing a particular task to perfection, but rather on acting within a flexible framework that alleviates the need for scripting and hardwiring. We want to investigate two problems within this context: what information must be exchanged by different parts of the system to make this possible, and how the current state of the world should be represented during such exchanges. One particular interaction which encompasses many of the aforementioned issues is giving the robot the ability to talk about space. This interaction raises questions such as: how can we design models that allow the robot and human to talk about where things are, and how do we link the dialogue and the mapping systems?

  • 118.
    Smith, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A predictor for operator input for time-delayed teleoperation, 2010. In: Mechatronics (Oxford), ISSN 0957-4158, E-ISSN 1873-4006, Vol. 20, no 7, p. 778-786. Article in journal (Refereed)
    Abstract [en]

    In this paper we describe a method for bridging Internet time delays in a free motion type teleoperation scenario in an unmodeled remote environment with video feedback. The method proposed uses minimum jerk motion models to predict the input from the user a time into the future that is equivalent to the round-trip communication delay. The predictions are then used to control a remote robot. Thus, the operator can in effect observe the resulting motion of the remote robot with virtually no time-delay, even in the presence of a delay on the physical communications channel. We present results from a visually guided teleoperated line tracing experiment with 100 ms round-trip delays, where we show that the proposed method makes a significant performance improvement for teleoperation with delays corresponding to intercontinental distances.
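One way to picture the predictor, as a hedged simplification of the minimum jerk idea: fit a quintic polynomial (the functional form of a minimum-jerk trajectory) to a short window of recent operator input and evaluate it one round-trip delay ahead. Window length, sample rate and the example trajectory are assumptions for illustration, not the paper's data.

```python
# Hedged sketch of predicting delayed operator input with a quintic
# (minimum-jerk-shaped) fit; all numbers below are illustrative.
import numpy as np

def predict_input(times, positions, delay):
    """times, positions: recent 1-D operator input samples; delay: seconds ahead."""
    coeffs = np.polyfit(times, positions, deg=5)        # quintic fit to the window
    return np.polyval(coeffs, times[-1] + delay)        # extrapolate past the delay

# Example: 0.5 s of 50 Hz samples of one input axis, predicted 0.1 s ahead.
t = np.linspace(0.0, 0.5, 26)
x = 0.3 * (10 * (t / 0.5) ** 3 - 15 * (t / 0.5) ** 4 + 6 * (t / 0.5) ** 5)  # minimum-jerk reach
print("predicted input 100 ms ahead:", predict_input(t, x, 0.1))
```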

  • 119.
    Sundvall, Paul
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Fault detection for mobile robots using redundant positioning systems, 2006. In: 2006 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-10, New York, NY: IEEE, 2006, p. 3781-3786. Conference paper (Refereed)
    Abstract [en]

    Reliable navigation is a very important part of an autonomous mobile robot system. This means, for instance, that the robot should not lose track of its position, even if unexpected events like wheel slip and collisions occur. The standard approach to this problem is to construct a navigation system that is robust in itself. This paper proposes that detecting faults can also be done outside the normal navigation system, as an additional fault detector. Besides increasing the robustness, a means for detecting deviations is obtained, which can be important for the rest of the robot system, for instance the top level planner. The method uses two or more sources of robot position estimates and compares them to detect unexpected deviations without getting deceived by drift or different characteristics in the position systems it gets information from. Both relative and absolute position sources can be used, meaning that existing positioning systems already implemented can be used in the detector. For detection purposes, an extended Kalman filter is used in conjunction with a CUSUM test. The detector is able to not only detect faults, but also give an estimate of when the fault occurred, which is useful for doing fault recovery. The detector is easy to implement, as it requires no modification of existing systems. Also, the computational demands are very low. The approach is implemented and demonstrated on a mobile robot, using odometry and a scan matcher as sources of position information. It is shown that the system is able to detect wheel slip in real-time.
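The detection idea can be sketched, in simplified form, as a CUSUM test on the normalized residual between position increments reported by two independent sources (for example odometry and a scan matcher). The sketch omits the extended Kalman filter used in the paper, and the noise levels, drift allowance and threshold are illustrative assumptions.

```python
# Hedged sketch: two-sided CUSUM test on the residual between two position
# providers; simulated data with a wheel-slip fault injected at step 60.
import numpy as np

def cusum_fault_detection(odom_increments, scan_increments,
                          sigma=0.02, drift_allowance=0.5, threshold=8.0):
    """Return the index of the first detected fault, or None."""
    g_pos = g_neg = 0.0
    for k, (d_odom, d_scan) in enumerate(zip(odom_increments, scan_increments)):
        z = (d_odom - d_scan) / sigma              # normalized residual
        g_pos = max(0.0, g_pos + z - drift_allowance)
        g_neg = max(0.0, g_neg - z - drift_allowance)
        if g_pos > threshold or g_neg > threshold:
            return k
    return None

# Agreeing sources until step 60, then wheel slip makes odometry report
# motion that the scan matcher does not confirm.
rng = np.random.default_rng(1)
scan = rng.normal(0.05, 0.02, size=100)            # per-step displacement (m)
odom = scan + rng.normal(0.0, 0.02, size=100)
odom[60:] += 0.08                                   # simulated slip
print("fault detected at step:", cusum_fault_detection(odom, scan))
```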

  • 120.
    Sundvall, Paul
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Wahlberg, Bo
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Fault detection using redundant navigation modules, 2007. In: Fault Detection, Supervision and Safety of Technical Processes 2006, Elsevier, 2007, Vol. 6, p. 522-527. Conference paper (Refereed)
    Abstract [en]

    Mobile robots and other moving vehicles need to know their position with a certain level of confidence. In this chapter, a method is proposed to handle faults in the navigation system by considering the outputs of existing navigation modules rather than processing sensor data directly. The proposed method needs only a simple model for drift and noise, an extended Kalman filter and a CUSUM test. The approach is demonstrated using two providers, odometry and scan matching. It can handle position information given in different coordinate systems and does not require any modification of existing navigation modules. Promising experimental results are shown.

  • 121.
    Tang, Jiexiong
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Geometric Correspondence Network for Camera Motion Estimation, 2018. In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no 2, p. 1010-1017. Article in journal (Refereed)
    Abstract [en]

    In this paper, we propose a new learning scheme for generating geometric correspondences to be used for visual odometry. A convolutional neural network (CNN) and a recurrent neural network (RNN) are trained together to detect the location of keypoints as well as to generate corresponding descriptors in one unified structure. The network is optimized by warping points from the source frame to the reference frame with a rigid body transform; essentially, it learns from warping. The overall training is focused on movements of the camera rather than movements within the image, which leads to better consistency in the matching and ultimately better motion estimation. Experimental results show that the proposed method achieves better results than both related deep learning and hand-crafted methods. Furthermore, as a demonstration of the promise of our method, we use a naive SLAM implementation based on these keypoints and get a performance on par with ORB-SLAM.
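The geometric supervision signal ("learning from warping") can be illustrated by warping a single pixel with known depth from the source frame into the reference frame using a rigid body transform; during training, this warped location is where the corresponding keypoint and descriptor are expected. The intrinsics, pose and example pixel below are assumed values, not the paper's data.

```python
# Hedged sketch: back-project a pixel with known depth, apply a rigid body
# transform, and re-project it into the reference frame.
import numpy as np

K = np.array([[525.0, 0.0, 320.0],      # assumed pinhole intrinsics
              [0.0, 525.0, 240.0],
              [0.0,   0.0,   1.0]])

def warp_point(u, v, depth, R, t):
    """Project (u, v, depth) from the source camera into the reference camera."""
    p_src = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-project
    p_ref = R @ p_src + t                                       # rigid transform
    uvw = K @ p_ref                                             # re-project
    return uvw[:2] / uvw[2]

# Small assumed camera motion: 5 degree yaw and a 10 cm sideways translation.
yaw = np.deg2rad(5.0)
R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
              [0.0,         1.0, 0.0        ],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
t = np.array([0.10, 0.0, 0.0])

print("pixel (400, 250) at 2 m warps to:", warp_point(400, 250, 2.0, R, t))
```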

  • 122.
    Tang, Jiexiong
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction, 2019. In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no 2, p. 530-537. Article in journal (Refereed)
    Abstract [en]

    In this letter, we propose a new deep learning based dense monocular simultaneous localization and mapping (SLAM) method. Compared to existing methods, the proposed framework constructs a dense three-dimensional (3-D) model via a sparse-to-dense mapping using learned surface normals. With single-view learned depth estimation as a prior for monocular visual odometry, we obtain both accurate positioning and high-quality depth reconstruction. The depth and normals are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction in comparison to the state of the art in deep monocular dense SLAM.

  • 123.
    Thippur, Akshaya
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Agrawal, G.
    Del Burgo, Adria Gallart
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ramesh, J. H.
    Jha, M. K.
    Akhil, M. B. S. S.
    Shetty, N. B.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    KTH-3D-TOTAL: A 3D dataset for discovering spatial structures for long-term autonomous learning, 2014. In: 2014 13th International Conference on Control Automation Robotics and Vision, ICARCV 2014, IEEE, 2014, p. 1528-1535. Conference paper (Refereed)
    Abstract [en]

    Long-term autonomous learning of human environments entails modelling and generalizing over distinct variations in: object instances in different scenes, and different scenes with respect to space and time. It is crucial for the robot to recognize the structure and context in spatial arrangements and exploit these to learn models which capture the essence of these distinct variations. Table-tops possess a typical structure repeatedly seen in human environments and are characterized by being personal spaces of diverse functionalities that change dynamically due to human interactions. In this paper, we present a 3D dataset of 20 office table-tops manually observed and scanned 3 times a day, as regularly as possible, over 19 days (461 scenes) and subsequently manually annotated with 18 different object classes, including multiple instances. We analyse the dataset to discover spatial structures and patterns in their variations. The dataset can, for example, be used to study the spatial relations between objects and long-term environment models for applications such as activity recognition, context and functionality estimation and anomaly detection.

  • 124.
    Thippur, Akshaya
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Burbridge, C.
    Kunze, L.
    Alberti, Marina
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hawes, N.
    A comparison of qualitative and metric spatial relation models for scene understanding, 2015. In: Proceedings of the National Conference on Artificial Intelligence, AI Access Foundation, 2015, Vol. 2, p. 1632-1640. Conference paper (Refereed)
    Abstract [en]

    Object recognition systems can be unreliable when run in isolation, depending only on image based features, but their performance can be improved when taking scene context into account. In this paper, we present techniques to model and infer object labels in real scenes based on a variety of spatial relations - geometric features which capture how objects co-occur - and compare their efficacy in the context of augmenting perception based object classification in real-world table-top scenes. We utilise a long-term dataset of office table-tops for qualitatively comparing the performances of these techniques. On this dataset, we show that more intricate techniques have a superior performance but do not generalise well on small training data. We also show that techniques using coarser information perform crudely but sufficiently well in standalone scenarios and generalise well on small training data. We conclude the paper by expanding on the insights we have gained through these comparisons and commenting on a few fundamental topics with respect to long-term autonomous robots.

  • 125.
    Thippur, Akshaya
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Non-Parametric Spatial Context Structure Learning for Autonomous Understanding of Human Environments, 2017. In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) / [ed] Howard, A; Suzuki, K; Zollo, L, IEEE, 2017, p. 1317-1324. Conference paper (Refereed)
    Abstract [en]

    Autonomous scene understanding by object classification today crucially depends on the accuracy of appearance based robotic perception. However, this is prone to difficulties in object detection arising from unfavourable lighting conditions and vision-unfriendly object properties. In our work, we propose a spatial context based system which infers object classes utilising solely structural information captured from the scenes to aid traditional perception systems. Our system operates on novel spatial features (IFRC) that are robust to noisy object detections; it also caters to on-the-fly modification of learned knowledge, improving performance with practice. IFRC features are aligned with human expression of 3D space, thereby facilitating easy HRI and hence simpler supervised learning. We tested our spatial context based system and conclude that it can capture spatio-structural information to perform joint object classification, not only acting as a vision aid but sometimes even performing on par with appearance based robotic vision.

  • 126.
    Topp, Elin Anna
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Kragic, Danica
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Jensfelt, Patric
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    An interactive interface for service robots, 2004. In: 2004 IEEE International Conference on Robotics and Automation, Vols 1-5, Proceedings, 2004, p. 3469-3474. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present an initial design of an interactive interface for a service robot based on multi sensor fusion. We show how the integration of speech, vision and laser range data can be performed using a high level of abstraction. Guided by a number of scenarios commonly used in a service robot framework, the experimental evaluation will show the benefit of sensory integration which allows the design of a robust and natural interaction system using a set of simple perceptual algorithms.

  • 127.
    Ullah, Muhammad Muneeb
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Caputo, B
    Luo, J
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Towards Robust Place Recognition for Robot Localization, 2008. In: 2008 IEEE International Conference on Robotics and Automation, Vols 1-9 / [ed] IEEE, 2008, p. 530-537. Conference paper (Refereed)
    Abstract [en]

    Localization and context interpretation are two key competences for mobile robot systems. Visual place recognition, as opposed to purely geometrical models, holds promise of higher flexibility and association of semantics to the model. Ideally, a place recognition algorithm should be robust to dynamic changes and it should perform consistently when recognizing a room (for instance a corridor) in different geographical locations. Also, it should be able to categorize places, a crucial capability for transfer of knowledge and continuous learning. In order to test the suitability of visual recognition algorithms for these tasks, this paper presents a new database, acquired in three different labs across Europe. It contains image sequences of several rooms under dynamic changes, acquired at the same time with a perspective and omnidirectional camera, mounted on a socket. We assess this new database with an appearance based algorithm that combines local features with support vector machines through an ad-hoc kernel. Results show the effectiveness of the approach and the value of the database.

  • 128.
    Wang, Zhan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Chemical Science and Engineering (CHE).
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Modeling motion patterns of dynamic objects by IOHMM, 2014. In: Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, Chicago, IL: IEEE conference proceedings, 2014, p. 1832-1838. Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel approach to modeling motion patterns of dynamic objects, such as people and vehicles, in the environment with the occupancy grid map representation. Corresponding to the ever-changing nature of the motion patterns of dynamic objects, we model each occupancy grid cell by an IOHMM, which is an inhomogeneous variant of the HMM. This distinguishes our work from existing methods which use the conventional HMM, assuming that motion evolves according to a stationary process. By introducing observations of neighbor cells in the previous time step as input of the IOHMM, the transition probabilities in our model are dependent on the occurrence of events in the cell's neighborhood. This enables our method to model the spatial correlation of dynamics across cells. A sequence processing example is used to illustrate the advantage of our model over conventional HMM based methods. Results from the experiments in an office corridor environment demonstrate that our method is capable of capturing the dynamics of such human living environments.
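The per-cell IOHMM idea can be sketched as follows: instead of a single transition matrix per cell, there is one transition matrix per discrete input value, here taken to be which neighboring cell was occupied at the previous time step. The state set, input encoding and probabilities are illustrative assumptions, not values learned in the paper.

```python
# Hedged sketch: input-dependent (IOHMM-style) transition model for one grid cell.
import numpy as np

STATES = 2            # 0 = free, 1 = occupied (by a dynamic object)
N_INPUTS = 4          # e.g. which neighbor (N/E/S/W) was occupied last step

# One transition matrix per input value:
# T[u][i, j] = P(state_t = j | state_{t-1} = i, input = u)
T = np.full((N_INPUTS, STATES, STATES), 0.5)
T[0] = [[0.6, 0.4],   # northern neighbor occupied: motion tends to enter from north
        [0.3, 0.7]]
T[2] = [[0.9, 0.1],   # southern neighbor occupied: little chance the object enters here
        [0.6, 0.4]]

def predict(belief, input_u):
    """One IOHMM prediction step for a single cell: belief over {free, occupied}."""
    return belief @ T[input_u]

belief = np.array([0.9, 0.1])                 # currently probably free
print("after north neighbor occupied:", predict(belief, 0))
print("after south neighbor occupied:", predict(belief, 2))
```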

  • 129.
    Wang, Zhan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Building a human behavior map from local observations, 2016. In: Robot and Human Interactive Communication (RO-MAN), 2016 25th IEEE International Symposium on, IEEE, 2016, p. 64-70, article id 7745092. Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel method for classifying regions from human movements in service robots' working environments. The entire space is segmented by class type according to the functionality or affordance of each place, each of which accommodates a typical human behavior. This is achieved based on a grid map in two steps. First, a probabilistic model is developed to capture human movements for each grid cell by using a non-ergodic HMM. Then the learned transition probabilities corresponding to these movements are used to cluster all cells using the K-means algorithm. The knowledge of typical human movements for each location, represented by the prototypes from K-means and summarized in a ‘behavior-based map’, enables a robot to adjust its strategy for interacting with people according to where they are located, and thus greatly enhances its capability to assist people. The performance of the proposed classification method is demonstrated by experimental results from 8 hours of data collected in a kitchen environment.
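The clustering step can be sketched in a few lines: each cell is described by a vector of learned movement transition probabilities, and K-means groups cells with similar movement statistics into the behavior-based map. The synthetic feature vectors and the choice of two clusters below are illustrative assumptions, not data from the paper.

```python
# Hedged sketch: cluster per-cell human-movement statistics into a behavior-based map.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Per-cell feature: e.g. probabilities of standing still or leaving towards
# N/E/S/W (5 numbers per cell), here drawn synthetically for two region types.
corridor_like = rng.dirichlet([1, 5, 1, 5, 1], size=60)   # mostly pass-through motion
workplace_like = rng.dirichlet([8, 1, 1, 1, 1], size=40)  # mostly standing still
features = np.vstack([corridor_like, workplace_like])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_            # one region label per cell: the behavior-based map

# Each cluster prototype summarizes a typical human behavior for that region type.
print("cluster prototypes (still, N, E, S, W):")
print(np.round(kmeans.cluster_centers_, 2))
```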

  • 130.
    Wang, Zhan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC).
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Modeling Spatial-Temporal Dynamics of Human Movements for Predicting Future Trajectories, 2015. Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel approach to modeling the dynamics of human movements with a grid-based representation. For each grid cell, we formulate the local dynamics using a variant of the left-to-right HMM, and thus explicitly model the exiting direction from the current cell. The dependency of this process on the entry direction is captured by employing the Input-Output HMM (IOHMM). On a higher level, we introduce the place where the whole trajectory originated into the IOHMM framework, forming a hierarchical input structure. Therefore, we manage to capture both local spatial-temporal correlations and the long-term dependency on faraway initiating events, thus enabling the developed model to incorporate more information and to generate more informative predictions of future trajectories. The experimental results in an office corridor environment verify the capabilities of our method.

  • 131.
    Wang, Zhan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Multi-scale conditional transition map: Modeling spatial-temporal dynamics of human movements with local and long-term correlations, 2015. In: Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, IEEE conference proceedings, 2015, p. 6244-6251. Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel approach to modeling the dynamics of human movements with a grid-based representation. The model we propose, termed the Multi-scale Conditional Transition Map (MCTMap), is an inhomogeneous HMM process that describes transitions of the human location state in space and time. Unlike existing work, our method is able to capture both local correlations and long-term dependencies on faraway initiating events. This enables the learned model to incorporate more information and to generate an informative representation of human existence probabilities across the grid map and along the temporal axis, supporting intelligent robot interaction such as avoiding or meeting the human. Our model consists of two levels. For each grid cell, we formulate the local dynamics using a variant of the left-to-right HMM, and thus explicitly model the exiting direction from the current cell. The dependency of this process on the entry direction is captured by employing the Input-Output HMM (IOHMM). On the higher level, we introduce the place where the whole trajectory originated into the IOHMM framework, forming a hierarchical input structure that captures long-term dependencies. The capabilities of our method are verified by experimental results from 10 hours of data collected in an office corridor environment.

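    A small sketch of the prediction aspect mentioned in the abstract above: once conditional transition probabilities are learned, a distribution over (cell, entry-direction) states can be rolled forward to give human existence probabilities across the grid for the next few time steps. The transition-table format mirrors the previous sketch, omits the origin input for brevity, and is an assumption rather than the paper's exact MCTMap representation.

```python
# Sketch: roll a (cell, entry-direction) distribution forward through a
# hand-specified conditional transition table to obtain per-step occupancy maps.
from collections import defaultdict

def propagate(state_dist, transition_probs, steps=3):
    """state_dist: {(cell, entry_dir): prob}.
    transition_probs: {(cell, entry_dir): {exit_dir: prob}}.
    Returns a list of per-step occupancy maps {cell: prob}."""
    occupancy_per_step = []
    for _ in range(steps):
        next_dist = defaultdict(float)
        for (cell, entry), p in state_dist.items():
            for exit_dir, q in transition_probs.get((cell, entry), {}).items():
                nxt_cell = (cell[0] + exit_dir[0], cell[1] + exit_dir[1])
                # After moving, the exit direction becomes the next entry direction.
                next_dist[(nxt_cell, exit_dir)] += p * q
        state_dist = dict(next_dist)
        occupancy = defaultdict(float)
        for (cell, _), p in state_dist.items():
            occupancy[cell] += p
        occupancy_per_step.append(dict(occupancy))
    return occupancy_per_step

# Toy usage: a person at (5, 5) who entered moving right, with a transition
# table that sends them upward with probability 0.7.
T = {((5, 5), (1, 0)): {(0, 1): 0.7, (1, 0): 0.3},
     ((5, 6), (0, 1)): {(0, 1): 1.0},
     ((6, 5), (1, 0)): {(1, 0): 1.0}}
for t, occ in enumerate(propagate({((5, 5), (1, 0)): 1.0}, T, steps=2), 1):
    print(f"t+{t}:", occ)
```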
  • 132. Wijk, O
    et al.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, H
    Triangulation based fusion of ultrasonic sensor data1998Conference paper (Refereed)
    Abstract [en]

    Ultrasonic sensors are still one of the most widely used sensors in mobile robotics. A notorious problem with sonar data is its poor spatial resolution, which typically results in high uncertainty in the resulting map of the environment. In this paper, a triangulation technique is used to filter the data so as to obtain an improved grid map of the environment. The basic technique is described, and it is outlined how it can be used for the identification of natural landmarks.

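    A generic illustration of the triangulation idea behind the entry above: a single sonar reading only constrains the reflecting object to an arc at the measured range, but two readings taken from different poses can be intersected to pin down a point landmark. The geometry below (circle-circle intersection plus a beam-width check) and the beam-width value are assumptions for illustration, not the paper's exact filter.

```python
# Sketch: intersect the range circles of two sonar readings and keep only
# candidates that fall inside both beam cones.
import math

BEAM_HALF_ANGLE = math.radians(12.5)   # assumed sonar half beam width

def circle_intersections(p0, r0, p1, r1):
    """Return the 0-2 intersection points of two circles (center, radius)."""
    (x0, y0), (x1, y1) = p0, p1
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = math.sqrt(max(r0**2 - a**2, 0.0))
    mx, my = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    return [(mx + h * (y1 - y0) / d, my - h * (x1 - x0) / d),
            (mx - h * (y1 - y0) / d, my + h * (x1 - x0) / d)]

def in_beam(sensor_pos, heading, point, half_angle=BEAM_HALF_ANGLE):
    """Check that a candidate point lies inside the sonar's beam cone."""
    angle = math.atan2(point[1] - sensor_pos[1], point[0] - sensor_pos[0])
    diff = (angle - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle

def triangulate(reading_a, reading_b):
    """Each reading is (sensor_position, sensor_heading, measured_range)."""
    (pa, ha, ra), (pb, hb, rb) = reading_a, reading_b
    candidates = circle_intersections(pa, ra, pb, rb)
    return [c for c in candidates if in_beam(pa, ha, c) and in_beam(pb, hb, c)]

# Toy usage: the same corner seen from two robot poses.
corner = triangulate(((0.0, 0.0), math.radians(45), math.hypot(1, 1)),
                     ((1.0, 0.0), math.radians(90), 1.0))
print(corner)   # ~[(1.0, 1.0)]
```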
  • 133. Wyatt, Jeremy L.
    et al.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Brenner, Michael
    Hanheide, Marc
    Hawes, Nick
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kristan, Matej
    Kruijff, Geert-Jan M.
    Lison, Pierre
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vrecko, Alen
    Zender, Hendrik
    Zillich, Michael
    Skocaj, Danijel
    Self-Understanding and Self-Extension: A Systems and Representational Approach2010In: IEEE T AUTON MENT DE, ISSN 1943-0604, Vol. 2, no 4, p. 282-303Article in journal (Refereed)
    Abstract [en]

    There are many different approaches to building a system that can engage in autonomous mental development. In this paper, we present an approach based on what we term self-understanding, by which we mean the explicit representation of and reasoning about what a system does and does not know, and how that knowledge changes under action. We present an architecture and a set of representations used in two robot systems that exhibit a limited degree of autonomous mental development, which we term self-extension. The contributions include representations of gaps and uncertainty for specific kinds of knowledge, and a goal management and planning system for setting and achieving learning goals.

  • 134. Zender, H.
    et al.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kruijff, G.-J. M.
    Human- and Situation-Aware People Following2007In: 2007 RO-MAN: 16TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1-3, 2007, p. 1124-1129Conference paper (Refereed)
    Abstract [en]

    The paper presents an approach to intelligent, interactive people following for autonomous robots. The approach combines robust methods for simultaneous localization and mapping and for people tracking in order to yield a socially and environmentally sensitive people-following behavior. Unlike current purely reactive approaches ("nearest point following"), it enables the robot to follow a human in a socially acceptable way, providing verbal and non-verbal feedback to the user where necessary. At the same time, the robot makes use of information about the spatial and functional organization of its environment, so that it can anticipate likely actions performed by a human and adjust its motion accordingly. As a result, the robot's behaviors become less reactive and more intuitive when following people around an indoor environment. The approach has been fully implemented and tested.

  • 135. Zender, H.
    et al.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Mozos, O. M.
    Kruijff, G.-J. M.
    Burgard, W.
    An integrated robotic system for spatial understanding and situated interaction in indoor environments2007In: AAAI-07/IAAI-07 Proceedings: 22nd AAAI Conference on Artificial Intelligence and the 19th Innovative Applications of Artificial Intelligence Conference, 2007, p. 1584-1589Conference paper (Refereed)
    Abstract [en]

    A major challenge in robotics and artificial intelligence lies in creating robots that can cooperate with people in human-populated environments, e.g. for domestic assistance or elderly care. Such robots need skills that allow them to interact with the world and with the humans living and working therein. In this paper we investigate the question of spatial understanding of human-made environments. The functionalities of our system comprise perception of the world, natural language, learning, and reasoning. For this purpose we integrate state-of-the-art components from different disciplines in AI, robotics and cognitive systems into a mobile robot system. The work focuses on the principles we used for the integration, including cross-modal integration, ontology-based mediation, and multiple levels of abstraction of perception. Finally, we present experiments with the integrated "CoSy Explorer" system and list some of the major lessons learned from its design, implementation, and evaluation.

  • 136. Zender, H.
    et al.
    Mozos, O. Martinez
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kruijff, G. J. M.
    Burgard, W.
    Conceptual spatial representations for indoor mobile robots2008In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 6, p. 493-502Article in journal (Refereed)
    Abstract [en]

    We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following different findings in spatial cognition, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
