Results 51-100 of 136
  • 51.
    Gálvez López, Dorian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Paul, Chandana
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hybrid Laser and Vision Based Object Search and Localization, 2008. In: 2008 IEEE International Conference on Robotics and Automation: Vols 1-9, 2008, p. 2636-2643. Conference paper (Refereed)
    Abstract [en]

    We describe a method for an autonomous robot to efficiently locate one or more distinct objects in a realistic environment using monocular vision. We demonstrate how to efficiently subdivide acquired images into interest regions for the robot to zoom in on, using receptive field cooccurrence histograms. Objects are recognized through SIFT feature matching and the positions of the objects are estimated. Assuming a 2D map of the robot's surroundings and a set of navigation nodes between which it is free to move, we show how to compute an efficient sensing plan that allows the robot's camera to cover the environment, while obeying restrictions on the different objects' maximum and minimum viewing distances. The approach has been implemented on a real robotic system and results are presented showing its practicability and the quality of the position estimates obtained.
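
    The sensing-plan computation described above can be pictured as a greedy set-cover loop: repeatedly pick the navigation node whose view covers the most still-unseen objects, respecting each object's viewing-distance limits. A minimal sketch with invented names and data, not the paper's planner:

    ```python
    # Greedy set-cover sketch of a sensing plan (illustrative only): choose the
    # navigation node that can view the most uncovered objects, repeat until done.
    import math

    def visible(node, obj):
        # An object counts as viewable if its distance is within [d_min, d_max].
        d = math.dist(node, obj["pos"])
        return obj["d_min"] <= d <= obj["d_max"]

    def greedy_sensing_plan(nodes, objects):
        uncovered = set(range(len(objects)))
        plan = []
        while uncovered:
            best = max(nodes, key=lambda n: sum(visible(n, objects[i]) for i in uncovered))
            covered = {i for i in uncovered if visible(best, objects[i])}
            if not covered:        # remaining objects cannot be seen from any node
                break
            plan.append(best)
            uncovered -= covered
        return plan

    nodes = [(0.0, 0.0), (4.0, 0.0), (8.0, 3.0)]
    objects = [{"pos": (1.0, 1.0), "d_min": 0.5, "d_max": 3.0},
               {"pos": (7.5, 2.5), "d_min": 0.5, "d_max": 3.0}]
    print(greedy_sensing_plan(nodes, objects))   # -> [(0.0, 0.0), (8.0, 3.0)]
    ```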

  • 52. Göbelbecker, M.
    et al.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A planning approach to active visual search in large environments, 2011. In: AAAI Workshop Tech. Rep., 2011, p. 8-13. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a principled, planner-based approach to the active visual object search problem in unknown environments. We make use of a hierarchical planner that combines the strengths of decision theory and heuristics. Furthermore, our object search approach leverages conceptual spatial knowledge in the form of object co-occurrences and semantic place categorisation. A hierarchical model for representing object locations is presented with which the planner is able to perform indirect search. Finally, we present real-world experiments to show the feasibility of the approach.
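
    The object co-occurrences mentioned above suggest a simple picture of indirect search: rank rooms by how strongly the objects already seen there co-occur with the target. A toy sketch with made-up probabilities, not the paper's hierarchical planner:

    ```python
    # Illustrative ranking of rooms for indirect object search; all numbers invented.
    cooccurrence = {("mug", "coffee_machine"): 0.8,
                    ("mug", "desk"): 0.4,
                    ("mug", "sofa"): 0.1}

    def room_score(target, observed_objects):
        # Probability that at least one observed object "anchors" the target.
        p_none = 1.0
        for obj in observed_objects:
            p_none *= 1.0 - cooccurrence.get((target, obj), 0.05)
        return 1.0 - p_none

    rooms = {"kitchen": ["coffee_machine"], "office": ["desk"], "lounge": ["sofa"]}
    ranking = sorted(rooms, key=lambda r: room_score("mug", rooms[r]), reverse=True)
    print(ranking)   # -> ['kitchen', 'office', 'lounge']: search the kitchen first
    ```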

  • 53.
    Göransson, Rasmus
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, A.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kinect@home: A crowdsourced RGB-D dataset, 2016. In: 13th International Conference on Intelligent Autonomous Systems, IAS 2014, Springer, 2016, Vol. 302, p. 843-858. Conference paper (Refereed)
    Abstract [en]

    Algorithms for 3D localization, mapping, and reconstruction are getting increasingly mature. It is time to also make the datasets on which they are tested more realistic to reflect the conditions in the homes of real people. Today algorithms are tested on data gathered in the lab or at best in a few places, and almost always by the people that designed the algorithm. In this paper, we present the first RGB-D dataset from the crowdsourced data collection project Kinect@Home and perform an initial analysis of it. The dataset contains 54 recordings with a total of approximately 45 min of RGB-D video. We present a comparison of two different pose estimation methods, the Kinfu algorithm and a keypoint-based method, to show how this dataset can be used even though it is lacking ground truth. In addition, the analysis highlights the different characteristics and error modes of the two methods and shows how challenging data from the real world is.

  • 54.
    Hanheide, Marc
    et al.
    University of Lincoln.
    Göbelbecker, Moritz
    University of Freiburg.
    Horn, Graham S.
    University of Birmingham.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gretton, Charles
    University of Birmingham.
    Dearden, Richard
    University of Birmingham.
    Janicek, Miroslav
    DFKI, Saarbrücken.
    Zender, Hendrik
    DFKI, Saarbrücken.
    Kruijff, Geert-Jan
    DFKI, Saarbrücken.
    Hawes, Nick
    University of Birmingham.
    Wyatt, Jeremy
    University of Birmingham.
    Robot task planning and explanation in open and uncertain worlds, 2015. In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921. Article in journal (Refereed)
    Abstract [en]

    A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.
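
    A minimal sketch of the three knowledge layers described in this abstract, under our own simplifying assumption that a query just falls through the layers, with diagnoses overriding observations and commonsense defaults filling the gaps:

    ```python
    # Toy three-layer knowledge base (our reading of the abstract, not the
    # authors' code): diagnostic explanations override observations, and
    # commonsense defaults fill in whatever has not been observed.
    class LayeredKB:
        def __init__(self):
            self.instance = {}      # bottom: what the robot has actually observed
            self.commonsense = {}   # middle: default assumptions about the world
            self.diagnostic = {}    # top: failure explanations that revise beliefs

        def ask(self, key):
            for layer in (self.diagnostic, self.instance, self.commonsense):
                if key in layer:
                    return layer[key]
            return None             # open world: no belief either way

    kb = LayeredKB()
    kb.commonsense["door3.open"] = True     # doors in this office are usually open
    print(kb.ask("door3.open"))             # True, by default
    kb.diagnostic["door3.open"] = False     # plan failed: best explanation is "locked"
    print(kb.ask("door3.open"))             # False, the diagnosis wins
    ```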

  • 55.
    Hanheide, Marc
    et al.
    University of Birmingham.
    Hawes, Nick
    University of Birmingham.
    Wyatt, Jeremy
    University of Birmingham.
    Göbelbecker, Moritz
    Albert-Ludwigs-Universität.
    Brenner, Michael
    Albert-Ludwigs-Universität, Freiburg.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Zender, Hendrik
    DFKI Saarbrücken.
    Kruijff, Geert-Jan
    DFKI Saarbrücken.
    A Framework for Goal Generation and Management, 2010. In: Proceedings of the AAAI Workshop on Goal-Directed Autonomy, 2010. Conference paper (Refereed)
    Abstract [en]

    Goal-directed behaviour is often viewed as an essential characteristic of an intelligent system, but mechanisms to generate and manage goals are often overlooked. This paper addresses this by presenting a framework for autonomous goal generation and selection. The framework has been implemented as part of an intelligent mobile robot capable of exploring unknown space and determining the category of rooms autonomously. We demonstrate the efficacy of our approach by comparing the performance of two versions of our integrated system: one with the framework, the other without. This investigation leads us to conclude that such a framework is desirable for an integrated intelligent system because it reduces the complexity of the problems that must be solved by other behaviour-generation mechanisms, it makes goal-directed behaviour more robust in the face of dynamic and unpredictable environments, and it provides an entry point for domain-specific knowledge in a more general system.
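
    The generate-then-manage pattern described above reduces to a small amount of machinery: generator components propose goals with priorities and a manager surfaces the most urgent one. A hypothetical sketch, not the paper's framework:

    ```python
    # Minimal goal generation/management loop with an invented API.
    import heapq

    class GoalManager:
        def __init__(self):
            self._queue = []                     # max-heap via negated priority

        def propose(self, priority, goal):
            heapq.heappush(self._queue, (-priority, goal))

        def next_goal(self):
            return heapq.heappop(self._queue)[1] if self._queue else None

    def explore_generator(manager, unexplored_nodes):
        for node in unexplored_nodes:            # one goal per frontier node
            manager.propose(priority=1.0, goal=f"explore {node}")

    manager = GoalManager()
    explore_generator(manager, ["nodeA", "nodeB"])
    manager.propose(priority=2.0, goal="categorize room1")   # more urgent
    print(manager.next_goal())                   # -> "categorize room1"
    ```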

  • 56. Hawes, N
    et al.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Hanheide, Marc
    et al.,
    The STRANDS Project: Long-Term Autonomy in Everyday Environments, 2017. In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 24, no. 3, p. 146-156. Article in journal (Refereed)
  • 57. Hawes, N.
    et al.
    Hanheide, M.
    Hargreaves, J.
    Page, B.
    Zender, H.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Home alone: Autonomous extension and correction of spatial representations, 2011. Conference paper (Refereed)
    Abstract [en]

    In this paper we present an account of the problems faced by a mobile robot given an incomplete tour of an unknown environment, and introduce a collection of techniques which can generate successful behaviour even in the presence of such problems. Underlying our approach is the principle that an autonomous system must be motivated to act to gather new knowledge, and to validate and correct existing knowledge. This principle is embodied in Dora, a mobile robot which features the aforementioned techniques: shared representations, non-monotonic reasoning, and goal generation and management. To demonstrate how well this collection of techniques works in real-world situations we present a comprehensive analysis of the Dora system's performance over multiple tours in an indoor environment. In this analysis Dora successfully completed 18 of 21 attempted runs, with all but 3 of these successes requiring one or more of the integrated techniques to recover from problems.

  • 58.
    Hawes, Nick
    et al.
    University of Birmingham.
    Hanheide, Marc
    University of Birmingham.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Göbelbecker, Moritz
    Albert-Ludwigs-Universität.
    Brenner, Michael
    Albert-Ludwigs-Universität, Freiburg.
    Zender, Hendrik
    Lison, Pierre
    DFKI Saarbrücken.
    Kruijff-Korbayova, Ivana
    DFKI Saarbrücken.
    Kruijff, Geert-Jan
    DFKI Saarbrücken.
    Zillich, Michael
    Vienna University of Technology.
    Dora The Explorer: A Motivated Robot, 2009. In: Proc. of 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2010) / [ed] van der Hoek, Kaminka, Lespérance, Luck, Sen, 2009, p. 1617-1618. Conference paper (Refereed)
    Abstract [en]

    Dora the Explorer is a mobile robot with a sense of curiosity and a drive to explore its world. Given an incomplete tour of an indoor environment, Dora is driven by internal motivations to probe the gaps in her spatial knowledge. She actively explores regions of space which she hasn't previously visited but which she expects will lead her to further unexplored space. She will also attempt to determine the categories of rooms through active visual search for functionally important objects, and through ontology-driven inference on the results of this search.

  • 59.
    Hawes, Nick
    et al.
    University of Birmingham.
    Zender, Hendrik
    DFKI Saarbrücken.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Brenner, Michael
    Albert-Ludwigs-Universität, Freiburg.
    Kruijff, Geert-Jan
    DFKI Saarbrücken.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Planning and Acting with an Integrated Sense of Space, 2009. In: Proceedings of the 1st International Workshop on Hybrid Control of Autonomous Systems: Integrating Learning, Deliberation and Reactive Control (HYCAS), 2009. Conference paper (Refereed)
    Abstract [en]

    The paper describes PECAS, an architecture for intelligent systems, and its application in the Explorer, an interactive mobile robot. PECAS is a new architectural combination of information fusion and continual planning. PECAS plans, integrates and monitors the asynchronous flow of information between multiple concurrent systems. Information fusion provides a suitable intermediary to robustly couple the various reactive and deliberative forms of processing used concurrently in the Explorer. The Explorer instantiates PECAS around a hybrid spatial model combining SLAM, visual search, and conceptual inference. This paper describes the elements of this model, and demonstrates on an implemented scenario how PECAS provides means for flexible control.

  • 60.
    Jensfelt, Patric
    KTH, Superseded Departments, Signals, Sensors and Systems.
    Approaches to Mobile Robot Localization in Indoor Environments, 2001. Doctoral thesis, monograph (Other scientific)
  • 61.
    Jensfelt, Patric
    KTH, Superseded Departments, Signals, Sensors and Systems.
    Localization using laser scanning and minimalistic environmental models, 1999. Licentiate thesis, monograph (Other scientific)
  • 62.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Austin, D
    Christensen, H.I
    Toward task oriented localization, 2000. Conference paper (Refereed)
    Abstract [en]

    In the course of building a fully autonomous robot platform it is important to look at the computational resources spent by the individual modules. Each of them cannot be greedy, or the overall demand for computational power will be beyond what can be handled on-board. Maintaining an estimate of the pose of a mobile robot is a typical example where we might not always need to run the algorithm at the highest possible rate. This paper deals with the problem of determining how much effort is needed in order to accomplish the localization part of a task. The approach we have taken to the problem is to optimize a cost function that accounts for the cost of sensing and the growth of the uncertainty.
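
    The trade-off behind this cost function can be caricatured in a few lines: sensing cost grows with the localization update rate while the uncertainty penalty shrinks with it, and the task picks the rate minimizing the sum. All constants below are invented:

    ```python
    # Toy model of task-oriented localization effort; not the paper's cost function.
    def total_cost(rate_hz, c_sense=1.0, c_unc=50.0, growth=2.0):
        sensing = c_sense * rate_hz               # CPU spent on localization updates
        uncertainty = c_unc * growth / rate_hz    # uncertainty grows between updates
        return sensing + uncertainty

    rates = [0.5, 1, 2, 5, 10, 20]
    best = min(rates, key=total_cost)
    print(best, total_cost(best))   # -> 10 20.0 with these made-up constants
    ```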

  • 63.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Austin, D
    Wijk, O
    Andersson, M
    Feature based condensation for mobile robot localization, 2000. Conference paper (Refereed)
    Abstract [en]

    Much attention has been given to CONDENSATION methods for mobile robot localization. This has resulted in somewhat of a breakthrough in representing uncertainty for mobile robots. In this paper we use CONDENSATION with planned sampling as a tool for doing feature based global localization in a large and semi-structured environment. This paper presents a comparison of four different feature types: sonar based triangulation points and point pairs, as well as lines and doors extracted using a laser scanner. We show experimental results that highlight the information content of the different features, and point to fruitful combinations. Accuracy, computation time and the ability to narrow down the search space are among the measures used to compare the features. From the comparison of the features, some general guidelines are drawn for determining good feature types.
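
    The planned-sampling strategy compared above admits a compact illustration: when a door-like feature is detected, part of the sample set is placed at poses that would explain the detection, one hypothesis per known door. The map and noise levels below are invented:

    ```python
    # Sketch of "planned sampling" for a particle filter; illustrative only.
    import random

    known_doors = [(2.0, 0.0), (6.0, 0.0), (10.0, 0.0)]   # doors along a corridor wall

    def planned_samples(n, sensed_door_range):
        samples = []
        for _ in range(n):
            dx, dy = random.choice(known_doors)           # hypothesis: it was this door
            heading = random.uniform(-3.14, 3.14)
            # Stand roughly `sensed_door_range` metres from the hypothesized door.
            samples.append((dx, dy + sensed_door_range + random.gauss(0, 0.1), heading))
        return samples

    particles = planned_samples(20, sensed_door_range=1.5)
    print(particles[0])
    ```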

  • 64.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, H
    Laser based pose tracking, 1999. Conference paper (Refereed)
    Abstract [en]

    The trend in localization is towards using more and more detailed models of the world. Our aim is to deal with the question of how simple a model can be used to provide and maintain pose information in an indoor setting. In this paper a Kalman filter based method for continuous position updating using a laser scanner is presented. By updating the position at a high frequency the matching problem becomes tractable and outliers can effectively be filtered out by means of validation gates. The experimental results presented show that the method performs very well in an indoor environment.
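
    The validation gate mentioned here is typically a chi-square test on the Mahalanobis distance of the measurement innovation; a minimal sketch assuming a 2-D innovation and a 95% gate:

    ```python
    # Accept a measurement only if its Mahalanobis distance to the prediction
    # falls inside a chi-square gate. Numbers are illustrative.
    import numpy as np

    def in_validation_gate(innovation, S, gate=5.99):   # 5.99 ~ 95% for 2 DOF
        d2 = innovation @ np.linalg.inv(S) @ innovation
        return float(d2) <= gate

    S = np.array([[0.04, 0.0], [0.0, 0.04]])            # innovation covariance
    print(in_validation_gate(np.array([0.1, -0.2]), S)) # inlier  -> True
    print(in_validation_gate(np.array([1.0,  1.0]), S)) # outlier -> False
    ```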

  • 65.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, H
    Laser based position acquisition and tracking in an indoor environment, 1998. Conference paper (Refereed)
  • 66.
    Jensfelt, Patric
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Christensen, H. I.
    Pose tracking using laser scanning and minimalistic environmental models, 2001. In: IEEE Transactions on Robotics and Automation, ISSN 1042-296X, Vol. 17, no. 2, p. 138-147. Article in journal (Refereed)
    Abstract [en]

    Keeping track of the position and orientation over time using sensor data, i.e., pose tracking, is a central component in many mobile robot systems. In this paper, we present a Kalman filter-based approach utilizing a minimalistic environmental model. By continuously updating the pose, matching the sensor data to the model is straightforward and outliers can be filtered out effectively by validation gates. The minimalistic model paves the way for a low-complexity algorithm with a high degree of robustness and accuracy. Robustness here refers both to being able to track the pose for a long time and to handling changes and clutter in the environment. This robustness is gained by the minimalistic model only capturing the stable and large scale features of the environment. The effectiveness of the pose tracker will be demonstrated through a number of experiments, including a run of 90 min, which clearly establishes the robustness of the method.

  • 67.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, Henrik Iskov
    GeorgiaTech.
    Mobile robot, 2005. Patent (Other (popular science, discussion, etc.))
    Abstract [en]

    A mobile robot (1) arranged to operate in an environment is described as well as a method for building a map (20). The mobile robot (1) is in an installation mode arranged to store representations of detected objects (19) in a storage means (7) based on detected movement in order to create a map (20). The mobile robot (1) is in a maintenance mode arranged to move in the environment using the map (20) created in the installation mode. The mobile robot (1) comprises editing means for editing, in the installation mode, the map (20) in the storage means (7) based on the map (20) output from the output means (13).

  • 68.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ekvall, Staffan
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Integrating SLAM and Object Detection for Service Robot Tasks, 2005. Conference paper (Other academic)
    Abstract [en]

    A mobile robot system operating in a domestic environment has to integrate components from a number of key research areas such as recognition, visual tracking, visual servoing, object grasping, robot localization, etc. There also has to be an underlying methodology to facilitate the integration. We have previously shown that through sequencing of basic skills, provided by the above mentioned competencies, the system has the ability to carry out flexible grasping for fetch and carry tasks in realistic environments. Through careful fusion of reactive and deliberative control and use of multiple sensory modalities a flexible system is achieved. However, our previous work has mostly concentrated on pick-and-place tasks leaving limited room for generalization. Currently, we are interested in more complex tasks such as collaborating and helping humans in their everyday tasks, opening doors and cupboards, building maps of the environment including objects that are automatically recognized by the system. In this paper, we will show some of the current results regarding the above. Most systems for simultaneous localization and mapping (SLAM) build maps that are only used for localizing the robot. Such maps are typically based on grids or different types of features such as point and lines. Here we augment the process with an object recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. The metric map is also split into topological entities corresponding to rooms. In this way the user can command the robot to retrieve a certain object from a certain room.

  • 69.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ekvall, Staffan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Aarno, Daniel
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Augmenting SLAM with object detection in a service robot framework, 2006. In: Proceedings, IEEE International Workshop on Robot and Human Interactive Communication, 2006, p. 741-746. Conference paper (Refereed)
    Abstract [en]

    In a service robot scenario, we are interested in a task of building maps of the environment that include automatically recognized objects. Most systems for simultaneous localization and mapping (SLAM) build maps that are only used for localizing the robot. Such maps are typically based on grids or different types of features such as point and lines. Here, we augment the process with an object recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. During task execution, the robot can use this information to reason about objects, places and their relationships. The metric map is also split into topological entities corresponding to rooms. In this way, the user can command the robot to retrieve an object from a particular room or get help from a robot when searching for a certain object.
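
    As a toy picture of such an object-augmented map (our own illustrative structure, not the authors' representation), a topological split into rooms with per-room object lists is enough to ground a command like "get the mug from room1":

    ```python
    # Illustrative object-augmented map: rooms hold recognized objects with poses.
    slam_map = {
        "room1": {"category": "kitchen", "objects": {"mug": (2.1, 0.4)}},
        "room2": {"category": "office",  "objects": {"book": (5.0, 3.2)}},
    }

    def locate(object_name, room=None):
        # Search one room if given, otherwise the whole map.
        rooms = [room] if room else slam_map
        for r in rooms:
            if object_name in slam_map[r]["objects"]:
                return r, slam_map[r]["objects"][object_name]
        return None

    print(locate("mug", room="room1"))   # -> ('room1', (2.1, 0.4))
    ```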

  • 70.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Exploiting distinguishable image features in robotic mapping and localization, 2006. In: European Robotics Symposium 2006 / [ed] Christensen, H.I., 2006, Vol. 22, p. 143-157. Conference paper (Refereed)
    Abstract [en]

    Simultaneous localization and mapping (SLAM) is an important research area in robotics. Lately, systems that use a single bearing-only sensor have received significant attention, and the use of visual sensors has been strongly advocated. In this paper, we present a framework for 3D bearing only SLAM using a single camera. We concentrate on image feature selection in order to achieve precise localization and thus good reconstruction in 3D. In addition, we demonstrate how these features can be managed to provide real-time performance and fast matching, to detect loop-closing situations. The proposed vision system has been combined with an extended Kalman Filter (EKF) based SLAM method. A number of experiments have been performed in indoor environments which demonstrate the validity and effectiveness of the approach. We also show how the SLAM generated map can be used for robot localization. The use of vision features which are distinguishable allows a straightforward solution to the "kidnapped-robot" scenario.

  • 71.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Förell, Erik
    Ljunggren, Per
    Field and service applications - Automating the marking process for exhibitions and fairs - The making of Harry Plotter, 2007. In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 14, no. 3, p. 35-42. Article in journal (Refereed)
    Abstract [en]

    Robot technology is constantly finding new applications. This article presented the design of a system for automating the process of marking the locations for stands in large scale exhibition spaces. It is a true service robot application, with a high level of autonomy. It is also an excellent example of what mobile robot localization can be used for. The robot system solves a real task, adding value for the customer, and has been in operation at the Stockholm International Fairs since August 2003. It has now become an integral part of the standard routines of marking. With its help, the time for a standard job has been cut from 8 h with two people to 4 h with one person and one robot. Using more than one robot further increases the gain in productivity.

  • 72.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Gullstrand, Gunnar
    Forell, Erik
    A mobile robot system for automatic floor marking, 2006. In: Journal of Field Robotics, ISSN 1556-4959, Vol. 23, no. 6-7, p. 441-459. Article in journal (Refereed)
    Abstract [en]

    This paper describes a patent-awarded system for automatically marking the positions of stands for a trade fair or exhibition. The system has been in operation since August 2003 and has been used for every exhibition in the three main exhibition halls at the Stockholm International Fair since then. The system has speeded up the marking process significantly. What used to be a job for two men over 8 h now takes one robot monitored by one man 4 h to complete. The operators of the robot are from the same group of people that previously performed the marking task manually. Environmental features are much further away than in most other indoor applications and even many outdoor applications. Experiments show that many of the problems that are typically associated with the large beam width of ultrasonic sensors in normal indoor environments manifest themselves here for the laser because of the long range. Reaching the required level of accuracy was only possible by proper modeling of the laser scanner. The system has been evaluated by hand measuring 680 marked points. To make the integration of the robot system into the overall system as smooth as possible the robot uses information from the existing computer aided design (CAD) model of the environment in combination with a SICK LMS 291 laser scanner to localize the robot. This allows the robot to make use of the same information about changes in the environment as the people administrating the CAD system.

  • 73.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Gullstrand, Gunnar
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Forell, Erik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A system for automatic marking of floors in very large spaces, 2006. In: Field and Service Robotics / [ed] Corke, P.; Sukkarieh, S., Springer-Verlag: Berlin, 2006, Vol. 25, p. 93-104. Conference paper (Refereed)
    Abstract [en]

    This paper describes a system for automatic marking of floors. Such systems can be used for example when marking the positions of stands for a trade fair or exhibition. Achieving a high enough accuracy in such an environment, characterized by very large open spaces, is a major challenge. Environmental features will be much further away than in most other indoor applications and even many outdoor applications. A SICK LMS 291 laser scanner is used for localization purposes. Experiments show that many of the problems that are typically associated with the large beam width of ultrasonic sensors in normal indoor environments manifest themselves here for the laser because of the long range. The system that is presented has been in operation for almost two years to date and has been used for every exhibition in the three main exhibition halls at the Stockholm International Fair since then. The system has speeded up the marking process significantly. For example, what used to be a job for two men over eight hours now takes one robot monitored by one man four hours to complete.

  • 74.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A framework for vision based bearing only 3D SLAM, 2006. In: Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, Florida - May 2006: Vols 1-10, IEEE, 2006, p. 1944-1950. Conference paper (Refereed)
    Abstract [en]

    This paper presents a framework for 3D vision based bearing only SLAM using a single camera, an interesting setup for many real applications due to its low cost. The focus is on the management of the features to achieve real-time performance in extraction, matching and loop detection. For matching image features to map landmarks a modified, rotationally variant SIFT descriptor is used in combination with a Harris-Laplace detector. To reduce the complexity in the map estimation while maintaining matching performance only a few, high quality, image features are used for map landmarks. The rest of the features are used for matching. The framework has been combined with an EKF implementation for SLAM. Experiments performed in indoor environments are presented. These experiments demonstrate the validity and effectiveness of the approach. In particular they show how the robot is able to successfully match current image features to the map when revisiting an area.
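
    The feature-management policy sketched here, keeping only a few high-quality features as EKF landmarks while the rest serve matching, boils down to a ranking step; the threshold and field names below are invented:

    ```python
    # Split detected features into map landmarks and matching-only features.
    def split_features(features, max_landmarks=20):
        ranked = sorted(features, key=lambda f: f["strength"], reverse=True)
        landmarks = ranked[:max_landmarks]        # go into the EKF state
        matching_only = ranked[max_landmarks:]    # used for matching/loop detection
        return landmarks, matching_only

    feats = [{"id": i, "strength": s} for i, s in enumerate([0.9, 0.2, 0.7, 0.4])]
    lm, rest = split_features(feats, max_landmarks=2)
    print([f["id"] for f in lm])   # -> [0, 2]
    ```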

  • 75.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kristensen, S
    Active global localisation for a mobile robot using multiple hypothesis tracking, 1999. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a probabilistic approach for mobile robot localization using an incomplete topological world model. The method, which we have termed multi-hypothesis localization (MHL), uses multi-hypothesis Kalman filter based pose tracking combined with a probabilistic formulation of hypothesis correctness to generate and track Gaussian pose hypotheses online. Apart from a lower computational complexity, this approach has the advantage over traditional grid based methods that incomplete and topological world model information can be utilized. Furthermore, the method generates movement commands for the platform to enhance the gathering of information for the pose estimation process. Extensive experiments are presented from two different environments, a typical office environment and an old hospital building.
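
    The bookkeeping behind MHL can be sketched in a few lines (illustrative, not the paper's implementation): each Gaussian pose hypothesis carries a correctness probability, and weak hypotheses are pruned before renormalizing:

    ```python
    # Multi-hypothesis pose tracking bookkeeping; data and threshold invented.
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        mean: tuple      # (x, y, theta)
        cov: list        # 3x3 covariance, kept abstract here
        prob: float      # probability that this hypothesis is the true pose

    def prune_and_normalize(hyps, min_prob=0.05):
        hyps = [h for h in hyps if h.prob >= min_prob]
        total = sum(h.prob for h in hyps)
        for h in hyps:
            h.prob /= total or 1.0
        return hyps

    hyps = [Hypothesis((1, 2, 0.1), [], 0.60),
            Hypothesis((9, 2, 3.1), [], 0.38),
            Hypothesis((5, 5, 1.0), [], 0.02)]   # falls below the threshold
    print([round(h.prob, 3) for h in prune_and_normalize(hyps)])  # -> [0.612, 0.388]
    ```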

  • 76.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kristensen, S
    Active global localisation for a mobile robot using multiple hypothesis tracking, 2001. In: IEEE Transactions on Robotics and Automation, ISSN 1042-296X, Vol. 17, no. 5, p. 748-760. Article in journal (Refereed)
    Abstract [en]

    In this paper we present a probabilistic approach for mobile robot localization using an incomplete topological world model. The method, which we have termed multi-hypothesis localization (MHL), uses multi-hypothesis Kalman filter based pose tracking combined with a probabilistic formulation of hypothesis correctness to generate and track Gaussian pose hypotheses online. Apart from a lower computational complexity, this approach has the advantage over traditional grid based methods that incomplete and topological world model information can be utilized. Furthermore, the method generates movement commands for the platform to enhance the gathering of information for the pose estimation process. Extensive experiments are presented from two different environments, a typical office environment and an old hospital building.

  • 77.
    Jensfelt, Patric
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Kristensen, S.
    Active global localization for a mobile robot using multiple hypothesis tracking, 2001. In: IEEE Transactions on Robotics and Automation, ISSN 1042-296X, Vol. 17, no. 5, p. 748-760. Article in journal (Refereed)
    Abstract [en]

    In this paper we present a probabilistic approach for mobile robot localization using an incomplete topological world model. The method, which we have termed multi-hypothesis localization (MHL), uses multi-hypothesis Kalman filter based pose tracking combined with a probabilistic formulation of hypothesis correctness to generate and track Gaussian pose hypotheses online. Apart from a lower computational complexity, this approach has the advantage over traditional grid based methods that incomplete and topological world model information can be utilized. Furthermore, the method generates movement commands for the platform to enhance the gathering of information for the pose estimation process. Extensive experiments are presented from two different environments, a typical office environment and an old hospital building.

  • 78. Jensfelt, Patric
    et al.
    Wijk, O
    Austin, D
    Andersson, M
    Experiments on augmenting condensation for mobile robot localization, 2000. Conference paper (Refereed)
    Abstract [en]

    In this paper we study some modifications of the CONDENSATION algorithm. The case studied is feature based mobile robot localization in a large scale environment. The required sample set size for making the CONDENSATION algorithm converge properly can in many cases require too much computation. This is often the case when observing features in symmetric environments like for instance doors in long corridors. In such areas a large sample set is required to resolve the generated multi-hypotheses problem. To manage with a sample set size which in the normal case would cause the CONDENSATION algorithm to break down, we study two modifications. The first strategy, called "CONDENSATION with random sampling", takes part of the sample set and spreads it randomly over the environment the robot operates in. The second strategy, called "CONDENSATION with planned sampling", places part of the sample set at planned positions based on the detected features. From the experiments we conclude that the second strategy is the best and can reduce the sample set size by at least a factor of 40.

  • 79.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Wijk, O
    Austin, D
    Andersson, M
    Experiments on augmenting condensation for mobile robot localization, 2000. Conference paper (Refereed)
    Abstract [en]

    In this paper we study some modifications of the CONDENSATION algorithm. The case studied is feature based mobile robot localization in a large scale environment. The required sample set size for making the CONDENSATION  algorithm converge properly can in many cases require too much computation. This is often the case when observing features in symmetric environments like for instance doors in long corridors. In such areas a large sample set is required to resolve the generated multi-hypotheses problem. To manage with a sample set size which in the normal case would cause the CONDENSATION algorithm to break down, we study two modifications. The first strategy, called "CONDENSATION with random sampling", takes part of the sample set and spreads it randomly over the environment the robot operates in. The second strategy, called "CONDENSATION with planned sampling", places part of the sample set at planned positions based on the detected features. From the experiments we conclude that the second strategy is the best and can reduce the sample set size by at least a factor of 40.

  • 80.
    Karaoguz, Hakan
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Human-Centric Partitioning of the Environment, 2017. In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, p. 844-850. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present an object based approach for human-centric partitioning of the environment. Our approach for determining the human-centric regions is to detect the objects that are commonly associated with frequent human presence. In order to detect these objects, we employ state-of-the-art perception techniques. The detected objects are stored with their spatio-temporal information in the robot's memory to be later used for generating the regions. The advantages of our method are that it is autonomous, requires only a small set of perceptual data and does not even require people to be present while generating the regions. The generated regions are validated using a 1-month dataset collected in an indoor office environment. The experimental results show that although a small set of perceptual data is used, the regions are generated at densely occupied locations.
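
    The region-generation step lends itself to a simple sketch: cluster the stored object detections by proximity and keep dense clusters as human-centric regions. The greedy clustering and thresholds below are our invention, not the paper's method:

    ```python
    # Cluster object detections spatially; dense clusters become regions.
    import math

    def cluster(points, radius=1.5):
        clusters = []
        for p in points:
            for c in clusters:
                if any(math.dist(p, q) <= radius for q in c):
                    c.append(p)
                    break
            else:
                clusters.append([p])
        return clusters

    detections = [(1.0, 1.1), (1.2, 0.9), (1.1, 1.3),   # objects around a desk
                  (8.0, 5.0)]                            # a lone detection elsewhere
    regions = [c for c in cluster(detections) if len(c) >= 3]
    print(regions)   # one human-centric region around the desk
    ```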

  • 81.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ekvall, Staffan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aarno, Daniel
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sensor Integration and Task Planning for Mobile Manipulation, 2004. Conference paper (Refereed)
    Abstract [en]

    Robotic mobile manipulation in unstructured environments requires integration of a number of key research areas such as localization, navigation, object recognition, visual tracking/servoing, grasping and object manipulation. It has been demonstrated that, given the above, and through simple sequencing of basic skills, a robust system can be designed [19]. In order to provide the robustness and flexibility required of the overall robotic system in unstructured and dynamic everyday environments, it is important to consider a wide range of individual skills using different sensory modalities. In this work, we consider a combination of deliberative and reactive control together with the use of multiple sensory modalities for modeling and execution of manipulation tasks. Special consideration is given to the design of a vision system necessary for object recognition and scene segmentation as well as learning principles in terms of grasping.

  • 82.
    Kragic, Danica
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Gustafson, Joakim
    KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH.
    Karaoǧuz, Hakan
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Krug, Robert
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Interactive, collaborative robots: Challenges and opportunities, 2018. In: IJCAI International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence, 2018, p. 18-25. Conference paper (Refereed)
    Abstract [en]

    Robotic technology has transformed manufacturing industry ever since the first industrial robot was put in use in the beginning of the 60s. The challenge of developing flexible solutions where production lines can be quickly re-planned, adapted and structured for new or slightly changed products is still an important open problem. Industrial robots today are still largely preprogrammed for their tasks, not able to detect errors in their own performance or to robustly interact with a complex environment and a human worker. The challenges are even more serious when it comes to various types of service robots. Full robot autonomy, including natural interaction, learning from and with human, safe and flexible performance for challenging tasks in unstructured environments will remain out of reach for the foreseeable future. In the envisioned future factory setups, home and office environments, humans and robots will share the same workspace and perform different object manipulation tasks in a collaborative manner. We discuss some of the major challenges of developing such systems and provide examples of the current state of the art.

  • 83. Kristensen, S.
    et al.
    Jensfelt, Patric
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    An experimental comparison of localisation methods, the MHL sessions, 2003. In: IROS 2003: Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, New York: IEEE, 2003, p. 992-997. Conference paper (Refereed)
    Abstract [en]

    In this paper we compare multi hypothesis localisation (MHL), a mobile robot localisation method based on multi hypothesis tracking, with six other methods reported in the literature. The comparison is performed using a standard set of test data and corresponding evaluation tools, thus facilitating a direct comparison of the obtained results. The experiments show that MHL compares favourably to all other methods in terms of recovering when the robot has been kidnapped. When using a validation gate for filtering out noisy measurements, MHL and the standard extended Kalman filter both perform as well as all other reported methods in terms of accuracy while being faster to compute.

  • 84. Kruijff, G.-J. M.
    et al.
    Zender, H.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Situated dialogue and spatial organization: What, where... and why?, 2007. In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, Vol. 4, no. 1, p. 125-138. Article in journal (Refereed)
    Abstract [en]

    The paper presents an HRI architecture for human-augmented mapping, which has been implemented and tested on an autonomous mobile robotic platform. Through interaction with a human, the robot can augment its autonomously acquired metric map with qualitative information about locations and objects in the environment. The system implements various interaction strategies observed in independently performed Wizard-of-Oz studies. The paper discusses an ontology-based approach to multi-layered conceptual spatial mapping that provides a common ground for human-robot dialogue. This is achieved by combining acquired knowledge with innate conceptual commonsense knowledge in order to infer new knowledge. The architecture bridges the gap between the rich semantic representations of the meaning expressed by verbal utterances on the one hand and the robot's internal sensor-based world representation on the other. It is thus possible to establish references to spatial areas in a situated dialogue between a human and a robot about their environment. The resulting conceptual descriptions represent qualitative knowledge about locations in the environment that can serve as a basis for achieving a notion of situational awareness.

  • 85. Kruijff, G.-J. M.
    et al.
    Zender, Hendrik
    Language Technology Lab., DFKI GmbH.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Clarification dialogues in human-augmented mapping, 2006. In: HRI 2006: Proceedings of the 2006 ACM Conference on Human-Robot Interaction, 2006, p. 282-289. Conference paper (Refereed)
    Abstract [en]

    An approach to dialogue based interaction for resolution of ambiguities encountered as part of Human-Augmented Mapping (HAM) is presented. The paper focuses on issues related to spatial organisation and localisation. The dialogue pattern naturally arises as robots are introduced to novel environments. The paper discusses an approach based on the notion of Questions under Discussion (QUD). The presented approach has been implemented on a mobile platform that has dialogue capabilities and methods for metric SLAM. Experimental results from a pilot study clearly demonstrate that the system can resolve problematic situations.

  • 86. Kruijff, G.-J.
    et al.
    Zender, Hendrik
    Language Technology Lab., DFKI GmbH.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Situated dialogue and understanding spatial organization: Knowing what is where and what you can do there, 2006. In: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, 2006, p. 328-333. Conference paper (Refereed)
    Abstract [en]

    The paper presents an HRI architecture for human-augmented mapping. Through interaction with a human, the robot can augment its autonomously learnt metric map with qualitative information about locations and objects in the environment. The system implements various interaction strategies observed in independent Wizard-of-Oz studies. The paper discusses an ontology-based approach to representing and inferring 2.5D spatial organization, and presents how knowledge of spatial organization can be acquired autonomously or through spoken dialogue interaction.

  • 87.
    Kunze, Lars
    et al.
    University of Birmingham.
    Burbridge, Christopher
    University of Birmingham.
    Alberti, Marina
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Thippur, Akshaya
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Chemical Science and Engineering (CHE).
    Hawes, Nick
    University of Birmingham.
    Combining Top-down Spatial Reasoning and Bottom-up Object Class Recognition for Scene Understanding, 2014. In: Proc. of 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems 2014, IEEE conference proceedings, 2014, p. 2910-2915. Conference paper (Refereed)
    Abstract [en]

    Many robot perception systems are built to only consider intrinsic object features to recognise the class of an object. By integrating both top-down spatial relational reasoning and bottom-up object class recognition the overall performance of a perception system can be improved. In this paper we present a unified framework that combines a 3D object class recognition system with learned, spatial models of object relations. In robot experiments we show that our combined approach improves the classification results on real world office desks compared to pure bottom-up perception. Hence, by using spatial knowledge during object class recognition, perception becomes more efficient and robust and robots can understand scenes more effectively.
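
    The top-down/bottom-up combination can be pictured as multiplying an appearance likelihood by a context prior derived from learned spatial relations; the numbers below are invented for illustration:

    ```python
    # Fuse appearance-only classifier scores with a spatial-context prior,
    # e.g. "the object sits right of a keyboard, so it is probably a mouse".
    bottom_up = {"mouse": 0.45, "stapler": 0.55}      # appearance-only scores
    context_prior = {"mouse": 0.80, "stapler": 0.20}  # from learned spatial relations

    posterior = {c: bottom_up[c] * context_prior[c] for c in bottom_up}
    norm = sum(posterior.values())
    posterior = {c: p / norm for c, p in posterior.items()}
    print(posterior)   # spatial context flips the decision towards "mouse"
    ```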

  • 88. Lopez-Nicolas, G.
    et al.
    Sagues, C.
    Guerrero, J. J.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Switching visual control based on epipoles for mobile robots2008In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 7, p. 592-603Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a visual control approach consisting of a switching control scheme based on epipolar geometry. The method follows a classical teach-by-showing approach, where a reference image is used to control the robot to the desired pose (position and orientation). With the proposed method, a mobile robot carries out a smooth trajectory towards the target, and the epipolar geometry model is used throughout the whole motion. The control scheme considers the motion constraints of the mobile platform in a framework based on epipolar geometry that does not rely on artificial markers or specific models of the environment. The method is designed to cope with the degenerate estimation of the epipolar geometry at short baseline. Experimental evaluation has been performed in realistic indoor and outdoor settings.
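    As a sketch of the geometric quantity such controllers are built on (the switching control law itself is in the paper), the epipoles can be recovered as the null vectors of the fundamental matrix; the F below is an assumed toy matrix:

        import numpy as np

        def epipoles(F):
            """Return (e1, e2): epipole in image 1 (F e1 = 0) and in
            image 2 (F^T e2 = 0), scaled to inhomogeneous form.
            Assumes the epipoles are finite; a pure sideways motion
            would put them at infinity (e[2] = 0)."""
            U, S, Vt = np.linalg.svd(F)
            e1 = Vt[-1]       # right null vector of rank-2 F
            e2 = U[:, -1]     # left null vector
            return e1 / e1[2], e2 / e2[2]

        # Toy rank-2 F corresponding to an assumed translation t = (1, 0, 1):
        F = np.array([[0.0, -1.0,  0.0],
                      [1.0,  0.0, -1.0],
                      [0.0,  1.0,  0.0]])
        e1, e2 = epipoles(F)
        print(e1, e2)  # both (1, 0, 1): epipole on the horizontal axis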

  • 89. Luo, J.
    et al.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Caputo, B.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Incremental learning for place recognition in dynamic environments2007In: Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, IEEE , 2007, p. 721-728Conference paper (Refereed)
    Abstract [en]

    Vision-based place recognition is a desirable feature for an autonomous mobile system. In order to work in realistic scenarios, visual recognition algorithms should be adaptive, i.e. they should be able to learn from experience and adapt continuously to changes in the environment. This paper presents a discriminative incremental learning approach to place recognition. We use a recently introduced version of the incremental SVM, which makes it possible to control the memory requirements as the system updates its internal representation. At the same time, it preserves the recognition performance of the batch algorithm. In order to assess the method, we acquired a database capturing the intrinsic variability of places over time. Extensive experiments show the power and the potential of the approach.
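    The memory-controlled incremental SVM used in the paper has no off-the-shelf scikit-learn implementation, so the sketch below stands in with SGDClassifier.partial_fit purely to illustrate the incremental update loop; the data and descriptor dimensions are toy values:

        import numpy as np
        from sklearn.linear_model import SGDClassifier

        rng = np.random.default_rng(0)
        places = ["corridor", "kitchen", "office"]
        clf = SGDClassifier(loss="hinge")  # linear SVM trained by SGD

        for session in range(5):  # e.g. recordings taken months apart
            X = rng.normal(size=(30, 64))          # toy image descriptors
            y = rng.choice(places, size=30)
            clf.partial_fit(X, y, classes=places)  # update, don't retrain

        print(clf.predict(rng.normal(size=(1, 64))))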

  • 90. López-Nicolás, G
    et al.
    Sagüés, C.
    Guerrero, J.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Nonholonomic epipolar visual servoing2006In: 2006 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), New York: IEEE , 2006, p. 2378-2384Conference paper (Refereed)
    Abstract [en]

    A significant amount of work has been reported in the area of visual servoing during the last decade. However, most contributions apply to holonomic robots. More recently, the use of visual feedback for control of nonholonomic vehicles has been reported; examples include docking and parallel-parking maneuvers of cars, and vision-based stabilization of a mobile manipulator to a desired pose with respect to a target of interest. Still, many of these approaches focus mainly on the control part of the visual servoing loop, using very simple vision algorithms based on artificial markers. In this paper, we present an approach for nonholonomic visual servoing based on epipolar geometry. The method follows a classical teach-by-showing approach, where a reference image is used to define the desired pose (position and orientation) of the robot. The major contribution of the paper is the design of a control law that considers the nonholonomic constraints of the robot, as well as a robust feature detection and matching process based on scale- and rotation-invariant image features. An extensive experimental evaluation has been performed in a realistic indoor setting and the results are summarized in the paper.
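    Purely as an illustration of the setting, not the paper's control design, the sketch below pairs unicycle (nonholonomic) kinematics with a toy proportional law steering the current epipole toward its desired image coordinate; the gain, speed and signs are assumed:

        import math

        def unicycle_step(x, y, theta, v, omega, dt=0.05):
            # Nonholonomic constraint: motion only along the heading.
            return (x + v * math.cos(theta) * dt,
                    y + v * math.sin(theta) * dt,
                    theta + omega * dt)

        K_W = 1.5  # assumed steering gain
        def steer(e_cur_x, e_des_x, v=0.3):
            """Turn so the epipole slides toward its desired position."""
            omega = -K_W * (e_cur_x - e_des_x)
            return v, omega

        pose = (0.0, 0.0, 0.0)
        v, omega = steer(e_cur_x=0.2, e_des_x=0.0)
        pose = unicycle_step(*pose, v, omega)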

  • 91.
    Mancini, Massimiliano
    et al.
    Sapienza Univ Rome, Rome, Italy.;Fdn Bruno Kessler, Trento, Italy..
    Karaoğuz, Hakan
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Ricci, Elisa
    Fdn Bruno Kessler, Trento, Italy.;Univ Trento, Trento, Italy..
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Caputo, Barbara
    Sapienza Univ Rome, Rome, Italy.;Italian Inst Technol, Milan, Italy..
    Kitting in the Wild through Online Domain Adaptation2018In: 2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) / [ed] Maciejewski, A. A., et al., IEEE , 2018, p. 1103-1109Conference paper (Refereed)
    Abstract [en]

    Technological developments call for increasing perception and action capabilities of robots. Among other skills, vision systems that can adapt to any possible change in the working conditions are needed. Since these conditions are unpredictable, we need benchmarks that make it possible to assess the generalization and robustness capabilities of our visual recognition algorithms. In this work we focus on robotic kitting in unconstrained scenarios. As a first contribution, we present a new visual dataset for the kitting task. Unlike standard object recognition datasets, we provide images of the same objects acquired under various conditions in which camera, illumination and background are changed. This novel dataset allows for testing the robustness of robot visual recognition algorithms against a series of different domain shifts, both in isolation and in combination. Our second contribution is a novel online adaptation algorithm for deep models, based on batch-normalization layers, which makes it possible to continuously adapt a model to the current working conditions. Unlike standard domain adaptation algorithms, it does not require any image from the target domain at training time. We benchmark the performance of the algorithm on the proposed dataset, showing its ability to close the gap between the performance of a standard architecture and that of a counterpart adapted offline to the given target domain.
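    The core mechanism can be sketched in a few lines: at test time, the batch-normalization statistics keep being updated from incoming target-domain batches, with no labels and no backpropagation. The stand-alone NumPy layer below is a stand-in for illustration, not the authors' code:

        import numpy as np

        class OnlineBN:
            def __init__(self, channels, momentum=0.1, eps=1e-5):
                self.mean = np.zeros(channels)
                self.var = np.ones(channels)
                self.momentum, self.eps = momentum, eps

            def __call__(self, x):  # x: (batch, channels)
                # Pull running statistics toward the current working conditions.
                m = self.momentum
                self.mean = (1 - m) * self.mean + m * x.mean(0)
                self.var = (1 - m) * self.var + m * x.var(0)
                return (x - self.mean) / np.sqrt(self.var + self.eps)

        bn = OnlineBN(channels=8)
        # Toy target-domain stream with shifted statistics:
        for batch in np.random.default_rng(0).normal(2.0, 3.0, size=(10, 4, 8)):
            out = bn(batch)  # statistics drift toward the new domain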

  • 92. Mozos, O.M.
    et al.
    Triebel, R.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Rottmann, A.
    Burgard, W.
    Supervised semantic labeling of places using information extracted from sensor data2007In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no 5, p. 391-402Article in journal (Refereed)
    Abstract [en]

    Indoor environments can typically be divided into places with different functionalities like corridors, rooms or doorways. The ability to learn such semantic categories from sensor data enables a mobile robot to extend the representation of the environment, facilitating interaction with humans. As an example, natural language terms like "corridor" or "room" can be used to communicate the position of the robot in a map in a more intuitive way. In this work, we first propose an approach based on supervised learning to classify the pose of a mobile robot into semantic classes. Our method uses AdaBoost to boost simple features extracted from sensor range data into a strong classifier. We present two main applications of this approach. First, we show how our approach can be utilized by a moving robot for online classification of the poses traversed along its path using a hidden Markov model; in this case we additionally use objects extracted from images as features. Second, we introduce an approach to learn topological maps from geometric maps by applying our semantic classification procedure in combination with a probabilistic relaxation method. Alternatively, we apply associative Markov networks to classify geometric maps and compare the results with the relaxation approach. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various indoor environments.
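    A minimal sketch of the supervised pipeline, with a few toy geometric features rather than the paper's full feature set, using scikit-learn's AdaBoost implementation; the synthetic scans are assumptions for illustration:

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        def scan_features(ranges):
            """A few simple range-scan statistics in the spirit of the paper."""
            return [ranges.mean(), ranges.std(), ranges.min(), ranges.max()]

        rng = np.random.default_rng(1)
        # Toy data: corridors read long and narrow, rooms shorter and wider.
        corridors = [scan_features(rng.uniform(0.5, 12.0, 360)) for _ in range(50)]
        rooms = [scan_features(rng.uniform(0.5, 5.0, 360)) for _ in range(50)]
        X = np.array(corridors + rooms)
        y = ["corridor"] * 50 + ["room"] * 50

        clf = AdaBoostClassifier(n_estimators=50).fit(X, y)  # boosted stumps
        print(clf.predict(X[:1]))  # ['corridor']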

  • 93.
    Pacchierotti, Elena
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Design of an office-guide robot for social interaction studies2006In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, NEW YORK: IEEE , 2006, p. 4965-4970Conference paper (Refereed)
    Abstract [en]

    In this paper, the design of an office-guide robot for social interaction studies is presented. We are interested in studying the impact of passage behaviours in casual encounters. While the system offers assistance in locating the appropriate office that a visitor wants to reach, it is expected to engage in a passing behaviour to allow free passage for other persons it may encounter. Through such an approach it is possible to study the effect of social interaction in a situation that is much more natural than out-of-context user studies. The system has been tested in an early evaluation phase, during which it operated for almost 7 hours. A total of 64 interactions with people were registered and 13 passage behaviours were performed, supporting the conclusion that this framework can be successfully used for the evaluation of passing behaviours in natural contexts of operation.

  • 94.
    Pacchierotti, Elena
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Embodied social interaction for service robots in hallway environments2006In: Field and Service Robotics / [ed] Corke, P; Sukkarieh, S, BERLIN: SPRINGER-VERLAG BERLIN , 2006, Vol. 25, p. 293-304Conference paper (Refereed)
    Abstract [en]

    A key aspect of service robotics for everyday use is motion in close proximity to humans. It is essential that the robot exhibits a behaviour that signals safety of motion and awareness of the persons in the environment. To achieve this, there is a need to define control strategies that are perceived as socially acceptable by users who are not familiar with robots. In this paper a system for navigation in a hallway is presented, in which the rules of proxemics are used to define the interaction strategies. The experimental results show that the approach contributes to establishing effective spatial interaction patterns between the robot and a person.

  • 95.
    Pacchierotti, Elena
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Human-robot embodied interaction in hallway settings: a pilot user study2005In: 2005 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), 2005, p. 164-171Conference paper (Refereed)
    Abstract [en]

    This paper explores the problem of embodied interaction between a service robot and a person in a hallway setting. For operation in environments with people who have limited experience with robots, a behaviour that signals awareness of the persons present and safety of motion is essential. A control strategy based on human spatial behaviour studies is presented that adopts human-robot interaction patterns similar to those used in person-person encounters. The results of a pilot study with human subjects are presented, in which the users evaluated the acceptability of the robot's behaviour patterns during passage with respect to three basic parameters: the robot speed, the signaling distance at which the robot starts the maneuver, and the lateral distance from the person for safe passage. The study has shown a good overall user response and has provided useful indications on how to design a hallway passage behaviour that is most acceptable to human users.
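    The three parameters translate naturally into a rule-based passage planner; the sketch below uses assumed values (including an approximate boundary for Hall's intimate proxemic zone) purely for illustration and does not encode the study's findings:

        INTIMATE_M = 0.45   # approx. boundary of Hall's intimate zone (assumed)

        def passing_plan(dist_to_person, speed=0.6, signal_at=4.0, lateral=0.4):
            """speed [m/s], signaling distance [m], lateral offset [m]."""
            if dist_to_person > signal_at:
                return "continue", speed, 0.0
            if lateral < INTIMATE_M:
                lateral = INTIMATE_M  # never plan a pass inside the intimate zone
            return "shift_right", speed, lateral

        print(passing_plan(3.0))  # ('shift_right', 0.6, 0.45)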

  • 96.
    Pacchierotti, Elena
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Evaluation of passing distance for social robots2006In: RO-MAN 2006: The 15th IEEE International Symposium on Robot and Human Interactive Communication, 2006, p. 315-320Conference paper (Refereed)
    Abstract [en]

    Casual encounters with mobile robots can be a challenge for non-experts due to the lack of an interaction model. The present work is based on rules from proxemics, which are used to design a passing strategy. In narrow corridors the lateral passage distance is a key parameter to consider. An implemented system has been used in a small study to verify the basic parametric design of such a system. In total, 10 subjects evaluated variations in proxemics for encounters with a robot in a corridor setting. The user feedback indicates that entering people's intimate sphere is perceived as less comfortable, while an overly wide avoidance is also considered unnecessary. Adequate signaling of avoidance is a behaviour that must be carefully tuned.

  • 97.
    Pacchierotti, Elena
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik Iskov
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Tasking everyday interaction2007In: Autonomous navigation in dynamic environments / [ed] Christian Laugier; Raja Chatila, Springer, 2007, p. 151-168Chapter in book (Refereed)
    Abstract [en]

    An important problem in the design of mobile robot systems for everyday tasks in natural environments is the safe handling of encounters with people. Person-person encounters follow certain social rules that allow co-existence even in cramped spaces; these social rules are often described by the classification termed proxemics. In this paper we present an analysis of how the physical interaction with people can be modelled using the rules of proxemics, and discuss how the rules of embodied feedback generation can simplify the interaction with novice users. We also provide some guidelines for the design of a control architecture for a mobile robot moving among people. The concepts presented are illustrated by a number of real experiments that verify the overall approach to the design of systems for navigation in human-populated environments.

  • 98. Paz, L.
    et al.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Tardós, J.
    Neira, J.
    EKF SLAM updates in O(n) with Divide and Conquer SLAM2007In: PROCEEDINGS OF THE 2007 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-10, 2007, p. 1657-1663Conference paper (Refereed)
    Abstract [en]

    In this paper we describe Divide and Conquer SLAM (D&C SLAM), an algorithm for performing Simultaneous Localization and Mapping using the Extended Kalman Filter. D&C SLAM overcomes the two fundamental limitations of standard EKF SLAM: (1) the computational cost per step is reduced from O(n²) to O(n) (the cost of full SLAM is reduced from O(n³) to O(n²)); (2) the resulting vehicle and map estimates have better consistency properties than standard EKF SLAM, in the sense that the computed state covariance adequately represents the real error in the estimation. Unlike many current large-scale EKF SLAM techniques, this algorithm computes an exact solution, without relying on approximations or simplifications to reduce computational complexity. Also, estimates and covariances are available when needed by data association without any further computation. Empirical results show that, as a by-product of the reduced computation, and without losing precision through approximations, D&C SLAM has better consistency properties than standard EKF SLAM. Both characteristics make it possible to extend the range of environments that can be mapped in real time using the EKF. We describe the algorithm and study its computational cost and consistency properties.
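    The recursion pattern behind the cost reduction can be sketched as follows; build_local_map and join_maps are placeholders for the paper's EKF operations, so this shows only the divide-and-conquer structure, not the filter mathematics:

        def dc_slam(measurement_chunks, build_local_map, join_maps):
            """Build many small local maps, then join them pairwise so that
            each map takes part in only O(log n) joins; this is where the
            amortized O(n) per-step cost comes from."""
            maps = [build_local_map(chunk) for chunk in measurement_chunks]
            while len(maps) > 1:
                # Join neighbours pairwise; an odd map is carried over as-is.
                maps = [join_maps(maps[i], maps[i + 1]) if i + 1 < len(maps)
                        else maps[i]
                        for i in range(0, len(maps), 2)]
            return maps[0]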

  • 99.
    Petersson, Lars
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Tell, Dennis
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Strandberg, Morten
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, H.I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Systems integration for real–world manipulation tasks2002In: 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS I-IV, PROCEEDINGS, 2002, p. 2500-2505Conference paper (Refereed)
    Abstract [en]

    A system developed to demonstrate the integration of a number of key research areas, such as localization, recognition, visual tracking, visual servoing and grasping, is presented together with the underlying methodology adopted to facilitate the integration. Through sequencing of basic skills provided by the above-mentioned competencies, the system has the potential to carry out flexible grasping for fetch-and-carry tasks in realistic environments. Through careful fusion of reactive and deliberative control, and the use of multiple sensory modalities, significant flexibility is achieved. Experimental verification of the integrated system is presented.

  • 100.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Caputo, B
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, H. I.
    A realistic benchmark for visual indoor place recognition2010In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 1, p. 81-96Article in journal (Refereed)
    Abstract [en]

    An important competence for a mobile robot system is the ability to localize and perform context interpretation. This is required to perform basic navigation and to facilitate local, specific services. Recent advances in vision have made this modality a viable alternative to the traditional range sensors, and visual place recognition algorithms have emerged as a useful and widely applied tool for obtaining information about a robot's position. Several place recognition methods have been proposed, using vision alone or combined with sonar and/or laser. This research calls for standard benchmark datasets for the development, evaluation and comparison of solutions. To this end, this paper presents two carefully designed and annotated image databases, augmented with an experimental procedure and an extensive baseline evaluation. The databases were gathered in an uncontrolled indoor office environment using two mobile robots and a standard camera. The acquisition spanned several months and covered different illumination and weather conditions. The databases are thus very well suited for evaluating the robustness of algorithms with respect to a broad range of variations that often occur in real-world settings. We thoroughly assessed the databases with a purely appearance-based place recognition method based on support vector machines and two types of rich visual features (global and local).
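    A baseline in the spirit of the paper, global histogram features fed to an SVM, can be sketched as below; the chi-square kernel is a common choice for histogram features and is assumed here rather than taken from the text, and the data are toy histograms:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics.pairwise import chi2_kernel

        rng = np.random.default_rng(2)
        X_train = rng.dirichlet(np.ones(32), size=100)  # toy global histograms
        y_train = rng.choice(["corridor", "printer_area"], size=100)
        X_test = rng.dirichlet(np.ones(32), size=5)

        # Precompute the chi-square kernel and train a kernel SVM on it.
        K_train = chi2_kernel(X_train, X_train)
        clf = SVC(kernel="precomputed").fit(K_train, y_train)
        print(clf.predict(chi2_kernel(X_test, X_train)))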
