1 - 22 of 22
  • 1.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Exploiting structure in man-made environments (2012). Doctoral thesis, monograph (Other academic).
    Abstract [en]

    Robots are envisioned to take on jobs that are dirty, dangerous and dull, the three D's of robotics. With this mission, robotic technology today is ubiquitous on the factory floor. However, the same level of success has not occurred when it comes to robots that operate in everyday living spaces, such as homes and offices.

    A big part of this is attributed to domestic environments being complex and unstructured, as opposed to factory settings, which can be set up and precisely known in advance. In this thesis we challenge the view that man-made environments are unstructured and that robots should operate without prior assumptions about the world. Instead, we argue that robots should make use of the inherent structure of everyday living spaces across various scales and applications, in the form of contextual and prior information, and that doing so can improve the performance of robotic tasks.

    To investigate this premise, we start by attempting to solve a hard and realistic problem, active visual search. The particular scenario considered is that of a mobile robot tasked with finding an object on an entire unexplored building floor. We show that a search strategy which exploits the structure of indoor environments offers significant improvements over the state of the art and is comparable to humans in terms of search performance. Based on the work on active visual search, we present two specific ways of making use of the structure of space. First, we propose to use the local 3D geometry as a strong indicator of objects in indoor scenes. By learning a 3D context model for various object categories, we demonstrate a method that can reliably predict the location of objects. Second, we turn our attention to predicting what lies in the unexplored part of the environment at the scale of rooms and building floors. By analyzing a large dataset, we propose that indoor environments can be thought of as being composed of frequently occurring functional subparts. Utilizing these, we present a method that can make informed predictions about the unknown part of a given indoor environment.

    The ideas presented in this thesis explore various sides of the same idea: modeling and exploiting the structure inherent in indoor environments to improve a robot's performance across various applications. We believe that in addition to contributing some answers, the work presented in this thesis will generate additional, fruitful questions.

  • 2.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Simultaneous Object Class and Pose Estimation for Mobile Robotic Applications with Minimalistic Recognition (2010). In: 2010 IEEE International Conference on Robotics and Automation (ICRA) / [ed] Rakotondrabe M; Ivan IA, 2010, p. 2020-2027. Conference paper (Refereed).
    Abstract [en]

    In this paper we address the problem of simultaneous object class and pose estimation using nothing more than object class label measurements from a generic object classifier. We detail a method for designing a likelihood function over the robot configuration space. This function provides a likelihood measure of an object being of a certain class given that the robot (from some position) sees and recognizes an object as being of some (possibly different) class. Using this likelihood function in a recursive Bayesian framework allows us to achieve a kind of spatial averaging and determine the object pose (up to certain ambiguities to be made precise). We show how inter-class confusion from certain robot viewpoints can actually increase the ability to determine the object pose. Our approach is motivated by the idea of minimalistic sensing since we use only class label measurements albeit we attempt to estimate the object pose in addition to the class.
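The recursive Bayesian update the abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's method: the hypothesis set, the confusion table in `likelihood`, and all numbers are invented, and the paper learns its likelihood over the robot configuration space rather than hand-coding it. The sketch does show the abstract's key point that inter-class confusion from certain viewpoints can help determine pose.

```python
# Hedged sketch: recursive Bayesian update over joint (class, pose) hypotheses,
# using only class-label measurements from a viewpoint-dependent confusion model.
# All hypotheses and probabilities below are invented for illustration.

HYPOTHESES = [("mug", 0), ("mug", 90), ("box", 0), ("box", 90)]

def likelihood(observed_label, hypothesis, viewpoint):
    """P(label | class, pose, viewpoint): a toy confusion model where a 'mug'
    seen edge-on (relative angle 90) is easily confused with a 'box'."""
    cls, pose = hypothesis
    relative = (pose - viewpoint) % 180
    if cls == "mug" and relative == 90:          # ambiguous silhouette
        return 0.5 if observed_label in ("mug", "box") else 0.0
    return 0.9 if observed_label == cls else 0.1

def bayes_update(belief, observed_label, viewpoint):
    """One recursive Bayes step: multiply prior by likelihood, renormalize."""
    posterior = {h: belief[h] * likelihood(observed_label, h, viewpoint)
                 for h in belief}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

# Uniform prior; two label measurements from different viewpoints. The second
# observation is a *misclassification* ("box"), yet it sharpens the pose estimate.
belief = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}
belief = bayes_update(belief, "mug", viewpoint=0)
belief = bayes_update(belief, "box", viewpoint=90)
best = max(belief, key=belief.get)
```

Under this toy model, seeing "mug" from viewpoint 0 and then being told "box" from viewpoint 90 is most consistent with a mug at pose 0, which is exactly the spatial-averaging effect the abstract points to.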

  • 3.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Göbelbecker, Moritz
    Institut für Informatik, Albert-Ludwigs-Universität Freiburg, Germany.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Plan-based Object Search and Exploration Using Semantic Spatial Knowledge in the Real World (2011). In: Proc. of the European Conference on Mobile Robotics (ECMR'11), 2011. Conference paper (Refereed).
    Abstract [en]

    In this paper we present a principled, planner-based approach to the active visual object search problem in unknown environments. We make use of a hierarchical planner that combines the strengths of decision theory and heuristics. Furthermore, our object search approach leverages conceptual spatial knowledge in the form of object co-occurrences and semantic place categorisation. A hierarchical model for representing object locations is presented with which the planner is able to perform indirect search. Finally, we present real world experiments to show the feasibility of the approach.

  • 4.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Henell, Daniel
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Shilkrot, R.
    Kinect@Home: Crowdsourcing a large 3D dataset of real environments (2012). In: AAAI Spring Symposium - Technical Report, Vol. SS-12-06, 2012, p. 8-9. Conference paper (Refereed).
    Abstract [en]

    We present Kinect@Home, aimed at collecting a vast RGB-D dataset from real everyday living spaces. This dataset is planned to be the largest real world image collection of everyday environments to date, making use of the availability of a widely adopted robotics sensor which is also in the homes of millions of users, the Microsoft Kinect camera.

  • 5.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Exploiting and modeling local 3D structure for predicting object locations (2012). In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE, 2012, p. 3885-3892. Conference paper (Refereed).
    Abstract [en]

    In this paper, we argue that there is a strong correlation between local 3D structure and object placement in everyday scenes. We call this the 3D context of the object. In previous work, this is typically hand-coded and limited to flat horizontal surfaces. In contrast, we propose to use a more general model for 3D context and learn the relationship between 3D context and different object classes. This way, we can capture more complex 3D contexts without implementing specialized routines. We present extensive experiments with both qualitative and quantitative evaluations of our method for different object classes. We show that our method can be used in conjunction with an object detection algorithm to reduce the rate of false positives. Our results support that the 3D structure surrounding objects in everyday scenes is a strong indicator of their placement and that it can give significant improvements in the performance of, for example, an object detection system. For evaluation, we have collected a large dataset of Microsoft Kinect frames from five different locations, which we also make publicly available.
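The idea of scoring candidate object locations by their local 3D context can be illustrated with a toy sketch. This is not the paper's learned model: the 3-bin "descriptor" (support below, clutter beside, free space above), the class templates, and every number here are invented stand-ins for the richer 3D geometry features the abstract refers to.

```python
# Hedged sketch (not the paper's model): score candidate locations by comparing
# a local 3D-structure descriptor against per-class templates learned from
# examples. Descriptors are toy 3-bin occupancy summaries; values are invented.

def descriptor(support_below, clutter_beside, free_above):
    return (support_below, clutter_beside, free_above)

# "Learned" class templates: e.g. monitors sit on desks with free space above.
TEMPLATES = {
    "monitor": descriptor(1.0, 0.3, 0.9),
    "book":    descriptor(1.0, 0.8, 0.4),
}

def context_score(candidate, object_class):
    """Higher when the local structure matches the class template
    (negative squared distance, so 0 is a perfect match)."""
    t = TEMPLATES[object_class]
    return -sum((a - b) ** 2 for a, b in zip(candidate, t))

candidates = {
    "desk_surface":  descriptor(1.0, 0.2, 0.95),
    "crowded_shelf": descriptor(1.0, 0.9, 0.3),
    "mid_air":       descriptor(0.0, 0.0, 1.0),
}

best_for_monitor = max(candidates,
                       key=lambda c: context_score(candidates[c], "monitor"))
```

Gating an object detector by such a score is one way the abstract's claimed reduction in false positives could work: detections at low-scoring locations (e.g. a "monitor" floating in mid-air) are down-weighted.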

  • 6.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    What can we learn from 38,000 rooms? Reasoning about unexplored space in indoor environments (2012). In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE, 2012, p. 4675-4682. Conference paper (Refereed).
    Abstract [en]

    Many robotics tasks require the robot to predict what lies in the unexplored part of the environment. Although much work focuses on building autonomous robots that operate indoors, indoor environments are neither well understood nor sufficiently analyzed in the literature. In this paper, we propose and compare two methods for predicting both the topology and the categories of rooms given a partial map. The methods are motivated by the analysis of two large annotated floor plan data sets corresponding to the buildings of the MIT and KTH campuses. In particular, utilizing graph theory, we discover that local complexity remains unchanged for growing global complexity in real-world indoor environments, a property which we exploit. In total, we analyze 197 buildings, 940 floors and over 38,000 real-world rooms. Such a large set of indoor places has not been investigated in previous work. We provide extensive experimental results and show the degree of transferability of spatial knowledge between two geographically distinct locations. We also contribute the KTH data set and the software tools to work with it.
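The "local complexity stays flat as global complexity grows" observation can be made concrete with a graph-theoretic toy. The two floor graphs below are invented for illustration (rooms as nodes, doors as edges); the point is only that a degree-based local measure barely moves when the floor doubles in size.

```python
# Hedged illustration: local complexity (here, average room connectivity)
# stays roughly flat as a floor grows. Both toy floor graphs are invented.

def avg_degree(edges):
    """Mean number of connections per room in an undirected room graph."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    return sum(degree.values()) / len(degree)

# A small floor: corridor c1 linking four rooms.
small_floor = [("c1", "r1"), ("c1", "r2"), ("c1", "r3"), ("c1", "r4")]

# A floor twice the size: a second corridor with four more rooms.
large_floor = small_floor + [
    ("c1", "c2"),
    ("c2", "r5"), ("c2", "r6"), ("c2", "r7"), ("c2", "r8"),
]

small, large = avg_degree(small_floor), avg_degree(large_floor)
```

Doubling the floor adds rooms and corridors in the same local pattern, so the average degree shifts only slightly; a property like this is what lets a partial map constrain predictions about the unexplored remainder.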

  • 7.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Järleberg, Erik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Prentice, S.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Predicting what lies ahead in the topology of indoor environments (2012). In: Spatial Cognition VIII: International Conference, Spatial Cognition 2012, Kloster Seeon, Germany, August 31 – September 3, 2012, Proceedings / [ed] Cyrill Stachniss, Kerstin Schill, David Uttal, Springer, 2012, p. 1-16. Conference paper (Refereed).
    Abstract [en]

    A significant amount of research in robotics is aimed towards building robots that operate indoors, yet there exists little analysis of how human spaces are organized. In this work we analyze the properties of indoor environments from a large annotated floorplan dataset. We analyze a corpus of 567 floors and 6426 spaces, with 91 room types and 8446 connections between rooms corresponding to real places. We present a system that, given a partial graph, predicts the rest of the topology by building a model from this dataset. Our hypothesis is that indoor topologies consist of multiple smaller functional parts. We demonstrate the applicability of our approach with experimental results. We expect that our analysis paves the way for more data-driven research on indoor environments.

  • 8.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Göbelbecker, Moritz
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Active Visual Object Search in Unknown Environments Using Uncertain Semantics (2013). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 29, no. 4, p. 986-1002. Article in journal (Refereed).
    Abstract [en]

    In this paper, we study the problem of active visual search (AVS) in large, unknown, or partially known environments. We argue that by making use of uncertain semantics of the environment, a robot tasked with finding an object can devise efficient search strategies that can locate everyday objects at the scale of an entire building floor, which is previously unknown to the robot. To realize this, we present a probabilistic model of the search environment, which allows for prioritizing the search effort to those parts of the environment that are most promising for a specific object type. Further, we describe a method for reasoning about the unexplored part of the environment for goal-directed exploration with the purpose of object search. We demonstrate the validity of our approach by comparing it with two other search systems in terms of search trajectory length and time. First, we implement a greedy coverage-based search strategy that is found in previous work. Second, we let human participants search for objects as an alternative comparison for our method. Our results show that AVS strategies that exploit uncertain semantics of the environment are a very promising idea, and our method pushes the state-of-the-art forward in AVS.
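Prioritizing search effort with uncertain semantics can be sketched as a greedy utility ordering. This is an illustration only: the co-occurrence priors, travel costs, and room names below are invented, and the paper's probabilistic model and planner are considerably richer than a single probability-over-cost ratio.

```python
# Hedged sketch: rank rooms by P(object present | room category) / travel cost,
# where the probability comes from object-room co-occurrence priors.
# All priors, costs, and names below are invented for illustration.

PRIOR = {  # toy P(object present | room category)
    ("cereal_box", "kitchen"):  0.70,
    ("cereal_box", "office"):   0.05,
    ("cereal_box", "corridor"): 0.01,
}

def search_order(target, rooms):
    """rooms: list of (name, category, travel_cost). Greedy utility order:
    most promising room per unit of travel effort first."""
    def utility(room):
        _, category, cost = room
        return PRIOR.get((target, category), 0.0) / cost
    return [name for name, _, _ in sorted(rooms, key=utility, reverse=True)]

rooms = [("office_1", "office", 1.0),
         ("kitchen_1", "kitchen", 3.0),
         ("corridor_1", "corridor", 0.5)]

order = search_order("cereal_box", rooms)
```

Even though the kitchen is the farthest room in this toy layout, its strong prior for cereal boxes puts it first, which is the kind of semantics-driven prioritization the abstract contrasts with greedy coverage-based search.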

  • 9.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Search in the real world: Active visual object search based on spatial relations (2011). In: IEEE International Conference on Robotics and Automation (ICRA), 2011, IEEE, 2011, p. 2818-2824. Conference paper (Refereed).
    Abstract [en]

    Objects are integral to a robot's understanding of space. Various tasks such as semantic mapping, pick-and-carry missions or manipulation involve interaction with objects. Previous work in the field largely builds on the assumption that the object in question starts out within the ready sensory reach of the robot. In this work we aim to relax this assumption by providing the means to perform robust and large-scale active visual object search. Presenting spatial relations that describe topological relationships between objects, we then show how to use these to create potential search actions. We introduce a method for efficiently selecting search strategies given probabilities for those relations. Finally we perform experiments to verify the feasibility of our approach.

  • 10.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Object search on a mobile robot using relational spatial information (2010). In: Proc. of the 11th Int. Conference on Intelligent Autonomous Systems (IAS-11), Amsterdam: IOS Press, 2010, p. 111-120. Conference paper (Refereed).
    Abstract [en]

    We present a method for utilising knowledge of qualitative spatial relations between objects in order to facilitate efficient visual search for those objects. A computational model for the relation is used to sample a probability distribution that guides the selection of camera views. Specifically we examine the spatial relation “on”, in the sense of physical support, and show its usefulness in search experiments on a real robot. We also experimentally compare different search strategies and verify the efficiency of so-called indirect search.
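Sampling a probability distribution from a computational model of "on" can be sketched in one dimension. This is a toy, not the paper's model: the density shape, the 5 cm spread, and the room-height range are all invented, and the real method reasons about full 3D support geometry, not just height.

```python
# Hedged sketch: the relation "on(table)" induces a density concentrated just
# above the table top; candidate heights for directing the camera are drawn
# from it by rejection sampling. Geometry is 1-D (height only) for illustration.

import random

def on_relation_density(z, support_top, spread=0.05):
    """Unnormalized density for an object resting ON a surface at height
    support_top: mass concentrated within `spread` metres above the surface."""
    if z < support_top:
        return 0.0
    return max(0.0, 1.0 - (z - support_top) / spread)

def sample_heights(support_top, n, seed=0):
    """Rejection-sample n heights from the 'on' density."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        z = rng.uniform(0.0, 2.0)         # room height range in metres
        if rng.random() < on_relation_density(z, support_top):
            samples.append(z)
    return samples

heights = sample_heights(support_top=0.75, n=100)
```

Every sample lands in the thin band just above the 0.75 m table top, so camera views selected from these samples look where "on the table" actually places objects, instead of scanning the whole room uniformly.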

  • 11. Civera, Javier
    et al.
    Ciocarlie, Matei
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bekris, Kostas
    Sarma, Sanjay
    Special Issue on Cloud Robotics and Automation (2015). In: IEEE Transactions on Automation Science and Engineering, ISSN 1545-5955, E-ISSN 1558-3783, Vol. 12, no. 2, p. 396-397. Article in journal (Other academic).
    Abstract [en]

    The articles in this special section focus on the use of cloud computing in the robotics industry. The Internet and the availability of vast computational resources, ever-growing data and storage capacity have the potential to define a new paradigm for robotics and automation. An intelligent system connected to the Internet can expand its onboard local data, computation and sensors with huge data repositories from similar and very different domains, massive parallel computation from server farms and sensor/actuator streams from other robots and automata. It is this potential, and also the research challenges of the field, that are the focus of this special section. The goal is to group together and show the state of the art of this newly emerged field, identify the relevant advances and topics, point out the current lines of research and potential applications, and discuss the main research challenges and future work directions.

  • 12. Göbelbecker, M.
    et al.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A planning approach to active visual search in large environments (2011). In: AAAI Workshop Tech. Rep., 2011, p. 8-13. Conference paper (Refereed).
    Abstract [en]

    In this paper we present a principled, planner-based approach to the active visual object search problem in unknown environments. We make use of a hierarchical planner that combines the strengths of decision theory and heuristics. Furthermore, our object search approach leverages conceptual spatial knowledge in the form of object co-occurrences and semantic place categorisation. A hierarchical model for representing object locations is presented with which the planner is able to perform indirect search. Finally, we present real world experiments to show the feasibility of the approach.

  • 13.
    Göbelbecker, Moritz
    et al.
    University of Freiburg.
    Hanheide, Marc
    University of Lincoln.
    Gretton, Charles
    University of Birmingham.
    Hawes, Nick
    University of Birmingham.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kristoffer, Sjöö
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Zender, Hendrik
    DFKI, Saarbruecken.
    Dora: A Robot that Plans and Acts Under Uncertainty (2012). In: Proceedings of the 35th German Conference on Artificial Intelligence (KI'12), 2012. Conference paper (Refereed).
    Abstract [en]

    Dealing with uncertainty is one of the major challenges when constructing autonomous mobile robots. The CogX project addressed key aspects of this by developing and implementing mechanisms for self-understanding and self-extension, i.e. awareness of gaps in knowledge, and the ability to reason and act to fill those gaps. We discuss Dora, a showcase outcome of that project: a robot that can perform a variety of search tasks in unexplored environments by exploiting probabilistic knowledge representations, while retaining efficiency by using a fast planning system.

  • 14. Hanheide, Marc
    et al.
    Gretton, Charles
    Dearden, Richard
    Hawes, Nick
    Wyatt, Jeremy
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Göbelbecker, Moritz
    Zender, Hendrik
    Exploiting probabilistic knowledge under uncertain sensing for efficient robot behaviour (2011). In: 22nd International Joint Conference on Artificial Intelligence, 2011. Conference paper (Refereed).
    Abstract [en]

    Robots must perform tasks efficiently and reliably while acting under uncertainty. One way to achieve efficiency is to give the robot common-sense knowledge about the structure of the world. Reliable robot behaviour can be achieved by modelling the uncertainty in the world probabilistically. We present a robot system that combines these two approaches and demonstrate the improvements in efficiency and reliability that result. Our first contribution is a probabilistic relational model integrating common-sense knowledge about the world in general with observations of a particular environment. Our second contribution is a continual planning system which is able to plan in the large problems posed by that model, by automatically switching between decision-theoretic and classical procedures. We evaluate our system on object search tasks in two different real-world indoor environments. By reasoning about the trade-offs between possible courses of action with different informational effects, and exploiting the cues and general structures of those environments, our robot is able to consistently demonstrate efficient and reliable goal-directed behaviour.

  • 15.
    Hanheide, Marc
    et al.
    University of Lincoln.
    Göbelbecker, Moritz
    University of Freiburg.
    Horn, Graham S.
    University of Birmingham.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gretton, Charles
    University of Birmingham.
    Dearden, Richard
    University of Birmingham.
    Janicek, Miroslav
    DFKI, Saarbrücken.
    Zender, Hendrik
    DFKI, Saarbrücken.
    Kruijff, Geert-Jan
    DFKI, Saarbrücken.
    Hawes, Nick
    University of Birmingham.
    Wyatt, Jeremy
    University of Birmingham.
    Robot task planning and explanation in open and uncertain worlds (2015). In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921. Article in journal (Refereed).
    Abstract [en]

    A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.
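The three-layer knowledge organization described above can be sketched as a simple data structure in which higher layers override lower ones at query time. This is a minimal illustration of the layering idea only: the class name, the dictionary representation, and all the facts below are invented, not the paper's architecture.

```python
# Hedged sketch of a three-layer knowledge base: instance knowledge at the
# bottom, commonsense above it, diagnostic knowledge on top. Higher layers
# can override what lower layers report. All facts are invented examples.

class LayeredKB:
    def __init__(self):
        self.layers = {"instance": {}, "commonsense": {}, "diagnostic": {}}

    def assert_fact(self, layer, key, value):
        self.layers[layer][key] = value

    def query(self, key, default=None):
        # Query order: diagnostic first, then commonsense, then instance,
        # so a diagnostic explanation can modify knowledge from below.
        for layer in ("diagnostic", "commonsense", "instance"):
            if key in self.layers[layer]:
                return self.layers[layer][key]
        return default

kb = LayeredKB()
kb.assert_fact("commonsense", "mugs_found_in", "kitchen")      # general prior
kb.assert_fact("instance", "mug_1_location", "office_desk")    # observed fact
# After a failed pickup, a diagnostic explanation overrides the observation:
kb.assert_fact("diagnostic", "mug_1_location", "moved_by_person")

loc = kb.query("mug_1_location")
```

The override path mirrors the paper's point (iii)–(iv): when execution fails, a diagnostic-layer explanation takes precedence over the stale instance-level belief instead of the planner replanning on wrong facts.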

  • 16.
    Hanheide, Marc
    et al.
    University of Birmingham.
    Hawes, Nick
    University of Birmingham.
    Wyatt, Jeremy
    University of Birmingham.
    Göbelbecker, Moritz
    Albert-Ludwigs-Universität.
    Brenner, Michael
    Albert-Ludwigs-Universität, Freiburg.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Zender, Hendrik
    DFKI Saarbrücken.
    Kruijff, Geert-Jan
    DFKI Saarbrücken.
    A Framework for Goal Generation and Management (2010). In: Proceedings of the AAAI Workshop on Goal-Directed Autonomy, 2010. Conference paper (Refereed).
    Abstract [en]

    Goal-directed behaviour is often viewed as an essential characteristic of an intelligent system, but mechanisms to generate and manage goals are often overlooked. This paper addresses this by presenting a framework for autonomous goal generation and selection. The framework has been implemented as part of an intelligent mobile robot capable of exploring unknown space and determining the category of rooms autonomously. We demonstrate the efficacy of our approach by comparing the performance of two versions of our integrated system: one with the framework, the other without. This investigation leads us to conclude that such a framework is desirable for an integrated intelligent system because it reduces the complexity of the problems that must be solved by other behaviour-generation mechanisms, it makes goal-directed behaviour more robust in the face of dynamic and unpredictable environments, and it provides an entry point for domain-specific knowledge in a more general system.

  • 17.
    Hawes, Nick
    et al.
    University of Birmingham.
    Hanheide, Marc
    University of Birmingham.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Göbelbecker, Moritz
    Albert-Ludwigs-Universität.
    Brenner, Michael
    Albert-Ludwigs-Universität, Freiburg.
    Zender, Hendrik
    Lison, Pierre
    DFKI Saarbrücken.
    Kruijff-Korbayova, Ivana
    DFKI Saarbrücken.
    Kruijff, Geert-Jan
    DFKI Saarbrücken.
    Zillich, Michael
    Vienna University of Technology.
    Dora The Explorer: A Motivated Robot (2009). In: Proc. of 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2010) / [ed] van der Hoek, Kaminka, Lespérance, Luck, Sen, 2009, p. 1617-1618. Conference paper (Refereed).
    Abstract [en]

    Dora the Explorer is a mobile robot with a sense of curiosity and a drive to explore its world. Given an incomplete tour of an indoor environment, Dora is driven by internal motivations to probe the gaps in her spatial knowledge. She actively explores regions of space which she hasn't previously visited but which she expects will lead her to further unexplored space. She will also attempt to determine the categories of rooms through active visual search for functionally important objects, and through ontology-driven inference on the results of this search.

  • 18.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A Framework for Robust Cognitive Spatial Mapping, 2009. In: 2009 International Conference on Advanced Robotics, ICAR 2009, IEEE, 2009, p. 686-693. Conference paper (Refereed)
    Abstract [en]

    Spatial knowledge constitutes a fundamental component of the knowledge base of a cognitive, mobile agent. This paper introduces a rigorously defined framework for building a cognitive spatial map that permits high level reasoning about space along with robust navigation and localization. Our framework builds on the concepts of places and scenes expressed in terms of arbitrary, possibly complex features as well as local spatial relations. The resulting map is topological and discrete, robocentric and specific to the agent's perception. We analyze spatial mapping design mechanics in order to obtain rules for how to define the map components and attempt to prove that if certain design rules are obeyed then certain map properties are guaranteed to be realized. The idea of this paper is to take a step back from existing algorithms and literature and see how a rigorous formal treatment can lead the way towards a powerful spatial representation for localization and navigation. We illustrate the power of our analysis and motivate our cognitive mapping characteristics with some illustrative examples.

  • 19.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bishop, Adrian N.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Representing spatial knowledge in mobile cognitive systems, 2010. In: Intelligent Autonomous Systems 11, IAS 2010, 2010, p. 133-142. Conference paper (Refereed)
    Abstract [en]

    A cornerstone for cognitive mobile agents is to represent the vast body of knowledge about the space in which they operate. In order to be robust and efficient, such a representation must address requirements imposed on the integrated system as a whole, but also those resulting from properties of its components. In this paper, we carefully analyze the problem and design the structure of a spatial knowledge representation for a cognitive mobile system. Our representation is layered and represents knowledge at different levels of abstraction. It deals with complex, crossmodal, spatial knowledge that is inherently uncertain and dynamic. Furthermore, it incorporates discrete symbols that facilitate communication with the user and components of a cognitive system. We present the structure of the representation and propose concrete instantiations.

  • 20.
    Sjöö, Kristoffer
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Topological spatial relations for active visual search, 2012. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no. 9, p. 1093-1107. Article in journal (Refereed)
    Abstract [en]

    If robots are to assume their long anticipated place by humanity's side and be of help to us in our partially structured environments, we believe that adopting human-like cognitive patterns will be valuable. Such environments are the products of human preferences, activity and thought; they are imbued with semantic meaning. In this paper we investigate qualitative spatial relations with the aim of both perceiving those semantics, and of using semantics to perceive. More specifically, in this paper we introduce general perceptual measures for two common topological spatial relations, "on" and "in", that allow a robot to evaluate object configurations, possible or actual, in terms of those relations. We also show how these spatial relations can be used as a way of guiding visual object search. We do this by providing a principled approach for indirect search in which the robot can make use of known or assumed spatial relations between objects, significantly increasing the efficiency of search by first looking for an intermediate object that is easier to find. We explain our design, implementation and experimental setup and provide extensive experimental results to back up our thesis.

  • 21.
    Sjöö, Kristoffer
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Mörwald, Thomas
    Zhou, Kai
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Mechanical support as a spatial abstraction for mobile robots, 2010. In: IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010), 2010, p. 4894-4900. Conference paper (Refereed)
    Abstract [en]

    Motivated by functional interpretations of spatial language terms, and the need for cognitively plausible and practical abstractions for mobile service robots, we present a spatial representation based on the physical support of one object by another, corresponding to the preposition "on". A perceptual model for evaluating this relation is suggested, and experiments, simulated as well as using a real robot, are presented. We indicate how this model can be used for important tasks such as communication of spatial knowledge, abstract reasoning and learning, taking as an example direct and indirect visual search. We also demonstrate the model experimentally and show that it produces intuitively feasible results from visual scene analysis as well as synthetic distributions that can be put to a number of uses.

  • 22. Wyatt, Jeremy L.
    et al.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Brenner, Michael
    Hanheide, Marc
    Hawes, Nick
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kristan, Matej
    Kruijff, Geert-Jan M.
    Lison, Pierre
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vrecko, Alen
    Zender, Hendrik
    Zillich, Michael
    Skocaj, Danijel
    Self-Understanding and Self-Extension: A Systems and Representational Approach, 2010. In: IEEE Transactions on Autonomous Mental Development, ISSN 1943-0604, Vol. 2, no. 4, p. 282-303. Article in journal (Refereed)
    Abstract [en]

    There are many different approaches to building a system that can engage in autonomous mental development. In this paper, we present an approach based on what we term self-understanding, by which we mean the explicit representation of and reasoning about what a system does and does not know, and how that knowledge changes under action. We present an architecture and a set of representations used in two robot systems that exhibit a limited degree of autonomous mental development, which we term self-extension. The contributions include: representations of gaps and uncertainty for specific kinds of knowledge, and a goal management and planning system for setting and achieving learning goals.
