Publications (10 of 35)
Pronobis, A. & Rao, R. P. N. (2017). Learning Deep Generative Spatial Models for Mobile Robots. In: Bicchi, A., Okamura, A. (Eds.), 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep 24-28, 2017, Vancouver, Canada (pp. 755-762). IEEE
Learning Deep Generative Spatial Models for Mobile Robots
2017 (English). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Bicchi, A., Okamura, A., IEEE, 2017, p. 755-762. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a new probabilistic framework that allows mobile robots to autonomously learn deep, generative models of their environments that span multiple levels of abstraction. Unlike traditional approaches that combine engineered models for low-level features, geometry, and semantics, our approach leverages recent advances in Sum-Product Networks (SPNs) and deep learning to learn a single, universal model of the robot's spatial environment. Our model is fully probabilistic and generative, and represents a joint distribution over spatial information ranging from low-level geometry to semantic interpretations. Once learned, it is capable of solving a wide range of tasks: from semantic classification of places, uncertainty estimation, and novelty detection, to generation of place appearances based on semantic information and prediction of missing data in partial observations. Experiments on laser-range data from a mobile robot show that the proposed universal model obtains performance superior to state-of-the-art models fine-tuned to one specific task, such as Generative Adversarial Networks (GANs) or SVMs.
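As a concrete illustration of the SPN machinery the abstract builds on, here is a minimal sketch of sum-product network evaluation with marginalization of missing variables. The structure, weights, and variable names below are invented for illustration and are not the paper's learned model.

```python
# Minimal sketch of Sum-Product Network (SPN) evaluation. NOT the paper's
# architecture; weights, structure, and variable names are illustrative only.

class Leaf:
    """Bernoulli leaf over one observed variable (e.g., one laser cell)."""
    def __init__(self, var, p):
        self.var, self.p = var, p

    def value(self, x):
        v = x.get(self.var)          # None means the variable is unobserved
        if v is None:
            return 1.0               # marginalize missing data: sum_v P(v) = 1
        return self.p if v else 1.0 - self.p

class Product:
    """Product node: children cover disjoint variable scopes."""
    def __init__(self, children):
        self.children = children

    def value(self, x):
        out = 1.0
        for c in self.children:
            out *= c.value(x)
        return out

class Sum:
    """Sum node: weighted mixture of children over identical scopes."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children

    def value(self, x):
        return sum(w * c.value(x) for w, c in self.weighted_children)

# Toy joint over a semantic class indicator and two geometry cells.
spn = Sum([
    (0.6, Product([Leaf("corridor", 0.9), Leaf("cell0", 0.8), Leaf("cell1", 0.2)])),
    (0.4, Product([Leaf("corridor", 0.1), Leaf("cell0", 0.3), Leaf("cell1", 0.7)])),
])

full = {"corridor": 1, "cell0": 1, "cell1": 0}
partial = {"cell0": 1}                    # class and cell1 unobserved
print(spn.value(full))                    # joint probability of a complete state
print(spn.value(partial))                 # marginal: missing variables summed out
```

Setting the leaves of unobserved variables to 1 is what makes prediction from partial observations a single upward pass, which is how an SPN can serve the generative tasks the abstract lists.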

Place, publisher, year, edition, pages
IEEE, 2017
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-225798 (URN), 000426978201025, 978-1-5386-2682-5 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep 24-28, 2017, Vancouver, Canada
Funder
Swedish Research Council, 2012-4907 SKAEENet
Note

QC 20180409

Available from: 2018-04-09 Created: 2018-04-09 Last updated: 2018-04-09. Bibliographically approved
Hanheide, M., Göbelbecker, M., Horn, G. S., Pronobis, A., Sjöö, K., Aydemir, A., . . . Wyatt, J. (2015). Robot task planning and explanation in open and uncertain worlds. Artificial Intelligence
Robot task planning and explanation in open and uncertain worlds
2015 (English). In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921. Article in journal (Refereed). Published
Abstract [en]

A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.
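A minimal sketch of the three-layer idea, assuming a toy key-value representation: instance knowledge overrides commonsense defaults, and diagnostic rules revise instance knowledge after a failure. All names and the revision policy below are hypothetical.

```python
# Toy three-layer knowledge base. The paper's representation is richer and
# probabilistic; this only illustrates "a layer above modifies layers below".

class LayeredKB:
    def __init__(self):
        self.instance = {}      # facts about this particular world
        self.commonsense = {}   # defaults that fill gaps in instance knowledge
        self.diagnostic = []    # rules that revise lower layers after failure

    def believe(self, key):
        # Instance knowledge wins; commonsense supplies a default otherwise.
        return self.instance.get(key, self.commonsense.get(key))

    def observe(self, key, value):
        self.instance[key] = value            # epistemic effect: "I saw it"

    def assume(self, key, value):
        self.instance.setdefault(key, value)  # assumption: "I'll take it as true"

    def explain_failure(self, failed_action):
        # Diagnostic knowledge modifies the layers below it.
        for rule in self.diagnostic:
            for key, value in rule(failed_action, self.instance):
                self.instance[key] = value

kb = LayeredKB()
kb.commonsense["cups_in"] = "kitchen"
kb.assume("door_open", True)
kb.diagnostic.append(
    lambda action, inst: [("door_open", False)] if action == "enter_room" else []
)
kb.explain_failure("enter_room")   # failure retracts the optimistic assumption
print(kb.believe("door_open"), kb.believe("cups_in"))   # False kitchen
```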

Place, publisher, year, edition, pages
Elsevier, 2015
Keywords
Robotics
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-186868 (URN), 10.1016/j.artint.2015.08.008 (DOI)
Projects
EU FP7 CogX
Funder
EU, FP7, Seventh Framework Programme, 215181; Swedish Foundation for Strategic Research, RIT08-0010; Swedish Research Council, 2012-4907
Note

QC 20160517

Available from: 2016-05-16 Created: 2016-05-16 Last updated: 2017-09-21. Bibliographically approved
Aydemir, A., Pronobis, A., Göbelbecker, M. & Jensfelt, P. (2013). Active Visual Object Search in Unknown Environments Using Uncertain Semantics. IEEE Transactions on Robotics, 29(4), 986-1002
Active Visual Object Search in Unknown Environments Using Uncertain Semantics
2013 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 29, no. 4, p. 986-1002. Article in journal (Refereed). Published
Abstract [en]

In this paper, we study the problem of active visual search (AVS) in large, unknown, or partially known environments. We argue that by making use of uncertain semantics of the environment, a robot tasked with finding an object can devise efficient search strategies that can locate everyday objects at the scale of an entire building floor, which is previously unknown to the robot. To realize this, we present a probabilistic model of the search environment, which allows for prioritizing the search effort to those parts of the environment that are most promising for a specific object type. Further, we describe a method for reasoning about the unexplored part of the environment for goal-directed exploration with the purpose of object search. We demonstrate the validity of our approach by comparing it with two other search systems in terms of search trajectory length and time. First, we implement a greedy coverage-based search strategy that is found in previous work. Second, we let human participants search for objects as an alternative comparison for our method. Our results show that AVS strategies that exploit uncertain semantics of the environment are a very promising idea, and our method pushes the state-of-the-art forward in AVS.
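The prioritization idea can be sketched in a few lines: marginalize an object-presence prior over the uncertain room category, then search where expected success per unit cost is highest. The probabilities, costs, and greedy rule below are illustrative assumptions, not the paper's actual probabilistic model.

```python
# Hedged sketch: prioritize search effort using uncertain semantics.
# All numbers and the utility rule are invented for illustration.

# P(object present | room category), e.g., from object-context statistics.
presence_prior = {"kitchen": 0.60, "office": 0.15, "corridor": 0.02}

# The robot's uncertain semantic map: per room, (category belief, travel cost).
rooms = {
    "roomA": ({"kitchen": 0.7, "office": 0.3}, 4.0),
    "roomB": ({"office": 0.8, "corridor": 0.2}, 1.5),
}

def expected_gain(belief):
    """P(object in room), marginalized over the uncertain room category."""
    return sum(p_cat * presence_prior[cat] for cat, p_cat in belief.items())

def next_room(rooms):
    # Greedy: probability of finding the object per unit travel cost.
    return max(rooms, key=lambda r: expected_gain(rooms[r][0]) / rooms[r][1])

print(next_room(rooms))   # roomA: high kitchen belief outweighs its travel cost
```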

Keywords
Active vision, semantic mapping, visual object search
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-127759 (URN), 10.1109/TRO.2013.2256686 (DOI), 000322836600014, 2-s2.0-84882315603 (Scopus ID)
Note

QC 20130906

Available from: 2013-09-06 Created: 2013-09-05 Last updated: 2017-12-06. Bibliographically approved
Ekekrantz, J., Pronobis, A., Folkesson, J. & Jensfelt, P. (2013). Adaptive Iterative Closest Keypoint. In: 2013 European Conference on Mobile Robots, ECMR 2013 - Conference Proceedings. Paper presented at the 6th European Conference on Mobile Robots (ECMR 2013), Barcelona, Spain, 25-27 September 2013 (pp. 80-87). New York: IEEE
Adaptive Iterative Closest Keypoint
2013 (English). In: 2013 European Conference on Mobile Robots, ECMR 2013 - Conference Proceedings, New York: IEEE, 2013, p. 80-87. Conference paper, Published paper (Refereed)
Abstract [en]

Finding accurate correspondences between overlapping 3D views is crucial for many robotic applications, from multi-view 3D object recognition to SLAM. This step, often referred to as view registration, plays a key role in determining the overall system performance. In this paper, we propose a fast and simple method for registering RGB-D data, building on the principle of the Iterative Closest Point (ICP) algorithm. In contrast to ICP, our method exploits both point position and visual appearance and is able to smoothly transition the weighting between them with an adaptive metric. This results in robust initial registration based on appearance and accurate final registration using 3D points. Using keypoint clustering, we are able to utilize a non-exhaustive search strategy, reducing the runtime of the algorithm significantly. We show through an evaluation on an established benchmark that the method significantly outperforms current methods in both robustness and precision.
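The core mechanism can be sketched as an ICP-style loop whose correspondence metric blends appearance and geometric distance, with the weighting shifting from appearance toward geometry across iterations. The linear annealing schedule and the toy data below are assumptions; the published adaptive metric is more sophisticated.

```python
# Sketch of appearance-to-geometry annealed ICP-style registration (2D toy).
# The schedule and data are illustrative, not the published algorithm.

import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def register(src_pts, src_desc, dst_pts, dst_desc, iters=10):
    pts = src_pts.copy()
    for i in range(iters):
        alpha = 1.0 - i / max(iters - 1, 1)   # appearance weight decays to 0
        # Combined metric: alpha * appearance distance + (1-alpha) * geometric.
        d_geo = np.linalg.norm(pts[:, None] - dst_pts[None], axis=2)
        d_app = np.linalg.norm(src_desc[:, None] - dst_desc[None], axis=2)
        nn = np.argmin(alpha * d_app + (1 - alpha) * d_geo, axis=1)
        R, t = best_rigid_transform(pts, dst_pts[nn])
        pts = pts @ R.T + t
    return pts

# Toy example: a rotated, shifted copy of three keypoints with matching descriptors.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
desc = np.eye(3)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = src @ R_true.T + np.array([0.5, -0.2])
print(register(src, desc, dst, desc).round(3))   # converges to dst
```

Note how the first iterations match purely by descriptor similarity (robust initialization) while the final ones refine with pure 3D/2D geometry, mirroring the transition the abstract describes.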

Place, publisher, year, edition, pages
New York: IEEE, 2013
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-141743 (URN), 10.1109/ECMR.2013.6698824 (DOI), 000330234600014, 2-s2.0-84893242591 (Scopus ID), 978-147990263-7 (ISBN)
Conference
6th European Conference on Mobile Robots (ECMR 2013), Barcelona, Spain, 25-27 September 2013
Note

QC 20140224

Available from: 2014-02-24 Created: 2014-02-21 Last updated: 2016-03-10. Bibliographically approved
Pronobis, A. & Jensfelt, P. (2012). Large-scale semantic mapping and reasoning with heterogeneous modalities. In: 2012 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at the IEEE International Conference on Robotics and Automation (ICRA), St Paul, MN, May 14-18, 2012 (pp. 3515-3522). IEEE Computer Society
Large-scale semantic mapping and reasoning with heterogeneous modalities
2012 (English). In: 2012 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2012, p. 3515-3522. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a probabilistic framework for semantic mapping that combines heterogeneous, uncertain information such as object observations, the shape, size, and appearance of rooms, and human input. It abstracts multi-modal sensory information and integrates it with conceptual common-sense knowledge in a fully probabilistic fashion. It relies on the concept of spatial properties, which makes the semantic map more descriptive and the system more scalable and better adapted for human interaction. A probabilistic graphical model, a chain graph, is used to represent the conceptual information and perform spatial reasoning. Experimental results from online system tests in a large unstructured office environment highlight the system's ability to infer semantic room categories, predict the existence of objects and the values of other spatial properties, and reason about unexplored space.
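A toy version of property-based reasoning, assuming a simple factored model in place of the paper's chain graph: room-category beliefs are computed from abstracted properties such as shape and observed objects. All probability tables below are invented.

```python
# Illustrative sketch of inferring a room category from spatial properties.
# A naive-Bayes-style factorization stands in for the paper's chain graph.

categories = {"kitchen": 0.3, "office": 0.5, "corridor": 0.2}   # priors

# P(property value | room category) for each modeled property (toy numbers).
p_shape = {"kitchen": {"square": 0.6, "elongated": 0.4},
           "office":  {"square": 0.7, "elongated": 0.3},
           "corridor": {"square": 0.1, "elongated": 0.9}}
p_object = {"kitchen": {"cup": 0.8, "monitor": 0.1},
            "office":  {"cup": 0.3, "monitor": 0.9},
            "corridor": {"cup": 0.05, "monitor": 0.02}}

def infer(shape, seen_object):
    """Posterior over categories given two abstracted spatial properties."""
    scores = {c: categories[c] * p_shape[c][shape] * p_object[c][seen_object]
              for c in categories}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

# A square room where a monitor was observed is very likely an office.
print(infer("square", "monitor"))
```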

Place, publisher, year, edition, pages
IEEE Computer Society, 2012
Series
IEEE International Conference on Robotics and Automation, ISSN 2152-4092
Keywords
Human interactions, Multi-modal, Office environments, Probabilistic framework, Probabilistic graphical models, Semantic map, Semantic mapping, Sensory information, Spatial properties, Spatial reasoning, Robotics, Semantics
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-101527 (URN), 10.1109/ICRA.2012.6224637 (DOI), 000309406703080, 2-s2.0-84864465291 (Scopus ID), 978-146731403-9 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA), St Paul, MN, May 14-18, 2012
Funder
ICT - The Next Generation
Note

QC 20120904

Available from: 2012-09-04 Created: 2012-08-30 Last updated: 2018-01-12. Bibliographically approved
Göbelbecker, M., Aydemir, A., Pronobis, A., Sjöö, K. & Jensfelt, P. (2011). A planning approach to active visual search in large environments. In: AAAI Workshop Tech. Rep. Paper presented at the 2011 AAAI Workshop, 7 August 2011, San Francisco, CA, USA (pp. 8-13).
A planning approach to active visual search in large environments
2011 (English). In: AAAI Workshop Tech. Rep., 2011, p. 8-13. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present a principled, planner-based approach to the active visual object search problem in unknown environments. We make use of a hierarchical planner that combines the strengths of decision theory and heuristics. Furthermore, our object search approach leverages conceptual spatial knowledge in the form of object co-occurrences and semantic place categorisation. A hierarchical model for representing object locations is presented, with which the planner is able to perform indirect search. Finally, we present real-world experiments to show the feasibility of the approach.
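The benefit of indirect search can be sketched with a small expected-cost comparison: finding a large landmark that co-occurs with the target can beat scanning for the target directly. The numbers and the penalty model below are hypothetical.

```python
# Toy expected-cost comparison between direct and indirect object search.
# Probabilities, costs, and the penalty model are invented for illustration.

# P(cup is on/near landmark | landmark found) and cost to detect each option.
co_occurrence = {"table": 0.7, "counter": 0.5}
detect_cost = {"table": 2.0, "counter": 3.0, "cup_direct": 10.0}
p_direct = 0.4   # chance a direct sweep for the cup succeeds

def expected_cost(p_success, cost, penalty=20.0):
    """Cost of an action that succeeds with p_success, else pays a recovery penalty."""
    return cost + (1.0 - p_success) * penalty

# Indirect: detect the landmark, then search locally around it (+1.0).
options = {f"via_{lm}": expected_cost(p, detect_cost[lm] + 1.0)
           for lm, p in co_occurrence.items()}
options["direct"] = expected_cost(p_direct, detect_cost["cup_direct"])
print(min(options, key=options.get))   # indirect search via the table wins here
```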

Series
AAAI Workshop - Technical Report ; WS-11-09
Keywords
Hierarchical model, Object location, Real world experiment, Search problem, Spatial knowledge, Unknown environments, Visual objects, Visual search, Decision theory, Hierarchical systems, Semantics, Robot programming
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-150686 (URN), 2-s2.0-80055050331 (Scopus ID), 9781577355250 (ISBN)
Conference
2011 AAAI Workshop, 7 August 2011, San Francisco, CA, USA
Note

QC 20140908

Available from: 2014-09-08 Created: 2014-09-08 Last updated: 2014-09-08. Bibliographically approved
Hanheide, M., Gretton, C., Dearden, R., Hawes, N., Wyatt, J., Pronobis, A., . . . Zender, H. (2011). Exploiting probabilistic knowledge under uncertain sensing for efficient robot behaviour. In: 22nd International Joint Conference on Artificial Intelligence. Paper presented at the 22nd International Joint Conference on Artificial Intelligence (IJCAI'11), Barcelona, Spain, July 2011.
Exploiting probabilistic knowledge under uncertain sensing for efficient robot behaviour
2011 (English). In: 22nd International Joint Conference on Artificial Intelligence, 2011. Conference paper, Published paper (Refereed)
Abstract [en]

Robots must perform tasks efficiently and reliably while acting under uncertainty. One way to achieve efficiency is to give the robot common-sense knowledge about the structure of the world. Reliable robot behaviour can be achieved by modelling the uncertainty in the world probabilistically. We present a robot system that combines these two approaches and demonstrate the improvements in efficiency and reliability that result. Our first contribution is a probabilistic relational model integrating common-sense knowledge about the world in general with observations of a particular environment. Our second contribution is a continual planning system which is able to plan in the large problems posed by that model, by automatically switching between decision-theoretic and classical procedures. We evaluate our system on object search tasks in two different real-world indoor environments. By reasoning about the trade-offs between possible courses of action with different informational effects, and exploiting the cues and general structures of those environments, our robot is able to consistently demonstrate efficient and reliable goal-directed behaviour.
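A minimal sketch of the switching behaviour, assuming belief entropy as the trigger: plan classically when the belief is confident, and fall back to decision-theoretic reasoning when it is not. The threshold and the stub planners below are illustrative assumptions.

```python
# Toy illustration of switching between classical and decision-theoretic
# planning. The entropy trigger and stub planners are invented here.

import math

def entropy(belief):
    """Shannon entropy (nats) of a discrete belief over object locations."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def plan(belief, threshold=0.5):
    if entropy(belief) <= threshold:
        # Confident: commit to the most likely state and plan classically.
        state = max(belief, key=belief.get)
        return f"classical plan assuming object in {state}"
    # Uncertain: reason explicitly about information-gathering actions.
    return "decision-theoretic plan: observe first, then act"

print(plan({"kitchen": 0.9, "office": 0.1}))   # low entropy -> classical
print(plan({"kitchen": 0.5, "office": 0.5}))   # high entropy -> decision-theoretic
```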

National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-34159 (URN), 2-s2.0-84881058154 (Scopus ID)
Conference
22nd International Joint Conference on Artificial Intelligence (IJCAI'11), Barcelona, Spain, July 2011
Note
QC 20110527. Available from: 2011-05-27 Created: 2011-05-27 Last updated: 2018-01-12. Bibliographically approved
Sjöö, K., Pronobis, A. & Jensfelt, P. (2011). Functional topological relations for qualitative spatial representation. Paper presented at the IEEE 15th International Conference on Advanced Robotics: New Boundaries for Robotics (ICAR 2011), Tallinn, Estonia, 20-23 June 2011 (pp. 130-136).
Functional topological relations for qualitative spatial representation
2011 (English). Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, a framework is proposed for representing knowledge about 3-D space in terms of the functional support and containment relationships, corresponding approximately to the prepositions "on" and "in". A perceptual model is presented which allows for appraising these qualitative relations given the geometries of objects; also, an axiomatic system for reasoning with the relations is put forward. We implement the system on a mobile robot and show how it can use uncertain visual input to infer a coherent qualitative evaluation of a scene, in terms of these functional relations.
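In the spirit of the perceptual model, here is a crisp geometric approximation of the two relations using axis-aligned bounding boxes. The paper's appraisal is probabilistic and handles uncertain visual input, so treat this purely as a sketch; the tolerance and support fraction are invented.

```python
# Crisp axis-aligned-box approximations of the "on" and "in" relations.
# Thresholds and the toy scene are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Box:                      # axis-aligned bounding box of an object
    xmin: float; xmax: float
    ymin: float; ymax: float
    zmin: float; zmax: float

def overlap_xy(a, b):
    """Horizontal overlap area between two boxes."""
    dx = min(a.xmax, b.xmax) - max(a.xmin, b.xmin)
    dy = min(a.ymax, b.ymax) - max(a.ymin, b.ymin)
    return max(dx, 0.0) * max(dy, 0.0)

def on(a, b, contact_tol=0.02):
    """a rests on b: bottom of a meets top of b, with majority horizontal support."""
    touching = abs(a.zmin - b.zmax) <= contact_tol
    supported = overlap_xy(a, b) > 0.5 * (a.xmax - a.xmin) * (a.ymax - a.ymin)
    return touching and supported

def inside(a, b):
    """a is contained in b: a's box lies entirely within b's box."""
    return (b.xmin <= a.xmin and a.xmax <= b.xmax and
            b.ymin <= a.ymin and a.ymax <= b.ymax and
            b.zmin <= a.zmin and a.zmax <= b.zmax)

cup = Box(0.4, 0.5, 0.4, 0.5, 0.80, 0.90)
table = Box(0.0, 1.0, 0.0, 1.0, 0.00, 0.80)
print(on(cup, table), inside(cup, table))   # True False
```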

Series
IEEE 15th International Conference on Advanced Robotics: New Boundaries for Robotics, ICAR 2011
Keywords
spatial relations, knowledge representation, robotics, qualitative reasoning
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-38266 (URN), 10.1109/ICAR.2011.6088635 (DOI), 2-s2.0-84255162943 (Scopus ID), 978-145771158-9 (ISBN)
Conference
IEEE 15th International Conference on Advanced Robotics: New Boundaries for Robotics (ICAR 2011), Tallinn, Estonia, 20-23 June 2011
Projects
CogX
Funder
EU, FP7, Seventh Framework Programme, CogX
Note

QC 20140915

Available from: 2011-08-23 Created: 2011-08-23 Last updated: 2018-01-12. Bibliographically approved
Pronobis, A. & Jensfelt, P. (2011). Hierarchical Multi-modal Place Categorization. In: Proc. of the European Conference on Mobile Robots (ECMR'11). Paper presented at the European Conference on Mobile Robots (ECMR'11), Örebro, Sweden, Sep 7-9, 2011.
Hierarchical Multi-modal Place Categorization
2011 (English). In: Proc. of the European Conference on Mobile Robots (ECMR'11), 2011. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present a hierarchical approach to place categorization. Low-level sensory data is processed into more abstract concepts, named properties of space. The framework allows for fusing information from heterogeneous sensory modalities and a range of derivatives of their data. Place categories are defined based on the properties, which decouples them from the low-level sensory data. This allows for better scalability, both in terms of memory and computation. The probabilistic inference is performed in a chain graph, which supports incremental learning of the room category models. Experimental results are presented where the shape, size, and appearance of the rooms are used as properties, along with the number of objects of certain classes and the topology of space.
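Incremental learning of property-based category models can be as simple as per-category count updates, which is one way to realize the decoupling from low-level sensory data described above. The Laplace smoothing and the property set below are assumptions, not the paper's chain-graph machinery.

```python
# Sketch: room category models over abstract properties, learned incrementally
# as count tables. Smoothing and property names are invented for illustration.

from collections import defaultdict

class CategoryModel:
    def __init__(self):
        # counts[category][property][value] -> number of occurrences
        self.counts = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
        self.totals = defaultdict(lambda: defaultdict(int))

    def update(self, category, properties):
        """Incremental update from one labelled room; no retraining needed."""
        for prop, value in properties.items():
            self.counts[category][prop][value] += 1
            self.totals[category][prop] += 1

    def p(self, category, prop, value, n_values=3):
        # Laplace-smoothed P(property = value | category).
        return ((self.counts[category][prop][value] + 1) /
                (self.totals[category][prop] + n_values))

m = CategoryModel()
m.update("kitchen", {"shape": "square", "size": "medium"})
m.update("kitchen", {"shape": "square", "size": "large"})
print(m.p("kitchen", "shape", "square"))   # (2+1)/(2+3) = 0.6 after two rooms
```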

National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-51160 (URN)
Conference
European Conference on Mobile Robots (ECMR'11), Örebro, Sweden, Sep 7-9, 2011
Note
QC 20111209. Available from: 2011-12-09 Created: 2011-12-09 Last updated: 2018-01-12. Bibliographically approved
Susano Pinto, A., Pronobis, A. & Paulo Reis, L. (2011). Novelty detection using graphical models for semantic room classification. In: 15th Portuguese Conference on Artificial Intelligence, EPIA 2011. Paper presented at the 15th Portuguese Conference on Artificial Intelligence (EPIA'11), Lisbon, Portugal, 10-13 October 2011 (pp. 326-339).
Novelty detection using graphical models for semantic room classification
2011 (English). In: 15th Portuguese Conference on Artificial Intelligence, EPIA 2011, 2011, p. 326-339. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an approach to the problem of novelty detection in the context of semantic room categorization. The ability to assign semantic labels to areas in the environment is crucial for autonomous agents aiming to perform complex human-like tasks and to interact with humans. However, in order to be robust and naturally learn the semantics from the human user, the agent must be able to identify gaps in its own knowledge. To this end, we propose a method based on graphical models to identify novel input which does not match any of the previously learnt semantic descriptions. The method employs a novelty threshold defined in terms of conditional and unconditional probabilities. The novelty threshold is then optimized using an unconditional probability density model trained from unlabelled data.
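The thresholding scheme can be sketched as follows, substituting a single Gaussian for the paper's graphical models: fit an unconditional density to unlabelled data, then flag inputs whose likelihood falls below a percentile of that data. The Gaussian density model and the 5% rule are illustrative stand-ins.

```python
# Toy novelty test: low unconditional likelihood p(x) => novel input, with the
# threshold tuned on unlabelled data. The Gaussian model is an assumption.

import numpy as np

rng = np.random.default_rng(0)
unlabelled = rng.normal(0.0, 1.0, size=(500, 2))   # data from known rooms

mu = unlabelled.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(unlabelled.T))

def log_density(x):
    d = x - mu
    return -0.5 * d @ cov_inv @ d    # log N(x | mu, cov) up to a constant

# Set the threshold so ~5% of ordinary unlabelled inputs would be rejected.
scores = np.array([log_density(x) for x in unlabelled])
threshold = np.percentile(scores, 5)

def is_novel(x):
    return log_density(x) < threshold

print(is_novel(np.array([0.1, -0.2])))   # typical input  -> False
print(is_novel(np.array([6.0, 6.0])))    # far from data  -> True
```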

Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 7026
Keywords
Novelty detection, semantic data, probabilistic graphical models, room classification, indoor environments, robotics, multi-modal classification
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-67341 (URN), 10.1007/978-3-642-24769-9_24 (DOI), 2-s2.0-80054796335 (Scopus ID)
Conference
15th Portuguese Conference on Artificial Intelligence (EPIA'11), Lisbon, Portugal, 10-13 October 2011
Note

QC 20120130

Available from: 2012-01-27 Created: 2012-01-27 Last updated: 2018-01-12Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-1396-0102
