Publications (10 of 136)
Bore, N., Ekekrantz, J., Jensfelt, P. & Folkesson, J. (2019). Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps. IEEE Transactions on Robotics, 35(1), 231-247
Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps
2019 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no. 1, p. 231-247. Article in journal (Refereed). Published.
Abstract [en]

This paper studies the problem of detection and tracking of general objects with semi-static dynamics observed by a mobile robot moving in a large environment. A key problem is that, due to the environment's scale, the robot can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved while the robot is not there. We propose a model for this movement in which the objects typically move only locally, but with some small probability they jump longer distances through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while the local movement is tracked analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible and that the proposed probabilistic treatment outperforms previous methods when applied to real-world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
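
The decomposition described above maps naturally onto a Rao-Blackwellized particle filter. As an illustration only, here is a minimal sketch under strong simplifying assumptions (a single object, 2-D position, scalar covariance); all names and noise values are hypothetical, and this is not the authors' implementation:

import numpy as np

# Each particle carries one Kalman-tracked hypothesis of the object's
# position. Local drift is handled analytically (predict/update); the
# discrete "global motion" jump is sampled per particle.

Q_LOCAL = 0.01   # local drift variance per step (assumed)
R_MEAS = 0.05    # measurement noise variance (assumed)
P_JUMP = 0.02    # prior probability of a long-distance jump (assumed)
ROOM = 10.0      # environment extent, used as the jump proposal (assumed)

def step(particles, z, rng):
    """One filter step. particles: list of dicts {x: mean, p: variance, w: weight}."""
    for pt in particles:
        if rng.random() < P_JUMP:
            # Global motion: the object may have jumped anywhere in the map.
            pt["x"] = rng.uniform(0.0, ROOM, size=2)
            pt["p"] = ROOM ** 2
        pt["p"] += Q_LOCAL                       # Kalman predict (local drift)
        s = pt["p"] + R_MEAS                     # innovation variance
        d2 = float(np.sum((z - pt["x"]) ** 2))
        pt["w"] *= np.exp(-0.5 * d2 / s) / (2 * np.pi * s)  # 2-D Gaussian likelihood
        k = pt["p"] / s                          # Kalman update (local motion)
        pt["x"] = pt["x"] + k * (z - pt["x"])
        pt["p"] = (1.0 - k) * pt["p"]
    total = sum(pt["w"] for pt in particles)
    for pt in particles:
        pt["w"] /= total
    return particles

rng = np.random.default_rng(0)
particles = [{"x": rng.uniform(0, ROOM, 2), "p": 4.0, "w": 1.0} for _ in range(100)]
particles = step(particles, np.array([3.0, 4.0]), rng)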

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
Keywords
Dynamic mapping, mobile robot, movable objects, multitarget tracking (MTT), Rao-Blackwellized particle filter (RBPF), service robots
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-245151 (URN); 10.1109/TRO.2018.2876111 (DOI); 000458197300017 (ISI); 2-s2.0-85057204782 (Scopus ID)
Note

QC 20190313

Available from: 2019-03-13. Created: 2019-03-13. Last updated: 2019-03-18. Bibliographically approved.
Selin, M., Tiger, M., Duberg, D., Heintz, F. & Jensfelt, P. (2019). Efficient Autonomous Exploration Planning of Large-Scale 3-D Environments. IEEE Robotics and Automation Letters, 4(2), 1699-1706
Efficient Autonomous Exploration Planning of Large-Scale 3-D Environments
2019 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, p. 1699-1706. Article in journal (Refereed). Published.
Abstract [en]

Exploration is an important aspect of robotics, whether for mapping, rescue missions, or path planning in an unknown environment. Frontier Exploration Planning (FEP) and Receding Horizon Next-Best-View Planning (RH-NBVP) are two approaches with different strengths and weaknesses. FEP easily explores a large environment consisting of separate regions, but is slow to reach full exploration because it moves back and forth between regions. RH-NBVP shows great potential and efficiently explores individual regions, but can get stuck in large environments without exploring all regions. In this letter, we present a method that combines both approaches, with FEP as a global exploration planner and RH-NBVP for local exploration. We also present techniques to estimate potential information gain faster, to cache previously estimated gains, and to exploit these to efficiently answer new queries.
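
To make the combination concrete, the following minimal sketch (an assumption-laden illustration, not the authors' AEP code) shows the local/global switch with gain caching; the distance-based gain estimate stands in for ray casting through an occupancy map:

import numpy as np

def potential_gain(view, unknown_pts, cache, radius=3.0):
    """Cached information-gain estimate: unknown points visible from `view`.
    Stand-in for ray casting through an occupancy map (assumption)."""
    key = tuple(np.round(view, 1))
    if key not in cache:
        d = np.linalg.norm(unknown_pts - view, axis=1)
        cache[key] = int(np.sum(d < radius))
    return cache[key]

def next_goal(pose, unknown_pts, frontiers, cache, rng, gain_threshold=5):
    """FEP as global planner, RH-NBVP-style sampling as local planner."""
    # Local: sample candidate views around the current pose.
    candidates = pose + rng.uniform(-2.0, 2.0, size=(30, 2))
    best = max(candidates, key=lambda v: potential_gain(v, unknown_pts, cache))
    if potential_gain(best, unknown_pts, cache) > gain_threshold:
        return best                              # local region still pays off
    # Global: fall back to the nearest frontier when local gain dries up.
    return min(frontiers, key=lambda f: np.linalg.norm(f - pose))

rng = np.random.default_rng(1)
unknown = rng.uniform(0, 20, size=(500, 2))      # toy unknown-space samples
frontiers = [np.array([18.0, 18.0]), np.array([1.0, 19.0])]
print(next_goal(np.array([2.0, 2.0]), unknown, frontiers, {}, rng))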

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
Keywords
Search and rescue robots, motion and path planning, mapping
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-246228 (URN); 10.1109/LRA.2019.2897343 (DOI); 000459538100069 (ISI); 2-s2.0-85063311333 (Scopus ID)
Note

QC 20190404

Available from: 2019-04-04. Created: 2019-04-04. Last updated: 2019-04-04. Bibliographically approved.
Barbosa, F. S., Duberg, D., Jensfelt, P. & Tumova, J. (2019). Guiding Autonomous Exploration with Signal Temporal Logic. IEEE Robotics and Automation Letters, 4(4), 3332-3339
Guiding Autonomous Exploration with Signal Temporal Logic
2019 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 4, p. 3332-3339. Article in journal (Refereed). Published.
Abstract [en]

Algorithms for autonomous robotic exploration usually focus on optimizing time and coverage, often in a greedy fashion. However, obstacle inflation is conservative and might limit mapping capabilities, or even prevent the robot from moving through narrow but important passages. This letter proposes a method to influence the manner in which the robot moves through the environment by taking into account a user-defined spatial preference formulated in a fragment of signal temporal logic (STL). We propose to guide the motion planning toward minimizing the violation of this preference through a cost function that integrates the quantitative semantics, i.e., the robustness, of STL. To demonstrate the effectiveness of the proposed approach, we integrate it into the autonomous exploration planner (AEP). Results from simulations and real-world experiments are presented, highlighting the benefits of our approach.
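
For readers unfamiliar with STL's quantitative semantics, this small sketch shows how a robustness degree turns into a planning cost; the clearance predicate and all names are illustrative assumptions, not the paper's exact formulation:

# Quantitative semantics of a simple STL fragment, used as a planning cost.
# Positive robustness means the preference is satisfied; the cost penalizes
# only violation.

def rho_pred(value, threshold):
    """Robustness of the predicate value >= threshold."""
    return value - threshold

def rho_always(rhos):
    """G (always): worst-case robustness over the horizon."""
    return min(rhos)

def rho_eventually(rhos):
    """F (eventually): best-case robustness over the horizon."""
    return max(rhos)

def violation_cost(clearances, d_min=0.5):
    """Cost for 'always stay at least d_min away from obstacles'."""
    rho = rho_always([rho_pred(c, d_min) for c in clearances])
    return max(0.0, -rho)

print(violation_cost([0.9, 0.7, 0.6]))   # 0.0: preference satisfied
print(violation_cost([0.9, 0.3, 0.6]))   # 0.2: worst violation is 0.2 m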

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
Keywords
Mapping, motion and path planning, formal methods in robotics and automation, search and rescue robots
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-255721 (URN); 10.1109/LRA.2019.2926669 (DOI); 000476791300029 (ISI); 2-s2.0-85069437912 (Scopus ID)
Note

QC 20190813

Available from: 2019-08-13. Created: 2019-08-13. Last updated: 2019-08-13. Bibliographically approved.
Tang, J., Folkesson, J. & Jensfelt, P. (2019). Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction. IEEE Robotics and Automation Letters, 4(2), 530-537
Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction
2019 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, p. 530-537. Article in journal (Refereed). Published.
Abstract [en]

In this letter, we propose a new deep-learning-based dense monocular simultaneous localization and mapping (SLAM) method. Compared to existing methods, the proposed framework constructs a dense three-dimensional (3-D) model via sparse-to-dense mapping using learned surface normals. With single-view learned depth estimation as a prior for monocular visual odometry, we obtain both accurate positioning and high-quality depth reconstruction. Depth and normals are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction in comparison to the state of the art in deep monocular dense SLAM.
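
The sparse-to-dense step can be pictured as plane propagation: a pixel with known depth and a predicted surface normal defines a local plane, and a neighbor's depth follows by intersecting its viewing ray with that plane. A minimal sketch, assuming unit viewing rays from a pinhole camera (illustrative only, not the paper's network-based mapping):

import numpy as np

def propagate_depth(depth_p, normal, ray_p, ray_q):
    """Depth of neighbor q on the plane through point depth_p*ray_p with `normal`."""
    point_p = depth_p * ray_p                      # 3-D point of the known pixel
    denom = normal @ ray_q
    if abs(denom) < 1e-6:                          # ray parallel to the plane
        return None
    return (normal @ point_p) / denom              # ray-plane intersection

# Example: two rays hitting a flat wall that faces the camera.
ray_p = np.array([0.0, 0.0, 1.0])
ray_q = np.array([0.1, 0.0, 1.0]) / np.linalg.norm([0.1, 0.0, 1.0])
normal = np.array([0.0, 0.0, -1.0])
print(propagate_depth(2.0, normal, ray_p, ray_q))  # ~2.01: same wall depth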

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
Keywords
Visual-based navigation, SLAM, deep learning in robotics and automation
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-243927 (URN); 10.1109/LRA.2019.2891433 (DOI); 000456673300007 (ISI)
Available from: 2019-03-13. Created: 2019-03-13. Last updated: 2019-03-13. Bibliographically approved.
Chen, X., Ghadirzadeh, A., Folkesson, J., Björkman, M. & Jensfelt, P. (2018). Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments
2018 (English). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. Conference paper, Published paper (Refereed).
Abstract [en]

Mobile robot navigation in complex and dynamic environments is a challenging but important problem. Reinforcement learning approaches fail to solve these tasks efficiently due to the reward sparsity, temporal complexity, and high dimensionality of the sensorimotor spaces inherent in such problems. We present a novel approach to training action policies that acquire navigation skills for wheel-legged robots using deep reinforcement learning. The policy maps height-map image observations to motor commands that navigate the robot to a target position while avoiding obstacles. We propose to acquire this multifaceted navigation skill by learning and exploiting a number of manageable navigation behaviors. We also introduce a domain randomization technique to improve the versatility of the training samples. We experimentally demonstrate significant improvements in data efficiency, success rate, robustness against irrelevant sensory data, and the quality of the maneuvering skills.
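
As an illustration of the domain randomization idea, each training episode could draw a fresh environment configuration; the parameters and ranges below are assumptions, not the authors' settings:

import random

def randomized_episode_config():
    """Draw one randomized training configuration (illustrative ranges)."""
    return {
        "terrain_roughness_m": random.uniform(0.0, 0.3),
        "obstacle_count": random.randint(0, 12),
        "obstacle_size_m": random.uniform(0.2, 1.0),
        "target_distance_m": random.uniform(2.0, 10.0),
        "ground_friction": random.uniform(0.6, 1.2),
        "heightmap_noise_std": random.uniform(0.0, 0.02),
    }

random.seed(0)
for episode in range(3):
    cfg = randomized_episode_config()
    # env.reset(**cfg)  # hypothetical simulator API
    print(episode, cfg)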

National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-256310 (URN); 10.1109/IROS.2018.8593702 (DOI); 2-s2.0-85062964303 (Scopus ID)
Conference
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Funder
EU, Horizon 2020, 644839
Note

QC 20190902

Available from: 2019-08-21. Created: 2019-08-21. Last updated: 2019-09-02. Bibliographically approved.
Tang, J., Folkesson, J. & Jensfelt, P. (2018). Geometric Correspondence Network for Camera Motion Estimation. IEEE Robotics and Automation Letters, 3(2), 1010-1017
Geometric Correspondence Network for Camera Motion Estimation
2018 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no. 2, p. 1010-1017. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we propose a new learning scheme for generating geometric correspondences to be used for visual odometry. A convolutional neural network (CNN) combined with a recurrent neural network (RNN) is trained to detect the locations of keypoints and to generate the corresponding descriptors in one unified structure. The network is optimized by warping points from the source frame to the reference frame with a rigid-body transform; essentially, it learns from warping. The overall training is focused on movements of the camera rather than movements within the image, which leads to better consistency in the matching and ultimately better motion estimation. Experimental results show that the proposed method achieves better results than both related deep-learning and hand-crafted methods. Furthermore, to demonstrate the promise of our method, we use a naive SLAM implementation based on these keypoints and obtain performance on par with ORB-SLAM.
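
The warping supervision can be sketched as follows: a keypoint with known depth is back-projected, moved by the rigid-body transform (R, t), and re-projected into the reference frame, giving the target location for the network's match. This is a simplified illustration of the training signal, with an assumed intrinsic matrix:

import numpy as np

def warp_point(u, v, depth, K, R, t):
    """Warp pixel (u, v) with given depth from the source to the reference frame."""
    p = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth  # back-project
    q = R @ p + t                                         # rigid-body transform
    uvw = K @ q                                           # re-project
    return uvw[:2] / uvw[2]

K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])   # pure sideways translation
print(warp_point(320, 240, 2.0, K, R, t))     # shifts ~26 px in u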

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Visual-based navigation, SLAM, deep learning in robotics and automation
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-223775 (URN); 10.1109/LRA.2018.2794624 (DOI); 000424646100022 (ISI); 2-s2.0-85063305858 (Scopus ID)
Note

QC 20180307

Available from: 2018-03-07. Created: 2018-03-07. Last updated: 2019-05-16. Bibliographically approved.
Kragic, D., Gustafson, J., Karaoǧuz, H., Jensfelt, P. & Krug, R. (2018). Interactive, collaborative robots: Challenges and opportunities. In: IJCAI International Joint Conference on Artificial Intelligence. Paper presented at 27th International Joint Conference on Artificial Intelligence, IJCAI 2018; Stockholm; Sweden; 13 July 2018 through 19 July 2018 (pp. 18-25). International Joint Conferences on Artificial Intelligence
Interactive, collaborative robots: Challenges and opportunities
2018 (English). In: IJCAI International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence, 2018, p. 18-25. Conference paper, Published paper (Refereed).
Abstract [en]

Robotic technology has transformed the manufacturing industry ever since the first industrial robot was put into use in the early 1960s. The challenge of developing flexible solutions, where production lines can be quickly re-planned, adapted, and structured for new or slightly changed products, remains an important open problem. Industrial robots today are still largely preprogrammed for their tasks and are unable to detect errors in their own performance or to robustly interact with a complex environment and a human worker. The challenges are even more serious for various types of service robots. Full robot autonomy, including natural interaction, learning from and with humans, and safe and flexible performance of challenging tasks in unstructured environments, will remain out of reach for the foreseeable future. In envisioned future factory setups and home and office environments, humans and robots will share the same workspace and perform different object manipulation tasks in a collaborative manner. We discuss some of the major challenges in developing such systems and provide examples of the current state of the art.

Place, publisher, year, edition, pages
International Joint Conferences on Artificial Intelligence, 2018
Keywords
Artificial intelligence, Industrial robots, Collaborative robots, Complex environments, Manufacturing industries, Natural interactions, Object manipulation, Office environments, Robotic technologies, Unstructured environments, Human robot interaction
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-247239 (URN); 2-s2.0-85055718956 (Scopus ID); 9780999241127 (ISBN)
Conference
27th International Joint Conference on Artificial Intelligence, IJCAI 2018; Stockholm; Sweden; 13 July 2018 through 19 July 2018
Funder
Swedish Foundation for Strategic Research; Knut and Alice Wallenberg Foundation
Note

QC 20190402

Available from: 2019-04-02. Created: 2019-04-02. Last updated: 2019-05-22. Bibliographically approved.
Mancini, M., Karaoǧuz, H., Ricci, E., Jensfelt, P. & Caputo, B. (2018). Kitting in the Wild through Online Domain Adaptation. In: Maciejewski, A. A., Okamura, A., Bicchi, A., Stachniss, C., et al. (Eds.), 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 01-05, 2018, Madrid, SPAIN (pp. 1103-1109). IEEE
Kitting in the Wild through Online Domain Adaptation
2018 (English). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Maciejewski, A. A., Okamura, A., Bicchi, A., Stachniss, C., et al., IEEE, 2018, p. 1103-1109. Conference paper, Published paper (Refereed).
Abstract [en]

Technological developments call for increasing the perception and action capabilities of robots. Among other skills, vision systems are needed that can adapt to any possible change in the working conditions. Since these conditions are unpredictable, we need benchmarks that allow us to assess the generalization and robustness of our visual recognition algorithms. In this work we focus on robotic kitting in unconstrained scenarios. As a first contribution, we present a new visual dataset for the kitting task. Unlike standard object recognition datasets, we provide images of the same objects acquired under various conditions in which camera, illumination, and background change. This novel dataset allows testing the robustness of robot visual recognition algorithms against a series of different domain shifts, both in isolation and combined. Our second contribution is a novel online adaptation algorithm for deep models, based on batch-normalization layers, which continuously adapts a model to the current working conditions. Unlike standard domain adaptation algorithms, it does not require any image from the target domain at training time. We benchmark the performance of the algorithm on the proposed dataset, showing its ability to close the gap between a standard architecture and a counterpart adapted offline to the given target domain.
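
The batch-normalization mechanism behind such online adaptation can be sketched in a few lines of PyTorch: keep the model's weights frozen but let BN layers update their running statistics from incoming target-domain batches. This is the generic mechanism with assumed hyperparameters, not the paper's exact algorithm:

import torch

def enable_online_bn_adaptation(model, momentum=0.1):
    """Put only the BatchNorm layers in training mode so their running
    statistics keep tracking the incoming (target-domain) batches."""
    for m in model.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            m.train()
            m.momentum = momentum   # adaptation speed (assumed value)

@torch.no_grad()
def adapted_predict(model, batch):
    model.eval()                         # freeze everything...
    enable_online_bn_adaptation(model)   # ...except BN statistics
    return model(batch)                  # weights untouched, stats adapt

# Toy usage: a small conv net on a random target-domain batch.
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3), torch.nn.BatchNorm2d(8), torch.nn.ReLU())
out = adapted_predict(net, torch.randn(16, 3, 32, 32))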

Place, publisher, year, edition, pages
IEEE, 2018
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-246309 (URN); 10.1109/IROS.2018.8593862 (DOI); 000458872701034 (ISI); 2-s2.0-85063002869 (Scopus ID); 978-1-5386-8094-0 (ISBN)
Conference
25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 01-05, 2018, Madrid, SPAIN
Note

QC 20190319

Available from: 2019-03-19. Created: 2019-03-19. Last updated: 2019-05-16. Bibliographically approved.
Brucker, M., Durner, M., Ambrus, R., Marton, Z. C., Wendt, A., Jensfelt, P., . . . Triebel, R. (2018). Semantic Labeling of Indoor Environments from 3D RGB Maps. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), MAY 21-25, 2018, Brisbane, AUSTRALIA (pp. 1871-1878). IEEE Computer Society
Semantic Labeling of Indoor Environments from 3D RGB Maps
2018 (English). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, p. 1871-1878. Conference paper, Published paper (Refereed).
Abstract [en]

We present an approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments. Evidence for the room types is generated by state-of-the-art deep-learning techniques for scene classification and object detection, applied to automatically generated virtual RGB views, as well as by a geometric analysis of the map's 3D structure. The evidence is merged in a conditional random field, using statistics mined from different datasets of indoor environments. We evaluate our approach qualitatively and quantitatively and compare it to related methods.
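
As a rough illustration of how such evidence can be merged, the sketch below combines per-room log-probabilities into unary potentials and runs a naive iterated-conditional-modes pass with pairwise room-adjacency statistics; all numbers, names, and the inference scheme are assumptions, not the paper's CRF:

import numpy as np

ROOMS = ["kitchen", "bathroom", "bedroom", "living_room"]

def unary(scene_logp, object_logp, geom_logp, w=(1.0, 1.0, 0.5)):
    """Weighted combination of the three per-room evidence sources."""
    return w[0] * scene_logp + w[1] * object_logp + w[2] * geom_logp

def map_labels(unaries, adjacency, pairwise_logp, iters=10):
    """Greedy MAP by iterated conditional modes (a stand-in for CRF inference)."""
    labels = [int(np.argmax(u)) for u in unaries]
    for _ in range(iters):
        for i, u in enumerate(unaries):
            scores = u.copy()
            for j in adjacency[i]:
                scores = scores + pairwise_logp[:, labels[j]]
            labels[i] = int(np.argmax(scores))
    return [ROOMS[l] for l in labels]

# Two adjacent rooms; the pairwise stats discourage bathroom-next-to-bathroom.
pw = np.log(np.array([[.2, .3, .3, .2], [.3, .1, .3, .3],
                      [.3, .3, .2, .2], [.2, .3, .2, .3]]) + 1e-9)
u = [np.array([0.1, 1.2, 0.2, 0.1]), np.array([0.2, 0.9, 0.8, 0.1])]
print(map_labels(u, {0: [1], 1: [0]}, pw))   # ['bedroom', 'bathroom']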

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-237161 (URN); 000446394501066 (ISI); 2-s2.0-85063131122 (Scopus ID); 978-1-5386-3081-5 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA), MAY 21-25, 2018, Brisbane, AUSTRALIA
Funder
Swedish Research Council, C0475401; Swedish Foundation for Strategic Research
Note

QC 20181024

Available from: 2018-10-24. Created: 2018-10-24. Last updated: 2019-06-12. Bibliographically approved.
Duberg, D. & Jensfelt, P. (2018). The Obstacle-restriction Method for Tele-operation of Unmanned Aerial Vehicles with Restricted Motion. In: 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV). Paper presented at 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), NOV 18-21, 2018, Singapore, SINGAPORE (pp. 266-273). IEEE
The Obstacle-restriction Method for Tele-operation of Unmanned Aerial Vehicles with Restricted Motion
2018 (English). In: 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), IEEE, 2018, p. 266-273. Conference paper, Published paper (Refereed).
Abstract [en]

This paper presents a collision avoidance method for tele-operated unmanned aerial vehicles (UAVs). The method is designed to assist the operator at all times, so that the operator can focus solely on the main objectives instead of on avoiding obstacles. We restrict the altitude to be fixed in a three-dimensional environment to simplify the control and operation of the UAV. The method contributes a number of desirable properties not found in other collision avoidance systems for tele-operated UAVs. Our method i) handles situations with no input from the user by actively stopping and then proceeding to avoid obstacles, ii) allows the operator to slide between prioritizing keeping clear of objects and, when required, safely getting close to them, and iii) provides intuitive control by not deviating too far from the operator's control input. We demonstrate the effectiveness of the method in real-world experiments with a physical hexacopter in different indoor scenarios. We also present simulation results comparing control of the UAV with and without our method activated.
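
As a generic illustration of this kind of assistance (not the paper's obstacle-restriction method itself), a velocity command can be scaled down and deflected away from nearby obstacles, with an active stop-and-retreat when too close; all gains and thresholds below are assumed:

import numpy as np

def assisted_command(v_op, obstacles, pos, d_safe=1.5, d_stop=0.5):
    """Blend the operator's velocity with repulsion from 2-D obstacle points."""
    obstacles = np.atleast_2d(obstacles)
    if obstacles.size == 0:
        return np.asarray(v_op, float)
    vecs = obstacles - pos                     # robot -> obstacle
    dists = np.linalg.norm(vecs, axis=1)
    nearest = dists.min()
    if nearest < d_stop:                       # too close: stop and back away
        return -0.5 * vecs[dists.argmin()] / nearest
    repulse = np.zeros(2)
    for v, d in zip(vecs, dists):
        repulse += (-v / d**2) * max(0.0, (d_safe - d) / d_safe)
    scale = np.clip((nearest - d_stop) / (d_safe - d_stop), 0.0, 1.0)
    return scale * np.asarray(v_op, float) + repulse

# Operator flies straight at an obstacle 1 m ahead: the command is slowed
# and deflected rather than executed blindly.
print(assisted_command([1.0, 0.0], [[1.0, 0.1]], np.array([0.0, 0.0])))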

Place, publisher, year, edition, pages
IEEE, 2018
Series
International Conference on Control Automation Robotics and Vision, ISSN 2474-2953
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-246315 (URN); 000459847700046 (ISI); 978-1-5386-9582-1 (ISBN)
Conference
15th International Conference on Control, Automation, Robotics and Vision (ICARCV), NOV 18-21, 2018, Singapore, SINGAPORE
Note

QC 20190319

Available from: 2019-03-19. Created: 2019-03-19. Last updated: 2019-05-13. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-1170-7162
