Publications (10 of 136)
Bore, N., Ekekrantz, J., Jensfelt, P. & Folkesson, J. (2019). Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps. IEEE Transactions on Robotics, 35(1), 231-247
Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps
2019 (English) In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no. 1, pp. 231-247. Journal article (Refereed) Published
Abstract [en]

This paper studies the problem of detection and tracking of general objects with semistatic dynamics observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, the robot can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
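As a rough Python illustration of the filter structure described above (sampled global moves, analytic Kalman tracking of local motion), consider the sketch below; the 2-D state, the jump model, and all parameter values are illustrative assumptions, not the paper's implementation.

    import numpy as np

    # Rao-Blackwellized sketch: global (jump) moves are sampled, while the
    # local motion is tracked analytically with a Kalman filter.
    P_JUMP = 0.05                 # assumed prior probability of a global move
    Q_LOCAL = 0.01 * np.eye(2)    # local random-walk motion noise
    R_MEAS = 0.05 * np.eye(2)     # measurement noise

    def kalman_step(mean, cov, z):
        """One predict + update step for a 2-D random-walk object."""
        cov = cov + Q_LOCAL                      # predict: local drift
        K = cov @ np.linalg.inv(cov + R_MEAS)    # Kalman gain
        return mean + K @ (z - mean), (np.eye(2) - K) @ cov

    def particle_step(mean, cov, z, rng):
        """Sample the global move, then track the local move in closed form."""
        if rng.random() < P_JUMP:
            mean, cov = z.copy(), np.eye(2)      # jump: re-initialize locally
        return kalman_step(mean, cov, z)

    rng = np.random.default_rng(0)
    mean, cov = np.zeros(2), np.eye(2)
    for z in [np.array([0.1, 0.0]), np.array([0.2, 0.1]), np.array([3.0, 2.0])]:
        mean, cov = particle_step(mean, cov, z, rng)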

Place, publisher, year, edition, pages
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2019
Keywords
Dynamic mapping, mobile robot, movable objects, multitarget tracking (MTT), Rao-Blackwellized particle filter (RBPF), service robots
National subject category
Robotics and Automation
Identifiers
urn:nbn:se:kth:diva-245151 (URN), 10.1109/TRO.2018.2876111 (DOI), 000458197300017 (), 2-s2.0-85057204782 (Scopus ID)
Note

QC 20190313

Available from: 2019-03-13 Created: 2019-03-13 Last updated: 2019-03-18 Bibliographically reviewed
Selin, M., Tiger, M., Duberg, D., Heintz, F. & Jensfelt, P. (2019). Efficient Autonomous Exploration Planning of Large-Scale 3-D Environments. IEEE Robotics and Automation Letters, 4(2), 1699-1706
Efficient Autonomous Exploration Planning of Large-Scale 3-D Environments
2019 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, pp. 1699-1706. Journal article (Refereed) Published
Abstract [en]

Exploration is an important aspect of robotics, whether it is for mapping, rescue missions, or path planning in an unknown environment. Frontier Exploration Planning (FEP) and Receding Horizon Next-Best-View Planning (RH-NBVP) are two approaches with different strengths and weaknesses. FEP explores a large environment consisting of separate regions with ease, but is slow at reaching full exploration due to moving back and forth between regions. RH-NBVP shows great potential and efficiently explores individual regions, but can get stuck in large environments, failing to explore all regions. In this letter, we present a method that combines both approaches, with FEP as a global exploration planner and RH-NBVP for local exploration. We also present techniques to estimate potential information gain faster, to cache previously estimated gains, and to exploit these to efficiently estimate new queries.
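The global/local split can be pictured as a two-level loop: take a receding-horizon next-best-view step while the local neighbourhood still yields information gain, and relocate to a frontier when it does not. A minimal sketch under assumed names (estimate_gain, the threshold, and the dict-based gain cache are all hypothetical):

    gain_cache = {}

    def cached_gain(view, estimate_gain):
        """Reuse a previously estimated information gain for a candidate view."""
        if view not in gain_cache:
            gain_cache[view] = estimate_gain(view)
        return gain_cache[view]

    def plan_step(local_views, frontiers, estimate_gain, gain_threshold=1.0):
        """Local next-best-view step if worthwhile, else a global frontier goal."""
        best = max(local_views, default=None,
                   key=lambda v: cached_gain(v, estimate_gain))
        if best is not None and cached_gain(best, estimate_gain) > gain_threshold:
            return best                               # RH-NBVP-style local step
        return frontiers[0] if frontiers else None    # FEP-style relocation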

Place, publisher, year, edition, pages
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2019
Keywords
Search and rescue robots, motion and path planning, mapping
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-246228 (URN), 10.1109/LRA.2019.2897343 (DOI), 000459538100069 (), 2-s2.0-85063311333 (Scopus ID)
Note

QC 20190404

Available from: 2019-04-04 Created: 2019-04-04 Last updated: 2019-04-04 Bibliographically reviewed
Barbosa, F. S., Duberg, D., Jensfelt, P. & Tumova, J. (2019). Guiding Autonomous Exploration with Signal Temporal Logic. IEEE Robotics and Automation Letters, 4(4), 3332-3339
Guiding Autonomous Exploration with Signal Temporal Logic
2019 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 4, pp. 3332-3339. Journal article (Refereed) Published
Abstract [en]

Algorithms for autonomous robotic exploration usually focus on optimizing time and coverage, often in a greedy fashion. However, obstacle inflation is conservative and might limit mapping capabilities and even prevent the robot from moving through narrow, important places. This letter proposes a method to influence the manner in which the robot moves in the environment by taking into consideration a user-defined spatial preference formulated in a fragment of signal temporal logic (STL). We propose to guide the motion planning toward minimizing the violation of such a preference through a cost function that integrates the quantitative semantics, i.e., robustness, of STL. To demonstrate the effectiveness of the proposed approach, we integrate it into the autonomous exploration planner (AEP). Results from simulations and real-world experiments are presented, highlighting the benefits of our approach.
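For a concrete instance of the quantitative semantics: if the spatial preference is "always keep at least d clearance", written G(clearance >= d) in STL, its robustness over a trajectory is the worst-case margin, and the planner can penalize its violation in the cost function. A sketch under these assumptions (the clearance function and the weight are hypothetical):

    def robustness_always_clearance(trajectory, clearance, d):
        """rho(G(clearance >= d)) = min over the trajectory of clearance(x) - d."""
        return min(clearance(x) - d for x in trajectory)

    def guided_cost(trajectory, path_cost, clearance, d, weight=10.0):
        """Path cost plus a penalty that grows with the STL violation."""
        rho = robustness_always_clearance(trajectory, clearance, d)
        return path_cost(trajectory) + weight * max(0.0, -rho)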

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
Keywords
Mapping, motion and path planning, formal methods in robotics and automation, search and rescue robots
National subject category
Robotics and Automation
Identifiers
urn:nbn:se:kth:diva-255721 (URN), 10.1109/LRA.2019.2926669 (DOI), 000476791300029 (), 2-s2.0-85069437912 (Scopus ID)
Note

QC 20190813

Available from: 2019-08-13 Created: 2019-08-13 Last updated: 2019-08-13 Bibliographically reviewed
Tang, J., Folkesson, J. & Jensfelt, P. (2019). Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction. IEEE Robotics and Automation Letters, 4(2), 530-537
Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction
2019 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, pp. 530-537. Journal article (Refereed) Published
Abstract [en]

In this letter, we propose a new deep-learning-based dense monocular simultaneous localization and mapping (SLAM) method. Compared to existing methods, the proposed framework constructs a dense three-dimensional (3-D) model via sparse-to-dense mapping using learned surface normals. With single-view learned depth estimation as a prior for monocular visual odometry, we obtain both accurate positioning and high-quality depth reconstruction. The depth and normals are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction in comparison to the state of the art in deep monocular dense SLAM.
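One way to read "sparse-to-dense mapping using learned surface normals" geometrically: under a local planarity assumption, a known depth at one pixel plus the predicted surface normal determines the depth at a neighbouring pixel. A sketch of that single propagation step (the intrinsics and all values are illustrative, and this is a geometric reading, not the paper's network):

    import numpy as np

    K = np.array([[525.0, 0.0, 320.0],     # assumed pinhole intrinsics
                  [0.0, 525.0, 240.0],
                  [0.0,   0.0,   1.0]])
    K_inv = np.linalg.inv(K)

    def propagate_depth(p, depth_p, normal, q):
        """Depth at pixel q, assuming p and q lie on the plane through the
        back-projected point of p with the given unit normal."""
        ray_p = K_inv @ np.array([p[0], p[1], 1.0])
        ray_q = K_inv @ np.array([q[0], q[1], 1.0])
        point_p = depth_p * ray_p                    # back-projected 3-D point
        # Plane: normal . X = normal . point_p; solve for the scale along ray_q.
        return float(normal @ point_p) / float(normal @ ray_q)

    # A fronto-parallel surface (normal along -z) keeps the depth constant:
    print(propagate_depth((320, 240), 2.0, np.array([0.0, 0.0, -1.0]), (321, 240)))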

Place, publisher, year, edition, pages
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2019
Keywords
Visual-based navigation, SLAM, deep learning in robotics and automation
National subject category
Robotics and Automation
Identifiers
urn:nbn:se:kth:diva-243927 (URN), 10.1109/LRA.2019.2891433 (DOI), 000456673300007 ()
Available from: 2019-03-13 Created: 2019-03-13 Last updated: 2019-03-13 Bibliographically reviewed
Chen, X., Ghadirzadeh, A., Folkesson, J., Björkman, M. & Jensfelt, P. (2018). Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments
2018 (English) In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Mobile robot navigation in complex and dynamic environments is a challenging but important problem. Reinforcement learning approaches fail to solve these tasks efficiently due to the reward sparsity, temporal complexity, and high dimensionality of sensorimotor spaces inherent in such problems. We present a novel approach to train action policies to acquire navigation skills for wheel-legged robots using deep reinforcement learning. The policy maps height-map image observations to motor commands to navigate to a target position while avoiding obstacles. We propose to acquire the multifaceted navigation skill by learning and exploiting a number of manageable navigation behaviors. We also introduce a domain randomization technique to improve the versatility of the training samples. We experimentally demonstrate a significant improvement in terms of data efficiency, success rate, robustness against irrelevant sensory data, and the quality of the maneuver skills.
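The domain randomization technique mentioned above can be sketched in a few lines: resample simulator parameters for every training episode so the policy cannot overfit a single configuration. The parameter names, their ranges, and the commented-out env.reset interface are hypothetical:

    import random

    def randomized_episode_params(rng):
        """Draw fresh environment parameters for one training episode."""
        return {
            "terrain_roughness": rng.uniform(0.0, 0.3),   # height-map noise
            "obstacle_count":    rng.randint(0, 20),
            "friction":          rng.uniform(0.4, 1.2),
            "target_distance":   rng.uniform(2.0, 10.0),
        }

    rng = random.Random(0)
    for episode in range(3):
        params = randomized_episode_params(rng)
        # env.reset(**params)  # then run the RL rollout (omitted)
        print(episode, params)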

National subject category
Robotics and Automation
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-256310 (URN), 10.1109/IROS.2018.8593702 (DOI), 2-s2.0-85062964303 (Scopus ID)
Conference
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Research funder
EU, Horizon 2020, 644839
Note

QC 20190902

Available from: 2019-08-21 Created: 2019-08-21 Last updated: 2019-09-02 Bibliographically reviewed
Tang, J., Folkesson, J. & Jensfelt, P. (2018). Geometric Correspondence Network for Camera Motion Estimation. IEEE Robotics and Automation Letters, 3(2), 1010-1017
Geometric Correspondence Network for Camera Motion Estimation
2018 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no. 2, pp. 1010-1017. Journal article (Refereed) Published
Abstract [en]

In this paper, we propose a new learning scheme for generating geometric correspondences to be used for visual odometry. A convolutional neural network (CNN) and a recurrent neural network (RNN) are trained together to detect the location of keypoints as well as to generate corresponding descriptors in one unified structure. The network is optimized by warping points from the source frame to the reference frame with a rigid-body transform; essentially, the network learns from warping. The overall training is focused on movements of the camera rather than movements within the image, which leads to better consistency in the matching and ultimately better motion estimation. Experimental results show that the proposed method achieves better results than both related deep-learning and hand-crafted methods. Furthermore, as a demonstration of the promise of our method, we use a naive SLAM implementation based on these keypoints and obtain performance on par with ORB-SLAM.
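The warping supervision amounts to one equation: a source pixel with depth d maps into the reference frame as x_ref ~ K (R d K^{-1} x_src + t), and the descriptors are trained to match at the warped location. A sketch with illustrative intrinsics and pose (none of these values are from the paper):

    import numpy as np

    K = np.array([[525.0, 0.0, 320.0],     # assumed pinhole intrinsics
                  [0.0, 525.0, 240.0],
                  [0.0,   0.0,   1.0]])

    def warp_pixel(x_src, depth, R, t):
        """Warp a source pixel into the reference frame via depth and (R, t)."""
        p_src = depth * np.linalg.inv(K) @ np.array([x_src[0], x_src[1], 1.0])
        p_ref = R @ p_src + t                 # rigid-body transform
        x_ref = K @ p_ref
        return x_ref[:2] / x_ref[2]           # perspective division

    R, t = np.eye(3), np.array([0.1, 0.0, 0.0])   # small sideways motion
    print(warp_pixel((320, 240), 2.0, R, t))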

Place, publisher, year, edition, pages
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2018
Keywords
Visual-based navigation, SLAM, deep learning in robotics and automation
National subject category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-223775 (URN), 10.1109/LRA.2018.2794624 (DOI), 000424646100022 (), 2-s2.0-85063305858 (Scopus ID)
Note

QC 20180307

Available from: 2018-03-07 Created: 2018-03-07 Last updated: 2019-05-16 Bibliographically reviewed
Kragic, D., Gustafson, J., Karaoǧuz, H., Jensfelt, P. & Krug, R. (2018). Interactive, collaborative robots: Challenges and opportunities. In: IJCAI International Joint Conference on Artificial Intelligence. Paper presented at 27th International Joint Conference on Artificial Intelligence, IJCAI 2018; Stockholm; Sweden; 13 July 2018 through 19 July 2018 (pp. 18-25). International Joint Conferences on Artificial Intelligence
Interactive, collaborative robots: Challenges and opportunities
2018 (English) In: IJCAI International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence, 2018, pp. 18-25. Conference paper, Published paper (Refereed)
Abstract [en]

Robotic technology has transformed the manufacturing industry ever since the first industrial robot was put into use in the early 1960s. The challenge of developing flexible solutions, where production lines can be quickly re-planned, adapted, and structured for new or slightly changed products, remains an important open problem. Industrial robots today are still largely preprogrammed for their tasks, unable to detect errors in their own performance or to robustly interact with a complex environment and a human worker. The challenges are even more serious when it comes to various types of service robots. Full robot autonomy, including natural interaction, learning from and with humans, and safe and flexible performance of challenging tasks in unstructured environments, will remain out of reach for the foreseeable future. In the envisioned future factory setups and home and office environments, humans and robots will share the same workspace and perform different object manipulation tasks in a collaborative manner. We discuss some of the major challenges of developing such systems and provide examples of the current state of the art.

Place, publisher, year, edition, pages
International Joint Conferences on Artificial Intelligence, 2018
Keywords
Artificial intelligence, Industrial robots, Collaborative robots, Complex environments, Manufacturing industries, Natural interactions, Object manipulation, Office environments, Robotic technologies, Unstructured environments, Human robot interaction
National subject category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-247239 (URN), 2-s2.0-85055718956 (Scopus ID), 9780999241127 (ISBN)
Conference
27th International Joint Conference on Artificial Intelligence, IJCAI 2018; Stockholm; Sweden; 13 July 2018 through 19 July 2018
Research funder
Swedish Foundation for Strategic Research (SSF), Knut and Alice Wallenberg Foundation
Note

QC 20190402

Available from: 2019-04-02 Created: 2019-04-02 Last updated: 2019-05-22 Bibliographically reviewed
Mancini, M., Karaoǧuz, H., Ricci, E., Jensfelt, P. & Caputo, B. (2018). Kitting in the Wild through Online Domain Adaptation. In: Maciejewski, A. A., et al. (Eds.), 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 01-05, 2018, Madrid, SPAIN (pp. 1103-1109). IEEE
Kitting in the Wild through Online Domain Adaptation
2018 (English) In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Maciejewski, A. A., et al., IEEE, 2018, pp. 1103-1109. Conference paper, Published paper (Refereed)
Abstract [en]

Technological developments call for increasing perception and action capabilities of robots. Among other skills, vision systems that can adapt to any possible change in the working conditions are needed. Since these conditions are unpredictable, we need benchmarks that allow us to assess the generalization and robustness capabilities of our visual recognition algorithms. In this work we focus on robotic kitting in unconstrained scenarios. As a first contribution, we present a new visual dataset for the kitting task. Differently from standard object recognition datasets, we provide images of the same objects acquired under various conditions where camera, illumination, and background are changed. This novel dataset allows for testing the robustness of robot visual recognition algorithms to a series of different domain shifts, both in isolation and combined. Our second contribution is a novel online adaptation algorithm for deep models, based on batch-normalization layers, which allows a model to be continuously adapted to the current working conditions. Differently from standard domain adaptation algorithms, it does not require any image from the target domain at training time. We benchmark the performance of the algorithm on the proposed dataset, showing its capability to close the gap between the performance of a standard architecture and that of its counterpart adapted offline to the given target domain.
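A common realization of adaptation through batch-normalization layers, sketched below in PyTorch: keep the BN layers in training mode at deployment so their running statistics track the current working conditions, while all learned weights stay frozen. The helper name and the momentum value are assumptions, not the paper's code:

    import torch

    def enable_online_bn_adaptation(model, momentum=0.1):
        """Freeze the weights; let batch-norm statistics adapt online."""
        model.eval()                              # freeze dropout etc.
        for m in model.modules():
            if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
                m.train()                         # BN keeps updating running stats
                m.momentum = momentum
        for p in model.parameters():
            p.requires_grad_(False)               # statistics only, no gradients
        return model

    # Usage: adapt the model, then run inference batch-by-batch as images
    # from the new camera/illumination/background conditions arrive.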

Place, publisher, year, edition, pages
IEEE, 2018
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-246309 (URN), 10.1109/IROS.2018.8593862 (DOI), 000458872701034 (), 2-s2.0-85063002869 (Scopus ID), 978-1-5386-8094-0 (ISBN)
Conference
25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 01-05, 2018, Madrid, SPAIN
Note

QC 20190319

Available from: 2019-03-19 Created: 2019-03-19 Last updated: 2019-05-16 Bibliographically reviewed
Brucker, M., Durner, M., Ambrus, R., Marton, Z. C., Wendt, A., Jensfelt, P., . . . Triebel, R. (2018). Semantic Labeling of Indoor Environments from 3D RGB Maps. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), MAY 21-25, 2018, Brisbane, AUSTRALIA (pp. 1871-1878). IEEE Computer Society
Semantic Labeling of Indoor Environments from 3D RGB Maps
2018 (English) In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, pp. 1871-1878. Conference paper, Published paper (Refereed)
Abstract [en]

We present an approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments. Evidence for the room types is generated using state-of-the-art deep-learning techniques for scene classification and object detection based on automatically generated virtual RGB views, as well as from a geometric analysis of the map's 3D structure. The evidence is merged in a conditional random field, using statistics mined from different datasets of indoor environments. We evaluate our approach qualitatively and quantitatively and compare it to related methods.
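A toy version of the evidence merging: unary scores per room come from the scene, object, and geometry cues, a pairwise term encodes mined statistics on which room types are adjacent, and MAP inference picks the joint labeling. Everything below (labels, probabilities, the brute-force search) is illustrative; a real system would use a proper CRF solver:

    import itertools
    import numpy as np

    LABELS = ["kitchen", "bathroom", "bedroom"]

    def map_labeling(unary, adjacency, pairwise):
        """Brute-force MAP over a small room graph (fine for a few rooms)."""
        n = len(unary)
        best, best_score = None, -np.inf
        for assign in itertools.product(range(len(LABELS)), repeat=n):
            score = sum(unary[i][assign[i]] for i in range(n))
            score += sum(pairwise[assign[i]][assign[j]] for i, j in adjacency)
            if score > best_score:
                best, best_score = assign, score
        return [LABELS[k] for k in best]

    unary = np.log(np.array([[0.7, 0.2, 0.1],      # room 0 looks like a kitchen
                             [0.3, 0.4, 0.3]]))    # room 1 is ambiguous
    pairwise = np.log(np.array([[0.1, 0.6, 0.3],   # mined adjacency statistics
                                [0.6, 0.1, 0.3],
                                [0.3, 0.3, 0.4]]))
    print(map_labeling(unary, [(0, 1)], pairwise))  # ['kitchen', 'bathroom']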

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-237161 (URN), 000446394501066 (), 2-s2.0-85063131122 (Scopus ID), 978-1-5386-3081-5 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA), MAY 21-25, 2018, Brisbane, AUSTRALIA
Research funder
Swedish Research Council, C0475401; Swedish Foundation for Strategic Research (SSF)
Note

QC 20181024

Available from: 2018-10-24 Created: 2018-10-24 Last updated: 2019-06-12 Bibliographically reviewed
Duberg, D. & Jensfelt, P. (2018). The Obstacle-restriction Method for Tele-operation of Unmanned Aerial Vehicles with Restricted Motion. In: 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV). Paper presented at 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), NOV 18-21, 2018, Singapore, SINGAPORE (pp. 266-273). IEEE
The Obstacle-restriction Method for Tele-operation of Unmanned Aerial Vehicles with Restricted Motion
2018 (English) In: 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), IEEE, 2018, pp. 266-273. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a collision avoidance method for tele-operated unmanned aerial vehicles (UAVs). The method is designed to assist the operator at all times, such that the operator can focus solely on the main objectives instead of avoiding obstacles. We restrict the altitude to be fixed in a three dimensional environment to simplify the control and operation of the UAV. The method contributes a number of desired properties not found in other collision avoidance systems for tele-operated UAVs. Our method i) can handle situations where there is no input from the user by actively stopping and proceeding to avoid obstacles, ii) allows the operator to slide between prioritizing staying away from objects and getting close to them in a safe way when so required, and iii) provides for intuitive control by not deviating too far from the control input of the operator. We demonstrate the effectiveness of the method in real world experiments with a physical hexacopter in different indoor scenarios. We also present simulation results where we compare controlling the UAV with and without our method activated.
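The three listed properties can be illustrated with a toy velocity filter: it acts even with zero operator input, exposes a slider between keeping clearance and allowing safe close approach, and otherwise stays close to the operator's command. Gains and distances below are assumptions; the paper's obstacle-restriction method itself is more involved:

    import numpy as np

    def filter_command(v_op, obstacle_dirs, obstacle_dists,
                       caution=0.5, d_safe=1.0):
        """Blend the operator command v_op with a repulsive avoidance term.
        caution in [0, 1]: 1 = prioritize clearance, 0 = allow close approach."""
        v = np.asarray(v_op, dtype=float)
        for direction, dist in zip(obstacle_dirs, obstacle_dists):
            if dist < d_safe:
                # Push away from the obstacle, harder when closer and
                # when the caution slider is set high.
                v -= caution * (d_safe - dist) / d_safe * np.asarray(direction)
        return v

    # With no operator input, the repulsion alone moves the UAV clear:
    print(filter_command([0.0, 0.0], [[1.0, 0.0]], [0.4], caution=0.8))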

Place, publisher, year, edition, pages
IEEE, 2018
Series
International Conference on Control Automation Robotics and Vision, ISSN 2474-2953
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-246315 (URN), 000459847700046 (), 978-1-5386-9582-1 (ISBN)
Conference
15th International Conference on Control, Automation, Robotics and Vision (ICARCV), NOV 18-21, 2018, Singapore, SINGAPORE
Note

QC 20190319

Available from: 2019-03-19 Created: 2019-03-19 Last updated: 2019-05-13 Bibliographically reviewed
Organisations
Identifiers
ORCID iD: orcid.org/0000-0002-1170-7162
