Publications (10 of 136)
Bore, N., Ekekrantz, J., Jensfelt, P. & Folkesson, J. (2019). Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps. IEEE Transactions on Robotics, 35(1), 231-247
Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps
2019 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no. 1, pp. 231-247. Article in journal (Refereed). Published
Abstract [en]

This paper studies the problem of detection and tracking of general objects with semistatic dynamics observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, the robot can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
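
The local half of this decomposition can be pictured with a minimal sketch: each object's local drift is tracked analytically by a small Kalman filter, while the global jumps and data associations would be handled by a separate sampling step (not shown). All names and noise parameters below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class LocalKalmanTrack:
    """Constant-position Kalman filter for one object's local drift.

    Illustrative sketch only: the paper couples many such analytic tracks
    with sampled global-motion ("jump") hypotheses in a Rao-Blackwellized
    particle filter; that sampling step is omitted here.
    """
    def __init__(self, pos, q=0.01, r=0.05):
        self.x = np.asarray(pos, dtype=float)  # estimated 2-D position
        self.P = np.eye(2) * 0.1               # estimate covariance
        self.Q = np.eye(2) * q                 # process noise (local drift)
        self.R = np.eye(2) * r                 # measurement noise

    def predict(self):
        # Constant-position motion model: mean unchanged, uncertainty grows.
        self.P = self.P + self.Q

    def update(self, z):
        # Standard Kalman update with an identity measurement model.
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.x)
        self.P = (np.eye(2) - K) @ self.P

track = LocalKalmanTrack([1.0, 2.0])
track.predict()
track.update([1.1, 2.05])   # object observed slightly moved
print(track.x)
```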

Place, publisher, year, edition, pages
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2019
Keywords
Dynamic mapping, mobile robot, movable objects, multitarget tracking (MTT), Rao-Blackwellized particle filter (RBPF), service robots
Identifiers
urn:nbn:se:kth:diva-245151 (URN), 10.1109/TRO.2018.2876111 (DOI), 000458197300017 (), 2-s2.0-85057204782 (Scopus ID)
Note

QC 20190313

Available from: 2019-03-13. Created: 2019-03-13. Last updated: 2019-03-18. Bibliographically approved
Selin, M., Tiger, M., Duberg, D., Heintz, F. & Jensfelt, P. (2019). Efficient Autonomous Exploration Planning of Large-Scale 3-D Environments. IEEE Robotics and Automation Letters, 4(2), 1699-1706
Efficient Autonomous Exploration Planning of Large-Scale 3-D Environments
2019 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, pp. 1699-1706. Article in journal (Refereed). Published
Abstract [en]

Exploration is an important aspect of robotics, whether it is for mapping, rescue missions, or path planning in an unknown environment. Frontier Exploration Planning (FEP) and Receding Horizon Next-Best-View Planning (RH-NBVP) are two different approaches with different strengths and weaknesses. FEP explores a large environment consisting of separate regions with ease, but is slow to reach full exploration because it moves back and forth between regions. RH-NBVP shows great potential and efficiently explores individual regions, but has the disadvantage that it can get stuck in large environments without exploring all regions. In this letter, we present a method that combines both approaches, with FEP as a global exploration planner and RH-NBVP for local exploration. We also present techniques to estimate potential information gain faster, to cache previously estimated gains, and to exploit these caches to answer new queries efficiently.
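
The gain-caching idea lends itself to a short sketch: estimated information gains are stored per position, and queries near an already-evaluated pose reuse the stored value instead of re-running the expensive estimator. The spatial-hash resolution and API below are assumptions for illustration, not the authors' code.

```python
import math

class GainCache:
    """Caches estimated information gain per position (illustrative sketch).

    Queries falling in an already-evaluated spatial cell reuse the stored
    gain instead of re-running the expensive estimate; in a real system
    entries would also be invalidated as the map changes.
    """
    def __init__(self, cell=0.5):
        self.cell = cell   # spatial hashing resolution in meters (assumed)
        self.cache = {}    # cell index -> cached gain estimate

    def _key(self, p):
        return tuple(int(math.floor(c / self.cell)) for c in p)

    def estimate(self, p, expensive_gain_fn):
        key = self._key(p)
        if key in self.cache:
            return self.cache[key]           # cheap cached answer
        gain = expensive_gain_fn(p)          # e.g., raycasting-based estimate
        self.cache[key] = gain
        return gain

cache = GainCache()
g1 = cache.estimate((3.2, 1.0, 1.5), lambda p: 42.0)  # computed once
g2 = cache.estimate((3.3, 1.1, 1.6), lambda p: 0.0)   # served from cache
print(g1, g2)  # 42.0 42.0
```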

Place, publisher, year, edition, pages
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2019
Keywords
Search and rescue robots, motion and path planning, mapping
Identifiers
urn:nbn:se:kth:diva-246228 (URN), 10.1109/LRA.2019.2897343 (DOI), 000459538100069 (), 2-s2.0-85063311333 (Scopus ID)
Note

QC 20190404

Available from: 2019-04-04. Created: 2019-04-04. Last updated: 2019-04-04. Bibliographically approved
Barbosa, F. S., Duberg, D., Jensfelt, P. & Tumova, J. (2019). Guiding Autonomous Exploration with Signal Temporal Logic. IEEE Robotics and Automation Letters, 4(4), 3332-3339
Guiding Autonomous Exploration with Signal Temporal Logic
2019 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 4, pp. 3332-3339. Article in journal (Refereed). Published
Abstract [en]

Algorithms for autonomous robotic exploration usually focus on optimizing time and coverage, often in a greedy fashion. However, obstacle inflation is conservative and might limit mapping capabilities and even prevent the robot from moving through narrow, important places. This letter proposes a method to influence the manner in which the robot moves through the environment by taking into consideration a user-defined spatial preference formulated in a fragment of signal temporal logic (STL). We propose to guide the motion planning toward minimizing the violation of such a preference through a cost function that integrates the quantitative semantics, i.e., the robustness, of STL. To demonstrate the effectiveness of the proposed approach, we integrate it into the autonomous exploration planner (AEP). Results from simulations and real-world experiments are presented, highlighting the benefits of our approach.
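
For intuition, here is a minimal sketch of STL quantitative semantics (robustness) for the "globally" and "eventually" operators over a sampled signal; a planner in this spirit can penalize negative robustness in its cost function. The clearance predicate and all numbers are invented for illustration.

```python
# Quantitative (robustness) semantics of two STL operators, as a sketch.
# rho > 0 means the signal satisfies the formula, with margin rho.

def rho_globally(signal, t0, t1, f):
    """G_[t0,t1] (f(x) >= 0): worst-case margin over the interval."""
    return min(f(signal[t]) for t in range(t0, t1 + 1))

def rho_eventually(signal, t0, t1, f):
    """F_[t0,t1] (f(x) >= 0): best-case margin over the interval."""
    return max(f(signal[t]) for t in range(t0, t1 + 1))

# Invented example: "always keep at least 0.5 m clearance" along a path.
clearance = [1.2, 0.9, 0.6, 0.8]
margin = rho_globally(clearance, 0, 3, lambda d: d - 0.5)
print(margin)              # 0.1 -> satisfied, with 0.1 m to spare
cost = max(0.0, -margin)   # a planner can penalize violation like this
```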

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
Keywords
Mapping, motion and path planning, formal methods in robotics and automation, search and rescue robots
Identifiers
urn:nbn:se:kth:diva-255721 (URN), 10.1109/LRA.2019.2926669 (DOI), 000476791300029 (), 2-s2.0-85069437912 (Scopus ID)
Note

QC 20190813

Available from: 2019-08-13. Created: 2019-08-13. Last updated: 2019-08-13. Bibliographically approved
Tang, J., Folkesson, J. & Jensfelt, P. (2019). Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction. IEEE Robotics and Automation Letters, 4(2), 530-537
Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction
2019 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, pp. 530-537. Article in journal (Refereed). Published
Abstract [en]

In this letter, we propose a new deep-learning-based dense monocular simultaneous localization and mapping (SLAM) method. Compared to existing methods, the proposed framework constructs a dense three-dimensional (3-D) model via sparse-to-dense mapping using learned surface normals. With single-view learned depth estimation as a prior for monocular visual odometry, we obtain both accurate positioning and high-quality depth reconstruction. The depth and normal are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction in comparison to the state of the art in deep monocular dense SLAM.
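
The sparse-to-dense step can be pictured geometrically: a known depth plus a surface normal define a local plane, from which a neighboring pixel's depth follows in closed form. Below is a sketch of that geometry under an assumed pinhole camera model; it is not the paper's network, only the underlying relation.

```python
import numpy as np

def propagate_depth(K, p, d_p, n, q):
    """Depth at pixel q, assuming q lies on the local plane through
    pixel p (known depth d_p, unit surface normal n).

    Sketch under an assumed pinhole model with intrinsics K.
    """
    Kinv = np.linalg.inv(K)
    P = d_p * (Kinv @ np.array([p[0], p[1], 1.0]))   # back-project p
    r_q = Kinv @ np.array([q[0], q[1], 1.0])          # viewing ray through q
    return float(n @ P) / float(n @ r_q)              # plane-ray intersection

K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])
n = np.array([0.0, 0.0, -1.0])   # fronto-parallel surface
print(propagate_depth(K, (320, 240), 2.0, n, (330, 240)))  # -> 2.0
```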

Place, publisher, year, edition, pages
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2019
Keywords
Visual-based navigation, SLAM, deep learning in robotics and automation
Identifiers
urn:nbn:se:kth:diva-243927 (URN), 10.1109/LRA.2019.2891433 (DOI), 000456673300007 ()
Available from: 2019-03-13. Created: 2019-03-13. Last updated: 2019-03-13. Bibliographically approved
Chen, X., Ghadirzadeh, A., Folkesson, J., Björkman, M. & Jensfelt, P. (2018). Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): . Paper presented at 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments
2018 (English). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Mobile robot navigation in complex and dynamic environments is a challenging but important problem. Reinforcement learning approaches fail to solve these tasks efficiently due to reward sparsity, temporal complexity, and the high dimensionality of the sensorimotor spaces inherent in such problems. We present a novel approach to train action policies that acquire navigation skills for wheel-legged robots using deep reinforcement learning. The policy maps height-map image observations to motor commands to navigate to a target position while avoiding obstacles. We propose to acquire this multifaceted navigation skill by learning and exploiting a number of manageable navigation behaviors. We also introduce a domain randomization technique to improve the versatility of the training samples. We demonstrate experimentally a significant improvement in terms of data efficiency, success rate, robustness against irrelevant sensory data, and the quality of the maneuvering skills.
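
The domain-randomization technique can be sketched as perturbing each training episode's height-map observation so the policy does not overfit a single simulated terrain. The particular perturbations below (height scale, ground tilt, sensor noise) are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def randomize_heightmap(hmap, rng):
    """Per-episode domain randomization of a height-map observation (sketch).

    The chosen perturbations are illustrative; the point is to vary the
    training samples so the learned policy generalizes beyond a single
    simulated terrain configuration.
    """
    h, w = hmap.shape
    out = hmap * rng.uniform(0.8, 1.2)                  # random height scale
    tilt = rng.uniform(-0.05, 0.05, size=2)             # random ground tilt
    yy, xx = np.mgrid[0:h, 0:w]
    out = out + tilt[0] * yy + tilt[1] * xx
    out = out + rng.normal(0.0, 0.01, size=hmap.shape)  # sensor noise
    return out

rng = np.random.default_rng(0)
episode_obs = randomize_heightmap(np.zeros((64, 64)), rng)
print(episode_obs.shape)  # (64, 64), a fresh random terrain each episode
```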

Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-256310 (URN), 10.1109/IROS.2018.8593702 (DOI), 2-s2.0-85062964303 (Scopus ID)
Conference
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Funder
EU, Horizon 2020, 644839
Note

QC 20190902

Available from: 2019-08-21. Created: 2019-08-21. Last updated: 2019-09-02. Bibliographically approved
Tang, J., Folkesson, J. & Jensfelt, P. (2018). Geometric Correspondence Network for Camera Motion Estimation. IEEE Robotics and Automation Letters, 3(2), 1010-1017
Geometric Correspondence Network for Camera Motion Estimation
2018 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no. 2, pp. 1010-1017. Article in journal (Refereed). Published
Abstract [en]

In this paper, we propose a new learning scheme for generating geometric correspondences to be used for visual odometry. A convolutional neural network (CNN) and a recurrent neural network (RNN) are trained together to detect the locations of keypoints and to generate the corresponding descriptors in one unified structure. The network is optimized by warping points from the source frame to the reference frame with a rigid-body transform; essentially, it learns from warping. The overall training focuses on movements of the camera rather than movements within the image, which leads to better consistency in the matching and ultimately better motion estimation. Experimental results show that the proposed method achieves better results than both related deep-learning and hand-crafted methods. Furthermore, as a demonstration of the promise of our method, we use a naive SLAM implementation based on these keypoints and obtain performance on par with ORB-SLAM.
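
The "learning from warping" supervision rests on projecting a source-frame keypoint into the reference frame with the known rigid-body transform; descriptors are then trained to match at the warped location. Below is the geometric warp alone, sketched under an assumed pinhole model.

```python
import numpy as np

def warp_keypoint(K, u, d, R, t):
    """Project source pixel u (with depth d) into the reference frame
    using the rigid-body transform (R, t).

    Pinhole intrinsics K are an assumption for this sketch.
    """
    Kinv = np.linalg.inv(K)
    X_src = d * (Kinv @ np.array([u[0], u[1], 1.0]))  # back-project
    X_ref = R @ X_src + t                              # rigid-body motion
    x = K @ X_ref
    return x[:2] / x[2]                                # perspective divide

K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])   # 10 cm sideways camera motion
print(warp_keypoint(K, (320, 240), 2.0, R, t))  # -> [346.25, 240.0]
```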

Place, publisher, year, edition, pages
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2018
Keywords
Visual-based navigation, SLAM, deep learning in robotics and automation
Identifiers
urn:nbn:se:kth:diva-223775 (URN), 10.1109/LRA.2018.2794624 (DOI), 000424646100022 (), 2-s2.0-85063305858 (Scopus ID)
Note

QC 20180307

Available from: 2018-03-07. Created: 2018-03-07. Last updated: 2019-05-16. Bibliographically approved
Kragic, D., Gustafson, J., Karaoǧuz, H., Jensfelt, P. & Krug, R. (2018). Interactive, collaborative robots: Challenges and opportunities. In: IJCAI International Joint Conference on Artificial Intelligence. Paper presented at 27th International Joint Conference on Artificial Intelligence (IJCAI 2018), Stockholm, Sweden, July 13-19, 2018 (pp. 18-25). International Joint Conferences on Artificial Intelligence
Interactive, collaborative robots: Challenges and opportunities
2018 (English). In: IJCAI International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence, 2018, pp. 18-25. Conference paper, Published paper (Refereed)
Abstract [en]

Robotic technology has transformed the manufacturing industry ever since the first industrial robot was put into use in the early 1960s. The challenge of developing flexible solutions, where production lines can be quickly re-planned, adapted, and structured for new or slightly changed products, remains an important open problem. Industrial robots today are still largely preprogrammed for their tasks; they are not able to detect errors in their own performance or to robustly interact with a complex environment and a human worker. The challenges are even more serious when it comes to various types of service robots. Full robot autonomy, including natural interaction, learning from and with humans, and safe and flexible performance of challenging tasks in unstructured environments, will remain out of reach for the foreseeable future. In the envisioned future factory setups and home and office environments, humans and robots will share the same workspace and perform different object-manipulation tasks in a collaborative manner. We discuss some of the major challenges of developing such systems and provide examples of the current state of the art.

Place, publisher, year, edition, pages
International Joint Conferences on Artificial Intelligence, 2018
Keywords
Artificial intelligence, Industrial robots, Collaborative robots, Complex environments, Manufacturing industries, Natural interactions, Object manipulation, Office environments, Robotic technologies, Unstructured environments, Human robot interaction
Identifiers
urn:nbn:se:kth:diva-247239 (URN), 2-s2.0-85055718956 (Scopus ID), 9780999241127 (ISBN)
Conference
27th International Joint Conference on Artificial Intelligence (IJCAI 2018), Stockholm, Sweden, July 13-19, 2018
Funder
Swedish Foundation for Strategic Research; Knut and Alice Wallenberg Foundation
Note

QC 20190402

Available from: 2019-04-02. Created: 2019-04-02. Last updated: 2019-05-22. Bibliographically approved
Mancini, M., Karaoǧuz, H., Ricci, E., Jensfelt, P. & Caputo, B. (2018). Kitting in the Wild through Online Domain Adaptation. In: Maciejewski, A. A., et al. (Eds.), 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 1-5, 2018, Madrid, Spain (pp. 1103-1109). IEEE
Kitting in the Wild through Online Domain Adaptation
2018 (English). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Maciejewski, A. A., et al., IEEE, 2018, pp. 1103-1109. Conference paper, Published paper (Refereed)
Abstract [en]

Technological developments call for increasing the perception and action capabilities of robots. Among other skills, vision systems are needed that can adapt to any possible change in working conditions. Since these conditions are unpredictable, we need benchmarks that allow us to assess the generalization and robustness capabilities of our visual recognition algorithms. In this work we focus on robotic kitting in unconstrained scenarios. As a first contribution, we present a new visual dataset for the kitting task. Differently from standard object recognition datasets, we provide images of the same objects acquired under various conditions in which camera, illumination, and background are changed. This novel dataset allows testing the robustness of robot visual recognition algorithms to a series of different domain shifts, both in isolation and combined. Our second contribution is a novel online adaptation algorithm for deep models, based on batch-normalization layers, which continuously adapts a model to the current working conditions. Differently from standard domain adaptation algorithms, it does not require any image from the target domain at training time. We benchmark the performance of the algorithm on the proposed dataset, showing its capability to close the gap between the performance of a standard architecture and that of its counterpart adapted offline to the given target domain.
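
A minimal sketch of the batch-normalization-based adaptation: the running normalization statistics keep updating from incoming target-domain batches at test time, so no target images are needed during training. Written in plain NumPy with an assumed momentum; the paper's exact update rule may differ.

```python
import numpy as np

class OnlineBatchNorm:
    """1-D batch norm whose running statistics keep adapting at test time.

    Sketch of the online-adaptation idea: each incoming batch from the
    (unseen) target domain nudges the normalization statistics toward the
    current working conditions. The momentum value is an assumption.
    """
    def __init__(self, dim, momentum=0.1, eps=1e-5):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.momentum = momentum
        self.eps = eps
        self.gamma = np.ones(dim)   # learned affine parameters (kept fixed)
        self.beta = np.zeros(dim)

    def __call__(self, x):
        # Update running statistics from the current batch, then normalize.
        m, v = x.mean(axis=0), x.var(axis=0)
        self.mean = (1 - self.momentum) * self.mean + self.momentum * m
        self.var = (1 - self.momentum) * self.var + self.momentum * v
        xhat = (x - self.mean) / np.sqrt(self.var + self.eps)
        return self.gamma * xhat + self.beta

bn = OnlineBatchNorm(4)
batch = np.random.default_rng(0).normal(3.0, 2.0, size=(16, 4))
out = bn(batch)           # statistics shift toward the new domain
print(bn.mean.round(2))
```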

Place, publisher, year, edition, pages
IEEE, 2018
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
Identifiers
urn:nbn:se:kth:diva-246309 (URN), 10.1109/IROS.2018.8593862 (DOI), 000458872701034 (), 2-s2.0-85063002869 (Scopus ID), 978-1-5386-8094-0 (ISBN)
Conference
25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 1-5, 2018, Madrid, Spain
Note

QC 20190319

Available from: 2019-03-19. Created: 2019-03-19. Last updated: 2019-05-16. Bibliographically approved
Brucker, M., Durner, M., Ambrus, R., Marton, Z. C., Wendt, A., Jensfelt, P., . . . Triebel, R. (2018). Semantic Labeling of Indoor Environments from 3D RGB Maps. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), May 21-25, 2018, Brisbane, Australia (pp. 1871-1878). IEEE Computer Society
Semantic Labeling of Indoor Environments from 3D RGB Maps
2018 (English). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, pp. 1871-1878. Conference paper, Published paper (Refereed)
Abstract [en]

We present an approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments. Evidence for the room types is generated using state-of-the-art deep-learning techniques for scene classification and object detection based on automatically generated virtual RGB views, as well as from a geometric analysis of the map's 3D structure. The evidence is merged in a conditional random field, using statistics mined from different datasets of indoor environments. We evaluate our approach qualitatively and quantitatively and compare it to related methods.
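
As a toy illustration of merging evidence in a conditional random field: per-room unary scores (e.g., from scene classifiers) are combined with pairwise compatibilities (e.g., mined co-occurrence statistics) and decoded exactly. For simplicity the sketch assumes a chain of rooms, whereas a real apartment graph need not be a chain; all numbers are invented.

```python
import numpy as np

def crf_chain_map(unary, pairwise):
    """Exact MAP labeling of a chain-structured CRF via Viterbi (sketch).

    unary:    (n_nodes, n_labels) log-scores, e.g. from scene classifiers.
    pairwise: (n_labels, n_labels) log-compatibility of adjacent rooms,
              e.g. mined from co-occurrence statistics.
    """
    n, L = unary.shape
    score = unary[0].copy()
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        # cand[a, b]: best score ending with labels (a at i-1, b at i).
        cand = score[:, None] + pairwise + unary[i][None, :]
        back[i] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    labels = [int(score.argmax())]
    for i in range(n - 1, 0, -1):
        labels.append(int(back[i][labels[-1]]))
    return labels[::-1]

# Three connected rooms, labels: 0=kitchen, 1=corridor (toy numbers).
unary = np.log(np.array([[0.7, 0.3], [0.4, 0.6], [0.6, 0.4]]))
pairwise = np.log(np.array([[0.2, 0.8], [0.8, 0.2]]))  # kitchens rarely adjoin
print(crf_chain_map(unary, pairwise))
```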

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729
Identifiers
urn:nbn:se:kth:diva-237161 (URN), 000446394501066 (), 2-s2.0-85063131122 (Scopus ID), 978-1-5386-3081-5 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA), May 21-25, 2018, Brisbane, Australia
Funder
Swedish Research Council, C0475401; Swedish Foundation for Strategic Research
Note

QC 20181024

Available from: 2018-10-24. Created: 2018-10-24. Last updated: 2019-06-12. Bibliographically approved
Duberg, D. & Jensfelt, P. (2018). The Obstacle-restriction Method for Tele-operation of Unmanned Aerial Vehicles with Restricted Motion. In: 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV). Paper presented at 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), November 18-21, 2018, Singapore (pp. 266-273). IEEE
The Obstacle-restriction Method for Tele-operation of Unmanned Aerial Vehicles with Restricted Motion
2018 (English). In: 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), IEEE, 2018, pp. 266-273. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a collision avoidance method for tele-operated unmanned aerial vehicles (UAVs). The method is designed to assist the operator at all times, so that the operator can focus solely on the main objectives instead of avoiding obstacles. We restrict the altitude to be fixed in a three-dimensional environment to simplify the control and operation of the UAV. The method contributes a number of desired properties not found in other collision avoidance systems for tele-operated UAVs. Our method i) can handle situations where there is no input from the user by actively stopping and proceeding to avoid obstacles, ii) allows the operator to slide between prioritizing keeping away from obstacles and safely getting close to them when required, and iii) provides intuitive control by not deviating too far from the operator's control input. We demonstrate the effectiveness of the method in real-world experiments with a physical hexacopter in different indoor scenarios. We also present simulation results comparing control of the UAV with and without our method activated.
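
Properties i)-iii) can be caricatured in a few lines: the operator's velocity command is scaled down as the nearest obstacle approaches, and with no input the vehicle holds position or actively retreats. This is only a hedged sketch; the obstacle-restriction method in the paper is more involved, and all gains and thresholds below are assumptions.

```python
import numpy as np

def assisted_command(v_op, obstacles, d_safe=1.0, d_stop=0.3, k_rep=0.5):
    """Blend the operator's velocity v_op with obstacle avoidance (sketch).

    obstacles: 2-D obstacle positions relative to the UAV (fixed altitude).
    Scales the command down near obstacles and, with no operator input,
    actively backs away from anything closer than d_stop.
    """
    v_op = np.asarray(v_op, dtype=float)
    if not obstacles:
        return v_op
    nearest = min(obstacles, key=np.linalg.norm)
    d_min = float(np.linalg.norm(nearest))
    if np.allclose(v_op, 0.0):
        # No input: hold position, but retreat if dangerously close.
        if d_min < d_stop:
            return -k_rep * np.asarray(nearest) / d_min
        return np.zeros(2)
    # Scale toward zero as the nearest obstacle approaches d_stop.
    scale = np.clip((d_min - d_stop) / (d_safe - d_stop), 0.0, 1.0)
    return scale * v_op

print(assisted_command([1.0, 0.0], [np.array([0.5, 0.0])]))  # slowed down
```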

Place, publisher, year, edition, pages
IEEE, 2018
Series
International Conference on Control Automation Robotics and Vision, ISSN 2474-2953
Identifiers
urn:nbn:se:kth:diva-246315 (URN), 000459847700046 (), 978-1-5386-9582-1 (ISBN)
Conference
15th International Conference on Control, Automation, Robotics and Vision (ICARCV), November 18-21, 2018, Singapore
Note

QC 20190319

Available from: 2019-03-19. Created: 2019-03-19. Last updated: 2019-05-13. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-1170-7162