Folkesson, John, Associate Professor (ORCID iD: orcid.org/0000-0002-7796-1438)
Publications (10 of 55)
Bore, N., Ekekrantz, J., Jensfelt, P. & Folkesson, J. (2019). Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps. IEEE Transactions on Robotics, 35(1), 231-247
Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps
2019 (English) In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no. 1, p. 231-247. Article in journal (Refereed), Published
Abstract [en]

This paper studies the problem of detection and tracking of general objects with semistatic dynamics observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, the robot can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
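
The filtering idea above can be made concrete with a small sketch: global "jump" hypotheses are handled by sampling, while each object's local drift is tracked analytically with a Kalman filter. This is an illustrative toy (2-D positions, a hypothetical propagate_particle helper), not the authors' implementation.

```python
# Minimal sketch (not the authors' code): Rao-Blackwellized tracking where
# global "jump" hypotheses are sampled and local motion is a Kalman filter.
import numpy as np

class KalmanTrack:
    """Constant-position model for an object's local 2-D drift."""
    def __init__(self, pos, pos_var=0.1, q=0.01, r=0.05):
        self.x = np.asarray(pos, dtype=float)   # estimated position
        self.P = np.eye(2) * pos_var            # covariance
        self.Q = np.eye(2) * q                  # local-motion noise
        self.R = np.eye(2) * r                  # measurement noise

    def predict(self):
        self.P = self.P + self.Q                # position mean unchanged

    def update(self, z):
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.x)
        self.P = (np.eye(2) - K) @ self.P

def propagate_particle(tracks, p_jump=0.05, room_bounds=((0, 10), (0, 10)), rng=None):
    """One particle = one hypothesis about which objects jumped globally.
    With small probability an object teleports anywhere in the map;
    otherwise only its local Kalman prediction is applied."""
    if rng is None:
        rng = np.random.default_rng()
    for t in tracks:
        if rng.random() < p_jump:
            t.x = np.array([rng.uniform(*room_bounds[0]),
                            rng.uniform(*room_bounds[1])])
            t.P = np.eye(2) * 4.0               # high uncertainty after a jump
        t.predict()
```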

Place, publisher, year, edition, pages
IEEE (Institute of Electrical and Electronics Engineers Inc.), 2019
Keywords
Dynamic mapping, mobile robot, movable objects, multitarget tracking (MTT), Rao-Blackwellized particle filter (RBPF), service robots
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-245151 (URN)10.1109/TRO.2018.2876111 (DOI)000458197300017 ()2-s2.0-85057204782 (Scopus ID)
Note

QC 20190313

Available from: 2019-03-13 Created: 2019-03-13 Last updated: 2019-03-18. Bibliographically approved
Tang, J., Folkesson, J. & Jensfelt, P. (2019). Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction. IEEE Robotics and Automation Letters, 4(2), 530-537
Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction
2019 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, p. 530-537. Article in journal (Refereed), Published
Abstract [en]

In this letter, we propose a new deep-learning-based dense monocular simultaneous localization and mapping (SLAM) method. Compared to existing methods, the proposed framework constructs a dense three-dimensional (3-D) model via a sparse-to-dense mapping using learned surface normals. With single-view learned depth estimation as a prior for monocular visual odometry, we obtain both accurate positioning and high-quality depth reconstruction. The depth and normal are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction in comparison to the state of the art in deep monocular dense SLAM.
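
The sparse-to-dense step can be illustrated geometrically: a sparse depth together with a predicted surface normal defines a local tangent plane, which lets nearby pixel rays be assigned depths. A minimal sketch, assuming a pinhole camera and a hypothetical densify_patch helper; the paper's method uses a trained network rather than this closed-form propagation.

```python
# Illustrative sketch (not the paper's network): propagating a sparse depth
# measurement to nearby pixels using a predicted surface normal, i.e. the
# "sparse to dense via learned normals" idea in simplified geometric form.
import numpy as np

def densify_patch(u, v, depth, normal, K, patch=5):
    """Extrapolate one sparse depth to a (2*patch+1)^2 neighbourhood by
    intersecting each neighbouring pixel ray with the local tangent plane."""
    Kinv = np.linalg.inv(K)
    X = depth * (Kinv @ np.array([u, v, 1.0]))       # back-projected 3-D point
    n = normal / np.linalg.norm(normal)
    dense = {}
    for du in range(-patch, patch + 1):
        for dv in range(-patch, patch + 1):
            ray = Kinv @ np.array([u + du, v + dv, 1.0])
            denom = n @ ray
            if abs(denom) < 1e-6:
                continue                              # ray parallel to plane
            dense[(u + du, v + dv)] = (n @ X) / denom # depth along this ray
    return dense
```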

Place, publisher, year, edition, pages
IEEE (Institute of Electrical and Electronics Engineers Inc.), 2019
Keywords
Visual-based navigation, SLAM, deep learning in robotics and automation
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-243927 (URN)10.1109/LRA.2019.2891433 (DOI)000456673300007 ()
Available from: 2019-03-13 Created: 2019-03-13 Last updated: 2019-03-13. Bibliographically approved
Torroba, I., Bore, N. & Folkesson, J. (2018). A Comparison of Submaps Registration Methods for Multibeam Bathymetric Mapping. In: : . Paper presented at 2018 IEEE OES Autonomous Underwater Vehicle Symposium.
A Comparison of Submaps Registration Methods for Multibeam Bathymetric Mapping
2018 (English). Conference paper, Published paper (Refereed)
Abstract [en]

On-the-fly registration of overlapping multi-beam images is important for path planning by AUVs performing underwater surveys. In order to meet specifications on such things as survey accuracy, coverage and density, precise corrections to the AUV trajectory while underway are required. There are fast methods for aligning point clouds that have been developed for robots. We compare several state-of-the-art methods to align point clouds of large, unstructured, sub-aquatic areas to build a global map. We first collect the multibeam point clouds into smaller submaps that are then aligned using variations of the ICP algorithm. This alignment step can be applied if the error in AUV pose is small. It would be the final step in correcting a larger error on loop closing, where place recognition and a rough alignment would precede it. In the case of a lawn mower pattern survey it would be making more continuous corrections to small errors in the overlap between parallel lines. In this work we compare different methods for registration in order to determine the most suitable one for underwater terrain mapping. To do so, we benchmark the current state-of-the-art solutions according to an error metric and show the results.
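
As a rough illustration of the kind of registration being compared, a bare-bones point-to-point ICP between two submaps with an RMSE error metric might look as follows; the names and parameters are assumptions, and the paper evaluates several ICP variants rather than this minimal version.

```python
# Bare-bones point-to-point ICP between two submaps plus an RMSE metric,
# a simplified stand-in for the ICP variants compared in the paper.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(A, B):
    """Least-squares R, t mapping points A onto corresponding points B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(source, target, iters=30):
    """Align `source` submap to `target` submap; returns aligned points and RMSE."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        dists, idx = tree.query(src)          # nearest-neighbour correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    rmse = np.sqrt(np.mean(tree.query(src)[0] ** 2))
    return src, rmse
```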

Keywords
SLAM, AUV
National Category
Robotics
Research subject
Vehicle and Maritime Engineering; Computer Science
Identifiers
urn:nbn:se:kth:diva-250894 (URN)
Conference
2018 IEEE OES Autonomous Underwater Vehicle Symposium
Projects
SMaRC, SSF IRC15-0046
Funder
Swedish Foundation for Strategic Research , IRC15-0046
Note

QC 20190424

Available from: 2019-05-07 Created: 2019-05-07 Last updated: 2019-05-15. Bibliographically approved
Tang, J., Folkesson, J. & Jensfelt, P. (2018). Geometric Correspondence Network for Camera Motion Estimation. IEEE Robotics and Automation Letters, 3(2), 1010-1017
Geometric Correspondence Network for Camera Motion Estimation
2018 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no. 2, p. 1010-1017. Article in journal (Refereed), Published
Abstract [en]

In this paper, we propose a new learning scheme for generating geometric correspondences to be used for visual odometry. A convolutional neural network (CNN) and a recurrent neural network (RNN) are trained together to detect the location of keypoints as well as to generate corresponding descriptors in one unified structure. The network is optimized by warping points from the source frame to the reference frame with a rigid-body transform, essentially learning from warping. The overall training is focused on movements of the camera rather than movements within the image, which leads to better consistency in the matching and ultimately better motion estimation. Experimental results show that the proposed method achieves better results than both related deep learning and hand-crafted methods. Furthermore, as a demonstration of the promise of our method, we use a naive SLAM implementation based on these keypoints and obtain performance on par with ORB-SLAM.
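
The "learning from warping" supervision can be sketched as follows: a keypoint with depth in the source frame is warped into the reference frame by the rigid-body transform, and the predicted correspondence is penalized against that warped location. The helper names are illustrative, not the paper's code.

```python
# Sketch of the warping-based supervision: project a source keypoint (with depth)
# into the reference frame using the known rigid-body transform. The network's
# predicted correspondence is penalised against this warped location.
import numpy as np

def warp_keypoint(uv, depth, K, R, t):
    """Warp pixel `uv` from source to reference frame given depth and pose (R, t)."""
    p_src = depth * (np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0]))  # back-project
    p_ref = R @ p_src + t                                               # rigid transform
    uvw = K @ p_ref                                                     # re-project
    return uvw[:2] / uvw[2]

def warping_loss(pred_uv, uv, depth, K, R, t):
    """L2 distance between the predicted match and the geometrically warped point."""
    return float(np.linalg.norm(np.asarray(pred_uv) - warp_keypoint(uv, depth, K, R, t)))
```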

Place, publisher, year, edition, pages
IEEE (Institute of Electrical and Electronics Engineers Inc.), 2018
Keywords
Visual-based navigation, SLAM, deep learning in robotics and automation
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-223775 (URN)10.1109/LRA.2018.2794624 (DOI)000424646100022 ()2-s2.0-85063305858 (Scopus ID)
Note

QC 20180307

Available from: 2018-03-07 Created: 2018-03-07 Last updated: 2019-05-16. Bibliographically approved
Mänttäri, J., Folkesson, J. & Ward, E. (2018). Learning to Predict Lane Changes in Highway Scenarios Using Dynamic Filters on a Generic Traffic Representation. In: IEEE Intelligent Vehicles Symposium, Proceedings: . Paper presented at 2018 IEEE Intelligent Vehicles Symposium, IV 2018, 26 September 2018 through 30 September 2018 (pp. 1385-1392). Institute of Electrical and Electronics Engineers Inc.
Learning to Predict Lane Changes in Highway Scenarios Using Dynamic Filters on a Generic Traffic Representation
2018 (English) In: IEEE Intelligent Vehicles Symposium, Proceedings, Institute of Electrical and Electronics Engineers Inc., 2018, p. 1385-1392. Conference paper, Published paper (Refereed)
Abstract [en]

In highway driving scenarios it is important for highly automated driving systems to be able to recognize and predict the intended maneuvers of other drivers in order to make robust and informed decisions. Many methods utilize the current kinematics of vehicles to make these predictions, but it is also possible to examine the relations between vehicles to gain more information about the traffic scene and make more accurate predictions. The work presented in this paper proposes a novel method for predicting lane change maneuvers in highway scenarios using deep learning and a generic visual representation of the traffic scene. Experimental results suggest that by operating on the visual representation, the spatial relations between arbitrary vehicles can be captured by our method and used for more informed predictions without the need for explicit dynamic or driver interaction models. The proposed method is evaluated on highway driving scenarios using the Interstate-80 dataset and compared to a kinematics-based prediction model, with results showing that the proposed method produces more robust predictions across the prediction horizon than the comparison model.
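
A minimal sketch of the idea of operating on a generic visual representation: surrounding vehicles are rasterized into a bird's-eye grid and a small CNN predicts lane-change intent. The channel layout, grid size and architecture below are assumptions, not the paper's network.

```python
# Illustrative sketch: rasterise the traffic scene into a bird's-eye grid and
# classify lane-change intent (left / keep / right) with a small CNN.
# Channels and sizes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class LaneChangeCNN(nn.Module):
    def __init__(self, in_channels=3, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),           # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),           # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)

    def forward(self, grid):                 # grid: (B, 3, 64, 64) occupancy / velocity
        x = self.features(grid).flatten(1)
        return self.classifier(x)            # logits for left / keep / right

# Example: one 64x64 grid with occupancy and relative-velocity channels.
logits = LaneChangeCNN()(torch.zeros(1, 3, 64, 64))
```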

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2018
Keywords
Deep learning, Intelligent vehicle highway systems, Kinematics, Vehicles, Accurate prediction, Comparison modeling, Driver interaction, Highly automated drivings, Lane change maneuvers, Prediction horizon, Robust predictions, Visual representations, Forecasting
National Category
Civil Engineering
Identifiers
urn:nbn:se:kth:diva-247122 (URN)10.1109/IVS.2018.8500426 (DOI)2-s2.0-85056800536 (Scopus ID)9781538644522 (ISBN)
Conference
2018 IEEE Intelligent Vehicles Symposium, IV 2018, 26 September 2018 through 30 September 2018
Note

QC 20190403

Available from: 2019-04-03 Created: 2019-04-03 Last updated: 2019-04-03. Bibliographically approved
Rixon Fuchs, L., Gällström, A. & Folkesson, J. (2018). Object Recognition in Forward Looking Sonar Images using Transfer Learning. In: : . Paper presented at 2018 IEEE OES Autonomous Underwater Vehicle Symposium. IEEE
Object Recognition in Forward Looking Sonar Images using Transfer Learning
2018 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Forward Looking Sonars (FLS) are a typical choice of sonar for autonomous underwater vehicles. They are most often the main sensor for obstacle avoidance and can be used for monitoring, homing, following and docking as well. Those tasks require discrimination between noise and various classes of objects in the sonar images. Robust recognition of sonar data still remains a problem, but if solved it would enable more autonomy for underwater vehicles, providing more reliable information about the surroundings to aid decision making. Recent advances in image recognition using Deep Learning methods have been rapid. While image recognition with Deep Learning is known to require large amounts of labeled data, there are data-efficient learning methods using generic features learned by a network pre-trained on data from a different domain. This enables us to work with much smaller domain-specific datasets, making the method interesting to explore for sonar object recognition with limited amounts of training data. We have developed a Convolutional Neural Network (CNN) based classifier for FLS images and compared its performance to classification using classical methods and hand-crafted features.
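
A typical transfer-learning setup of this kind, sketched here under assumed choices (ResNet-18 backbone, four sonar classes, recent torchvision), freezes the pre-trained features and trains only a new classification head on the FLS images; it is not the paper's exact model.

```python
# Transfer-learning sketch (assumed setup, not the paper's exact model): reuse
# ImageNet-pretrained ResNet-18 features and train only a new head on FLS classes.
import torch
import torch.nn as nn
from torchvision import models

def build_fls_classifier(n_classes=4):
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in backbone.parameters():          # freeze the generic features
        p.requires_grad = False
    backbone.fc = nn.Linear(backbone.fc.in_features, n_classes)  # trainable head
    return backbone

model = build_fls_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)     # only the head is updated
```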

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
AUV, CNN, Forward Looking Sonar, Object Recognition, Transfer Learning, Underwater, Data Efficient Learning
National Category
Robotics
Research subject
Vehicle and Maritime Engineering; Computer Science
Identifiers
urn:nbn:se:kth:diva-250893 (URN)
Conference
2018 IEEE OES Autonomous Underwater Vehicle Symposium
Projects
SMARC SSF IRC15-0046
Funder
Swedish Foundation for Strategic Research , IRC15-0046
Note

QC 20190423

Available from: 2019-05-07 Created: 2019-05-07 Last updated: 2019-05-16. Bibliographically approved
Ward, E. & Folkesson, J. (2018). Towards Risk Minimizing Trajectory Planning in On-Road Scenarios. In: IEEE Intelligent Vehicles Symposium, Proceedings: . Paper presented at 2018 IEEE Intelligent Vehicles Symposium, IV 2018, 26 September 2018 through 30 September 2018 (pp. 490-497). Institute of Electrical and Electronics Engineers Inc.
Towards Risk Minimizing Trajectory Planning in On-Road Scenarios
2018 (English) In: IEEE Intelligent Vehicles Symposium, Proceedings, Institute of Electrical and Electronics Engineers Inc., 2018, p. 490-497. Conference paper, Published paper (Refereed)
Abstract [en]

Trajectory planning for autonomous vehicles should attempt to minimize expected risk given noisy sensor data and uncertain predictions of the near future. In this paper, we present a trajectory planning approach for on-road scenarios where we use a graph search approximation. Uncertain predictions of other vehicles are accounted for by a novel inference technique that allows efficient calculation of the probability of dangerous outcomes for a set of modeled situation types.
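
A simplified stand-in for the planning idea: search a small maneuver graph for the trajectory that minimizes accumulated travel cost plus a weighted probability of a dangerous outcome. The graph, costs and risk numbers below are purely illustrative.

```python
# Simplified stand-in for the approach: Dijkstra over a tiny maneuver graph,
# trading off progress cost against the probability of a dangerous outcome.
import heapq

def min_risk_path(graph, risk, start, goal, risk_weight=10.0):
    """graph: node -> [(next_node, travel_cost)], risk: edge -> P(dangerous outcome)."""
    queue, best = [(0.0, start, [start])], {start: 0.0}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path, cost
        for nxt, travel in graph.get(node, []):
            new_cost = cost + travel + risk_weight * risk.get((node, nxt), 0.0)
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt, path + [nxt]))
    return None, float("inf")

# Toy example: staying in lane is cheap but risky; a lane change costs more but is safer.
graph = {"s": [("keep", 1.0), ("change", 2.0)], "keep": [("g", 1.0)], "change": [("g", 1.0)]}
risk = {("s", "keep"): 0.3, ("s", "change"): 0.02}
print(min_risk_path(graph, risk, "s", "g"))
```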

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2018
Keywords
Intelligent vehicle highway systems, Risk perception, Roads and streets, Trajectories, Vehicles, Autonomous Vehicles, Expected risk, Graph search, Inference techniques, Noisy sensors, Trajectory Planning, Highway planning
National Category
Civil Engineering
Identifiers
urn:nbn:se:kth:diva-247125 (URN)10.1109/IVS.2018.8500643 (DOI)2-s2.0-85056766110 (Scopus ID)9781538644522 (ISBN)
Conference
2018 IEEE Intelligent Vehicles Symposium, IV 2018, 26 September 2018 through 30 September 2018
Note

QC 20190403

Available from: 2019-04-03 Created: 2019-04-03 Last updated: 2019-04-03. Bibliographically approved
Faeulhammer, T., Ambrus, R., Burbridge, C., Zillich, M., Folkesson, J., Hawes, N., . . . Vincze, M. (2017). Autonomous Learning of Object Models on a Mobile Robot. IEEE Robotics and Automation Letters, 2(1), 26-33, Article ID 7393491.
Autonomous Learning of Object Models on a Mobile Robot
2017 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 1, p. 26-33, article id 7393491. Article in journal (Refereed), Published
Abstract [en]

In this article we present and evaluate a system which allows a mobile robot to autonomously detect, model and re-recognize objects in everyday environments. Whilst other systems have demonstrated one of these elements, to our knowledge we present the first system which is capable of doing all of these things, all without human interaction, in normal indoor scenes. Our system detects objects to learn by modelling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.
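
The "extract dynamic elements" step can be illustrated with a simple voxel difference between an observation and the static map; the voxel size and the absence of clustering here are simplifications, not the system's actual method.

```python
# Illustrative voxel-difference sketch of the "extract dynamic elements" step:
# observed points not explained by the static map become candidate new objects.
import numpy as np

def dynamic_points(observation, static_map, voxel=0.05):
    """Return observed 3-D points whose voxel is unoccupied in the static map."""
    static_voxels = {tuple(v) for v in np.floor(static_map / voxel).astype(int)}
    keys = np.floor(observation / voxel).astype(int)
    mask = np.array([tuple(k) not in static_voxels for k in keys])
    return observation[mask]
```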

Place, publisher, year, edition, pages
IEEE Press, 2017
Keywords
Object Model, Robot
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-183494 (URN)10.1109/LRA.2016.2522086 (DOI)000413732300005 ()2-s2.0-85020019001 (Scopus ID)
Projects
STRANDS
Funder
EU, FP7, Seventh Framework Programme, 600623
Note

QC 20160411

Available from: 2016-03-14 Created: 2016-03-14 Last updated: 2017-11-20. Bibliographically approved
Ambrus, R., Bore, N., Folkesson, J. & Jensfelt, P. (2017). Autonomous meshing, texturing and recognition of object models with a mobile robot. In: Bicchi, A. & Okamura, A. (Eds.), 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), SEP 24-28, 2017, Vancouver, Canada (pp. 5071-5078). IEEE
Autonomous meshing, texturing and recognition of object models with a mobile robot
2017 (English) In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Bicchi, A. & Okamura, A., IEEE, 2017, p. 5071-5078. Conference paper, Published paper (Refereed)
Abstract [en]

We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.
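
The meshing step can be approximated with Open3D's Poisson surface reconstruction, sketched below; the file names and parameters are placeholders rather than the authors' pipeline settings.

```python
# Sketch of the meshing step using Open3D's Poisson surface reconstruction,
# which approximates the underlying geometry from fused RGB-D views.
# File names and parameters are placeholders, not the authors' pipeline.
import open3d as o3d

pcd = o3d.io.read_point_cloud("fused_object_views.pcd")        # fused RGB-D point cloud
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh = mesh.filter_smooth_simple(number_of_iterations=1)        # light smoothing
o3d.io.write_triangle_mesh("object_model.ply", mesh)
```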

Place, publisher, year, edition, pages
IEEE, 2017
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-225806 (URN)000426978204127 ()2-s2.0-85041961210 (Scopus ID)978-1-5386-2682-5 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), SEP 24-28, 2017, Vancouver, CANADA
Funder
EU, FP7, Seventh Framework Programme, 600623; Swedish Foundation for Strategic Research; Swedish Research Council, C0475401
Note

QC 20180409

Available from: 2018-04-09 Created: 2018-04-09 Last updated: 2019-08-20. Bibliographically approved
Ambrus, R., Bore, N., Folkesson, J. & Jensfelt, P. (2017). Autonomous meshing, texturing and recognition of object models with a mobile robot. In: : . Paper presented at Intelligent Robots and Systems, IEEE/RSJ International Conference on. Vancouver, Canada
Autonomous meshing, texturing and recognition of object models with a mobile robot
2017 (English). Conference paper, Published paper (Refereed)
Abstract [en]

We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.

Place, publisher, year, edition, pages
Vancouver, Canada, 2017
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-215232 (URN)
Conference
Intelligent Robots and Systems, IEEE/RSJ International Conference on
Note

QC 20171009

Available from: 2017-10-05 Created: 2017-10-05 Last updated: 2018-01-13. Bibliographically approved