Folkesson, John, Associate Professor (ORCID iD: orcid.org/0000-0002-7796-1438)
Publications (10 of 59)
Shi, J., Yin, W., Du, Y. & Folkesson, J. (2019). Automated Underwater Pipeline Damage Detection using Neural Nets. Paper presented at ICRA 2019 Workshop on Underwater Robotics Perception.
Automated Underwater Pipeline Damage Detection using Neural Nets
2019 (English). Conference paper, Oral presentation only (Refereed)
Abstract [en]

Pipeline inspection is a very human-intensive task, and automation could improve efficiency significantly. We propose a system that could allow an autonomous underwater vehicle (AUV) to detect pipeline damage in a stream of images. Our classifiers were based on transfer learning from pre-trained convolutional neural networks (CNN). This allows us to achieve good results despite relatively few training examples of damage. We test the approach using data from an actual pipeline inspection.
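To make the transfer-learning recipe above concrete, here is a minimal PyTorch sketch of a damage classifier built on a frozen pre-trained backbone with a new two-class head. The ResNet-18 backbone, optimiser settings, and binary label set are illustrative assumptions; the abstract does not specify them.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained backbone with generic features frozen (the transfer-learning step).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False
# New head: damage / intact (hypothetical two-class setup).
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimisation step on a batch of pipeline-inspection frames."""
    logits = backbone(images)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the backbone and training only the head is what makes the approach viable with relatively few damage examples.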

National Category
Computer Systems; Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-256311 (URN)
Conference
ICRA 2019 Workshop on Underwater Robotics Perception
Funder
Swedish Foundation for Strategic Research, IRC15-0046
Note

QC 20190827

Available from: 2019-08-21 Created: 2019-08-21 Last updated: 2019-08-27. Bibliographically approved
Bore, N., Ekekrantz, J., Jensfelt, P. & Folkesson, J. (2019). Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps. IEEE Transactions on Robotics, 35(1), 231-247
Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps
2019 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no. 1, p. 231-247. Article in journal (Refereed). Published
Abstract [en]

This paper studies the problem of detection and tracking of general objects with semistatic dynamics observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, the robot can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
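The local-movement half of the decomposition lends itself to a compact illustration: each object's position can be tracked analytically with a per-object Kalman filter, while the sampled global "jump" moves and measurement associations sit outside it. A minimal sketch, with all motion and measurement noise values as illustrative assumptions:

```python
import numpy as np

class LocalTrack:
    """Kalman filter for one object's local (small, continuous) movement."""
    def __init__(self, pos, pos_var=0.01):
        self.x = np.asarray(pos, dtype=float)  # 3-D position estimate
        self.P = np.eye(3) * pos_var           # estimate covariance

    def predict(self, q=0.005):
        # Random-walk motion model: the mean stays put, uncertainty grows.
        self.P = self.P + np.eye(3) * q

    def update(self, z, r=0.02):
        # Fuse a 3-D detection z from the point cloud (H = I, noise r).
        R = np.eye(3) * r
        K = self.P @ np.linalg.inv(self.P + R)   # Kalman gain
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.x)
        self.P = (np.eye(3) - K) @ self.P
```

In the paper's Rao-Blackwellized scheme, many such analytic tracks run per particle while only the global moves are sampled, which is what keeps the filter tractable at environment scale.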

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Dynamic mapping, mobile robot, movable objects, multitarget tracking (MTT), Rao-Blackwellized particle filter (RBPF), service robots
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-245151 (URN); 10.1109/TRO.2018.2876111 (DOI); 000458197300017 (); 2-s2.0-85057204782 (Scopus ID)
Note

QC 20190313

Available from: 2019-03-13 Created: 2019-03-13 Last updated: 2019-03-18. Bibliographically approved
Mänttäri, J. & Folkesson, J. (2019). Incorporating Uncertainty in Predicting Vehicle Maneuvers at Intersections With Complex Interactions. In: 2019 IEEE Intelligent Vehicles Symposium (IV). Paper presented at 2019 IEEE Intelligent Vehicles Symposium (IV). IEEE
Incorporating Uncertainty in Predicting Vehicle Maneuvers at Intersections With Complex Interactions
2019 (English). In: 2019 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2019. Conference paper, Published paper (Refereed)
Abstract [en]

Highly automated driving systems are required to make robust decisions in many complex driving environments, such as urban intersections with high traffic. In order to make as informed and safe decisions as possible, it is necessary for the system to be able to predict the future maneuvers and positions of other traffic agents, as well as to provide information about the uncertainty in the prediction to the decision making module. While Bayesian approaches are a natural way of modeling uncertainty, recently deep learning-based methods have emerged to address this need as well. However, balancing the computational and system complexity, while also taking into account agent interactions and uncertainties, remains a difficult task. The work presented in this paper proposes a method of producing predictions of other traffic agents' trajectories in intersections with a single deep learning module, while incorporating uncertainty and the interactions between traffic participants. The accuracy of the generated predictions is tested on a simulated intersection with a high level of interaction between agents, and different methods of incorporating uncertainty are compared. Preliminary results show that the CVAE-based method produces qualitatively and quantitatively better measurements of uncertainty and manages to more accurately assign probability to the future occupied space of traffic agents.
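As a rough illustration of the CVAE-based variant the abstract compares, the sketch below conditions a small VAE on an agent's past trajectory and decodes sampled latents into candidate futures, whose spread serves as the uncertainty measure. All dimensions and the encoder/decoder layout are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TrajectoryCVAE(nn.Module):
    def __init__(self, hist_dim=20, fut_dim=20, z_dim=8, hidden=64):
        super().__init__()
        self.z_dim = z_dim
        # Encoder sees history + future at train time -> latent mu, logvar.
        self.enc = nn.Sequential(nn.Linear(hist_dim + fut_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * z_dim))
        # Decoder reconstructs the future from history + latent sample.
        self.dec = nn.Sequential(nn.Linear(hist_dim + z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, fut_dim))

    def forward(self, hist, fut):
        mu, logvar = self.enc(torch.cat([hist, fut], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        return self.dec(torch.cat([hist, z], dim=-1)), mu, logvar

    def sample(self, hist, n=10):
        # Draw n candidate futures; their spread reflects predictive uncertainty.
        hist = hist.unsqueeze(0).expand(n, -1, -1)   # (n, batch, hist_dim)
        z = torch.randn(*hist.shape[:2], self.z_dim)
        return self.dec(torch.cat([hist, z], dim=-1))
```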

Place, publisher, year, edition, pages
IEEE, 2019
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-257881 (URN); 10.1109/IVS.2019.8814159 (DOI); 978-1-7281-0560-4 (ISBN)
Conference
2019 IEEE Intelligent Vehicles Symposium (IV)
Funder
Vinnova, 2016-02547
Note

QC 20190925

Available from: 2019-09-06 Created: 2019-09-06 Last updated: 2019-09-25. Bibliographically approved
Tang, J., Folkesson, J. & Jensfelt, P. (2019). Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction. IEEE Robotics and Automation Letters, 4(2), 530-537
Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction
2019 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, p. 530-537. Article in journal (Refereed). Published
Abstract [en]

In this letter, we propose a new deep-learning-based dense monocular simultaneous localization and mapping (SLAM) method. Compared to existing methods, the proposed framework constructs a dense three-dimensional (3-D) model via sparse-to-dense mapping using learned surface normals. With single-view learned depth estimation as a prior for monocular visual odometry, we obtain both accurate positioning and high-quality depth reconstruction. The depth and normals are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction in comparison to the state of the art in deep monocular dense SLAM.
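The tightly coupled depth-and-normal prediction can be pictured as one shared encoder with two output heads trained jointly. The sketch below is a minimal stand-in under that assumption; the paper's actual network is certainly deeper and structured differently.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthNormalNet(nn.Module):
    """Shared encoder, two heads: per-pixel depth and unit surface normals."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.depth_head = nn.Conv2d(64, 1, 3, padding=1)
        self.normal_head = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, img):                                  # img: (B, 3, H, W)
        feat = self.encoder(img)
        depth = self.depth_head(feat).squeeze(1)             # (B, H, W)
        normal = F.normalize(self.normal_head(feat), dim=1)  # unit normals
        return depth, normal
```

Because both heads share the encoder, a joint loss over depth and normals couples the two predictions, which is the sense in which the training is "tightly coupled".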

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Visual-based navigation, SLAM, deep learning in robotics and automation
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-243927 (URN); 10.1109/LRA.2019.2891433 (DOI); 000456673300007 ()
Available from: 2019-03-13 Created: 2019-03-13 Last updated: 2019-03-13. Bibliographically approved
Torroba, I., Bore, N. & Folkesson, J. (2018). A Comparison of Submap Registration Methods for Multibeam Bathymetric Mapping. In: AUV 2018 - 2018 IEEE/OES Autonomous Underwater Vehicle Workshop, Proceedings. Paper presented at 2018 IEEE/OES Autonomous Underwater Vehicle Workshop, AUV 2018, 6 November 2018 through 9 November 2018, Porto, Portugal. Institute of Electrical and Electronics Engineers Inc.
A Comparison of Submap Registration Methods for Multibeam Bathymetric Mapping
2018 (English). In: AUV 2018 - 2018 IEEE/OES Autonomous Underwater Vehicle Workshop, Proceedings, Institute of Electrical and Electronics Engineers Inc., 2018. Conference paper, Published paper (Refereed)
Abstract [en]

On-the-fly registration of overlapping multibeam images is important for path planning by AUVs performing underwater surveys. In order to meet specifications on survey accuracy, coverage and density, precise corrections to the AUV trajectory while underway are required. There are fast methods for aligning point clouds that have been developed for robots. We compare several state-of-the-art methods to align point clouds of large, unstructured, sub-aquatic areas to build a global map. We first collect the multibeam point clouds into smaller submaps that are then aligned using variations of the ICP algorithm. This alignment step can be applied if the error in AUV pose is small. It would be the final step in correcting a larger error on loop closing, where a place recognition and a rough alignment would precede it. In the case of a lawn mower pattern survey it would be making more continuous corrections to small errors in the overlap between parallel lines. In this work we compare different methods for registration in order to determine the most suitable one for underwater terrain mapping. To do so, we benchmark the current state-of-the-art solutions according to an error metric and present the results.
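For a sense of what one of the compared submap alignment steps looks like in practice, here is a point-to-plane ICP sketch using Open3D. The file names, correspondence threshold, and normal-estimation radius are placeholders; the paper benchmarks several ICP variations rather than this single call.

```python
import numpy as np
import open3d as o3d

# Placeholder file names for two overlapping multibeam submaps.
source = o3d.io.read_point_cloud("submap_a.pcd")
target = o3d.io.read_point_cloud("submap_b.pcd")

# Point-to-plane ICP needs normals on the target cloud.
target.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=1.0,  # metres; tune to the sonar footprint
    init=np.eye(4),                   # assumes small dead-reckoning error
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

print(result.transformation)          # correction to apply to the AUV pose
print(result.fitness, result.inlier_rmse)
```

Note the identity initialisation: as the abstract says, this refinement step presumes the pose error is already small, e.g. after a rough loop-closure alignment.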

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2018
Keywords
Autonomous vehicles, Errors, Lawn mowers, Mapping, Motion planning, Surveys, ICP algorithms, Place recognition, Registration methods, State of the art, State-of-the-art methods, Survey accuracy, Terrain mapping, Underwater surveys, Autonomous underwater vehicles
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-262474 (URN); 10.1109/AUV.2018.8729731 (DOI); 2-s2.0-85068333120 (Scopus ID); 9781728102535 (ISBN)
Conference
2018 IEEE/OES Autonomous Underwater Vehicle Workshop, AUV 2018, 6 November 2018 through 9 November 2018, Porto, Portugal
Note

QC 20191017

Available from: 2019-10-17 Created: 2019-10-17 Last updated: 2019-10-17. Bibliographically approved
Torroba, I., Bore, N. & Folkesson, J. (2018). A Comparison of Submaps Registration Methods for Multibeam Bathymetric Mapping. Paper presented at 2018 IEEE OES Autonomous Underwater Vehicle Symposium.
A Comparison of Submaps Registration Methods for Multibeam Bathymetric Mapping
2018 (English). Conference paper, Published paper (Refereed)
Abstract [en]

On-the-fly registration of overlapping multibeam images is important for path planning by AUVs performing underwater surveys. In order to meet specifications on survey accuracy, coverage and density, precise corrections to the AUV trajectory while underway are required. There are fast methods for aligning point clouds that have been developed for robots. We compare several state-of-the-art methods to align point clouds of large, unstructured, sub-aquatic areas to build a global map. We first collect the multibeam point clouds into smaller submaps that are then aligned using variations of the ICP algorithm. This alignment step can be applied if the error in AUV pose is small. It would be the final step in correcting a larger error on loop closing, where a place recognition and a rough alignment would precede it. In the case of a lawn mower pattern survey it would be making more continuous corrections to small errors in the overlap between parallel lines. In this work we compare different methods for registration in order to determine the most suitable one for underwater terrain mapping. To do so, we benchmark the current state-of-the-art solutions according to an error metric and present the results.
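Complementing the ICP sketch given for the companion record above, one simple consistency metric for benchmarking registrations of this kind is the RMS of nearest-neighbour distances between the two aligned submaps. This is an illustrative assumption; the abstract does not name the exact error metric used.

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_alignment_error(cloud_a, cloud_b):
    """cloud_a: (N, 3), cloud_b: (M, 3) arrays of points *after* registration.
    Returns the RMS nearest-neighbour distance from cloud_a into cloud_b."""
    dists, _ = cKDTree(cloud_b).query(cloud_a)  # one NN distance per point
    return float(np.sqrt(np.mean(dists ** 2)))
```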

Keywords
SLAM, AUV
National Category
Robotics
Research subject
Vehicle and Maritime Engineering; Computer Science
Identifiers
urn:nbn:se:kth:diva-250894 (URN)
Conference
2018 IEEE OES Autonomous Underwater Vehicle Symposium
Projects
SMaRC, SSF IRC15-0046
Funder
Swedish Foundation for Strategic Research, IRC15-0046
Note

QC 20190424

Available from: 2019-05-07 Created: 2019-05-07 Last updated: 2019-05-15. Bibliographically approved
Chen, X., Ghadirzadeh, A., Folkesson, J., Björkman, M. & Jensfelt, P. (2018). Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments
2018 (English). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Mobile robot navigation in complex and dynamic environments is a challenging but important problem. Reinforcement learning approaches fail to solve these tasks efficiently due to reward sparsities, temporal complexities and high-dimensionality of sensorimotor spaces which are inherent in such problems. We present a novel approach to train action policies to acquire navigation skills for wheel-legged robots using deep reinforcement learning. The policy maps height-map image observations to motor commands to navigate to a target position while avoiding obstacles. We propose to acquire the multifaceted navigation skill by learning and exploiting a number of manageable navigation behaviors. We also introduce a domain randomization technique to improve the versatility of the training samples. We demonstrate experimentally a significant improvement in terms of data-efficiency, success rate, robustness against irrelevant sensory data, and also the quality of the maneuver skills.
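The policy described above, mapping height-map observations to motor commands, could take a form like the following CNN sketch. The input resolution, action dimensionality, and the extra target-direction input are assumptions; the deep RL training loop itself is omitted.

```python
import torch
import torch.nn as nn

class NavigationPolicy(nn.Module):
    """Maps a height-map image plus a target direction to motor commands."""
    def __init__(self, action_dim=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten())
        with torch.no_grad():  # infer the flattened feature size
            n_feat = self.conv(torch.zeros(1, 1, 64, 64)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(n_feat + 2, 128), nn.ReLU(),       # +2 for target (dx, dy)
            nn.Linear(128, action_dim), nn.Tanh())       # normalised commands

    def forward(self, heightmap, target_xy):
        # heightmap: (B, 1, 64, 64); target_xy: (B, 2)
        return self.head(torch.cat([self.conv(heightmap), target_xy], dim=-1))
```

The paper's domain randomization would then perturb the simulated terrain and observations during training so that one such policy transfers across environments.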

National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-256310 (URN); 10.1109/IROS.2018.8593702 (DOI); 2-s2.0-85062964303 (Scopus ID)
Conference
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Funder
EU, Horizon 2020, 644839
Note

QC 20190902

Available from: 2019-08-21 Created: 2019-08-21 Last updated: 2019-09-02. Bibliographically approved
Tang, J., Folkesson, J. & Jensfelt, P. (2018). Geometric Correspondence Network for Camera Motion Estimation. IEEE Robotics and Automation Letters, 3(2), 1010-1017
Geometric Correspondence Network for Camera Motion Estimation
2018 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no. 2, p. 1010-1017. Article in journal (Refereed). Published
Abstract [en]

In this paper, we propose a new learning scheme for generating geometric correspondences to be used for visual odometry. A convolutional neural network (CNN) and a recurrent neural network (RNN) are trained together to detect the location of keypoints as well as to generate corresponding descriptors in one unified structure. The network is optimized by warping points from the source frame to the reference frame with a rigid body transform, essentially learning from warping. The overall training is focused on movements of the camera rather than movements within the image, which leads to better consistency in the matching and ultimately better motion estimation. Experimental results show that the proposed method achieves better results than both related deep learning and hand-crafted methods. Furthermore, as a demonstration of the promise of our method, we use a naive SLAM implementation based on these keypoints and achieve performance on par with ORB-SLAM.
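The "learning from warping" supervision can be summarised in a few lines: keypoints with depth in the source frame are unprojected, moved through a rigid-body transform, and reprojected into the reference frame, where the matching losses are evaluated. A NumPy sketch, with the intrinsics and pose as illustrative inputs:

```python
import numpy as np

def warp_points(uv, depth, K, T_ref_src):
    """uv: (N, 2) pixel coords in the source frame; depth: (N,) depths;
    K: 3x3 camera intrinsics; T_ref_src: 4x4 rigid transform source->reference.
    Returns the warped (N, 2) pixel coords in the reference frame."""
    ones = np.ones((uv.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([uv, ones]).T).T  # unproject to z = 1
    pts_src = rays * depth[:, None]                        # 3-D points in source
    pts_ref = (np.hstack([pts_src, ones]) @ T_ref_src.T)[:, :3]  # rigid transform
    proj = (K @ pts_ref.T).T                               # reproject
    return proj[:, :2] / proj[:, 2:3]
```

Supervising at the warped locations ties the loss to camera motion rather than to image-space motion, which is the consistency argument the abstract makes.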

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
Visual-based navigation, SLAM, deep learning in robotics and automation
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-223775 (URN); 10.1109/LRA.2018.2794624 (DOI); 000424646100022 (); 2-s2.0-85063305858 (Scopus ID)
Note

QC 20180307

Available from: 2018-03-07 Created: 2018-03-07 Last updated: 2019-05-16. Bibliographically approved
Mänttäri, J., Folkesson, J. & Ward, E. (2018). Learning to Predict Lane Changes in Highway Scenarios Using Dynamic Filters on a Generic Traffic Representation. In: IEEE Intelligent Vehicles Symposium, Proceedings. Paper presented at 2018 IEEE Intelligent Vehicles Symposium, IV 2018, 26 September 2018 through 30 September 2018 (pp. 1385-1392). Institute of Electrical and Electronics Engineers Inc.
Learning to Predict Lane Changes in Highway Scenarios Using Dynamic Filters on a Generic Traffic Representation
2018 (English). In: IEEE Intelligent Vehicles Symposium, Proceedings, Institute of Electrical and Electronics Engineers Inc., 2018, p. 1385-1392. Conference paper, Published paper (Refereed)
Abstract [en]

In highway driving scenarios it is important for highly automated driving systems to be able to recognize and predict the intended maneuvers of other drivers in order to make robust and informed decisions. Many methods utilize the current kinematics of vehicles to make these predictions, but it is also possible to examine the relations between vehicles to gain more information about the traffic scene and make more accurate predictions. The work presented in this paper proposes a novel method of predicting lane change maneuvers in highway scenarios using deep learning and a generic visual representation of the traffic scene. Experimental results suggest that by operating on the visual representation, the spatial relations between arbitrary vehicles can be captured by our method and used for more informed predictions without the need for explicit dynamic or driver interaction models. The proposed method is evaluated on highway driving scenarios using the Interstate-80 dataset and compared to a kinematics-based prediction model, with results showing that the proposed method produces more robust predictions across the prediction horizon than the comparison model.
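A generic visual representation of this kind can be rasterised from agent states, for example as a top-down grid centred on the ego vehicle. The channel layout and resolution below are assumptions, and the dynamic-filter network that consumes the image is not shown.

```python
import numpy as np

def render_scene(vehicles, grid=(128, 128), metres_per_cell=0.5):
    """vehicles: list of (x, y, vx) tuples relative to the ego vehicle.
    Returns a 2-channel image: occupancy and relative longitudinal speed."""
    img = np.zeros((2, *grid), dtype=np.float32)
    cx, cy = grid[0] // 2, grid[1] // 2        # ego vehicle at the centre
    for x, y, vx in vehicles:
        i = int(cx + x / metres_per_cell)
        j = int(cy + y / metres_per_cell)
        if 0 <= i < grid[0] and 0 <= j < grid[1]:
            img[0, i, j] = 1.0                 # occupancy channel
            img[1, i, j] = vx                  # velocity channel
    return img
```

Because the representation encodes all surrounding vehicles in one image, a convolutional predictor can pick up spatial relations between arbitrary vehicles without an explicit interaction model, which is the point the abstract argues.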

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2018
Keywords
Deep learning, Intelligent vehicle highway systems, Kinematics, Vehicles, Accurate prediction, Comparison modeling, Driver interaction, Highly automated drivings, Lane change maneuvers, Prediction horizon, Robust predictions, Visual representations, Forecasting
National Category
Civil Engineering
Identifiers
urn:nbn:se:kth:diva-247122 (URN); 10.1109/IVS.2018.8500426 (DOI); 2-s2.0-85056800536 (Scopus ID); 9781538644522 (ISBN)
Conference
2018 IEEE Intelligent Vehicles Symposium, IV 2018, 26 September 2018 through 30 September 2018
Note

QC 20190403

Available from: 2019-04-03 Created: 2019-04-03 Last updated: 2019-04-03. Bibliographically approved
Rixon Fuchs, L., Gällström, A. & Folkesson, J. (2018). Object Recognition in Forward Looking Sonar Images using Transfer Learning. Paper presented at 2018 IEEE OES Autonomous Underwater Vehicle Symposium. IEEE
Object Recognition in Forward Looking Sonar Images using Transfer Learning
2018 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Forward Looking Sonars (FLS) are a typical choice of sonar for autonomous underwater vehicles. They are most often the main sensor for obstacle avoidance and can be used for monitoring, homing, following and docking as well. Those tasks require discrimination between noise and various classes of objects in the sonar images. Robust recognition of sonar data still remains a problem, but if solved it would enable more autonomy for underwater vehicles, providing more reliable information about the surroundings to aid decision making. Recent advances in image recognition using Deep Learning methods have been rapid. While image recognition with Deep Learning is known to require large amounts of labeled data, there are data-efficient learning methods using generic features learned by a network pre-trained on data from a different domain. This enables us to work with much smaller domain-specific datasets, making the method interesting to explore for sonar object recognition with limited amounts of training data. We have developed a Convolutional Neural Network (CNN) based classifier for FLS images and compared its performance to classification using classical methods and hand-crafted features.
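The data-efficient recipe described above pairs generic features from a network pre-trained on a different domain with a small domain-specific training set. A hypothetical sketch, using an ImageNet-trained ResNet-18 as the fixed feature extractor and a classical SVM as the classifier (both assumptions; the paper's backbone and comparison classifiers are not named in this abstract):

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Fixed feature extractor: the pre-trained network minus its classifier head.
resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
feature_net = nn.Sequential(*list(resnet.children())[:-1])
feature_net.eval()

def extract_features(sonar_batch):
    """sonar_batch: (N, 3, 224, 224) tensor of FLS crops (grey replicated to 3 ch).
    Returns (N, 512) generic feature vectors."""
    with torch.no_grad():
        return feature_net(sonar_batch).flatten(1).numpy()

# Usage on a small domain-specific dataset (X_train, y_train are placeholders):
# clf = SVC(kernel="rbf").fit(extract_features(X_train), y_train)
# predictions = clf.predict(extract_features(X_test))
```

Keeping the extractor fixed and fitting only the classical classifier is what makes the approach workable with limited labeled sonar data.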

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
AUV, CNN, Forward Looking Sonar, Object Recognition, Transfer Learning, Underwater, Data Efficient Learning
National Category
Robotics
Research subject
Vehicle and Maritime Engineering; Computer Science
Identifiers
urn:nbn:se:kth:diva-250893 (URN)
Conference
2018 IEEE OES Autonomous Underwater Vehicle Symposium
Projects
SMARC SSF IRC15-0046
Funder
Swedish Foundation for Strategic Research, IRC15-0046
Note

QC 20190423

Available from: 2019-05-07 Created: 2019-05-07 Last updated: 2019-05-16. Bibliographically approved