Tang, Jiexiong
Publications (2 of 2)
Tang, J., Ericson, L., Folkesson, J. & Jensfelt, P. (2019). GCNv2: Efficient Correspondence Prediction for Real-Time SLAM. IEEE Robotics and Automation Letters, 4(4), 3505-3512
GCNv2: Efficient Correspondence Prediction for Real-Time SLAM
2019 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 4, p. 3505-3512. Article in journal (Refereed). Published.
Abstract [en]

In this letter, we present a deep learning-based network, GCNv2, for the generation of keypoints and descriptors. GCNv2 is built on our previous method, GCN, a network trained for 3D projective geometry. GCNv2 is designed with a binary descriptor vector, like the ORB feature, so that it can easily replace ORB in systems such as ORB-SLAM2. GCNv2 significantly improves computational efficiency over GCN, which was only able to run on desktop hardware. We show how a modified version of ORB-SLAM2 using GCNv2 features runs on a Jetson TX2, an embedded low-power platform. Experimental results show that GCNv2 retains accuracy comparable to GCN and that it is robust enough to be used for control of a flying drone. Source code is available at: https://github.com/jiexiong2016/GCNv2_SLAM.
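The abstract does not detail the descriptor design, but a short sketch shows why a binary descriptor makes ORB a drop-in replacement target: the real-valued network output can be thresholded into a 256-bit string and matched by Hamming distance, the same matching used for ORB descriptors. All helper names below are hypothetical, for illustration only:

```python
import numpy as np

def binarize(desc):
    """Binarize real-valued descriptors by sign thresholding.

    desc: (N, 256) float array of network outputs.
    Returns an (N, 32) uint8 array: 256 bits packed into 32 bytes,
    the same memory layout as an ORB descriptor.
    """
    bits = (desc > 0).astype(np.uint8)   # sign -> {0, 1}
    return np.packbits(bits, axis=1)     # 256 bits -> 32 bytes

def hamming_match(d1, d2):
    """Brute-force nearest neighbour under Hamming distance."""
    # XOR, then count differing bits to get the Hamming distance.
    diff = np.unpackbits(d1[:, None, :] ^ d2[None, :, :], axis=2)
    dist = diff.sum(axis=2)              # (N1, N2) distance matrix
    return dist.argmin(axis=1), dist.min(axis=1)

# Toy example: match a set of random "learned" descriptors to itself.
rng = np.random.default_rng(0)
a = rng.uniform(-1, 1, (5, 256)).astype(np.float32)
idx, dist = hamming_match(binarize(a), binarize(a))
print(idx)   # each descriptor matches itself at distance 0
```

Because matching reduces to XOR plus popcount, an existing ORB-SLAM2 pipeline can consume such descriptors without changing its matcher.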

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-257883 (URN)
10.1109/LRA.2019.2927954 (DOI)
000477983400013 ()
2-s2.0-85069905338 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Foundation for Strategic Research, Fact; Swedish Research Council
Note

QC 20190909

Available from: 2019-09-06. Created: 2019-09-06. Last updated: 2019-12-10. Bibliographically approved.
Tang, J., Folkesson, J. & Jensfelt, P. (2019). Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction. IEEE Robotics and Automation Letters, 4(2), 530-537
Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction
2019 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, p. 530-537. Article in journal (Refereed). Published.
Abstract [en]

In this letter, we propose a new deep learning-based dense monocular simultaneous localization and mapping (SLAM) method. Compared to existing methods, the proposed framework constructs a dense three-dimensional (3-D) model via sparse-to-dense mapping using learned surface normals. With single-view learned depth estimation as a prior for monocular visual odometry, we obtain both accurate positioning and high-quality depth reconstruction. The depth and normals are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction in comparison to the state of the art in deep monocular dense SLAM.
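The abstract does not specify how normals densify the depth map; one standard way such a step can work, shown here as an illustrative sketch only (the intrinsics `K` and the locally planar assumption are mine, not from the paper), is to propagate a sparse depth to a neighbouring pixel by intersecting that pixel's viewing ray with the tangent plane defined by the predicted surface normal:

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy are illustrative values).
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])
K_inv = np.linalg.inv(K)

def propagate_depth(u, v, depth, normal, u2, v2):
    """Predict depth at neighbouring pixel (u2, v2), assuming the surface
    is locally planar with the given unit normal at seed pixel (u, v)."""
    X = depth * (K_inv @ np.array([u, v, 1.0]))  # back-project seed pixel
    r = K_inv @ np.array([u2, v2, 1.0])          # neighbour's viewing ray
    # Plane equation n.(d2 * r - X) = 0  =>  d2 = (n.X) / (n.r)
    return float(normal @ X / (normal @ r))

# Fronto-parallel plane at 2 m: the neighbour inherits the same depth.
n = np.array([0.0, 0.0, 1.0])
d2 = propagate_depth(320, 240, 2.0, n, 321, 240)
print(round(d2, 6))  # 2.0
```

Tilted normals yield correspondingly slanted depth, which is why jointly predicting depth and normals in one network, as the paper does, keeps the two quantities consistent.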

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
Keywords
Visual-based navigation, SLAM, deep learning in robotics and automation
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-243927 (URN)
10.1109/LRA.2019.2891433 (DOI)
000456673300007 ()
2-s2.0-85063310740 (Scopus ID)
Available from: 2019-03-13. Created: 2019-03-13. Last updated: 2020-03-09. Bibliographically approved.