Folkesson, John, Associate Professor (ORCID iD: orcid.org/0000-0002-7796-1438)
Publications (10 of 49)
Tang, J., Folkesson, J. & Jensfelt, P. (2018). Geometric Correspondence Network for Camera Motion Estimation. IEEE Robotics and Automation Letters, 3(2), 1010-1017
Geometric Correspondence Network for Camera Motion Estimation
2018 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no 2, p. 1010-1017. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we propose a new learning scheme for generating geometric correspondences to be used for visual odometry. A convolutional neural network (CNN) combined with a recurrent neural network (RNN) is trained to detect the location of keypoints and to generate the corresponding descriptors in one unified structure. The network is optimized by warping points from the source frame to the reference frame with a rigid body transform: essentially, learning from warping. The overall training is focused on movements of the camera rather than movements within the image, which leads to better consistency in the matching and ultimately better motion estimation. Experimental results show that the proposed method achieves better results than both related deep learning and hand-crafted methods. Furthermore, as a demonstration of the promise of our method, we use a naive SLAM implementation based on these keypoints and obtain performance on par with ORB-SLAM.
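As a concrete illustration of the "learning from warping" supervision described above, the sketch below warps detected keypoints from the source frame into the reference frame with a known rigid-body transform and penalizes the distance to the keypoints detected there. This is a minimal reconstruction from the abstract, not the authors' code; all function names, shapes, and parameters are assumptions.

```python
# Illustrative sketch (not the paper's implementation) of a warping loss:
# keypoints detected in the source frame are back-projected using depth,
# moved by the known rigid-body transform (R, t), re-projected into the
# reference frame, and compared against the keypoints detected there.
import numpy as np

def warp_points(pts_src, depth_src, K, R, t):
    """Warp 2D keypoints from the source frame into the reference frame.

    pts_src:   (N, 2) pixel coordinates in the source frame
    depth_src: (N,)   depth at each keypoint
    K:         (3, 3) camera intrinsics
    R, t:      rigid-body transform from source to reference camera
    """
    ones = np.ones((pts_src.shape[0], 1))
    pix = np.hstack([pts_src, ones])             # homogeneous pixel coords
    rays = (np.linalg.inv(K) @ pix.T).T          # back-project to unit-depth rays
    xyz_src = rays * depth_src[:, None]          # 3D points in the source frame
    xyz_ref = (R @ xyz_src.T).T + t              # apply the rigid-body transform
    proj = (K @ xyz_ref.T).T                     # project into the reference frame
    return proj[:, :2] / proj[:, 2:3]            # normalize to pixel coordinates

def warping_loss(pts_src, pts_ref, depth_src, K, R, t):
    """Mean reprojection error between warped source keypoints and the
    corresponding keypoints detected in the reference frame."""
    warped = warp_points(pts_src, depth_src, K, R, t)
    return np.mean(np.linalg.norm(warped - pts_ref, axis=1))
```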

Place, publisher, year, edition, pages
IEEE, 2018
Keyword
Visual-based navigation, SLAM, deep learning in robotics and automation
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-223775 (URN), 10.1109/LRA.2018.2794624 (DOI), 000424646100022 (ISI)
Note

QC 20180307

Available from: 2018-03-07. Created: 2018-03-07. Last updated: 2018-03-07. Bibliographically approved.
Faeulhammer, T., Ambrus, R., Burbridge, C., Zillich, M., Folkesson, J., Hawes, N., . . . Vincze, M. (2017). Autonomous Learning of Object Models on a Mobile Robot. IEEE Robotics and Automation Letters, 2(1), 26-33, Article ID 7393491.
Autonomous Learning of Object Models on a Mobile Robot
2017 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no 1, p. 26-33, article id 7393491. Article in journal (Refereed). Published.
Abstract [en]

In this article we present and evaluate a system which allows a mobile robot to autonomously detect, model and re-recognize objects in everyday environments. Whilst other systems have demonstrated one of these elements, to our knowledge we present the first system capable of doing all of them, without human interaction, in normal indoor scenes. Our system detects objects to learn by modelling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally, these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.

Place, publisher, year, edition, pages
IEEE Press, 2017
Keyword
Object Model, Robot
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-183494 (URN), 10.1109/LRA.2016.2522086 (DOI), 000413732300005 (ISI), 2-s2.0-85020019001 (Scopus ID)
Projects
STRANDS
Funder
EU, FP7, Seventh Framework Programme, 600623
Note

QC 20160411

Available from: 2016-03-14. Created: 2016-03-14. Last updated: 2017-11-20. Bibliographically approved.
Ambrus, R., Bore, N., Folkesson, J. & Jensfelt, P. (2017). Autonomous meshing, texturing and recognition of object models with a mobile robot. In: Bicchi, A. & Okamura, A. (Eds.), 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 24-28 September 2017, Vancouver, Canada (pp. 5071-5078). IEEE
Open this publication in new window or tab >>Autonomous meshing, texturing and recognition of object models with a mobile robot
2017 (English) In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Bicchi, A. & Okamura, A., IEEE, 2017, p. 5071-5078. Conference paper, Published paper (Refereed).
Abstract [en]

We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.
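The Poisson-surface step described above can be approximated with off-the-shelf tools. The sketch below uses the Open3D library (not the authors' implementation) and assumes the registered RGB-D views have already been fused into a single point cloud; the file names are hypothetical.

```python
# A minimal sketch, assuming Open3D, of reconstructing a textured-mesh
# candidate from fused, registered RGB-D views via Poisson reconstruction.
import open3d as o3d

pcd = o3d.io.read_point_cloud("fused_views.ply")   # hypothetical fused views
pcd.estimate_normals()                             # Poisson requires normals
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)                                  # approximate the geometry
o3d.io.write_triangle_mesh("object_model.ply", mesh)
```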

Place, publisher, year, edition, pages
IEEE, 2017
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-225806 (URN), 000426978204127 (ISI), 2-s2.0-85041961210 (Scopus ID), 978-1-5386-2682-5 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 24-28 September 2017, Vancouver, Canada
Funder
EU, FP7, Seventh Framework Programme, 600623; Swedish Foundation for Strategic Research; Swedish Research Council, C0475401
Note

QC 20180409

Available from: 2018-04-09. Created: 2018-04-09. Last updated: 2018-04-09. Bibliographically approved.
Ambrus, R., Bore, N., Folkesson, J. & Jensfelt, P. (2017). Autonomous meshing, texturing and recognition of object models with a mobile robot. Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Vancouver, Canada
Autonomous meshing, texturing and recognition of object models with a mobile robot
2017 (English) Conference paper, Published paper (Refereed).
Abstract [en]

We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.

Place, publisher, year, edition, pages
Vancouver, Canada, 2017
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-215232 (URN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Note

QC 20171009

Available from: 2017-10-05. Created: 2017-10-05. Last updated: 2018-01-13. Bibliographically approved.
Bore, N., Ambrus, R., Jensfelt, P. & Folkesson, J. (2017). Efficient retrieval of arbitrary objects from long-term robot observations. Robotics and Autonomous Systems, 91, 139-150
Efficient retrieval of arbitrary objects from long-term robot observations
2017 (English) In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 91, p. 139-150. Article in journal (Refereed). Published.
Abstract [en]

We present a novel method for efficient querying and retrieval of arbitrarily shaped objects from large amounts of unstructured 3D point cloud data. Our approach first performs a convex segmentation of the data, after which local features are extracted and stored in a feature dictionary. We show that the representation allows efficient and reliable querying of the data. To handle arbitrarily shaped objects, we propose a scheme which allows incremental matching of segments based on similarity to the query object. Further, we adjust the feature metric based on the quality of the query results to improve the results in a second round of querying. We perform extensive qualitative and quantitative experiments on two datasets for both segmentation and retrieval, validating the results using ground truth data. Comparison with other state-of-the-art methods further reinforces the validity of the proposed method. Finally, we also investigate how the density and distribution of the local features within the point clouds influence the quality of the results.
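The sketch below illustrates the feature-dictionary idea from the abstract in schematic form: local features extracted from segments are stored together with their segment ids, and a query accumulates nearest-neighbour votes over its own features. This is an assumption-laden toy version, not the paper's implementation; all class and variable names are invented for illustration.

```python
# Schematic sketch of segment retrieval via a local-feature dictionary.
import numpy as np
from collections import Counter

class FeatureDictionary:
    def __init__(self):
        self.features = []   # one row per local feature
        self.segments = []   # id of the segment each feature came from

    def add_segment(self, segment_id, feats):
        """Store the local features extracted from one convex segment."""
        for f in feats:
            self.features.append(f)
            self.segments.append(segment_id)

    def query(self, query_feats, top_k=5):
        """Vote for segments whose stored features are nearest to the
        query object's features; return the most-voted segment ids."""
        bank = np.asarray(self.features)
        votes = Counter()
        for f in query_feats:
            nearest = np.argmin(np.linalg.norm(bank - f, axis=1))
            votes[self.segments[nearest]] += 1
        return votes.most_common(top_k)
```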

Place, publisher, year, edition, pages
Elsevier, 2017
Keyword
Mapping, Mobile robotics, Point cloud, Segmentation, Retrieval
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-205426 (URN), 10.1016/j.robot.2016.12.013 (DOI), 000396949800012 (ISI), 2-s2.0-85015091269 (Scopus ID)
Note

QC 20170522

Available from: 2017-05-22. Created: 2017-05-22. Last updated: 2018-01-13. Bibliographically approved.
Karaoguz, H., Bore, N., Folkesson, J. & Jensfelt, P. (2017). Human-Centric Partitioning of the Environment. In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Paper presented at the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 844-850). IEEE
Human-Centric Partitioning of the Environment
2017 (English) In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, p. 844-850. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, we present an object-based approach for human-centric partitioning of the environment. Our approach for determining the human-centric regions is to detect the objects that are commonly associated with frequent human presence. In order to detect these objects, we employ state-of-the-art perception techniques. The detected objects are stored with their spatio-temporal information in the robot's memory, to be used later for generating the regions. The advantages of our method are that it is autonomous, requires only a small set of perceptual data, and does not even require people to be present while generating the regions. The generated regions are validated using a 1-month dataset collected in an indoor office environment. The experimental results show that, although a small set of perceptual data is used, the regions are generated at densely occupied locations.

Place, publisher, year, edition, pages
IEEE, 2017
Keyword
human-robot interaction, perception, AI
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-215941 (URN), 000427262400131 (ISI), 2-s2.0-85045847052 (Scopus ID)
Conference
IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN
Funder
Swedish Foundation for Strategic Research
Note

QC 20171018

Available from: 2017-10-17. Created: 2017-10-17. Last updated: 2018-04-11. Bibliographically approved.
Folkesson, J. (2017). The antiparticle filter—an adaptive nonlinear estimator. In: 15th International Symposium of Robotics Research, 2011. Paper presented at the 15th International Symposium of Robotics Research, 9-12 December 2011 (pp. 219-234). Springer
The antiparticle filter—an adaptive nonlinear estimator
2017 (English) In: 15th International Symposium of Robotics Research, 2011, Springer, 2017, p. 219-234. Conference paper, Published paper (Refereed).
Abstract [en]

We introduce the antiparticle filter (AF), a new type of recursive Bayesian estimator that is unlike the extended Kalman filter (EKF), the unscented Kalman filter (UKF), or the particle filter (PF). We show that for a classic problem of robot localization the AF can substantially outperform these other filters in some situations. The AF estimates the posterior distribution as an auxiliary-variable Gaussian, which gives an analytic formula using no random samples. It adaptively changes the complexity of the posterior distribution as the uncertainty changes. It is equivalent to the EKF when the uncertainty is low, while being able to represent non-Gaussian distributions as the uncertainty increases. The computation time can be much faster than that of a particle filter for the same accuracy. We have run simulated comparisons of two types of AF against the EKF, the iterative EKF, the UKF, an iterative UKF, and the PF, demonstrating that the AF can reduce the error to a consistently accurate value.

Place, publisher, year, edition, pages
Springer, 2017
Keyword
Bandpass filters, Kalman filters, Monte Carlo methods, Robot applications, Robotics, Target tracking, Analytic formula, Auxiliary variables, Non-gaussian distribution, Nonlinear estimator, Posterior distributions, Recursive Bayesian estimators, Robot localization, Unscented Kalman Filter, Extended Kalman filters
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-195123 (URN), 10.1007/978-3-319-29363-9_13 (DOI), 000405326800013 (ISI), 2-s2.0-84984843544 (Scopus ID), 9783319293622 (ISBN)
Conference
15th International Symposium of Robotics Research, 9-12 December 2011
Note

Funding details: 501100002367, CAS, Stiftelsen för Strategisk Forskning (Swedish Foundation for Strategic Research). QC 20161103

Available from: 2016-11-03. Created: 2016-11-02. Last updated: 2018-01-13. Bibliographically approved.
Hawes, N., Ambrus, R., Bore, N., Folkesson, J., Jensfelt, P., Hanheide, M., et al. (2017). The STRANDS Project: Long-Term Autonomy in Everyday Environments. IEEE Robotics & Automation Magazine, 24(3), 146-156
The STRANDS Project: Long-Term Autonomy in Everyday Environments
2017 (English) In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 24, no 3, p. 146-156. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
IEEE, 2017
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-216406 (URN), 10.1109/MRA.2016.2636359 (DOI), 000411010400017 (ISI), 2-s2.0-85007063656 (Scopus ID)
Note

QC 20171020

Available from: 2017-10-20. Created: 2017-10-20. Last updated: 2018-01-13. Bibliographically approved.
Wang, Z., Jensfelt, P. & Folkesson, J. (2016). Building a human behavior map from local observations. In: 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2016. Paper presented at the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2016 (pp. 64-70). IEEE, Article ID 7745092.
Building a human behavior map from local observations
2016 (English) In: 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2016, IEEE, 2016, p. 64-70, article id 7745092. Conference paper, Published paper (Refereed).
Abstract [en]

This paper presents a novel method for classifying regions from human movements in service robots' working environments. The entire space is segmented into classes according to the functionality or affordance of each place, where each class accommodates a typical human behavior. This is achieved on a grid map in two steps. First, a probabilistic model is developed to capture human movements for each grid cell using a non-ergodic HMM. Then the learned transition probabilities corresponding to these movements are used to cluster all cells with the K-means algorithm. The knowledge of typical human movements at each location, represented by the K-means prototypes and summarized in a 'behavior-based map', enables a robot to adjust its strategy for interacting with people according to where they are located, and thus greatly enhances its capability to assist people. The performance of the proposed classification method is demonstrated by experimental results from 8 hours of data collected in a kitchen environment.
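The two-step scheme in the abstract can be illustrated in a few lines: per-cell movement statistics (here, learned HMM transition probabilities) are clustered with K-means, and the cluster label assigned to each cell forms the behavior-based map. A minimal sketch, assuming scikit-learn and hypothetical array shapes; this is not the paper's code.

```python
# Sketch of step two: cluster grid cells by their learned HMM transition
# probabilities to obtain a behavior-based map.
import numpy as np
from sklearn.cluster import KMeans

def behavior_map(transition_probs, n_behaviors=4):
    """transition_probs: (n_cells, n_features) array, one row per grid
    cell, holding that cell's learned transition probabilities.
    Returns one cluster label per cell: the behavior-based map."""
    km = KMeans(n_clusters=n_behaviors, n_init=10)
    return km.fit_predict(transition_probs)

# Hypothetical usage: a 100x100 grid with 9 transition features per cell.
probs = np.random.dirichlet(np.ones(9), size=100 * 100)
labels = behavior_map(probs).reshape(100, 100)
```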

Place, publisher, year, edition, pages
IEEE, 2016
Keyword
Hidden Markov models, Tracking, Probabilistic logic, Semantics, Service robots, Trajectory
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-196690 (URN), 10.1109/ROMAN.2016.7745092 (DOI), 000390682500008 (ISI), 2-s2.0-85002822136 (Scopus ID), 978-1-5090-3929-6 (ISBN)
Conference
25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2016
Projects
STRANDS
Funder
EU, FP7, Seventh Framework Programme, 600623
Note

QC 20170202

Available from: 2016-11-18. Created: 2016-11-18. Last updated: 2017-02-02. Bibliographically approved.
Ambrus, R., Folkesson, J. & Jensfelt, P. (2016). Unsupervised object segmentation through change detection in a long term autonomy scenario. In: IEEE-RAS International Conference on Humanoid Robots. Paper presented at the 16th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2016), 15-17 November 2016 (pp. 1181-1187). IEEE
Unsupervised object segmentation through change detection in a long term autonomy scenario
2016 (English) In: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, p. 1181-1187. Conference paper, Published paper (Refereed).
Abstract [en]

In this work we address the problem of dynamic object segmentation in office environments. We make no prior assumptions about what is dynamic and what is static, and our reasoning is based on change detection between sparse and non-uniform observations of the scene. We model the static part of the environment, and we focus on improving the accuracy and quality of the segmented dynamic objects over long periods of time. We address the issue of adapting the static structure over time and incorporating new elements, for which we train and use a classifier whose output indicates the dynamic nature of the segmented elements. We show that the proposed algorithms improve the accuracy and the rate of detection of dynamic objects by comparison with a labelled dataset.
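A toy version of the change-detection step is sketched below: points in a new observation that lie far from the static-structure model are kept as candidate dynamic objects. It uses Open3D as a stand-in (the paper's pipeline is more involved), and the file names and distance threshold are hypothetical.

```python
# Minimal change-detection sketch: keep points of a new observation that
# are far from the static model as candidate dynamic-object points.
import numpy as np
import open3d as o3d

static = o3d.io.read_point_cloud("static_model.ply")     # hypothetical files
obs = o3d.io.read_point_cloud("new_observation.ply")

# Distance from each observed point to its nearest static-model point.
dist = np.asarray(obs.compute_point_cloud_distance(static))
dynamic = obs.select_by_index(np.where(dist > 0.05)[0])  # 5 cm threshold
```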

Place, publisher, year, edition, pages
IEEE, 2016
Keyword
Anthropomorphic robots, Signal detection, Change detection, Dynamic nature, Dynamic objects, Non-uniform, Object segmentation, Office environments, Static structures, Object detection
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-202843 (URN), 10.1109/HUMANOIDS.2016.7803420 (DOI), 000403009300175 (ISI), 2-s2.0-85010207172 (Scopus ID), 9781509047185 (ISBN)
Conference
16th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2016), 15-17 November 2016
Note

QC 20170317

Available from: 2017-03-17. Created: 2017-03-17. Last updated: 2018-01-13. Bibliographically approved.