Publications (10 of 127)
Tang, J., Folkesson, J. & Jensfelt, P. (2018). Geometric Correspondence Network for Camera Motion Estimation. IEEE Robotics and Automation Letters, 3(2), 1010-1017
Geometric Correspondence Network for Camera Motion Estimation
2018 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no. 2, p. 1010-1017. Article in journal (Refereed) Published
Abstract [en]

In this paper, we propose a new learning scheme for generating geometric correspondences to be used for visual odometry. A convolutional neural network (CNN) and a recurrent neural network (RNN) are trained together to detect the location of keypoints as well as to generate corresponding descriptors in one unified structure. The network is optimized by warping points from the source frame to the reference frame with a rigid-body transform: essentially, it learns from warping. The overall training is focused on movements of the camera rather than movements within the image, which leads to better consistency in the matching and ultimately better motion estimation. Experimental results show that the proposed method achieves better results than both related deep learning and hand-crafted methods. Furthermore, as a demonstration of the promise of our method, we use a naive SLAM implementation based on these keypoints and obtain performance on par with ORB-SLAM.
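To make the warping step above concrete, here is a minimal sketch in Python (an illustration, not the paper's implementation; the function name, intrinsics, and example values are assumptions): a keypoint is back-projected from the source frame using its depth, moved by the rigid-body transform, and re-projected into the reference frame, yielding the correspondence the network is trained against.

import numpy as np

def warp_keypoint(p_src, depth, K, R, t):
    # Back-project the source pixel (u, v) to a 3D point using its depth.
    uv1 = np.array([p_src[0], p_src[1], 1.0])
    X_src = depth * (np.linalg.inv(K) @ uv1)
    # Move the point into the reference camera frame (rigid-body transform).
    X_ref = R @ X_src + t
    # Re-project to pixel coordinates in the reference frame.
    uv = K @ X_ref
    return uv[:2] / uv[2]

# Example: identity rotation, 10 cm translation along the camera x-axis.
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
print(warp_keypoint((320.0, 240.0), 2.0, K, np.eye(3), np.array([0.1, 0.0, 0.0])))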

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
Visual-based navigation, SLAM, deep learning in robotics and automation
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-223775 (URN), 10.1109/LRA.2018.2794624 (DOI), 000424646100022 ()
Note

QC 20180307

Available from: 2018-03-07 Created: 2018-03-07 Last updated: 2018-03-07. Bibliographically approved
Faeulhammer, T., Ambrus, R., Burbridge, C., Zillich, M., Folkesson, J., Hawes, N., . . . Vincze, M. (2017). Autonomous Learning of Object Models on a Mobile Robot. IEEE Robotics and Automation Letters, 2(1), 26-33, Article ID 7393491.
Autonomous Learning of Object Models on a Mobile Robot
2017 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 1, p. 26-33, article id 7393491. Article in journal (Refereed) Published
Abstract [en]

In this article, we present and evaluate a system which allows a mobile robot to autonomously detect, model and re-recognize objects in everyday environments. Whilst other systems have demonstrated one of these elements, to our knowledge we present the first system capable of doing all of these things, without human interaction, in normal indoor scenes. Our system detects objects to learn by modelling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally, these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.
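As a rough sketch of the "extract dynamic elements" idea described above (hedged: the threshold, names, and nearest-neighbor test are illustrative assumptions, not the authors' system), observation points that are far from every point of the static environment model can be flagged as belonging to a candidate object:

import numpy as np
from scipy.spatial import cKDTree

def extract_dynamic(static_cloud, observation, dist_thresh=0.05):
    # static_cloud, observation: (N, 3) arrays of 3D points in meters.
    tree = cKDTree(static_cloud)
    d, _ = tree.query(observation)        # distance to nearest static point
    return observation[d > dist_thresh]   # points unexplained by the map

static = np.random.rand(1000, 3)
obs = np.vstack([static[:500] + 0.001, [[2.0, 2.0, 0.5]]])
print(extract_dynamic(static, obs).shape)  # (1, 3): only the new point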

Place, publisher, year, edition, pages
IEEE Press, 2017
Keywords
Object Model, Robot
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-183494 (URN), 10.1109/LRA.2016.2522086 (DOI), 000413732300005 (), 2-s2.0-85020019001 (Scopus ID)
Projects
STRANDS
Funder
EU, FP7, Seventh Framework Programme, 600623
Note

QC 20160411

Available from: 2016-03-14 Created: 2016-03-14 Last updated: 2017-11-20. Bibliographically approved
Ambrus, R., Bore, N., Folkesson, J. & Jensfelt, P. (2017). Autonomous meshing, texturing and recognition of object models with a mobile robot. In: Bicchi, A. & Okamura, A. (Eds.), 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 24-28, 2017, Vancouver, Canada (pp. 5071-5078). IEEE
Open this publication in new window or tab >>Autonomous meshing, texturing and recognition of object models with a mobile robot
2017 (English) In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Bicchi, A. & Okamura, A., IEEE, 2017, p. 5071-5078. Conference paper, Published paper (Refereed)
Abstract [en]

We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.
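The second optimization step can be illustrated with a toy photometric-consistency cost (a sketch under assumed data layouts, not the paper's code): mesh vertices are projected into two RGB views, and the cost penalizes color disagreement at the projected pixels; an optimizer would perturb the camera poses to minimize it.

import numpy as np

def project(K, R, t, X):
    # Project 3D points X (N, 3) to pixel coordinates for pose (R, t).
    Xc = X @ R.T + t
    uv = Xc @ K.T
    return uv[:, :2] / uv[:, 2:3]

def sample(img, uv):
    # Nearest-neighbor color lookup in an (H, W, 3) image.
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, img.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, img.shape[0] - 1)
    return img[v, u].astype(float)

def photometric_cost(img_a, pose_a, img_b, pose_b, K, verts):
    # Sum of squared color differences at shared mesh vertices.
    ca = sample(img_a, project(K, *pose_a, verts))
    cb = sample(img_b, project(K, *pose_b, verts))
    return float(np.sum((ca - cb) ** 2))

K = np.array([[525.0, 0.0, 319.5], [0.0, 525.0, 239.5], [0.0, 0.0, 1.0]])
img = np.zeros((480, 640, 3))
pose = (np.eye(3), np.zeros(3))
verts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0]])
print(photometric_cost(img, pose, img, pose, K, verts))  # 0.0 for identical views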

Place, publisher, year, edition, pages
IEEE, 2017
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-225806 (URN), 000426978204127 (), 2-s2.0-85041961210 (Scopus ID), 978-1-5386-2682-5 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 24-28, 2017, Vancouver, Canada
Funder
EU, FP7, Seventh Framework Programme, 600623; Swedish Foundation for Strategic Research; Swedish Research Council, C0475401
Note

QC 20180409

Available from: 2018-04-09 Created: 2018-04-09 Last updated: 2018-04-09. Bibliographically approved
Ambrus, R., Bore, N., Folkesson, J. & Jensfelt, P. (2017). Autonomous meshing, texturing and recognition of object models with a mobile robot. Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada
Autonomous meshing, texturing and recognition of object models with a mobile robot
2017 (English) Conference paper, Published paper (Refereed)
Abstract [en]

We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.

Place, publisher, year, edition, pages
Vancouver, Canada, 2017
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-215232 (URN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Note

QC 20171009

Available from: 2017-10-05 Created: 2017-10-05 Last updated: 2018-01-13. Bibliographically approved
Bore, N., Ambrus, R., Jensfelt, P. & Folkesson, J. (2017). Efficient retrieval of arbitrary objects from long-term robot observations. Robotics and Autonomous Systems, 91, 139-150
Efficient retrieval of arbitrary objects from long-term robot observations
2017 (English) In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 91, p. 139-150. Article in journal (Refereed) Published
Abstract [en]

We present a novel method for efficient querying and retrieval of arbitrarily shaped objects from large amounts of unstructured 3D point cloud data. Our approach first performs a convex segmentation of the data, after which local features are extracted and stored in a feature dictionary. We show that this representation allows efficient and reliable querying of the data. To handle arbitrarily shaped objects, we propose a scheme which allows incremental matching of segments based on similarity to the query object. Further, we adjust the feature metric based on the quality of the query results to improve results in a second round of querying. We perform extensive qualitative and quantitative experiments on two datasets for both segmentation and retrieval, validating the results against ground truth data. Comparison with other state-of-the-art methods further reinforces the validity of the proposed method. Finally, we also investigate how the density and distribution of the local features within the point clouds influence the quality of the results.
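A hedged sketch of dictionary-based retrieval in the spirit of the description above (the vocabulary quantizer and all names here are illustrative assumptions, not the authors' implementation): local features are quantized into visual "words", an inverted index maps each word to the segments that contain it, and a query ranks segments by the number of shared words.

from collections import Counter, defaultdict
import numpy as np

class FeatureDictionary:
    def __init__(self, vocabulary):
        self.vocab = vocabulary           # (K, D) cluster centers
        self.index = defaultdict(list)    # word id -> segment ids

    def _words(self, feats):
        # Assign each descriptor to its nearest vocabulary word.
        d = np.linalg.norm(feats[:, None] - self.vocab[None], axis=2)
        return d.argmin(axis=1)

    def add_segment(self, seg_id, feats):
        for w in set(self._words(feats)):
            self.index[w].append(seg_id)

    def query(self, feats, top_k=5):
        votes = Counter()
        for w in self._words(feats):
            votes.update(self.index[w])
        return votes.most_common(top_k)   # segments ranked by shared words

rng = np.random.default_rng(0)
d = FeatureDictionary(rng.normal(size=(32, 8)))
for sid in range(10):
    d.add_segment(sid, rng.normal(size=(20, 8)))
print(d.query(rng.normal(size=(20, 8))))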

Place, publisher, year, edition, pages
Elsevier, 2017
Keywords
Mapping, Mobile robotics, Point cloud, Segmentation, Retrieval
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-205426 (URN), 10.1016/j.robot.2016.12.013 (DOI), 000396949800012 (), 2-s2.0-85015091269 (Scopus ID)
Note

QC 20170522

Available from: 2017-05-22 Created: 2017-05-22 Last updated: 2018-01-13. Bibliographically approved
Karaoguz, H., Bore, N., Folkesson, J. & Jensfelt, P. (2017). Human-Centric Partitioning of the Environment. In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Paper presented at the IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN (pp. 844-850). IEEE
Human-Centric Partitioning of the Environment
2017 (English) In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, p. 844-850. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we present an object-based approach for human-centric partitioning of the environment. Our approach to determining the human-centric regions is to detect the objects that are commonly associated with frequent human presence. In order to detect these objects, we employ state-of-the-art perception techniques. The detected objects are stored with their spatio-temporal information in the robot's memory to be later used for generating the regions. The advantages of our method are that it is autonomous, requires only a small set of perceptual data, and does not even require people to be present while generating the regions. The generated regions are validated using a 1-month dataset collected in an indoor office environment. The experimental results show that although a small set of perceptual data is used, the regions are generated at densely occupied locations.
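As an illustration of how regions might be generated from the stored detections (a minimal sketch under assumed parameters, not the paper's pipeline), object detections can be accumulated on a 2D grid and cells with enough detections kept as human-centric regions:

import numpy as np

def human_centric_regions(detections, cell_size=1.0, min_count=3):
    # detections: (N, 2) array of object (x, y) positions in meters.
    cells = np.floor(detections / cell_size).astype(int)
    uniq, counts = np.unique(cells, axis=0, return_counts=True)
    return uniq[counts >= min_count] * cell_size  # origins of dense cells

dets = np.array([[0.2, 0.3], [0.4, 0.1], [0.7, 0.9], [5.0, 5.0]])
print(human_centric_regions(dets))  # only the densely occupied cell remains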

Place, publisher, year, edition, pages
IEEE, 2017
Keywords
human-robot interaction, perception, AI
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-215941 (URN), 000427262400131 (), 2-s2.0-85045847052 (Scopus ID)
Conference
IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN
Funder
Swedish Foundation for Strategic Research
Note

QC 20171018

Available from: 2017-10-17 Created: 2017-10-17 Last updated: 2018-04-11. Bibliographically approved
Thippur, A., Stork, J. A. & Jensfelt, P. (2017). Non-Parametric Spatial Context Structure Learning for Autonomous Understanding of Human Environments. In: Howard, A., Suzuki, K. & Zollo, L. (Eds.), 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Paper presented at the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), August 28 - September 1, 2017, Lisbon, Portugal (pp. 1317-1324). IEEE
Non-Parametric Spatial Context Structure Learning for Autonomous Understanding of Human Environments
2017 (English) In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) / [ed] Howard, A., Suzuki, K. & Zollo, L., IEEE, 2017, p. 1317-1324. Conference paper, Published paper (Refereed)
Abstract [en]

Autonomous scene understanding by object classification today crucially depends on the accuracy of appearance-based robotic perception. However, this is prone to difficulties in object detection arising from unfavourable lighting conditions and vision-unfriendly object properties. In our work, we propose a spatial-context-based system which infers object classes utilising solely structural information captured from the scenes, to aid traditional perception systems. Our system operates on novel spatial features (IFRC) that are robust to noisy object detections; it also caters to on-the-fly modification of learned knowledge, improving performance with practice. IFRC features are aligned with human expression of 3D space, thereby facilitating easy human-robot interaction (HRI) and hence simpler supervised learning. We tested our spatial-context-based system and conclude that it can capture spatio-structural information to perform joint object classification, not only acting as a vision aid but sometimes even performing on par with appearance-based robotic vision.
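The "lazy learner over spatial features" idea can be sketched as follows (heavily hedged: the feature construction below is a placeholder and not the paper's IFRC features): an object is classified from its spatial relation to an anchor object using a plain k-nearest-neighbor vote over previously observed scenes.

from collections import Counter
import numpy as np

def spatial_feature(obj_pos, anchor_pos):
    # Placeholder relation feature: offset and distance to an anchor object.
    offset = np.asarray(obj_pos, float) - np.asarray(anchor_pos, float)
    return np.append(offset, np.linalg.norm(offset))

def knn_classify(query_feat, memory, k=3):
    # memory: list of (feature_vector, class_label) pairs seen so far;
    # a lazy learner defers all work to query time.
    dists = [float(np.linalg.norm(query_feat - f)) for f, _ in memory]
    labels = [c for _, c in memory]
    nearest = np.argsort(dists)[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

anchor = (0.0, 0.0, 0.0)   # e.g., a detected monitor on a desk
memory = [(spatial_feature((0.30, 0.00, 0.0), anchor), "mouse"),
          (spatial_feature((0.28, 0.05, 0.0), anchor), "mouse"),
          (spatial_feature((0.00, 0.40, 0.0), anchor), "keyboard")]
print(knn_classify(spatial_feature((0.31, 0.02, 0.0), anchor), memory))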

Place, publisher, year, edition, pages
IEEE, 2017
Series
IEEE RO-MAN, ISSN 1944-9445
Keywords
structure learning, spatial relationships, lazy learners, autonomous scene understanding
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-225236 (URN), 000427262400205 (), 2-s2.0-85045741190 (Scopus ID), 978-1-5386-3518-6 (ISBN)
Conference
26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), August 28 - September 1, 2017, Lisbon, Portugal
Funder
EU, FP7, Seventh Framework Programme, 600623; Swedish Research Council, C0475401
Note

QC 20180403

Available from: 2018-04-03 Created: 2018-04-03 Last updated: 2018-04-11. Bibliographically approved
Almeida, D., Ambrus, R., Caccamo, S., Chen, X., Cruciani, S., Pinto Basto De Carvalho, J. F., . . . Kragic, D. (2017). Team KTH’s Picking Solution for the Amazon Picking Challenge 2016. In: Warehouse Picking Automation Workshop 2017: Solutions, Experience, Learnings and Outlook of the Amazon Robotics Challenge. Paper presented at ICRA 2017.
Team KTH’s Picking Solution for the Amazon Picking Challenge 2016
2017 (English) In: Warehouse Picking Automation Workshop 2017: Solutions, Experience, Learnings and Outlook of the Amazon Robotics Challenge, 2017. Conference paper, Oral presentation only (Other (popular science, discussion, etc.))
Abstract [en]

In this work we summarize the solution developed by Team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition simulated a warehouse automation scenario and was divided into two tasks: a picking task, where a robot picks items from a shelf and places them in a tote, and a stowing task, the inverse, where the robot picks items from a tote and places them in a shelf. We describe our approach to the problem, starting from a high-level overview of our system and then delving into the details of our perception pipeline and our strategy for manipulation and grasping. The solution was implemented using a Baxter robot equipped with additional sensors.

National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-215327 (URN)
Conference
ICRA 2017
Note

QC 20171009

Available from: 2017-10-07 Created: 2017-10-07 Last updated: 2018-05-24. Bibliographically approved
Hawes, N., Ambrus, R., Bore, N., Folkesson, J., Jensfelt, P., Hanheide, M., et al. (2017). The STRANDS Project: Long-Term Autonomy in Everyday Environments. IEEE Robotics & Automation Magazine, 24(3), 146-156
The STRANDS Project: Long-Term Autonomy in Everyday Environments
2017 (English) In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 24, no. 3, p. 146-156. Article in journal (Refereed) Published
Place, publisher, year, edition, pages
IEEE, 2017
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-216406 (URN), 10.1109/MRA.2016.2636359 (DOI), 000411010400017 (), 2-s2.0-85007063656 (Scopus ID)
Note

QC 20171020

Available from: 2017-10-20 Created: 2017-10-20 Last updated: 2018-01-13. Bibliographically approved
Wang, Z., Jensfelt, P. & Folkesson, J. (2016). Building a human behavior map from local observations. In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Paper presented at the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 64-70). IEEE, Article ID 7745092.
Building a human behavior map from local observations
2016 (English) In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2016, p. 64-70, article id 7745092. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a novel method for classifying regions from human movements in service robots' working environments. The entire space is segmented by class type according to the functionality or affordance of each place, where each class accommodates a typical human behavior. This is achieved on a grid map in two steps. First, a probabilistic model is developed to capture human movements for each grid cell using a non-ergodic HMM. Then the learned transition probabilities corresponding to these movements are used to cluster all cells with the K-means algorithm. The knowledge of typical human movements for each location, represented by the K-means prototypes and summarized in a ‘behavior-based map’, enables a robot to adjust its strategy for interacting with people according to where they are located, and thus greatly enhances its capability to assist people. The performance of the proposed classification method is demonstrated by experimental results from 8 hours of data collected in a kitchen environment.
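The clustering step can be sketched as follows (illustrative: the per-cell HMMs are replaced here by precomputed transition-probability vectors, and the tiny K-means below is a stand-in for a library implementation): each grid cell is described by its learned transition probabilities, and K-means groups cells with similar movement patterns into behavior classes.

import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each cell to its nearest prototype, then update prototypes.
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Each row: a cell's transition probabilities, e.g. (stay, pass through, leave).
cells = np.array([[0.8, 0.1, 0.1],   # mostly lingering (e.g., by the counter)
                  [0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],   # mostly through-traffic (e.g., a doorway)
                  [0.2, 0.7, 0.1]])
labels, _ = kmeans(cells, k=2)
print(labels)  # two behavior classes for the 'behavior-based map'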

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Hidden Markov models, Tracking, Probabilistic logic, Semantics, Service robots, Trajectory
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-196690 (URN), 10.1109/ROMAN.2016.7745092 (DOI), 000390682500008 (), 2-s2.0-85002822136 (Scopus ID), 978-1-5090-3929-6 (ISBN)
Conference
2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
Projects
STRANDS
Funder
EU, FP7, Seventh Framework Programme, 600623
Note

QC 20170202

Available from: 2016-11-18 Created: 2016-11-18 Last updated: 2017-02-02. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-1170-7162