Publications (10 of 13)
Bore, N., Ekekrantz, J., Jensfelt, P. & Folkesson, J. (2019). Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps. IEEE Transactions on Robotics, 35(1), 231-247
2019 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no. 1, p. 231-247. Article in journal (Refereed). Published.
Abstract [en]

This paper studies the problem of detection and tracking of general objects with semistatic dynamics observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, the robot can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
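
The decomposition described above lends itself to a compact illustration. Below is a minimal sketch of one Rao-Blackwellized filter step for a single object, assuming a 2-D constant-position local model, a direct position measurement, and a fixed per-step jump probability; the names, parameters, and jump re-seeding heuristic are illustrative assumptions, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def rbpf_step(particles, z, Q, R, p_jump=0.05):
        """One Rao-Blackwellized step for a single tracked object.
        Each particle is (mean, cov, weight): a Gaussian over the
        object's 2-D position plus an importance weight. The discrete
        local-vs-global move is sampled; local motion is then tracked
        analytically with a Kalman filter."""
        updated = []
        for mean, cov, w in particles:
            if rng.random() < p_jump:
                # Global motion: re-seed the local track near the new
                # measurement (a crude stand-in for the paper's focused
                # sampling of the object posterior).
                mean = z + rng.normal(0.0, 1.0, size=2)
                cov = np.eye(2)
            cov_pred = cov + Q               # predict: static object + process noise
            S = cov_pred + R                 # innovation covariance
            d = z - mean                     # innovation
            lik = np.exp(-0.5 * d @ np.linalg.solve(S, d)) \
                / (2.0 * np.pi * np.sqrt(np.linalg.det(S)))
            K = cov_pred @ np.linalg.inv(S)  # Kalman gain
            mean, cov = mean + K @ d, (np.eye(2) - K) @ cov_pred
            updated.append((mean, cov, w * lik))
        total = sum(w for _, _, w in updated)
        return [(m, c, w / total) for m, c, w in updated]

    # Example: three particles, one measurement of the object at (4.0, 2.5).
    particles = [(rng.normal(0.0, 3.0, size=2), np.eye(2), 1.0 / 3) for _ in range(3)]
    particles = rbpf_step(particles, np.array([4.0, 2.5]),
                          Q=0.1 * np.eye(2), R=0.2 * np.eye(2))

Sampling only the discrete jump and association variables while solving the continuous local motion in closed form is what keeps the particle count manageable.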

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Dynamic mapping, mobile robot, movable objects, multitarget tracking (MTT), Rao-Blackwellized particle filter (RBPF), service robots
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-245151 (URN)
10.1109/TRO.2018.2876111 (DOI)
000458197300017 ()
2-s2.0-85057204782 (Scopus ID)
Note

QC 20190313

Available from: 2019-03-13. Created: 2019-03-13. Last updated: 2019-03-18. Bibliographically approved.
Torroba, I., Bore, N. & Folkesson, J. (2018). A Comparison of Submaps Registration Methods for Multibeam Bathymetric Mapping. Paper presented at the 2018 IEEE OES Autonomous Underwater Vehicle Symposium.
2018 (English). Conference paper, Published paper (Refereed).
Abstract [en]

On-the-fly registration of overlapping multibeam images is important for path planning by AUVs performing underwater surveys. In order to meet specifications on such things as survey accuracy, coverage, and density, precise corrections to the AUV trajectory while underway are required. There are fast methods for aligning point clouds that have been developed for robots. We compare several state-of-the-art methods to align point clouds of large, unstructured, sub-aquatic areas to build a global map. We first collect the multibeam point clouds into smaller submaps that are then aligned using variations of the ICP algorithm. This alignment step can be applied if the error in the AUV pose is small. It would be the final step in correcting a larger error on loop closing, where place recognition and a rough alignment would precede it. In the case of a lawn-mower-pattern survey, it would be making more continuous corrections to small errors in the overlap between parallel lines. In this work we compare different methods for registration in order to determine the most suitable one for underwater terrain mapping. To do so, we benchmark the current state-of-the-art solutions according to an error metric and show the results.
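
As background for the comparison, the sketch below shows the core of one registration variant: plain point-to-point ICP with an SVD-based rigid fit, using numpy/scipy only. It is a toy baseline under simplifying assumptions (good initial alignment, full overlap), not a reproduction of the benchmarked methods.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_point_to_point(source, target, iters=30):
        """Align `source` (N,3) to `target` (M,3) with vanilla ICP:
        nearest-neighbour correspondences plus a closed-form rigid fit."""
        src = source.copy()
        tree = cKDTree(target)
        R_total, t_total = np.eye(3), np.zeros(3)
        for _ in range(iters):
            _, idx = tree.query(src)          # closest target point per source point
            matched = target[idx]
            mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
            H = (src - mu_s).T @ (matched - mu_t)
            U, _, Vt = np.linalg.svd(H)       # Kabsch: R = V U^T
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            src = src @ R.T + t               # apply the incremental transform
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total, src

Point-to-plane ICP and the other variants evaluated in the paper differ mainly in the per-iteration error term being minimized, not in this overall loop.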

Keywords
SLAM, AUV
National Category
Robotics
Research subject
Vehicle and Maritime Engineering; Computer Science
Identifiers
urn:nbn:se:kth:diva-250894 (URN)
Conference
2018 IEEE OES Autonomous Underwater Vehicle Symposium
Projects
SMaRC, SSF IRC15-0046
Funder
Swedish Foundation for Strategic Research, IRC15-0046
Note

QC 20190424

Available from: 2019-05-07. Created: 2019-05-07. Last updated: 2019-05-15. Bibliographically approved.
Bore, N., Torroba, I. & Folkesson, J. (2018). Sparse Gaussian Process SLAM, Storage and Filtering for AUV Multibeam Bathymetry. In: 2018 IEEE OES Autonomous Underwater Vehicle Symposium. Paper presented at the 2018 IEEE OES Autonomous Underwater Vehicle Symposium.
2018 (English). In: 2018 IEEE OES Autonomous Underwater Vehicle Symposium, 2018. Conference paper, Published paper (Refereed).
Abstract [en]

With dead-reckoning from velocity sensors, AUVs may construct short-term, local bathymetry maps of the sea floor using multibeam sensors. However, the position estimate from dead-reckoning will include some drift that grows with time. In this work, we focus on long-term onboard storage of these local bathymetry maps, and the alignment of maps with respect to each other. We propose using Sparse Gaussian Processes for this purpose, and show that the representation has several advantages, including an intuitive alignment optimization, data compression, and sensor noise filtering. We demonstrate these three key capabilities on two real-world datasets.
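
To make the representation concrete, here is a rough sketch of compressing a bathymetric submap into a small GP model and querying its posterior-mean depth. A random subset of soundings stands in for proper inducing-point sparsification; the kernel, noise level, and all names are illustrative assumptions rather than the paper's actual formulation.

    import numpy as np

    def rbf(A, B, lengthscale=5.0, variance=1.0):
        """Squared-exponential kernel between (N,2) and (M,2) inputs."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

    def fit_gp_map(xy, depth, n_inducing=200, noise=0.05, seed=0):
        """Compress a submap to `n_inducing` points plus kernel weights.
        Subset-of-data approximation: keep a random subset of soundings
        and solve the standard GP system on it."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(xy), size=min(n_inducing, len(xy)), replace=False)
        Xu, yu = xy[idx], depth[idx]
        K = rbf(Xu, Xu) + noise ** 2 * np.eye(len(Xu))
        alpha = np.linalg.solve(K, yu)   # solved once, O(n_inducing^3)
        return Xu, alpha

    def predict_depth(query_xy, Xu, alpha):
        """Posterior-mean depth at new (x, y) locations."""
        return rbf(query_xy, Xu) @ alpha

    # Example: 10k noisy soundings compressed to 200 points.
    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 100, size=(10_000, 2))
    depth = -20 + np.sin(xy[:, 0] / 10) + 0.05 * rng.normal(size=len(xy))
    Xu, alpha = fit_gp_map(xy, depth)
    print(predict_depth(np.array([[50.0, 50.0]]), Xu, alpha))

The same fitted model serves all three roles named in the abstract: the retained points compress the submap, the posterior mean filters sensor noise, and scoring one submap's soundings under another submap's model gives a natural alignment objective.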

Keywords
AUV, SLAM, Bathymetric Mapping
National Category
Robotics
Research subject
Vehicle and Maritime Engineering; Computer Science
Identifiers
urn:nbn:se:kth:diva-250895 (URN)
Conference
2018 IEEE OES Autonomous Underwater Vehicle Symposium
Projects
SMaRC, SSF IRC15-0046
Funder
Swedish Foundation for Strategic Research, IRC15-0046
Note

QC 20190408

Available from: 2019-05-07. Created: 2019-05-07. Last updated: 2019-05-07. Bibliographically approved.
Ambrus, R., Bore, N., Folkesson, J. & Jensfelt, P. (2017). Autonomous meshing, texturing and recognition of object models with a mobile robot. In: Bicchi, A. & Okamura, A. (Eds.), 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), SEP 24-28, 2017, Vancouver, Canada (pp. 5071-5078). IEEE.
2017 (English). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Bicchi, A. & Okamura, A., IEEE, 2017, p. 5071-5078. Conference paper, Published paper (Refereed).
Abstract [en]

We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.
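
For the meshing step, a hedged sketch of how such a pipeline could look using Open3D's Poisson reconstruction is shown below. The library choice, parameters, and the density-based trimming heuristic are assumptions for illustration; they are not the system described in the paper.

    import numpy as np
    import open3d as o3d

    def mesh_from_views(point_clouds):
        """Fuse already-registered RGB-D view clouds into one cloud,
        then approximate the underlying geometry with a Poisson surface."""
        fused = o3d.geometry.PointCloud()
        for pc in point_clouds:          # clouds assumed spatially registered
            fused += pc
        fused = fused.voxel_down_sample(voxel_size=0.005)
        fused.estimate_normals(
            search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
        mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            fused, depth=9)
        # Trim low-support vertices that Poisson extrapolates far from data.
        dens = np.asarray(densities)
        mesh.remove_vertices_by_mask(dens < np.quantile(dens, 0.05))
        return mesh

The subsequent recognition stage would render images of such meshes from many viewpoints and train a standard CNN classifier on them.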

Place, publisher, year, edition, pages
IEEE, 2017
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-225806 (URN)
000426978204127 ()
2-s2.0-85041961210 (Scopus ID)
978-1-5386-2682-5 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), SEP 24-28, 2017, Vancouver, Canada
Funder
EU, FP7, Seventh Framework Programme, 600623
Swedish Foundation for Strategic Research
Swedish Research Council, C0475401
Note

QC 20180409

Available from: 2018-04-09. Created: 2018-04-09. Last updated: 2019-08-20. Bibliographically approved.
Ambrus, R., Bore, N., Folkesson, J. & Jensfelt, P. (2017). Autonomous meshing, texturing and recognition of object models with a mobile robot. Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada.
2017 (English). Conference paper, Published paper (Refereed).
Abstract [en]

We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.

Place, publisher, year, edition, pages
Vancouver, Canada, 2017
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-215232 (URN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems
Note

QC 20171009

Available from: 2017-10-05. Created: 2017-10-05. Last updated: 2018-01-13. Bibliographically approved.
Bore, N., Ambrus, R., Jensfelt, P. & Folkesson, J. (2017). Efficient retrieval of arbitrary objects from long-term robot observations. Robotics and Autonomous Systems, 91, 139-150
2017 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 91, p. 139-150. Article in journal (Refereed). Published.
Abstract [en]

We present a novel method for efficient querying and retrieval of arbitrarily shaped objects from large amounts of unstructured 3D point cloud data. Our approach first performs a convex segmentation of the data, after which local features are extracted and stored in a feature dictionary. We show that the representation allows efficient and reliable querying of the data. To handle arbitrarily shaped objects, we propose a scheme which allows incremental matching of segments based on similarity to the query object. Further, we adjust the feature metric based on the quality of the query results to improve results in a second round of querying. We perform extensive qualitative and quantitative experiments on two datasets for both segmentation and retrieval, validating the results using ground truth data. Comparison with other state-of-the-art methods further supports the validity of the proposed method. Finally, we also investigate how the density and distribution of the local features within the point clouds influence the quality of the results.
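
The dictionary-based querying can be illustrated with a small sketch: local descriptors from all segments are pooled into one nearest-neighbour index, and a query object retrieves the segments its features match most often. Descriptor type, dimensionality, and the voting scheme are illustrative assumptions, not the paper's exact pipeline.

    import numpy as np
    from scipy.spatial import cKDTree

    class FeatureDictionary:
        """Toy retrieval index: descriptors from all segments share one
        k-d tree; each descriptor remembers which segment owns it."""

        def __init__(self, segment_features):
            # segment_features: list of (n_i, d) descriptor arrays, one per segment
            self.owner = np.concatenate(
                [np.full(len(f), i) for i, f in enumerate(segment_features)])
            self.index = cKDTree(np.concatenate(segment_features))
            self.n_segments = len(segment_features)

        def query(self, query_features, k=5):
            """Each query descriptor votes for the segments owning its
            k nearest neighbours; segments are ranked by vote count."""
            _, idx = self.index.query(query_features, k=k)
            votes = np.bincount(self.owner[idx.ravel()], minlength=self.n_segments)
            return np.argsort(votes)[::-1], votes

    # Example with random 33-D descriptors (FPFH-sized) for 4 segments.
    rng = np.random.default_rng(0)
    segs = [rng.normal(size=(50, 33)) + i for i in range(4)]
    d = FeatureDictionary(segs)
    ranking, votes = d.query(rng.normal(size=(20, 33)) + 2)  # should favour segment 2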

Place, publisher, year, edition, pages
Elsevier, 2017
Keywords
Mapping, Mobile robotics, Point cloud, Segmentation, Retrieval
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-205426 (URN)
10.1016/j.robot.2016.12.013 (DOI)
000396949800012 ()
2-s2.0-85015091269 (Scopus ID)
Note

QC 20170522

Available from: 2017-05-22. Created: 2017-05-22. Last updated: 2018-01-13. Bibliographically approved.
Karaoguz, H., Bore, N., Folkesson, J. & Jensfelt, P. (2017). Human-Centric Partitioning of the Environment. In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Paper presented at IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN (pp. 844-850). IEEE.
2017 (English). In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, p. 844-850. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, we present an object-based approach for human-centric partitioning of the environment. Our approach for determining the human-centric regions is to detect the objects that are commonly associated with frequent human presence. In order to detect these objects, we employ state-of-the-art perception techniques. The detected objects are stored with their spatio-temporal information in the robot's memory, to be later used for generating the regions. The advantages of our method are that it is autonomous, requires only a small set of perceptual data, and does not even require people to be present while generating the regions. The generated regions are validated using a 1-month dataset collected in an indoor office environment. The experimental results show that, although a small set of perceptual data is used, the regions are generated at densely occupied locations.
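
One simple way to turn stored detections into regions, offered purely as an illustrative assumption (the paper's own region-generation method is not reproduced here), is density-based clustering of the detections' map positions:

    import numpy as np
    from sklearn.cluster import DBSCAN

    def human_centric_regions(detections, eps=1.5, min_samples=5):
        """Cluster the map positions of human-associated object
        detections (e.g. monitors, mugs, keyboards) into regions.
        `detections` is an (N, 2) array of x, y positions accumulated
        over time in the robot's memory."""
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(detections)
        regions = []
        for lab in set(labels) - {-1}:   # label -1 marks noise points
            pts = detections[labels == lab]
            regions.append({"centroid": pts.mean(axis=0), "support": len(pts)})
        return regions

Density-based clustering fits the stated finding that regions emerge at densely occupied locations, and needs no prior on the number of regions.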

Place, publisher, year, edition, pages
IEEE, 2017
Keywords
human-robot interaction, perception, AI
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-215941 (URN)
000427262400131 ()
2-s2.0-85045847052 (Scopus ID)
Conference
IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN
Funder
Swedish Foundation for Strategic Research
Note

QC 20171018

Available from: 2017-10-17. Created: 2017-10-17. Last updated: 2018-04-11. Bibliographically approved.
Bore, N. (2017). Object Instance Detection and Dynamics Modeling in a Long-Term Mobile Robot Context. (Doctoral dissertation). Stockholm: KTH Royal Institute of Technology
2017 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

In the last years, simple service robots such as autonomous vacuum cleaners and lawn mowers have become commercially available and increasingly common. The next generation of service robots should perform more advanced tasks, such as cleaning up objects. Robots then need to learn to robustly navigate, and manipulate, cluttered environments, such as an untidy living room. In this thesis, we focus on representations for tasks such as general cleaning and fetching of objects. We discuss requirements for these specific tasks, and argue that solving them would be generally useful, because of their object-centric nature. We rely on two fundamental insights in our approach to understanding environments on a fine-grained level. First, many of today's robot map representations are limited to the spatial domain, and ignore that there is a time axis that constrains how much an environment may change during a given period. We argue that it is of critical importance to also consider the temporal domain. By studying the motion of individual objects, we can enable tasks such as general cleaning and object fetching. The second insight comes from the fact that mobile robots are becoming more robust. They can therefore collect large amounts of data from their environments. With more data, unsupervised learning of models becomes feasible, allowing the robot to adapt to changes in the environment, and to scenarios that the designer could not foresee. We view these capabilities as vital for robots to become truly autonomous. The combination of unsupervised learning and dynamics modeling creates an interesting symbiosis: the dynamics vary between different environments and between the objects in one environment, and learning can capture these variations. A major difficulty when modeling environment dynamics is that the whole environment cannot be observed at one time, since the robot is moving between different places. We demonstrate how this can be dealt with in a principled manner, by modeling several modes of object movement. We also demonstrate methods for detection and learning of objects and structures in the static parts of the maps. Using the complete system, we can represent and learn many aspects of the full environment. In real-world experiments, we demonstrate that our system can keep track of varied objects in large and highly dynamic environments.
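
The "several modes of object movement" can be made concrete with a toy two-mode motion prior: local Gaussian drift plus an occasional global jump, where the jump probability grows with the time the object went unobserved. All parameters and the room geometry below are illustrative assumptions, not the thesis model.

    import numpy as np

    rng = np.random.default_rng(0)

    def propagate(pos, dt_hours, p_jump_per_hour=0.02, sigma_local=0.1,
                  room=((0.0, 10.0), (0.0, 8.0))):
        """Sample an object's next 2-D position under a two-mode prior:
        small local Gaussian drift, or (rarely) a uniform 'jump' anywhere
        in the environment. More unobserved time means a larger chance
        that the object has jumped in the meantime."""
        p_jump = 1.0 - (1.0 - p_jump_per_hour) ** dt_hours
        if rng.random() < p_jump:
            return np.array([rng.uniform(*room[0]), rng.uniform(*room[1])])
        return pos + rng.normal(0.0, sigma_local * np.sqrt(dt_hours), size=2)

    # Example: a mug last seen at (2, 3), re-predicted after 48 unobserved hours.
    print(propagate(np.array([2.0, 3.0]), dt_hours=48.0))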

Abstract [sv, translated]

In recent years, simpler service robots, such as autonomous vacuum cleaners and lawn mowers, have come on the market and become increasingly common. The next generation of service robots is expected to perform more complex tasks, for example tidying up scattered objects in a living room. To achieve this, the robots must be able to navigate unstructured environments, and understand how they can be put in order. In this thesis we investigate abstract representations that could realize general cleaning robots, as well as robots that can fetch objects. We discuss what these specific applications require in terms of representations, and argue that a solution to these problems would be more generally applicable because of the object-centric nature of the tasks. We approach the task through two important insights. To begin with, many of today's robot representations are limited to the spatial domain. They thus omit modeling the variation that occurs over time, and therefore do not exploit that the motion that can occur during a given time period is limited. We argue that it is critical to also incorporate the environment's motion into the robot's model. By modeling the surroundings at an object level, applications such as cleaning and fetching of movable objects become possible. The second insight comes from the fact that mobile robots are now becoming so robust that they can patrol one and the same environment for several months. They can therefore collect large amounts of data from individual environments. With these large amounts of data, it is becoming possible to apply so-called unsupervised learning methods to learn models of individual environments without human involvement. This allows the robots to adapt to changes in the environment, and to learn concepts that can be hard to anticipate in advance. We see this as a fundamental capability of a fully autonomous robot. The combination of unsupervised learning and modeling of the environment's dynamics is interesting. Since the dynamics vary between different environments, and between different objects, learning can help us capture these variations and create more precise dynamics models. Something that complicates the modeling of the environment's dynamics is that the robot cannot observe the whole environment at once. This means that objects can be moved long distances between two observations. We show how this can be addressed in the model by incorporating several different ways in which an object can be moved. The resulting system is fully probabilistic, and can keep track of all objects in the robot's surroundings. We also demonstrate methods for detecting and learning objects in the static part of the environment. With the combined system we can thus represent and learn many aspects of the robot's surroundings. Through experiments in human environments, we show that the system can keep track of different kinds of objects in large, dynamic environments.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2017. p. 58
Series
TRITA-CSC-A, ISSN 1653-5723; 2017:27
Keywords
robotics, long-term, mapping, tracking, unsupervised learning, estimation, object modeling
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-219813 (URN)
978-91-7729-638-6 (ISBN)
Public defence
2018-01-19, F3, Lindstedtsvägen 26, KTH Campus, Stockholm, 14:00 (English)
Note

QC 20171213

Available from: 2017-12-13. Created: 2017-12-13. Last updated: 2017-12-14. Bibliographically approved.
Hawes, N., Ambrus, R., Bore, N., Folkesson, J., Jensfelt, P., Hanheide, M., et al. (2017). The STRANDS Project: Long-Term Autonomy in Everyday Environments. IEEE Robotics & Automation Magazine, 24(3), 146-156
2017 (English). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 24, no. 3, p. 146-156. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
IEEE, 2017
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-216406 (URN)
10.1109/MRA.2016.2636359 (DOI)
000411010400017 ()
2-s2.0-85007063656 (Scopus ID)
Note

QC 20171020

Available from: 2017-10-20. Created: 2017-10-20. Last updated: 2018-01-13. Bibliographically approved.
Bore, N., Jensfelt, P. & Folkesson, J. (2015). Querying 3D Data by Adjacency Graphs. In: Nalpantidis, Lazaros, Krüger, Volker, Eklundh, Jan-Olof and Gasteratos, Antonios (Eds.), Computer Vision Systems (pp. 243-252). Paper presented at 10th International Conference on Computer Vision Systems (ICVS), JUL 06-09, 2015, Copenhagen, Denmark. Springer Publishing Company.
2015 (English). In: Computer Vision Systems / [ed] Nalpantidis, Lazaros, Krüger, Volker, Eklundh, Jan-Olof and Gasteratos, Antonios, Springer Publishing Company, 2015, p. 243-252. Chapter in book (Refereed).
Abstract [en]

The need for robots to search the 3D data they have saved is becoming more apparent. We present an approach for finding structures in 3D models such as those built by robots of their environment. The method extracts geometric primitives from point cloud data. An attributed graph over these primitives forms our representation of the surface structures. Recurring substructures are found with frequent graph mining techniques. We investigate whether a model that is invariant to changes in size and reflection, using only the geometric information of and between primitives, can be discriminative enough for practical use. Experiments confirm that it can be used to support queries of 3D models.
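
A toy version of the representation and mining step is sketched below: primitives become labelled graph nodes, adjacencies become edges, and recurring labelled substructures are counted. Only edge-sized (two-node) patterns are mined here, as a stand-in for full frequent-subgraph mining such as gSpan; names and labels are illustrative assumptions.

    import networkx as nx
    from collections import Counter

    def primitive_graph(primitives, adjacencies):
        """Attributed graph: one node per extracted primitive (with its
        shape label), one edge per pair of adjacent primitives."""
        g = nx.Graph()
        for i, label in enumerate(primitives):
            g.add_node(i, label=label)
        g.add_edges_from(adjacencies)
        return g

    def frequent_edge_patterns(graph, min_support=2):
        """Count recurring label pairs on edges and keep those above a
        support threshold; real frequent graph mining generalises this
        to larger substructures."""
        counts = Counter(
            tuple(sorted((graph.nodes[u]["label"], graph.nodes[v]["label"])))
            for u, v in graph.edges)
        return {pat: n for pat, n in counts.items() if n >= min_support}

    # Example: two walls meeting a floor plane, plus a cylinder (table leg).
    g = primitive_graph(["plane", "plane", "plane", "cylinder"],
                        [(0, 1), (0, 2), (1, 2), (0, 3)])
    print(frequent_edge_patterns(g))   # {('plane', 'plane'): 3}

Because the patterns are defined over labels and relative geometry rather than coordinates, such a representation is naturally invariant to translation, and size/reflection invariance can be obtained by normalizing the edge attributes.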

Place, publisher, year, edition, pages
Springer Publishing Company, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 9163
Keywords
Object retrieval, 3D data, point cloud
National Category
Computer Sciences; Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-177976 (URN)
10.1007/978-3-319-20904-3_23 (DOI)
000364183300023 ()
2-s2.0-84949036031 (Scopus ID)
978-3-319-20904-3; 978-3-319-20903-6 (ISBN)
Conference
10th International Conference on Computer Vision Systems (ICVS), JUL 06-09, 2015, Copenhagen, Denmark
Funder
EU, FP7, Seventh Framework Programme, 600623
Note

QC 20160321

Available from: 2015-12-02. Created: 2015-11-30. Last updated: 2018-01-10. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-1189-6634