Publications (10 of 168)
Ericson, L. & Jensfelt, P. (2024). Beyond the Frontier: Predicting Unseen Walls From Occupancy Grids by Learning From Floor Plans. IEEE Robotics and Automation Letters, 9(8), 6832-6839
Beyond the Frontier: Predicting Unseen Walls From Occupancy Grids by Learning From Floor Plans
2024 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no 8, p. 6832-6839. Article in journal (Refereed) Published
Abstract [en]

In this letter, we tackle the challenge of predicting the unseen walls of a partially observed environment as a set of 2D line segments, conditioned on occupancy grids integrated along the trajectory of a 360° LIDAR sensor. A dataset of such occupancy grids and their corresponding target wall segments is collected by navigating a virtual robot between a set of randomly sampled waypoints in a collection of office-scale floor plans from a university campus. The line segment prediction task is formulated as an autoregressive sequence prediction task, and an attention-based deep network is trained on the dataset. The sequence-based autoregressive formulation is evaluated through predicted information gain, as in frontier-based autonomous exploration, demonstrating significant improvements over both non-predictive estimation and convolution-based image prediction found in the literature. Ablations on key components are evaluated, as well as the effects of sensor range and the occupancy grid's metric area. Finally, model generality is validated by predicting walls in a novel floor plan reconstructed on-the-fly in a real-world office environment.
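The autoregressive formulation can be pictured as decoding wall-segment endpoints token by token, conditioned on features extracted from the occupancy grid. The sketch below illustrates that general idea only; the tokenization, model sizes, and greedy decoding loop are illustrative assumptions, not the architecture used in the paper.

    # Minimal sketch (PyTorch): endpoints discretized to tokens 0-255, plus BOS/EOS tokens.
    import torch
    import torch.nn as nn

    class WallSegmentDecoder(nn.Module):
        def __init__(self, vocab_size=258, d_model=128, nhead=8, num_layers=4):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            # Occupancy grid (B, 1, H, W) -> a short sequence of feature tokens.
            self.grid_encoder = nn.Sequential(
                nn.Conv2d(1, d_model, kernel_size=8, stride=8), nn.ReLU())
            layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
            self.decoder = nn.TransformerDecoder(layer, num_layers)
            self.head = nn.Linear(d_model, vocab_size)

        def forward(self, grid, tokens):
            mem = self.grid_encoder(grid).flatten(2).transpose(1, 2)
            tgt = self.embed(tokens)
            mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
            return self.head(self.decoder(tgt, mem, tgt_mask=mask))

    @torch.no_grad()
    def greedy_decode(model, grid, bos=256, eos=257, max_len=64):
        # Groups of four emitted tokens form one segment (x1, y1, x2, y2).
        tokens = torch.full((grid.size(0), 1), bos, dtype=torch.long)
        for _ in range(max_len):
            nxt = model(grid, tokens)[:, -1].argmax(-1, keepdim=True)
            tokens = torch.cat([tokens, nxt], dim=1)
            if (nxt == eos).all():
                break
        return tokens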

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Deep learning methods, planning under uncertainty, autonomous agents, learning from experience, map-predictive exploration
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-350044 (URN), 10.1109/LRA.2024.3410164 (DOI), 001251164900004, 2-s2.0-85195423165 (Scopus ID)
Note

QC 20240705

Available from: 2024-07-05. Created: 2024-07-05. Last updated: 2024-07-05. Bibliographically approved
Zhang, Q., Duberg, D., Geng, R., Jia, M., Wang, L. & Jensfelt, P. (2023). A Dynamic Points Removal Benchmark in Point Cloud Maps. In: 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023. Paper presented at 26th IEEE International Conference on Intelligent Transportation Systems, ITSC 2023, Bilbao, Spain, Sep 24 2023 - Sep 28 2023 (pp. 608-614). Institute of Electrical and Electronics Engineers (IEEE)
A Dynamic Points Removal Benchmark in Point Cloud Maps
2023 (English) In: 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 608-614. Conference paper, Published paper (Refereed)
Abstract [en]

In the field of robotics, the point cloud has become an essential map representation. From the perspective of downstream tasks like localization and global path planning, points corresponding to dynamic objects adversely affect their performance. Existing methods for removing dynamic points in point clouds often lack clarity in comparative evaluations and comprehensive analysis. Therefore, we propose an easy-to-extend, unified benchmarking framework for evaluating techniques for removing dynamic points in maps. It includes refactored state-of-the-art methods and novel metrics to analyze the limitations of these approaches. This enables researchers to dive deep into the underlying reasons behind these limitations. The benchmark makes use of several datasets with different sensor types. All the code and datasets related to our study are publicly available for further development and utilization.
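A benchmark of this kind typically hides each removal method behind a common interface and scores the cleaned map against per-point dynamic/static labels. The interface, metric names, and toy baseline below are illustrative assumptions, not the released framework's API.

    # Hypothetical benchmark interface and scoring (not the released framework's API).
    import numpy as np
    from abc import ABC, abstractmethod
    from scipy.spatial import cKDTree

    class DynamicRemovalMethod(ABC):
        @abstractmethod
        def run(self, points: np.ndarray) -> np.ndarray:
            """points: (N, 3) map points; returns a boolean mask of points kept as static."""

    def score(kept_mask: np.ndarray, is_dynamic: np.ndarray) -> dict:
        static_preserved = float(kept_mask[~is_dynamic].mean())   # static points kept
        dynamic_removed = float((~kept_mask[is_dynamic]).mean())  # dynamic points removed
        hmean = 2 * static_preserved * dynamic_removed / (static_preserved + dynamic_removed + 1e-9)
        return {"static_preserved": static_preserved,
                "dynamic_removed": dynamic_removed,
                "harmonic_mean": hmean}

    class RadiusCountBaseline(DynamicRemovalMethod):
        """Toy baseline: keep points with enough neighbours within a radius."""
        def __init__(self, radius=0.5, min_neighbors=3):
            self.radius, self.min_neighbors = radius, min_neighbors

        def run(self, points):
            tree = cKDTree(points)
            counts = np.array([len(tree.query_ball_point(p, self.radius)) for p in points]) - 1
            return counts >= self.min_neighbors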

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-344365 (URN), 10.1109/ITSC57777.2023.10422094 (DOI), 2-s2.0-85186537890 (Scopus ID)
Conference
26th IEEE International Conference on Intelligent Transportation Systems, ITSC 2023, Bilbao, Spain, Sep 24 2023 - Sep 28 2023
Note

Part of ISBN 9798350399462

QC 20240315

Available from: 2024-03-13. Created: 2024-03-13. Last updated: 2024-03-15. Bibliographically approved
Zangeneh, F., Bruns, L., Dekel, A., Pieropan, A. & Jensfelt, P. (2023). A Probabilistic Framework for Visual Localization in Ambiguous Scenes. In: Proceedings - ICRA 2023: IEEE International Conference on Robotics and Automation. Paper presented at 2023 IEEE International Conference on Robotics and Automation, ICRA 2023, London, United Kingdom of Great Britain and Northern Ireland, May 29 2023 - Jun 2 2023 (pp. 3969-3975). Institute of Electrical and Electronics Engineers (IEEE)
A Probabilistic Framework for Visual Localization in Ambiguous Scenes
2023 (English) In: Proceedings - ICRA 2023: IEEE International Conference on Robotics and Automation, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 3969-3975. Conference paper, Published paper (Refereed)
Abstract [en]

Visual localization allows autonomous robots to relocalize when losing track of their pose by matching their current observation with past ones. However, ambiguous scenes pose a challenge for such systems, as repetitive structures can be viewed from many distinct, equally likely camera poses, which means it is not sufficient to produce a single best pose hypothesis. In this work, we propose a probabilistic framework that for a given image predicts the arbitrarily shaped posterior distribution of its camera pose. We do this via a novel formulation of camera pose regression using variational inference, which allows sampling from the predicted distribution. Our method outperforms existing methods on localization in ambiguous scenes. We open-source our approach and share our recorded data sequence at github.com/efreidun/vapor.
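The core idea is to draw many pose hypotheses for one image by feeding the image embedding together with random latent codes through a learned sampler. The sketch below is a minimal illustration under that assumption; the network shape, latent size, and quaternion parameterization are placeholders rather than the paper's model.

    # Minimal sketch: image-conditioned pose sampler (placeholders, not the paper's model).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PoseSampler(nn.Module):
        def __init__(self, feat_dim=512, latent_dim=32, hidden=256):
            super().__init__()
            self.latent_dim = latent_dim
            self.net = nn.Sequential(
                nn.Linear(feat_dim + latent_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3 + 4))              # translation + quaternion

        def forward(self, feat, num_samples=100):
            # feat: (B, feat_dim) image embedding -> num_samples pose hypotheses per image
            B = feat.size(0)
            z = torch.randn(B, num_samples, self.latent_dim, device=feat.device)
            x = torch.cat([feat.unsqueeze(1).expand(-1, num_samples, -1), z], dim=-1)
            out = self.net(x)
            t = out[..., :3]
            q = F.normalize(out[..., 3:], dim=-1)      # unit quaternion per sample
            return t, q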

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Computer Vision and Robotics (Autonomous Systems); Robotics; Signal Processing
Identifiers
urn:nbn:se:kth:diva-336775 (URN), 10.1109/ICRA48891.2023.10160466 (DOI), 001036713003052, 2-s2.0-85168671933 (Scopus ID)
Conference
2023 IEEE International Conference on Robotics and Automation, ICRA 2023, London, United Kingdom of Great Britain and Northern Ireland, May 29 2023 - Jun 2 2023
Note

Part of ISBN 9798350323658

QC 20230920

Available from: 2023-09-20. Created: 2023-09-20. Last updated: 2024-03-12. Bibliographically approved
Wozniak, M. K., Kårefjärd, V., Hansson, M., Thiel, M. & Jensfelt, P. (2023). Applying 3D Object Detection from Self-Driving Cars to Mobile Robots: A Survey and Experiments. In: Lopes, A. C.; Pires, G.; Pinto, V. H.; Lima, J. L.; Fonseca, P. (Eds.), 2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC. Paper presented at IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), APR 26-27, 2023, Tomar, PORTUGAL (pp. 3-9). Institute of Electrical and Electronics Engineers (IEEE)
Applying 3D Object Detection from Self-Driving Cars to Mobile Robots: A Survey and Experiments
2023 (English) In: 2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC / [ed] Lopes, A. C.; Pires, G.; Pinto, V. H.; Lima, J. L.; Fonseca, P., Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 3-9. Conference paper, Published paper (Refereed)
Abstract [en]

3D object detection is crucial for the safety and reliability of mobile robots. Mobile robots must understand dynamic environments to operate safely and successfully carry out their tasks. However, most open-source datasets and methods are built for autonomous driving. In this paper, we present a detailed review of available 3D object detection methods, focusing on the ones that could be easily adapted and used on mobile robots. We show that the methods either do not perform well when used off-the-shelf on mobile robots or are too computationally expensive to run on mobile robotic platforms. Therefore, we propose a domain adaptation approach that uses publicly available data to retrain the perception modules of mobile robots, resulting in higher performance. Finally, we run tests on a real-world robot and provide data for testing our approach.
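One common domain-adaptation recipe consistent with this description is to keep the backbone trained on driving data and fine-tune only the detection heads on a small amount of robot-domain data. The sketch below assumes a generic detector object whose forward pass returns a training loss; it is an illustration of the recipe, not the paper's pipeline.

    # Hedged sketch: freeze the backbone, fine-tune the detection heads on robot-domain data.
    # `detector` and `robot_loader` are placeholders for a pretrained 3D detector and a
    # small labelled dataset recorded on the robot.
    import torch

    def finetune_on_robot_data(detector, robot_loader, epochs=10, lr=1e-4):
        for p in detector.backbone.parameters():       # keep driving-domain features fixed
            p.requires_grad = False
        head_params = [p for p in detector.parameters() if p.requires_grad]
        optimizer = torch.optim.AdamW(head_params, lr=lr)
        detector.train()
        for _ in range(epochs):
            for batch in robot_loader:                 # point clouds + boxes from the robot
                loss = detector(batch)                 # assumed to return the training loss
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return detector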

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE International Conference on Autonomous Robot Systems and Competitions ICARSC, ISSN 2573-9360
Keywords
perception, mobile robots, object detection
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-333736 (URN), 10.1109/ICARSC58346.2023.10129637 (DOI), 001011040500003, 2-s2.0-85161974029 (Scopus ID)
Conference
IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), APR 26-27, 2023, Tomar, PORTUGAL
Note

QC 20230810

Available from: 2023-08-10. Created: 2023-08-10. Last updated: 2023-08-10. Bibliographically approved
Wozniak, M. K., Stower, R., Jensfelt, P. & Abelho Pereira, A. T. (2023). Happily Error After: Framework Development and User Study for Correcting Robot Perception Errors in Virtual Reality. In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN. Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA (pp. 1573-1580). Institute of Electrical and Electronics Engineers (IEEE)
Happily Error After: Framework Development and User Study for Correcting Robot Perception Errors in Virtual Reality
2023 (English) In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1573-1580. Conference paper, Published paper (Refereed)
Abstract [en]

While robots appear in ever more areas of our lives, they still make errors. One common cause of failure stems from the robot perception module when detecting objects. Allowing users to correct such errors can help improve the interaction and prevent the same errors in the future. Consequently, we investigate the effectiveness of a virtual reality (VR) framework for correcting perception errors of a Franka Panda robot. We conducted a user study with 56 participants who interacted with the robot using both VR and screen interfaces. Participants learned to collaborate with the robot faster in the VR interface compared to the screen interface. Additionally, participants found the VR interface more immersive and enjoyable, and expressed a preference for using it again. These findings suggest that VR interfaces may offer advantages over screen interfaces for human-robot interaction in error-prone settings.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-341975 (URN), 10.1109/RO-MAN57019.2023.10309446 (DOI), 001108678600198, 2-s2.0-85186968933 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings ISBN 979-8-3503-3670-2

QC 20240110

Available from: 2024-01-10. Created: 2024-01-10. Last updated: 2024-03-22. Bibliographically approved
van Waveren, S., Rudling, R., Leite, I., Jensfelt, P. & Pek, C. (2023). Increasing perceived safety in motion planning for human-drone interaction. In: HRI 2023: Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023 (pp. 446-455). Association for Computing Machinery (ACM)
Increasing perceived safety in motion planning for human-drone interaction
2023 (English) In: HRI 2023: Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, p. 446-455. Conference paper, Published paper (Refereed)
Abstract [en]

Safety is crucial for autonomous drones to operate close to humans. Besides avoiding unwanted or harmful contact, people should also perceive the drone as safe. Existing safe motion planning approaches for autonomous robots, such as drones, have primarily focused on ensuring physical safety, e.g., by imposing constraints on motion planners. However, studies indicate that ensuring physical safety does not necessarily lead to perceived safety. Prior work in Human-Drone Interaction (HDI) shows that factors such as the drone's speed and distance to the human are important for perceived safety. Building on these works, we propose a parameterized control barrier function (CBF) that constrains the drone's maximum deceleration and minimum distance to the human, and we update its parameters based on people's ratings of perceived safety. We describe an implementation and evaluation of our approach. Results of a within-subject user study (N = 15) show that we can improve the perceived safety of a drone by adapting to people individually.
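In one dimension, a distance barrier h = d - d_min together with a deceleration limit reduces to clipping the commanded approach speed. The sketch below shows that reduced form; the parameter values and the scalar setting are illustrative assumptions, not the controller evaluated in the study.

    # 1D illustration: h = d - d_min as a barrier plus a braking-distance check.
    import math

    def safe_approach_speed(v_desired, d, d_min=1.5, a_max=2.0, gamma=0.8):
        """Clip the commanded approach speed so the barrier h = d - d_min stays positive."""
        h = max(d - d_min, 0.0)
        v_cbf = gamma * h                     # enforce dh/dt >= -gamma * h  (dh/dt = -v)
        v_brake = math.sqrt(2.0 * a_max * h)  # must be able to stop with |decel| <= a_max
        return min(v_desired, v_cbf, v_brake)

    # Far from the person the nominal speed passes through; close by it is reduced.
    print(safe_approach_speed(2.0, d=6.0))    # 2.0
    print(safe_approach_speed(2.0, d=2.0))    # 0.4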

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
control barrier functions, human-drone interaction, motion planning, perceived safety
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-333381 (URN), 10.1145/3568162.3576966 (DOI), 2-s2.0-85150349732 (Scopus ID)
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023
Note

Part of ISBN 9781450399647

QC 20230801

Available from: 2023-08-01. Created: 2023-08-01. Last updated: 2023-08-01. Bibliographically approved
Bruns, L. & Jensfelt, P. (2023). On the Evaluation of RGB-D-Based Categorical Pose and Shape Estimation. In: Petrovic, I.; Menegatti, E.; Markovic, I. (Eds.), Intelligent Autonomous Systems 17, IAS-17. Paper presented at 17th International Conference on Intelligent Autonomous Systems (IAS), JUN 13-16, 2022, Zagreb, CROATIA (pp. 360-377). Springer Nature, 577
On the Evaluation of RGB-D-Based Categorical Pose and Shape Estimation
2023 (English) In: Intelligent Autonomous Systems 17, IAS-17 / [ed] Petrovic, I.; Menegatti, E.; Markovic, I., Springer Nature, 2023, Vol. 577, p. 360-377. Conference paper, Published paper (Refereed)
Abstract [en]

Recently, various methods for 6D pose and shape estimation of objects have been proposed. Typically, these methods evaluate their pose estimation in terms of average precision and reconstruction quality in terms of chamfer distance. In this work, we take a critical look at this predominant evaluation protocol, including metrics and datasets. We propose a new set of metrics, contribute new annotations for the Redwood dataset, and evaluate state-of-the-art methods in a fair comparison. We find that existing methods do not generalize well to unconstrained orientations and are actually heavily biased towards objects being upright. We provide an easy-to-use evaluation toolbox with well-defined metrics, methods, and dataset interfaces, which allows evaluation and comparison with various state-of-the-art approaches (https://github.com/roym899/pose_and_shape_evaluation).
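As a concrete reference for the reconstruction metric mentioned above, the snippet below computes one common form of the symmetric chamfer distance (mean nearest-neighbour distance in both directions; other conventions use squared distances).

    # One common form of the symmetric chamfer distance between two point sets.
    import numpy as np
    from scipy.spatial import cKDTree

    def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
        """pred: (N, 3), gt: (M, 3) points sampled from the reconstructed and true surfaces."""
        d_pred_to_gt, _ = cKDTree(gt).query(pred)    # nearest-neighbour distance per pred point
        d_gt_to_pred, _ = cKDTree(pred).query(gt)
        return float(d_pred_to_gt.mean() + d_gt_to_pred.mean())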

Place, publisher, year, edition, pages
Springer Nature, 2023
Series
Lecture Notes in Networks and Systems, ISSN 2367-3370
Keywords
Pose estimation, Shape reconstruction, RGB-D-based perception
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-328417 (URN), 10.1007/978-3-031-22216-0_25 (DOI), 000992458200025, 2-s2.0-85148744517 (Scopus ID)
Conference
17th International Conference on Intelligent Autonomous Systems (IAS), JUN 13-16, 2022, Zagreb, CROATIA
Note

QC 20230613

Available from: 2023-06-13. Created: 2023-06-13. Last updated: 2023-06-13. Bibliographically approved
Bruns, L. & Jensfelt, P. (2023). RGB-D-based categorical object pose and shape estimation: Methods, datasets, and evaluation. Robotics and Autonomous Systems, 168, Article ID 104507.
RGB-D-based categorical object pose and shape estimation: Methods, datasets, and evaluation
2023 (English) In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 168, article id 104507. Article in journal (Refereed) Published
Abstract [en]

Recently, various methods for 6D pose and shape estimation of objects at a per-category level have been proposed. This work provides an overview of the field in terms of methods, datasets, and evaluation protocols. First, an overview of existing works and their commonalities and differences is provided. Second, we take a critical look at the predominant evaluation protocol, including metrics and datasets. Based on the findings, we propose a new set of metrics, contribute new annotations for the Redwood dataset, and evaluate state-of-the-art methods in a fair comparison. The results indicate that existing methods do not generalize well to unconstrained orientations and are actually heavily biased towards objects being upright. We provide an easy-to-use evaluation toolbox with well-defined metrics, methods, and dataset interfaces, which allows evaluation and comparison with various state-of-the-art approaches (https://github.com/roym899/pose_and_shape_evaluation).
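For the pose side of the evaluation, the snippet below computes rotation and translation errors for a single prediction and the fraction of a dataset falling under a threshold pair; the specific thresholds are placeholders, not the ones argued for in the paper.

    # Rotation / translation error for one prediction, and accuracy under a threshold pair.
    import numpy as np

    def pose_errors(R_pred, t_pred, R_gt, t_gt):
        """R_*: (3, 3) rotation matrices, t_*: (3,) translations; returns (degrees, meters)."""
        cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
        rot_err = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        trans_err = np.linalg.norm(t_pred - t_gt)
        return rot_err, trans_err

    def accuracy_at(errors, rot_thresh_deg=10.0, trans_thresh_m=0.05):
        """errors: iterable of (rot_err_deg, trans_err_m) pairs over a dataset."""
        hits = [r <= rot_thresh_deg and t <= trans_thresh_m for r, t in errors]
        return float(np.mean(hits))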

Place, publisher, year, edition, pages
Elsevier BV, 2023
Keywords
Pose estimation, RGB-D-based perception, Shape estimation, Shape reconstruction
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-336565 (URN), 10.1016/j.robot.2023.104507 (DOI), 001090698300001, 2-s2.0-85169011550 (Scopus ID)
Note

QC 20230918

Available from: 2023-09-18. Created: 2023-09-18. Last updated: 2023-11-20. Bibliographically approved
Nguyen, T.-M., Duberg, D., Jensfelt, P., Yuan, S. & Xie, L. (2023). SLICT: Multi-Input Multi-Scale Surfel-Based Lidar-Inertial Continuous-Time Odometry and Mapping. IEEE Robotics and Automation Letters, 8(4), 2102-2109
SLICT: Multi-Input Multi-Scale Surfel-Based Lidar-Inertial Continuous-Time Odometry and Mapping
2023 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no 4, p. 2102-2109. Article in journal (Refereed) Published
Abstract [en]

While feature association to a global map has significant benefits, to keep the computations from growing exponentially, most lidar-based odometry and mapping methods opt to associate features with local maps at one voxel scale. Taking advantage of the fact that surfels (surface elements) at different voxel scales can be organized in a tree-like structure, we propose an octree-based global map of multi-scale surfels that can be updated incrementally. This alleviates the need for repeatedly recalculating, for example, a k-d tree of the whole map. The system can also take input from a single sensor or multiple sensors, reinforcing robustness in degenerate cases. We also propose a point-to-surfel (PTS) association scheme, continuous-time optimization on PTS and IMU preintegration factors, along with loop closure and bundle adjustment, making a complete framework for lidar-inertial continuous-time odometry and mapping. Experiments on public and in-house datasets demonstrate the advantages of our system compared to other state-of-the-art methods.
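A multi-scale surfel map can be sketched as per-voxel point statistics, kept at several resolutions and updated incrementally, from which a local plane is recovered on demand. The data layout below is an illustrative assumption, not the paper's octree implementation.

    # Illustrative multi-scale surfel map: per-voxel running statistics at several resolutions.
    import numpy as np
    from collections import defaultdict

    class Surfel:
        def __init__(self):
            self.n = 0
            self.s = np.zeros(3)          # running sum of points
            self.p = np.zeros((3, 3))     # running sum of outer products

        def add(self, pt):
            self.n += 1
            self.s += pt
            self.p += np.outer(pt, pt)

        def plane(self):
            mean = self.s / self.n
            cov = self.p / self.n - np.outer(mean, mean)
            w, v = np.linalg.eigh(cov)    # eigenvalues in ascending order
            return mean, v[:, 0], w[0]    # centroid, normal, planarity proxy

    class MultiScaleSurfelMap:
        def __init__(self, resolutions=(0.2, 0.4, 0.8)):
            self.resolutions = resolutions
            self.grids = [defaultdict(Surfel) for _ in resolutions]

        def insert(self, points):
            # points: (N, 3); each point updates one surfel per resolution, incrementally.
            for pt in np.asarray(points, dtype=float):
                for res, grid in zip(self.resolutions, self.grids):
                    grid[tuple(np.floor(pt / res).astype(int))].add(pt)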

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Laser radar, Feature extraction, Optimization, Costs, Robot kinematics, Source coding, Octrees, Localization, mapping, sensor fusion
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-325223 (URN), 10.1109/LRA.2023.3246390 (DOI), 000942347900010, 2-s2.0-85149417556 (Scopus ID)
Note

QC 20230403

Available from: 2023-04-03. Created: 2023-04-03. Last updated: 2024-01-17. Bibliographically approved
Wozniak, M. K., Kårefjärd, V., Thiel, M. & Jensfelt, P. (2023). Toward a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data. IEEE Robotics and Automation Letters, 8(11), 7018-7025
Toward a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data
2023 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no 11, p. 7018-7025. Article in journal (Refereed) Published
Abstract [en]

Multimodal sensor fusion methods for 3D object detection have been revolutionizing the autonomous driving research field. Nevertheless, most of these methods heavily rely on dense LiDAR data and accurately calibrated sensors, which is often not the case in real-world scenarios. Data from LiDAR and cameras often come misaligned due to miscalibration, decalibration, or different sensor frequencies. Additionally, some parts of the LiDAR data may be occluded, and parts of the data may be missing due to hardware malfunction or weather conditions. This work presents a novel fusion step that addresses data corruptions and makes sensor fusion for 3D object detection more robust. Through extensive experiments, we demonstrate that our method performs on par with state-of-the-art approaches on normal data and outperforms them on misaligned data.
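The misalignment discussed above can be simulated by perturbing the LiDAR-camera extrinsics before projecting points into the image, which is a simple way to probe or train for robustness. The functions below are a hedged sketch of that corruption, not the method proposed in the letter.

    # Simulated LiDAR-camera decalibration applied to the extrinsics before projection.
    import numpy as np
    from scipy.spatial.transform import Rotation

    def perturb_extrinsics(T_cam_lidar, rot_deg=2.0, trans_m=0.05, rng=None):
        """T_cam_lidar: (4, 4) homogeneous extrinsic; returns a randomly misaligned copy."""
        rng = np.random.default_rng() if rng is None else rng
        noise = np.eye(4)
        noise[:3, :3] = Rotation.from_euler(
            "xyz", rng.uniform(-rot_deg, rot_deg, size=3), degrees=True).as_matrix()
        noise[:3, 3] = rng.uniform(-trans_m, trans_m, size=3)
        return noise @ T_cam_lidar

    def project(points_lidar, T_cam_lidar, K):
        """points_lidar: (N, 3), K: (3, 3) intrinsics; returns (N, 2) pixel coordinates."""
        homog = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
        cam = (T_cam_lidar @ homog.T).T[:, :3]       # points in the camera frame
        uv = (K @ cam.T).T
        return uv[:, :2] / uv[:, 2:3]                # note: no behind-camera filtering here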

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Object detection, segmentation and categorization, sensor fusion, deep learning for visual perception
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-344105 (URN), 10.1109/LRA.2023.3313924 (DOI), 001157878900002, 2-s2.0-85171591218 (Scopus ID)
Note

QC 20240301

Available from: 2024-03-01. Created: 2024-03-01. Last updated: 2024-03-01. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-1170-7162