Publications (10 of 168)
Ericson, L. & Jensfelt, P. (2024). Beyond the Frontier: Predicting Unseen Walls From Occupancy Grids by Learning From Floor Plans. IEEE Robotics and Automation Letters, 9(8), 6832-6839
2024 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no. 8, pp. 6832-6839. Journal article (Refereed) Published
Abstract [en]

In this letter, we tackle the challenge of predicting the unseen walls of a partially observed environment as a set of 2D line segments, conditioned on occupancy grids integrated along the trajectory of a 360° LIDAR sensor. A dataset of such occupancy grids and their corresponding target wall segments is collected by navigating a virtual robot between a set of randomly sampled waypoints in a collection of office-scale floor plans from a university campus. The line segment prediction task is formulated as an autoregressive sequence prediction task, and an attention-based deep network is trained on the dataset. The sequence-based autoregressive formulation is evaluated through predicted information gain, as in frontier-based autonomous exploration, demonstrating significant improvements over both non-predictive estimation and convolution-based image prediction found in the literature. Ablations on key components are evaluated, as well as sensor range and the occupancy grid's metric area. Finally, model generality is validated by predicting walls in a novel floor plan reconstructed on-the-fly in a real-world office environment.
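
As a rough illustration of the information-gain evaluation mentioned above (a minimal sketch under assumed names and values, not the paper's model or code), the non-predictive baseline in frontier-based exploration scores a candidate viewpoint by counting the unknown occupancy-grid cells that its sensor rays would reveal:

    # Illustrative sketch only: count unknown cells visible from a viewpoint.
    import numpy as np

    UNKNOWN, FREE, OCCUPIED = -1, 0, 1

    def information_gain(grid, pose, sensor_range, n_rays=360):
        """Count unknown cells visible from pose = (row, col) within sensor_range."""
        seen = set()
        for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
            for r in range(1, sensor_range + 1):
                row = int(round(pose[0] + r * np.sin(theta)))
                col = int(round(pose[1] + r * np.cos(theta)))
                if not (0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]):
                    break
                if grid[row, col] == OCCUPIED:      # ray blocked by a known wall
                    break
                if grid[row, col] == UNKNOWN:
                    seen.add((row, col))
        return len(seen)

    grid = np.full((100, 100), UNKNOWN)
    grid[40:60, 40:60] = FREE                       # a small explored patch
    print(information_gain(grid, pose=(50, 50), sensor_range=30))

A map-predictive variant would run the same count on a grid augmented with the predicted wall segments, which is where the learned model changes the estimate.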

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Deep learning methods, planning under uncertainty, autonomous agents, learning from experience, map-predictive exploration
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-350044 (URN), 10.1109/LRA.2024.3410164 (DOI), 001251164900004, 2-s2.0-85195423165 (Scopus ID)
Note

QC 20240705

Available from: 2024-07-05 Created: 2024-07-05 Last updated: 2024-07-05. Bibliographically approved
Zhang, Q., Duberg, D., Geng, R., Jia, M., Wang, L. & Jensfelt, P. (2023). A Dynamic Points Removal Benchmark in Point Cloud Maps. In: 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023. Paper presented at 26th IEEE International Conference on Intelligent Transportation Systems, ITSC 2023, Bilbao, Spain, Sep 24 2023 - Sep 28 2023 (pp. 608-614). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English) In: 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 608-614. Conference paper, Published paper (Refereed)
Abstract [en]

In the field of robotics, the point cloud has become an essential map representation. From the perspective of downstream tasks like localization and global path planning, points corresponding to dynamic objects will adversely affect their performance. Existing methods for removing dynamic points in point clouds often lack clarity in comparative evaluations and comprehensive analysis. Therefore, we propose an easy-to-extend unified benchmarking framework for evaluating techniques for removing dynamic points in maps. It includes refactored state-of-the-art methods and novel metrics to analyze the limitations of these approaches. This enables researchers to dive deep into the underlying reasons behind these limitations. The benchmark makes use of several datasets with different sensor types. All the code and datasets related to our study are publicly available for further development and utilization.
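
As a hedged illustration of the kind of score such a benchmark reports (the metric names below are assumptions, not necessarily those used in the paper), a cleaned map can be compared against ground-truth per-point dynamic labels by how many dynamic points were removed and how many static points were preserved:

    # Illustrative sketch only: score a dynamic-point-removal result.
    import numpy as np

    def removal_scores(gt_dynamic, removed):
        """gt_dynamic, removed: boolean arrays over the points of the raw map."""
        gt_static = ~gt_dynamic
        dynamic_removal_rate = removed[gt_dynamic].mean()    # higher is better
        static_preservation = (~removed[gt_static]).mean()   # higher is better
        return dynamic_removal_rate, static_preservation

    gt_dynamic = np.array([True, True, False, False, False])
    removed = np.array([True, False, False, False, True])
    print(removal_scores(gt_dynamic, removed))   # ~0.5 dynamic removed, ~0.67 static kept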

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National subject category
Robotics and Automation
Identifiers
urn:nbn:se:kth:diva-344365 (URN), 10.1109/ITSC57777.2023.10422094 (DOI), 2-s2.0-85186537890 (Scopus ID)
Conference
26th IEEE International Conference on Intelligent Transportation Systems, ITSC 2023, Bilbao, Spain, Sep 24 2023 - Sep 28 2023
Note

Part of ISBN 9798350399462

QC 20240315

Available from: 2024-03-13 Created: 2024-03-13 Last updated: 2024-03-15. Bibliographically approved
Zangeneh, F., Bruns, L., Dekel, A., Pieropan, A. & Jensfelt, P. (2023). A Probabilistic Framework for Visual Localization in Ambiguous Scenes. In: Proceedings - ICRA 2023: IEEE International Conference on Robotics and Automation. Paper presented at 2023 IEEE International Conference on Robotics and Automation, ICRA 2023, London, United Kingdom of Great Britain and Northern Ireland, May 29 2023 - Jun 2 2023 (pp. 3969-3975). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English) In: Proceedings - ICRA 2023: IEEE International Conference on Robotics and Automation, Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 3969-3975. Conference paper, Published paper (Refereed)
Abstract [en]

Visual localization allows autonomous robots to relocalize when losing track of their pose by matching their current observation with past ones. However, ambiguous scenes pose a challenge for such systems, as repetitive structures can be viewed from many distinct, equally likely camera poses, which means it is not sufficient to produce a single best pose hypothesis. In this work, we propose a probabilistic framework that for a given image predicts the arbitrarily shaped posterior distribution of its camera pose. We do this via a novel formulation of camera pose regression using variational inference, which allows sampling from the predicted distribution. Our method outperforms existing methods on localization in ambiguous scenes. We open-source our approach and share our recorded data sequence at github.com/efreidun/vapor.
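
A minimal sketch of the underlying idea, assuming a generic generative sampler rather than the authors' actual architecture: a network maps an image embedding plus latent noise to a pose hypothesis, so repeated sampling yields a set of poses that represents an arbitrarily shaped (possibly multi-modal) posterior. The "decoder" below is a random stand-in for a trained network.

    # Illustrative sketch only: sample camera-pose hypotheses from noise.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(7, 128 + 16))              # placeholder decoder weights

    def sample_poses(image_embedding, n_samples=100):
        """Return an (n_samples, 7) array: translation (3) + unit quaternion (4)."""
        poses = []
        for _ in range(n_samples):
            z = rng.normal(size=16)                 # latent noise
            out = W @ np.concatenate([image_embedding, z])
            t, q = out[:3], out[3:]
            q = q / np.linalg.norm(q)               # normalize quaternion
            poses.append(np.concatenate([t, q]))
        return np.stack(poses)

    embedding = rng.normal(size=128)
    hypotheses = sample_poses(embedding)
    print(hypotheses.shape)     # (100, 7); multi-modal in ambiguous scenes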

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National subject category
Computer Vision and Robotics (Autonomous Systems), Robotics and Automation, Signal Processing
Identifiers
urn:nbn:se:kth:diva-336775 (URN), 10.1109/ICRA48891.2023.10160466 (DOI), 001036713003052, 2-s2.0-85168671933 (Scopus ID)
Conference
2023 IEEE International Conference on Robotics and Automation, ICRA 2023, London, United Kingdom of Great Britain and Northern Ireland, May 29 2023 - Jun 2 2023
Note

Part of ISBN 9798350323658

QC 20230920

Available from: 2023-09-20 Created: 2023-09-20 Last updated: 2024-03-12. Bibliographically approved
Wozniak, M. K., Kårefjärd, V., Hansson, M., Thiel, M. & Jensfelt, P. (2023). Applying 3D Object Detection from Self-Driving Cars to Mobile Robots: A Survey and Experiments. In: Lopes, A. C., Pires, G., Pinto, V. H., Lima, J. L. & Fonseca, P. (Eds.), 2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC. Paper presented at IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), APR 26-27, 2023, Tomar, PORTUGAL (pp. 3-9). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English) In: 2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC / [ed] Lopes, A. C., Pires, G., Pinto, V. H., Lima, J. L. & Fonseca, P., Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 3-9. Conference paper, Published paper (Refereed)
Abstract [en]

3D object detection is crucial for the safety and reliability of mobile robots. Mobile robots must understand dynamic environments to operate safely and successfully carry out their tasks. However, most open-source datasets and methods are built for autonomous driving. In this paper, we present a detailed review of available 3D object detection methods, focusing on those that could be easily adapted and used on mobile robots. We show that these methods either do not perform well when used off-the-shelf on mobile robots or are too computationally expensive to run on mobile robotic platforms. Therefore, we propose a domain adaptation approach that uses publicly available data to retrain the perception modules of mobile robots, resulting in higher performance. Finally, we run tests on a real-world robot and provide data for testing our approach.
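
A hedged sketch of the general retraining recipe (placeholder model, data, and loss; not the authors' pipeline): start from a detector pretrained on autonomous-driving data and fine-tune it on a small set of labelled scans from the robot's own domain.

    # Illustrative sketch only: fine-tune a (stand-in) pretrained detector head.
    import torch

    model = torch.nn.Linear(64, 7)       # stand-in for a pretrained 3D detector head
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Stand-in for a small labelled dataset collected in the robot's domain.
    robot_domain_data = [(torch.randn(8, 64), torch.randn(8, 7)) for _ in range(10)]

    for epoch in range(3):
        for features, boxes in robot_domain_data:
            loss = torch.nn.functional.smooth_l1_loss(model(features), boxes)
            opt.zero_grad()
            loss.backward()
            opt.step()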

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE International Conference on Autonomous Robot Systems and Competitions ICARSC, ISSN 2573-9360
Keywords
perception, mobile robots, object detection
National subject category
Robotics and Automation
Identifiers
urn:nbn:se:kth:diva-333736 (URN), 10.1109/ICARSC58346.2023.10129637 (DOI), 001011040500003, 2-s2.0-85161974029 (Scopus ID)
Conference
IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), APR 26-27, 2023, Tomar, PORTUGAL
Note

QC 20230810

Available from: 2023-08-10 Created: 2023-08-10 Last updated: 2023-08-10. Bibliographically approved
Wozniak, M. K., Stower, R., Jensfelt, P. & Abelho Pereira, A. T. (2023). Happily Error After: Framework Development and User Study for Correcting Robot Perception Errors in Virtual Reality. In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN. Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA (pp. 1573-1580). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English) In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 1573-1580. Conference paper, Published paper (Refereed)
Abstract [en]

While robots are appearing in more and more areas of our lives, they still make errors. One common cause of failure is the robot's perception module when detecting objects. Allowing users to correct such errors can help improve the interaction and prevent the same errors in the future. Consequently, we investigate the effectiveness of a virtual reality (VR) framework for correcting perception errors of a Franka Panda robot. We conducted a user study with 56 participants who interacted with the robot using both VR and screen interfaces. Participants learned to collaborate with the robot faster in the VR interface compared to the screen interface. Additionally, participants found the VR interface more immersive and enjoyable, and expressed a preference for using it again. These findings suggest that VR interfaces may offer advantages over screen interfaces for human-robot interaction when perception errors occur.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National subject category
Robotics and Automation
Identifiers
urn:nbn:se:kth:diva-341975 (URN), 10.1109/RO-MAN57019.2023.10309446 (DOI), 001108678600198, 2-s2.0-85186968933 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings ISBN 979-8-3503-3670-2

QC 20240110

Available from: 2024-01-10 Created: 2024-01-10 Last updated: 2024-03-22. Bibliographically approved
van Waveren, S., Rudling, R., Leite, I., Jensfelt, P. & Pek, C. (2023). Increasing perceived safety in motion planning for human-drone interaction. In: HRI 2023: Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023 (pp. 446-455). Association for Computing Machinery (ACM)
2023 (English) In: HRI 2023: Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, pp. 446-455. Conference paper, Published paper (Refereed)
Abstract [en]

Safety is crucial for autonomous drones to operate close to humans. Besides avoiding unwanted or harmful contact, people should also perceive the drone as safe. Existing safe motion planning approaches for autonomous robots, such as drones, have primarily focused on ensuring physical safety, e.g., by imposing constraints on motion planners. However, studies indicate that ensuring physical safety does not necessarily lead to perceived safety. Prior work in Human-Drone Interaction (HDI) shows that factors such as the drone's speed and distance to the human are important for perceived safety. Building on these works, we propose a parameterized control barrier function (CBF) that constrains the drone's maximum deceleration and minimum distance to the human, and we update its parameters based on people's ratings of perceived safety. We describe an implementation and evaluation of our approach. Results of a within-subject user study (N = 15) show that we can improve the perceived safety of a drone by adjusting to people individually.
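
To make the mechanism concrete, here is a minimal sketch of one common braking-distance-style barrier (an assumed form, not necessarily the paper's exact CBF): h = d - d_min - v^2 / (2 * a_max) is non-negative exactly when the drone can stop, with deceleration at most a_max, before getting closer than d_min to the person; d_min and a_max are the kind of parameters the study adapts from perceived-safety ratings.

    # Illustrative sketch only: braking-distance barrier and the speed it permits.
    def barrier(d, v, d_min, a_max):
        return d - d_min - v ** 2 / (2.0 * a_max)

    def max_safe_speed(d, d_min, a_max):
        """Largest approach speed keeping the barrier non-negative at distance d."""
        slack = max(d - d_min, 0.0)
        return (2.0 * a_max * slack) ** 0.5

    # Assumed example values for the parameters the user study would personalize.
    d_min, a_max = 1.5, 2.0                                   # metres, m/s^2
    print(barrier(d=4.0, v=2.0, d_min=d_min, a_max=a_max))    # 1.5 -> safe
    print(max_safe_speed(d=4.0, d_min=d_min, a_max=a_max))    # ~3.16 m/s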

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
control barrier functions, human-drone interaction, motion planning, perceived safety
National subject category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-333381 (URN), 10.1145/3568162.3576966 (DOI), 2-s2.0-85150349732 (Scopus ID)
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023
Note

Part of ISBN 9781450399647

QC 20230801

Available from: 2023-08-01 Created: 2023-08-01 Last updated: 2023-08-01. Bibliographically approved
Bruns, L. & Jensfelt, P. (2023). On the Evaluation of RGB-D-Based Categorical Pose and Shape Estimation. In: Petrovic, I., Menegatti, E. & Markovic, I. (Eds.), Intelligent Autonomous Systems 17, IAS-17. Paper presented at 17th International Conference on Intelligent Autonomous Systems (IAS), JUN 13-16, 2022, Zagreb, CROATIA (pp. 360-377). Springer Nature, 577
2023 (English) In: Intelligent Autonomous Systems 17, IAS-17 / [ed] Petrovic, I., Menegatti, E. & Markovic, I., Springer Nature, 2023, Vol. 577, pp. 360-377. Conference paper, Published paper (Refereed)
Abstract [en]

Recently, various methods for 6D pose and shape estimation of objects have been proposed. Typically, these methods evaluate their pose estimation in terms of average precision and reconstruction quality in terms of chamfer distance. In this work, we take a critical look at this predominant evaluation protocol, including metrics and datasets. We propose a new set of metrics, contribute new annotations for the Redwood dataset, and evaluate state-of-the-art methods in a fair comparison. We find that existing methods do not generalize well to unconstrained orientations and are actually heavily biased towards objects being upright. We provide an easy-to-use evaluation toolbox with well-defined metrics, method, and dataset interfaces, which allows evaluation and comparison with various state-of-the-art approaches (https://github.com/roym899/pose_and_shape_evaluation).
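
For reference, a minimal sketch of the symmetric chamfer distance commonly used to score reconstruction quality (a textbook definition; the paper's toolbox defines its exact variant and the additional proposed metrics):

    # Illustrative sketch only: symmetric chamfer distance between point sets.
    import numpy as np

    def chamfer_distance(a, b):
        """a: (N, 3) and b: (M, 3) points sampled from the two surfaces."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # N x M
        return d.min(axis=1).mean() + d.min(axis=0).mean()

    pred = np.random.rand(200, 3)
    gt = np.random.rand(300, 3)
    print(chamfer_distance(pred, gt))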

Place, publisher, year, edition, pages
Springer Nature, 2023
Series
Lecture Notes in Networks and Systems, ISSN 2367-3370
Keywords
Pose estimation, Shape reconstruction, RGB-D-based perception
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-328417 (URN), 10.1007/978-3-031-22216-0_25 (DOI), 000992458200025, 2-s2.0-85148744517 (Scopus ID)
Conference
17th International Conference on Intelligent Autonomous Systems (IAS), JUN 13-16, 2022, Zagreb, CROATIA
Note

QC 20230613

Available from: 2023-06-13 Created: 2023-06-13 Last updated: 2023-06-13. Bibliographically approved
Bruns, L. & Jensfelt, P. (2023). RGB-D-based categorical object pose and shape estimation: Methods, datasets, and evaluation. Robotics and Autonomous Systems, 168, Article ID 104507.
2023 (English) In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 168, article id 104507. Journal article (Refereed) Published
Abstract [en]

Recently, various methods for 6D pose and shape estimation of objects at a per-category level have been proposed. This work provides an overview of the field in terms of methods, datasets, and evaluation protocols. First, an overview of existing works and their commonalities and differences is provided. Second, we take a critical look at the predominant evaluation protocol, including metrics and datasets. Based on the findings, we propose a new set of metrics, contribute new annotations for the Redwood dataset, and evaluate state-of-the-art methods in a fair comparison. The results indicate that existing methods do not generalize well to unconstrained orientations and are actually heavily biased towards objects being upright. We provide an easy-to-use evaluation toolbox with well-defined metrics, methods, and dataset interfaces, which allows evaluation and comparison with various state-of-the-art approaches (https://github.com/roym899/pose_and_shape_evaluation).

Place, publisher, year, edition, pages
Elsevier BV, 2023
Keywords
Pose estimation, RGB-D-based perception, Shape estimation, Shape reconstruction
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-336565 (URN), 10.1016/j.robot.2023.104507 (DOI), 001090698300001, 2-s2.0-85169011550 (Scopus ID)
Note

QC 20230918

Available from: 2023-09-18 Created: 2023-09-18 Last updated: 2023-11-20. Bibliographically approved
Nguyen, T.-M., Duberg, D., Jensfelt, P., Yuan, S. & Xie, L. (2023). SLICT: Multi-Input Multi-Scale Surfel-Based Lidar-Inertial Continuous-Time Odometry and Mapping. IEEE Robotics and Automation Letters, 8(4), 2102-2109
2023 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no. 4, pp. 2102-2109. Journal article (Refereed) Published
Abstract [en]

While feature association to a global map has significant benefits, to keep the computations from growing exponentially, most lidar-based odometry and mapping methods opt to associate features with local maps at one voxel scale. Taking advantage of the fact that surfels (surface elements) at different voxel scales can be organized in a tree-like structure, we propose an octree-based global map of multi-scale surfels that can be updated incrementally. This removes the need to repeatedly recompute, for example, a k-d tree over the whole map. The system can also take input from a single sensor or multiple sensors, reinforcing robustness in degenerate cases. We also propose a point-to-surfel (PTS) association scheme, continuous-time optimization on PTS and IMU preintegration factors, along with loop closure and bundle adjustment, making a complete framework for Lidar-Inertial continuous-time odometry and mapping.
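
A hedged sketch of the surfel bookkeeping described above (illustrative only, not the SLICT implementation): each voxel keeps running sums of the points that fall inside it, the surfel centroid and normal follow from the mean and the smallest-eigenvalue direction of the covariance, and keeping such sums per voxel key (possibly at several scales) makes updates incremental without rebuilding a k-d tree over the whole map.

    # Illustrative sketch only: incremental per-voxel surfel statistics.
    import numpy as np
    from collections import defaultdict

    class Surfel:
        def __init__(self):
            self.n, self.sum, self.sq = 0, np.zeros(3), np.zeros((3, 3))
        def add(self, p):
            self.n += 1
            self.sum += p
            self.sq += np.outer(p, p)
        def mean_and_normal(self):
            mu = self.sum / self.n
            cov = self.sq / self.n - np.outer(mu, mu)
            w, v = np.linalg.eigh(cov)
            return mu, v[:, 0]        # normal = smallest-eigenvalue direction

    voxel_size = 0.5
    surfels = defaultdict(Surfel)
    points = np.random.rand(1000, 3) * np.array([10.0, 10.0, 0.01])  # near-planar scan
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))
        surfels[key].add(p)

    s = max(surfels.values(), key=lambda s: s.n)
    mu, n = s.mean_and_normal()
    print(mu, n)                      # normal should be close to +-z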

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Laser radar, Feature extraction, Optimization, Costs, Robot kinematics, Source coding, Octrees, Localization, mapping, sensor fusion
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-325223 (URN), 10.1109/LRA.2023.3246390 (DOI), 000942347900010, 2-s2.0-85149417556 (Scopus ID)
Note

QC 20230403

Available from: 2023-04-03 Created: 2023-04-03 Last updated: 2024-01-17. Bibliographically approved
Wozniak, M. K., Kårefjärd, V., Thiel, M. & Jensfelt, P. (2023). Toward a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data. IEEE Robotics and Automation Letters, 8(11), 7018-7025
2023 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no. 11, pp. 7018-7025. Journal article (Refereed) Published
Abstract [en]

Multimodal sensor fusion methods for 3D object detection have been revolutionizing the autonomous driving research field. Nevertheless, most of these methods heavily rely on dense LiDAR data and accurately calibrated sensors, which is often not the case in real-world scenarios. Data from LiDAR and cameras often come misaligned due to miscalibration, decalibration, or different sensor frequencies. Additionally, some parts of the LiDAR data may be occluded, and parts of the data may be missing due to hardware malfunction or weather conditions. This work presents a novel fusion step that addresses data corruptions and makes sensor fusion for 3D object detection more robust. Through extensive experiments, we demonstrate that our method performs on par with state-of-the-art approaches on normal data and outperforms them on misaligned data.
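
For background on the step where such corruption bites, here is a generic LiDAR-to-camera projection sketch (standard geometry, not the paper's fusion module): points are transformed with the extrinsic calibration and projected with the camera intrinsics, so any miscalibration or decalibration directly corrupts the point-to-pixel association that fusion relies on.

    # Illustrative sketch only: project LiDAR points into an image.
    import numpy as np

    def project_lidar_to_image(points, T_cam_lidar, K, image_size):
        """points: (N, 3) in the LiDAR frame; returns pixel coords of points in view."""
        homo = np.hstack([points, np.ones((len(points), 1))])
        cam = (T_cam_lidar @ homo.T).T[:, :3]
        in_front = cam[:, 2] > 0.1
        uv = (K @ cam[in_front].T).T
        uv = uv[:, :2] / uv[:, 2:3]
        h, w = image_size
        valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        return uv[valid]

    K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
    T = np.eye(4)                                  # assumed perfect calibration
    pts = np.random.rand(500, 3) * [4.0, 4.0, 10.0] - [2.0, 2.0, 0.0]
    print(project_lidar_to_image(pts, T, K, (480, 640)).shape)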

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Object detection, segmentation and categorization, sensor fusion, deep learning for visual perception
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-344105 (URN), 10.1109/LRA.2023.3313924 (DOI), 001157878900002, 2-s2.0-85171591218 (Scopus ID)
Note

QC 20240301

Available from: 2024-03-01 Created: 2024-03-01 Last updated: 2024-03-01. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-1170-7162
