Publications (9 of 9)
Marta, D., Holk, S., Vasco, M., Lundell, J., Homberger, T., Busch, F. L., . . . Leite, I. (2025). FLoRA: Sample-Efficient Preference-based RL via Low-Rank Style Adaptation of Reward Functions. Paper presented at the IEEE International Conference on Robotics and Automation (ICRA), Atlanta, USA, 19-23 May 2025. Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Preference-based reinforcement learning (PbRL) is a suitable approach for style adaptation of pre-trained robotic behavior: adapting the robot's policy to follow human user preferences while still being able to perform the original task. However, collecting preferences for the adaptation process in robotics is often challenging and time-consuming. In this work, we explore the adaptation of pre-trained robots in the low-preference-data regime. We show that, in this regime, recent adaptation approaches suffer from catastrophic reward forgetting (CRF), where the updated reward model overfits to the new preferences, leaving the agent unable to perform the original task. To mitigate CRF, we propose to enhance the original reward model with a small number of parameters (low-rank matrices) responsible for modeling the preference adaptation. Our evaluation shows that our method can efficiently and effectively adjust robotic behavior to human preferences across simulation benchmark tasks and multiple real-world robotic tasks. We provide videos of our results and source code at https://sites.google.com/view/preflora/.
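
As a rough illustration of the low-rank idea, the PyTorch sketch below augments a frozen reward network with trainable low-rank matrices and fits only those to new preferences via a standard Bradley-Terry preference loss. The module name, dimensions, and initialization are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of low-rank reward-model adaptation (illustrative, not the paper's code).
import torch
import torch.nn as nn

class LowRankAdaptedReward(nn.Module):
    def __init__(self, base_reward: nn.Module, in_dim: int, out_dim: int, rank: int = 4):
        super().__init__()
        self.base = base_reward
        for p in self.base.parameters():      # freeze the pretrained reward model
            p.requires_grad_(False)
        # Low-rank residual W = B @ A, with rank << min(in_dim, out_dim).
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_dim, rank))  # zero init: start exactly at the base reward

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.A.T @ self.B.T      # base reward + low-rank style term

# Fit only A and B with a Bradley-Terry loss on (preferred, non-preferred) segment pairs.
base = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
reward = LowRankAdaptedReward(base, in_dim=8, out_dim=1)
opt = torch.optim.Adam([reward.A, reward.B], lr=1e-3)
seg_pref, seg_rej = torch.randn(32, 8), torch.randn(32, 8)  # toy segment features
loss = -torch.log(torch.sigmoid(reward(seg_pref) - reward(seg_rej))).mean()
loss.backward()
opt.step()
```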

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-360980 (URN)
Conference
IEEE International Conference on Robotics and Automation (ICRA), Atlanta, USA, 19-23 May 2025
Note

QC 20250618

Available from: 2025-03-07 Created: 2025-03-07 Last updated: 2025-06-18. Bibliographically approved.
Perugini, P., Lundell, J., Friedl, K. & Kragic Jensfelt, D. (2025). Pushing Everything Everywhere All at Once: Probabilistic Prehensile Pushing. IEEE Robotics and Automation Letters, 10(5), 4540-4547
2025 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 10, no. 5, p. 4540-4547. Article in journal (Refereed), Published
Abstract [en]

We address prehensile pushing, the problem of manipulating a grasped object by pushing against the environment. Our solution is an efficient nonlinear trajectory optimization relaxed from an exact mixed-integer nonlinear trajectory optimization formulation. The critical insight is recasting the external pushers (environment) as a discrete probability distribution instead of binary variables and minimizing the entropy of the distribution. The probabilistic reformulation allows all pushers to be used simultaneously, but at the optimum, the probability mass concentrates onto a single pusher due to the entropy minimization. We numerically compare our method against a state-of-the-art sampling-based baseline on a prehensile pushing task. The results demonstrate that our method finds trajectories 8 times faster and at a 20 times lower cost than the baseline. Finally, we demonstrate that a simulated and a real Franka Panda robot can successfully manipulate different objects following the trajectories proposed by our method.
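
The entropy relaxation can be illustrated with a toy NumPy/SciPy sketch: the binary "which pusher is active" variables become a softmax distribution, and an entropy term in the objective drives the probability mass toward a single pusher. All quantities below (pusher effects, target, weight) are hypothetical stand-ins for the paper's actual dynamics and costs.

```python
# Toy illustration of the entropy relaxation (assumed quantities, not the paper's model).
import numpy as np
from scipy.optimize import minimize

forces = np.array([[1.0, 0.2], [0.3, 0.9], [0.6, 0.6]])  # hypothetical per-pusher effect
target = np.array([0.9, 0.4])                             # desired net effect on the object
w_entropy = 0.5                                            # entropy weight (tuning assumption)

def objective(theta):
    p = np.exp(theta) / np.exp(theta).sum()               # distribution over pushers
    task_cost = np.sum((p @ forces - target) ** 2)        # how well the blended push tracks the target
    entropy = -np.sum(p * np.log(p + 1e-12))              # H(p): zero when one pusher gets all mass
    return task_cost + w_entropy * entropy

res = minimize(objective, x0=np.zeros(3), method="BFGS")
p_opt = np.exp(res.x) / np.exp(res.x).sum()
print("pusher probabilities:", p_opt.round(3))            # mass tends to concentrate on one pusher
```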

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Dexterous manipulation, manipulation planning, optimization and optimal control
National Category
Robotics and automation; Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-362513 (URN)
10.1109/LRA.2025.3552267 (DOI)
001455440600008 (ISI)
2-s2.0-105001989745 (Scopus ID)
Note

QC 20250428

Available from: 2025-04-16 Created: 2025-04-16 Last updated: 2025-06-12. Bibliographically approved.
Weng, Z., Lu, H., Lundell, J. & Kragic, D. (2024). CAPGrasp: An R3×SO(2)-Equivariant Continuous Approach-Constrained Generative Grasp Sampler. IEEE Robotics and Automation Letters, 9(4), 3641-3647
2024 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no. 4, p. 3641-3647. Article in journal (Refereed), Published
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-363361 (URN)
10.1109/lra.2024.3369444 (DOI)
001180758700020 (ISI)
2-s2.0-85186071186 (Scopus ID)
Note

QC 20250514

Available from: 2025-05-14 Created: 2025-05-14 Last updated: 2025-05-14. Bibliographically approved.
Longhini, A., Büsching, M., Duisterhof, B. P., Lundell, J., Ichnowski, J., Björkman, M. & Kragic, D. (2024). Cloth-Splatting: 3D Cloth State Estimation from RGB Supervision. In: Proceedings of the 8th Conference on Robot Learning, CoRL 2024. Paper presented at the 8th Annual Conference on Robot Learning, November 6-9, 2024, Munich, Germany (pp. 2845-2865). ML Research Press
2024 (English). In: Proceedings of the 8th Conference on Robot Learning, CoRL 2024, ML Research Press, 2024, p. 2845-2865. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce Cloth-Splatting, a method for estimating 3D states of cloth from RGB images through a prediction-update framework. Cloth-Splatting leverages an action-conditioned dynamics model for predicting future states and uses 3D Gaussian Splatting to update the predicted states. Our key insight is that coupling a 3D mesh-based representation with Gaussian Splatting allows us to define a differentiable map between the cloth's state space and the image space. This enables the use of gradient-based optimization techniques to refine inaccurate state estimates using only RGB supervision. Our experiments demonstrate that Cloth-Splatting not only improves state estimation accuracy over current baselines but also reduces convergence time by ∼85%.
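
A minimal sketch of the update step, assuming a differentiable renderer: the predicted cloth state is refined by gradient descent on an RGB photometric loss, which is the differentiable state-to-image map the abstract describes. The `render` stub below is a hypothetical placeholder for Gaussian-Splatting rendering, kept trivial so the sketch runs end to end.

```python
# Sketch of gradient-based state refinement from RGB supervision (renderer is a stand-in).
import torch

def render(vertices: torch.Tensor) -> torch.Tensor:
    """Hypothetical placeholder for Gaussian-Splatting rendering of the cloth mesh."""
    return vertices.mean(dim=0).expand(4, 4, 3)  # toy "image" that stays differentiable

predicted_state = torch.randn(100, 3, requires_grad=True)  # mesh vertices from the dynamics model
observed_image = torch.rand(4, 4, 3)                       # RGB observation

opt = torch.optim.Adam([predicted_state], lr=1e-2)
for _ in range(50):                                        # update step: refine state with RGB loss only
    opt.zero_grad()
    loss = ((render(predicted_state) - observed_image) ** 2).mean()
    loss.backward()
    opt.step()
```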

Place, publisher, year, edition, pages
ML Research Press, 2024
Keywords
3D State Estimation, Gaussian Splatting, Vision-based Tracking, Deformable Objects
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-357192 (URN)
2-s2.0-86000735293 (Scopus ID)
Conference
8th Annual Conference on Robot Learning, November 6-9, 2024, Munich, Germany
Note

QC 20250328

Available from: 2024-12-04 Created: 2024-12-04 Last updated: 2025-03-28. Bibliographically approved.
Weng, Z., Lu, H., Kragic, D. & Lundell, J. (2024). DexDiffuser: Generating Dexterous Grasps With Diffusion Models. IEEE Robotics and Automation Letters, 9(12), 11834-11840
2024 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no. 12, p. 11834-11840. Article in journal (Refereed), Published
Abstract [en]

We introduce DexDiffuser, a novel dexterous grasping method that generates, evaluates, and refines grasps on partial object point clouds. DexDiffuser includes the conditional diffusion-based grasp sampler DexSampler and the dexterous grasp evaluator DexEvaluator. DexSampler generates high-quality grasps conditioned on object point clouds by iteratively denoising randomly sampled grasps. We also introduce two grasp refinement strategies: Evaluator-Guided Diffusion and Evaluator-based Sampling Refinement. The experimental results demonstrate that DexDiffuser consistently outperforms the state-of-the-art multi-finger grasp generation method FFHNet with, on average, a 9.12% and 19.44% higher grasp success rate in simulation and real-robot experiments, respectively.
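
The evaluator-guided refinement can be sketched as follows, with untrained placeholder networks and a deliberately simplified denoising schedule standing in for DexSampler and DexEvaluator: at each reverse-diffusion step, the grasp is nudged along the gradient of the evaluator's predicted success score. The 7-dimensional grasp parameterization is an assumption for illustration.

```python
# Sketch of evaluator-guided diffusion refinement (placeholder nets, simplified schedule).
import torch
import torch.nn as nn

denoiser = nn.Linear(7, 7)                                               # stand-in noise predictor
evaluator = nn.Sequential(nn.Linear(7, 32), nn.ReLU(), nn.Linear(32, 1))  # stand-in grasp evaluator

grasp = torch.randn(1, 7)            # assumed 7-DoF grasp parameterization, sampled as noise
guidance_scale = 0.1
for t in range(10):                  # simplified reverse-diffusion loop
    grasp = grasp - 0.1 * denoiser(grasp)                  # denoising update (noise schedule omitted)
    grasp = grasp.detach().requires_grad_(True)
    score = evaluator(grasp).sum()                          # predicted grasp success
    score.backward()
    grasp = (grasp + guidance_scale * grasp.grad).detach()  # nudge grasp toward a higher evaluator score
```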

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Diffusion models, Grasping, Robots, Point cloud compression, Grippers, Diffusion processes, Shape, Noise reduction, Encoding, Hardware, robot learning
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-360078 (URN)
10.1109/LRA.2024.3498776 (DOI)
001409548200007 (ISI)
2-s2.0-85210159095 (Scopus ID)
Note

QC 20250217

Available from: 2025-02-17 Created: 2025-02-17 Last updated: 2025-02-17. Bibliographically approved.
Lundell, J., Verdoja, F., Le, T. N., Mousavian, A., Fox, D. & Kyrki, V. (2023). Constrained Generative Sampling of 6-DoF Grasps. In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023. Paper presented at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Detroit, United States of America, Oct 1 2023 - Oct 5 2023 (pp. 2940-2946). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 2940-2946. Conference paper, Published paper (Refereed)
Abstract [en]

Most state-of-the-art data-driven grasp sampling methods propose stable and collision-free grasps uniformly on the target object. For bin-picking, executing any of those reachable grasps is sufficient. However, for completing specific tasks, such as squeezing out liquid from a bottle, we want the grasp to be on a specific part of the object's body while avoiding other locations, such as the cap. This work presents a generative grasp sampling network, VCGS, capable of constrained 6-degrees-of-freedom (DoF) grasp sampling. In addition, we curate a new dataset designed to train and evaluate methods for constrained grasping. The new dataset, called CONG, consists of over 14 million training samples of synthetically rendered point clouds and grasps at random target areas on 2889 objects. VCGS is benchmarked against GraspNet, a state-of-the-art unconstrained grasp sampler, in simulation and on a real robot. The results demonstrate that VCGS achieves a 10-15% higher grasp success rate than the baseline while being 2-3 times more sample-efficient. Supplementary material is available on our project website.
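
One plausible way to realize the constraint, sketched below under assumptions about the architecture (not the VCGS code), is to mark the target area as a per-point binary mask on the observed point cloud and condition a generative decoder on it, so that sampled grasps land on the masked region. Names, dimensions, and the 7-DoF grasp output are illustrative.

```python
# Sketch of a mask-conditioned generative grasp decoder (assumed architecture).
import torch
import torch.nn as nn

class ConstrainedGraspDecoder(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        # Per-point input: xyz (3) + "grasp here" mask (1); pooled into a cloud feature.
        self.point_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 128))
        self.head = nn.Linear(128 + latent_dim, 7)   # assumed 7-DoF grasp parameterization

    def forward(self, points, mask, z):
        feats = self.point_net(torch.cat([points, mask], dim=-1)).max(dim=1).values
        return self.head(torch.cat([feats, z], dim=-1))

decoder = ConstrainedGraspDecoder()
points = torch.randn(2, 1024, 3)                # partial point clouds
mask = (torch.rand(2, 1024, 1) > 0.8).float()   # target area (e.g., bottle body, not the cap)
z = torch.randn(2, 16)                          # latent samples -> diverse grasps
grasps = decoder(points, mask, z)               # (2, 7) constrained grasp proposals
```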

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Robotics and automation; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-342644 (URN)
10.1109/IROS55552.2023.10341344 (DOI)
001133658802025 (ISI)
2-s2.0-85182524128 (Scopus ID)
Conference
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Detroit, United States of America, Oct 1 2023 - Oct 5 2023
Note

Part of proceedings ISBN 9781665491907

QC 20240201

Available from: 2024-01-25 Created: 2024-01-25 Last updated: 2025-02-05. Bibliographically approved.
Welle, M. C., Lippi, M., Lu, H., Lundell, J., Gasparri, A. & Kragic, D. (2023). Enabling Robot Manipulation of Soft and Rigid Objects with Vision-based Tactile Sensors. In: 2023 IEEE 19th International Conference on Automation Science and Engineering, CASE 2023. Paper presented at the 19th IEEE International Conference on Automation Science and Engineering, CASE 2023, Auckland, New Zealand, Aug 26 2023 - Aug 30 2023. Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 IEEE 19th International Conference on Automation Science and Engineering, CASE 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Endowing robots with tactile capabilities opens up new possibilities for their interaction with the environment, including the ability to handle fragile and/or soft objects. In this work, we equip the robot gripper with low-cost vision-based tactile sensors and propose a manipulation algorithm that adapts to both rigid and soft objects without requiring any knowledge of their properties. The algorithm relies on a touch and slip detection method, which considers the variation in the tactile images with respect to reference ones. We validate the approach on seven different objects, with different properties in terms of rigidity and fragility, to perform unplugging and lifting tasks. Furthermore, to enhance applicability, we combine the manipulation algorithm with a grasp sampler for the task of finding and picking a grape from a bunch without damaging it.
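
A minimal sketch of the touch-and-slip idea, assuming simple image statistics: contact is declared when the tactile image deviates from a no-contact reference, and slip when the imprint keeps changing between consecutive post-contact frames. The thresholds and the mean-absolute-difference signal are illustrative assumptions, not the paper's detection method.

```python
# Toy touch/slip detection from tactile-image variation (illustrative thresholds).
import numpy as np

def touched(reference: np.ndarray, current: np.ndarray, thresh: float = 0.05) -> bool:
    """Contact: the tactile image deviates from the no-contact reference frame."""
    return np.abs(current - reference).mean() > thresh

def slipping(prev_contact: np.ndarray, current: np.ndarray, thresh: float = 0.02) -> bool:
    """Slip: the imprint keeps changing between consecutive frames after contact."""
    return np.abs(current - prev_contact).mean() > thresh

# Usage on toy frames:
ref = np.zeros((32, 32))
frame1 = ref + 0.1          # gripper makes contact
frame2 = frame1 + 0.05      # imprint keeps shifting -> slip
print(touched(ref, frame1), slipping(frame1, frame2))
```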

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-350241 (URN)
10.1109/CASE56687.2023.10260563 (DOI)
2-s2.0-85174385279 (Scopus ID)
Conference
19th IEEE International Conference on Automation Science and Engineering, CASE 2023, Auckland, New Zealand, Aug 26 2023 - Aug 30 2023
Note

Part of ISBN 9798350320695

QC 20240711

Available from: 2024-07-11 Created: 2024-07-11 Last updated: 2025-02-09. Bibliographically approved.
Weng, Z., Lu, H., Lundell, J. & Kragic, D. (2023). GoNet: An Approach-Constrained Generative Grasp Sampling Network. In: 2023 IEEE-RAS 22nd International Conference on Humanoid Robots. Paper presented at the IEEE-RAS 22nd International Conference on Humanoid Robots (Humanoids), Dec 12-14, 2023, Austin, TX. Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 IEEE-RAS 22nd International Conference on Humanoid Robots, Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

This work addresses the problem of learning approach-constrained data-driven grasp samplers. To this end, we propose GoNet: a generative grasp sampler that can constrain the grasp approach direction to a subset of SO(3). The key insight is to discretize SO(3) into a predefined number of bins and train GoNet to generate grasps whose approach directions are within those bins. At run-time, the bin aligning with the second largest principal component of the observed point cloud is selected. GoNet is benchmarked against GraspNet, a state-of-the-art unconstrained grasp sampler, in an unconfined grasping experiment in simulation and in unconfined and confined grasping experiments in the real world. The results demonstrate that GoNet achieves higher success-over-coverage in simulation and a 12%-18% higher success rate in real-world table-picking and shelf-picking tasks than the baseline.
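
The run-time bin-selection step can be sketched as follows, assuming a uniform azimuthal binning of approach directions (the paper's actual bin layout over SO(3) may differ): compute the principal axes of the observed point cloud and pick the bin best aligned with the second principal component.

```python
# Sketch of bin selection via the point cloud's second principal component (assumed bin layout).
import numpy as np

points = np.random.randn(500, 3) * np.array([3.0, 1.5, 0.5])  # toy observed point cloud
centered = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
second_pc = vt[1]                                              # second principal component

n_bins = 8                                                     # discretization of approach directions
angles = 2 * np.pi * np.arange(n_bins) / n_bins
bin_dirs = np.stack([np.cos(angles), np.sin(angles), np.zeros(n_bins)], axis=1)
best_bin = np.argmax(np.abs(bin_dirs @ second_pc))             # bin most aligned with the 2nd PC
print("selected approach bin:", best_bin)
```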

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE-RAS International Conference on Humanoid Robots, ISSN 2164-0572
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-344667 (URN)
10.1109/HUMANOIDS57100.2023.10375235 (DOI)
001156965200096 (ISI)
2-s2.0-85164161523 (Scopus ID)
Conference
IEEE-RAS 22nd International Conference on Humanoid Robots (Humanoids), Dec 12-14, 2023, Austin, TX
Note

QC 20240326

Part of ISBN 979-8-3503-0327-8

Available from: 2024-03-26 Created: 2024-03-26 Last updated: 2025-05-14. Bibliographically approved.
Le, T. N., Lundell, J., Abu-Dakka, F. J. & Kyrki, V. (2022). A Novel Simulation-Based Quality Metric for Evaluating Grasps on 3D Deformable Objects. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 23-27, 2022, Kyoto, Japan (pp. 3123-3129). Institute of Electrical and Electronics Engineers (IEEE)
2022 (English). In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 3123-3129. Conference paper, Published paper (Refereed)
Abstract [en]

Evaluation of grasps on deformable 3D objects is a little-studied problem, even though the applicability of rigid-object grasp quality measures to deformable objects is an open question. A central issue with most quality measures is their dependence on contact points, which for deformable objects depend on the deformations. This paper proposes a grasp quality measure for deformable objects that uses information about object deformation to calculate grasp quality. Grasps are evaluated by simulating the deformations during grasping and predicting the contacts between the gripper and the grasped object. The contact information is then used as input to a new metric that quantifies grasp quality. The approach is benchmarked against two classical rigid-body quality metrics on over 600 grasps in the Isaac Gym simulator and over 50 real-world grasps. Experimental results show an average improvement of 18% in the grasp success rate for deformable objects compared to the classical rigid-body quality metrics. Furthermore, the proposed approach is approximately fifteen times faster to calculate than the shake task, which, to date, is one of the most reliable approaches to quantify a grasp on a deformable object.
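
The shape of the pipeline, sketched with stubs: simulate the grasp on the deformable object, extract the predicted contacts, and score them with a contact-based metric. Both `simulate_grasp` and the spread-based score below are assumptions standing in for the paper's Isaac Gym simulation and its actual metric.

```python
# Hedged sketch of the simulate -> predict contacts -> score pipeline (stubs throughout).
import numpy as np

def simulate_grasp(object_mesh, gripper_pose):
    """Hypothetical stand-in for the deformation simulation; returns predicted contact points."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(12, 3))                  # toy contacts on the deformed surface

def contact_quality(contacts: np.ndarray) -> float:
    """Toy contact-based score: a wider, more balanced contact spread scores higher."""
    spread = np.linalg.norm(contacts - contacts.mean(axis=0), axis=1)
    return float(spread.mean() / (spread.std() + 1e-9))

contacts = simulate_grasp(object_mesh=None, gripper_pose=np.eye(4))
print("grasp quality:", round(contact_quality(contacts), 3))
```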

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-325035 (URN)
10.1109/IROS47612.2022.9981169 (DOI)
000908368202066 (ISI)
2-s2.0-85146351855 (Scopus ID)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 23-27, 2022, Kyoto, Japan
Note

QC 20230329

Available from: 2023-03-29 Created: 2023-03-29 Last updated: 2025-02-09. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-2296-6685
