Publications (9 of 9)
Marta, D., Holk, S., Vasco, M., Lundell, J., Homberger, T., Busch, F. L., . . . Leite, I. (2025). FLoRA: Sample-Efficient Preference-based RL via Low-Rank Style Adaptation of Reward Functions. Paper presented at IEEE International Conference on Robotics and Automation (ICRA), Atlanta, USA, 19-23 May 2025. Institute of Electrical and Electronics Engineers (IEEE)
FLoRA: Sample-Efficient Preference-based RL via Low-Rank Style Adaptation of Reward Functions
2025 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Preference-based reinforcement learning (PbRL) is a suitable approach for style adaptation of pre-trained robotic behavior: adapting the robot's policy to follow human user preferences while still being able to perform the original task. However, collecting preferences for the adaptation process in robotics is often challenging and time-consuming. In this work we explore the adaptation of pre-trained robots in the low-preference-data regime. We show that, in this regime, recent adaptation approaches suffer from catastrophic reward forgetting (CRF), where the updated reward model overfits to the new preferences, leading the agent to become unable to perform the original task. To mitigate CRF, we propose to enhance the original reward model with a small number of parameters (low-rank matrices) responsible for modeling the preference adaptation. Our evaluation shows that our method can efficiently and effectively adjust robotic behavior to human preferences across simulation benchmark tasks and multiple real-world robotic tasks. We provide videos of our results and source code at https://sites.google.com/view/preflora/
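The adaptation mechanism mirrors LoRA as used for language models, transplanted to reward functions. As a rough sketch of the idea (toy network sizes and a standard Bradley-Terry preference loss; not the authors' released code), the pretrained reward model is frozen and only small low-rank matrices are trained on the new preferences:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():           # freeze the original reward weights
            p.requires_grad = False
        self.A = nn.Parameter(0.01 * torch.randn(rank, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T

# Hypothetical pretrained reward model over (state, action) features.
reward = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
for i, m in enumerate(reward):                     # wrap every linear layer
    if isinstance(m, nn.Linear):
        reward[i] = LoRALinear(m)

# One Bradley-Terry preference step on a (preferred, rejected) pair of toy segments.
opt = torch.optim.Adam([p for p in reward.parameters() if p.requires_grad], lr=1e-3)
preferred, rejected = torch.randn(8, 32), torch.randn(8, 32)
loss = -torch.nn.functional.logsigmoid(reward(preferred).sum() - reward(rejected).sum())
loss.backward()
opt.step()
```

Because the base weights never change, removing the adapter restores the original reward exactly, which is the property that guards against catastrophic reward forgetting.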

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-360980 (URN)
Conference
IEEE International Conference on Robotics and Automation (ICRA), Atlanta, USA, 19-23 May 2025
Note

QC 20250618

Available from: 2025-03-07 Created: 2025-03-07 Last updated: 2025-06-18 Bibliographically approved
Perugini, P., Lundell, J., Friedl, K. & Kragic Jensfelt, D. (2025). Pushing Everything Everywhere All at Once: Probabilistic Prehensile Pushing. IEEE Robotics and Automation Letters, 10(5), 4540-4547
Pushing Everything Everywhere All at Once: Probabilistic Prehensile Pushing
2025 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 10, no. 5, p. 4540-4547. Article in journal (Refereed) Published
Abstract [en]

We address prehensile pushing, the problem of manipulating a grasped object by pushing against the environment. Our solution casts this as an efficient nonlinear trajectory optimization problem, relaxed from an exact mixed-integer nonlinear formulation. The critical insight is recasting the external pushers (environment) as a discrete probability distribution instead of binary variables and minimizing the entropy of that distribution. The probabilistic reformulation allows all pushers to be used simultaneously, but at the optimum the probability mass concentrates on a single pusher due to the entropy minimization. We numerically compare our method against a state-of-the-art sampling-based baseline on a prehensile pushing task. The results demonstrate that our method finds trajectories 8 times faster and at a 20 times lower cost than the baseline. Finally, we demonstrate that a simulated and a real Franka Panda robot can successfully manipulate different objects following the trajectories proposed by our method.
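The key relaxation can be isolated in a few lines. In this toy torch sketch (invented costs and weights, with a fixed per-pusher cost standing in for the full trajectory optimization), binary pusher selection becomes a softmax distribution whose entropy is penalized, so the optimum concentrates on a single pusher:

```python
import torch

n_pushers = 4
logits = torch.zeros(n_pushers, requires_grad=True)       # relaxed selection variables
pusher_cost = torch.tensor([1.3, 0.7, 0.9, 2.0])          # stand-in per-pusher trajectory costs
entropy_weight = 0.5
opt = torch.optim.Adam([logits], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    p = torch.softmax(logits, dim=0)                      # discrete distribution over pushers
    expected_cost = (p * pusher_cost).sum()               # every pusher contributes
    entropy = -(p * torch.log(p + 1e-9)).sum()
    (expected_cost + entropy_weight * entropy).backward() # entropy term drives concentration
    opt.step()

print(torch.softmax(logits, dim=0))   # ~one-hot on the cheapest pusher (index 1)
```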

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Dexterous manipulation, manipulation planning, optimization and optimal control
National Category
Robotics and automation; Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-362513 (URN), 10.1109/LRA.2025.3552267 (DOI), 001455440600008 (), 2-s2.0-105001989745 (Scopus ID)
Note

QC 20250428

Available from: 2025-04-16 Created: 2025-04-16 Last updated: 2025-06-12 Bibliographically approved
Weng, Z., Lu, H., Lundell, J. & Kragic, D. (2024). CAPGrasp: An R3×SO(2)-Equivariant Continuous Approach-Constrained Generative Grasp Sampler. IEEE Robotics and Automation Letters, 9(4), 3641-3647
CAPGrasp: An R3×SO(2)-Equivariant Continuous Approach-Constrained Generative Grasp Sampler
2024 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no. 4, p. 3641-3647. Article in journal (Refereed) Published
Abstract [en]

We propose CAPGrasp, an R3×SO(2)-equivariant 6-Degrees of Freedom (DoF) continuous approach-constrained generative grasp sampler. It includes a novel learning strategy for training CAPGrasp that eliminates the need to curate massive conditionally labeled datasets and a constrained grasp refinement technique that improves grasp poses while respecting the grasp approach directional constraints. The experimental results demonstrate that CAPGrasp is more than three times as sample efficient as unconstrained grasp samplers while achieving up to 38% grasp success rate improvement. CAPGrasp also achieves 4–10% higher grasp success rates than constrained but noncontinuous grasp samplers. Overall, CAPGrasp is a sample-efficient solution when grasps must originate from specific directions, such as grasping in confined spaces.
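Equivariance to R3×SO(2) (translations plus rotations about the vertical axis) is commonly obtained by canonicalization. The numpy sketch below illustrates that general recipe with a stand-in sampler, not CAPGrasp itself: express the point cloud in a canonical frame, sample there, and map the grasp pose back:

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def canonical_frame(points):
    """R3 x SO(2) canonicalization: translate to the centroid, then rotate about
    the vertical axis so the leading principal direction, projected to the
    xy-plane, points along +x."""
    t = points.mean(axis=0)
    centered = points - t
    axis = np.linalg.eigh(centered.T @ centered)[1][:, -1]   # leading principal direction
    R = rot_z(-np.arctan2(axis[1], axis[0]))
    return R, t

def equivariant_sample(points, sampler):
    """Run any grasp sampler in the canonical frame, then map the pose back."""
    R, t = canonical_frame(points)
    grasp_R, grasp_t = sampler((points - t) @ R.T)           # canonical-frame grasp
    return R.T @ grasp_R, R.T @ grasp_t + t                  # original-frame grasp

# Stand-in sampler: "grasp" the canonical origin from above.
top_down = lambda pts: (np.eye(3), np.zeros(3))
print(equivariant_sample(np.random.rand(500, 3), top_down))
```

Translating or rotating the input cloud about the vertical axis rotates the canonical frame by the same amount, so the returned grasp moves with the object.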

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Deep learning in grasping and manipulation, grasping
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-363361 (URN), 10.1109/lra.2024.3369444 (DOI), 001180758700020 (), 2-s2.0-85186071186 (Scopus ID)
Note

QC 20250714

Available from: 2025-05-14 Created: 2025-05-14 Last updated: 2025-07-14 Bibliographically approved
Longhini, A., Büsching, M., Duisterhof, B. P., Lundell, J., Ichnowski, J., Björkman, M. & Kragic, D. (2024). Cloth-Splatting: 3D Cloth State Estimation from RGB Supervision. In: Proceedings of the 8th Conference on Robot Learning, CoRL 2024. Paper presented at 8th Annual Conference on Robot Learning, November 6-9, 2024, Munich, Germany (pp. 2845-2865). ML Research Press
Cloth-Splatting: 3D Cloth State Estimation from RGB Supervision
2024 (English) In: Proceedings of the 8th Conference on Robot Learning, CoRL 2024, ML Research Press, 2024, p. 2845-2865. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce Cloth-Splatting, a method for estimating 3D states of cloth from RGB images through a prediction-update framework. Cloth-Splatting leverages an action-conditioned dynamics model for predicting future states and uses 3D Gaussian Splatting to update the predicted states. Our key insight is that coupling a 3D mesh-based representation with Gaussian Splatting allows us to define a differentiable map between the cloth's state space and the image space. This enables the use of gradient-based optimization techniques to refine inaccurate state estimates using only RGB supervision. Our experiments demonstrate that Cloth-Splatting not only improves state estimation accuracy over current baselines but also reduces convergence time by ~85%.
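The update step is ordinary gradient descent through a differentiable renderer. In the deliberately simplified torch sketch below, an orthographic projection stands in for the Gaussian Splatting renderer and random vertices stand in for the cloth mesh; only the structure of the prediction-update loop is the point:

```python
import torch

def render(vertices):
    """Stand-in differentiable 'renderer': orthographic projection of the mesh
    vertices to the image plane. Cloth-Splatting instead renders 3D Gaussians
    attached to the mesh, but any differentiable state -> image map fits here."""
    return vertices[:, :2]

def update(prediction, observation, steps=200, lr=1e-2):
    """Update step: refine the dynamics model's predicted state against RGB evidence."""
    state = prediction.clone().requires_grad_(True)
    opt = torch.optim.Adam([state], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        torch.nn.functional.mse_loss(render(state), observation).backward()
        opt.step()
    return state.detach()

# Toy cycle: a noisy prediction of the true cloth vertices gets pulled back
# toward the observation; depth stays uncorrected in this toy projection,
# which is exactly why the method couples the renderer with a mesh-based prior.
true_state = torch.rand(50, 3)
prediction = true_state + 0.05 * torch.randn(50, 3)
refined = update(prediction, render(true_state))
print((refined - true_state)[:, :2].abs().mean())   # observable error shrinks
```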

Place, publisher, year, edition, pages
ML Research Press, 2024
Keywords
3D State Estimation, Gaussian Splatting, Vision-based Tracking, Deformable Objects
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-357192 (URN), 2-s2.0-86000735293 (Scopus ID)
Conference
8th Annual Conference on Robot Learning, November 6-9, 2024, Munich, Germany
Note

QC 20250328

Available from: 2024-12-04 Created: 2024-12-04 Last updated: 2025-03-28 Bibliographically approved
Weng, Z., Lu, H., Kragic, D. & Lundell, J. (2024). DexDiffuser: Generating Dexterous Grasps With Diffusion Models. IEEE Robotics and Automation Letters, 9(12), 11834-11840
DexDiffuser: Generating Dexterous Grasps With Diffusion Models
2024 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no. 12, p. 11834-11840. Article in journal (Refereed) Published
Abstract [en]

We introduce DexDiffuser, a novel dexterous grasping method that generates, evaluates, and refines grasps on partial object point clouds. DexDiffuser includes the conditional diffusion-based grasp sampler DexSampler and the dexterous grasp evaluator DexEvaluator. DexSampler generates high-quality grasps conditioned on object point clouds by iteratively denoising randomly sampled grasps. We also introduce two grasp refinement strategies: Evaluator-Guided Diffusion and Evaluator-based Sampling Refinement. The experimental results demonstrate that DexDiffuser consistently outperforms the state-of-the-art multi-finger grasp generation method FFHNet, achieving, on average, a 9.12% higher grasp success rate in simulation and a 19.44% higher rate in real-robot experiments.
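Evaluator-guided refinement follows the same pattern as classifier guidance in image diffusion: each denoising step is nudged along the gradient of the evaluator's success score. A schematic torch sketch with toy, untrained networks and an invented noise schedule (not the paper's exact architecture or procedure):

```python
import torch
import torch.nn as nn

grasp_dim, cond_dim, T = 7, 16, 50    # toy grasp-pose size, object embedding size, steps
denoiser = nn.Sequential(nn.Linear(grasp_dim + cond_dim + 1, 64), nn.ReLU(),
                         nn.Linear(64, grasp_dim))   # stand-in for DexSampler's noise net
evaluator = nn.Sequential(nn.Linear(grasp_dim + cond_dim, 64), nn.ReLU(),
                          nn.Linear(64, 1))          # stand-in for DexEvaluator

def guided_sample(cond, guidance=0.5):
    """Denoise a random grasp, nudging each step along the evaluator's gradient."""
    betas = torch.linspace(1e-4, 0.02, T)
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    g = torch.randn(1, grasp_dim)
    for t in reversed(range(T)):
        with torch.no_grad():                        # ordinary DDPM mean step
            eps = denoiser(torch.cat([g, cond, torch.full((1, 1), t / T)], dim=1))
            g = (g - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) \
                / torch.sqrt(1.0 - betas[t])
        g_ = g.requires_grad_(True)                  # evaluator-guided nudge
        score = evaluator(torch.cat([g_, cond], dim=1)).sum()
        g = (g_ + guidance * torch.autograd.grad(score, g_)[0]).detach()
        if t > 0:
            g = g + torch.sqrt(betas[t]) * torch.randn_like(g)
    return g

print(guided_sample(torch.randn(1, cond_dim)))
```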

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Diffusion models, Grasping, Robots, Point cloud compression, Grippers, Diffusion processes, Shape, Noise reduction, Encoding, Hardware, robot learning
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-360078 (URN), 10.1109/LRA.2024.3498776 (DOI), 001409548200007 (), 2-s2.0-85210159095 (Scopus ID)
Note

QC 20250217

Available from: 2025-02-17 Created: 2025-02-17 Last updated: 2025-02-17 Bibliographically approved
Lundell, J., Verdoja, F., Le, T. N., Mousavian, A., Fox, D. & Kyrki, V. (2023). Constrained Generative Sampling of 6-DoF Grasps. In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023. Paper presented at 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Detroit, United States of America, Oct 1 2023 - Oct 5 2023 (pp. 2940-2946). Institute of Electrical and Electronics Engineers (IEEE)
Constrained Generative Sampling of 6-DoF Grasps
2023 (English) In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 2940-2946. Conference paper, Published paper (Refereed)
Abstract [en]

Most state-of-the-art data-driven grasp sampling methods propose stable and collision-free grasps uniformly on the target object. For bin-picking, executing any of those reachable grasps is sufficient. However, for completing specific tasks, such as squeezing out liquid from a bottle, we want the grasp to be on a specific part of the object's body while avoiding other locations, such as the cap. This work presents a generative grasp sampling network, VCGS, capable of constrained 6-Degrees of Freedom (DoF) grasp sampling. In addition, we curate a new dataset designed to train and evaluate methods for constrained grasping. The new dataset, called CONG, consists of over 14 million training samples of synthetically rendered point clouds and grasps at random target areas on 2889 objects. VCGS is benchmarked against GraspNet, a state-of-the-art unconstrained grasp sampler, in simulation and on a real robot. The results demonstrate that VCGS achieves a 10-15% higher grasp success rate than the baseline while being 2-3 times as sample efficient. Supplementary material is available on our project website.
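The constrained-grasping labels in a CONG-style dataset can be pictured in a few lines of numpy (a toy illustration, not the released dataset tooling): sample a random target area on the object and keep, for that area, only the grasps whose contact points fall inside it:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((2000, 3))                 # rendered object point cloud (toy)
grasp_contacts = rng.random((500, 2, 3))       # two contact points per candidate grasp

def sample_constrained_pairs(points, grasp_contacts, radius=0.15):
    """Pick a random target area and label which grasps satisfy the constraint."""
    center = points[rng.integers(len(points))]                 # random area on the surface
    area_mask = np.linalg.norm(points - center, axis=1) < radius
    inside = np.linalg.norm(grasp_contacts - center, axis=2) < radius
    valid = inside.all(axis=1)                                 # both contacts in the area
    return area_mask, valid

area_mask, valid = sample_constrained_pairs(points, grasp_contacts)
print(f"{area_mask.sum()} points in target area, {valid.sum()} grasps satisfy it")
```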

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Robotics and automation; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-342644 (URN), 10.1109/IROS55552.2023.10341344 (DOI), 001133658802025 (), 2-s2.0-85182524128 (Scopus ID)
Conference
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Detroit, United States of America, Oct 1 2023 - Oct 5 2023
Note

Part of proceedings ISBN 9781665491907

QC 20240201

Available from: 2024-01-25 Created: 2024-01-25 Last updated: 2025-02-05 Bibliographically approved
Welle, M. C., Lippi, M., Lu, H., Lundell, J., Gasparri, A. & Kragic, D. (2023). Enabling Robot Manipulation of Soft and Rigid Objects with Vision-based Tactile Sensors. In: 2023 IEEE 19th International Conference on Automation Science and Engineering, CASE 2023. Paper presented at 19th IEEE International Conference on Automation Science and Engineering, CASE 2023, Auckland, New Zealand, Aug 26 2023 - Aug 30 2023. Institute of Electrical and Electronics Engineers (IEEE)
Enabling Robot Manipulation of Soft and Rigid Objects with Vision-based Tactile Sensors
2023 (English) In: 2023 IEEE 19th International Conference on Automation Science and Engineering, CASE 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Endowing robots with tactile capabilities opens up new possibilities for their interaction with the environment, including the ability to handle fragile and/or soft objects. In this work, we equip the robot gripper with low-cost vision-based tactile sensors and propose a manipulation algorithm that adapts to both rigid and soft objects without requiring any knowledge of their properties. The algorithm relies on a touch and slip detection method, which considers the variation in the tactile images with respect to reference ones. We validate the approach on seven different objects, with different properties in terms of rigidity and fragility, to perform unplugging and lifting tasks. Furthermore, to enhance applicability, we combine the manipulation algorithm with a grasp sampler for the task of finding and picking a grape from a bunch without damaging it.
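The touch and slip detector is, at its core, a change detector on the tactile image stream. A minimal numpy sketch of that idea, with invented thresholds and image sizes (the paper's detector is more elaborate):

```python
import numpy as np

class TactileDetector:
    """Flags touch/slip as deviations of tactile images from reference frames."""
    def __init__(self, reference, touch_thresh=8.0, slip_thresh=4.0):
        self.reference = reference.astype(float)   # image captured before contact
        self.prev = self.reference                 # previous frame, for slip
        self.touch_thresh = touch_thresh
        self.slip_thresh = slip_thresh

    def step(self, frame):
        frame = frame.astype(float)
        touch = np.abs(frame - self.reference).mean() > self.touch_thresh
        slip = np.abs(frame - self.prev).mean() > self.slip_thresh
        self.prev = frame
        return touch, slip

# Toy stream: a flat no-contact reference, then a frame with a pressed-in blob.
ref = np.zeros((64, 64))
detector = TactileDetector(ref)
contact = ref.copy()
contact[20:40, 20:40] = 100.0
print(detector.step(contact))   # (True, True): contact established, image changed
```

In such a scheme, a slip event while touch is active is the cue to tighten the grip or regrasp, with no model of the object's rigidity or fragility required.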

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-350241 (URN), 10.1109/CASE56687.2023.10260563 (DOI), 2-s2.0-85174385279 (Scopus ID)
Conference
19th IEEE International Conference on Automation Science and Engineering, CASE 2023, Auckland, New Zealand, Aug 26 2023 - Aug 30 2023
Note

Part of ISBN 9798350320695

QC 20240711

Available from: 2024-07-11 Created: 2024-07-11 Last updated: 2025-02-09 Bibliographically approved
Weng, Z., Lu, H., Lundell, J. & Kragic, D. (2023). GoNet: An Approach-Constrained Generative Grasp Sampling Network. In: 2023 IEEE-RAS 22nd International Conference on Humanoid Robots. Paper presented at IEEE-RAS 22nd International Conference on Humanoid Robots (Humanoids), DEC 12-14, 2023, Austin, TX. Institute of Electrical and Electronics Engineers (IEEE)
GoNet: An Approach-Constrained Generative Grasp Sampling Network
2023 (English) In: 2023 IEEE-RAS 22nd International Conference on Humanoid Robots, Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

This work addresses the problem of learning approach-constrained data-driven grasp samplers. To this end, we propose GoNet: a generative grasp sampler that can constrain the grasp approach direction to a subset of SO(3). The key insight is to discretize SO(3) into a predefined number of bins and train GoNet to generate grasps whose approach directions are within those bins. At run-time, the bin aligning with the second largest principal component of the observed point cloud is selected. GoNet is benchmarked against GraspNet, a state-of-the-art unconstrained grasp sampler, in an unconfined grasping experiment in simulation and on an unconfined and confined grasping experiment in the real world. The results demonstrate that GoNet achieves higher success-over-coverage in simulation and a 12%-18% higher success rate in real-world table-picking and shelf-picking tasks than the baseline.
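The run-time bin selection reduces to a principal component analysis of the observed cloud. A numpy sketch (bin count invented, and bins collapsed to azimuth angles for brevity; GoNet's bins partition approach directions on SO(3)):

```python
import numpy as np

def select_approach_bin(points, n_bins=8):
    """Pick the discretized approach direction aligned with the second-largest
    principal component of the observed point cloud."""
    centered = points - points.mean(axis=0)
    _, eigvecs = np.linalg.eigh(centered.T @ centered)   # eigenvalues ascending
    second = eigvecs[:, -2]                              # second principal component
    theta = np.arctan2(second[1], second[0]) % (2 * np.pi)
    bin_centers = np.arange(n_bins) * 2 * np.pi / n_bins
    gap = np.abs(bin_centers - theta)
    return int(np.argmin(np.minimum(gap, 2 * np.pi - gap)))  # circular distance

# Elongated toy cloud; the returned index conditions the generative sampler.
cloud = np.random.randn(1000, 3) * np.array([3.0, 1.0, 0.2])
print(select_approach_bin(cloud))
```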

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE-RAS International Conference on Humanoid Robots, ISSN 2164-0572
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-344667 (URN), 10.1109/HUMANOIDS57100.2023.10375235 (DOI), 001156965200096 (), 2-s2.0-85164161523 (Scopus ID)
Conference
IEEE-RAS 22nd International Conference on Humanoid Robots (Humanoids), DEC 12-14, 2023, Austin, TX
Note

QC 20240326

Part of ISBN 979-8-3503-0327-8

Available from: 2024-03-26 Created: 2024-03-26 Last updated: 2025-05-14 Bibliographically approved
Le, T. N., Lundell, J., Abu-Dakka, F. J. & Kyrki, V. (2022). A Novel Simulation-Based Quality Metric for Evaluating Grasps on 3D Deformable Objects. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 23-27, 2022, Kyoto, JAPAN (pp. 3123-3129). Institute of Electrical and Electronics Engineers (IEEE)
A Novel Simulation-Based Quality Metric for Evaluating Grasps on 3D Deformable Objects
2022 (English) In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 3123-3129. Conference paper, Published paper (Refereed)
Abstract [en]

Evaluation of grasps on deformable 3D objects is a little-studied problem, even though the applicability of rigid-object grasp quality measures to deformable objects remains an open question. A central issue with most quality measures is their dependence on contact points, which for deformable objects depend on the deformations. This paper proposes a grasp quality measure for deformable objects that uses information about object deformation to calculate the grasp quality. Grasps are evaluated by simulating the deformations during grasping and predicting the contacts between the gripper and the grasped object. The contact information is then used as input to a new metric that quantifies the grasp quality. The approach is benchmarked against two classical rigid-body quality metrics on over 600 grasps in the Isaac Gym simulator and over 50 real-world grasps. Experimental results show an average improvement of 18% in grasp success rate for deformable objects compared to the classical rigid-body quality metrics. Furthermore, the proposed approach is approximately fifteen times faster to compute than the shake task, which, to date, is one of the most reliable ways to quantify a grasp on a deformable object.
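For context, the classical rigid-body baselines score a grasp from contact points and normals alone, ignoring deformation. A numpy/scipy sketch of a standard epsilon-style (Ferrari-Canny) metric of that family, with invented friction coefficient and cone discretization (torques left unnormalized for brevity):

```python
import numpy as np
from scipy.spatial import ConvexHull

def epsilon_quality(contacts, normals, mu=0.5, n_edges=8):
    """Classical epsilon metric: radius of the largest origin-centered ball inside
    the convex hull of the contact wrenches."""
    wrenches = []
    for c, n in zip(contacts, normals):
        n = n / np.linalg.norm(n)
        t = np.cross(n, [1.0, 0.0, 0.0])
        if np.linalg.norm(t) < 1e-6:               # normal parallel to x: use y instead
            t = np.cross(n, [0.0, 1.0, 0.0])
        t = t / np.linalg.norm(t)
        b = np.cross(n, t)
        for k in range(n_edges):                   # friction cone discretized into edges
            a = 2.0 * np.pi * k / n_edges
            f = n + mu * (np.cos(a) * t + np.sin(a) * b)
            wrenches.append(np.concatenate([f, np.cross(c, f)]))
    hull = ConvexHull(np.array(wrenches), qhull_options="QJ")  # joggle for robustness
    # equations rows are [unit normal | offset], with normal @ x + offset <= 0 inside.
    return max(0.0, -hull.equations[:, -1].max())  # 0 if the origin lies outside

# Three-finger toy grasp around the origin, fingers 120 degrees apart.
contacts = np.array([[0.03, 0.0, 0.0], [-0.015, 0.026, 0.0], [-0.015, -0.026, 0.0]])
normals = -contacts / np.linalg.norm(contacts, axis=1, keepdims=True)
print(epsilon_quality(contacts, normals))
```

The paper's point is precisely that such contact-only scores break down when grasping deforms the object and moves the contacts.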

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-325035 (URN), 10.1109/IROS47612.2022.9981169 (DOI), 000908368202066 (), 2-s2.0-85146351855 (Scopus ID)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 23-27, 2022, Kyoto, JAPAN
Note

QC 20230329

Available from: 2023-03-29 Created: 2023-03-29 Last updated: 2025-02-09 Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-2296-6685
