Diffusion Trajectory-Guided Policy for Long-Horizon Robot Manipulation
2025 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 10, no. 12, p. 12788-12795. Article in journal (Refereed). Published.
Abstract [en]
Recently, Vision-Language-Action (VLA) models have advanced robot imitation learning, but high data-collection costs and limited demonstrations hinder generalization, and current imitation-learning methods struggle in out-of-distribution scenarios, especially for long-horizon tasks. A key challenge is mitigating compounding errors in imitation learning, which lead to cascading failures over extended trajectories. To address these challenges, we propose the Diffusion Trajectory-guided Policy (DTP) framework, which generates 2D trajectories with a diffusion model to guide policy learning for long-horizon tasks. By leveraging task-relevant trajectories, DTP provides trajectory-level guidance that reduces error accumulation. Our two-stage approach first trains a generative vision-language model to produce diffusion-based trajectories, then uses them to refine the imitation policy. Experiments on the CALVIN benchmark show that DTP outperforms state-of-the-art baselines by 25% in success rate, starting from scratch without external pretraining. Moreover, DTP significantly improves real-world robot performance.
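The core idea of trajectory-level guidance described in the abstract can be illustrated with a minimal toy sketch. Everything below is an illustrative assumption, not the authors' implementation: `generate_trajectory` stands in for the stage-1 diffusion trajectory generator (here just a straight-line 2D waypoint sequence), and `waypoint_policy` stands in for the stage-2 guided policy, which re-targets the nearest upcoming waypoint at every step so per-step noise does not compound over the rollout.

```python
import numpy as np

def generate_trajectory(start, goal, n_points=20):
    """Stand-in for the stage-1 diffusion trajectory generator:
    a straight-line sequence of 2D waypoints from start to goal."""
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    return (1.0 - t) * np.asarray(start, float) + t * np.asarray(goal, float)

def waypoint_policy(state, trajectory, gain=0.5):
    """Stage-2 policy sketch: steer toward the next waypoint ahead of the
    current state, so each action is corrected against the guiding
    trajectory rather than against the previous (possibly drifted) action."""
    dists = np.linalg.norm(trajectory - state, axis=1)
    idx = min(int(np.argmin(dists)) + 1, len(trajectory) - 1)
    return gain * (trajectory[idx] - state)

# Roll out with injected per-step noise: re-targeting waypoints bounds drift.
traj = generate_trajectory([0.0, 0.0], [1.0, 1.0])
state = np.array([0.0, 0.0])
rng = np.random.default_rng(0)
for _ in range(60):
    state = state + waypoint_policy(state, traj) + rng.normal(0.0, 0.01, 2)
final_error = np.linalg.norm(state - traj[-1])
print(final_error)
```

Despite the additive noise, the final distance to the goal waypoint stays on the order of the noise scale, because each action is referenced to the fixed trajectory, mirroring how trajectory guidance is meant to curb cascading failures in long-horizon rollouts.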
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025. Vol. 10, no. 12, p. 12788-12795
Keywords [en]
Trajectory, Robots, Diffusion models, Training, Imitation learning, Visualization, Videos, Robot kinematics, Predictive models, Cameras, learning from demonstration, deep learning in grasping and manipulation
National Category
Mathematical Analysis
Identifiers
URN: urn:nbn:se:kth:diva-375542
DOI: 10.1109/LRA.2025.3619794
ISI: 001608977900030
Scopus ID: 2-s2.0-105018845537
OAI: oai:DiVA.org:kth-375542
DiVA, id: diva2:2031284
Note
QC 20260122
2026-01-22 Bibliographically approved