KTH Publications (kth.se)
Diffusion Trajectory-Guided Policy for Long-Horizon Robot Manipulation
Beihang Univ, Sch Mech Engn & Automat, Beijing 100191, Peoples R China.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-5655-0990
Beihang Univ, Sch Comp Sci & Engn, Beijing 100191, Peoples R China; Zhejiang Ind Big Data & Robot Intelligent Syst Key, Hangzhou 310027, Peoples R China.
Beijing Innovat Ctr Humanoid Robot, Beijing 101111, Peoples R China.
2025 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 10, no. 12, p. 12788-12795. Article in journal (Refereed). Published.
Abstract [en]

Recently, Vision-Language-Action (VLA) models have advanced robot imitation learning, but high data-collection costs and limited demonstrations hinder generalization, and current imitation-learning methods struggle in out-of-distribution scenarios, especially for long-horizon tasks. A key challenge is mitigating compounding errors in imitation learning, which lead to cascading failures over extended trajectories. To address these challenges, we propose the Diffusion Trajectory-guided Policy (DTP) framework, which generates 2D trajectories through a diffusion model to guide policy learning for long-horizon tasks. By leveraging task-relevant trajectories, DTP provides trajectory-level guidance that reduces error accumulation. Our two-stage approach first trains a generative vision-language model to create diffusion-based trajectories, then uses them to refine the imitation policy. Experiments on the CALVIN benchmark show that DTP outperforms state-of-the-art baselines by 25% in success rate, starting from scratch without external pretraining. Moreover, DTP significantly improves real-world robot performance.
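The two-stage pipeline described in the abstract (train a trajectory generator, then refine a policy guided by its output) can be sketched as below. This is a toy illustration only: the class names, the crude denoising loop, and the proportional tracking controller are all assumptions for exposition, not the paper's actual models or API.

```python
import numpy as np

class TrajectoryDiffusion:
    """Stage 1 (toy stand-in): generate 2D guiding trajectories.
    The paper trains a generative vision-language diffusion model; here we
    merely denoise Gaussian noise toward the mean of the demonstrations."""

    def __init__(self, horizon=8, steps=10, seed=0):
        self.horizon = horizon
        self.steps = steps
        self.rng = np.random.default_rng(seed)
        self.mean_traj = None

    def fit(self, demos):
        # demos: list of (horizon, 2) arrays of 2D waypoints
        self.mean_traj = np.mean(np.stack(demos), axis=0)

    def sample(self):
        # Start from pure noise and iteratively pull it toward the
        # learned mean trajectory (a crude stand-in for reverse diffusion).
        x = self.rng.normal(size=(self.horizon, 2))
        for _ in range(self.steps):
            x = x + 0.5 * (self.mean_traj - x)
        return x

class GuidedPolicy:
    """Stage 2 (toy stand-in): an imitation policy that receives
    trajectory-level guidance -- here it simply tracks the waypoints."""

    def __init__(self, gain=1.0):
        self.gain = gain

    def act(self, state, waypoint):
        # Proportional step toward the next guiding waypoint.
        return self.gain * (waypoint - state)

# Usage: fit the generator on demos, sample a guiding trajectory,
# then roll the policy out along it.
demos = [np.linspace([0.0, 0.0], [1.0, 1.0], 8) + 0.01 * i for i in range(3)]
gen = TrajectoryDiffusion()
gen.fit(demos)
traj = gen.sample()

policy = GuidedPolicy()
state = np.zeros(2)
for wp in traj:
    state = state + policy.act(state, wp)
```

The point of the sketch is the division of labor: the generator provides a task-relevant 2D trajectory, and the policy only has to stay close to it, which is how trajectory-level guidance limits compounding error over a long horizon.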

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025. Vol. 10, no. 12, p. 12788-12795
Keywords [en]
Trajectory, Robots, Diffusion models, Training, Imitation learning, Visualization, Videos, Robot kinematics, Predictive models, Cameras, learning from demonstration, deep learning in grasping and manipulation
National Category
Mathematical Analysis
Identifiers
URN: urn:nbn:se:kth:diva-375542
DOI: 10.1109/LRA.2025.3619794
ISI: 001608977900030
Scopus ID: 2-s2.0-105018845537
OAI: oai:DiVA.org:kth-375542
DiVA, id: diva2:2031284
Note

QC 20260122

Available from: 2026-01-22. Created: 2026-01-22. Last updated: 2026-01-22. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Yang, Quantao
