kth.se Publications
Dexterous robotic manipulation using deep reinforcement learning and knowledge transfer for complex sparse reward-based tasks
University College Dublin, Dublin, Ireland.
Dublin City University, Dublin, Ireland; Insight SFI Research Centre for Data Analytics, Dublin, Ireland.
University College Dublin, Dublin, Ireland.
University College Dublin, Dublin, Ireland.
2023 (English). In: Expert systems (Print), ISSN 0266-4720, E-ISSN 1468-0394, Vol. 40, no. 6, article id e13205. Article in journal (Refereed). Published.
Abstract [en]

This paper describes a deep reinforcement learning (DRL) approach that won Phase 1 of the Real Robot Challenge (RRC) 2021, and then extends this method to a more difficult manipulation task. The RRC consisted of using a TriFinger robot to manipulate a cube along a specified positional trajectory, with no requirement for the cube to have any specific orientation. We used a relatively simple reward function, a combination of a goal-based sparse reward and a distance reward, in conjunction with Hindsight Experience Replay (HER) to guide the learning of the DRL agent (Deep Deterministic Policy Gradient [DDPG]). Our approach allowed our agents to acquire dexterous robotic manipulation strategies in simulation. These strategies were then deployed on the real robot and outperformed all other competition submissions, including those using more traditional robotic control techniques, in the final evaluation stage of the RRC. Here we extend this method by modifying the task of Phase 1 of the RRC to require the robot to maintain the cube in a particular orientation while the cube is moved along the required positional trajectory. The requirement to also orient the cube makes the agent less able to learn the task through blind exploration, due to the increased problem complexity. To circumvent this issue, we make novel use of a Knowledge Transfer (KT) technique that allows the strategies learned by the agent in the original task (which was agnostic to cube orientation) to be transferred to this task (where orientation matters). KT allowed the agent to learn and perform the extended task in the simulator, reducing the average positional deviation from 0.134 m to 0.02 m and the average orientation deviation from 142° to 76° during evaluation. This KT concept shows good generalization properties and could be applied to any actor-critic learning algorithm.
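The reward described in the abstract combines a goal-based sparse term with a dense distance term. A minimal sketch of such a reward is shown below; the tolerance and weighting values are illustrative assumptions, not values given by the paper:

```python
import numpy as np

def combined_reward(achieved_pos, goal_pos, tolerance=0.02, distance_weight=1.0):
    """Goal-based sparse reward plus a shaped distance term.

    `tolerance` and `distance_weight` are hypothetical values chosen
    for illustration; the abstract does not specify them.
    """
    dist = float(np.linalg.norm(np.asarray(achieved_pos) - np.asarray(goal_pos)))
    sparse = 0.0 if dist < tolerance else -1.0  # success/failure signal at the goal
    dense = -distance_weight * dist             # dense guidance toward the goal
    return sparse + dense
```

With HER, stored transitions are relabelled against goals the agent actually achieved, so the same reward function is simply re-evaluated with a substituted goal; this is how the sparse term can still produce success signals during early, mostly unsuccessful exploration.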

Place, publisher, year, edition, pages
Wiley, 2023. Vol. 40, no. 6, article id e13205
Keywords [en]
deep reinforcement learning, Real Robot Challenge, robotic manipulation, sim-to-real transfer, transfer reinforcement learning
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:kth:diva-349562
DOI: 10.1111/exsy.13205
ISI: 000910515800001
Scopus ID: 2-s2.0-85143421268
OAI: oai:DiVA.org:kth-349562
DiVA id: diva2:1880883
Note

QC 20240702

Available from: 2024-07-02. Created: 2024-07-02. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Bauer, Stefan

By organisation
Decision and Control Systems (Automatic Control)