VPE: Variational policy embedding for transfer reinforcement learning
Arnekvist, Isac. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-6824-6443
Kragic, Danica. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-2965-2953
Stork, Johannes A. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL; Center for Applied Autonomous Sensor Systems, Örebro University, Sweden. ORCID iD: 0000-0003-3958-6179
2019 (English). In: 2019 International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2019, p. 36-42. Conference paper, Published paper (Refereed)
Abstract [en]

Reinforcement Learning methods are capable of solving complex problems, but the resulting policies might perform poorly in environments that are even slightly different. In robotics especially, training and deployment conditions often vary and data collection is expensive, making retraining undesirable. Training in simulation allows for feasible training times, but suffers from a reality gap when policies are applied in real-world settings. This raises the need for efficient adaptation of policies acting in new environments. We consider the problem of transferring knowledge within a family of similar Markov decision processes. We assume that Q-functions are generated by some low-dimensional latent variable. Given such a Q-function, we can find a master policy that adapts to different values of this latent variable. Our method learns both the generative mapping and an approximate posterior of the latent variables, enabling identification of policies for new tasks by searching only in the latent space rather than in the space of all policies. The low-dimensional space and the master policy found by our method enable policies to quickly adapt to new environments. We demonstrate the method both on a pendulum swing-up task in simulation and for simulation-to-real transfer on a pushing task.
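The following is a minimal sketch, not the authors' implementation, of the adaptation step the abstract describes: assuming a latent-conditioned master policy has already been trained, a new task is identified by searching only over the low-dimensional latent variable rather than over the space of all policies. The master policy, the toy return function, and the cross-entropy-style search are all hypothetical stand-ins, written here in Python/NumPy.

    # Minimal sketch (not the authors' code) of adaptation by latent search.
    import numpy as np

    rng = np.random.default_rng(0)
    LATENT_DIM = 2                                    # dimensionality of the task latent z

    def master_policy(state, z):
        """Hypothetical master policy: the action depends on both state and latent z."""
        return float(np.tanh(state @ z))

    def episode_return(z, horizon=50):
        """Toy stand-in for rolling out the master policy in a new environment and
        summing rewards; the unknown 'true' task latent is pretended to be [1.0, -1.0]."""
        true_task_latent = np.array([1.0, -1.0])
        state = rng.normal(size=LATENT_DIM)
        total = 0.0
        for _ in range(horizon):
            action = master_policy(state, z)
            total += -np.sum((z - true_task_latent) ** 2) - 0.01 * action ** 2
            state = 0.9 * state + 0.1 * rng.normal(size=LATENT_DIM)
        return total

    # Adaptation: search only in the latent space (simple cross-entropy method over z).
    mean, std = np.zeros(LATENT_DIM), np.ones(LATENT_DIM)
    for _ in range(20):
        candidates = mean + std * rng.normal(size=(64, LATENT_DIM))
        returns = np.array([episode_return(z) for z in candidates])
        elites = candidates[np.argsort(returns)[-8:]]          # keep the best 8 latents
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3

    print("adapted latent:", np.round(mean, 2))   # moves toward the toy task latent

In the paper's setting the search would instead be informed by the learned approximate posterior over latent variables; the only point of the sketch is that adaptation reduces to a search in a low-dimensional space.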

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019. p. 36-42
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729
National Category
Computer graphics and computer vision; Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-258072
DOI: 10.1109/ICRA.2019.8793556
ISI: 000494942300006
Scopus ID: 2-s2.0-85071508761
OAI: oai:DiVA.org:kth-258072
DiVA id: diva2:1349756
Conference
2019 International Conference on Robotics and Automation, ICRA 2019; Palais des Congres de Montreal, Montreal; Canada; 20-24 May 2019
Projects
Factories of the Future (FACT)
Note

Part of proceedings ISBN 9781538660263

QC 20190916

Available from: 2019-09-09. Created: 2019-09-09. Last updated: 2025-02-01. Bibliographically approved.
In thesis
1. Transfer Learning using low-dimensional Representations in Reinforcement Learning
2020 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Behaviors in Reinforcement Learning (RL) are often learned tabula rasa, requiring many observations and interactions with the environment. Doing this outside of a simulator, in the real world, often becomes infeasible due to the large number of interactions needed. This has motivated the use of Transfer Learning for Reinforcement Learning, where learning is accelerated by using experiences from previous learning in related tasks. In this thesis, I explore how we can transfer from a simple single-object pushing policy to a wide array of non-prehensile rearrangement problems. I then explain how we can model task differences using a low-dimensional latent variable representation to make adaptation to novel tasks efficient. Lastly, the dependence on accurate function approximation is sometimes problematic, especially in RL, where statistics of target variables are not known a priori. I present observations, along with explanations, that small target variances together with momentum optimization of ReLU-activated neural network parameters lead to dying ReLUs.
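The dying-ReLU observation in the last sentence can be illustrated with a small, hypothetical toy example (not taken from the thesis): a single ReLU unit is regressed onto targets with very small variance using gradient descent with heavy-ball momentum, and the accumulated velocity carries the parameters into the region where the unit never activates again. The data, unit, and hyperparameters below are illustrative choices only.

    # Toy sketch (hypothetical, not from the thesis) of a ReLU unit dying under
    # momentum optimization with small-variance regression targets.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, size=1000)      # nonnegative inputs, e.g. post-ReLU features
    y = 1e-3 * rng.normal(size=1000)          # targets with tiny variance around zero

    w, b = 1.0, 0.0                           # single ReLU unit: out = max(w*x + b, 0)
    vw = vb = 0.0                             # momentum (velocity) buffers
    lr, momentum = 0.1, 0.9

    for _ in range(200):
        pre = w * x + b
        out = np.maximum(pre, 0.0)
        grad_out = 2.0 * (out - y) / x.size   # gradient of the mean squared error
        grad_pre = grad_out * (pre > 0)       # ReLU passes gradient only where pre > 0
        gw, gb = np.dot(grad_pre, x), np.sum(grad_pre)
        vw = momentum * vw + gw               # velocity keeps pushing w and b down
        vb = momentum * vb + gb               # even after the gradient has vanished
        w -= lr * vw
        b -= lr * vb

    alive = np.mean(w * x + b > 0)
    print(f"active inputs after training: {alive:.0%}")   # 0%: the unit has died

Once every pre-activation is negative the gradient is exactly zero, so only the remaining momentum still moves the parameters, and the unit cannot recover.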

Abstract [sv]

Successful learning of behaviors in Reinforcement Learning (RL) often happens tabula rasa and requires large amounts of observations and interactions. Using RL algorithms outside of simulation, in the real world, is therefore often not practically feasible. This has motivated studies of Transfer Learning for RL, where learning is accelerated by experiences from previous learning of similar tasks. In this licentiate thesis, I explore how we can achieve transfer from a simpler manipulation policy to a larger collection of rearrangement problems. I then describe how we can model how different learning problems differ from each other using a low-dimensional parameterization, and in this way make learning of new problems more efficient. The dependence on good function approximation is sometimes problematic, particularly in RL, where statistics of target variables are not known in advance. I therefore finally present observations, and explanations, that small variances of target variables together with momentum optimization lead to dying ReLUs.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2020. p. 123
Series
TRITA-EECS-AVL ; 2020:39
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-279120 (URN)
978-91-7873-593-8 (ISBN)
Presentation
2020-09-22, 304, Teknikringen 14, Stockholm, 10:00 (English)
Note

QC 20200819

Available from: 2020-08-19. Created: 2020-08-16. Last updated: 2022-06-26. Bibliographically approved.

Open Access in DiVA

fulltext (1920 kB), 318 downloads
File information
File name: FULLTEXT01.pdf. File size: 1920 kB. Checksum: SHA-512
79416b630debf40b08f8117208b5c90aed006c88fbffb583616ecec39e96ab6ebb3dcf79ffd3315be963179b8ea20c8b74f6e88dded4526703fb615ef8d61662
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text | Scopus | arXiv

Authority records

Kragic, Danica; Stork, Johannes A.
