Rearrangement with Nonprehensile Manipulation Using Deep Reinforcement Learning
Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China; HKUST Robot Inst, Hong Kong, Peoples R China; Dept Elect & Comp Engn, Hong Kong, Peoples R China.
KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. ORCID iD: 0000-0003-2965-2953
Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China; HKUST Robot Inst, Hong Kong, Peoples R China; Dept Mech & Aerosp Engn, Hong Kong, Peoples R China.
2018 (English) In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, pp. 270-277. Conference paper, Published paper (Refereed)
Abstract [en]

Rearranging objects on a tabletop surface by means of nonprehensile manipulation is a task which requires skillful interaction with the physical world. Usually, this is achieved by precisely modeling physical properties of the objects, robot, and the environment for explicit planning. In contrast, as explicitly modeling the physical environment is not always feasible and involves various uncertainties, we learn a nonprehensile rearrangement strategy with deep reinforcement learning based only on visual feedback. For this, we model the task with rewards and train a deep Q-network. Our potential field-based heuristic exploration strategy reduces the number of collisions that lead to suboptimal outcomes, and we actively balance the training set to avoid bias towards poor examples. Our training process leads to quicker learning and better performance on the task as compared to uniform exploration and standard experience replay. We demonstrate empirical evidence from simulation that our method leads to a success rate of 85%, show that our system can cope with sudden changes of the environment, and compare our performance with human-level performance.
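The abstract names two training tricks: an exploration heuristic that follows an attractive/repulsive potential field instead of acting uniformly at random, and a replay buffer that is actively balanced so poor examples do not dominate. A rough sketch of both ideas follows; the potential terms, the repulsion weight, and the success/failure pooling scheme are illustrative assumptions, not the paper's actual implementation.

```python
import random
import numpy as np

def potential_field_action(agent_pos, goal_pos, obstacles, actions):
    """Choose the discrete action whose resulting position minimizes a simple
    potential: attraction to the goal plus repulsion from nearby obstacles.
    (Hypothetical stand-in for the paper's heuristic exploration strategy.)"""
    def potential(p):
        attractive = np.linalg.norm(p - goal_pos)
        repulsive = sum(1.0 / max(np.linalg.norm(p - o), 1e-3) for o in obstacles)
        return attractive + 0.5 * repulsive  # 0.5 weight is an assumption
    return min(actions, key=lambda a: potential(agent_pos + np.asarray(a, dtype=float)))

class BalancedReplayBuffer:
    """Keep successful and unsuccessful transitions in separate pools and
    sample them evenly, so training batches are not biased towards failures."""
    def __init__(self, capacity=10_000):
        self.pools = {True: [], False: []}  # one FIFO pool per outcome
        self.capacity = capacity
    def add(self, transition, success):
        pool = self.pools[success]
        pool.append(transition)
        if len(pool) > self.capacity:
            pool.pop(0)  # drop the oldest transition
    def sample(self, batch_size):
        half = batch_size // 2
        batch = []
        for pool in self.pools.values():
            if pool:  # sample with replacement from each non-empty pool
                batch += random.choices(pool, k=half)
        return batch
```

During exploration, an epsilon-greedy policy would call `potential_field_action` instead of drawing a uniform random action, and every transition would be stored in the buffer tagged by whether the episode eventually succeeded.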

Place, publisher, year, edition, pages
IEEE Computer Society, 2018. pp. 270-277
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-237158
ISI: 000446394500028
Scopus ID: 2-s2.0-85063133829
ISBN: 978-1-5386-3081-5 (print)
OAI: oai:DiVA.org:kth-237158
DiVA, id: diva2:1258415
Conference
IEEE International Conference on Robotics and Automation (ICRA), MAY 21-25, 2018, Brisbane, AUSTRALIA
Research funder
Knut and Alice Wallenberg Foundation
Note

QC 20181024

Available from: 2018-10-24 Created: 2018-10-24 Last updated: 2019-08-20 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Scopus

Person records

Stork, Johannes A.; Kragic, Danica
