Dynamic Environments with Deformable Objects
Stanford University, CA, USA.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-3599-440X
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-9486-9238
2021 (English). In: Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021. Neural Information Processing Systems Foundation, 2021. Conference paper, published paper (refereed).
Abstract [en]

We propose a set of environments with dynamic tasks that involve highly deformable, topologically non-trivial objects. These environments facilitate easy experimentation: they offer fast runtime, support large-scale parallel data generation, and are easy to connect to reinforcement learning frameworks through the OpenAI Gym API. We offer several types of benchmark tasks with varying levels of complexity, and provide variants with procedurally generated cloth objects and randomized material textures. Moreover, we allow users to customize the tasks: import custom objects and textures, and adjust the size and material properties of deformable objects. We prioritize the dynamic aspects of the tasks, forgoing 2D tabletop manipulation in favor of 3D tasks in which gravity and inertia play a non-negligible role. Such advanced challenges require insights from multiple fields: machine learning and computer vision to process high-dimensional inputs, methods from computer graphics and topology to inspire structured and interpretable representations, and insights from robotics to learn advanced control policies. We aim to help researchers from these fields contribute their insights and to simplify establishing interdisciplinary collaborations.
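The abstract notes that the environments connect to reinforcement learning frameworks through the OpenAI Gym API, i.e. the standard reset/step loop. As a rough illustration of that interface, here is a self-contained toy sketch; the environment class, its 3-D state, and its reward are hypothetical placeholders invented for this example, not the paper's actual environments or API.

```python
import random


class DeformableTaskEnv:
    """Toy stand-in illustrating the Gym-style reset/step interface
    that such benchmark environments typically expose. The state,
    dynamics, and reward here are hypothetical placeholders."""

    def __init__(self, horizon=50, seed=0):
        self.horizon = horizon
        self.rng = random.Random(seed)
        self.t = 0
        self.state = None

    def reset(self):
        # Return the initial observation (e.g. object keypoints).
        self.t = 0
        self.state = [self.rng.uniform(-1.0, 1.0) for _ in range(3)]
        return list(self.state)

    def step(self, action):
        # Apply a 3-D action and return (obs, reward, done, info),
        # the classic OpenAI Gym step signature.
        self.t += 1
        self.state = [s + 0.1 * a for s, a in zip(self.state, action)]
        reward = -sum(s * s for s in self.state)  # closer to origin is better
        done = self.t >= self.horizon
        return list(self.state), reward, done, {}


# Typical rollout loop, as an RL framework would drive it.
env = DeformableTaskEnv()
obs = env.reset()
done = False
while not done:
    action = [-s for s in obs]  # naive proportional controller
    obs, reward, done, info = env.step(action)
```

The real environments described in the paper additionally supply procedurally generated deformable objects and randomized textures; this sketch only shows the interaction protocol.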

Place, publisher, year, edition, pages
Neural Information Processing Systems Foundation, 2021.
National Category
Computer Sciences; Robotics and automation; Software Engineering; Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-361956
Scopus ID: 2-s2.0-105000335714
OAI: oai:DiVA.org:kth-361956
DiVA, id: diva2:1949629
Conference
35th Conference on Neural Information Processing Systems - Track on Datasets and Benchmarks, NeurIPS Datasets and Benchmarks 2021, Virtual, Online, December 6-14, 2021
Note

QC 20250408

Available from: 2025-04-03. Created: 2025-04-03. Last updated: 2025-04-08. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Scopus fulltext

Authority records

Shi, Peiyang; Yin, Hang; Weng, Zehang; Kragic, Danica
