Benchmarking Bimanual Cloth Manipulation
CSIC-UPC, Institut de Robòtica i Informàtica Industrial, Barcelona 08902, Spain.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning (RPL).
ORCID iD: 0000-0003-3827-3824
ORCID iD: 0000-0002-3599-440X
2020 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 5, no. 2, p. 1111-1118. Article in journal (Refereed). Published.
Abstract [en]

Cloth manipulation is a challenging task that, despite its importance, has received relatively little attention compared to rigid object manipulation. In this letter, we provide three benchmarks for evaluation and comparison of different approaches towards three basic tasks in cloth manipulation: spreading a tablecloth over a table, folding a towel, and dressing. The tasks can be executed on any bimanual robotic platform and the objects involved in the tasks are standardized and easy to acquire. We provide several complexity levels for each task, and describe the quality measures to evaluate task execution. Furthermore, we provide baseline solutions for all the tasks and evaluate them according to the proposed metrics.

Place, publisher, year, edition, pages
IEEE - Institute of Electrical and Electronics Engineers Inc., 2020. Vol. 5, no. 2, p. 1111-1118
Keywords [en]
Cooperating robots, performance evaluation and benchmarking
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:kth:diva-269022
DOI: 10.1109/LRA.2020.2965891
ISI: 000511836600009
Scopus ID: 2-s2.0-85079233626
OAI: oai:DiVA.org:kth-269022
DiVA id: diva2:1414468
Note

QC 20200313

Available from: 2020-03-13. Created: 2020-03-13. Last updated: 2025-02-09. Bibliographically approved.
In thesis
1. Learning Structured Representations for Rigid and Deformable Object Manipulation
2021 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

The performance of learning-based algorithms depends largely on the given representation of the data. Three questions therefore arise: (i) how to obtain useful representations, (ii) how to evaluate representations, and (iii) how to leverage these representations in a real-world robotic setting. In this thesis, we aim to answer all three of these questions in order to learn structured representations for rigid and deformable object manipulation. We first examine how to learn structured representations and show that imposing structure, informed by task priors, on the representation space is beneficial for certain robotic tasks. Furthermore, we discuss and present suitable evaluation practices for structured representations, as well as a benchmark for bimanual cloth manipulation. Finally, we introduce the Latent Space Roadmap (LSR) framework for visual action planning, in which raw observations are mapped into a lower-dimensional latent space. These latent states are connected via the LSR, and visual action plans are generated that can perform a wide range of tasks. The framework is validated on a simulated rigid box-stacking task, a simulated hybrid rope-box manipulation task, and a T-shirt folding task performed on a real robotic system.

Abstract [sv] (translated from Swedish)

The performance of learning-based algorithms depends to a large extent on how the data is represented. For this reason, the following questions arise: (i) how do we obtain useful representations, (ii) how do we evaluate them, and (iii) how can we use them in real robotic scenarios. In this thesis, we attempt to answer these questions in order to find learned, structured representations for the manipulation of rigid and non-rigid objects. First, we address how a structured representation can be learned and show that incorporating structure, through the use of statistical priors, is beneficial in certain robotic tasks. Furthermore, we discuss suitable approaches for evaluating structured representations, and present a standardized benchmark for cloth manipulation with two-armed robots. Finally, we introduce the Latent Space Roadmap (LSR) framework for visual action planning, where raw observations are mapped to a low-dimensional latent space. These points are connected with the help of the LSR, and visual action plans are generated for a simulated box-stacking task, for manipulation of a rope, and for folding T-shirts on a real robotic system.

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2021. p. 44
Series
TRITA-EECS-AVL ; 2021:72
Keywords
Representation learning, Object Manipulation
National Category
Robotics and automation
Research subject
Electrical Engineering
Identifiers
URN: urn:nbn:se:kth:diva-304615
ISBN: 978-91-8040-050-3
Public defence
2021-11-09, https://kth-se.zoom.us/j/66216068903, Ångdomen, Osquars backe 31, Stockholm, 15:00 (English)
Note

QC 20211109

Available from: 2021-11-09. Created: 2021-11-08. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Lippi, Martina; Welle, Michael C.; Yin, Hang; Antonova, Rika; Varava, Anastasiia; Kragic, Danica

Search in DiVA

By author/editor
Lippi, Martina; Welle, Michael C.; Yin, Hang; Antonova, Rika; Varava, Anastasiia; Torras, Carme; Kragic, Danica
By organisation
Robotics, Perception and Learning, RPL
In the same journal
IEEE Robotics and Automation Letters
Robotics and automation
