kth.se Publications
Cloth manipulation based on category classification and landmark detection
KTH, School of Electrical Engineering and Computer Science (EECS).
ETH Zürich (Eidgenössische Technische Hochschule Zürich), Zurich, Switzerland.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning (RPL); KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems (CAS). ORCID iD: 0000-0003-3827-3824
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning (RPL). ORCID iD: 0000-0001-5344-8042
2022 (English). In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 19, no. 4, article id 17298806221110445. Article in journal (Refereed). Published.
Abstract [en]

Cloth manipulation remains a challenging problem for the robotic community. Recently, there has been an increased interest in applying deep learning techniques to problems in the fashion industry. As a result, large annotated data sets for cloth category classification and landmark detection were created. In this work, we leverage these advances in deep learning to perform cloth manipulation. We propose a full cloth manipulation framework that performs category classification and landmark detection based on an image of a garment, followed by a manipulation strategy. The process is performed iteratively to achieve a stretching task, where the goal is to bring a crumpled cloth into a stretched-out position. We extensively evaluate our learning pipeline and present a detailed evaluation of our framework on different types of garments across a total of 140 recorded and available experiments. Finally, we demonstrate the benefits of training a network on augmented fashion data over using a small robotics-specific data set.
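The iterative perceive-then-manipulate loop described in the abstract can be sketched in outline. All function names below are hypothetical stand-ins for illustration only; they are not the authors' implementation, and the stub bodies merely mimic the control flow (classify, detect landmarks, manipulate, repeat until the garment is judged stretched out).

```python
# Hypothetical sketch of the iterative pipeline from the abstract.
# None of these names come from the paper; the stubs only illustrate
# the loop structure, not real perception or control.

def classify_category(image):
    # Stand-in for a deep garment-category classifier
    # (e.g. one trained on large annotated fashion data sets).
    return "t-shirt"

def detect_landmarks(image, category):
    # Stand-in landmark detector: returns (x, y) grasp candidates.
    return [(10, 20), (80, 20)]

def is_stretched(image):
    # Stand-in success check for the stretching task; here the "image"
    # is just a dict counting how many manipulation steps were applied.
    return image["steps"] >= 3

def stretch_garment(image, max_steps=10):
    """Iterate classification + landmark detection until the cloth
    is judged stretched out, mirroring the loop in the abstract."""
    for step in range(max_steps):
        if is_stretched(image):
            return step
        category = classify_category(image)
        landmarks = detect_landmarks(image, category)
        # A real system would execute a grasp-and-pull action on a
        # chosen landmark here; the stub just counts the iteration.
        image["steps"] += 1
    return max_steps

print(stretch_garment({"steps": 0}))  # prints 3 with these stubs
```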

Place, publisher, year, edition, pages
SAGE Publications, 2022. Vol. 19, no. 4, article id 17298806221110445
Keywords [en]
Cloth, garment manipulation, classification, vision for robotics, data augmentation
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:kth:diva-316295
DOI: 10.1177/17298806221110445
ISI: 000834130100001
Scopus ID: 2-s2.0-85134880223
OAI: oai:DiVA.org:kth-316295
DiVA id: diva2:1686939
Note

QC 20220812

Available from: 2022-08-12. Created: 2022-08-12. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Gustavsson, Oscar; Welle, Michael C.; Butepage, Judith; Varava, Anastasiia; Kragic, Danica
