Textile Taxonomy and Classification Using Pulling and Twisting
Longhini, Alberta: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Welle, Michael C.: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-3827-3824
Mitsioni, Ioanna: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-4933-1778
Kragic, Danica: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-2965-2953
2021 (English). In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): Prague/Online, 27.09-01.10.2021, Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 7541-7548. Conference paper, Published paper (Refereed).
Abstract [en]

Identification of textile properties is an important milestone toward advanced robotic manipulation tasks that involve interaction with clothing items, such as assisted dressing, laundry folding, automated sewing, and textile recycling and reuse. Despite the abundance of work considering this class of deformable objects, many open problems remain. These relate to the choice and modelling of the sensory feedback as well as the control and planning of the interaction and manipulation strategies. Most importantly, there is no structured approach for studying and assessing methods that may bridge the gap between the robotics community and the textile production industry. To this end, we outline a textile taxonomy based on fiber types and production methods commonly used in the textile industry. We devise datasets according to the taxonomy and study how robotic actions, such as pulling and twisting of the textile samples, can be used for classification. We also provide important insights from the perspective of visualization and interpretability of the gathered data.
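
The paper's exact pipeline is not given here, but a minimal sketch of the underlying idea, classifying textile samples from force/torque time series recorded while pulling or twisting them, might look as follows. The summary features, data shapes, and the random-forest classifier are illustrative assumptions, not necessarily the authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def summarize_trial(ft_series: np.ndarray) -> np.ndarray:
    """Reduce a (T, 6) force/torque recording from one pulling or
    twisting action to a fixed-length feature vector (per-axis
    mean, std, min, max)."""
    return np.concatenate([
        ft_series.mean(axis=0),
        ft_series.std(axis=0),
        ft_series.min(axis=0),
        ft_series.max(axis=0),
    ])

# Hypothetical dataset: one recording per textile sample, labelled with
# its taxonomy class (e.g. fiber type or production method).
rng = np.random.default_rng(0)
trials = [rng.standard_normal((500, 6)) for _ in range(40)]  # placeholder data
labels = np.tile(np.arange(4), 10)                           # 4 balanced classes

X = np.stack([summarize_trial(t) for t in trials])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

On real recordings, richer features (for instance stiffness estimates from the pulling phase) would presumably replace these generic per-axis statistics.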

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021. p. 7541-7548
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:kth:diva-304613
DOI: 10.1109/IROS51168.2021.9635992
ISI: 000755125506011
Scopus ID: 2-s2.0-85124364312
OAI: oai:DiVA.org:kth-304613
DiVA, id: diva2:1609594
Conference
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague/Online 27.09-01.10.2021
Note

QC 20220324

Part of conference proceedings: ISBN 978-1-6654-1714-3

Available from: 2021-11-08. Created: 2021-11-08. Last updated: 2025-02-09. Bibliographically approved.
In thesis
1. Learning Structured Representations for Rigid and Deformable Object Manipulation
2021 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

The performance of learning-based algorithms largely depends on the given representation of the data. Therefore the questions arise: i) how to obtain useful representations, ii) how to evaluate representations, and iii) how to leverage these representations in a real-world robotic setting. In this thesis, we aim to answer all three of these questions in order to learn structured representations for rigid and deformable object manipulation. We first examine how to learn structured representations and show that imposing structure, informed by task priors, on the representation space is beneficial for certain robotic tasks. Furthermore, we discuss and present suitable evaluation practices for structured representations, as well as a benchmark for bimanual cloth manipulation. Finally, we introduce the Latent Space Roadmap (LSR) framework for visual action planning, where raw observations are mapped into a lower-dimensional latent space. These are connected via the LSR, and visual action plans are generated that can perform a wide range of tasks. The framework is validated on a simulated rigid box-stacking task, a simulated hybrid rope-box manipulation task, and a T-shirt folding task performed on a real robotic system.

Abstract [sv] (English translation)

The performance of learning-based algorithms depends to a large extent on how the data is represented. For this reason, the following questions are posed: (i) how do we obtain usable representations, (ii) how do we evaluate them, and (iii) how can we use them in real robotic scenarios. In this thesis, we attempt to answer these questions in order to find learned, structured representations for the manipulation of rigid and non-rigid objects. First, we address how a structured representation can be learned and show that incorporating structure, through the use of task priors, is beneficial for certain robotic tasks. Furthermore, we discuss suitable approaches for evaluating structured representations and present a benchmark for bimanual cloth manipulation. Finally, we introduce the Latent Space Roadmap (LSR) framework for visual action planning, where raw observations are mapped to a low-dimensional latent space. These points are connected via the LSR, and visual action plans are generated for a simulated box-stacking task, for manipulation of a rope, and for folding T-shirts on a real robotic system.
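
As a rough illustration of the Latent Space Roadmap idea summarized above, the sketch below merges nearby latent codes into graph nodes, connects nodes whose states were linked by an observed action, and plans a shortest path over the result. The stub encoder, the distance-based merging, and all names and thresholds are simplified assumptions, not the thesis implementation; in the actual framework, nodes along a planned path are decoded back into images to form a visual action plan.

```python
import numpy as np
import networkx as nx

def encode(observation: np.ndarray) -> np.ndarray:
    """Stand-in for a learned encoder (e.g. a VAE) mapping a raw
    observation to a low-dimensional latent code."""
    return observation.mean(axis=0)  # placeholder projection

def build_lsr(latent_codes, transitions, merge_dist=0.5):
    """Merge nearby latent codes into roadmap nodes and connect nodes
    whose underlying states were linked by an observed action."""
    graph = nx.Graph()
    nodes = []       # representative latent code per node
    assignment = []  # node index for each latent code
    for z in latent_codes:
        for i, rep in enumerate(nodes):
            if np.linalg.norm(z - rep) < merge_dist:
                assignment.append(i)
                break
        else:
            nodes.append(z)
            assignment.append(len(nodes) - 1)
            graph.add_node(len(nodes) - 1)
    for a, b in transitions:  # index pairs into latent_codes
        if assignment[a] != assignment[b]:
            graph.add_edge(assignment[a], assignment[b])
    return graph, nodes, assignment

# Tiny usage example with placeholder data:
obs = [np.random.rand(4, 3) for _ in range(10)]   # fake raw observations
codes = [encode(o) for o in obs]
transitions = [(i, i + 1) for i in range(9)]      # observed action links
graph, nodes, assignment = build_lsr(codes, transitions)
path = nx.shortest_path(graph, assignment[0], assignment[-1])
```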

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2021. p. 44
Series
TRITA-EECS-AVL ; 2021:72
Keywords
Representation learning, Object Manipulation
National Category
Robotics and automation
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-304615 (URN)
978-91-8040-050-3 (ISBN)
Public defence
2021-11-09, https://kth-se.zoom.us/j/66216068903, Ångdomen, Osquars backe 31, Stockholm, 15:00 (English)
Note

QC 20211109

Available from: 2021-11-09. Created: 2021-11-08. Last updated: 2025-02-09. Bibliographically approved.
2. Adapting to Variations in Textile Properties for Robotic Manipulation
2025 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

In spite of the rapid advancements in AI, tasks like laundry, tidying, and general household assistance remain challenging for robots due to their limited capacity to generalize manipulation skills across different variations of everyday objects. Manipulation of textiles, in particular, poses unique challenges due to their deformable nature and complex dynamics. In this thesis, we aim to enhance the generalization of robotic manipulation skills for textiles by addressing how robots can adapt their strategies based on the physical properties of deformable objects. We begin by identifying key factors of variation in textiles relevant to manipulation, drawing insights from overlooked taxonomies in the textile industry. The core challenge of adaptation is addressed by leveraging the synergies between interactive perception and cloth dynamics models. These are utilized to tackle two fundamental estimation problems: property identification, as these properties define the system's dynamics and how the object responds to external forces, and state estimation, which provides the feedback necessary for closing the action-perception loop. To identify object properties, we investigate how combining exploratory actions, such as pulling and twisting, with sensory feedback can enhance a robot's understanding of textile characteristics. Central to this investigation is the development of an adaptation module designed to encode textile properties from recent observations, enabling data-driven dynamics models to adjust their predictions according to the perceived properties. To address state estimation challenges arising from cloth self-occlusions, we explore semantic descriptors and 3D tracking methods that integrate geometric observations, such as point clouds, with visual cues from RGB data. Finally, we integrate these modeling and perceptual components into a model-based manipulation framework and evaluate the generalization of the proposed method across a diverse set of real-world textiles. The results, demonstrating enhanced generalization, underscore the potential of adapting the manipulation in response to variations in textile properties and highlight the critical role of the action-perception loop in achieving adaptability.

Abstract [sv] (English translation)

Despite the rapid advances in AI, tasks such as laundry, tidying, and general household assistance remain challenging for robots due to their limited ability to generalize manipulation skills across different variations of everyday objects. Manipulation of textiles in particular poses unique challenges due to their deformable nature and complex dynamics. In this thesis, we aim to improve the generalization of robotic manipulation skills for textiles by investigating how robots can adapt their strategies based on the physical properties of deformable objects. We begin by identifying key factors of variation in textiles that are relevant to manipulation, drawing insights from overlooked taxonomies in the textile industry. The central challenge of adaptation is addressed by exploiting the synergies between interactive perception and models of textile dynamics. These are used to solve two fundamental estimation problems in order to achieve adaptation: property identification, since these properties define the system's dynamics and how the object responds to external forces, and state estimation, which provides the feedback required to close the action-perception loop. To identify object properties, we investigate how combining exploratory actions, such as pulling and twisting, with sensory feedback can improve the robot's understanding of textile characteristics. Central to this investigation is the development of an adaptation module designed to encode textile properties from recent observations, enabling data-driven dynamics models to adjust their predictions based on the perceived properties. To handle state estimation challenges arising from cloth self-occlusions, we explore semantic descriptors and 3D tracking methods that integrate geometric observations, such as point clouds, with visual cues from RGB data. Finally, we integrate these modeling and perception components into a model-based manipulation framework and evaluate the generalization of the proposed method on a broad selection of textiles in real-world settings. The results, which demonstrate improved generalization, underscore the potential of adapting manipulation to variations in textile properties and highlight the crucial role of the action-perception loop in achieving adaptability.
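
As a loose sketch of the adaptation module described above, the code below encodes a short window of recent interaction observations into a property embedding and conditions a forward dynamics model on it. The architecture, dimensions, and all names are hypothetical; only the overall pattern (recent observations to property code to conditioned prediction) follows the abstract.

```python
import torch
import torch.nn as nn

class AdaptationModule(nn.Module):
    """Encode a short window of recent observations (e.g. states and
    forces gathered while pulling/twisting) into a property embedding."""
    def __init__(self, obs_dim=12, hidden=64, prop_dim=8):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, prop_dim)

    def forward(self, history):            # history: (B, T, obs_dim)
        _, h = self.rnn(history)
        return self.head(h[-1])            # (B, prop_dim)

class ConditionedDynamics(nn.Module):
    """Predict the next cloth state from the current state, the action,
    and the inferred property embedding."""
    def __init__(self, state_dim=32, action_dim=6, prop_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + prop_dim, 128),
            nn.ReLU(),
            nn.Linear(128, state_dim),
        )

    def forward(self, state, action, prop):
        return self.net(torch.cat([state, action, prop], dim=-1))

# Usage: infer properties from recent interaction, then roll out the
# conditioned model to evaluate candidate manipulation actions.
adapt, dyn = AdaptationModule(), ConditionedDynamics()
history = torch.randn(1, 20, 12)           # placeholder observation window
prop = adapt(history)
next_state = dyn(torch.randn(1, 32), torch.randn(1, 6), prop)
```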

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2025. p. 82
Series
TRITA-EECS-AVL ; 2025:1
Keywords
Textile Variations, Robotic Manipulation, Generalization, Adaptation, Textila Variationer, Robotmanipulation, Generalisering, Anpassning
National Category
Computer graphics and computer vision; Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-357508 (URN)
978-91-8106-125-3 (ISBN)
Public defence
2025-01-14, https://kth-se.zoom.us/j/66979575369, F3 (Flodis), Lindstedtsvägen 26 & 28, KTH Campus, Stockholm, 13:00 (English)
Note

QC 20241213

Available from: 2024-12-13. Created: 2024-12-12. Last updated: 2025-04-01. Bibliographically approved.

Open Access in DiVA

fulltext (20503 kB), 216 downloads
File information
File name: FULLTEXT01.pdf
File size: 20503 kB
Checksum (SHA-512): 2be7723dd571b08aeef4eeb5493f3ddb6f710875a7f55ffb0c14d6a941fea68c247a3e9d35c0ac360a9f3244a09dc8256ad2d04b8bcb1c3cba24c47f5aed6f86
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text; Scopus

Authority records

Longhini, Alberta; Welle, Michael C.; Mitsioni, Ioanna; Kragic, Danica
