Adapting to Variations in Textile Properties for Robotic Manipulation
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-9125-6615
2025 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

In spite of the rapid advancements in AI, tasks like laundry, tidying, and general household assistance remain challenging for robots due to their limited capacity to generalize manipulation skills across different variations of everyday objects. Manipulation of textiles, in particular, poses unique challenges due to their deformable nature and complex dynamics. In this thesis, we aim to enhance the generalization of robotic manipulation skills for textiles by addressing how robots can adapt their strategies based on the physical properties of deformable objects. We begin by identifying key factors of variation in textiles relevant to manipulation, drawing insights from overlooked taxonomies in the textile industry. The core challenge of adaptation is addressed by leveraging the synergies between interactive perception and cloth dynamics models. These are used to tackle two fundamental estimation problems: property identification, since these properties define the system's dynamics and how the object responds to external forces, and state estimation, which provides the feedback necessary for closing the action-perception loop. To identify object properties, we investigate how combining exploratory actions, such as pulling and twisting, with sensory feedback can enhance a robot's understanding of textile characteristics. Central to this investigation is the development of an adaptation module designed to encode textile properties from recent observations, enabling data-driven dynamics models to adjust their predictions according to the perceived properties. To address the state estimation challenges arising from cloth self-occlusions, we explore semantic descriptors and 3D tracking methods that integrate geometric observations, such as point clouds, with visual cues from RGB data. Finally, we integrate these modeling and perceptual components into a model-based manipulation framework and evaluate the generalization of the proposed method across a diverse set of real-world textiles. The results, demonstrating enhanced generalization, underscore the potential of adapting manipulation in response to variations in textile properties and highlight the critical role of the action-perception loop in achieving adaptability.
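The adaptation loop described above can be summarized in pseudocode. The sketch below is illustrative only: all function names (encode_properties, predict_next_state) and the toy placeholder computations are our own assumptions, not the thesis's actual models; it merely shows how a latent property estimate obtained from exploratory interaction conditions a dynamics model inside an action-perception loop.

```python
import numpy as np

def encode_properties(observations):
    # Hypothetical adaptation module: compress recent sensory
    # observations (e.g. force readings from a pull) into a latent
    # vector describing the textile's physical properties.
    return observations.mean(axis=0)

def predict_next_state(state, action, latent):
    # Hypothetical property-conditioned dynamics model: the latent
    # modulates how strongly the cloth state responds to an action.
    return state + (1.0 + latent[0]) * 0.1 * action

def action_perception_loop(state, candidate_actions, exploration_obs, goal, steps=10):
    # Estimate properties once from exploratory data, then greedily
    # pick, at each step, the action whose predicted next state is
    # closest to the goal. A real system would execute the chosen
    # action and re-observe the cloth, closing the loop with fresh
    # state estimates rather than trusting the model rollout.
    latent = encode_properties(exploration_obs)
    for _ in range(steps):
        preds = [predict_next_state(state, a, latent) for a in candidate_actions]
        state = preds[int(np.argmin([np.linalg.norm(p - goal) for p in preds]))]
    return state
```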


Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2025. , p. 82
Series
TRITA-EECS-AVL; 2025:1
Keywords [en]
Textile Variations, Robotic Manipulation, Generalization, Adaptation
National Category
Computer Vision and Robotics (Autonomous Systems); Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-357508
ISBN: 978-91-8106-125-3 (print)
OAI: oai:DiVA.org:kth-357508
DiVA, id: diva2:1920741
Public defence
2025-01-14, https://kth-se.zoom.us/j/66979575369, F3 (Flodis), Lindstedtsvägen 26 & 28, KTH Campus, Stockholm, 13:00 (English)
Opponent
Supervisors
Note

QC 20241213

Available from: 2024-12-13 Created: 2024-12-12 Last updated: 2024-12-19. Bibliographically approved
List of papers
1. Textile Taxonomy and Classification Using Pulling and Twisting
2021 (English) In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): Prague/Online 27.09-01.10.2021, Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 7541-7548. Conference paper, Published paper (Refereed)
Abstract [en]

Identification of textile properties is an important milestone toward advanced robotic manipulation tasks that involve interaction with clothing items, such as assisted dressing, laundry folding, automated sewing, and textile recycling and reuse. Despite the abundance of work considering this class of deformable objects, many open problems remain. These relate to the choice and modelling of the sensory feedback as well as the control and planning of the interaction and manipulation strategies. Most importantly, there is no structured approach for studying and assessing different approaches that may bridge the gap between the robotics community and the textile production industry. To this end, we outline a textile taxonomy based on fiber types and production methods commonly used in the textile industry. We devise datasets according to the taxonomy and study how robotic actions, such as pulling and twisting of textile samples, can be used for classification. We also provide important insights from the perspective of visualization and interpretability of the gathered data.
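As a concrete illustration of classifying textiles from pulling and twisting interactions, the sketch below (our own construction under assumed names, not the paper's pipeline) reduces a recorded force trace to a few summary statistics and assigns the taxonomy class of the nearest labeled sample:

```python
import numpy as np

def signal_features(force_trace):
    # Reduce a force/torque trace recorded while pulling or twisting
    # a textile sample to a small, fixed-length feature vector.
    return np.array([force_trace.max(), force_trace.mean(),
                     force_trace.std(), force_trace[-1]])

def classify(sample_trace, labeled_traces, labels):
    # 1-nearest-neighbour in feature space: assign the taxonomy class
    # (e.g. knitted vs. woven) of the closest training sample.
    f = signal_features(sample_trace)
    dists = [np.linalg.norm(f - signal_features(t)) for t in labeled_traces]
    return labels[int(np.argmin(dists))]
```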

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-304613 (URN)
10.1109/IROS51168.2021.9635992 (DOI)
000755125506011 (ISI)
2-s2.0-85124364312 (Scopus ID)
Conference
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague/Online 27.09-01.10.2021
Note

QC 20220324

Part of conference proceedings: ISBN 978-1-6654-1714-3

Available from: 2021-11-08 Created: 2021-11-08 Last updated: 2024-12-12. Bibliographically approved
2. Elastic Context: Encoding Elasticity for Data-driven Models of Textiles
2023 (English) In: Proceedings - ICRA 2023: IEEE International Conference on Robotics and Automation, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1764-1770. Conference paper, Published paper (Refereed)
Abstract [en]

Physical interaction with textiles, such as assistive dressing or household tasks, requires advanced dexterous skills. The complexity of textile behavior during stretching and pulling is influenced by the material properties of the yarn and by the textile's construction technique, which are often unknown in real-world settings. Moreover, identification of physical properties of textiles through sensing commonly available on robotic platforms remains an open problem. To address this, we introduce Elastic Context (EC), a method to encode the elasticity of textiles using stress-strain curves adapted from textile engineering for robotic applications. We employ EC to learn generalized elastic behaviors of textiles and examine the effect of EC dimension on accurate force modeling of real-world non-linear elastic behaviors.
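One plausible reading of the EC encoding (an assumption on our part; the paper defines the exact construction) is to resample each measured stress-strain curve at a fixed number of strain values, so that textiles with different non-linear elastic behaviors map to comparable fixed-length vectors:

```python
import numpy as np

def elastic_context(strains, stresses, dim=5):
    # Resample a measured stress-strain curve at `dim` evenly spaced
    # strain values; the resulting vector is a fixed-length encoding
    # of the textile's (possibly non-linear) elastic response, and
    # `dim` plays the role of the EC dimension studied in the paper.
    query = np.linspace(strains.min(), strains.max(), dim)
    return np.interp(query, strains, stresses)

# Example: two fabrics with different stiffness yield distinct EC vectors.
soft = elastic_context(np.linspace(0, 0.2, 50), np.linspace(0, 1.0, 50) ** 2)
stiff = elastic_context(np.linspace(0, 0.2, 50), np.linspace(0, 4.0, 50) ** 2)
```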

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-328397 (URN)
10.1109/ICRA48891.2023.10160740 (DOI)
001036713001083 (ISI)
2-s2.0-85168704167 (Scopus ID)
Conference
2023 IEEE International Conference on Robotics and Automation, ICRA 2023, London, United Kingdom of Great Britain and Northern Ireland, May 29 2023 - Jun 2 2023
Note

Part of ISBN 9798350323658

QC 20230615

Available from: 2023-06-08 Created: 2023-06-08 Last updated: 2024-12-12. Bibliographically approved
3. EDO-Net: Learning Elastic Properties of Deformable Objects from Graph Dynamics
2023 (English) In: Proceedings - ICRA 2023: IEEE International Conference on Robotics and Automation, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 3875-3881. Conference paper, Published paper (Refereed)
Abstract [en]

We study the problem of learning graph dynamics of deformable objects that generalize to unknown physical properties. Our key insight is to leverage a latent representation of the elastic physical properties of cloth-like deformable objects that can be extracted, for example, from a pulling interaction. In this paper we propose EDO-Net (Elastic Deformable Object - Net), a model of graph dynamics trained on a large variety of samples with different elastic properties that does not rely on ground-truth labels of the properties. EDO-Net jointly learns an adaptation module and a forward-dynamics module. The former is responsible for extracting a latent representation of the physical properties of the object, while the latter leverages the latent representation to predict future states of cloth-like objects represented as graphs. We evaluate EDO-Net both in simulation and in the real world, assessing its capability to: 1) generalize to unknown physical properties, and 2) transfer the learned representation to new downstream tasks.
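The division of labor between the two modules can be sketched as follows; the dictionary-based graph representation, placeholder computations, and function names here are illustrative assumptions standing in for EDO-Net's learned message-passing networks:

```python
import numpy as np

def adaptation_module(interaction_graphs):
    # Label-free property latent: summarize how much the edges of the
    # cloth graph stretched over a pulling interaction (a placeholder
    # for the learned encoder, which needs no ground-truth labels).
    lengths = [np.linalg.norm(g["pos"][i] - g["pos"][j])
               for g in interaction_graphs for i, j in g["edges"]]
    return np.array([np.mean(lengths)])

def forward_dynamics(graph, action, latent):
    # Predict next node positions conditioned on the latent; a real
    # model would propagate the action through message passing over
    # the graph edges instead of this rigid placeholder motion.
    next_pos = (graph["pos"] + action) * (1.0 + 0.01 * latent[0])
    return {"pos": next_pos, "edges": graph["edges"]}
```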

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Computer Vision and Robotics (Autonomous Systems); Computer Sciences
Identifiers
urn:nbn:se:kth:diva-336773 (URN)
10.1109/ICRA48891.2023.10161234 (DOI)
001036713003039 (ISI)
2-s2.0-85168652855 (Scopus ID)
Conference
2023 IEEE International Conference on Robotics and Automation, ICRA 2023, London, United Kingdom of Great Britain and Northern Ireland, May 29 2023 - Jun 2 2023
Note

Part of ISBN 9798350323658

QC 20230920

Available from: 2023-09-20 Created: 2023-09-20 Last updated: 2024-12-12. Bibliographically approved
4. AdaFold: Adapting Folding Trajectories of Cloths via Feedback-Loop Manipulation
2024 (English) In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no. 11, p. 9183-9190. Article in journal (Refereed) Published
Abstract [en]

We present AdaFold, a model-based feedback-loop framework for optimizing folding trajectories. AdaFold extracts a particle-based representation of cloth from RGB-D images and feeds the representation back to a model predictive controller to re-plan the folding trajectory at every time step. A key component of AdaFold that enables feedback-loop manipulation is the use of semantic descriptors extracted from geometric features. These descriptors enhance the particle representation of the cloth to distinguish between ambiguous point clouds of differently folded cloths. Our experiments demonstrate AdaFold's ability to adapt folding trajectories to cloths with varying physical properties and to generalize from simulated training to real-world execution.
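The re-planning step can be approximated with a generic random-shooting model predictive controller; the sketch below is a stand-in under that assumption, with the cost, horizon, and action parameterization all hypothetical rather than taken from the paper:

```python
import numpy as np

def replan(state, goal, dynamics, horizon=5, samples=64, seed=0):
    # Random-shooting MPC: sample candidate action sequences, roll
    # them out through the learned dynamics model, and return the
    # first action of the best-scoring sequence. Executing that one
    # action, then re-perceiving the cloth and calling replan again,
    # is what closes the feedback loop at every time step.
    rng = np.random.default_rng(seed)
    best_cost, best_first = np.inf, None
    for _ in range(samples):
        actions = rng.normal(scale=0.01, size=(horizon, state.shape[-1]))
        s = state
        for a in actions:
            s = dynamics(s, a)
        cost = np.linalg.norm(s - goal)  # predicted final-state error
        if cost < best_cost:
            best_cost, best_first = cost, actions[0]
    return best_first
```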

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Trajectory optimization, Shape, Manipulation planning, perception for grasping and manipulation, RGB-D perception, semantic scene understanding
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-354332 (URN)
10.1109/LRA.2024.3436329 (DOI)
001316209900014 (ISI)
2-s2.0-85199779805 (Scopus ID)
Note

QC 20241004

Available from: 2024-10-04 Created: 2024-10-04 Last updated: 2024-12-12. Bibliographically approved
5. Cloth-Splatting: 3D Cloth State Estimation from RGB Supervision
2024 (English) Conference paper, Published paper (Refereed)
Abstract [en]

We introduce Cloth-Splatting, a method for estimating 3D states of cloth from RGB images through a prediction-update framework. Cloth-Splatting leverages an action-conditioned dynamics model for predicting future states and uses 3D Gaussian Splatting to update the predicted states. Our key insight is that coupling a 3D mesh-based representation with Gaussian Splatting allows us to define a differentiable map between the cloth's state space and the image space. This enables the use of gradient-based optimization techniques to refine inaccurate state estimates using only RGB supervision. Our experiments demonstrate that Cloth-Splatting not only improves state estimation accuracy over current baselines but also reduces convergence time by ∼85 %.
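The update half of the prediction-update framework can be sketched as gradient descent on an image-space residual. Everything below is a toy stand-in: render is a placeholder for the differentiable Gaussian-Splatting map from cloth state to image space, and the hand-derived gradient is specific to this toy quadratic loss:

```python
import numpy as np

def render(vertices):
    # Placeholder for the differentiable state-to-image map that
    # Gaussian Splatting provides (here: a toy orthographic projection
    # of (N, 3) mesh vertices to (N, 2) image coordinates).
    return vertices[:, :2]

def update_state(predicted_vertices, observed_pixels, lr=0.5, steps=50):
    # Refine the dynamics model's predicted cloth state by descending
    # the image-space residual; a real system would backpropagate
    # through the splatting renderer rather than use this analytic
    # gradient of the toy quadratic loss.
    v = predicted_vertices.copy()
    for _ in range(steps):
        residual = render(v) - observed_pixels  # image-space error
        grad = np.zeros_like(v)
        grad[:, :2] = 2.0 * residual            # dL/dv for the toy loss
        v -= lr * grad / len(v)
    return v
```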

Keywords
3D State Estimation, Gaussian Splatting, Vision-based Tracking, Deformable Objects
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-357192 (URN)
Conference
8th Annual Conference on Robot Learning, November 6-9, 2024, Munich, Germany
Note

QC 20241205

Available from: 2024-12-04 Created: 2024-12-04 Last updated: 2024-12-12. Bibliographically approved

Open Access in DiVA

summary (6555 kB), 59 downloads
File information
File name: SUMMARY01.pdf
File size: 6555 kB
Checksum (SHA-512): c9ab5fddec95fff4c3a87d0ca654ba7c1c0df6d01570331788a003c74e7b8f0be444adffce24ac4d884f47417c4fd3a23aae27fca1579fb88e34ecb119d27164
Type: summary
Mimetype: application/pdf

Authority records

Longhini, Alberta
