Interactive Perception for Deformable Object Manipulation
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL; KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. ORCID iD: 0000-0002-9486-9238
The Hong Kong Polytechnic University (PolyU), Kowloon, Hong Kong, China. ORCID iD: 0000-0002-7020-0943
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-3599-440X
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL; KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Chemistry, Organic Chemistry. ORCID iD: 0000-0002-9001-7708
2024 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no. 9, pp. 7763-7770. Article in journal (Refereed). Published.
Abstract [en]

Interactive perception enables robots to manipulate the environment and objects to bring them into states that benefit the perception process. Deformable objects pose challenges here due to the difficulty of manipulating them and the occlusion they cause in vision-based perception. In this work, we address this problem with a setup involving both an active camera and an object manipulator. Our approach is based on a sequential decision-making framework and explicitly considers the motion regularity and structure in coupling the camera and manipulator. We contribute a method for constructing and computing a subspace, called the Dynamic Active Vision Space (DAVS), for effectively exploiting this regularity in motion exploration. The effectiveness of the framework and approach is validated in both a simulation and a real dual-arm robot setup. Our results confirm the necessity of an active camera and coordinated motion in interactive perception for deformable objects.
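The camera-manipulator coupling described in the abstract can be illustrated with a minimal toy loop. Everything below (the 1-D object state, the occlusion score, the candidate viewpoints, and all function names) is an illustrative assumption for exposition only, not the paper's DAVS construction:

```python
# Toy 1-D sketch of an interactive-perception loop, assuming a greedy
# occlusion-minimising camera. The occlusion model and all names here
# are hypothetical, not taken from the paper.

def occlusion(camera_pose, object_pos):
    """Hypothetical occlusion score: worse as camera and object diverge."""
    return abs(camera_pose - object_pos)

def best_viewpoint(candidates, object_pos):
    """Greedy active-vision step: pick the least-occluded viewpoint."""
    return min(candidates, key=lambda c: occlusion(c, object_pos))

def interactive_perception(object_pos, actions, candidates):
    """Alternate manipulator actions with camera re-selection."""
    trace = []
    for delta in actions:
        object_pos += delta                              # manipulator moves the object
        camera = best_viewpoint(candidates, object_pos)  # active camera follows
        trace.append((object_pos, camera))
    return trace

trace = interactive_perception(
    object_pos=0.0,
    actions=[1.0, 2.0, -0.5],
    candidates=[0.0, 1.0, 2.0, 3.0],
)
print(trace)  # [(1.0, 1.0), (3.0, 3.0), (2.5, 2.0)]
```

The point of the sketch is only the alternation: each manipulator action changes the object's state, and the camera is re-selected afterwards so the object stays observable, which is the core idea of an active camera in interactive perception.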

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024. Vol. 9, no. 9, pp. 7763-7770.
Keywords [en]
Cameras, Manifolds, IP networks, End effectors, Task analysis, Couplings, Robot kinematics, Perception for grasping and manipulation, perception-action coupling, manipulation planning
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:kth:diva-352106
DOI: 10.1109/LRA.2024.3431943
ISI: 001283670800004
Scopus ID: 2-s2.0-85199505576
OAI: oai:DiVA.org:kth-352106
DiVA id: diva2:1891398
Note

QC 20240822

Available from: 2024-08-22. Created: 2024-08-22. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Weng, Zehang; Yin, Hang; Kravchenko, Alexander; Varava, Anastasiia; Kragic, Danica

Search in DiVA

By author/editor
Weng, Zehang; Zhou, Peng; Yin, Hang; Kravchenko, Alexander; Varava, Anastasiia; Kragic, Danica
By organisation
Robotics, Perception and Learning, RPL; Centre for Autonomous Systems, CAS; Organic Chemistry; Collaborative Autonomous Systems
