Cooperative grasping through topological object representation
Alejandro Marzinotto, KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0001-9362-0644
Johannes A. Stork, KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
Dino V. Dimarogonas, KTH, School of Electrical Engineering (EES), Automatic Control. ORCID iD: 0000-0001-7309-8086
Danica Kragic Jensfelt, KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0003-2965-2953
2015 (English). In: IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2015, 685-692 p. Conference paper, Published paper (Refereed)
Resource type
Text
Abstract [en]

We present a cooperative grasping approach based on a topological representation of objects. Using point cloud data, we extract loops on objects suitable for generating entanglement. We use the Gauss Linking Integral to derive controllers for multi-agent systems that generate hooking grasps on such loops while minimizing the entanglement between robots. The approach copes well with noisy point cloud data and is computationally simple and robust. We demonstrate the method on object grasping and transportation, through a hooking maneuver, with two coordinated NAO robots.
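As background for the abstract's key quantity: the Gauss Linking Integral counts, for two disjoint closed curves, how many times one winds around the other. The Python sketch below is illustrative only, not code from the paper; it assumes NumPy, uses a simple midpoint-rule discretization over all segment pairs, and all names are hypothetical.

import numpy as np

def gauss_linking_integral(curve_a, curve_b):
    """Approximate the Gauss Linking Integral of two closed polygonal
    curves, given as (N, 3) and (M, 3) vertex arrays, via a midpoint
    rule over all segment pairs. For disjoint linked loops the value
    approaches the integer linking number as the sampling is refined."""
    # Segment vectors and midpoints; np.roll closes each curve.
    da = np.roll(curve_a, -1, axis=0) - curve_a        # (N, 3)
    db = np.roll(curve_b, -1, axis=0) - curve_b        # (M, 3)
    ma = curve_a + 0.5 * da                            # segment midpoints
    mb = curve_b + 0.5 * db
    # Pairwise midpoint differences r_ij = ma_i - mb_j, shape (N, M, 3).
    r = ma[:, None, :] - mb[None, :, :]
    dist3 = np.linalg.norm(r, axis=2) ** 3
    # Discretized integrand: (da_i x db_j) . r_ij / |r_ij|^3.
    cross = np.cross(da[:, None, :], db[None, :, :])
    integrand = np.einsum('ijk,ijk->ij', cross, r) / dist3
    return integrand.sum() / (4.0 * np.pi)

# Sanity check on a Hopf link (two interlocked unit circles).
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
ring_a = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
ring_b = np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
print(gauss_linking_integral(ring_a, ring_b))          # ~ +/-1 for linked loops

In the paper's setting, one curve would roughly correspond to a loop extracted from the object's point cloud and the other to the manipulator, with large |GLI| indicating entanglement; this mapping is a simplification of the authors' formulation.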

Place, publisher, year, edition, pages
IEEE Computer Society, 2015. 685-692 p.
Keywords [en]
Anthropomorphic robots, Robots, Topology, Noisy point, Object grasping, Point cloud data, Topological objects, Topological representation, Multi agent systems
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-181529
DOI: 10.1109/HUMANOIDS.2014.7041437
Scopus ID: 2-s2.0-84945179216
ISBN: 9781479971749 (print)
OAI: oai:DiVA.org:kth-181529
DiVA: diva2:913006
Conference
14th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2014), 18-20 November 2014
Note

QC 20160318

Available from: 2016-03-18. Created: 2016-02-02. Last updated: 2017-03-22. Bibliographically approved.
In thesis
1. Flexible Robot to Object Interactions Through Rigid and Deformable Cages
2017 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

In this thesis we study the problem of robotic interaction with objects from a flexible perspective that complements the rigid force-closure approach. In a flexible interaction the object is not firmly bound to the robot (immobilized), which leads to many interesting scenarios. We focus on the secure kind of flexible interaction, commonly referred to as a caging grasp. In this context, secure means that the object cannot escape arbitrarily far away from the robot that is caging it. A cage is a secure flexible interaction because it does not immobilize the object, but restricts its motion to a bounded set of possible configurations. We study cages in two novel scenarios for objects with holes: caging through multi-agent cooperation and caging through dual-arm knotting with a rope. These two case studies let us analyze the caging problem from a broader perspective, leading to a hierarchical classification of flexible interactions and cages.

In parallel to the geometric and physical problem of flexible interactions with objects, we also study the problem of discrete action scheduling through a novel control architecture called Behavior Trees (BTs). In this thesis we propose a formulation that unifies the competing BT philosophies into a single framework. We analyze how the mainstream BT formulations differ from each other, as well as their benefits and limitations. We also compare the plan representation capabilities of BTs with those of the traditional approach of Controlled Hybrid Dynamical Systems (CHDSs). In this regard, we present bidirectional translation algorithms between the two representations, as well as the necessary and sufficient conditions for translation convergence. Lastly, we demonstrate our action scheduling BT architecture on the aforementioned caging scenarios, as well as on other examples that show how BTs can be interfaced with other high-level planners.
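As background on the BT machinery just mentioned: most BT variants share two control-flow nodes, Sequence and Fallback, which propagate SUCCESS/FAILURE/RUNNING statuses each time the tree is ticked. The Python sketch below is a minimal illustration of those tick semantics, not the thesis's unified formulation; the leaf behaviors are hypothetical stubs.

from typing import Callable, List

# Tick statuses shared by most BT formulations.
SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"
Node = Callable[[], str]

def sequence(children: List[Node]) -> Node:
    """Sequence node: tick children left to right; return FAILURE or
    RUNNING as soon as a child does, SUCCESS only if all succeed."""
    def tick() -> str:
        for child in children:
            status = child()
            if status != SUCCESS:
                return status
        return SUCCESS
    return tick

def fallback(children: List[Node]) -> Node:
    """Fallback (selector) node: return SUCCESS or RUNNING as soon as
    a child does, FAILURE only if every child fails."""
    def tick() -> str:
        for child in children:
            status = child()
            if status != FAILURE:
                return status
        return FAILURE
    return tick

# Hypothetical stub leaves for a caging task (illustration only).
def object_caged() -> str:
    return FAILURE            # condition: cage not yet verified

def approach_loop() -> str:
    return RUNNING            # action: hooking controller still converging

# "Keep the cage if we already have one, otherwise keep acquiring it."
root = fallback([object_caged, sequence([approach_loop])])
print(root())                 # -> RUNNING; the tree is re-ticked periodically

Ticking the whole tree at a fixed rate, rather than running actions to completion, is what lets BTs react to changes (e.g. a broken cage) at any point in the plan.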

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2017. 145 p.
Series
TRITA-CSC-A, ISSN 1653-5723 ; 2017:08
Keywords
planning, control, perception, caging, cage, grasping, multi-agent, robot, robotic, knot, knotting, behaviour trees, behavior trees, action scheduling, RRT
National Category
Robotics
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-203994
ISBN: 978-91-7729-316-3
Public defence
2017-04-10, F3, Lindstedtsvägen 26, KTH Campus, Stockholm, 14:00 (English)
Funder
EU, FP7, Seventh Framework Programme, 600825
Note

QC 20170322

Available from: 2017-03-22. Created: 2017-03-21. Last updated: 2017-03-22. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text, Scopus
