Representations for cross-task, cross-object grasp transfer
Hjelm, Martin (ORCID iD: 0000-0002-1031-9600)
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS.
Ek, Carl Henrik
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS.
Kragic, Danica (ORCID iD: 0000-0003-2965-2953)
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS.
2014 (English). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2014, pp. 5699-5704. Conference paper, published paper (refereed).
Abstract [en]

We address the problem of transferring grasp knowledge across objects and tasks. This means dealing with two important issues: 1) the induction of possible transfers, i.e., whether a given object affords a given task, and 2) the planning of a grasp that will allow the robot to fulfill the task. The induction of object affordances is approached by abstracting the sensory input of an object as a set of attributes that the agent can reason about through similarity and proximity. For grasp execution, we combine a part-based grasp planner with a model of task constraints. The task constraint model indicates areas of the object that the robot can grasp to execute the task. Within these areas, the part-based planner finds a hand placement that is compatible with the object shape. The key contribution is the ability to transfer task parameters across objects, while the part-based grasp planner allows for transferring grasp information across tasks. As a result, the robot is able to synthesize plans for previously unobserved task/object combinations. We illustrate our approach with experiments conducted on a real robot.
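The abstract describes two coupled steps: inferring whether an object affords a task from attribute similarity to known objects, and restricting a part-based grasp planner to task-admissible regions of the object. The following Python sketch illustrates those two ideas only; it is not the paper's implementation, and all data structures (attribute vectors, affordance labels, grasp candidates as score/contact-point pairs, the task-region predicate) are illustrative assumptions.

import numpy as np

def affords_task(attrs, known_attrs, known_labels):
    """Infer whether an object with attribute vector `attrs` affords a task
    by proximity to objects with known affordance labels (assumed data).
    Here a simple 1-nearest-neighbour stands in for the paper's reasoning."""
    dists = np.linalg.norm(known_attrs - attrs, axis=1)
    return known_labels[np.argmin(dists)]

def plan_task_grasp(candidate_grasps, task_region):
    """Filter part-based grasp candidates through a task-constraint region.
    `candidate_grasps`: list of (score, contact_point) pairs, assumed to come
    from a part-based planner. `task_region`: predicate returning True if a
    3-D contact point lies in an area the robot may grasp for this task."""
    feasible = [(s, p) for (s, p) in candidate_grasps if task_region(p)]
    if not feasible:
        return None  # no candidate satisfies the task constraints
    return max(feasible, key=lambda g: g[0])  # best-scoring feasible grasp

# Toy usage with made-up numbers:
known_attrs = np.array([[1.0, 0.2], [0.1, 0.9]])  # e.g. [elongation, hollowness]
known_labels = np.array([True, False])            # affords the task?
print(affords_task(np.array([0.9, 0.3]), known_attrs, known_labels))

grasps = [(0.8, np.array([0.0, 0.0, 0.1])), (0.6, np.array([0.0, 0.0, 0.4]))]
in_handle = lambda p: p[2] > 0.3                  # assumed task-admissible region
print(plan_task_grasp(grasps, in_handle))

Because the affordance test operates on object attributes and the constraint filter operates on grasp candidates, either half can be reused when only the object or only the task changes, which mirrors the cross-object/cross-task transfer the abstract claims.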

Place, publisher, year, edition, pages
IEEE conference proceedings, 2014, pp. 5699-5704.
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-176152
DOI: 10.1109/ICRA.2014.6907697
Scopus ID: 2-s2.0-84929192413
OAI: oai:DiVA.org:kth-176152
DiVA: diva2:875045
Conference
2014 IEEE International Conference on Robotics and Automation (ICRA 2014), 31 May – 7 June 2014
Note

QC 20151130

Available from: 2015-11-30. Created: 2015-11-02. Last updated: 2015-11-30. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text; Scopus
