Embodiment-Specific Representation of Robot Grasping using Graphical Models and Latent-Space Discretization
Authors: Song, Dan; Ek, Carl Henrik; Huebner, Kai; Kragic, Danica
Affiliation (all authors): KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP
ORCID iD: 0000-0003-2965-2953
2011 (English). In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011, p. 980-986. Conference paper, Published paper (Refereed).
Abstract [en]

We study embodiment-specific robot grasping tasks, represented in a probabilistic framework. The framework consists of a Bayesian network (BN) integrated with a novel multi-variate discretization model. The BN models the probabilistic relationships among tasks, objects, grasping actions and constraints. The discretization model provides a compact data representation that allows efficient learning of the conditional structures in the BN. To evaluate the framework, we use a database generated in a simulated environment that includes examples of a human and a robot hand interacting with objects. The results show that the different kinematic structures of the hands affect both the BN structure and the conditional distributions over the modeled variables. Both models achieve accurate task classification and successfully encode the semantic task requirements in the continuous observation spaces. In an imitation experiment, we demonstrate that the representation framework can transfer task knowledge between different embodiments, and is therefore a suitable model for goal-directed grasp planning and imitation.
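To make the modeling idea concrete, the sketch below shows the kind of computation the abstract describes: a small discrete Bayesian network over task, grasp and object variables, where a continuous observation is discretized into bins (a crude stand-in for the paper's learned multi-variate discretization model). All variable names, bin edges and probability tables are invented for illustration; this is not the authors' implementation.

```python
# A minimal, hypothetical sketch (not the authors' implementation): a toy
# Bayesian network P(task) * P(size_bin | task) * P(grasp | task) over
# discretized variables, queried for P(task | observations). All names,
# bin edges and probability tables below are invented for illustration.
import numpy as np

tasks = ["hand-over", "pouring"]
grasps = ["power", "precision"]

# Crude stand-in for the learned multi-variate discretization: bin a
# continuous observation (object size in cm) into three discrete states.
size_bin_edges = np.array([0.0, 5.0, 10.0, 20.0])

p_task = np.array([0.5, 0.5])                      # P(task)
p_size_given_task = np.array([[0.2, 0.5, 0.3],     # P(size_bin | task)
                              [0.1, 0.3, 0.6]])
p_grasp_given_task = np.array([[0.7, 0.3],         # P(grasp | task)
                               [0.3, 0.7]])

def classify_task(object_size_cm, grasp):
    """Posterior P(task | size_bin, grasp) by direct enumeration."""
    b = int(np.clip(np.digitize(object_size_cm, size_bin_edges) - 1, 0, 2))
    g = grasps.index(grasp)
    joint = p_task * p_size_given_task[:, b] * p_grasp_given_task[:, g]
    return joint / joint.sum()

# E.g. a 12 cm object held with a power grasp:
print(dict(zip(tasks, classify_task(12.0, "power"))))
```

Querying P(task | evidence) in this way mirrors the task-classification experiment; for a different embodiment, both the network structure and the conditional tables would be learned separately, which is what makes the representation embodiment-specific.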

Place, publisher, year, edition, pages
2011. p. 980-986
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-55310
DOI: 10.1109/IROS.2011.6048145
ISI: 000297477501051
Scopus ID: 2-s2.0-84455207493
ISBN: 978-1-61284-454-1 (print)
OAI: oai:DiVA.org:kth-55310
DiVA id: diva2:471483
Conference
2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). San Francisco, CA, USA. September 25-30, 2011
Note
QC 20120103. Available from: 2012-01-02. Created: 2012-01-02. Last updated: 2024-03-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Ek, Carl Henrik
Kragic, Danica

Search in DiVA

By author/editor
Song, Dan; Ek, Carl Henrik; Huebner, Kai; Kragic, Danica
By organisation
Computer Vision and Active Perception, CVAP
Computer and Information Sciences

Search outside of DiVA

Google
Google Scholar
