Learning Task Constraints for Robot Grasping using Graphical Models
Song, Dan
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
Hübner, Kai
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
Kyrki, Ville
Lappeenranta University of Technology, Finland.
Kragic, Danica
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0003-2965-2953
2010 (English). In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2010. Conference paper, published paper (refereed).
Abstract [en]

This paper studies the learning of task constraints that allow grasp generation in a goal-directed manner. We show how an object representation and a grasp generated on it can be integrated with the task requirements. The scientific problems tackled are (i) identification and modeling of such task constraints, and (ii) integration between a semantically expressed goal of a task and quantitative constraint functions defined in the continuous object-action domains. We first define constraint functions given a set of object and action attributes, and then model the relationships between object, action, constraint features and the task using Bayesian networks. The probabilistic framework deals with uncertainty, combines a priori knowledge with observed data, and allows inference on target attributes given only partial observations. We present a system designed to structure the data generation and constraint learning processes that is applicable to new tasks, embodiments and sensory data. The application of the task constraint model is demonstrated in a goal-directed imitation experiment.
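
To make the modeling and inference pattern concrete, the following minimal sketch builds a small discrete Bayesian network of the kind the abstract describes, using the pgmpy library. The variables (a task T, an object feature O, an action feature A), their states, and all probability tables are illustrative assumptions, not the network or data from the paper.

# Minimal illustrative sketch (not the paper's model): a task variable T
# conditions an object feature O and an action feature A.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("T", "O"), ("T", "A")])

# P(T): uniform prior over two hypothetical tasks, e.g. hand-over vs. pouring.
cpd_t = TabularCPD("T", 2, [[0.5], [0.5]])
# P(O | T): object size (0 = small, 1 = large); values are made up.
cpd_o = TabularCPD("O", 2, [[0.7, 0.2], [0.3, 0.8]],
                   evidence=["T"], evidence_card=[2])
# P(A | T): grasp type (0 = pinch, 1 = power); values are made up.
cpd_a = TabularCPD("A", 2, [[0.8, 0.3], [0.2, 0.7]],
                   evidence=["T"], evidence_card=[2])

model.add_cpds(cpd_t, cpd_o, cpd_a)
assert model.check_model()

# Inference from a partial observation: only the object feature is seen.
infer = VariableElimination(model)
print(infer.query(["T"], evidence={"O": 0}))  # posterior over the task
print(infer.query(["A"], evidence={"O": 0}))  # predicted action attribute

Given only the partial observation that the object is small, the network returns a posterior over tasks and a prediction for the grasp type, mirroring the paper's use of inference on target attributes from partial observations.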

Place, publisher, year, edition, pages
IEEE, 2010.
Keywords [en]
belief networks, feature extraction, grippers, image representation, solid modelling
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-46162
DOI: 10.1109/IROS.2010.5649406
Scopus ID: 2-s2.0-78651509683
ISBN: 978-1-4244-6674-0 (print)
OAI: oai:DiVA.org:kth-46162
DiVA id: diva2:453364
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Funder
EU, European Research Council, IST-FP6-IP-027657
Note

© 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. QC 20111102

Available from: 2011-11-02. Created: 2011-11-02. Last updated: 2022-06-24. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text (published version)
Scopus

Authority records

Kragic, Danica
