Representations for Object Grasping and Learning from Experience
Rubio, Oscar J.; Hübner, Kai; Kragic, Danica (ORCID iD: 0000-0003-2965-2953)
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS
2010 (English). In: IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010), 2010, pp. 1566-1571. Conference paper, published paper (refereed).
Abstract [en]

We study two important problems in the area of robot grasping: i) the methodology and representations for grasp selection on known and unknown objects, and ii) learning from experience for grasping of similar objects. The core of the paper is a study of the different representations necessary for implementing grasping tasks on objects of varying complexity. We show how to select a grasp satisfying force closure, taking into account the parameters of the robot hand and collision-free paths. Our implementation also takes into account efficient computation at the different levels of the system concerned with representation, description, and grasp hypothesis generation.
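The force-closure criterion mentioned in the abstract can be illustrated in its simplest planar form. For a two-finger grasp with frictional point contacts, Nguyen's condition says the grasp achieves force closure if and only if the segment joining the two contact points lies inside both friction cones. The sketch below is not the paper's implementation; the contact points, unit inward normals, and friction coefficient are hypothetical illustration values.

```python
import math

def antipodal_force_closure(p1, n1, p2, n2, mu):
    """Planar two-finger force-closure test (Nguyen's condition).

    p1, p2: 2D contact points; n1, n2: unit inward contact normals;
    mu: Coulomb friction coefficient. The grasp is force-closure iff the
    segment joining the contacts lies inside both friction cones, whose
    half-angle is atan(mu).
    """
    half_angle = math.atan(mu)

    def inside_cone(p_from, n_from, p_to):
        # Angle between the contact normal and the direction to the
        # other contact must not exceed the friction-cone half-angle.
        dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
        norm = math.hypot(dx, dy)
        cos_angle = (dx * n_from[0] + dy * n_from[1]) / norm
        cos_angle = max(-1.0, min(1.0, cos_angle))  # guard rounding
        return math.acos(cos_angle) <= half_angle

    return inside_cone(p1, n1, p2) and inside_cone(p2, n2, p1)
```

For example, two directly opposed contacts on a block, `(-1, 0)` and `(1, 0)` with normals pointing at each other, pass the test for any positive friction; shifting one contact sideways makes the connecting segment leave the cones once the friction coefficient is small enough.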

Place, publisher, year, edition, pages
2010, pp. 1566-1571.
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
Keyword [en]
Collision-free paths, Core part, Efficient computation, Force-closure, Hypotheses generation, Object grasping, Robot grasping, Robot hand, Unknown objects
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-32006
DOI: 10.1109/IROS.2010.5648993
ISI: 000287672000127
ISBN: 978-1-4244-6675-7 (print)
OAI: oai:DiVA.org:kth-32006
DiVA: diva2:409171
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, October 18-22, 2010
Note
QC 20110407. Available from: 2011-04-07. Created: 2011-04-04. Last updated: 2011-04-07. Bibliographically approved.

Open Access in DiVA

No full text

By author/editor: Rubio, Oscar J.; Hübner, Kai; Kragic, Danica
By organisation: Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS
