Grasping by parts: Robot grasp generation from 3D box primitives
Hübner, Kai: KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS.
Kragic, Danica: KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS. ORCID iD: 0000-0003-2965-2953
2010 (English). In: 4th International Conference on Cognitive Systems, CogSys 2010, 2010. Conference paper, Published paper (Refereed)
Abstract [en]

Robot grasping capabilities are essential for perceiving, interpreting and acting in arbitrary and dynamic environments. While classical computer vision and visual scene interpretation treat the robot's internal representation of the world rather passively, grasping capabilities are needed to actively execute tasks, modify scenarios and thereby reach versatile goals. Grasping is a central issue in many robot applications, especially when the system has to manipulate unknown objects. We present an approach to object description that is constrained by the actions the robot can perform. In particular, we connect box-like representations of objects with grasping, and motivate this approach in a number of ways. The contributions of our work are two-fold: in terms of shape approximation, we provide an algorithm for a 3D box primitive representation that identifies object parts from 3D point clouds. We motivate and evaluate this choice particularly with respect to the task of grasping. As a contribution in the field of grasping, we present a grasp hypothesis generation framework that utilizes the box representation in a highly flexible manner.
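The core idea of generating grasp hypotheses from box primitives can be illustrated with a minimal sketch. This is not the authors' implementation; all names, the axis-aligned box simplification, and the face-normal approach heuristic are assumptions made for illustration. Given a box fitted to an object part and a gripper's maximum opening, one plausible strategy is to approach each of the six faces along its inward normal and close the gripper across the smaller of the two remaining box dimensions, discarding faces the gripper cannot span:

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class Box:
    """Axis-aligned box primitive: center point and full side lengths."""
    center: np.ndarray   # shape (3,)
    extents: np.ndarray  # shape (3,), side lengths along x, y, z


def grasp_hypotheses(box: Box, max_opening: float) -> list:
    """Enumerate face-normal approach grasps on a box primitive.

    For each of the 6 faces: approach along the inward face normal,
    close across the smaller of the two remaining box sides, and skip
    faces whose required opening exceeds the gripper's span.
    """
    hyps = []
    for axis in range(3):
        for sign in (+1, -1):
            approach = np.zeros(3)
            approach[axis] = -sign  # point the approach toward the box
            # The gripper closes across one of the two other dimensions;
            # pick the smaller one so thin sides stay graspable.
            other = [a for a in range(3) if a != axis]
            opening = min(box.extents[other[0]], box.extents[other[1]])
            if opening > max_opening:
                continue
            # Contact point: center of the approached face.
            contact = box.center.copy()
            contact[axis] += sign * box.extents[axis] / 2.0
            hyps.append({"approach": approach,
                         "point": contact,
                         "opening": opening})
    return hyps
```

For a 4 x 10 x 20 cm box and an 8 cm gripper span, the two faces that would require a 10 cm opening are rejected, leaving four candidate grasps. A real system would additionally rank hypotheses, e.g. by reachability or task constraints, which the paper's framework leaves flexible.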

Place, publisher, year, edition, pages
2010.
Keyword [en]
3D point cloud, Dynamic environments, Hypothesis generation, Internal representation, Object description, Shape approximation, Unknown objects, Visual interpretation, Approximation algorithms, Robot applications, Robot learning, Three dimensional computer graphics, Cognitive systems
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-148888
Scopus ID: 2-s2.0-84878287698
OAI: oai:DiVA.org:kth-148888
DiVA: diva2:738325
Conference
4th International Conference on Cognitive Systems, CogSys 2010, 27 January 2010 through 28 January 2010, Zurich, Switzerland
Note

QC 20140818

Available from: 2014-08-18. Created: 2014-08-14. Last updated: 2014-08-18. Bibliographically approved.

Open Access in DiVA

No full text

Scopus

Authority records BETA

Kragic, Danica

Search in DiVA

By author/editor
Hübner, Kai; Kragic, Danica
By organisation
Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS
Engineering and Technology

Search outside of DiVA

Google; Google Scholar
