Visual Recognition of Grasps for Human-to-Robot Mapping
Kjellström, Hedvig (KTH, School of Computer Science and Communication (CSC): Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS). ORCID iD: 0000-0002-5750-9655
Romero, Javier (same affiliation)
Kragic, Danica (same affiliation). ORCID iD: 0000-0003-2965-2953
2008 (English). In: 2008 IEEE/RSJ International Conference on Robots and Intelligent Systems, Vols 1-3, Conference Proceedings / [ed] Chatila, R; Kelly, A; Merlet, JP, 2008, p. 3192-3199. Conference paper, Published paper (Refereed).
Abstract [en]

This paper presents a vision-based method for grasp classification. It is developed as part of a Programming by Demonstration (PbD) system, in which recognition of objects and of pick-and-place actions provides the basic building blocks for task learning. In contrast to earlier approaches, no articulated 3D reconstruction of the hand over time is performed. The input consists of a single image of the human hand. A 2D representation of the hand shape, based on gradient orientation histograms, is extracted from the image. The hand shape is then classified as one of six grasps by finding similar hand shapes in a large database of grasp images. The database search is performed using Locality Sensitive Hashing (LSH), an approximate k-nearest-neighbor approach. The nearest neighbors also give an estimated hand orientation with respect to the camera. The six human grasps are mapped to three Barrett hand grasps. Depending on the type of robot grasp, a precomputed grasp strategy is selected. The strategy is further parameterized by the orientation of the hand relative to the object. To evaluate the potential of the method to serve as part of a robust vision system, experiments were performed comparing its classification results to a baseline of human classification performance. The experiments showed the LSH recognition performance to be comparable to human performance.
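
To make the pipeline concrete, the following is a minimal sketch of the classification step, not the authors' implementation: it assumes a hypothetical labeled database of grasp images, uses a simplified global gradient orientation histogram as the 2D shape descriptor, and uses random-hyperplane hashing as one standard way to realize LSH-based approximate k-nearest-neighbor search. All class names, the human-to-Barrett mapping, and all parameters below are illustrative.

import numpy as np
from collections import Counter

# Six illustrative human grasp classes and a hypothetical mapping onto the
# three Barrett hand grasps; the paper's actual taxonomy and mapping are
# not reproduced here.
HUMAN_GRASPS = ["power", "cylindrical", "spherical",
                "precision_pinch", "lateral", "precision_tripod"]
HUMAN_TO_BARRETT = {
    "power": "barrett_wrap",
    "cylindrical": "barrett_wrap",
    "spherical": "barrett_wrap",
    "precision_pinch": "barrett_pinch",
    "lateral": "barrett_pinch",
    "precision_tripod": "barrett_tripod",
}

def gradient_orientation_histogram(image, n_bins=32):
    # Global histogram of gradient orientations, weighted by gradient
    # magnitude: a simplified stand-in for the paper's shape descriptor.
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)          # radians in [-pi, pi]
    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(-np.pi, np.pi), weights=magnitude)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

class RandomHyperplaneLSH:
    # Approximate k-NN: descriptors falling on the same side of every
    # random hyperplane share a bucket; exact distances are computed only
    # within the queried bucket.
    def __init__(self, dim, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}        # hash key -> indices into the database
        self.descriptors = []
        self.labels = []

    def _key(self, x):
        return tuple((self.planes @ x > 0).astype(int))

    def add(self, descriptor, label):
        self.buckets.setdefault(self._key(descriptor), []).append(
            len(self.descriptors))
        self.descriptors.append(descriptor)
        self.labels.append(label)

    def query(self, descriptor, k=5):
        candidates = self.buckets.get(self._key(descriptor), [])
        if not candidates:       # empty bucket: fall back to a full scan
            candidates = range(len(self.descriptors))
        ranked = sorted(candidates, key=lambda i: np.linalg.norm(
            self.descriptors[i] - descriptor))
        return [self.labels[i] for i in ranked[:k]]

def classify_grasp(image, index, k=5):
    # Majority vote over the k approximate nearest neighbors, then map the
    # winning human grasp to a Barrett grasp. (In the paper, the neighbors
    # also provide a hand orientation estimate, omitted here.)
    neighbors = index.query(gradient_orientation_histogram(image), k=k)
    human_grasp = Counter(neighbors).most_common(1)[0][0]
    return human_grasp, HUMAN_TO_BARRETT[human_grasp]

# Toy usage with synthetic images and random labels.
rng = np.random.default_rng(1)
index = RandomHyperplaneLSH(dim=32)
for _ in range(500):
    image = rng.random((64, 64))
    index.add(gradient_orientation_histogram(image),
              str(rng.choice(HUMAN_GRASPS)))
print(classify_grasp(rng.random((64, 64)), index))

Random-hyperplane hashing approximates cosine similarity, which suits the L2-normalized histograms above; a real system would use several hash tables, and the paper's own descriptor and LSH parameters differ from the single-table toy shown here.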

Place, publisher, year, edition, pages
2008. p. 3192-3199
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-38800
DOI: 10.1109/IROS.2008.4650917
ISI: 000259998202065
Scopus ID: 2-s2.0-69549102036
ISBN: 978-1-4244-2057-5 (print)
OAI: oai:DiVA.org:kth-38800
DiVA id: diva2:438144
Conference
2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France, 22-26 September 2008
Available from: 2011-09-01. Created: 2011-08-31. Last updated: 2022-06-24. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus
