Spatio-Temporal Modeling of Grasping Actions
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS.
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS. ORCID iD: 0000-0002-5750-9655
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS. ORCID iD: 0000-0003-2965-2953
2010 (English). In: IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010), 2010, pp. 2103-2108. Conference paper, Published paper (Refereed).
Abstract [en]

Understanding the spatial dimensionality and temporal context of human hand actions can provide representations for programming grasping actions in robots and inspire the design of new robotic and prosthetic hands. The natural representation of human hand motion has high dimensionality. For specific activities such as handling and grasping of objects, the commonly observed hand motions lie on a lower-dimensional non-linear manifold in hand posture space. Although full-body human motion is well studied within computer vision and biomechanics, there is very little work on the analysis of hand motion with nonlinear dimensionality reduction techniques. In this paper we use Gaussian Process Latent Variable Models (GPLVMs) to model the lower-dimensional manifold of human hand motions during object grasping. We show how the technique can be used to embed high-dimensional grasping actions in a lower-dimensional space suitable for modeling, recognition, and mapping.
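To illustrate the modeling idea described in the abstract (this is not the authors' code, and the "posture" data below is synthetic), a minimal GPLVM sketch can be written with NumPy and SciPy: latent coordinates are initialized with PCA and then optimized to maximize the Gaussian-process marginal likelihood of the high-dimensional observations.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, lengthscale=1.0, variance=1.0, noise=1e-2):
    """RBF (squared-exponential) kernel over latent points, plus noise on the diagonal."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
    K = variance * np.exp(-0.5 * sq / lengthscale**2)
    return K + noise * np.eye(len(X))

def neg_log_marginal(x_flat, Y, Q):
    """Negative GP log marginal likelihood of Y given latent positions X (up to a constant)."""
    N, D = Y.shape
    X = x_flat.reshape(N, Q)
    K = rbf_kernel(X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(K, Y)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * D * logdet + 0.5 * np.sum(Y * alpha)

# Synthetic stand-in for hand-posture data: a 2-D latent loop
# mapped linearly into D dimensions with additive noise.
rng = np.random.default_rng(0)
N, D, Q = 20, 12, 2
t = np.linspace(0.0, 2.0 * np.pi, N)
latent_true = np.column_stack([np.cos(t), np.sin(t)])
W = rng.normal(size=(Q, D))
Y = latent_true @ W + 0.05 * rng.normal(size=(N, D))
Y -= Y.mean(axis=0)

# Initialize latents with PCA, then optimize the GP marginal likelihood.
U, S, Vt = np.linalg.svd(Y, full_matrices=False)
X0 = U[:, :Q] * S[:Q]
res = minimize(neg_log_marginal, X0.ravel(), args=(Y, Q), method="L-BFGS-B")
X_learned = res.x.reshape(N, Q)
```

In the paper's setting, `Y` would instead hold recorded hand-joint configurations; the learned `X_learned` is the low-dimensional embedding in which grasping actions can be modeled and compared. A practical implementation would also optimize the kernel hyperparameters and use analytic gradients (as in libraries such as GPy) rather than the finite differences L-BFGS-B falls back to here.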

Place, publisher, year, edition, pages
2010, pp. 2103-2108.
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
Keyword [en]
Dimensional spaces, Full body, Gaussian Processes, Hand motion, Hand posture, High dimensionality, High-dimensional, Human hand motions, Human hands, Human motions, Latent variable models, Lower dimensional manifolds, Natural representation, Nonlinear dimensionality reduction, Nonlinear manifolds, Object grasping, Prosthetic hands, Spatial dimensionalities, Spatiotemporal modeling, Specific activity, Biomechanics, Intelligent robots, Machine design, Robot programming
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-32007
DOI: 10.1109/IROS.2010.5650701
ISI: 000287672002109
ISBN: 978-1-4244-6675-7 (print)
OAI: oai:DiVA.org:kth-32007
DiVA: diva2:409167
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, October 18-22, 2010
Note
QC 20110407. Available from: 2011-04-07. Created: 2011-04-04. Last updated: 2011-04-07. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text

Authority records

Kjellström, Hedvig; Kragic, Danica

Search in DiVA

By author/editor
Romero, Javier; Kjellström, Hedvig; Kragic, Danica
By organisation
Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS
Engineering and Technology
