Predicting human intention in visual observations of hand/object interactions
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception (CVAP); Centre for Autonomous Systems (CAS).
2013 (English). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, pp. 1608-1615. Conference paper, published paper (refereed).
Abstract [en]

The main contribution of this paper is a probabilistic method for predicting human manipulation intention from image sequences of human-object interaction. Predicting intention amounts to inferring the imminent manipulation task once the human hand is observed to have stably grasped the object. Inference is performed by means of a probabilistic graphical model that encodes object-grasping tasks over the 3D state of the observed scene. The 3D state is extracted from RGB-D image sequences by a novel vision-based, markerless hand-object 3D tracking framework. To deal with the high-dimensional state space and the mixed data types (discrete and continuous) involved in grasping tasks, we introduce a generative vector quantization method using mixture models and self-organizing maps. This yields a compact model for encoding grasping actions that is able to handle uncertain and partial sensory data. Experiments showed that the model, trained on simulated data, can provide a potent basis for accurate goal inference from partial and noisy observations of actual real-world demonstrations. We also show a grasp selection process, guided by the inferred human intention, to illustrate the use of the system for goal-directed grasp imitation.
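The pipeline the abstract describes — quantizing continuous scene features into discrete codes with a self-organizing map, then probabilistically inferring the intended task from partial observations of those codes — can be sketched as a toy example. This is an illustrative sketch only, not the paper's implementation: the task names, feature dimensions, SOM size, and the naive-Bayes inference step are all invented for the demonstration.

```python
# Toy sketch: a 1-D self-organizing map (SOM) quantizes continuous
# hand/object features into discrete codes; a simple Bayes step then
# infers the most likely task from observed codes.
import math
import random

random.seed(0)

def best_matching_unit(units, x):
    """Index of the SOM prototype closest to sample x (squared distance)."""
    return min(range(len(units)),
               key=lambda i: sum((units[i][d] - x[d]) ** 2 for d in range(len(x))))

def train_som(samples, n_units=4, epochs=600, lr0=0.5, sigma0=1.5):
    """Fit a 1-D SOM: each unit holds a prototype vector in feature space."""
    dim = len(samples[0])
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)              # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.1  # decaying neighbourhood width
        x = random.choice(samples)
        b = best_matching_unit(units, x)
        for i, w in enumerate(units):
            h = math.exp(-((i - b) ** 2) / (2 * sigma ** 2))  # neighbourhood kernel
            for d in range(dim):
                w[d] += lr * h * (x[d] - w[d])
    return units

def noisy(mu, eps=0.05):
    return [m + random.uniform(-eps, eps) for m in mu]

# Two hypothetical tasks with well-separated 2-D feature distributions.
data = {
    "pour":      [noisy([0.1, 0.2]) for _ in range(30)],
    "hand-over": [noisy([0.9, 0.8]) for _ in range(30)],
}
all_samples = [x for xs in data.values() for x in xs]
som = train_som(all_samples)

def code(x):
    """Discrete code = index of the best-matching SOM unit."""
    return best_matching_unit(som, x)

# Likelihoods P(code | task), with Laplace smoothing.
lik = {}
for task, xs in data.items():
    counts = [1] * len(som)
    for x in xs:
        counts[code(x)] += 1
    total = sum(counts)
    lik[task] = [c / total for c in counts]

def infer(observations):
    """Posterior over tasks given partial observations (uniform prior)."""
    post = {t: 1.0 for t in lik}
    for x in observations:
        c = code(x)
        for t in post:
            post[t] *= lik[t][c]
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

posterior = infer([[0.12, 0.18]])  # a single partial observation
print(max(posterior, key=posterior.get))
```

The quantization step stands in for the paper's richer generative model: it turns the high-dimensional continuous state into a small discrete alphabet over which task inference becomes a simple probabilistic lookup, which is why partial or noisy observations still yield a usable posterior.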

Place, publisher, year, edition, pages
New York: IEEE, 2013, pp. 1608-1615.
Series
Proceedings - IEEE International Conference on Robotics and Automation, ISSN 1050-4729
Keyword [en]
High-dimensional, Human manipulation, Human-object interaction, Manipulation task, Noisy observations, Probabilistic graphical models, Probabilistic methods, Visual observations, Computer vision, Conformal mapping, Encoding (symbols), Forecasting, Three dimensional, Vector quantization, Robotics
National Category
Robotics
Identifiers
URN: urn:nbn:se:kth:diva-139955
DOI: 10.1109/ICRA.2013.6630785
ISI: 000337617301091
Scopus ID: 2-s2.0-84887266463
ISBN: 978-1-4673-5643-5 (print)
ISBN: 978-1-4673-5641-1 (print)
OAI: oai:DiVA.org:kth-139955
DiVA: diva2:688215
Conference
2013 IEEE International Conference on Robotics and Automation, ICRA 2013; Karlsruhe, Germany, 6-10 May 2013
Note

QC 20140116

Available from: 2014-01-16. Created: 2014-01-16. Last updated: 2014-08-04. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text · Scopus

Authority records

Kragic, Danica

Search in DiVA

By author/editor
Song, Dan; Kragic, Danica
By organisation
Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS
Robotics
