Learning object, grasping and manipulation activities using hierarchical HMMs
2014 (English). In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 37, no. 3, pp. 317-331. Article in journal (refereed), published.
This article presents a probabilistic algorithm for representing and learning complex manipulation activities performed by humans in everyday life. The work builds on the multi-level Hierarchical Hidden Markov Model (HHMM) framework, which allows decomposition of longer-term complex manipulation activities into layers of abstraction whose building blocks are simpler action modules called action primitives. In this way, human task knowledge can be synthesised into a compact, effective representation suitable, for instance, for subsequent transfer to a robot for imitation. The main contribution is the use of a robust framework capable of dealing with the uncertainty and incomplete data inherent in these activities, and able to represent behaviours at multiple levels of abstraction for enhanced task generalisation. Activity data from 3D video sequencing of human manipulation of different everyday objects is used for evaluation. A comparison with a mixed generative-discriminative hybrid HHMM/SVM (support vector machine) model is also presented to add rigour in highlighting the benefit of the proposed approach against comparable state-of-the-art techniques.
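To make the decomposition idea concrete, the following is a minimal sketch (not the paper's implementation) of the kind of building block an HHMM composes at its lowest level: each action primitive is modelled as a small discrete HMM, and a segment of observations is labelled with the primitive that assigns it the highest likelihood. The primitive names ("reach", "grasp"), the binary observation alphabet, and all probabilities below are hypothetical.

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs | HMM with initial pi,
    transitions A, discrete emissions B)."""
    alpha = pi * B[:, obs[0]]           # joint prob. of state and first symbol
    c = alpha.sum()
    log_lik = np.log(c)
    alpha /= c                          # rescale to avoid numerical underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        c = alpha.sum()
        log_lik += np.log(c)
        alpha /= c
    return log_lik

# Two hypothetical primitives over a binary observation alphabet {0, 1}.
primitives = {
    "reach": (np.array([1.0, 0.0]),                 # initial distribution
              np.array([[0.7, 0.3], [0.3, 0.7]]),   # state transitions
              np.array([[0.9, 0.1], [0.8, 0.2]])),  # emissions favour symbol 0
    "grasp": (np.array([1.0, 0.0]),
              np.array([[0.7, 0.3], [0.3, 0.7]]),
              np.array([[0.1, 0.9], [0.2, 0.8]])),  # emissions favour symbol 1
}

def classify_segment(obs):
    """Label a segment with the maximum-likelihood action primitive."""
    return max(primitives,
               key=lambda k: forward_log_likelihood(*primitives[k], obs))
```

In a full HHMM, a higher-level chain additionally models transitions between which primitive is active, so primitive recognition and longer-term activity recognition are performed jointly rather than segment by segment as in this sketch.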
Place, publisher, year, edition, pages
2014. Vol. 37, no. 3, pp. 317-331.
Keywords: Hierarchical Hidden Markov Model (HHMM), Action primitives, Grasping and manipulation, Human daily activities
Research subject: Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-150901
DOI: 10.1007/s10514-014-9392-1
ISI: 000340409000006
ScopusID: 2-s2.0-84905755829
OAI: oai:DiVA.org:kth-150901
DiVA: diva2:746939
QC 20140915. Bibliographically approved.