Monocular Real-Time 3D Articulated Hand Pose Estimation
2009 (English). In: 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09, 2009, pp. 87-92. Conference paper (Refereed).
Markerless, vision-based estimation of human hand pose over time is a prerequisite for a number of robotics applications, such as Learning by Demonstration (LbD), health monitoring, teleoperation, and human-robot interaction. It is of special interest for humanoid platforms, where the number of degrees of freedom makes conventional programming challenging. Our primary application is LbD in natural environments, where the humanoid robot learns how to grasp and manipulate objects by observing a human performing a task. This paper presents a method for continuous, vision-based estimation of human hand pose. The method is non-parametric, performing a nearest-neighbor search in a large database (100,000 entries) of hand pose examples. The main contribution is a real-time system, robust to partial occlusions and segmentation errors, that provides full hand pose recognition from markerless data. An additional contribution is the modeling of temporal consistency in hand pose, without explicitly tracking the hand in the high-dimensional pose space. The pose representation is rich enough to enable a descriptive human-to-robot mapping. Experiments show the pose estimation to be more robust and accurate than a non-parametric method without temporal constraints.
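The abstract's core idea, non-parametric nearest-neighbor lookup in a pose database combined with a temporal-consistency term instead of explicit tracking, can be sketched as follows. This is a minimal illustration, not the paper's implementation; all function names, parameters (`k`, `temporal_weight`), and the simple Euclidean distances are assumptions.

```python
import numpy as np

def estimate_pose(features, db_features, db_poses, prev_pose=None,
                  k=5, temporal_weight=0.5):
    """Hypothetical sketch: estimate a hand pose for one frame.

    features     : (d,) observed image feature vector
    db_features  : (N, d) database of feature vectors
    db_poses     : (N, p) corresponding joint-configuration vectors
    prev_pose    : (p,) previous frame's estimate, or None for the first frame
    """
    # Appearance distance from the observation to every database entry.
    d_app = np.linalg.norm(db_features - features, axis=1)
    if prev_pose is not None:
        # Temporal term: penalize candidate poses far from the previous
        # estimate, biasing the search toward temporally consistent poses
        # without tracking in the full high-dimensional pose space.
        d_pose = np.linalg.norm(db_poses - prev_pose, axis=1)
        score = d_app + temporal_weight * d_pose
    else:
        score = d_app
    # Average the k best-matching database poses.
    idx = np.argsort(score)[:k]
    return db_poses[idx].mean(axis=0)
```

With `temporal_weight = 0` this reduces to a plain nearest-neighbor method, which is the baseline the experiments compare against.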
Place, publisher, year, edition, pages
2009. pp. 87-92.
Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-66468
DOI: 10.1109/ICHR.2009.5379596
ISBN: 978-1-4244-4588-2
ISBN: 978-1-4244-4597-4
OAI: oai:DiVA.org:kth-66468
DiVA: diva2:484147
9th IEEE-RAS International Conference on Humanoid Robots, December 7-10, 2009 Paris, France
QC 20120127. Created: 2012-01-26. Last updated: 2012-01-27. Bibliographically approved.