Representing actions with Kernels
2011 (English). In: IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011, pp. 2028-2035. Conference paper (Refereed)
A long-standing research goal is to create robots capable of interacting with humans in dynamic environments. To realize this, a robot needs to understand and interpret the underlying meaning and intentions of a human action through a model of its sensory data. The visual domain provides a rich description of the environment, and data is readily available in most systems through inexpensive cameras. However, such data is very high-dimensional and extremely redundant, which makes modeling challenging. Recently there has been significant interest in semantic modeling from visual stimuli. Even though results are encouraging, available methods are unable to perform robustly in real-world scenarios. In this work we present a system for action modeling from visual data, proposing a new and principled interpretation for representing semantic information. The representation is integrated with a real-time segmentation method. The approach is robust and flexible, making it applicable to modeling in a realistic interaction scenario, which demands handling of noisy observations and requires real-time performance. We provide an extensive evaluation and show significant improvements compared to the state of the art.
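The title points to kernel-based similarity between action representations, but the record does not spell out the construction. As a generic illustration only, and not the authors' method, the sketch below compares two action descriptors with a chi-squared kernel, a common choice for visual bag-of-features histograms; the descriptor layout and kernel choice are assumptions.

```python
import numpy as np

def chi2_kernel(h1, h2, gamma=1.0):
    """Chi-squared kernel between two normalized feature histograms.

    A standard kernel for comparing visual bag-of-features descriptors;
    used here purely as a generic illustration, not as the kernel
    proposed in the paper.
    """
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    denom = h1 + h2
    mask = denom > 0  # skip empty bins to avoid division by zero
    d = np.sum((h1[mask] - h2[mask]) ** 2 / denom[mask])
    return np.exp(-gamma * d)

# Hypothetical action descriptors: L1-normalized histograms of visual
# words pooled over two observed action segments.
a = np.array([4, 0, 2, 1, 3], dtype=float); a /= a.sum()
b = np.array([3, 1, 2, 0, 4], dtype=float); b /= b.sum()
print(chi2_kernel(a, b))  # similarity in (0, 1]; 1.0 for identical histograms
```

Such a kernel matrix over pairs of observed actions could then feed any kernel method (e.g., an SVM) for recognition; which classifier the paper actually uses is not stated in this record.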
Place, publisher, year, edition, pages
2011, pp. 2028-2035
Series: IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category: Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-50663
DOI: 10.1109/IROS.2011.6094567
ISI: 000297477502058
ScopusID: 2-s2.0-84455207416
ISBN: 978-1-61284-454-1
OAI: oai:DiVA.org:kth-50663
DiVA: diva2:462369
Conference: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 25-30, 2011, San Francisco, CA, USA
QC 20111207. Available from: 2011-12-07. Created: 2011-12-07. Last updated: 2012-04-03. Bibliographically approved.