Learning visual forward models to compensate for self-induced image motion
2014 (English). In: 23rd IEEE International Conference on Robot and Human Interactive Communication: IEEE RO-MAN, IEEE, 2014, pp. 1110-1115. Conference paper (Refereed)
Predicting the sensory consequences of an agent's own actions is considered an important skill for intelligent behavior. In terms of vision, so-called visual forward models can be applied to learn such predictions. This is no trivial task given the high dimensionality of sensory data and complex action spaces. In this work, we propose to learn the visual consequences of changes in pan and tilt of a robotic head using a visual forward model based on Gaussian processes and SURF correspondences. This is done without any assumptions on the kinematics of the system or requirements on calibration. The proposed method is compared to earlier work using accumulator-based correspondences and Radial Basis Function networks. We also show the feasibility of the proposed method for detecting independent motion with a moving camera system. By comparing the predicted and actual captured images, image motion due to the robot's own actions can be distinguished from motion caused by moving external objects. Results show the proposed method to be preferable to the earlier method in terms of both prediction error and the ability to detect independent motion.
Place, publisher, year, edition, pages
IEEE, 2014. pp. 1110-1115.
Identifiers
URN: urn:nbn:se:kth:diva-158120
DOI: 10.1109/ROMAN.2014.6926400
ScopusID: 2-s2.0-84937605949
ISBN: 978-1-4799-6763-6
OAI: oai:DiVA.org:kth-158120
DiVA: diva2:774382
23rd IEEE International Conference on Robot and Human Interactive Communication: IEEE RO-MAN: August 25-29, 2014, Edinburgh, Scotland, UK
QC 20150407. Available from: 2014-12-22. Created: 2014-12-22. Last updated: 2016-04-15. Bibliographically approved