Learning visual forward models to compensate for self-induced image motion
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0001-6738-9872
Wageningen University, The Netherlands.
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0002-4266-6746
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0003-0579-3372
2014 (English). In: 23rd IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN), IEEE, 2014, pp. 1110-1115. Conference paper, published paper (refereed)
Abstract [en]

Predicting the sensory consequences of an agent's own actions is considered an important skill for intelligent behavior. In terms of vision, so-called visual forward models can be applied to learn such predictions. This is no trivial task given the high dimensionality of sensory data and complex action spaces. In this work, we propose to learn the visual consequences of changes in pan and tilt of a robotic head using a visual forward model based on Gaussian processes and SURF correspondences. This is done without any assumptions on the kinematics of the system or requirements on calibration. The proposed method is compared to an earlier approach using accumulator-based correspondences and Radial Basis Function networks. We also show the feasibility of the proposed method for detection of independent motion using a moving camera system. By comparing predicted and actually captured images, image motion due to the robot's own actions and motion caused by moving external objects can be distinguished. Results show the proposed method to be preferable to the earlier method in terms of both prediction errors and the ability to detect independent motion.
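
The record contains no code, but a minimal sketch of the idea described in the abstract, assuming scikit-learn's Gaussian process regressor and synthetic keypoint correspondences in place of real SURF matches, might look as follows. The toy camera model, kernel length scales, and the pixel threshold are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Training inputs: (pan_delta, tilt_delta, x, y) for a keypoint seen before
    # a head movement; targets: its image displacement (dx, dy) afterwards.
    # The paper obtains such pairs from SURF correspondences; here they are
    # faked with a toy linear camera model plus noise, purely for illustration.
    rng = np.random.default_rng(0)
    n = 500
    pan = rng.uniform(-0.2, 0.2, n)          # commanded pan change (rad)
    tilt = rng.uniform(-0.2, 0.2, n)         # commanded tilt change (rad)
    x = rng.uniform(0.0, 640.0, n)           # keypoint position before the move
    y = rng.uniform(0.0, 480.0, n)
    f = 500.0                                # focal length of the toy model (px)
    dx = -f * pan + rng.normal(0.0, 1.0, n)  # toy ground-truth displacements
    dy = -f * tilt + rng.normal(0.0, 1.0, n)

    X = np.column_stack([pan, tilt, x, y])
    Y = np.column_stack([dx, dy])

    # Visual forward model: GP regression from (action, position) to displacement.
    kernel = RBF(length_scale=[0.1, 0.1, 200.0, 200.0]) + WhiteKernel(noise_level=1.0)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y)

    # Independent-motion detection: predict where a keypoint should move given
    # the robot's own action, then flag correspondences whose observed
    # displacement deviates strongly from the prediction.
    query = np.array([[0.1, -0.05, 320.0, 240.0]])  # action + keypoint position
    predicted = gp.predict(query)                   # expected (dx, dy)
    observed = np.array([[-35.0, 25.0]])            # e.g. an externally moving object
    residual = np.linalg.norm(observed - predicted, axis=1)
    moving = residual > 10.0                        # hypothetical pixel threshold
    print(predicted, residual, moving)

Because the model requires only action-correspondence pairs, no kinematic model or camera calibration enters the sketch; the GP's predictive uncertainty could also replace the fixed residual threshold.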

Place, publisher, year, edition, pages
IEEE, 2014, pp. 1110-1115.
National Category
Robotics
Identifiers
URN: urn:nbn:se:kth:diva-158120
DOI: 10.1109/ROMAN.2014.6926400
Scopus ID: 2-s2.0-84937605949
ISBN: 978-1-4799-6763-6 (print)
OAI: oai:DiVA.org:kth-158120
DiVA: diva2:774382
Conference
23rd IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN), August 25-29, 2014, Edinburgh, Scotland, UK
Note

QC 20150407

Available from: 2014-12-22. Created: 2014-12-22. Last updated: 2016-04-15. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text, Scopus

Authority records

Ghadirzadeh, Ali; Maki, Atsuto; Björkman, Mårten
