Learning to Disambiguate Object Hypotheses through Self-Exploration
2014 (English). In: 14th IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2014. Conference paper (Refereed)
We present a probabilistic learning framework for forming object hypotheses through interaction with the environment. A robot learns to manipulate objects through pushing actions in order to identify how many objects are present in the scene. We use a segmentation system that initializes object hypotheses from RGBD data and adopt a reinforcement learning approach to learn the relations between pushing actions and their effects on object segmentations. The trained models are used to generate actions that require the minimum number of pushes on object groups, until either an object separation event is observed or it is ensured that only a single object is being acted on. Baseline experiments show that a reinforcement-learning-based policy for action selection results in fewer pushes than selecting pushing actions at random.
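The core idea of the abstract, learning which push to apply so that fewer pushes are needed than random selection, can be sketched as a simple reinforcement learning loop. This is a minimal illustrative sketch only: the action set, the toy push simulator, and the single-state Q-value update are assumptions for demonstration, not the paper's actual implementation.

```python
import random

# Hypothetical discrete push actions; the paper does not specify its action set.
PUSH_ACTIONS = ["push_left", "push_right", "push_forward", "push_back"]

def simulate_push(action, rng):
    """Toy stand-in for the robot/segmentation pipeline: each push type
    triggers an object-separation event with an assumed probability."""
    p_separate = {"push_left": 0.7, "push_right": 0.3,
                  "push_forward": 0.2, "push_back": 0.1}[action]
    return rng.random() < p_separate  # True = separation event observed

def train_policy(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Epsilon-greedy Q-learning over a single state (a bandit-style sketch):
    reward separations, slightly penalize wasted pushes."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in PUSH_ACTIONS}
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(PUSH_ACTIONS)   # explore
        else:
            a = max(q, key=q.get)          # exploit current estimate
        reward = 1.0 if simulate_push(a, rng) else -0.1
        q[a] += alpha * (reward - q[a])    # incremental value update
    return q

if __name__ == "__main__":
    q = train_policy()
    print(max(q, key=q.get))  # push with the highest learned value
```

A policy trained this way concentrates on the pushes most likely to separate object groups, which is the mechanism behind the paper's claim of fewer pushes than random selection; the real system conditions the choice on the observed segmentation rather than a single state.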
Place, publisher, year, edition, pages
IEEE Computer Society, 2014.
Keywords
Anthropomorphic Robots, Reinforcement Learning, Action Selection, Object Groups, Object Segmentation, Object Separation, Policy-Based, Probabilistic Learning, Segmentation System
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-165630
DOI: 10.1109/HUMANOIDS.2014.7041418
ScopusID: 2-s2.0-84945190036
ISBN: 978-147997174-9
OAI: oai:DiVA.org:kth-165630
DiVA: diva2:808725
14th IEEE-RAS International Conference on Humanoid Robots (Humanoids), November 18-20, 2014, Madrid, Spain
QC 20160203. Available from: 2015-04-29. Created: 2015-04-29. Last updated: 2016-02-03. Bibliographically approved