A sensorimotor reinforcement learning framework for physical human-robot interaction
2016 (English). In: IEEE International Conference on Intelligent Robots and Systems, IEEE, 2016, pp. 2682-2688. Conference paper (Refereed).
Modeling physical human-robot collaboration is generally a challenging problem due to the unpredictable nature of human behavior. To address this issue, we present a data-efficient reinforcement learning framework that enables a robot to learn how to collaborate with a human partner. The robot learns the task from its own sensorimotor experiences in an unsupervised manner. The uncertainty in the interaction is modeled using Gaussian processes (GP), which implement both a forward model and an action-value function. Optimal action selection under the uncertain GP model is ensured by Bayesian optimization. We apply the framework to a scenario in which a human and a PR2 robot jointly control the position of a ball on a plank based on vision and force/torque data. Our experimental results demonstrate the suitability of the proposed method in terms of fast and data-efficient model learning, optimal action selection under uncertainty, and equal role sharing between the partners.
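The sketch below illustrates the general idea of combining a GP model with Bayesian-optimization-style action selection, as described in the abstract. It is not the authors' implementation: the use of scikit-learn, the one-dimensional state and action, the toy reward, and the UCB acquisition rule are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): a GP approximation of an
# action-value function Q(s, a), with actions chosen by maximizing a UCB
# acquisition over the GP posterior, in the spirit of Bayesian optimization.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical 1-D state (ball position error) and 1-D action (plank tilt command).
X = rng.uniform(-1.0, 1.0, size=(30, 2))       # columns: [state, action]
y = -(X[:, 0] + 0.5 * X[:, 1]) ** 2             # toy reward: drive the ball toward the centre

# Fit the GP action-value model on observed (state, action, reward) data.
q_model = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
).fit(X, y)

def select_action(state, candidates, beta=2.0):
    """Pick the candidate action maximizing mean + beta * std under the GP posterior."""
    queries = np.column_stack([np.full_like(candidates, state), candidates])
    mean, std = q_model.predict(queries, return_std=True)
    return candidates[np.argmax(mean + beta * std)]

# Example: choose a tilt command for the current ball-position error.
print(select_action(state=0.3, candidates=np.linspace(-1.0, 1.0, 41)))
```

The exploration weight beta trades off exploiting the GP mean against exploring actions with high posterior uncertainty; the actual acquisition function and state/action representation used in the paper may differ.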
Place, publisher, year, edition, pages
IEEE, 2016, pp. 2682-2688.
Keywords
Behavioral research, Intelligent robots, Reinforcement learning, Robots, Bayesian optimization, Forward modeling, Gaussian process, Human behaviors, Human-robot collaboration, Model learning, Optimal actions, Physical human-robot interactions, Human robot interaction
Identifiers
URN: urn:nbn:se:kth:diva-202121
DOI: 10.1109/IROS.2016.7759417
ISI: 000391921702127
Scopus ID: 2-s2.0-85006367922
ISBN: 9781509037629
OAI: oai:DiVA.org:kth-202121
DiVA: diva2:1077669
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2016, 9 October 2016 through 14 October 2016
QC 20170228. Bibliographically approved.