State-of-the-art robots are not yet fully equipped to automatically correct their policies when they encounter new situations during deployment. We argue that, in common everyday robot tasks, failures can often be resolved with knowledge that non-experts are able to provide. Our research aims to integrate elements of formal synthesis approaches into computational human-robot interaction in order to develop verifiable robots that can correct their policies on the fly using non-expert feedback. Preliminary results from two online studies show that non-experts can indeed correct failures and that robots can use their feedback to automatically synthesize correction mechanisms that avoid such failures.