Spontaneous spoken dialogues with the Furhat human-like robot head
2014 (English). In: HRI '14: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 2014, p. 326. Conference paper (Refereed)
This demonstrator presents an advanced multimodal and multiparty spoken conversational system using Furhat, an anthropomorphic robot head whose face is animated by back-projecting facial animation onto a physical mask. In the system, multimodality is enabled through speech together with rich visual input signals such as multi-person real-time face tracking and microphone tracking. The demonstrator showcases a system that can carry out social dialogue with multiple interlocutors simultaneously, producing rich output signals such as coordinated eye and head movements, lip-synchronized speech synthesis, and non-verbal facial gestures used to regulate fluent and expressive multiparty conversations. The dialogue design is performed using the IrisTK dialogue authoring toolkit developed at KTH. The system can also act as the moderator of a quiz game, demonstrating different strategies for regulating spoken situated interactions.
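To make the turn-regulation idea in the abstract concrete, here is a minimal sketch of a state-machine quiz moderator for multiparty interaction. This is an illustrative stand-in, not IrisTK's actual API: the player names, event methods, and the `gesture:hold-turn` signal are all hypothetical, and the face-tracking and speech input the paper describes are abstracted into plain function arguments.

```python
# Hypothetical sketch of a multiparty quiz moderator (not IrisTK code).
# The addressee is the player the robot's gaze and head would target;
# answers from non-addressed players trigger a turn-regulating gesture.

class QuizModerator:
    def __init__(self, players):
        self.players = list(players)  # interlocutors identified by face tracking
        self.turn = 0                 # index of the currently addressed player
        self.state = "ask"            # dialogue state: "ask" -> "listen" -> "ask"

    def addressee(self):
        # The player currently being addressed (gaze target).
        return self.players[self.turn]

    def on_question(self, question):
        # Moderator asks a question, directing it to one player.
        assert self.state == "ask"
        self.state = "listen"
        return f"{self.addressee()}, {question}"

    def on_answer(self, speaker, correct):
        # Only the addressed player's answer advances the game; other
        # speakers receive a non-verbal hold-turn gesture instead.
        if self.state != "listen" or speaker != self.addressee():
            return "gesture:hold-turn"
        self.state = "ask"
        self.turn = (self.turn + 1) % len(self.players)
        return "Correct!" if correct else "Not quite."


if __name__ == "__main__":
    m = QuizModerator(["Anna", "Ben"])
    print(m.on_question("what is the capital of Sweden?"))
    print(m.on_answer("Ben", True))    # not addressed: hold-turn gesture
    print(m.on_answer("Anna", True))   # addressed player answers
```

In the actual system such logic is authored as IrisTK dialogue flows, with the gaze, gesture, and speech output realized through Furhat's projected facial animation.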
Place, publisher, year, edition, pages
Bielefeld, Germany, 2014. p. 326.
Human-robot interaction, Multiparty interaction, Human-robot collaboration, Spoken dialog, Furhat robot, Conversational management
Computer Science; Language Technology (Computational Linguistics)
Identifiers: URN: urn:nbn:se:kth:diva-158150; DOI: 10.1145/2559636.2559781; OAI: oai:DiVA.org:kth-158150; DiVA: diva2:774992
HRI '14: 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, March 3-6, 2014