Spontaneous spoken dialogues with the Furhat human-like robot head
2014 (English). In: HRI '14: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 2014, p. 326. Conference paper, published paper (refereed)
Abstract [en]
This demonstrator presents an advanced multimodal, multiparty spoken conversational system built around Furhat, an anthropomorphic robot head whose face is rendered with back-projected facial animation. Multimodal input combines speech with rich visual signals, including multi-person real-time face tracking and microphone tracking. The system carries out social dialogue with several interlocutors simultaneously, producing rich output signals such as coordinated eye and head movement, lip-synchronized speech synthesis, and non-verbal facial gestures that regulate fluent and expressive multiparty conversation. The dialogue is authored with the IrisTK [4] dialogue authoring toolkit developed at KTH. The system can also act as moderator in a quiz game, demonstrating different strategies for regulating spoken situated interaction.
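To illustrate the kind of turn-regulation logic the abstract describes, the following is a minimal hypothetical sketch (not the IrisTK API; all names are illustrative) of how face-tracking and microphone signals could be combined to pick which interlocutor the robot addresses next:

```python
# Hypothetical sketch of multiparty addressee selection; the User fields
# stand in for the face-tracking and microphone-tracking signals the
# abstract mentions. This is not the IrisTK API.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    is_speaking: bool    # assumed output of microphone tracking
    facing_robot: bool   # assumed output of real-time face tracking

def select_addressee(users):
    """Prefer a user who is speaking while facing the robot; otherwise
    any user facing the robot; otherwise fall back to the first user."""
    for u in users:
        if u.is_speaking and u.facing_robot:
            return u
    for u in users:
        if u.facing_robot:
            return u
    return users[0]

users = [User("anna", False, True), User("ben", True, True)]
print(select_addressee(users).name)  # ben
```

In the actual system this decision would additionally drive the eye and head coordination described above, so that the robot's gaze signals whose turn it is.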
Place, publisher, year, edition, pages
Bielefeld, Germany, 2014. p. 326
Keywords [en]
Human-Robot Interaction, Multiparty interaction, human-robot collaboration, Spoken dialog, Furhat robot, conversational management
National Category
Computer Sciences; Natural Language Processing
Identifiers
URN: urn:nbn:se:kth:diva-158150
DOI: 10.1145/2559636.2559781
ISI: 000455229400135
OAI: oai:DiVA.org:kth-158150
DiVA, id: diva2:774992
Conference
HRI '14: 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, March 3-6, 2014
Note
QC 20150203
Bibliographically approved