Multimodal Multiparty Social Interaction with the Furhat Head
2012 (English) Conference paper (Refereed)
In this demonstrator we will show an advanced multimodal and multiparty spoken conversational system using Furhat, a robot head based on projected facial animation. Furhat is a human-like interface that uses back-projected facial animation on a physical robot head. Multimodality in the system is enabled through speech and rich visual input signals such as multi-person real-time face tracking and microphone tracking. The demonstrator showcases a system able to carry out social dialogue with multiple interlocutors simultaneously, with rich output signals such as eye and head coordination, lip-synchronized speech synthesis, and non-verbal facial gestures used to regulate fluent and expressive multiparty conversations.
Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2012. pp. 293-294
Keywords: Multiparty interaction; Gaze; Gesture; Speech; Spoken dialog; Multimodal systems; Facial animation; Robot head; Furhat; Microphone tracking
Identifiers
URN: urn:nbn:se:kth:diva-107015
DOI: 10.1145/2388676.2388736
ISI: 000321926300049
Scopus ID: 2-s2.0-84870224296
OAI: oai:DiVA.org:kth-107015
DiVA: diva2:574359
Conference: 14th ACM International Conference on Multimodal Interaction (ICMI 2012), Santa Monica, CA
QC 20161019. Available from: 2012-12-05. Created: 2012-12-05. Last updated: 2016-10-19. Bibliographically approved.