Multimodal Multiparty Social Interaction with the Furhat Head
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0002-8579-1790
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0003-1399-6604
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0002-0861-8660
2012 (English). Conference paper, Published paper (Refereed)
Abstract [en]

This demonstrator shows an advanced multimodal, multiparty spoken conversational system built around Furhat, a robot head that presents a human-like interface through back-projected facial animation. In the system, multimodality is enabled through speech and rich visual input signals such as multi-person real-time face tracking and microphone tracking. The demonstrator showcases a system able to carry out social dialogue with multiple interlocutors simultaneously, using rich output signals such as coordinated eye and head movement, lip-synchronized speech synthesis, and non-verbal facial gestures to regulate fluent and expressive multiparty conversation.
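
To illustrate the kind of input fusion the abstract describes, the sketch below combines face-tracking and microphone-tracking estimates to choose which interlocutor the head should gaze at. This is not the paper's implementation; every name, field, and threshold here is a hypothetical assumption for illustration only.

    from dataclasses import dataclass

    @dataclass
    class Person:
        ident: str          # hypothetical tracker-assigned person id
        face_angle: float   # horizontal angle from the face tracker, degrees
        speech_prob: float  # chance this person is speaking, from mic tracking

    def pick_gaze_target(people, current_target=None, switch_margin=0.15):
        """Prefer the likely speaker, with hysteresis so gaze
        does not flicker between interlocutors."""
        def score(p):
            s = p.speech_prob
            if current_target is not None and p.ident == current_target:
                s += switch_margin  # bias toward the current addressee
            return s
        return max(people, key=score)

    # Example: the person on the right is most likely speaking.
    people = [Person("left", -30.0, 0.2), Person("right", 25.0, 0.7)]
    target = pick_gaze_target(people)
    print(f"gaze at {target.ident} (angle {target.face_angle:.0f} deg)")

The hysteresis margin stands in for the regulation the abstract mentions: a multiparty system must avoid switching gaze on every momentary fluctuation of the audio signal.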

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2012. p. 293-294
Keywords [en]
Multiparty interaction; Gaze; Gesture; Speech; Spoken dialog; Multimodal systems; Facial animation; Robot head; Furhat; Microphone Tracking
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-107015
DOI: 10.1145/2388676.2388736
ISI: 000321926300049
Scopus ID: 2-s2.0-84870224296
OAI: oai:DiVA.org:kth-107015
DiVA, id: diva2:574359
Conference
14th ACM International Conference on Multimodal Interaction (ICMI 2012), Santa Monica, CA, USA
Note

QC 20161019

Available from: 2012-12-05. Created: 2012-12-05. Last updated: 2024-03-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Al Moubayed, Samer; Skantze, Gabriel; Beskow, Jonas; Stefanov, Kalin; Gustafson, Joakim
