The Furhat Back-Projected Humanoid Head: Lip Reading, Gaze and Multi-Party Interaction
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-8579-1790
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-1399-6604
2013 (English). In: International Journal of Humanoid Robotics, ISSN 0219-8436, Vol. 10, no. 1, p. 1350005. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we present Furhat, a back-projected human-like robot head using state-of-the-art facial animation. Three experiments are presented in which we investigate how the head might facilitate human-robot face-to-face interaction. First, we investigate how the animated lips increase the intelligibility of the spoken output, and compare this to an animated agent presented on a flat screen, as well as to a human face. Second, we investigate the accuracy of the perception of Furhat's gaze in a setting typical for situated interaction, where Furhat and a human are sitting around a table. The accuracy of the perception of Furhat's gaze is measured as a function of eye design, head movement and viewing angle. Third, we investigate the turn-taking accuracy of Furhat in a multi-party interactive setting, as compared to an animated agent on a flat screen. We conclude with some observations from a public setting at a museum, where Furhat interacted with thousands of visitors in a multi-party interaction.

Place, publisher, year, edition, pages
2013. Vol. 10, no. 1, p. 1350005.
Keyword [en]
Robot head, humanoid, android, facial animation, talking heads, gaze, Mona Lisa effect, avatar, dialog system, situated interaction, back-projection, gaze perception, Furhat, lip reading, multimodal interaction, multiparty interaction
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-122350
DOI: 10.1142/S0219843613500059
ISI: 000317311600002
Scopus ID: 2-s2.0-84880650273
OAI: oai:DiVA.org:kth-122350
DiVA: diva2:622105
Note

QC 20130520

Available from: 2013-05-20. Created: 2013-05-20. Last updated: 2013-05-20. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Authority records

Skantze, Gabriel; Beskow, Jonas

Search in DiVA

By author/editor
Al Moubayed, Samer; Skantze, Gabriel; Beskow, Jonas
By organisation
Speech, Music and Hearing, TMH
On the subject
Engineering and Technology
