Multimodal Multiparty Social Interaction with the Furhat Head
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication.
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication. ORCID iD: 0000-0002-8579-1790
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication. ORCID iD: 0000-0003-1399-6604
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication. ORCID iD: 0000-0002-0861-8660
Show others and affiliations
2012 (English) Conference paper, Published paper (Refereed)
Abstract [en]

In this demonstrator we will show an advanced multimodal, multiparty spoken conversational system using Furhat, a robot head based on projected facial animation. Furhat is a human-like interface that delivers facial animation on a physical robot head using back-projection. In the system, multimodality is enabled using speech together with rich input signals such as multi-person real-time face tracking and microphone tracking. The demonstrator will showcase a system that can carry out social dialogue with multiple interlocutors simultaneously, using rich output signals such as eye and head coordination, lip-synchronized speech synthesis, and non-verbal facial gestures to regulate fluent and expressive multiparty conversations.
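The record contains no code, but the abstract outlines the system's input/output architecture: multi-person face tracking and microphone tracking in, gaze and head coordination plus lip-synchronized synthesis out. The Python sketch below is a minimal, purely illustrative example of how such inputs could be fused to decide whom the head should attend to; every name, type, and threshold is a hypothetical assumption and not part of the actual Furhat software.

from dataclasses import dataclass

# Hypothetical data structures; the real Furhat system's APIs are not described
# in this record, so everything below is an illustrative assumption.

@dataclass
class TrackedFace:
    person_id: str
    azimuth_deg: float   # horizontal angle of the tracked face relative to the head

@dataclass
class AudioEstimate:
    azimuth_deg: float   # direction of arrival estimated from microphone tracking
    is_speech: bool      # whether speech activity was detected

def pick_active_speaker(faces, audio, max_offset_deg=20.0):
    """Attribute detected speech to the tracked face closest in angle."""
    if not audio.is_speech or not faces:
        return None
    closest = min(faces, key=lambda f: abs(f.azimuth_deg - audio.azimuth_deg))
    if abs(closest.azimuth_deg - audio.azimuth_deg) <= max_offset_deg:
        return closest
    return None

def gaze_target(faces, audio):
    """Return the azimuth the eyes/head should turn towards:
    the active speaker if one is found, otherwise an idle straight-ahead pose."""
    speaker = pick_active_speaker(faces, audio)
    return speaker.azimuth_deg if speaker else 0.0

if __name__ == "__main__":
    faces = [TrackedFace("A", -30.0), TrackedFace("B", 25.0)]
    audio = AudioEstimate(azimuth_deg=22.0, is_speech=True)
    speaker = pick_active_speaker(faces, audio)
    print("active speaker:", speaker.person_id if speaker else None)
    print("gaze target (deg):", gaze_target(faces, audio))

In the actual demonstrator the chosen addressee would additionally drive lip-synchronized speech synthesis and non-verbal facial gestures, which this sketch does not attempt to model.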

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2012. pp. 293-294
Keywords [en]
Multiparty interaction; Gaze; Gesture; Speech; Spoken dialog; Multimodal systems; Facial animation; Robot head; Furhat; Microphone tracking
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-107015
DOI: 10.1145/2388676.2388736
ISI: 000321926300049
Scopus ID: 2-s2.0-84870224296
OAI: oai:DiVA.org:kth-107015
DiVA, id: diva2:574359
Conference
14th ACM International Conference on Multimodal Interaction, Santa Monica, CA
Note

QC 20161019

Available from: 2012-12-05 Created: 2012-12-05 Last updated: 2018-01-12 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text
Scopus

Person records BETA

Skantze, Gabriel
Beskow, Jonas
Stefanov, Kalin
Gustafson, Joakim

Search in DiVA

By author/editor
Al Moubayed, Samer
Skantze, Gabriel
Beskow, Jonas
Stefanov, Kalin
Gustafson, Joakim
By organisation
