SynFace - Verbal and Non-verbal Face Animation from Audio
2009 (English) In: Auditory-Visual Speech Processing 2009, AVSP 2009, The International Society for Computers and Their Applications (ISCA), 2009. Conference paper, Published paper (Refereed)
Abstract [en]
We give an overview of SynFace, a speech-driven face animation system originally developed for the needs of hard-of-hearing users of the telephone. For the 2009 LIPS challenge, SynFace includes not only articulatory motion but also non-verbal motion of gaze, eyebrows and head, triggered by detection of acoustic correlates of prominence and cues for interaction control. In perceptual evaluations, both verbal and non-verbal movements have been found to have a positive impact on word recognition scores.
Place, publisher, year, edition, pages
The International Society for Computers and Their Applications (ISCA), 2009.
Keywords [en]
Animation, Acoustic correlates, Animation systems, Face animation, Hard of hearings, Interaction controls, Perceptual evaluation, Speech-driven face animation, Word recognition scores, Audition
National Category
Natural Language Processing
Identifiers
URN: urn:nbn:se:kth:diva-325328
Scopus ID: 2-s2.0-85133440639
OAI: oai:DiVA.org:kth-325328
DiVA, id: diva2:1748770
Conference
2009 International Conference on Auditory-Visual Speech Processing, AVSP 2009, 10 September 2009 through 13 September 2009
Note
QC 20230404
Available from: 2023-04-04. Created: 2023-04-04. Last updated: 2025-02-07. Bibliographically approved.