Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-7801-7617
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-1643-1054
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-9838-8848
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-1399-6604
2020 (English). In: Computer Graphics Forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no. 2, pp. 487-496. Article in journal (Refereed). Published.
Abstract [en]

Automatic synthesis of realistic gestures promises to transform the fields of animation, avatars and communicative agents. In off-line applications, novel tools can alter the role of an animator to that of a director, who provides only high-level input for the desired animation; a learned network then translates these instructions into an appropriate sequence of body poses. In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters. In this paper we address some of the core issues towards these ends. By adapting a deep learning-based motion synthesis method called MoGlow, we propose a new generative model for producing state-of-the-art realistic speech-driven gesticulation. Owing to the probabilistic nature of the approach, our model can produce a battery of different, yet plausible, gestures given the same input speech signal. This yields rich and natural variation in the motion, just as observed in humans. We additionally demonstrate the ability to exert directorial control over the output style, such as gesture level, speed, symmetry and spatial extent. Such control can be leveraged to convey a desired character personality or mood. We achieve all this without any manual annotation of the data. User studies evaluating upper-body gesticulation confirm that the generated motions are natural and match the input speech well. Our method scores above all prior systems and baselines on these measures, and comes close to the ratings of the original recorded motions. We furthermore find that we can accurately control gesticulation styles without unnecessarily compromising perceived naturalness. Finally, we also demonstrate an application of the same method to full-body gesticulation, including the synthesis of stepping motion and stance.
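The abstract describes a conditional normalising-flow model (an adaptation of MoGlow) whose probabilistic sampling yields many different gestures from the same speech, with style exposed as extra conditioning. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: a single speech- and style-conditioned affine coupling step in PyTorch, where the class name `ConditionalAffineCoupling`, the network layout and the feature dimensions are all assumptions introduced purely for illustration.

```python
# Hypothetical sketch (not the authors' code): one speech- and style-conditioned
# affine coupling step, the basic building block of a normalising flow.
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Transforms half of the pose vector with a scale and shift predicted
    from the other half plus the conditioning (speech features concatenated
    with a style-control vector). The transform is invertible by construction."""

    def __init__(self, pose_dim: int, cond_dim: int, hidden: int = 256):
        super().__init__()
        self.half = pose_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (pose_dim - self.half)),
        )

    def forward(self, x, cond):
        # pose -> latent direction (used during likelihood training)
        xa, xb = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(torch.cat([xa, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)              # keep scales numerically tame
        yb = xb * torch.exp(log_s) + t
        return torch.cat([xa, yb], dim=-1), log_s.sum(dim=-1)  # log-det term

    def inverse(self, y, cond):
        # latent -> pose direction (used when sampling gestures)
        ya, yb = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(torch.cat([ya, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)
        xb = (yb - t) * torch.exp(-log_s)
        return torch.cat([ya, xb], dim=-1)

# Sampling: identical speech + style conditioning, different latent noise,
# hence different but plausible output poses (illustrative sizes only).
pose_dim, cond_dim = 45, 32
flow = ConditionalAffineCoupling(pose_dim, cond_dim)
cond = torch.randn(1, cond_dim).repeat(4, 1)   # same conditioning, 4 samples
z = torch.randn(4, pose_dim)                   # 4 different noise draws
poses = flow.inverse(z, cond)
print(poses.shape)                             # torch.Size([4, 45])
```

Because generation maps Gaussian noise through an invertible, conditioned transform, drawing several noise vectors under identical speech and style conditioning produces distinct but plausible poses; this is the mechanism behind the natural variation and directorial style control the abstract refers to.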

Place, publisher, year, edition, pages
Wiley, 2020. Vol. 39, no. 2, pp. 487-496.
Keywords [en]
CCS Concepts, Computing methodologies, Motion capture, Animation, Neural networks, Gestures, Motion capture, Data-driven animation, Character control, Probabilistic models, WASP_publications
National Category
Computer Sciences; Human Computer Interaction; Natural Language Processing
Research subject
Human-computer Interaction; Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-279231
DOI: 10.1111/cgf.13946
ISI: 000548709600040
Scopus ID: 2-s2.0-85087758857
OAI: oai:DiVA.org:kth-279231
DiVA id: diva2:1499133
Conference
41st Annual Conference of the European Association for Computer Graphics (EUROGRAPHICS), May 25-29, 2020, Norrköping, Sweden
Funder
Swedish Research Council, 2018-05409
Swedish Foundation for Strategic Research, RIT15-0107
Knut and Alice Wallenberg Foundation, WASP
Note

QC 20211011

Available from: 2020-11-06. Created: 2020-11-06. Last updated: 2025-02-01. Bibliographically approved.

Open Access in DiVA

fulltext (9280 kB), 1456 downloads
File information
File name: FULLTEXT01.pdf. File size: 9280 kB. Checksum (SHA-512):
22c02c10867082009b0283da8374be127b467be6f6388b6570f32da6715da49c799d5c4fc9d2df179d903b29728366df95844694d76e876826f95c427a05489a
Type: fulltext. Mimetype: application/pdf
erratum (1288 kB), 155 downloads
File information
File name: FULLTEXT02.pdf. File size: 1288 kB. Checksum (SHA-512):
6ed2d42ba38e294c59efc44135816a5688d3cecde232b3e415b84f7b928b8e965b6b555b3f44aacc86dabc9901a5e2e1aff2eb08bf83966eb9b4a184b0e317b5
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text; Scopus; Eurographics Digital Library collection, with supplements; Free full-text

Authority records

Alexanderson, Simon; Henter, Gustav Eje; Kucherenko, Taras; Beskow, Jonas

Total: 1612 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
