Preliminary cross-cultural evaluation of expressiveness in synthetic faces
KTH, Superseded Departments, Speech, Music and Hearing. ORCID iD: 0000-0003-1399-6604
KTH, Superseded Departments, Speech, Music and Hearing.
2004 (English). In: Affective Dialogue Systems, Proceedings / [ed] Andre E, Dybkjaer L, Minker W, Heisterkamp P. Berlin: Springer-Verlag, 2004, pp. 301-304. Conference paper, Published paper (Refereed).
Abstract [en]

This paper reports the results of a preliminary cross-evaluation experiment run in the framework of the European research project PF-Star(1), with the double aim of evaluating the possibility of exchanging FAP data between the involved sites and assessing the adequacy of the emotional facial gestures performed by talking heads. The results provide initial insights into the way people belonging to various cultures react to natural and synthetic facial expressions produced in different cultural settings, and into the potentials and limits of FAP data exchange.
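
The FAP data referred to above are the MPEG-4 Facial Animation Parameters. As a minimal sketch of why such exchange is possible in principle, the snippet below converts a low-level FAP value into a displacement using the receiving model's FAPU (Facial Animation Parameter Units); the function name, the example FAPU table, and the sample values are illustrative assumptions, not PF-Star code.

```python
# Hedged sketch (assumed names and values, not PF-Star code). MPEG-4
# expresses each low-level FAP as a multiple of a FAPU -- a distance
# derived from the face model itself, e.g. MW = mouth width,
# MNS = mouth-nose separation, ES = eye separation -- quantised in
# 1/1024 steps. The same FAP stream can therefore drive differently
# proportioned faces, which is what makes cross-site exchange feasible.

# FAPU table for one hypothetical face model, in model units.
FAPU = {"ES": 60.0, "IRISD": 11.0, "ENS": 38.0, "MNS": 25.0, "MW": 50.0}

def fap_to_displacement(fap_value: int, unit: str, fapu: dict[str, float]) -> float:
    """Turn an integer FAP value into a displacement for this model."""
    return fap_value * fapu[unit] / 1024.0

# Example: FAP 3 (open_jaw) with value 512 opens the jaw by half an MNS.
print(fap_to_displacement(512, "MNS", FAPU))  # -> 12.5 model units
```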

Place, publisher, year, edition, pages
Berlin: Springer-Verlag, 2004, pp. 301-304.
Series
Lecture Notes in Computer Science, ISSN 0302-9743; 3068
Keyword [en]
Animation, Data acquisition, Data reduction, Face recognition, Image analysis, Productivity
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-6513
ISI: 000222862900031
Scopus ID: 2-s2.0-9444269296
ISBN: 3-540-22143-3 (print)
OAI: oai:DiVA.org:kth-6513
DiVA: diva2:11247
Conference
Tutorial and Research Workshop on Affective Dialogue Systems, Kloster Irsee, Germany, June 14-16, 2004
Note
QC 20101126. Available from: 2006-12-06. Created: 2006-12-06. Last updated: 2011-10-31. Bibliographically approved.
In thesis
1. Expressiveness in virtual talking faces
2006 (English). Licentiate thesis, comprehensive summary (Other scientific).
Abstract [en]

In this thesis, different aspects of how to make synthetic talking faces more expressive have been studied. How can we collect data for such studies? How is lip articulation affected by expressive speech? Can the recorded data be used interchangeably in different face models? Can the agent's eye movements be used for communicative purposes? The thesis addresses these questions and also reports an experiment in which a talking head complements a targeted audio device in order to increase the intelligibility of the speech.

The data collection described in the first paper resulted in two multimodal speech corpora. The subsequent analysis of the recorded data showed that expressive modes strongly affect speech articulation, although further studies are needed to obtain more quantitative results, to cover more phonemes and expressions, and to generalise the results beyond a single speaker.

When the files containing facial animation parameters (FAPs) were exchanged between different face models (and research sites), several problems were encountered even though both face models had been created according to the MPEG-4 standard. The evaluation of the implemented emotional expressions showed that the best recognition results were obtained when the face model and the FAP file originated from the same site.
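
As an illustration of the kind of data being exchanged, the sketch below reads one common plain-text FAP-file layout: a header line, then for each frame a line of 68 0/1 flags marking which FAPs are present, followed by a line whose first token is the frame number and whose remaining tokens are the active values. The layout and names are assumptions; the thesis does not specify the sites' exact file format.

```python
# Hedged sketch of a FAP-file reader; the plain-text layout assumed here
# (header line, then alternating mask and value lines per frame) is common
# in MPEG-4 facial-animation research tools but is not confirmed by the text.
from typing import Dict, Iterator

def read_fap_frames(path: str) -> Iterator[Dict[int, int]]:
    """Yield one {fap_number: value} mapping per animation frame."""
    with open(path) as f:
        rows = [ln.split() for ln in f if ln.strip() and not ln.startswith("#")]
    body = rows[1:]  # rows[0] is the header (e.g. version, name, fps, frames)
    for mask, values in zip(body[0::2], body[1::2]):
        active = [i + 1 for i, flag in enumerate(mask) if flag == "1"]
        yield dict(zip(active, map(int, values[1:])))  # values[0] = frame no.
```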

The perception experiment, in which a synthetic talking head was combined with a targeted-audio parametric loudspeaker, showed that the virtual face augmented the intelligibility of the speech, especially when the sound beam was directed slightly to the side of the listener, i.e. at lower sound intensities.

In the experiment with eye gaze in a virtual talking head, the possibility of achieving mutual gaze with the observer was assessed. The results indicated that this is possible, but they also pointed to some design features of the face model that need to be altered in order to achieve better control of the perceived gaze direction.

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. 23 p.
Series
Trita-CSC-A, ISSN 1653-5723; 2006:28
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-4210
ISBN: 978-91-7178-530-5
Presentation
2006-12-18, Fantum, KTH, Lindstedtsvägen 24 (floor 5), Stockholm, 15:00
Note
QC 20101126. Available from: 2006-12-06. Created: 2006-12-06. Last updated: 2010-11-26. Bibliographically approved.

Open Access in DiVA

No full text

Search in DiVA

By author/editor
Beskow, Jonas; Cerrato, Loredana; Nordstrand, Magnus; Svanfeldt, Gunilla
By organisation
Speech, Music and Hearing
Language Technology (Computational Linguistics)
