Expressiveness in virtual talking faces
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
2006 (English). Licentiate thesis, comprehensive summary (Other scientific)
Abstract [en]

In this thesis, different aspects of how to make synthetic talking faces more expressive have been studied: how can data for such studies be collected, how is lip articulation affected by expressive speech, can recorded data be used interchangeably between different face models, and can the agent's eye movements be used for communicative purposes? The thesis addresses these questions and also reports an experiment in which a talking head was used as a complement to a targeted audio device in order to increase the intelligibility of the speech.

The data collection described in the first paper resulted in two multimodal speech corpora. The subsequent analysis of the recorded data showed that expressive modes strongly affect speech articulation, although further studies are needed to obtain more quantitative results, to cover more phonemes and expressions, and to generalise the results beyond a single speaker.

When exchanging files containing facial animation parameters (FAPs) between different face models (and research sites), several problems were encountered, despite the fact that both face models were created according to the MPEG-4 standard. The evaluation of the implemented emotional expressions showed that the best recognition results were obtained when the face model and the FAP file originated from the same site.
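
One reason such exchanges can fail even between standard-conforming models lies in the normalisation scheme itself: MPEG-4 expresses FAP amplitudes in FAPUs (Facial Animation Parameter Units), which each model derives from its own neutral-face geometry. The Python sketch below is a minimal illustration of this point only; the models, measurements and numbers are invented, not taken from the thesis.

```python
# Minimal sketch: the same MPEG-4 FAP value maps to different absolute
# displacements on two conforming face models, because each model
# computes its FAPUs from its own neutral-face geometry.
from dataclasses import dataclass

@dataclass
class FaceModel:
    name: str
    mouth_width_mm: float  # MW0: mouth width of the neutral face (invented)

    @property
    def mw_fapu(self) -> float:
        # MPEG-4 defines the MW FAPU as neutral mouth width / 1024
        return self.mouth_width_mm / 1024.0

def stretch_cornerlip_mm(model: FaceModel, fap_value: int) -> float:
    """Displacement (mm) that FAP 6 (stretch_l_cornerlip) encodes."""
    return fap_value * model.mw_fapu

site_a = FaceModel("site A", mouth_width_mm=55.0)
site_b = FaceModel("site B", mouth_width_mm=48.0)

fap_value = 300  # identical value read from the exchanged FAP file
print(stretch_cornerlip_mm(site_a, fap_value))  # ~16.1 mm on model A
print(stretch_cornerlip_mm(site_b, fap_value))  # ~14.1 mm on model B
```

Even with correct FAPU scaling, differences in mesh topology and in how each renderer maps FAPs to vertex deformations can further alter the perceived expression, which is consistent with the recognition results reported above.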

The perception experiment in which a synthetic talking head was combined with a targeted audio device (a parametric loudspeaker) showed that the virtual face augmented the intelligibility of the speech, especially when the sound beam was directed slightly to the side of the listener, i.e. at lower sound intensities.

In the experiment with eye gaze in a virtual talking head, the possibility of achieving mutual gaze with the observer was assessed. The results indicated that this is possible, but they also pointed to some design features of the face model that need to be altered in order to achieve better control of the perceived gaze direction.
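
To see why perceived gaze direction is hard to control, consider the underlying geometry: each eye must be rotated toward the observer's position, and any error in the assumed rotation centre of the eyeball shifts the direction the eye appears to look in. The following Python sketch is a hypothetical illustration of that geometry, not the face model's actual control code; all coordinates are invented.

```python
import math

def gaze_angles(eye_centre, target):
    """Yaw and pitch (degrees) that aim an eyeball at a 3-D target.

    Coordinates: x right, y up, z toward the observer; both angles are
    zero when the eye looks straight along the +z axis.
    """
    dx = target[0] - eye_centre[0]
    dy = target[1] - eye_centre[1]
    dz = target[2] - eye_centre[2]
    yaw = math.degrees(math.atan2(dx, dz))
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return yaw, pitch

# Observer 60 cm in front of the face, slightly to its left and above it.
left_eye = (-3.2, 0.0, 0.0)   # eyeball centres in cm, head coordinates
right_eye = (3.2, 0.0, 0.0)
observer = (-5.0, 8.0, 60.0)

# Each eye needs its own angles; the small difference between them is
# the vergence required for mutual gaze at close range.
print(gaze_angles(left_eye, observer))
print(gaze_angles(right_eye, observer))
```

If the model's eyeball pivot does not coincide with the centre assumed by such a computation, both angles are measured from the wrong origin, which is one plausible source of a mismatch between intended and perceived gaze.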

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. 23 p.
Series
Trita-CSC-A, ISSN 1653-5723 ; 2006:28
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-4210
ISBN: 978-91-7178-530-5 (print)
OAI: oai:DiVA.org:kth-4210
DiVA: diva2:11250
Presentation
2006-12-18, Fantum, KTH, Lindstedtsvägen 24, floor 5, Stockholm, 15:00
Note
QC 20101126. Available from: 2006-12-06. Created: 2006-12-06. Last updated: 2010-11-26. Bibliographically approved.
List of papers
1. The Swedish PFs-Star Multimodal Corpora
2004 (English). In: Proceedings of LREC Workshop on Models of Human Behaviour for the Specification and Evaluation of Multimodal Input and Output Interfaces, 2004, 34-37 p. Conference paper, Published paper (Refereed).
Abstract [en]

The aim of this paper is to present the multimodal speech corpora collected at KTH in the framework of the European project PF-Star, and to discuss some of the issues related to the analysis and implementation of human communicative and emotional visual correlates of speech in synthetic conversational agents. Two multimodal speech corpora have been collected by means of an opto-electronic system, which captures the dynamics of emotional facial expressions with very high precision. The data have been evaluated through a classification test, and the results show promising identification rates for the different acted emotions. These multimodal speech corpora represent a valuable source of knowledge about how speech articulation and communicative gestures are affected by the expression of emotions.

Keyword
Multimodal corpora collection and analysis, visual correlates of emotional speech, facial animation
National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-6511 (URN)
Conference
LREC Workshop on Models of Human Behaviour for the Specification and Evaluation of Multimodal Input and Output Interfaces, Lisbon, 25 May 2004
Note
QC 20101126. Available from: 2006-12-06. Created: 2006-12-06. Last updated: 2012-03-22. Bibliographically approved.
2. Measurements of articulatory variation in expressive speech for a set of Swedish vowels
2004 (English). In: Speech Communication, ISSN 0167-6393, Vol. 44, no. 1-4, 187-196 p. Article in journal (Refereed). Published.
Abstract [en]

Facial gestures are used to convey e.g. emotions, dialogue states and conversational signals, which support us in interpreting other people's feelings and intentions. Synthesising this behaviour with an animated talking head would widen the possibilities of this intuitive interface. The dynamic characteristics of these facial gestures during speech affect articulation. Articulation for neutral speech has previously been studied and implemented in animation rules. The results of this study show how some articulatory parameters are affected by expressiveness in speech for a selection of Swedish vowels. Our focus has primarily been on attitudes and emotions conveying information intended to make an animated agent more "human-like". A multimodal corpus of acted expressive speech has been collected for this purpose.
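
As a schematic example of the kind of measurement involved (not the paper's actual analysis pipeline), articulatory variation across expressive modes can be summarised as per-expression statistics of a marker-derived parameter. The parameter name and all numbers below are invented for illustration.

```python
# Per-expression mean and spread of one articulatory parameter,
# e.g. vertical lip opening derived from motion-capture markers.
from statistics import mean, stdev

# lip_opening[expression] = one value (mm) per recorded vowel token
lip_opening = {
    "neutral": [11.2, 10.8, 11.5, 10.9],
    "happy":   [13.4, 14.0, 13.1, 13.8],
    "angry":   [12.6, 12.9, 13.3, 12.2],
}

for expression, values in lip_opening.items():
    print(f"{expression:>8}: mean {mean(values):5.2f} mm, "
          f"sd {stdev(values):4.2f} mm")
```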

Keyword
talking heads, expressive speech, facial gestures, articulation
National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-6512 (URN)
10.1016/j.specom.2004.09.003 (DOI)
000226074500015 ()
2-s2.0-10444267340 (Scopus ID)
Note
QC 20101126, QC 20110922. Workshop on Audio-Visual Speech Processing, St Jorioz, France, 2003. Available from: 2006-12-06. Created: 2006-12-06. Last updated: 2012-03-22. Bibliographically approved.
3. Preliminary cross-cultural evaluation of expressiveness in synthetic faces
2004 (English). In: Affective Dialogue Systems, Proceedings / [ed] Andre E, Dybkjaer L, Minker W, Heisterkamp P, Berlin: Springer-Verlag, 2004, 301-304 p. Conference paper, Published paper (Refereed).
Abstract [en]

This paper reports the results of a preliminary cross-evaluation experiment run in the framework of the European research project PF-Star, with the double aim of evaluating the possibility of exchanging FAP data between the involved sites and assessing the adequacy of the emotional facial gestures performed by talking heads. The results provide initial insights into the way people from various cultures react to natural and synthetic facial expressions produced in different cultural settings, and into the potentials and limits of FAP data exchange.

Place, publisher, year, edition, pages
Berlin: Springer-Verlag, 2004
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 3068
Keyword
Animation, Data acquisition, Data reduction, Face recognition, Image analysis, Productivity
National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-6513 (URN)
000222862900031 ()
2-s2.0-9444269296 (Scopus ID)
3-540-22143-3 (ISBN)
Conference
Tutorial and Research Workshop on Affective Dialogue Systems, Kloster Irsee, Germany, June 14-16, 2004
Note
QC 20101126. Available from: 2006-12-06. Created: 2006-12-06. Last updated: 2011-10-31. Bibliographically approved.
4. Perception experiment combining a parametric loudspeaker and a synthetic talking head
2005 (English). In: 9th European Conference on Speech Communication and Technology, 2005, 1721-1724 p. Conference paper, Published paper (Refereed).
Abstract [en]

A perception experiment was performed in which the technologies of targeted audio and talking heads were combined. The task was to identify synthesised unvoiced consonants in a vowel context. It was found that the talking head could eliminate some of the confusions between consonants that occurred when the face was not present. The study also made it possible to analyse distortions of the speech signal caused by the targeted audio device.
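
Results of this kind are naturally summarised as a stimulus-response confusion matrix per listening condition. The Python sketch below shows the bookkeeping involved: identification rate per condition and the off-diagonal confusions that the added face can remove. The response data are made up purely for illustration.

```python
from collections import Counter

def recognition_rate(trials):
    """Fraction of trials in which the response matches the stimulus."""
    return sum(stim == resp for stim, resp in trials) / len(trials)

def confusions(trials):
    """Off-diagonal confusion-matrix cells as (stimulus, response) counts."""
    return Counter((stim, resp) for stim, resp in trials if stim != resp)

# Invented /aCa/ identification responses for two listening conditions.
audio_only = [("p", "p"), ("p", "t"), ("t", "k"),
              ("s", "f"), ("f", "f"), ("k", "t")]
audio_plus_face = [("p", "p"), ("p", "p"), ("t", "t"),
                   ("s", "s"), ("f", "f"), ("k", "t")]

print(recognition_rate(audio_only), dict(confusions(audio_only)))
print(recognition_rate(audio_plus_face), dict(confusions(audio_plus_face)))
```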

Keyword
Acoustic signal processing, Audio systems, Distortion (waves), Loudspeakers, Sensory perception
National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-6514 (URN)
2-s2.0-33745222312 (Scopus ID)
Conference
9th European Conference on Speech Communication and Technology, Lisbon, 4-8 September 2005
Note
QC 20101126. Available from: 2006-12-06. Created: 2006-12-06. Last updated: 2010-11-26. Bibliographically approved.
5. Artificial gaze. Perception experiment of eye gaze in synthetic face
2005 (English). In: Proceedings from the Second Nordic Conference on Multimodal Communication, 2005, 257-272 p. Conference paper, Published paper (Refereed).
Abstract [en]

The aim of this study is to investigate people's sensitivity to directional eye gaze, with the long-term goal of improving the naturalness of animated agents. Previous research within psychology has demonstrated the importance of gaze in social interactions, which should therefore be vital to implement in virtual agents. In order to test whether we have the appropriate parameters needed to correctly control gaze in the talking head, and to evaluate users' sensitivity to these parameters, a perception experiment was performed. The results show that it is possible to achieve a state where the subjects perceive that the agent is looking them in the eyes, although this did not always occur when we expected it to.

National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-6515 (URN)
Conference
The Second Nordic Conference on Multimodal Communication, Göteborg, April 7 - 8, 2005
Note
QC 20101126. Available from: 2006-12-06. Created: 2006-12-06. Last updated: 2010-11-26. Bibliographically approved.

Open Access in DiVA

fulltext (240 kB)
File information
File name: FULLTEXT01.pdf. File size: 240 kB. Type: fulltext. Mimetype: application/pdf.

Search in DiVA

By author/editor
Svanfeldt, Gunilla
By organisation
Numerical Analysis and Computer Science, NADA
