Artificial gaze. Perception experiment of eye gaze in synthetic face
Nordenberg, Mikael; Svanfeldt, Gunilla; Wik, Preben
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
2005 (English). In: Proceedings from the Second Nordic Conference on Multimodal Communication, 2005, pp. 257-272. Conference paper, published paper (refereed).
Abstract [en]

The aim of this study is to investigate people's sensitivity to directional eye gaze, with the long-term goal of improving the naturalness of animated agents. Previous research in psychology has demonstrated the importance of gaze in social interactions, which should therefore be vital to implement in virtual agents. In order to test whether we have the appropriate parameters needed to correctly control gaze in the talking head, and to evaluate users' sensitivity to these parameters, a perception experiment was performed. The results show that it is possible to achieve a state where the subjects perceive that the agent is looking them in the eyes, although this did not always occur when we expected it to.

Place, publisher, year, edition, pages
2005, pp. 257-272.
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-6515, OAI: oai:DiVA.org:kth-6515, DiVA: diva2:11249
Conference
The Second Nordic Conference on Multimodal Communication, Göteborg, April 7-8, 2005
Note
QC 20101126. Available from: 2006-12-06. Created: 2006-12-06. Last updated: 2010-11-26. Bibliographically approved.
In thesis
1. Expressiveness in virtual talking faces
2006 (English). Licentiate thesis, comprehensive summary (Other scientific).
Abstract [en]

In this thesis, different aspects of how to make synthetic talking faces more expressive have been studied: How can we collect data for such studies? How is lip articulation affected by expressive speech? Can the recorded data be used interchangeably between different face models? Can eye movements in the agent be used for communicative purposes? The thesis addresses these questions and also describes an experiment in which a talking head is used as a complement to a targeted audio device in order to increase the intelligibility of speech.

The data collection described in the first paper resulted in two multimodal speech corpora. The subsequent analysis of the recorded data showed that expressive modes strongly affect speech articulation, although further studies are needed to obtain more quantitative results, to cover more phonemes and expressions, and to generalise the results beyond a single speaker.

When exchanging files containing facial animation parameters (FAPs) between different face models (and research sites), several problems were encountered even though both face models were created according to the MPEG-4 standard. The evaluation of the implemented emotional expressions showed that the best recognition results were obtained when the face model and the FAP file originated from the same site.
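
The interoperability point above can be made concrete. In MPEG-4 facial animation, FAP values are dimensionless and are scaled by face-specific units (FAPUs) derived from each model's own feature distances, so the same FAP file should, in principle, drive any conforming face. The following minimal Python sketch is not taken from the thesis; all class and function names are illustrative. It only shows how the unit conversion works, and why two standard-compliant models can still render the same FAP file differently if they derive or measure their FAPUs differently.

```python
from dataclasses import dataclass

@dataclass
class FaceModel:
    """Feature distances, in the model's own units, used to derive FAPUs."""
    eye_separation: float          # ES0: distance between the eye centres
    mouth_width: float             # MW0: distance between the mouth corners
    mouth_nose_separation: float   # MNS0: distance between nose and mouth

    def fapu(self, name: str) -> float:
        # MPEG-4 defines most FAPUs as a feature distance divided by 1024,
        # e.g. ES = ES0/1024, MW = MW0/1024, MNS = MNS0/1024.
        base = {
            "ES": self.eye_separation,
            "MW": self.mouth_width,
            "MNS": self.mouth_nose_separation,
        }[name]
        return base / 1024.0

def fap_to_displacement(fap_value: int, fapu_name: str, model: FaceModel) -> float:
    """Convert a dimensionless FAP value into a displacement in this model's
    own units. If two models derive their FAPUs differently, the same FAP
    file animates them differently."""
    return fap_value * model.fapu(fapu_name)

# Illustrative example: the same mouth-related FAP value applied to two
# differently proportioned face models gives different absolute displacements.
model_a = FaceModel(eye_separation=64.0, mouth_width=52.0, mouth_nose_separation=20.0)
model_b = FaceModel(eye_separation=70.0, mouth_width=60.0, mouth_nose_separation=24.0)

fap_value = 200  # hypothetical value for a mouth-corner FAP
print(fap_to_displacement(fap_value, "MW", model_a))  # ~10.16 model units
print(fap_to_displacement(fap_value, "MW", model_b))  # ~11.72 model units
```

Whether this particular mechanism explains the interchange problems reported in the thesis is not stated there; the sketch only illustrates why standard-compliant FAP files are not automatically model-independent in practice.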

The perception experiment in which a synthetic talking head was combined with a targeted-audio parametric loudspeaker showed that the virtual face increased the intelligibility of speech, especially when the sound beam was directed slightly to the side of the listener, i.e. at lower sound intensities.

In the experiment with eye gaze in a virtual talking head, the possibility of achieving mutual gaze with the observer was assessed. The results indicated that it is possible, but also pointed to some design features of the face model that need to be altered in order to achieve better control of the perceived gaze direction.

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. 23 p.
Series
Trita-CSC-A, ISSN 1653-5723 ; 2006:28
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-4210, ISBN: 978-91-7178-530-5
Presentation
2006-12-18, Fantum, KTH, Lindstedtsvägen 24 plan 5, Stockholm, 15:00
Note
QC 20101126. Available from: 2006-12-06. Created: 2006-12-06. Last updated: 2010-11-26. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Speech.KTH
