Kinetic Data for Large-Scale Analysis and Modeling of Face-to-Face Conversation
2011 (English). In: Proceedings of International Conference on Audio-Visual Speech Processing 2011 / [ed] Salvi, G.; Beskow, J.; Engwall, O.; Al Moubayed, S., Stockholm: KTH Royal Institute of Technology, 2011, p. 103-106. Conference paper, Published paper (Refereed)
Abstract [en]
Spoken face-to-face interaction is a rich and complex form of communication that includes a wide array of phenomena that are not fully explored or understood. While there have been extensive studies on many aspects of face-to-face interaction, these are traditionally of a qualitative nature, relying on hand-annotated corpora that are typically rather limited in extent, a natural consequence of the labour-intensive task of multimodal data annotation. In this paper we present a corpus of 60 hours of unrestricted Swedish face-to-face conversations recorded with audio, video and optical motion capture, and we describe a new project setting out to exploit primarily the kinetic data in this corpus in order to gain quantitative knowledge of human face-to-face interaction.
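The record itself contains no code; purely as a rough illustration of the kind of quantitative kinetic measure such motion-capture data supports, the sketch below computes per-frame marker speed from a 3D trajectory. The function name, the frame rate, and the units are assumptions for illustration, not details taken from the paper.

    import numpy as np

    def marker_speed(positions, fps=100.0):
        """Frame-to-frame speed of one motion-capture marker.

        positions: (n_frames, 3) array of x, y, z coordinates (assumed mm).
        fps: capture rate in frames per second (hypothetical value,
             not stated in the record).
        Returns an (n_frames - 1,) array of speeds in mm/s.
        """
        # Euclidean displacement between consecutive frames, scaled by frame rate.
        displacements = np.linalg.norm(np.diff(positions, axis=0), axis=1)
        return displacements * fps

    if __name__ == "__main__":
        # Synthetic stand-in trajectory; a real analysis would load exported
        # motion-capture marker data instead.
        rng = np.random.default_rng(0)
        trajectory = np.cumsum(rng.normal(scale=0.5, size=(1000, 3)), axis=0)
        speeds = marker_speed(trajectory, fps=100.0)
        print(f"mean speed: {speeds.mean():.1f} mm/s")

Aggregating such per-marker measures over many hours of recordings is one way a corpus of this size could be analysed quantitatively rather than through manual annotation alone.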
Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2011. p. 103-106
Series
Proceedings of the International Conference on Audio-Visual Speech Processing, ISSN 1680-8908 ; 2011
Keywords [en]
motion capture, face-to-face conversation, multimodal corpus
National Category
Computer Sciences; Natural Language Processing
Identifiers
URN: urn:nbn:se:kth:diva-52240; Scopus ID: 2-s2.0-85133325348; OAI: oai:DiVA.org:kth-52240; DiVA id: diva2:465536
Conference
International Conference on Audio-Visual Speech Processing 2011, Aug 31 - Sep 3, Volterra, Italy
Note
Part of proceedings: ISBN 978-91-7501-079-3, QC 20230404
Available from: 2011-12-14. Created: 2011-12-14. Last updated: 2025-02-01. Bibliographically approved