Kinetic Data for Large-Scale Analysis and Modeling of Face-to-Face Conversation
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0003-1399-6604
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-7801-7617
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0001-9327-9482
2011 (English). In: Proceedings of International Conference on Audio-Visual Speech Processing 2011 / [ed] Salvi, G.; Beskow, J.; Engwall, O.; Al Moubayed, S., Stockholm: KTH Royal Institute of Technology, 2011, p. 103-106. Conference paper, Published paper (Refereed)
Abstract [en]

Spoken face-to-face interaction is a rich and complex form of communication that includes a wide array of phenomena that are not fully explored or understood. While there have been extensive studies on many aspects of face-to-face interaction, these are traditionally of a qualitative nature, relying on hand-annotated corpora that are typically rather limited in extent, a natural consequence of the labour-intensive task of multimodal data annotation. In this paper we present a corpus of 60 hours of unrestricted Swedish face-to-face conversations recorded with audio, video and optical motion capture, and we describe a new project setting out to exploit primarily the kinetic data in this corpus in order to gain quantitative knowledge on human face-to-face interaction.
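
As an illustration of the kind of quantitative kinetic analysis the abstract alludes to, the sketch below computes per-marker speed from optical motion capture frames. The data layout, marker count and frame rate used here are assumptions made for the example only and are not taken from the corpus described in this record.

    import numpy as np

    FRAME_RATE_HZ = 100.0  # assumed sampling rate; not specified by the record above

    def marker_speeds(positions):
        """Per-frame speed (mm/s) for each marker.

        positions: array of shape (T, M, 3) -- T frames, M markers, 3D positions
        in millimetres (an assumed layout, for illustration only).
        """
        deltas = np.diff(positions, axis=0)          # frame-to-frame displacement, (T-1, M, 3)
        distances = np.linalg.norm(deltas, axis=-1)  # Euclidean distance per marker, (T-1, M)
        return distances * FRAME_RATE_HZ             # convert to mm per second

    if __name__ == "__main__":
        # Synthetic stand-in data: one second of motion for four markers.
        rng = np.random.default_rng(0)
        positions = np.cumsum(rng.normal(0.0, 0.1, size=(100, 4, 3)), axis=0)
        print("mean speed per marker (mm/s):", marker_speeds(positions).mean(axis=0))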

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2011. p. 103-106
Series
Proceedings of the International Conference on Audio-Visual Speech Processing, ISSN 1680-8908 ; 2011
Keywords [en]
motion capture, face-to-face conversation, multimodal corpus
National Category
Computer Sciences; Natural Language Processing
Identifiers
URN: urn:nbn:se:kth:diva-52240
Scopus ID: 2-s2.0-85133325348
OAI: oai:DiVA.org:kth-52240
DiVA, id: diva2:465536
Conference
International Conference on Audio-Visual Speech Processing 2011, Aug 31 - Sep 3, Volterra, Italy
Note

Part of proceedings: ISBN 978-91-7501-079-3, QC 20230404

Available from: 2011-12-14 Created: 2011-12-14 Last updated: 2025-02-01 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Scopus
http://www.speech.kth.se/prod/publications/files/3655.pdf

Authority records

Beskow, Jonas; Alexanderson, Simon; Al Moubayed, Samer; Edlund, Jens; House, David
