Who will get the grant?: A multimodal corpus for the analysis of conversational behaviours in group interviews
2014 (English) In: UM3I 2014 - Proceedings of the 2014 ACM Workshop on Understanding and Modeling Multiparty, Multimodal Interactions, Co-located with ICMI 2014, Association for Computing Machinery (ACM), 2014, pp. 27-32. Conference paper (Refereed)
In recent years, more and more multimodal corpora have been created, and many of these have recently included data from RGB-D sensors. However, there is to our knowledge no publicly available corpus that combines accurate gaze tracking with high-quality audio recordings of group discussions of varying dynamics. With a corpus that fulfilled these needs, it would be possible to investigate higher-level constructs such as group involvement, individual engagement, or rapport, all of which require multimodal feature extraction. In this paper we describe the design and recording of such a corpus, and we provide some illustrative examples of how it might be exploited in the study of group dynamics.
Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2014. 27-32 p.
Keywords
Corpus collection, Eye-gaze, Group dynamics, Involvement
National Category
Computer Science, Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-158171
DOI: 10.1145/2666242.2666251
Scopus ID: 2-s2.0-84919344128
ISBN: 978-145030652-2
OAI: oai:DiVA.org:kth-158171
DiVA: diva2:774972
Conference
ICMI 2014 Workshop on Understanding and Modeling Multiparty, Multimodal Interactions, UM3I 2014, Istanbul, Turkey, 16 November 2014
Note
QC 20150203
Available from: 2014-12-30 Created: 2014-12-30 Last updated: 2015-02-03 Bibliographically approved