A Multimodal Corpus for Mutual Gaze and Joint Attention in Multiparty Situated Interaction
KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-8874-6629
KTH.
KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-7801-7617
KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-3687-6189
2018 (English). In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, 2018, p. 119-127. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present a corpus of multiparty situated interaction in which participants collaborated on moving virtual objects on a large touch screen. A moderator facilitated the discussion and directed the interaction. The corpus contains recordings of a variety of multimodal data: we captured speech, eye gaze, and gesture using a multisensory setup (wearable eye trackers, motion capture, and audio/video). Furthermore, in the description of the multimodal corpus, we investigate four types of social gaze (referential gaze, joint attention, mutual gaze, and gaze aversion) from the perspectives of both speaker and listener. We annotated the groups' object references during object manipulation tasks and analysed each group's proportional referential eye gaze with regard to the referent object. When investigating the distributions of gaze during and before referring expressions, we could corroborate the differences in timing between speakers' and listeners' eye gaze found in earlier studies. This corpus is of particular interest to researchers studying social eye-gaze patterns in turn-taking and referring language in situated multiparty interaction.
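The proportional referential eye-gaze measure mentioned in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' actual analysis code: the data layout (timestamped gaze targets plus an annotated time window for each referring expression) and the function name are assumptions made for the example.

```python
# Hypothetical sketch: proportional referential gaze, i.e. the fraction of
# eye-gaze samples that land on the referent object during the time window
# of a referring expression. The (timestamp, target) tuple layout is an
# assumed format, not the corpus's actual file structure.

def referential_gaze_proportion(gaze_samples, window, referent):
    """gaze_samples: list of (timestamp, target_object) tuples.
    window: (start, end) of the referring expression, same time units.
    referent: label of the object the expression refers to."""
    start, end = window
    # Keep only gaze samples that fall inside the referring expression.
    in_window = [target for t, target in gaze_samples if start <= t <= end]
    if not in_window:
        return 0.0
    # Proportion of in-window samples directed at the referent.
    return sum(target == referent for target in in_window) / len(in_window)

# Example: gaze sampled at 10 Hz during a 0.5 s referring expression.
samples = [(0.0, "cup"), (0.1, "cup"), (0.2, "moderator"),
           (0.3, "cup"), (0.4, "cup")]
print(referential_gaze_proportion(samples, (0.0, 0.4), "cup"))  # 0.8
```

In practice such a measure would be computed separately for speakers and listeners, which is how timing differences between the two roles become visible.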

Place, publisher, year, edition, pages
Paris, 2018. p. 119-127
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-230238
Scopus ID: 2-s2.0-85059891166
ISBN: 979-10-95546-00-9 (print)
OAI: oai:DiVA.org:kth-230238
DiVA, id: diva2:1217277
Conference
International Conference on Language Resources and Evaluation (LREC 2018)
Note

QC 20180614

Available from: 2018-06-13. Created: 2018-06-13. Last updated: 2019-02-19. Bibliographically approved.

Open Access in DiVA

File name: FULLTEXT01.pdf
File size: 6731 kB
Checksum (SHA-512): ecafaa2b11050ce3495e03f50f4bb71bffc8c4d60acb875ae9a95e9ba3615253258081d1e4ab76931c8b3f63050f6317691dd967c8461beb1340f29a59ca172a
Type: fulltext
Mimetype: application/pdf

Other links

Scopus
http://www.lrec-conf.org/proceedings/lrec2018/pdf/987.pdf

Authors
Kontogiorgos, Dimosthenis; Avramova, Vanya; Alexanderson, Simon; Jonell, Patrik; Oertel, Catharine; Beskow, Jonas; Skantze, Gabriel; Gustafson, Joakim

Organisation
Speech, Music and Hearing, TMH, KTH
