A multi-party multi-modal dataset for focus of visual attention in human-human and human-robot interaction
KTH, Superseded Departments (pre-2005), Speech, Music and Hearing. ORCID iD: 0000-0003-1399-6604
2016 (English). In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), 2016, p. 4440-4444. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
2016. p. 4440-4444
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-268359
OAI: oai:DiVA.org:kth-268359
DiVA, id: diva2:1394166
Conference
Tenth International Conference on Language Resources and Evaluation (LREC’16)
Available from: 2020-02-18. Created: 2020-02-18. Last updated: 2020-02-18.

Open Access in DiVA

No full text in DiVA

Authority records

Stefanov, Kalin; Beskow, Jonas
