FARMI: A Framework for Recording Multi-Modal Interactions
KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-3687-6189
KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH.
KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-1262-4876
KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-8874-6629
2018 (English). In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris: European Language Resources Association, 2018, p. 3969-3974. Conference paper, published paper (refereed).
Abstract [en]

In this paper we present (1) a processing architecture used to collect multi-modal sensor data, both for corpora collection and real-time processing, (2) an open-source implementation thereof, and (3) a use case in which we deploy the architecture in a multi-party deception game featuring six human players and one robot. The architecture is agnostic to the choice of hardware (e.g. microphones or cameras) and programming languages, although our implementation is mostly written in Python. In our use case, different methods of capturing verbal and non-verbal cues from the participants were used. These were processed in real time and used to inform the robot about the participants' deceptive behaviour. The framework is of particular interest to researchers working on the collection of multi-party, richly recorded corpora and the design of conversational systems. Moreover, for researchers interested in human-robot interaction, the available modules make it easy to create both autonomous and Wizard-of-Oz interactions.
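
To make the abstract's architecture concrete, the sketch below shows the kind of hardware-agnostic, timestamped sensor publisher such a framework builds on. It is an illustrative assumption only: the ZeroMQ pub/sub transport, the topic naming, the JSON wire format, and the publish_sensor_frames helper are hypothetical and are not FARMI's actual API; see the paper and its open-source implementation for the real interfaces.

```python
# Minimal sketch of a hardware-agnostic sensor publisher, in the spirit of
# the architecture described above. Everything here (ZeroMQ transport, topic
# names, JSON wire format) is an illustrative assumption, not FARMI's API.
import json
import time

import zmq  # pip install pyzmq


def publish_sensor_frames(topic, read_frame, endpoint="tcp://*:5556", rate_hz=25.0):
    """Publish timestamped frames from any sensor.

    `read_frame` is a zero-argument callable returning a JSON-serializable
    reading, which keeps the publisher agnostic to the underlying hardware
    (microphone level, camera-based gaze estimate, motion capture, ...).
    """
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.PUB)
    sock.bind(endpoint)
    try:
        while True:
            frame = {"t": time.time(), "data": read_frame()}
            # The topic prefix lets subscribers (corpus loggers, real-time
            # classifiers, or a Wizard-of-Oz console) filter the streams
            # they care about.
            sock.send_multipart([topic.encode(), json.dumps(frame).encode()])
            time.sleep(1.0 / rate_hz)
    finally:
        sock.close()


if __name__ == "__main__":
    # Stand-in "sensor"; replace the lambda with a real device reader.
    publish_sensor_frames("audio.level", lambda: 0.0)
```

Because each frame carries a wall-clock timestamp and travels over a language-neutral socket, subscribers written in any language can log the streams for a corpus or react to them in real time, matching the hardware and programming-language agnosticism the abstract claims.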

Place, publisher, year, edition, pages
Paris: European Language Resources Association, 2018. p. 3969-3974
National Category
Natural Sciences; Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-230237
ISBN: 979-10-95546-00-9 (print)
OAI: oai:DiVA.org:kth-230237
DiVA id: diva2:1217276
Conference
The Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
Note

QC 20180618

Available from: 2018-06-13. Created: 2018-06-13. Last updated: 2018-06-18. Bibliographically approved.

Open Access in DiVA

fulltext (7720 kB), 92 downloads
File information
File name: FULLTEXT01.pdf
File size: 7720 kB
Checksum (SHA-512): 36b866d855bf159a100138d0cf10f7979e63f6dca6ff813b288a4a754d4a814aa05bc2ca4f2b2c768043a5243ac2bf87d8b107ddbd065c8b4ebc45f8c4fed18e
Type: fulltext
Mimetype: application/pdf

Authority records BETA

Jonell, Patrik; Bystedt, Mattias; Fallgren, Per; Kontogiorgos, Dimosthenis; David Aguas Lopes, José; Malisz, Zofia; Oertel, Catharine; Shore, Todd
