Socially Aware Many-to-Machine Communication
2012 (English). Conference paper (Other academic)
Abstract [en]

This report describes the output of project P5, Socially Aware Many-to-Machine Communication (M2M), at the eNTERFACE'12 workshop. In this project, we designed and implemented a new front-end for handling multi-user interaction in a dialog system. We exploit the Microsoft Kinect device to capture multimodal input and extract features describing user and face positions. These data are then analyzed in real time to robustly detect speech and to determine both who is speaking and whether the speech is directed to the system. This new front-end is integrated into the SEMAINE (Sustained Emotionally colored Machine-human Interaction using Nonverbal Expression) system. Furthermore, a multimodal corpus has been created, capturing all of the system inputs in two different scenarios involving human-human and human-computer interaction.
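The abstract outlines a real-time fusion step: per-user features (here, speech activity and face orientation) are combined to decide who is speaking and whether the speech is addressed to the system. The paper does not specify the features or the decision rule, so the following is a minimal hypothetical sketch; the `UserFrame` record, the energy/yaw features, and both thresholds are assumptions for illustration, not the project's actual implementation:

```python
from dataclasses import dataclass


@dataclass
class UserFrame:
    """Hypothetical per-user feature record for one analysis frame."""
    user_id: int
    audio_energy: float   # per-user audio level (assumed feature, arbitrary units)
    face_yaw_deg: float   # face angle relative to the system's camera (assumed feature)


ENERGY_THRESHOLD = 0.5    # assumed threshold for counting a user as speaking
YAW_THRESHOLD_DEG = 20.0  # assumed: facing within 20 degrees counts as system-directed


def analyze(frames):
    """Return (speaker_id, system_directed), or (None, False) if nobody speaks.

    Toy rule-based fusion: the loudest user above the energy threshold is
    taken as the speaker; the speech is considered directed to the system
    when that user's face roughly points at the system.
    """
    active = [f for f in frames if f.audio_energy > ENERGY_THRESHOLD]
    if not active:
        return None, False
    speaker = max(active, key=lambda f: f.audio_energy)
    directed = abs(speaker.face_yaw_deg) < YAW_THRESHOLD_DEG
    return speaker.user_id, directed
```

For example, with two tracked users where user 0 speaks loudly while facing the system, `analyze` returns `(0, True)`; a real front-end would replace these hand-set thresholds with trained detectors.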

National Category
Computer Science
URN: urn:nbn:se:kth:diva-165818
OAI: diva2:808849
8th International Summer Workshop on Multimodal Interfaces, Metz, France

QC 20161017

Available from: 2015-04-29. Created: 2015-04-29. Last updated: 2016-10-17. Bibliographically approved.

Open Access in DiVA

No full text

By author/editor: Stefanov, Kalin
By organisation: School of Computer Science and Communication (CSC), Computer Science
