Public Speaking Training with a Multimodal Interactive Virtual Audience Framework
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.ORCID iD: 0000-0002-0861-8660
2015 (English). In: ICMI '15: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, ACM Digital Library, 2015, pp. 367-368. Conference paper (Refereed)
Abstract [en]

We have developed an interactive virtual audience platform for public speaking training. Users' public speaking behavior is automatically analyzed using multimodal sensors, and multimodal feedback is produced by virtual characters and generic visual widgets depending on the user's behavior. The flexibility of our system allows comparison of different interaction media (e.g. virtual reality vs. normal interaction), social situations (e.g. one-on-one meetings vs. large audiences) and trained behaviors (e.g. general public speaking performance vs. specific behaviors).

Place, publisher, year, edition, pages
ACM Digital Library, 2015, pp. 367-368.
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:kth:diva-180569
DOI: 10.1145/2818346.2823294
ISI: 000380609500058
ScopusID: 2-s2.0-84959308165
OAI: oai:DiVA.org:kth-180569
DiVA: diva2:895439
Conference
17th ACM International Conference on Multimodal Interaction (ICMI 2015), New York, NY
Note

QC 20160125

Available from: 2016-01-19. Created: 2016-01-19. Last updated: 2016-09-20. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Stefanov, Kalin
By organisation
Speech, Music and Hearing, TMH
Computer Systems

