Public Speaking Training with a Multimodal Interactive Virtual Audience Framework
2015 (English). In: ICMI '15: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, ACM Digital Library, 2015, pp. 367-368. Conference paper (Refereed)
We have developed an interactive virtual audience platform for public speaking training. The user's public speaking behavior is automatically analyzed using multimodal sensors, and multimodal feedback is produced by virtual characters and generic visual widgets depending on that behavior. The flexibility of our system allows comparison of different interaction media (e.g. virtual reality vs. normal interaction), social situations (e.g. one-on-one meetings vs. large audiences), and trained behaviors (e.g. general public speaking performance vs. specific behaviors).
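The abstract describes a sense-analyze-feedback loop: sensed speaker features drive virtual-character reactions and generic visual widgets. A minimal sketch of such a loop is below; it is not the authors' implementation, and all feature names, reactions, and thresholds are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of mapping multimodal
# speaker features to virtual-audience reactions and a widget value.
from dataclasses import dataclass

@dataclass
class SpeakerFeatures:
    """Per-window features a multimodal sensor pipeline might emit (hypothetical)."""
    gaze_at_audience: float   # fraction of time looking at the audience, 0..1
    speech_rate_wpm: float    # words per minute
    pause_ratio: float        # fraction of the window that is silence, 0..1

def audience_feedback(f: SpeakerFeatures) -> dict:
    """Map features to simple audience reactions and a generic gauge widget."""
    reactions = []
    if f.gaze_at_audience < 0.5:
        reactions.append("characters_look_away")   # nonverbal negative cue
    if f.speech_rate_wpm > 180:
        reactions.append("characters_lean_back")   # speaking too fast
    if f.pause_ratio > 0.4:
        reactions.append("characters_fidget")      # too many long pauses
    if not reactions:
        reactions.append("characters_nod")         # positive reinforcement
    # generic visual widget: a 0..1 "engagement" gauge
    engagement = max(0.0, min(1.0, f.gaze_at_audience * (1.0 - f.pause_ratio)))
    return {"reactions": reactions, "engagement_gauge": round(engagement, 2)}

fb = audience_feedback(SpeakerFeatures(gaze_at_audience=0.8,
                                       speech_rate_wpm=140,
                                       pause_ratio=0.1))
print(fb)  # {'reactions': ['characters_nod'], 'engagement_gauge': 0.72}
```

Decoupling feature extraction from feedback rules in this way is what would let a system swap interaction media or trained behaviors, as the abstract claims, without changing the sensing pipeline.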
Place, publisher, year, edition, pages
ACM Digital Library, 2015, pp. 367-368.
Identifiers
URN: urn:nbn:se:kth:diva-180569
DOI: 10.1145/2818346.2823294
ISI: 000380609500058
Scopus ID: 2-s2.0-84959308165
OAI: oai:DiVA.org:kth-180569
DiVA: diva2:895439
17th ACM International Conference on Multimodal Interaction (ICMI 2015), New York, NY
QC 20160125. Available from: 2016-01-19. Created: 2016-01-19. Last updated: 2016-09-20. Bibliographically approved.