A virtual head driven by music expressivity
2007 (English). In: IEEE Transactions on Audio, Speech, and Language Processing, ISSN 1558-7916, E-ISSN 1558-7924, Vol. 15, no. 6, pp. 1833-1841. Article in journal (Refereed). Published.
In this paper, we present a system that visualizes the expressive quality of a music performance using a virtual head. We provide a mapping through several parameter spaces: on the input side, we have elaborated a mapping between values of acoustic cues and emotion as well as expressivity parameters; on the output side, we propose a mapping between these parameters and the behaviors of the virtual head. This mapping ensures coherence between the acoustic source and the animation of the virtual head. After presenting some background information on human behavior expressivity, we introduce our model of expressivity. We explain how we elaborated the mapping between the acoustic cues and the behavior cues. We then describe the implementation of a working system that controls the behavior of a human-like head, which varies with the emotional and acoustic characteristics of the musical performance. Finally, we present the tests we conducted to validate our mapping between the emotive content of the music performance and the expressivity parameters.
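The two-stage mapping the abstract describes (acoustic cues to emotion, then emotion to expressivity parameters driving the head) can be sketched as follows. This is a minimal illustrative sketch only: the cue names, thresholds, emotion labels, and parameter values are assumptions for demonstration, not the values used in the paper.

```python
# Hypothetical sketch of the two-stage mapping: acoustic cues -> emotion,
# then emotion -> expressivity parameters for the virtual head's animation.
# All thresholds and parameter values below are illustrative assumptions.

def cues_to_emotion(tempo_bpm, sound_level_db, articulation):
    """Classify the emotional quality of a performance from coarse cues.
    articulation: ratio of sounded duration to inter-onset interval
    (legato ~ 1.0, staccato well below 1.0)."""
    fast = tempo_bpm > 120
    loud = sound_level_db > 70
    if fast and loud:
        return "angry" if articulation < 0.7 else "happy"
    if not fast and not loud:
        return "sad"
    return "serene"

def emotion_to_expressivity(emotion):
    """Map an emotion label to expressivity parameters (0..1 scales)
    that would modulate the head's behavior (hypothetical values)."""
    table = {
        "happy":  {"spatial_extent": 0.8, "fluidity": 0.7, "power": 0.6},
        "angry":  {"spatial_extent": 0.9, "fluidity": 0.2, "power": 0.9},
        "sad":    {"spatial_extent": 0.2, "fluidity": 0.5, "power": 0.1},
        "serene": {"spatial_extent": 0.4, "fluidity": 0.9, "power": 0.3},
    }
    return table[emotion]

# Example: a fast, loud, staccato performance maps to "angry",
# which in turn selects high-power, low-fluidity head movements.
params = emotion_to_expressivity(cues_to_emotion(140, 75, 0.5))
```

The point of the two-stage structure is that the intermediate emotion/expressivity layer keeps the acoustic analysis and the animation engine decoupled, so either side can be replaced independently.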
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2007. Vol. 15, no. 6, pp. 1833-1841.
Keywords
acoustic cues, emotion, expressivity, music, virtual agent, vocal expression, model, communication, channels, motion
Subject categories
Computer Science; Computer Vision and Robotics (Autonomous Systems); Human Computer Interaction; Psychology; Music
Identifiers
URN: urn:nbn:se:kth:diva-16826; DOI: 10.1109/tasl.2007.899256; ISI: 000248351100008; Scopus ID: 2-s2.0-54949145792; OAI: oai:DiVA.org:kth-16826; DiVA: diva2:334869
QC 20100525. Available from: 2010-08-05. Created: 2010-08-05. Last updated: 2016-07-22. Bibliographically approved.