A virtual head driven by music expressivity
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. ORCID iD: 0000-0002-3086-0322
2007 (English). In: IEEE Transactions on Audio, Speech, and Language Processing, ISSN 1558-7916, E-ISSN 1558-7924, Vol. 15, no. 6, pp. 1833-1841. Article in journal (refereed). Published.
Abstract [en]

In this paper, we present a system that visualizes the expressive quality of a music performance using a virtual head. We provide a mapping through several parameter spaces: on the input side, we map values of acoustic cues to emotion and expressivity parameters; on the output side, we map these parameters to the behaviors of the virtual head. This mapping ensures coherence between the acoustic source and the animation of the virtual head. After presenting background information on the behavior expressivity of humans, we introduce our model of expressivity. We explain how we elaborated the mapping between the acoustic cues and the behavior cues. We then describe the implementation of a working system that controls the behavior of a human-like head, varying it with the emotional and acoustic characteristics of the musical performance. Finally, we present the tests we conducted to validate our mapping between the emotive content of the music performance and the expressivity parameters.
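To make the two-stage pipeline in the abstract concrete, the sketch below illustrates its shape: acoustic cues are first mapped to emotion and expressivity parameters, which are then mapped to animation controls for the virtual head. This is a minimal illustrative sketch only, not the authors' implementation; all cue names, parameter names, thresholds, and animation controls are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class AcousticCues:
    tempo: float          # beats per minute
    sound_level: float    # normalized loudness, 0..1
    articulation: float   # 0 = legato, 1 = staccato

@dataclass
class Expressivity:
    emotion: str          # e.g. "happy", "sad", "neutral"
    activation: float     # overall movement activity, 0..1
    fluidity: float       # smoothness of movement, 0..1

def cues_to_expressivity(cues: AcousticCues) -> Expressivity:
    # Input-side mapping: derive emotion and expressivity parameters
    # from acoustic cues (thresholds here are illustrative only).
    if cues.tempo > 120 and cues.sound_level > 0.6:
        emotion = "happy"
    elif cues.tempo < 80 and cues.sound_level < 0.4:
        emotion = "sad"
    else:
        emotion = "neutral"
    activation = min(1.0, cues.tempo / 200.0 * 0.5 + cues.sound_level * 0.5)
    fluidity = 1.0 - cues.articulation   # legato playing -> fluid motion
    return Expressivity(emotion, activation, fluidity)

def expressivity_to_behavior(params: Expressivity) -> dict:
    # Output-side mapping: translate the intermediate parameters into
    # animation controls for the virtual head (control names hypothetical).
    return {
        "facial_expression": params.emotion,
        "head_movement_amplitude": params.activation,
        "movement_smoothness": params.fluidity,
    }

cues = AcousticCues(tempo=140.0, sound_level=0.8, articulation=0.2)
print(expressivity_to_behavior(cues_to_expressivity(cues)))
# -> {'facial_expression': 'happy', 'head_movement_amplitude': 0.75, ...}

Keeping the emotion and expressivity parameters as a shared intermediate representation, as the abstract describes, is what lets the acoustic analysis and the head animation stay coherent while either side is tuned independently.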

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2007. Vol. 15, no. 6, pp. 1833-1841.
Keywords [en]
acoustic cues, emotion, expressivity, music, virtual agent, vocal expression, model, communication, channels, motion
National Category
Computer Science; Computer Vision and Robotics (Autonomous Systems); Human Computer Interaction; Psychology; Music
Identifiers
URN: urn:nbn:se:kth:diva-16826
DOI: 10.1109/tasl.2007.899256
ISI: 000248351100008
Scopus ID: 2-s2.0-54949145792
OAI: oai:DiVA.org:kth-16826
DiVA: diva2:334869
Note

QC 20100525

Available from: 2010-08-05. Created: 2010-08-05. Last updated: 2017-12-12. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Authority records

Bresin, Roberto
