Interactive sonification of emotionally expressive gestures by means of music performance
Fabiani, Marco. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
Dubus, Gaël. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. ORCID iD: 0000-0002-8830-963X
Bresin, Roberto. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. ORCID iD: 0000-0002-3086-0322
2010 (English). In: Proceedings of ISon 2010, 3rd Interactive Sonification Workshop / [ed] Bresin, Roberto; Hermann, Thomas; Hunt, Andy. Stockholm, Sweden: KTH Royal Institute of Technology, 2010, 113-116 p. Conference paper, Published paper (Refereed)
Abstract [en]

This study presents a procedure for the interactive sonification of emotionally expressive hand and arm gestures by affecting a musical performance in real time. Three different mappings are described that translate accelerometer data into a set of parameters controlling the expressiveness of the performance through tempo, dynamics and articulation. The first two mappings, tested with a number of subjects during a public event, are relatively simple and were designed by the authors using a top-down approach. According to user feedback, they were not intuitive and limited the usability of the software. A bottom-up approach was taken for the third mapping: a classification tree was trained with features extracted from gesture data recorded from a number of test subjects who were asked to express different emotions with their hand movements. A second set of data, in which subjects were asked to make a gesture corresponding to a piece of expressive music they had just listened to, was used to validate the model. The results were not particularly accurate, but they reflected the small differences in the data and in the ratings the subjects gave to the different performances they listened to.
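
The record contains no code, but as a rough, hypothetical sketch of the kind of mapping the abstract describes, the Python snippet below derives tempo, dynamics and articulation factors from a window of accelerometer samples, and shows how a classification tree could be fitted to labelled gesture features for the bottom-up mapping. All function names, feature choices and scaling constants are illustrative assumptions, not taken from the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def expressive_params(accel, fs=100.0):
    """Map a window of 3-axis accelerometer samples (N x 3, in g) to
    rough expressive control factors. Names and constants are illustrative."""
    mag = np.linalg.norm(accel, axis=1)                     # overall movement energy per sample
    energy = float(np.mean(mag))                            # average energy of the gesture
    jerk = float(np.mean(np.abs(np.diff(mag))) * fs)        # crude smoothness proxy
    tempo_factor = float(np.clip(0.8 + 0.1 * energy, 0.5, 1.5))   # more energy -> faster tempo
    dynamics = float(np.clip(0.4 + 0.05 * energy, 0.0, 1.0))      # more energy -> louder
    articulation = float(np.clip(1.0 - 0.02 * jerk, 0.2, 1.0))    # jerkier -> more staccato
    return tempo_factor, dynamics, articulation

# Bottom-up alternative, loosely analogous to the third mapping:
# fit a classification tree to gesture features labelled with the intended emotion.
rng = np.random.default_rng(0)
X_train = rng.random((40, 3))                               # placeholder feature vectors (e.g. energy, jerk, duration)
y_train = rng.choice(["happy", "sad", "angry", "tender"], size=40)
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Example use on one second of simulated 3-axis accelerometer data at 100 Hz.
window = rng.normal(0.0, 1.0, size=(100, 3))
print(expressive_params(window))
print(tree.predict(rng.random((1, 3))))

In a real system the predicted emotion class (or the continuous factors) would drive a rule that adjusts tempo, dynamics and articulation of the ongoing performance; the placeholder random data above would be replaced by recorded gesture features.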

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2010. 113-116 p.
National Category
Computer Science; Human Computer Interaction; Music; Psychology
Identifiers
URN: urn:nbn:se:kth:diva-52135
OAI: oai:DiVA.org:kth-52135
DiVA: diva2:465430
Conference
ISon 2010, 3rd Interactive Sonification Workshop, Stockholm, Sweden, April 7, 2010
Projects
SAMESOM
Note

tmh_import_11_12_14. QC 20111222

Available from: 2011-12-14 Created: 2011-12-14 Last updated: 2016-08-22 Bibliographically approved

Open Access in DiVA

No full text

Authority records

Dubus, Gaël; Bresin, Roberto
