From vocal sketching to sound models by means of a sound-based musical transcription system
Panariello, Claudio
KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID (Sound and Music Computing)
Sköld, Mattias
KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID; KMH Royal College of Music (Sound and Music Computing). ORCID iD: 0000-0003-1239-6746
Frid, Emma
KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID (Sound and Music Computing). ORCID iD: 0000-0002-4422-5223
Bresin, Roberto
KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID (Sound and Music Computing). ORCID iD: 0000-0002-3086-0322
2019 (English). In: Proceedings of the 16th Sound and Music Computing Conference, Malaga, Spain, 2019, p. 1–7, article id S2.5. Conference paper, Published paper (Refereed).
Abstract [en]

This paper explores how notation developed for the representation of sound-based musical structures could be used for the transcription of vocal sketches representing expressive robot movements. A mime actor initially produced expressive movements, which were then translated to a humanoid robot. The same actor was asked to illustrate these movements using vocal sketching. The vocal sketches were transcribed by two composers using sound-based notation, and the same composers later synthesized new sonic sketches from the annotated data. The different transcriptions and the versions synthesized from them were compared in order to investigate how the audible outcome changes with the choice of transcription and synthesis routine. This method provides a palette of sound models suitable for the sonification of expressive body movements.
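As an illustrative aside (not from the paper): the short Python sketch below shows one minimal way a movement-to-sound mapping for sonification could look. Every name and value in it is an invented assumption; it drives the pitch and amplitude of a sine tone from a hypothetical per-frame movement-energy signal and writes the result to a WAV file. The paper itself derives its sound models from composers' transcriptions of vocal sketches, not from a fixed parametric mapping like this one.

    # Illustrative sketch only, not the method used in the paper.
    # A hypothetical per-frame "movement energy" signal (0..1) is mapped to
    # the pitch and amplitude of a sine tone and written as a mono 16-bit WAV.
    import math
    import struct
    import wave

    SR = 44100                           # sample rate (Hz)
    energy = [0.1, 0.4, 0.9, 0.6, 0.2]   # invented movement-energy frames

    samples = []
    phase = 0.0
    for e in energy:
        freq = 220.0 + 440.0 * e         # more energetic movement -> higher pitch
        amp = 0.2 + 0.6 * e              # ... and a louder tone
        for _ in range(SR // 5):         # 200 ms of audio per movement frame
            phase += 2.0 * math.pi * freq / SR
            samples.append(int(32767 * amp * math.sin(phase)))

    with wave.open("sonification.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)                # 16-bit samples
        f.setframerate(SR)
        f.writeframes(struct.pack("<%dh" % len(samples), *samples))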

Place, publisher, year, edition, pages
Malaga, Spain, 2019, p. 1–7, article id S2.5
Keywords [en]
voice sketching, sonification, robot, sound, sonic interaction design, sound representation, sound transcription
National Category
Media and Communication Technology; Music; Human Computer Interaction; Interaction Technologies; Media Engineering; Computer Vision and Robotics (Autonomous Systems)
Research subject
Media Technology; Human-computer Interaction; Art, Technology and Design
Identifiers
URN: urn:nbn:se:kth:diva-250785
OAI: oai:DiVA.org:kth-250785
DiVA, id: diva2:1313822
Conference
Sound and Music Computing Conference, Universidad de Malaga, Malaga, Spain, May 28–31, 2019
Projects
SONAO; The Harmony of Noise
Funder
Swedish Research Council, 2017-03979; NordForsk, 86892
Note

QC 20190819

Available from: 2019-05-06. Created: 2019-05-06. Last updated: 2019-08-19. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

http://smc2019.uma.es/articles/S2/S2_05_SMC2019_paper.pdf
