From vocal sketching to sound models by means of a sound-based musical transcription system
Panariello, Claudio. KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID (Sound and Music Computing). ORCID iD: 0000-0002-1244-881X
Sköld, Mattias. KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID (Sound and Music Computing); KMH Royal College of Music. ORCID iD: 0000-0003-1239-6746
Frid, Emma. KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID (Sound and Music Computing). ORCID iD: 0000-0002-4422-5223
Bresin, Roberto. KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID (Sound and Music Computing). ORCID iD: 0000-0002-3086-0322
2019 (English). In: Proceedings of the Sound and Music Computing Conferences, CERN, 2019, p. 167-173. Conference paper, Published paper (Refereed)
Abstract [en]

This paper explores how notation developed for the representation of sound-based musical structures could be used for the transcription of vocal sketches representing expressive robot movements. A mime actor initially produced expressive movements which were translated to a humanoid robot. The same actor was then asked to illustrate these movements using vocal sketching. The vocal sketches were transcribed by two composers using sound-based notation. The same composers later synthesized new sonic sketches from the annotated data. Different transcriptions and synthesized versions of these were compared in order to investigate how the audible outcome changes for different transcriptions and synthesis routines. This method provides a palette of sound models suitable for the sonification of expressive body movements.

Place, publisher, year, edition, pages
CERN, 2019, p. 167-173
Series
Proceedings of the Sound and Music Computing Conferences, ISSN 2518-3672
Keywords [en]
Computer programming, Computer science, Body movements, Humanoid robot, Musical structures, Musical transcription, Robot movements, Sonifications, Sound models, Anthropomorphic robots
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-274803
Scopus ID: 2-s2.0-85084386218
OAI: oai:DiVA.org:kth-274803
DiVA id: diva2:1445922
Conference
16th Sound and Music Computing Conference, SMC 2019, 28-31 May 2019, Malaga, Spain
Projects
SONAO
Note

QC 20210422

Available from: 2020-06-23. Created: 2020-06-23. Last updated: 2023-12-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Scopus (published full text)

Authority records

Panariello, Claudio; Sköld, Mattias; Frid, Emma; Bresin, Roberto

