From Jigs and Reels to Schottisar och Polskor: Generating Scandinavian-like Folk Music with Deep Recurrent Networks
KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH.
2019 (English). Conference paper, Published paper (Refereed)
Abstract [en]

The use of recurrent neural networks for modeling and generating music has been shown to be quite effective for compact, textual transcriptions of traditional music from Ireland and the UK. We explore how well these models perform for textual transcriptions of traditional music from Scandinavia. This type of music has characteristics that are similar to and different from those of Irish music, e.g., mode, rhythm, and structure. We investigate the effects of different architectures and training regimens, and evaluate the resulting models using three methods: a comparison of statistics between real and generated transcriptions, an appraisal of generated transcriptions via a semi-structured interview with an expert in Swedish folk music, and an exercise conducted with students of Scandinavian folk music. We find that some of our models can generate new transcriptions sharing characteristics with Scandinavian folk music, but which often lack the simplicity of real transcriptions. One of our models has been implemented online at http://www.folkrnn.org for anyone to try.
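
The approach described in the abstract, and demonstrated at folkrnn.org, is to model tunes as sequences of textual transcription tokens with a recurrent network and then sample new transcriptions one token at a time. Below is a minimal sketch of that idea in PyTorch; the tiny ABC-notation-style vocabulary, the network sizes, and the sampling temperature are illustrative assumptions, not the configuration used in the paper.

# A minimal sketch of token-level transcription modeling with an LSTM,
# in the spirit of the folk-rnn approach. The vocabulary, hyperparameters,
# and corpus handling here are illustrative assumptions.
import torch
import torch.nn as nn

# Toy vocabulary of ABC-notation-like tokens; a real system tokenizes
# thousands of transcriptions (meter, key, bar lines, pitches, durations).
vocab = ["<s>", "</s>", "M:3/4", "K:Dmaj", "|:", ":|", "|",
         "A", "B", "c", "d", "e", "f", "g", "2", "(3"]
tok2id = {t: i for i, t in enumerate(vocab)}

class TranscriptionLSTM(nn.Module):
    def __init__(self, vocab_size, emb=64, hidden=128, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, ids, state=None):
        h, state = self.lstm(self.embed(ids), state)
        return self.out(h), state

def sample(model, max_len=100, temperature=1.0):
    """Autoregressively sample one transcription, token by token."""
    model.eval()
    ids = torch.tensor([[tok2id["<s>"]]])
    state, out = None, []
    with torch.no_grad():
        for _ in range(max_len):
            logits, state = model(ids, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            if nxt == tok2id["</s>"]:
                break
            out.append(vocab[nxt])
            ids = torch.tensor([[nxt]])
    return " ".join(out)

# Training (not shown) would minimize cross-entropy on next-token
# prediction over real transcriptions; this untrained model emits
# random token strings.
model = TranscriptionLSTM(len(vocab))
print(sample(model))

After training on a corpus of real transcriptions, the same sampling loop produces tune-like token strings whose statistics can be compared against the training data, as in the paper's first evaluation method.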

Place, publisher, year, edition, pages
2019.
Keywords [en]
music generation, machine learning
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-248982
OAI: oai:DiVA.org:kth-248982
DiVA id: diva2:1303669
Conference
The 16th Sound & Music Computing Conference, Malaga, Spain, 28-31 May 2019
Note

QC 20190502

Available from: 2019-04-10. Created: 2019-04-10. Last updated: 2019-05-02. Bibliographically approved.

Open Access in DiVA

fulltext (989 kB), 32 downloads
File information
File name: FULLTEXT01.pdf
File size: 989 kB
Checksum (SHA-512):
321913684febfcc0866c4c98e2746c97d597bd6653740f13b29c25e22b519b80a3a94053aea76e58c2deb7f784680d013adc8d8ba2da8c81f4089bc0e7cf7c93
Type: fulltext
Mimetype: application/pdf

Authority records

Hallström, Eric; Mossmyr, Simon; Sturm, Bob; Vegeborn, Victor; Wedin, Jonas
