Toward a computational model of expression in music performance: The GERM model
Uppsala universitet, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Psychology.
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-2926-6518
KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID (Sound and Music Computing). ORCID iD: 0000-0002-3086-0322
2002 (English). In: Musicae Scientiae, ISSN 1029-8649, E-ISSN 2045-4147, Special Issue 2001-2002, pp. 63-122. Article in journal (Refereed). Published.
Abstract [en]

This article presents a computational model of expression in music performance: the GERM model. The purpose of the GERM model is to (a) describe the principal sources of variability in music performance, (b) emphasize the need to integrate different aspects of performance in a common model, and (c) provide some preliminaries (germ = a basis from which a thing may develop) for a computational model that simulates the different aspects. Drawing on previous research on performance, we propose that performance expression derives from four main sources of variability: (1) Generative Rules, which function to convey the generative structure in a musical manner (e.g., Clarke, 1988; Sundberg, 1988); (2) Emotional Expression, which is governed by the performer’s expressive intention (e.g., Juslin, 1997a); (3) Random Variations, which reflect internal timekeeper variance and motor delay variance (e.g., Gilden, 2001; Wing & Kristofferson, 1973); and (4) Movement Principles, which prescribe that certain features of the performance are shaped in accordance with biological motion (e.g., Shove & Repp, 1995). A preliminary version of the GERM model was implemented by means of computer synthesis. Synthesized performances were evaluated by musically trained participants in a listening test. The results from the test support a decomposition of expression in terms of the GERM model. Implications for future research on music performance are discussed.
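The four-component decomposition described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation; the specific rule shapes, magnitudes, and parameter names (`intensity`, `timekeeper_sd`, the ritard curves) are hypothetical placeholders, chosen only to show how per-note timing deviations from four independent sources might combine additively into an expressive performance.

```python
import random

def generative_rules(i, n):
    # Hypothetical generative rule: linear slowing toward the phrase end (ms).
    return 20.0 * (i / (n - 1))

def emotional_expression(intensity):
    # Hypothetical mapping: a stronger expressive intention stretches notes (ms).
    return 15.0 * intensity

def random_variation(rng, timekeeper_sd=6.0):
    # Internal timekeeper noise, modeled here as Gaussian jitter (ms).
    return rng.gauss(0.0, timekeeper_sd)

def movement_principle(i, n):
    # Hypothetical biological-motion ritard: quadratic deceleration (ms).
    x = i / (n - 1)
    return 30.0 * x * x

def germ_timing(nominal_ioi_ms, n_notes, intensity=0.5, seed=1):
    """Return inter-onset intervals: nominal timing plus the sum of the
    four GERM-style deviation components for each note."""
    rng = random.Random(seed)
    return [nominal_ioi_ms
            + generative_rules(i, n_notes)
            + emotional_expression(intensity)
            + random_variation(rng)
            + movement_principle(i, n_notes)
            for i in range(n_notes)]

iois = germ_timing(nominal_ioi_ms=500.0, n_notes=8)
print([round(v, 1) for v in iois])
```

Because each component is an independent function of note position, intention, or noise, individual components can be switched off or rescaled, which mirrors the abstract's point that synthesized performances let listeners evaluate the decomposition.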

Place, publisher, year, edition, pages
2002. Special Issue 2001-2002, pp. 63-122.
Keyword [en]
music performance, computational modeling, sound synthesis, emotion, expression
National Category
Media and Communication Technology
Research subject
Media Technology
Identifiers
URN: urn:nbn:se:kth:diva-206990
DOI: 10.1177/10298649020050S104
OAI: oai:DiVA.org:kth-206990
DiVA: diva2:1094928
Note

QCR 20170517

Available from: 2017-05-11. Created: 2017-05-11. Last updated: 2017-05-17. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text

Search in DiVA

By author/editor
Friberg, Anders; Bresin, Roberto
By organisation
Speech, Music and Hearing, TMH; Media Technology and Interaction Design, MID
In the same journal
Musicae scientiae
Media and Communication Technology

Search outside of DiVA

Google; Google Scholar
