A Computational Model of Immanent Accent Salience in Tonal Music
Centre for Systematic Musicology, University of Graz, Graz, Austria.
KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-2926-6518
Centre for Systematic Musicology, University of Graz, Graz, Austria.
2019 (English). In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 10, no 317, p. 1-19. Article in journal (Refereed). Published.
Abstract [en]

Accents are local musical events that attract the attention of the listener, and can be either immanent (evident from the score) or performed (added by the performer). Immanent accents involve temporal grouping (phrasing), meter, melody, and harmony; performed accents involve changes in timing, dynamics, articulation, and timbre. In the past, grouping, metrical and melodic accents were investigated in the context of expressive music performance. We present a novel computational model of immanent accent salience in tonal music that automatically predicts the positions and saliences of metrical, melodic and harmonic accents. The model extends previous research by improving on preliminary formulations of metrical and melodic accents and introducing a new model for harmonic accents that combines harmonic dissonance and harmonic surprise. In an analysis-by-synthesis approach, model predictions were compared with data from two experiments involving 239 and 638 sonorities, rated by 16 musicians and 5 experts in music theory, respectively. Average pair-wise correlations between raters were lower for metrical (0.27) and melodic accents (0.37) than for harmonic accents (0.49). In both experiments, when combining all the raters into a single measure expressing their consensus, correlations between ratings and model predictions ranged from 0.43 to 0.62. When the different accent categories were combined, correlations were higher than for the separate categories (r = 0.66). This suggests that raters might use strategies different from the individual metrical, melodic or harmonic accent models when marking the musical events.

Place, publisher, year, edition, pages
Frontiers Media S.A., 2019. Vol. 10, no 317, p. 1-19
Keywords [en]
immanent accents, salience, music expression, music analysis, computational modeling
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Computer Science; Art, Technology and Design
Identifiers
URN: urn:nbn:se:kth:diva-247966
DOI: 10.3389/fpsyg.2019.00317
ISI: 000462826400001
PubMedID: 30984047
Scopus ID: 2-s2.0-85065134169
OAI: oai:DiVA.org:kth-247966
DiVA id: diva2:1300808
Note

QC 20190423

Available from: 2019-03-29. Created: 2019-03-29. Last updated: 2019-05-16. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text: https://doi.org/10.3389/fpsyg.2019.00317 (also indexed in PubMed and Scopus)

By author/editor: Friberg, Anders
By organisation: Speech, Music and Hearing, TMH