Using perceptually defined music features in music information retrieval
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. ORCID iD: 0000-0003-2926-6518
Hanover University of Music, Germany.
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
2014 (English) Manuscript (preprint) (Other academic)
Abstract [en]

In this study, the notion of perceptual features is introduced for describing general music properties based on human perception. This is an attempt at rethinking the concept of features in order to understand the underlying mechanisms of human perception. Instead of using concepts from music theory such as tones, pitches, and chords, a set of nine features describing overall properties of the music was selected. They were chosen from qualitative measures used in psychology studies and motivated from an ecological approach. The selected perceptual features were rated in two listening experiments using two different data sets. They were modeled from both symbolic (MIDI) and audio data using different sets of computational features. Ratings of emotional expression were predicted using the perceptual features. The results indicate that (1) at least some of the perceptual features can be rated reliably; (2) emotion ratings could be predicted by a small combination of perceptual features, with an explained variance of up to 90%; (3) the perceptual features could be modeled only to a limited extent using existing audio features. The results also clearly indicated that a small number of dedicated features were superior to a 'brute force' model using a large number of general audio features.
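The prediction setup the abstract describes, regressing emotion ratings on a small set of perceptual features and reporting explained variance (R²), can be sketched as follows. This is a minimal, hypothetical illustration on synthetic data: the feature names, weights, and sample sizes are assumptions for demonstration, not the paper's data or method details.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 musical excerpts, each rated on 9 perceptual
# features (e.g. speed, dynamics, articulation -- illustrative names only,
# not the paper's exact feature set).
n_excerpts, n_features = 100, 9
X = rng.normal(size=(n_excerpts, n_features))

# Simulated emotion rating driven mainly by a few features plus noise,
# mimicking the finding that a small combination of features predicts well.
true_weights = np.array([1.5, -1.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_weights + 0.3 * rng.normal(size=n_excerpts)

# Ordinary least-squares fit with an intercept column.
X1 = np.column_stack([np.ones(n_excerpts), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
y_hat = X1 @ coef

# Explained variance (R^2), the statistic the abstract reports (up to 90%).
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"explained variance: {r2:.2f}")
```

With most of the signal carried by three features, the fitted model recovers nearly all of the variance; adding many irrelevant predictors would be the synthetic analogue of the 'brute force' audio-feature model the abstract contrasts against.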

Place, publisher, year, edition, pages
2014.
National Category
Computer Science; Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-158159
OAI: oai:DiVA.org:kth-158159
DiVA: diva2:774983
Note

QC 20150331. tmh_import_14_12_30, tmh_id_3929

Available from: 2014-12-30 Created: 2014-12-30 Last updated: 2015-03-31 Bibliographically approved

Open Access in DiVA

No full text

Other links

Full-text in arXiv

By author/editor
Friberg, Anders; Schoonderwaldt, Erwin; Hedblad, Anton; Fabiani, Marco; Elowsson, Anders
By organisation
Music Acoustics