Rule-based expressive modifications of tempo in polyphonic audio recordings
Fabiani, Marco (KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH)
Friberg, Anders (KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH). ORCID iD: 0000-0003-2926-6518
2008 (English). In: Computer Music Modeling and Retrieval: Sense of Sounds / [ed] Kronland-Martinet R; Ystad S; Jensen K. Berlin: Springer-Verlag, 2008, Vol. 4969, pp. 288-302. Conference paper, Published paper (Refereed)
Abstract [en]

This paper describes a few aspects of a system for expressive, rule-based modifications of audio recordings regarding tempo, dynamics and articulation. The input audio signal is first aligned with a score containing extra information on how to modify a performance. The signal is then transformed into the time-frequency domain. Each played tone is identified using partial tracking and the score information. Articulation and dynamics are changed by modifying the length and content of the partial tracks. The focus here is on tempo modification, which is done using a combination of time-frequency techniques and phase reconstruction. Preliminary results indicate that the accuracy of the tempo modification is on average 8.2 ms when comparing inter-onset intervals in the resulting signal with the desired ones. Possible applications of such a system include music pedagogy, basic perception research, and interactive music systems.
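
The paper itself does not include code; as a rough illustrative sketch of the general class of technique the abstract refers to (STFT-based time-scale modification with phase reconstruction), the lines below apply librosa's generic phase vocoder to a recording. The file name, hop size and stretch rate are hypothetical, and a global stretch like this is not the authors' score-aligned, rule-driven system.

    # Illustrative sketch only (not from the paper): generic phase-vocoder
    # time-scale modification of a recording.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("performance.wav", sr=None)   # hypothetical input recording

    hop = 512
    D = librosa.stft(y, n_fft=2048, hop_length=hop)    # time-frequency transform

    rate = 1.1                                         # assumed stretch factor: >1 faster, <1 slower
    D_mod = librosa.phase_vocoder(D, rate=rate, hop_length=hop)  # phase reconstruction across frames

    y_mod = librosa.istft(D_mod, hop_length=hop)       # back to the time domain
    sf.write("performance_tempo_modified.wav", y_mod, sr)

A system along the lines described in the abstract would additionally align the audio with a score and vary the rate per note or phrase, rather than applying one global stretch.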

Place, publisher, year, edition, pages
Berlin: Springer-Verlag, 2008. Vol. 4969, pp. 288-302.
Series
Lecture Notes in Computer Science, ISSN 0302-9743; 4969
Keywords [en]
automatic music performance, performance rules, analysis-synthesis, time scale modification, audio signal processing
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-25031
ISI: 000258121900020
Scopus ID: 2-s2.0-50349097096
ISBN: 978-3-540-85034-2 (print)
OAI: oai:DiVA.org:kth-25031
DiVA: diva2:355175
Conference
4th International Symposium on Computer Music Modeling and Retrieval, Copenhagen, Denmark, Aug 27-31, 2007. Sponsors: Aalborg Univ; CNRS, Lab Mecan & Acoust; French Natl Res Agcy; Re New, Digital Arts Forum
Note
QC 20101006. Available from: 2010-10-06. Created: 2010-10-06. Last updated: 2010-10-06. Bibliographically approved.

Open Access in DiVA

No full text
