Musical punctuation on the microlevel: Automatic identification and performance of small melodic units
KTH, Superseded Departments, Speech, Music and Hearing. ORCID iD: 0000-0002-3086-0322
1998 (English). In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 27, no. 3, pp. 271-292. Article in journal (refereed). Published.
Abstract [en]

In this investigation we use the term musical punctuation for the marking of melodic structure by commas inserted at the boundaries that separate small structural units. Two models are presented that attempt to locate the positions of such commas automatically. Both take the score as input and operate on a short context of at most five notes. The first model is based on a set of subrules: one group of subrules marks possible comma positions, each provided with a weight value, and another group alters or removes these weights according to different conditions. The second model is an artificial neural network that uses input similar to that of the rule system. The commas proposed by either model are realized as micropauses and as small lengthenings of interonset durations. The models are evaluated on a set of 52 musical excerpts, which were marked with punctuation according to the preferences of an expert performer.
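
The two-stage weighted-subrule scheme of the first model can be made concrete with a short sketch. The rule conditions, weights, and threshold below are illustrative assumptions, not the published rule set; only the structure (one group of subrules marks candidate positions with weights, another alters or removes them) follows the text.

```python
# Minimal sketch of a weighted-subrule punctuation model. The specific rule
# conditions, weights, and threshold are assumptions for illustration, not
# the published subrules, which operate on contexts of up to five notes.
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int       # MIDI note number
    duration: float  # nominal duration in beats

def mark_boundaries(notes):
    """First group of subrules: assign a weight to each candidate
    boundary (the position after note i)."""
    weights = [0.0] * (len(notes) - 1)
    for i in range(len(notes) - 1):
        # Hypothetical marker rule: a large pitch leap suggests a boundary.
        if abs(notes[i + 1].pitch - notes[i].pitch) >= 5:
            weights[i] += 1.0
        # Hypothetical marker rule: a long note suggests a unit ending.
        if notes[i].duration >= 2 * notes[i + 1].duration:
            weights[i] += 1.5
    return weights

def adjust_weights(weights):
    """Second group of subrules: alter or remove weights by context.
    Here, a hypothetical modifier suppresses the weaker of two adjacent
    comma candidates."""
    for i in range(1, len(weights)):
        if weights[i - 1] > 0 and weights[i] > 0:
            if weights[i - 1] >= weights[i]:
                weights[i] = 0.0
            else:
                weights[i - 1] = 0.0
    return weights

def punctuate(notes, threshold=1.0):
    """Return the boundary indices where a comma is inserted."""
    weights = adjust_weights(mark_boundaries(notes))
    return [i for i, w in enumerate(weights) if w >= threshold]
```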

Place, publisher, year, edition, pages
1998. Vol. 27, no. 3, pp. 271-292.
Keyword [en]
MODEL
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-12920
ISI: 000077650100005
OAI: oai:DiVA.org:kth-12920
DiVA: diva2:319626
Note
QC 20100519. Available from: 2010-05-19. Created: 2010-05-19. Last updated: 2017-12-12. Bibliographically approved.
In thesis
1. Virtual virtuosity
2000 (English). Doctoral thesis, comprehensive summary (Other scientific).
Abstract [en]

This dissertation presents research in the field of automatic music performance with a special focus on piano.

A system is proposed for automatic music performance, based on artificial neural networks (ANNs). A complex, ecological-predictive ANN was designed that listens to the last played note, predicts the performance of the next note, looks three notes ahead in the score, and plays the current tone. This system was able to learn a professional pianist's performance style at the structural micro-level. In a listening test, performances by the ANN were judged clearly better than deadpan performances and slightly better than performances obtained with generative rules.
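
As a rough sketch of the network's input/output framing described above (last played note, current note, and three notes ahead go in; performance parameters for the current note come out), here is a minimal untrained feedforward net. The feature encoding, layer size, and output parameters are assumptions for illustration, not the architecture in the thesis.

```python
# Sketch of the input/output structure of an ecological-predictive network.
# Five context notes with three features each, one hidden layer, and two
# output deviations are all assumed values, not the thesis's design.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 5 * 3  # 5 notes in context, 3 features each (assumed encoding)
N_HIDDEN = 8        # assumed hidden-layer size
N_OUTPUTS = 2       # e.g. duration deviation and micropause (assumed)

W1 = rng.normal(0, 0.1, (N_HIDDEN, N_FEATURES))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_OUTPUTS, N_HIDDEN))
b2 = np.zeros(N_OUTPUTS)

def predict(context):
    """Map a 15-dimensional note-context vector to performance deviations
    for the current note."""
    h = np.tanh(W1 @ context + b1)
    return W2 @ h + b2

# Example: an all-zero context yields the (untrained) net's bias output.
print(predict(np.zeros(N_FEATURES)))
```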

The behavior of an ANN was compared with that of a symbolic rule system with respect to musical punctuation at the micro-level. The rule system mostly gave better results, but some segmentation principles of an expert musician were generalized only by the ANN.

Measurements of professional pianists' performances revealed interesting properties in the articulation of notes marked staccato and legato in the score. Performances were recorded on a grand piano connected to a computer. Staccato was realized by a micropause of about 60% of the inter-onset interval (IOI), while legato was realized by keeping two keys depressed simultaneously; the relative key overlap time depended on the IOI: the larger the IOI, the shorter the relative overlap. The magnitudes of these effects changed with the pianists' coloring of their performances and with the pitch contour. These regularities were modeled in a set of rules for articulation in automatic piano music performance.
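
These regularities translate directly into simple timing functions. In the sketch below, the 60% micropause follows the measurement quoted above, while the constant-absolute-overlap form for legato is an assumption chosen only to reproduce the reported trend (larger IOI, smaller relative overlap), not the fitted relation from the thesis.

```python
# Sketch of articulation rules based on the reported measurements. The 60%
# staccato micropause matches the figure in the text; the legato overlap
# function is an assumed form, not the thesis's fitted relation.
def staccato_micropause(ioi: float) -> float:
    """Silence between key release and the next onset, in seconds."""
    return 0.60 * ioi

def legato_key_overlap(ioi: float, overlap: float = 0.04) -> float:
    """Time both keys stay depressed, in seconds. A roughly constant
    absolute overlap (40 ms here, an assumed value) makes the relative
    overlap (overlap / IOI) decrease as the IOI grows."""
    return min(overlap, ioi)

for ioi in (0.125, 0.25, 0.5):  # IOIs in seconds
    print(f"IOI {ioi:.3f}s: micropause {staccato_micropause(ioi):.3f}s, "
          f"relative overlap {legato_key_overlap(ioi) / ioi:.2f}")
```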

Emotional coloring of performances was realized by means of macro-rules implemented in the Director Musices performance system. These macro-rules are groups of rules that were combined such that they reflected previous observations on musical expression of specific emotions. Six emotions were simulated. A listening test revealed that listeners were able to recognize the intended emotional colorings.
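
Conceptually, a macro-rule is a named bundle of rule settings applied together. The sketch below illustrates that idea only; the rule names, values, and the two emotions shown are invented for illustration and are not the actual Director Musices rule palette or the quantities used in the thesis.

```python
# Sketch of emotion macro-rules as named bundles of rule settings. All rule
# names and values are illustrative assumptions, not Director Musices' rules.
MACRO_RULES = {
    "sadness":   {"tempo_scale": 0.80, "sound_level_db": -3.0, "articulation": "legato"},
    "happiness": {"tempo_scale": 1.15, "sound_level_db": +2.0, "articulation": "staccato"},
}

def apply_macro_rule(params: dict, emotion: str) -> dict:
    """Overlay one emotion's bundle of rule settings onto base parameters."""
    out = dict(params)
    out.update(MACRO_RULES[emotion])
    return out

base = {"tempo_scale": 1.0, "sound_level_db": 0.0, "articulation": "neutral"}
print(apply_macro_rule(base, "sadness"))
```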

In addition, some possible future applications are discussed in the fields of automatic music performance, music education, automatic music analysis, virtual reality, and sound synthesis.

Place, publisher, year, edition, pages
Stockholm: KTH, 2000. ix, 32 p.
Series
Trita-TMH, 2000:9
Keyword
music, performance, expression, interpretation, piano, automatic, artificial neural networks
National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-3049 (URN)
91-7170-643-7 (ISBN)
Public defence
2000-12-01, 00:00 (English)
Note
QC 20100518. Available from: 2000-11-29. Created: 2000-11-29. Last updated: 2010-05-19. Bibliographically approved.

Open Access in DiVA

No full text

Authority records

Bresin, Roberto
