Artificial neural networks based models for automatic performance of musical scores
KTH, Superseded Departments, Speech, Music and Hearing.ORCID iD: 0000-0002-3086-0322
1998 (English). In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 27, no 3, pp. 239-270. Article in journal (Refereed). Published.
Abstract [en]

This article briefly summarises the author's research on automatic performance, started at CSC (Centro di Sonologia Computazionale, University of Padua) and continued at TMH-KTH (Department of Speech, Music and Hearing at the Royal Institute of Technology, Stockholm). The focus is on the evolution of the architecture of an artificial neural network (ANN) framework, from the first simple model, able to learn the KTH performance rules, to the final one, which accurately simulates the style of a real pianist, including time and loudness deviations. The task was to analyse and synthesise the performance process of a professional pianist playing on a Disklavier. An automatic analysis extracts all performance parameters of the pianist, starting from the KTH rule system. The system possesses good generalisation properties: applying the same ANN, it is possible to perform different scores in the performing style used for training the networks. Brief descriptions of the program Melodia and of the two Java applets Japer and Jalisper are given in the Appendix. In Melodia, developed at the CSC, the user can run either rules or ANNs and study their different effects. Japer and Jalisper, developed at TMH, implement in real time on the web the performance rules developed at TMH, plus new features achieved by using ANNs.

Place, publisher, year, edition, pages
1998. Vol. 27, no 3, pp. 239-270.
Keyword [en]
Rules, Computer Science, Interdisciplinary Applications; Music
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-12919
ISI: 000077650100004
OAI: oai:DiVA.org:kth-12919
DiVA: diva2:319624
Note
QC 20100519. Available from: 2010-05-19. Created: 2010-05-19. Last updated: 2017-12-12. Bibliographically approved.
In thesis
1. Virtual virtuosity
2000 (English). Doctoral thesis, comprehensive summary (Other scientific).
Abstract [en]

This dissertation presents research in the field of automatic music performance with a special focus on piano.

A system is proposed for automatic music performance, based on artificial neural networks (ANNs). A complex, ecological-predictive ANN was designed that listens to the last played note, predicts the performance of the next note, looks three notes ahead in the score, and plays the current tone. This system was able to learn a professional pianist's performance style at the structural micro-level. In a listening test, performances by the ANN were judged clearly better than deadpan performances and slightly better than performances obtained with generative rules.
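The listen/predict/look-ahead window described above can be sketched as follows. The feature encoding (pitch and duration per score note, timing and loudness deviation for the played note) and the end-of-score padding are assumptions made for illustration, not details from the thesis:

```python
# Sketch of the "ecological-predictive" input window: the network hears the
# last played note (feedback), sees the current note, and looks three notes
# ahead in the score. The feature choice here is hypothetical.

def build_input_window(score, played, i):
    """Assemble the network input vector for score position i.

    score  : list of (pitch, duration) tuples from the score
    played : list of (timing_dev, loudness_dev) tuples already performed
    """
    last_played = played[-1] if played else (0.0, 0.0)  # "listens" to last note
    current = score[i]                                  # note to play now
    lookahead = [score[j] if j < len(score) else (0, 0.0)  # "looks" 3 notes ahead
                 for j in range(i + 1, i + 4)]             # (zero-padded at the end)
    features = [*last_played, *current]
    for note in lookahead:
        features.extend(note)
    return features  # fixed length: 2 + 2 + 3*2 = 10 values
```

A feedforward net trained on such fixed-length windows would then map each vector to the timing and loudness deviations of the current note ("predicts" and "plays").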

The behavior of an ANN was compared with that of a symbolic rule system with respect to musical punctuation at the micro-level. The rule system mostly gave better results, but some segmentation principles of an expert musician were only generalized by the ANN.

Measurements of professional pianists' performances revealed interesting properties in the articulation of notes marked staccato and legato in the score. Performances were recorded on a grand piano connected to a computer. Staccato was realized by a micropause of about 60% of the inter-onset interval (IOI), while legato was realized by keeping two keys depressed simultaneously; the relative key overlap time was dependent on the IOI: the larger the IOI, the shorter the relative overlap. The magnitudes of these effects changed with the pianists' coloring of their performances and with the pitch contour. These regularities were modeled in a set of rules for articulation in automatic piano music performance.
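The two articulation regularities can be written out as a small sketch. The 60% micropause figure comes from the text above; the linear IOI-to-overlap mapping and its endpoint values are assumptions made here for the example:

```python
# Illustrative sketch of the measured articulation regularities. Only the
# ~60% staccato micropause is from the text; the legato mapping below is a
# hypothetical linear interpolation over an assumed IOI range.

def staccato_key_off(ioi):
    """Staccato: a micropause of ~60% of the IOI means the key is
    released after ~40% of the IOI (times in seconds)."""
    return 0.4 * ioi

def legato_overlap(ioi, min_ioi=0.1, max_ioi=1.0,
                   rel_at_min=0.4, rel_at_max=0.1):
    """Legato: absolute key overlap time. The relative overlap shrinks
    linearly as the IOI grows (endpoint values are hypothetical)."""
    t = min(max(ioi, min_ioi), max_ioi)                 # clamp to modeled range
    frac = (t - min_ioi) / (max_ioi - min_ioi)
    rel = rel_at_min + frac * (rel_at_max - rel_at_min)  # decreasing in IOI
    return rel * ioi
```

With these settings a short note (IOI 0.2 s) overlaps its successor by a larger fraction of its IOI than a long note (IOI 0.8 s) does, matching the "larger IOI, shorter relative overlap" observation.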

Emotional coloring of performances was realized by means of macro-rules implemented in the Director Musices performance system. These macro-rules are groups of rules that were combined such that they reflected previous observations on musical expression of specific emotions. Six emotions were simulated. A listening test revealed that listeners were able to recognize the intended emotional colorings.
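A toy rendering of the macro-rule idea, in which an emotion label selects a bundle of expressive settings applied to every note; the emotion labels and all numeric values below are illustrative, not the actual Director Musices rules or parameters:

```python
# Hypothetical macro-rules: each emotion maps to a bundle of parameter
# settings. Values and rule contents are invented for this sketch.

MACRO_RULES = {
    "happiness": {"tempo_scale": 1.15, "loudness_scale": 1.10},
    "sadness":   {"tempo_scale": 0.80, "loudness_scale": 0.85},
}

def apply_macro_rule(notes, emotion):
    """notes: list of dicts with 'dur' (seconds) and 'vel' (MIDI velocity).
    Returns a new note list with the emotion's settings applied."""
    p = MACRO_RULES[emotion]
    out = []
    for n in notes:
        out.append({
            "dur": n["dur"] / p["tempo_scale"],  # faster tempo -> shorter durations
            "vel": min(127, round(n["vel"] * p["loudness_scale"])),
        })
    return out
```

The point of the macro-rule design is that one label triggers a coherent group of changes at once, rather than the user tuning each low-level rule by hand.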

In addition, some possible future applications are discussed in the fields of automatic music performance, music education, automatic music analysis, virtual reality, and sound synthesis.

Place, publisher, year, edition, pages
Stockholm: KTH, 2000. ix, 32 pp.
Series
Trita-TMH, 2000:9
Keyword
music, performance, expression, interpretation, piano, automatic, artificial neural networks
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-3049
ISBN: 91-7170-643-7
Public defence
2000-12-01, 00:00 (English)
Note
QC 20100518. Available from: 2000-11-29. Created: 2000-11-29. Last updated: 2010-05-19. Bibliographically approved.

Open Access in DiVA

No full text

Authority records

Bresin, Roberto
