1 - 17 of 17
  • 1.
    Askenfelt, Anders
    KTH, Former Departments, Speech, Music and Hearing.
    Special issue: Selected papers from the Stockholm Music Acoustics Conference - Introduction. 2004. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 33, no. 3, pp. 185-187. Article in journal (Other academic).
  • 2.
    Bresin, Roberto
    KTH, Former Departments, Speech, Music and Hearing.
    Artificial neural networks based models for automatic performance of musical scores. 1998. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 27, no. 3, pp. 239-270. Article in journal (Refereed).
    Abstract [en]

    This article briefly summarises the author's research on automatic performance, started at CSC (Centro di Sonologia Computazionale, University of Padua) and continued at TMH-KTH (Department of Speech, Music and Hearing at the Royal Institute of Technology, Stockholm). The focus is on the evolution of the architecture of an artificial neural networks (ANNs) framework, from the first simple model, able to learn the KTH performance rules, to the final one, which accurately simulates the style of a real pianist performer, including time and loudness deviations. The task was to analyse and synthesise the performance process of a professional pianist, playing on a Disklavier. An automatic analysis extracts all performance parameters of the pianist, starting from the KTH rule system. The system possesses good generalisation properties: applying the same ANN, it is possible to perform different scores in the performing style used for training the networks. Brief descriptions of the program Melodia and of the two Java applets Japer and Jalisper are given in the Appendix. In Melodia, developed at the CSC, the user can run either rules or ANNs, and study their different effects. Japer and Jalisper, developed at TMH, implement in real time on the web the performance rules developed at TMH plus new features achieved by using ANNs.
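
    The abstract does not give the network architecture, so the following is only a rough sketch, not the authors' model: a small feed-forward regressor (here sklearn's MLPRegressor) trained on invented per-note score features and synthetic deviation data, mapping score context to timing and loudness deviations.

        # Illustrative sketch only; features, data and network size are assumptions.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Toy per-note score features: MIDI pitch, nominal duration (beats),
        # metrical position within the bar (0-1). All values are synthetic.
        X = rng.uniform([36, 0.25, 0.0], [96, 2.0, 1.0], size=(200, 3))

        # Synthetic "performer" deviations: IOI lengthening (fraction of the
        # nominal duration) and a loudness offset in dB. A real system would
        # measure these from a pianist's MIDI recordings.
        y = np.column_stack([
            0.05 * X[:, 2] + 0.01 * rng.standard_normal(200),
            2.0 * (X[:, 0] - 66) / 30 + 0.5 * rng.standard_normal(200),
        ])

        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        net.fit(X, y)

        # Predict deviations for one new (synthetic) note.
        d_time, d_db = net.predict(np.array([[60, 1.0, 0.5]]))[0]
        print(f"lengthening {d_time:+.3f} of nominal IOI, loudness {d_db:+.1f} dB")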

  • 3.
    Bresin, Roberto
    et al.
    KTH, Former Departments, Speech, Music and Hearing.
    Battel, Giovanni Umberto
    Articulation strategies in expressive piano performance - Analysis of legato, staccato, and repeated notes in performances of the Andante movement of Mozart's Sonata in G major (K 545). 2000. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 29, no. 3, pp. 211-224. Article in journal (Refereed).
    Abstract [en]

    Articulation strategies applied by pianists in expressive performances of the same score are analysed. Measurements of key overlap time and its relation to the inter-onset interval are collected for notes marked legato and staccato in the first sixteen bars of the Andante movement of W.A. Mozart's Piano Sonata in G major, K 545. Five pianists played the piece nine times. First, they played in a way that they considered "optimal". In the remaining eight performances they were asked to represent different expressive characters, as specified in terms of different adjectives. Legato, staccato, and repeated-note articulation applied by the right hand were examined by means of statistical analysis. Although the results varied considerably between pianists, some trends could be observed. The pianists generally used similar strategies in the renderings intended to represent different expressive characters. Legato was played with a key overlap ratio that depended on the inter-onset interval (IOI). Staccato tones had a duration of approximately 40% of the IOI. Repeated notes were played with a duration of about 60% of the IOI. The results seem useful as a basis for articulation rules in grammars for automatic piano performance.
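
    As a rough illustration of how such measurements could feed articulation rules in a performance grammar, the sketch below turns an articulation mark and an inter-onset interval into a sounding duration; the ratios and the legato overlap values are simplified assumptions, not the study's fitted results.

        # Toy articulation rule: sounding duration from IOI and articulation mark.
        def articulated_duration(ioi_ms: float, articulation: str) -> float:
            if articulation == "staccato":
                return 0.40 * ioi_ms          # roughly 40% of the IOI
            if articulation == "repeated":
                return 0.60 * ioi_ms          # roughly 60% of the IOI
            if articulation == "legato":
                # Key overlap: the next key goes down before this one is released.
                # The overlap ratio is assumed here to shrink for longer IOIs.
                overlap_ratio = 0.10 if ioi_ms > 500 else 0.20
                return (1.0 + overlap_ratio) * ioi_ms
            return ioi_ms                     # default: detached, full value

        for mark in ("legato", "staccato", "repeated"):
            print(mark, round(articulated_duration(400.0, mark), 1), "ms")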

  • 4. Camurri, A.
    et al.
    De Poli, G.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Leman, M.
    Volpe, G.
    The MEGA project: Analysis and synthesis of multisensory expressive gesture in performing art applications. 2005. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 34, no. 1, pp. 5-21. Article in journal (Refereed).
    Abstract [en]

    This article presents a survey of the research work carried out within the framework of the European Union-IST project MEGA (Multisensory Expressive Gesture Applications, November 2000-October 2003; www.megaproject.org). First, the article introduces a layered conceptual framework for analysis and synthesis of expressive gesture. Such a framework represents the main methodological foundation upon which the MEGA project built its own research. A brief overview of the achievements of research in expressive gesture analysis and synthesis is then provided: these are the outcomes of some experiments that were carried out in order to investigate specific aspects of expressive gestural communication. The work resulted in the design and development of a collection of software libraries integrated in the MEGA System Environment (MEGASE) based on the EyesWeb open platform (www.eyesweb.org).

  • 5.
    Dahl, Sofia
    KTH, Former Departments, Speech, Music and Hearing.
    The playing of an accent: Preliminary observations from temporal and kinematic analysis of percussionists. 2000. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 29, no. 3, pp. 225-233. Article in journal (Refereed).
    Abstract [en]

    The movements and timing when playing an interleaved accent in drumming were studied for three professionals and one amateur. The movement analysis showed that the subjects prepared for the accented stroke by raising the drumstick up to a greater height. The movement strategies used, however, differed widely in appearance.

    The timing analysis showed two basic features, a slow change in tempo over a longer time span ("drift"), and a short-term variation between adjacent intervals ("flutter"). Cyclic patterns, with every fourth interval prolonged, could be seen in the flutter. The lengthening of the interval, beginning with the accented stroke, seems to be a common way for the player to give the accent more emphasis. A listening test was performed to investigate if these cyclic patterns conveyed information to a listener about the grouping of the strokes. Listeners identified sequences where the magnitude of the inter-onset interval fluctuations was large during the cyclic patterns.
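
    A minimal sketch of separating "drift" from "flutter" in a sequence of inter-onset intervals; the moving-average decomposition and every number below are illustrative assumptions, not the analysis used in the study.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 64
        ioi = 250.0 + np.linspace(0.0, 10.0, n)   # slow drift: 250 ms -> 260 ms
        ioi[3::4] += 8.0                           # every fourth interval prolonged
        ioi += 2.0 * rng.standard_normal(n)        # random short-term variation

        window = 8
        drift = np.convolve(ioi, np.ones(window) / window, mode="same")
        flutter = ioi - drift                      # what is left after the slow trend

        print(f"tempo drift over the excerpt: {drift[-1] - drift[0]:.1f} ms")
        print(f"mean extra length of every 4th interval: "
              f"{flutter[3::4].mean() - flutter.mean():.1f} ms")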

  • 6.
    Elblaus, Ludvig
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Hansen, Kjetil Falkenberg
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Unander-Scharin, Carl
    University College of Opera, Sweden.
    Artistically directed prototyping in development and in practice. 2012. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 41, no. 4, pp. 377-387. Article in journal (Refereed).
    Abstract [en]

    The use of technology in artistic contexts presents interesting challenges regarding the processes in which engineers, artists and performers work together. The artistic intent and goals of the participants are relevant both when shaping the development practice, and in defining and refining the role of technology in practice. In this paper we present strategies for structuring the development process, based on iterative design and participatory design. The concepts are described in theory and examples are given of how they have been successfully applied. The cases make heavy use of different types of prototyping and this practice is also discussed. The development cases all relate to a single artifact, a gestural voice processing instrument called The Throat. This artifact has been in use since it was developed, and from that experience, three cases are presented. The focus of these cases is on how artistic vision through practice can recontextualize technology, and, without rebuilding it, redefine it and give it a new role to play.

  • 7.
    Friberg, Anders
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Ahlback, Sven
    Recognition of the Main Melody in a Polyphonic Symbolic Score using Perceptual Knowledge. 2009. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 38, no. 2, pp. 155-169. Article in journal (Refereed).
    Abstract [en]

    It is in many cases easy for a human to identify the main melodic theme when listening to a music example. Melodic properties have been studied in several research projects; however, the differences between properties of the melody and properties of the accompaniment (non-melodic) voices have not been addressed until recently. A set of features relating to basic low-level statistical measures was selected considering general perceptual aspects. A new 'narrative' measure was designed, intended to capture the amount of new unique material in each voice. The features were applied to a set of scores consisting of about 250 polyphonic ringtones, MIDI versions of contemporary pop songs. All tracks were annotated into categories such as melody and accompaniment. Both multiple regression and support vector machines were applied to either the features directly or to a Gaussian transformation of the features. The resulting models predicted the correct melody in about 90% of the cases using a set of eight features. The results emphasize context as an important factor for determining the main melody. A previous version of the system has been used in a commercial system for modifying ringtones.
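
    The general approach (simple per-track statistics fed to a classifier) can be sketched as follows; the features, the synthetic data and the SVM settings are invented for illustration and are not the authors' feature set or models.

        import numpy as np
        from sklearn.svm import SVC

        def track_features(pitches, onsets):
            """A few low-level statistics for one track (toy feature set)."""
            ioi = np.diff(np.sort(onsets)) if len(onsets) > 1 else np.array([0.0])
            return [np.mean(pitches), np.std(pitches), len(pitches), np.mean(ioi)]

        rng = np.random.default_rng(2)
        X, y = [], []
        for _ in range(100):
            # Synthetic "melody" tracks: higher, more varied pitch, denser onsets.
            X.append(track_features(rng.normal(72, 5, 60),
                                    np.cumsum(rng.uniform(0.2, 0.6, 60))))
            y.append(1)
            # Synthetic "accompaniment" tracks: lower, flatter, sparser.
            X.append(track_features(rng.normal(48, 2, 30),
                                    np.cumsum(rng.uniform(0.5, 1.5, 30))))
            y.append(0)

        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
        test = track_features(rng.normal(74, 6, 50), np.cumsum(rng.uniform(0.25, 0.5, 50)))
        print("classified as melody:", bool(clf.predict([test])[0]))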

  • 8.
    Friberg, Anders
    et al.
    KTH, Former Departments (before 2005), Speech, Music and Hearing.
    Bresin, R.
    KTH, Former Departments (before 2005), Speech, Music and Hearing.
    Frydén, L.
    KTH, Former Departments (before 2005), Speech, Music and Hearing.
    Sundberg, J.
    KTH, Former Departments (before 2005), Speech, Music and Hearing.
    Musical punctuation on the microlevel: Automatic identification and performance of small melodic units. 1998. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 27, no. 3, pp. 271-292. Article in journal (Refereed).
    Abstract [en]

    In this investigation we use the term musical punctuation for the marking of melodic structure by commas inserted at the boundaries that separate small structural units. Two models are presented that automatically try to locate the positions of such commas. They both use the score as the input and operate with a short context of maximally five notes. The first model is based on a set of subrules. One group of subrules marks possible comma positions, each provided with a weight value. Another group alters or removes these weight values according to different conditions. The second model is an artificial neural network using a similar input as that used by the rule system. The commas proposed by either model are realized in terms of micropauses and of small lengthenings of inter-onset durations. The models are evaluated by using a set of 52 musical excerpts, which were marked with punctuations according to the preference of an expert performer. Sound examples are available in the JNMR Electronic Appendix (EA), which can be found on the WWW at http://www.swets.nl/jnmr/jnmr.html
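
    A toy version of the first (rule-based) model, only to make the weighting idea concrete; the subrule conditions, weights, thresholds and realization values below are invented and do not reproduce the published rule set.

        # Notes of a short synthetic melody as (MIDI pitch, duration in beats).
        notes = [(67, 0.5), (69, 0.5), (71, 1.0), (72, 0.5), (71, 0.5),
                 (69, 1.5), (67, 0.5), (65, 0.5), (64, 2.0)]

        # Subrules that propose comma positions and add weight.
        weights = [0.0] * len(notes)
        for i in range(1, len(notes) - 1):
            if notes[i][1] >= 2 * notes[i - 1][1]:       # long note after a shorter one
                weights[i] += 1.0
            if abs(notes[i + 1][0] - notes[i][0]) >= 4:  # melodic leap after the note
                weights[i] += 0.5

        # A subrule that removes a comma directly following another comma.
        for i in range(1, len(notes)):
            if weights[i - 1] >= 1.0:
                weights[i] = 0.0

        # Accepted commas are realized as a micropause plus a small IOI lengthening.
        for i, w in enumerate(weights):
            if w >= 1.0:
                print(f"comma after note {i}: pause {40 * w:.0f} ms, IOI x{1 + 0.05 * w:.2f}")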

  • 9.
    Friberg, Anders
    et al.
    KTH, Former Departments, Speech, Music and Hearing.
    Bresin, Roberto
    KTH, Former Departments, Speech, Music and Hearing.
    Fryden, Lars
    Sundberg, Johan
    KTH, Former Departments, Speech, Music and Hearing.
    Musical punctuation on the microlevel: Automatic identification and performance of small melodic units. 1998. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 27, no. 3, pp. 271-292. Article in journal (Refereed).
    Abstract [en]

    In this investigation we use the term musical punctuation for the marking of melodic structure by commas inserted at the boundaries that separate small structural units. Two models are presented that automatically try to locate the positions of such commas. They both use the score as the input and operate with a short context of maximally five notes. The first model is based on a set of subrules. One group of subrules marks possible comma positions, each provided with a weight value. Another group alters or removes these weight values according to different conditions. The second model is an artificial neural network using a similar input as that used by the rule system. The commas proposed by either model are realized in terms of micropauses and of small lengthenings of inter-onset durations. The models are evaluated by using a set of 52 musical excerpts, which were marked with punctuations according to the preference of an expert performer.

  • 10.
    Friberg, Anders
    et al.
    KTH, Former Departments, Speech Transmission and Music Acoustics.
    Sundberg, J.
    Fryden, L.
    Music from motion: Sound level envelopes of tones expressing human locomotion. 2000. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 29, no. 3, pp. 199-210. Article in journal (Refereed).
    Abstract [en]

    The common association of music with motion was investigated in a direct way. Could the original motion quality of different gaits be transferred to music and be perceived by a listener? Measurements of the ground reaction force exerted by the foot during different gaits were transferred to sound by using the vertical force curve as sound level envelopes for tones played at different tempi. Three listening experiments assessed the motion quality of the resulting stimuli. In the first experiment, where the listeners were asked to freely describe the tones, 25% of the answers were direct references to motion; such answers were more frequent at faster tempi. In the second experiment, where the listeners were asked to describe the motion quality, about half of the answers directly related to motion could be classified as belonging to one of the categories dancing, jumping, running, walking, or stumbling. Most gait patterns were clearly classified as belonging to one of these categories, independent of presentation tempo. In the third experiment, the listeners were asked to rate the stimuli on 24 adjective scales. A factor analysis yielded four factors that could be interpreted as Swift vs. Solemn (factor 1), Graceful vs. Stamping (factor 2), Limping vs. Forceful (factor 3), and Springy (factor 4, no contrasting adjective). The results from the three experiments were consistent and indicated that each tone (corresponding to a particular gait) could clearly be categorised in terms of motion.
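
    The transfer itself (a vertical force curve used as the sound level envelope of a tone) is easy to sketch; the double-humped force shape, the 440 Hz tone and all parameter values below are synthetic stand-ins for the measured gait data.

        import numpy as np

        sr = 22050                     # sample rate in Hz
        tone_s = 0.5                   # duration of one "step" tone in seconds
        t = np.linspace(0.0, tone_s, int(sr * tone_s), endpoint=False)

        # Synthetic walking-like force curve: two overlapping humps (heel strike
        # and push-off), normalized to a 0..1 amplitude envelope.
        phase = np.linspace(0.0, 1.0, t.size)
        force = (np.exp(-((phase - 0.25) / 0.12) ** 2)
                 + np.exp(-((phase - 0.75) / 0.12) ** 2))
        envelope = force / force.max()

        tone = np.sin(2 * np.pi * 440.0 * t) * envelope   # enveloped sine tone
        print("peak amplitude:", round(float(np.abs(tone).max()), 3))
        # The samples in `tone` could be written to a WAV file; changing `tone_s`
        # corresponds to presenting the gait pattern at a different tempo.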

  • 11.
    Hansen, Kjetil Falkenberg
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    The basics of scratching. 2002. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 31, no. 4, pp. 357-365. Article in journal (Refereed).
    Abstract [en]

    This article deals with the popular and rarely studied art form of manipulating a vinyl record by rhythmically dragging and pushing it, commonly labelled “scratching.” With sufficient practice, a Disc Jockey (DJ) can have great control over the sound produced and treat the turntable as an expressive musical instrument. Even though a digital-based model of scratching might seem preferable to the vulnerable vinyl record, and such models are being manufactured today, the acoustical behaviour of the scratch has not been formally studied until now. To gain information about this behaviour, a DJ was asked to perform some typical scratching patterns. These common playing techniques and the corresponding sounds have been analysed. Since the focus of the article is on the basics of how the instrument works, an overview of standardized equipment and alternative equipment is also given.

  • 12.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Dravins, Christina
    Riga Stradiņš University, Latvia.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Active Listening and Expressive Communication for Children with Hearing Loss Using Getatable Environments for Creativity. 2012. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 41, no. 4, pp. 365-375. Article in journal (Refereed).
    Abstract [en]

    This paper describes a system for accommodating active listening for persons with hearing aids or cochlear implants, with a special focus on children at an early stage of cognitive development and with additional physical disabilities. A system called the Soundscraper is proposed; it consists of a software part in Pure Data and a hardware part using an Arduino microcontroller with a combination of sensors. For both the software and hardware development it was important to always ensure that the system was flexible enough to cater for the very different conditions that are characteristic of the intended user group. The Soundscraper has been tested with 25 children with good results. An increased attention span was reported, as well as positively surprising reactions from children where the caregivers were unsure whether they could hear at all. The sound synthesis methods, the gesture sensors and the employed parameter mapping were all simple, but they provided a controllable and sufficiently complex sound environment even with limited interaction. A possible future outcome of the application is the adoption of long-term analysis of sound preferences as opposed to traditional audiological investigations.
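
    The Soundscraper itself is built with Pure Data and an Arduino; purely as an illustration of the kind of simple parameter mapping the paper mentions, the Python sketch below scales and smooths a raw sensor reading into a synthesis parameter range (the ranges and the smoothing factor are assumptions).

        def map_sensor(raw, out_lo, out_hi, raw_lo=0, raw_hi=1023):
            """Linearly map a raw reading (e.g. a 10-bit ADC value) to [out_lo, out_hi]."""
            raw = min(max(raw, raw_lo), raw_hi)
            frac = (raw - raw_lo) / (raw_hi - raw_lo)
            return out_lo + frac * (out_hi - out_lo)

        smoothed = 0.0
        for raw in (10, 400, 700, 1023):                  # pretend sensor readings
            cutoff_hz = map_sensor(raw, 200.0, 4000.0)    # e.g. a filter cutoff
            smoothed = 0.8 * smoothed + 0.2 * cutoff_hz   # simple one-pole smoothing
            print(f"raw {raw:4d} -> cutoff {cutoff_hz:7.1f} Hz (smoothed {smoothed:7.1f})")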

  • 13.
    Holzapfel, André
    Boğaziçi University, Turkey.
    Relation between surface rhythm and rhythmic modes in Turkish makam music. 2015. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 44, no. 1, pp. 25-38. Article in journal (Refereed).
    Abstract [en]

    Sounds in a piece of music form rhythmic patterns on the surface of a music signal, and in a metered piece these patterns stand in some relation to the underlying rhythmic mode or meter. In this paper, we investigate how the surface rhythm is related to the usul, which are the rhythmic modes in compositions of Turkish makam music. On a large corpus of notations of vocal pieces in short usul we observe the ways notes are distributed in relation to the usul. We observe differences in these distributions between Turkish makam and Eurogenetic music, which imply a less accentuated stratification of meter in Turkish makam music. We observe changes in rhythmic style between two composers who represent two different historical periods in Turkish makam music, a result that adds to previous observations on changes in style of Turkish makam music throughout the centuries. We demonstrate that rhythmic aspects in Turkish makam music can be considered as the outcome of a generative model, and conduct style comparisons in a Bayesian statistical framework.
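
    The basic surface-rhythm observation (how note onsets distribute over the positions of an usul cycle) can be sketched as a simple histogram; the usul length and the onset values below are toy data, and the actual analysis in the paper is corpus-based and carried out in a Bayesian framework.

        from collections import Counter

        usul_beats = 9                          # e.g. a 9-beat usul such as aksak
        onsets = [0, 2, 3, 5, 7, 9, 11, 12, 14, 16, 18, 20, 21, 23]  # toy onsets (beats)

        histogram = Counter(onset % usul_beats for onset in onsets)
        for position in range(usul_beats):
            print(f"beat {position}: {'#' * histogram.get(position, 0)}")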

  • 14. Lindström, E.
    et al.
    Camurri, A.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Volpe, G.
    Rinman, M. L.
    Affect, attitude and evaluation of multisensory performances. 2005. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 34, no. 1, pp. 69-86. Article in journal (Refereed).
    Abstract [en]

    The EU-IST project MEGA (Multisensory Expressive Gesture Applications; see www.megaproject.org) addresses innovative technologies for multimodal interactive systems in artistic scenarios. Basic research on expressive communication in music, gesture and dance has been a focus for EU-IST-funded European researchers in psychology, technology and computer engineering. The output from this cooperation with artists has also revealed ideas and innovations for applications in social, artistic and communicative entertainment. However, even the most careful basic research and computer engineering could never estimate the real efficiency and benefit of such new expressive applications. The purpose of this article, therefore, is to get feedback from the audience and the artists/performers/players at three public MEGA events: the interactive music concert Allegoria dell'opinione verbale, a dance performance by Groove Machine and a public game (Ghost in the Cave). General attitude, perceived communication/affect and practical efficiency were evaluated by questionnaires. Results showed that: (a) the performers were able to control the expressive output within the application, (b) the audience was positive to each event,

  • 15. Serra, Xavier
    et al.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Camurri, Antonio
    Sound and music computing: Challenges and strategies. 2007. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 36, no. 3, pp. 185-190. Article in journal (Refereed).
    Abstract [en]

    Based on the current context of the Sound and Music Computing (SMC) field, the state of the art in research, and the open issues that have been identified and described in other articles of this journal issue, in this article we take a step forward and try to identify the broad SMC challenges, and we propose strategies with which to tackle them. On the research side we identify a clear need for designing better sound objects and environments and for promoting research to understand, model, and improve human interaction with sound and music. In the education domain we feel the need for better training of our multidisciplinary researchers and for making sure that they can contribute to the multicultural society we live in. There is also a clear need for improving the transfer of the knowledge and technologies generated by our community. Finally, we claim that the SMC field should be very much concerned with its social context and that a number of current social concerns should be addressed. We accompany each of these challenges with strategies that should help researchers, educators and policy makers take specific actions to advance in the proposed SMC roadmap.

  • 16.
    Sturm, Bob
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH.
    Ben-Tal, Oded
    Kingston University, UK.
    Monaghan, Úna
    Cambridge University, UK.
    Collins, Nick
    Durham University, UK.
    Herremans, Dorien
    University of Technology and Design, Singapore.
    Chew, Elaine
    Queen Mary University of London, UK.
    Hadjeres, Gaëtan
    Sony CSL, Paris.
    Deruty, Emmanuel
    Sony CSL, Paris.
    Pachet, François
    Spotify, Paris.
    Machine Learning Research that Matters for Music Creation: A Case Study. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027. Article in journal (Refereed).
    Abstract [en]

    Research applying machine learning to music modeling and generation typically proposes model architectures, training methods and datasets, and gauges system performance using quantitative measures like sequence likelihoods and/or qualitative listening tests. Rarely does such work explicitly question and analyse its usefulness for and impact on real-world practitioners, and then build on those outcomes to inform the development and application of machine learning. This article attempts to do these things for machine learning applied to music creation. Together with practitioners, we develop and use several applications of machine learning for music creation, and present a public concert of the results. We reflect on the entire experience to arrive at several ways of advancing these and similar applications of machine learning to music creation.

  • 17. Sundberg, J.
    et al.
    Friberg, Anders
    KTH, Former Departments, Speech Transmission and Music Acoustics.
    Bresin, Roberto
    KTH, Former Departments, Speech Transmission and Music Acoustics.
    Attempts to reproduce a pianist's expressive timing with Director Musices performance rules. 2003. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 32, no. 3, pp. 317-325. Article in journal (Refereed).
    Abstract [en]

    The Director Musices generative grammar of music performance is a system of context-dependent rules that automatically introduces expressive deviations in performances of input score files. A number of these rules concern timing. In this investigation the ability of such rules to reproduce a professional pianist's timing deviations from nominal note inter-onset intervals is examined. Rules affecting tone inter-onset intervals were first tested one by one for the various sections of the excerpt, and then in combinations. Results were evaluated in terms of the correlation between the deviations made by the pianist and by the rule system. It is found that rules reflecting the phrase structure produced high correlations in some sections. On the other hand, some rules failed to produce significant correlation with the pianist's deviations, and thus seemed irrelevant to the particular performance analysed. It is concluded that phrasing was a prominent principle in this performance and that rule combinations have to change between sections in order to match this pianist's deviations.
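
    The evaluation step described above amounts to correlating two deviation profiles; the sketch below uses synthetic data, not measurements from the study or actual Director Musices output.

        import numpy as np

        rng = np.random.default_rng(3)
        n_notes = 32
        pianist_dev = 10.0 * rng.standard_normal(n_notes)   # ms deviations from nominal IOIs
        rule_dev = 0.7 * pianist_dev + 5.0 * rng.standard_normal(n_notes)  # rule-system output

        r = np.corrcoef(pianist_dev, rule_dev)[0, 1]
        print(f"correlation between pianist and rule deviations: r = {r:.2f}")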
