Hits 1-50 of 146
  • 1. Addessi, A. R.
    et al.
    Anelli, F.
    Benghi, D.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Corrigendum: Child-computer interaction at the beginner stage of music learning: Effects of reflexive interaction on children's musical improvisation [Front. Psychol. 8 (2017) 65]. DOI: 10.3389/fpsyg.2017.00065. 2017. In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 8, no. MAR, article id 399. Article in journal (Refereed)
    Abstract [en]

    A corrigendum on Child-Computer Interaction at the Beginner Stage of Music Learning: Effects of Reflexive Interaction on Children's Musical Improvisation by Addessi, A. R., Anelli, F., Benghi, D., and Friberg, A. (2017). Front. Psychol. 8:65. doi: 10.3389/fpsyg.2017.00065. In the original article, there was an error: "she plays C3" was used instead of "it plays C3." A correction has been made to Observation and Theoretical Framework of Reflexive Interaction, paragraph 3: The little girl plays two consecutive notes, C2 and A2, and then stops to wait for the response of the system. The system responds by repeating the same notes. The child then plays a single note, G2, and the system responds with a single note but this time introduces a variation: it plays C3, thus introducing a higher register. The girl, following the change introduced by the system, moves toward the higher register and plays a variant of the initial pattern, namely D2-A2-E2-C3, and introduces a particular rhythm pattern. This "reflexive" event marks the beginning of a dialogue based on repetition and variation: the rhythmic-melodic pattern will be repeated and varied by both the system and the child in consecutive exchanges, until acquiring the form of a complete musical phrase. At some point in the dialogue, the child begins to accompany the system's response with arm movements synchronized with the rhythmic-melodic patterns, creating a kind of music-motor composition. In addition, EG1 and EG2 are incorrectly referred to within the text. A correction has been made to Duet Task, sub-section Results for Each Evaluative Criterion of the Duet Task, paragraph Reflexive Interaction: The data of Reflexive Interaction show that the EG2 obtained the highest score (4.17), followed by the CG (3.33) and the EG1 (2.61); see Table 6 and Figure 7. The difference between EG2, which only used the system with reflexive interaction, and EG1, which did not use the system with reflexive interaction, is significant (p = 0.043). Therefore, it could be said that the use of MIROR-Impro can enhance the use of the reflexive behaviors: mirroring, turn-taking, and co-regulation. We observed a statistically significant correlation between the Reflexive Interaction and the total score (r = 0.937; p < 0.01), and all other evaluative criteria, with correlations ranging from r = 0.87 (p < 0.01) for Musical Quality to r = 0.92 (p < 0.01) for Musical Organization. Thus, the higher the children's use of reflexive interaction, the better their results in each criterion and in the ability to improvise. This result can support the hypothesis that reflexive interaction is a fundamental component of musical improvised dialog. However, although the differences between the CG and the Experimental Groups 1 and 2 indicate that the use of the MIROR-Impro appears to be "necessary" (CG > EG1) and "sufficient" (CG < EG2) to improve the ability to improvise, we cannot generalize these results because they are not statistically significant (t-test, comparing CG and EG1: p = 0.388; CG and EG2: p = 0.285). Finally, because the resolution of Figures 5-9 was low, they have been replaced with new figures with a higher resolution; the corrected Figures 5-9 appear below. The authors apologize for these errors and state that they do not change the scientific conclusions of the article in any way.

  • 2. Addessi, Anna Rita
    et al.
    Anelli, Filomena
    Benghi, Diber
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Child-Computer Interaction at the Beginner Stage of Music Learning: Effects of Reflexive Interaction on Children's Musical Improvisation. 2017. In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 8, article id 65. Article in journal (Refereed)
    Abstract [en]

    In this article, children's musical improvisation is investigated through the reflexive interaction paradigm. We used a particular system, the MIROR-Impro, implemented in the framework of the MIROR project (EC-FP7), which is able to reply to the child playing a keyboard with a reflexive output, mirroring (with repetitions and variations) her/his inputs. The study was conducted in a public primary school, with 47 children, aged 6-7. The experimental design used the convergence procedure, based on three sample groups, allowing us to verify whether the reflexive interaction using the MIROR-Impro is necessary and/or sufficient to improve the children's abilities to improvise. The following conditions were used as independent variables: playing only the keyboard, the keyboard with the MIROR-Impro but with a non-reflexive reply, and the keyboard with the MIROR-Impro with a reflexive reply. As dependent variables we estimated the children's ability to improvise in solos and in duets. Each child carried out a training program consisting of 5 weekly individual 12 min sessions. The control group played the complete package of independent variables; Experimental Group 1 played the keyboard and the keyboard with the MIROR-Impro with a non-reflexive reply; Experimental Group 2 played only the keyboard with the reflexive system. One week later, the children were asked to improvise a musical piece on the keyboard alone (Solo task) and in pairs with a friend (Duet task). Three independent judges assessed the Solo and the Duet tasks by means of a grid based on the TAI-Test for Ability to Improvise rating scale. The EG2, which trained only with the reflexive system, reached the highest average results, and the difference with EG1, which did not use the reflexive system, is statistically significant when the children improvise in a duet. The results indicate that, in this sample of participants, reflexive interaction alone could be sufficient to increase improvisational skills, and necessary when the children improvise in duets. However, these results are in general not statistically significant. The correlation between Reflexive Interaction and the ability to improvise is statistically significant. The results are discussed in the light of recent literature in neuroscience and music education.

  • 3. Bellec, G.
    et al.
    Elowsson, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Wolff, D.
    Weyde, T.
    A social network integrated game experiment to relate tapping to speed perception and explore rhythm reproduction. 2013. In: Proceedings of the Sound and Music Computing Conference 2013, 2013, pp. 19-26. Conference paper (Refereed)
    Abstract [en]

    During recent years, games with a purpose (GWAPs) have become increasingly popular for studying human behaviour [1-4]. However, no standardised method for web-based game experiments has been proposed so far. We present here our approach, comprising an extended version of the CaSimIR social game framework [5] for data collection, mini-games for tempo and rhythm tapping, and an initial analysis of the data collected so far. The game presented here is part of the Spot The Odd Song Out game, which is freely available for use on Facebook and on the Web. We present the GWAP method in some detail and a preliminary analysis of the data collected. We relate the tapping data to perceptual ratings obtained in previous work. The results suggest that the tapped tempo data collected in a GWAP can be used to predict perceived speed. When averaging the rhythmic performances of a group of 10 players in the second experiment, the tapping frequency shows a pattern that corresponds to the time signature of the music played. Our experience shows that more effort in design and during runtime is required than in a traditional experiment. Our experiment is still running and available online.

  • 4.
    Bisesi, Erica
    et al.
    Centre for Systematic Musicology, University of Graz, Graz, Austria.
    Friberg, Anders
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Tal, musik och hörsel, TMH.
    Parncutt, Richard
    Centre for Systematic Musicology, University of Graz, Graz, Austria.
    A Computational Model of Immanent Accent Salience in Tonal Music. 2019. In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 10, no. 317, pp. 1-19. Article in journal (Refereed)
    Abstract [en]

    Accents are local musical events that attract the attention of the listener, and can be either immanent (evident from the score) or performed (added by the performer). Immanent accents involve temporal grouping (phrasing), meter, melody, and harmony; performed accents involve changes in timing, dynamics, articulation, and timbre. In the past, grouping, metrical and melodic accents were investigated in the context of expressive music performance. We present a novel computational model of immanent accent salience in tonal music that automatically predicts the positions and saliences of metrical, melodic and harmonic accents. The model extends previous research by improving on preliminary formulations of metrical and melodic accents and introducing a new model for harmonic accents that combines harmonic dissonance and harmonic surprise. In an analysis-by-synthesis approach, model predictions were compared with data from two experiments, involving 239 and 638 sonorities, and 16 musicians and 5 experts in music theory, respectively. Average pair-wise correlations between raters were lower for metrical (0.27) and melodic accents (0.37) than for harmonic accents (0.49). In both experiments, when combining all the raters into a single measure expressing their consensus, correlations between ratings and model predictions ranged from 0.43 to 0.62. When the different accent categories were combined, correlations were higher than for separate categories (r = 0.66). This suggests that raters might use strategies different from individual metrical, melodic or harmonic accent models to mark the musical events.

  • 5. Bisesi, Erica
    et al.
    Parncutt, Richard
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    An accent-based approach to performance rendering: Music theory meets music psychology. 2011. In: Proceedings of ISPS 2011 / [ed] Aaron Williamon, Darryl Edwards, and Lee Bartel, Utrecht: The European Association of Conservatoires (AEC), 2011, pp. 27-32. Conference paper (Refereed)
    Abstract [en]

    Accents are local events that attract a listener's attention and are either evident from the score (immanent) or added by the performer (performed). Immanent accents are associated with grouping, meter, melody, and harmony. In piano music, performed accents involve changes in timing, dynamics, articulation, and pedaling; they vary in amplitude, form, and duration. Performers tend to "bring out" immanent accents by means of performed accents, which attracts the listener's attention to them. We are mathematically modeling timing and dynamics near immanent accents in a selection of Chopin Preludes using an extended version of Director Musices (DM), a software package for automatic rendering of expressive performance. We are developing DM in a new direction that allows us to relate expressive features of a performance not only to global or intermediate structural properties, but also to local events.

  • 6.
    Bresin, R.
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Emotional expression in music performance: synthesis and decoding. 1998. In: TMH-QPSR, Vol. 39, no. 4, pp. 85-94. Article in journal (Other academic)
  • 7.
    Bresin, R.
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Synthesis and decoding of emotionally expressive music performance. 1999. In: Proceedings of the IEEE 1999 Systems, Man and Cybernetics Conference - SMC'99, 1999, Vol. 4, pp. 317-322. Conference paper (Refereed)
    Abstract [en]

    A recently developed application of Director Musices (DM) is presented. The DM is a rule-based software tool for automatic music performance developed at the Speech Music and Hearing Dept. at the Royal Institute of Technology, Stockholm. It is written in Common Lisp and is available both for Windows and Macintosh. It is demonstrated that particular combinations of rules defined in the DM can be used for synthesizing performances that differ in emotional quality. Different performances of two pieces of music were synthesized so as to elicit listeners' associations to six different emotions (fear, anger, happiness, sadness, tenderness, and solemnity). Performance rules and their parameters were selected so as to match previous findings about emotional aspects of music performance. Variations of the performance variables IOI (Inter-Onset Interval), OOI (Offset-Onset Interval) and L (Sound Level) are presented for each rule-setup. In a forced-choice listening test, 20 listeners were asked to classify the performances with respect to emotions. The results showed that the listeners, with very few exceptions, recognized the intended emotions correctly. This shows that a proper selection of rules and rule parameters in DM can indeed produce a wide variety of meaningful, emotional performances, even extending the scope of the original rule definition.

  • 8.
    Bresin, Roberto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Askenfelt, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Hansen, Kjetil
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Ternström, Sten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Sound and Music Computing at KTH. 2012. In: Trita-TMH, ISSN 1104-5787, Vol. 52, no. 1, pp. 33-35. Article in journal (Other academic)
    Abstract [en]

    The SMC Sound and Music Computing group at KTH (formerly the Music Acoustics group) is part of the Department of Speech Music and Hearing, School of Computer Science and Communication. In this short report we present the current status of the group mainly focusing on its research.

  • 9.
    Bresin, Roberto
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    A multimedia environment for interactive music performance. 1997. In: TMH-QPSR, Vol. 38, no. 2-3, pp. 29-32. Article in journal (Other academic)
    Abstract [en]

    We propose a music performance tool based on the Java programming language. This software runs in any Java applet viewer (i.e. a WWW browser) and interacts with the local Midi equipment by means of a multi-task software module for Midi applications (MidiShare). Two main ideas underlie our project: one is to realise an easy, intuitive, hardware- and software-independent tool for performance, and the other is to achieve an easier development of the tool itself. At the moment there are two projects under development: a system based only on a Java applet, called Japer (Java performer), and a hybrid system based on a Java user interface and a Lisp kernel for the development of the performance tools. In this paper, the first of the two projects is presented.

  • 10.
    Bresin, Roberto
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel. KTH, Tidigare Institutioner (före 2005), Talöverföring och musikakustik.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    A multimedia environment for interactive music performance. 1997. In: Proceedings of KANSEI - The Technology of Emotion, AIMI International Workshop, 1997, pp. 64-67. Conference paper (Refereed)
  • 11.
    Bresin, Roberto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Emotion rendering in music: Range and characteristic values of seven musical variables. 2011. In: Cortex, ISSN 0010-9452, E-ISSN 1973-8102, Vol. 47, no. 9, pp. 1068-1081. Article in journal (Refereed)
    Abstract [en]

    Many studies on the synthesis of emotional expression in music performance have focused on the effect of individual performance variables on perceived emotional quality by systematically varying those variables. However, most of the studies have used a predetermined small number of levels for each variable, and the selection of these levels has often been done arbitrarily. The main aim of this research work is to improve upon existing methodologies by taking a synthesis approach. In a production experiment, 20 performers were asked to manipulate the values of 7 musical variables simultaneously (tempo, sound level, articulation, phrasing, register, timbre, and attack speed) for communicating 5 different emotional expressions (neutral, happy, scary, peaceful, sad) for each of 4 scores. The scores were compositions communicating four different emotions (happiness, sadness, fear, calmness). Emotional expressions and music scores were presented in combination and in random order for each performer, for a total of 5 x 4 stimuli. The experiment allowed for a systematic investigation of the interaction between the emotion of each score and the emotions the performers intended to express. A two-way analysis of variance (ANOVA), repeated measures, with factors emotion and score was conducted on the participants' values separately for each of the seven musical variables. There are two main results. The first is that the musical variables were manipulated in the same direction as reported in previous research on emotionally expressive music performance. The second is the identification, for each of the five emotions, of the mean values and ranges of the five musical variables tempo, sound level, articulation, register, and instrument. These values turned out to be independent of the particular score and its emotion. The results presented in this study therefore allow for both the design and control of emotionally expressive computerized musical stimuli that are more ecologically valid than stimuli without performance variations.

  • 12.
    Bresin, Roberto
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Emotional coloring of computer controlled music performance. 2000. In: Computer Music Journal, ISSN 0148-9267, E-ISSN 1531-5169, Vol. 24, no. 4, pp. 44-61. Article in journal (Refereed)
  • 13.
    Bresin, Roberto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Evaluation of computer systems for expressive music performance. 2013. In: Guide to Computing for Expressive Music Performance / [ed] Kirke, Alexis; Miranda, Eduardo R., Springer, 2013, pp. 181-203. Book chapter, part of anthology (Refereed)
    Abstract [en]

    In this chapter, we review and summarize different methods for the evaluation of CSEMPs. The main categories of evaluation methods are (1) comparisons with measurements from real performances, (2) listening experiments, and (3) production experiments. Listening experiments can be of different types. For example, in some experiments, subjects may be asked to rate a particular expressive characteristic (such as the emotion conveyed or the overall expression) or to rate the effect of a particular acoustic cue. In production experiments, subjects actively manipulate system parameters to achieve a target performance. Measures for estimating the difference between performances are discussed in relation to the objectives of the model and the objectives of the evaluation. A separate section presents and discusses Rencon (Performance Rendering Contest), a contest for comparing the expressive musical performances of the same score generated by different CSEMPs. Practical examples from previous works are presented, commented on, and analysed.

  • 14.
    Bresin, Roberto
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Expressive musical icons. 2001. In: Proceedings of the International Conference on Auditory Display - ICAD 2001, 2001, pp. 141-143. Conference paper (Refereed)
    Abstract [en]

    Recent research on the analysis and synthesis of music performance has resulted in tools for the control of the expressive content in automatic music performance [1]. These results can be relevant for applications other than performance of music by a computer. This work presents how the techniques for enhancing the expressive character in music performance can also be used in the design of sound logos, in the control of synthesis algorithms, and for achieving better ringing tones in mobile phones.

  • 15.
    Bresin, Roberto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Influence of Acoustic Cues on the Expressive Performance of Music. 2008. In: Proceedings of the 10th International Conference on Music Perception and Cognition, Sapporo, Japan, 2008. Conference paper (Refereed)
  • 16.
    Bresin, Roberto
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Rule-based emotional colouring of music performance. 2000. In: Proceedings of the International Computer Music Conference - ICMC 2000 / [ed] Zannos, I., San Francisco: ICMA, 2000, pp. 364-367. Conference paper (Refereed)
  • 17.
    Bresin, Roberto
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Software tools for musical expression. 2000. In: Proceedings of the International Computer Music Conference 2000 / [ed] Zannos, Ioannis, San Francisco, USA: Computer Music Association, 2000, pp. 499-502. Conference paper (Refereed)
  • 18.
    Bresin, Roberto
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Tal, musik och hörsel, TMH.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Dahl, Sofia
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Tal, musik och hörsel, TMH.
    Toward a new model for sound control. 2001. In: Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-01), Limerick, Ireland, December 6-8, 2001 / [ed] Fernström, M., Brazil, E., & Marshall, M., 2001, pp. 45-49. Conference paper (Refereed)
    Abstract [en]

    The control of sound synthesis is a well-known problem. This is particularly true if the sounds are generated with physical modeling techniques that typically need specification of numerous control parameters. In the present work outcomes from studies on automatic music performance are used for tackling this problem. 

  • 19.
    Bresin, Roberto
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Sundberg, Johan
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Director Musices: The KTH performance rules system. 2002. In: Proceedings of SIGMUS-46, Information Processing Society of Japan, 2002, pp. 43-48. Conference paper (Refereed)
    Abstract [en]

    Director Musices is a program that transforms notated scores into musical performances. It implements the performance rules emerging from research projects at the Royal Institute of Technology (KTH). Rules in the program model performance aspects such as phrasing, articulation, and intonation, and they operate on performance variables such as tone, inter-onset duration, amplitude, and pitch. By manipulating rule parameters, the user can act as a metaperformer controlling different features of the performance, leaving the technical execution to the computer. Different interpretations of the same piece can easily be obtained. Features of Director Musices include MIDI file input and output, rule palettes, graphical display of all performance variables (along with the notation), and user-defined performance rules. The program is implemented in Common Lisp and is available free as a stand-alone application both for Macintosh and Windows platforms. Further information, including music examples, publications, and the program itself, is located online at http://www.speech.kth.se/music/performance. This paper is a revised and updated version of a previous paper published in the Computer Music Journal in 2000 that was mainly written by Anders Friberg (Friberg, Colombo, Frydén and Sundberg, 2000).
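    For illustration only, here is a minimal sketch of the kind of rule-based score-to-performance transformation described above, written in Python rather than Common Lisp; the Note structure, rule name, and parameter values are invented for the example and are not part of Director Musices.

        from dataclasses import dataclass

        @dataclass
        class Note:
            pitch: int        # MIDI note number
            onset: float      # onset time in seconds
            duration: float   # duration in seconds
            velocity: int     # MIDI velocity (amplitude)

        def apply_phrase_arch(notes, k=0.15):
            """Toy phrasing rule: slow down and soften toward the phrase end.

            The quantity k plays the role of a user-controlled rule weight,
            analogous to how rule palettes scale individual rules.
            """
            out, t = [], 0.0
            n = len(notes)
            for i, note in enumerate(notes):
                pos = i / max(n - 1, 1)                            # 0 at phrase start, 1 at end
                dur = note.duration * (1.0 + k * pos ** 2)         # progressive ritardando
                vel = int(note.velocity * (1.0 - 0.3 * k * pos))   # slight diminuendo
                out.append(Note(note.pitch, t, dur, vel))
                t += dur                                           # onsets follow stretched durations
            return out

        # Example: render a five-note phrase with a moderate rule weight.
        phrase = [Note(60 + p, i * 0.5, 0.5, 80) for i, p in enumerate([0, 2, 4, 5, 7])]
        for note in apply_phrase_arch(phrase, k=0.3):
            print(note)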

  • 20. Camurri, A.
    et al.
    De Poli, G.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Leman, M.
    Volpe, G.
    The MEGA project: Analysis and synthesis of multisensory expressive gesture in performing art applications. 2005. In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 34, no. 1, pp. 5-21. Article in journal (Refereed)
    Abstract [en]

    This article presents a survey of the research work carried out within the framework of the European Union-IST project MEGA (Multisensory Expressive Gesture Applications, November 2000-October 2003; www.megaproject.org). First, the article introduces a layered conceptual framework for analysis and synthesis of expressive gesture. Such a framework represents the main methodological foundation upon which the MEGA project built its own research. A brief overview of the achievements of research in expressive gesture analysis and synthesis is then provided: these are the outcomes of some experiments that were carried out in order to investigate specific aspects of expressive gestural communication. The work resulted in the design and development of a collection of software libraries integrated in the MEGA System Environment (MEGASE) based on the EyesWeb open platform (www.eyesweb.org).

  • 21. Canazza, S.
    et al.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Rodà, A.
    Zanon, P.
    Expressive Director: a system for the real-time control of music performance synthesis. 2003. In: Proc. of SMAC 2003, Stockholm Music Acoustics Conference / [ed] R. Bresin, 2003, Vol. 2, pp. 521-524. Conference paper (Refereed)
    Abstract [en]

    The Expressive Director is a system allowing real-time control of music performance synthesis, in particular regarding expressive and emotional aspects. It allows a user to interact in real time while the music is playing, for example changing the emotional intent from happy to sad or from a romantic expressive style to a neutral one. The Expressive Director was designed to merge the expressiveness models developed at CSC and at KTH. The synthesis is controlled through a two-dimensional space (called "Control Space") in which the user can move the mouse pointer continuously from one expressive intention to another. Depending on the position, the system applies suitable expressive deviation profiles. The Control Space can be made to represent the Valence-Arousal space from music psychology research.

  • 22.
    Carlson, Rolf
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Granström, Björn
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Sundberg, Johan
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Speech and music performance. Parallels and contrasts. 1987. In: STL-QPSR, Vol. 28, no. 4, pp. 7-23. Article in journal (Other academic)
  • 23.
    Carlson, Rolf
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Granström, Björn
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Sundberg, Johan
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Speech and music performance: parallels and contrasts. 1989. In: Contemporary Music Review, ISSN 0749-4467, E-ISSN 1477-2256, Vol. 4, pp. 389-402. Article in journal (Refereed)
    Abstract [en]

    Speech and music performance are two important systems for interhuman communication by means of acoustic signals. These signals must be adapted to the human perceptual and cognitive systems. Hence, a comparative analysis of speech and music performances is likely to shed light on these systems, particularly regarding basic requirements for acoustic communication. Two computer programs are compared, one for text-to-speech conversion and one for note-to-tone conversion. Similarities are found in the need for placing emphasis on unexpected elements, for increasing the dissimilarities between different categories, and for flagging structural constituents. Similarities are also found in the code chosen for conveying this information, e.g. emphasis by lengthening and constituent marking by final lengthening.

  • 24.
    Dahl, S.
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    What can the body movements reveal about a musician's emotional intention? 2003. In: Proc. of SMAC 03, Stockholm Music Acoustics Conference, 2003, Vol. 2, pp. 599-602. Conference paper (Refereed)
    Abstract [en]

    Music has an intimate relationship with motion in several respects. Obviously, movements are required to play an instrument, but musicians also move their bodies in ways not directly related to note production. In order to explore to what extent emotional intentions can be conveyed through musicians' movements alone, video recordings were made of a marimba player performing the same piece with the intentions Happy, Sad, Angry and Fearful. 20 observers watched the video clips, without sound, and rated both the perceived emotional content and movement cues. The videos were presented in four viewing conditions, showing different parts of the player. The observers' ratings of the intended emotions showed that the intentions Happiness, Sadness and Anger were well communicated, while Fear was not. The identification of the intended emotion was only slightly influenced by the viewing condition, although in some cases the head was important. The movement ratings indicate that there are cues that the observers use to distinguish between intentions, similar to the cues found for audio signals in music performance. Anger was characterized by large, fast, uneven, and jerky movements; Happiness by large and somewhat fast movements; and Sadness by small, slow, even and smooth movements.

  • 25. Dahl, Sofia
    et al.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Expressiveness of a marimba player's body movements. 2004. In: TMH-QPSR, ISSN 1104-5787, Vol. 46, no. 1, pp. 75-86. Article in journal (Other academic)
    Abstract [en]

    Musicians often make gestures and move their bodies expressing their musical intention. This visual information provides a separate channel of communication to the listener. In order to explore to what extent emotional intentions can be conveyed through musicians’ movements, video recordings were made of a marimba player performing the same piece with four different intentions, Happy, Sad, Angry and Fearful. Twenty subjects were asked to rate the silent video clips with respect to perceived emotional content and movement qualities. The video clips were presented in different viewing conditions, showing different parts of the player. The results showed that the intentions Happiness, Sadness and Anger were well communicated, while Fear was not. The identification of the intended emotion was only slightly influenced by viewing condition. The movement ratings indicated that there were cues that the observers used to distinguish between intentions, similar to cues found for audio signals in music performance.

  • 26.
    Dahl, Sofia
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Expressiveness of musician's body movements in performances on marimba. 2004. In: Gesture-Based Communication in Human-Computer Interaction / [ed] Camurri, A.; Volpe, G., Genoa: Springer Verlag, 2004, pp. 479-486. Conference paper (Refereed)
    Abstract [en]

    To explore to what extent emotional intentions can be conveyed through musicians' movements, video recordings were made of a marimba player performing the same piece with the intentions Happy, Sad, Angry and Fearful. 20 subjects were presented with the video clips, without sound, and asked to rate both the perceived emotional content and the movement qualities. The video clips were presented in different conditions, showing the player to different extents. The observers' ratings for the intended emotions confirmed that the intentions Happiness, Sadness and Anger were well communicated, while Fear was not. Identification of the intended emotion was only slightly influenced by the viewing condition. The movement ratings indicated that there were cues that the observers used to distinguish between intentions, similar to cues found for audio signals in music performance.

  • 27.
    Dahl, Sofia
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Visual perception of expressiveness in musicians' body movements. 2007. In: Music Perception, ISSN 0730-7829, E-ISSN 1533-8312, Vol. 24, no. 5, pp. 433-454. Article in journal (Refereed)
    Abstract [en]

    Musicians often make gestures and move their bodies expressing a musical intention. In order to explore to what extent emotional intentions can be conveyed through musicians' movements, participants watched and rated silent video clips of musicians performing the emotional intentions Happy, Sad, Angry, and Fearful. In the first experiment participants rated emotional expression and movement character of marimba performances. The results showed that the intentions Happiness, Sadness, and Anger were well communicated, whereas Fear was not. Showing selected parts of the player only slightly influenced the identification of the intended emotion. In the second experiment participants rated the same emotional intentions and movement character for performances on bassoon and soprano saxophone. The ratings from the second experiment confirmed that Fear was not communicated whereas Happiness, Sadness, and Anger were recognized. The rated movement cues were similar in the two experiments and were analogous to their audio counterpart in music performance.

  • 28.
    Eerola, Tuomas
    et al.
    Department of Music, University of Jyväskylä, Jyväskylä, Finland .
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Emotional expression in music: Contribution, linearity, and additivity of primary musical cues. 2013. In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 4, article id 487. Article in journal (Refereed)
    Abstract [en]

    The aim of this study is to manipulate musical cues systematically in order to determine which aspects of music contribute to emotional expression, whether these cues operate in an additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues, and their ranked importance was established by multiple regression. The most important cue was mode, followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77-89% of the variance in ratings. Quadratic encoding of cues led to minor but significant increases in model fit (0-8%). Finally, the interactions between the cues were non-existent, suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music.
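    A hedged sketch of this type of analysis, comparing a linear (additive) cue model against one with quadratic terms; the ratings and cue codings below are random placeholders, and only the cue names are taken from the abstract above.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(0)
        cues = ["mode", "tempo", "dynamics", "articulation", "timbre", "register"]

        # Placeholder design matrix: 200 stimuli, each cue coded as one of a few levels.
        X = rng.integers(0, 3, size=(200, len(cues))).astype(float)
        # Placeholder "happiness" ratings generated from a linear combination plus noise.
        y = X @ rng.normal(size=len(cues)) + rng.normal(scale=0.5, size=200)

        # Additive linear model of the cue contributions.
        lin = LinearRegression().fit(X, y)
        print("linear R^2:", round(lin.score(X, y), 3))

        # Add squared and interaction terms to test for non-linearity and interactions.
        X2 = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
        quad = LinearRegression().fit(X2, y)
        print("degree-2 R^2:", round(quad.score(X2, y), 3))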

  • 29. Einarsson, Anna
    et al.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Using Singing Voice Vibrato as a Control Parameter in a Chamber Opera. 2015. In: Proceedings of the International Computer Music Conference (ICMC), International Computer Music Association, 2015. Conference paper (Refereed)
    Abstract [en]

    Even though a vast number of tools exist for real-time voice analysis, only a limited number of them focus specifically on the singing voice, and even fewer on features seen from a perceptual viewpoint. This paper presents a first step towards a multi-feature analysis tool for the singing voice, developed in a composer-researcher collaboration. A new method is used for extracting the vibrato extent of a singer, which is then mapped to a sound generation module. This was applied in the chamber opera Ps! I will be home soon. The experiences of the singer performing the part with the vibrato detection were collected qualitatively and analyzed through Interpretative Phenomenological Analysis (IPA). The results revealed some interesting mixed feelings of both comfort and uncertainty in the interactive setup.

  • 30.
    Elowsson, Anders
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Algorithmic Composition of Popular Music. 2012. In: Proceedings of the 12th International Conference on Music Perception and Cognition and the 8th Triennial Conference of the European Society for the Cognitive Sciences of Music / [ed] Emilios Cambouropoulos, Costas Tsourgas, Panayotis Mavromatis, Costas Pastiadis, 2012, pp. 276-285. Conference paper (Refereed)
    Abstract [en]

    Human composers have used formal rules for centuries to compose music, and an algorithmic composer, composing without the aid of human intervention, can be seen as an extension of this technique. An algorithmic composer of popular music (a computer program) has been created with the aim of better understanding how the composition process can be formalized and, at the same time, of better understanding popular music in general. With the aid of statistical findings, a theoretical framework for the relevant methods is presented. The concept of Global Joint Accent Structure is introduced as a way of understanding how melody and rhythm interact to help the listener form expectations about future events. Methods of the program are presented with references to supporting statistical findings. The algorithmic composer creates a rhythmic foundation (drums), a chord progression, a phrase structure and finally the melody. The main focus has been the composition of the melody. The melodic generation is based on ten different musical aspects, which are described. The resulting output was evaluated in a formal listening test where 14 computer compositions were compared with 21 human compositions. Results indicate a slightly lower score for the computer compositions, but the differences were not statistically significant.

  • 31.
    Elowsson, Anders
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Long-term Average Spectrum in Popular Music and its Relation to the Level of the Percussion. 2017. In: AES 142nd Convention, Berlin, Germany, 2017. Conference paper (Refereed)
    Abstract [en]

    The spectral distribution of music audio has an important influence on listener perception, but large-scale characterizations are lacking. Therefore, the long-term average spectrum (LTAS) was analyzed for a large dataset of popular music. The mean LTAS was computed, visualized, and then approximated with two quadratic fittings. The fittings were subsequently used to derive the spectrum slope. By applying harmonic/percussive source separation, the relationship between LTAS and percussive prominence was investigated. A clear relationship was found; tracks with more percussion have a relatively higher LTAS in the bass and high frequencies. We show how this relationship can be used to improve targets in automatic equalization. Furthermore, we assert that variations in LTAS between genres are mainly a side-effect of percussive prominence.
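    As a rough, hedged illustration (not the paper's exact procedure), an LTAS and a simple percussive-prominence measure could be computed for a single track along these lines, assuming the librosa library and a hypothetical local file path:

        import numpy as np
        import librosa

        # Hypothetical input file; any audio that librosa can read.
        y, sr = librosa.load("track.wav", sr=44100, mono=True)

        # Long-term average spectrum: average the magnitude spectrogram over time, in dB.
        S = np.abs(librosa.stft(y, n_fft=4096, hop_length=1024))
        ltas_db = librosa.amplitude_to_db(S.mean(axis=1), ref=np.max)

        # Harmonic/percussive source separation, then an energy ratio as a crude
        # stand-in for percussive prominence.
        y_harm, y_perc = librosa.effects.hpss(y)
        perc_ratio = np.sum(y_perc ** 2) / (np.sum(y_harm ** 2) + np.sum(y_perc ** 2))

        print("LTAS bins:", ltas_db.shape[0], "percussive energy ratio:", round(float(perc_ratio), 3))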

  • 32.
    Elowsson, Anders
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Modeling the perception of tempo. 2015. In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 137, no. 6, pp. 3163-3177. Article in journal (Refereed)
    Abstract [en]

    A system is proposed in which rhythmic representations are used to model the perception of tempo in music. The system can be understood as a five-layered model, where representations are transformed into higher-level abstractions in each layer. First, source separation is applied (Audio Level), onsets are detected (Onset Level), and interonset relationships are analyzed (Interonset Level). Then, several high-level representations of rhythm are computed (Rhythm Level). The periodicity of the music is modeled by the cepstroid vector, the periodicity of an interonset interval (IOI) histogram. The pulse strength for plausible beat length candidates is defined by computing the magnitudes in different IOI histograms. The speed of the music is modeled as a continuous function on the basis of the idea that such a function corresponds to the underlying perceptual phenomena, and it seems to effectively reduce octave errors. By combining the rhythmic representations in a logistic regression framework, the tempo of the music is finally computed (Tempo Level). The results are the highest reported in a formal benchmarking test (2006-2013), with a P-Score of 0.857. Furthermore, the highest results so far are reported for two widely adopted test sets, with an Acc1 of 77.3% and 93.0% for the Songs and Ballroom datasets.
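    A minimal sketch, under stated assumptions, of the interonset-interval histogram idea that such rhythm-level representations build on; this is not the published five-layer system, only the IOI-histogram step, assuming librosa for onset detection and a hypothetical input file:

        import numpy as np
        import librosa

        y, sr = librosa.load("clip.wav", mono=True)   # hypothetical input clip

        # Onset Level: detected onset times in seconds.
        onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

        # Interonset Level: histogram of successive interonset intervals (IOIs),
        # a simple stand-in for the periodicity representations described above.
        iois = np.diff(onsets)
        hist, edges = np.histogram(iois, bins=np.arange(0.05, 4.0, 0.02))

        # A naive beat-length guess: the most populated IOI bin.
        if hist.any():
            beat = edges[np.argmax(hist)]
            print("most common IOI ~ %.2f s (~%.0f BPM)" % (beat, 60.0 / beat))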

  • 33.
    Elowsson, Anders
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Modelling Perception of Speed in Music Audio. 2013. In: Proceedings of the Sound and Music Computing Conference 2013, 2013, pp. 735-741. Conference paper (Refereed)
    Abstract [en]

    One of the major parameters in music is the overall speed of a musical performance. Speed is often associated with tempo, but other factors such as note density (onsets per second) seem to be important as well. In this study, a computational model of speed in music audio has been developed using a custom set of rhythmic features. The original audio is first separated into a harmonic part and a percussive part, and onsets are extracted separately from the different layers. The characteristics of each onset are determined based on frequency content as well as perceptual salience, using a clustering approach. Using these separated onsets, a set of eight features, including a tempo estimate, is defined, specifically designed for modelling perceived speed. In a previous study, 20 listeners rated the speed of 100 ringtones consisting mainly of popular songs, which had been converted from MIDI to audio. The ratings were used in linear regression and PLS regression in order to evaluate the validity of the model as well as to find appropriate features. The computed audio features were able to explain about 90% of the variability in the listener ratings.

  • 34.
    Elowsson, Anders
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Predicting the perception of performed dynamics in music audio with ensemble learning. 2017. In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 141, no. 3, pp. 2224-2242. Article in journal (Refereed)
    Abstract [en]

    By varying the dynamics in a musical performance, the musician can convey structure and different expressions. Spectral properties of most musical instruments change in a complex way with the performed dynamics, but dedicated audio features for modeling this parameter are lacking. In this study, feature extraction methods were developed to capture relevant attributes related to spectral characteristics and spectral fluctuations, the latter through a sectional spectral flux. Previously, ground truth ratings of performed dynamics had been collected by asking listeners to rate how soft/loud the musicians played in a set of audio files. The ratings, averaged over subjects, were used to train three different machine learning models, using the audio features developed for the study as input. The highest result was produced by an ensemble of multilayer perceptrons with an R2 of 0.84. This result seems to be close to the upper bound, given the estimated uncertainty of the ground truth data. The result is well above that of individual human listeners in the previous listening experiment, and on par with the performance achieved from the average rating of six listeners. Features were analyzed with a factorial design, which highlighted the importance of source separation in the feature extraction.
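    A hedged sketch of the ensemble-averaging idea only (not the paper's feature set, data, or architecture), using scikit-learn multilayer perceptrons on random placeholder features and ratings:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 20))                               # placeholder audio features per excerpt
        y = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=500)   # placeholder dynamics ratings

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Ensemble of MLPs differing only in random initialization; predictions are averaged.
        models = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=s).fit(X_tr, y_tr)
                  for s in range(10)]
        pred = np.mean([m.predict(X_te) for m in models], axis=0)

        ss_res = np.sum((y_te - pred) ** 2)
        ss_tot = np.sum((y_te - y_te.mean()) ** 2)
        print("ensemble R^2 on held-out data:", round(1 - ss_res / ss_tot, 3))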

  • 35.
    Elowsson, Anders
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Madison, Guy
    Paulin, Johan
    Modelling the Speed of Music Using Features from Harmonic/Percussive Separated Audio. 2013. In: Proceedings of the 14th International Society for Music Information Retrieval Conference, 2013, pp. 481-486. Conference paper (Refereed)
    Abstract [en]

    One of the major parameters in music is the overall speed of a musical performance. In this study, a computational model of speed in music audio has been developed using a custom set of rhythmic features. Speed is often associated with tempo, but as shown in this study, factors such as note density (onsets per second) and spectral flux are important as well. The original audio was first separated into a harmonic part and a percussive part and the features were extracted separately from the different layers. In previous studies, listeners had rated the speed of 136 songs, and the ratings were used in a regression to evaluate the validity of the model as well as to find appropriate features. The final models, consisting of 5 or 8 features, were able to explain about 90% of the variation in the training set, with little or no degradation for the test set.

  • 36.
    Elowsson, Anders
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Schön, Ragnar
    KTH.
    Höglund, Matts
    KTH.
    Zea, Elías
    KTH, Skolan för teknikvetenskap (SCI), Farkost och flyg, MWL Marcus Wallenberg Laboratoriet.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Estimation of vocal duration in monaural mixtures. 2014. In: Proceedings - 40th International Computer Music Conference, ICMC 2014 and 11th Sound and Music Computing Conference, SMC 2014 - Music Technology Meets Philosophy: From Digital Echos to Virtual Ethos, National and Kapodistrian University of Athens, 2014, pp. 1172-1177. Conference paper (Refereed)
    Abstract [en]

    In this study, the task of vocal duration estimation in monaural music mixtures is explored. We show how presently available algorithms for source separation and predominant f0 estimation can be used as a front end from which features can be extracted. A large set of features is presented, devised to connect different vocal cues to the presence of vocals. Two main cues are utilized; the voice is neither stable in pitch nor in timbre. We evaluate the performance of the model by estimating the length of the vocal regions of the mixtures. To facilitate this, a new set of annotations to a widely adopted data set is developed and made available to the community. The proposed model is able to explain about 78 % of the variance in vocal region length. In a classification task, where the excerpts are classified as either vocal or non-vocal, the model has an accuracy of about 0.94.

  • 37.
    Fabiani, Marco
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    A prototype system for rule-based expressive modifications of audio recordings. 2007. In: Proc. of the Int. Symp. on Performance Science 2007, Porto, Portugal: AEC (European Conservatories Association), 2007, pp. 355-360. Conference paper (Refereed)
    Abstract [en]

    A prototype system is described that aims to modify a musical recording in an expressive way using a set of performance rules controlling tempo, sound level and articulation. The audio signal is aligned with an enhanced score file containing performance rules information. A time-frequency transformation is applied, and the peaks in the spectrogram, representing the harmonics of each tone, are tracked and associated with the corresponding note in the score. New values for tempo, note lengths and sound levels are computed based on rules and user decisions. The spectrogram is modified by adding, subtracting and scaling spectral peaks to change the original tone’s length and sound level. For tempo variations, a time scale modification algorithm is integrated in the time domain re-synthesis process. The prototype is developed in Matlab. An intuitive GUI is provided that allows the user to choose parameters, listen and visualize the audio signals involved and perform the modifications. Experiments have been performed on monophonic and simple polyphonic recordings of classical music for piano and guitar.

  • 38.
    Fabiani, Marco
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Expressive modifications of musical audio recordings: preliminary results. 2007. In: Proc. of the 2007 Int. Computer Music Conf. (ICMC07), Copenhagen, Denmark: The International Computer Music Association and Re:New, 2007, pp. 21-24. Conference paper (Refereed)
    Abstract [en]

    A system is described that aims to modify the performance of a musical recording (classical music) by changing the basic performance parameters tempo, sound level and tone duration. The input audio file is aligned with the corresponding score, which also contains extra information defining rule-based modifications of these parameters. The signal is decomposed using analysis-synthesis techniques to separate and modify each tone independently. The user can control the performance by changing the quantity of performance rules or by directly modifying the parameter values. A prototype Matlab implementation of the system performs expressive tempo and articulation modifications of monophonic and simple polyphonic audio recordings.

  • 39.
    Fabiani, Marco
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Influence of pitch, loudness, and timbre on the perception of instrument dynamics. 2011. In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 130, no. 4, pp. EL193-EL199. Article in journal (Refereed)
    Abstract [en]

    The effect of variations in pitch, loudness, and timbre on the perception of the dynamics of isolated instrumental tones is investigated. A full factorial design was used in a listening experiment. The subjects were asked to indicate the perceived dynamics of each stimulus on a scale from pianissimo to fortissimo. Statistical analysis showed that for the instruments included (i.e., clarinet, flute, piano, trumpet, and violin) timbre and loudness had equally large effects, while pitch was relevant mostly for the first three. The results confirmed our hypothesis that loudness alone is not a reliable estimate of the dynamics of musical tones.

  • 40.
    Fabiani, Marco
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Rule-based expressive modifications of tempo in polyphonic audio recordings2008In: Computer Music Modeling and Retrieval: Sense of Sounds / [ed] Kronland-Martinet R; Ystad S; Jensen K, Berlin: Springer-Verlag, 2008, Vol. 4969, pp. 288-302Conference paper (Refereed)
    Abstract [en]

    This paper describes a few aspects of a system for expressive, rule-based modifications of audio recordings regarding tempo, dynamics and articulation. The input audio signal is first aligned with a score containing extra information on how to modify a performance. The signal is then transformed into the time-frequency domain. Each played tone is identified using partial tracking and the score information. Articulation and dynamics are changed by modifying the length and content of the partial tracks. The focus here is on the tempo modification, which is done using a combination of time-frequency techniques and phase reconstruction. Preliminary results indicate that the accuracy of the tempo modification is on average 8.2 ms when comparing Inter Onset Intervals in the resulting signal with the desired ones. Possible applications of such a system are in music pedagogy, basic perception research, and interactive music systems.
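    The 8.2 ms figure above is a mean deviation between Inter Onset Intervals (IOIs) in the modified signal and the desired ones. A minimal sketch of that comparison is given below; the function name and the onset times are hypothetical and only illustrate how such an accuracy value can be computed.

        # Minimal sketch (illustrative, not the paper's evaluation code):
        # mean absolute IOI deviation, in milliseconds, between the onsets of
        # the modified performance and the desired (target) onsets.
        import numpy as np

        def mean_ioi_error_ms(onsets_result_s, onsets_target_s):
            """Both arguments are onset times in seconds for the same notes."""
            ioi_result = np.diff(np.asarray(onsets_result_s, dtype=float))
            ioi_target = np.diff(np.asarray(onsets_target_s, dtype=float))
            return float(np.mean(np.abs(ioi_result - ioi_target)) * 1000.0)

        # Hypothetical onset times (s): result vs. target.
        print(mean_ioi_error_ms([0.00, 0.52, 1.01, 1.49], [0.00, 0.50, 1.00, 1.50]))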

  • 41.
    Fabiani, Marco
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Systems for Interactive Control of Computer Generated Music Performance2013In: Guide to Computing for Expressive Music Performance / [ed] Kirke, A., & Miranda, E., Springer Berlin/Heidelberg, 2013, pp. 49-73Chapter in book, part of anthology (Refereed)
    Abstract [en]

    This chapter is a literature survey of systems for real-time interactive control of automatic expressive music performance. A classification is proposed based on two initial design choices: the music material to interact with (i.e., MIDI or audio recordings) and the type of control (i.e., direct control of the low-level parameters such as tempo, intensity, and instrument balance or mapping from high-level parameters, such as emotions, to low-level parameters). Their pros and cons are briefly discussed. Then, a generic approach to interactive control is presented, comprising four steps: control data collection and analysis, mapping from control data to performance parameters, modification of the music material, and audiovisual feedback synthesis. Several systems are then described, focusing on different technical and expressive aspects. For many of the surveyed systems, a formal evaluation is missing. Possible methods for the evaluation of such systems are finally discussed.
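    Purely as an illustration of the four-step approach summarized above, the skeleton below strings the steps together as plain functions. The data class, the gesture feature, and all numeric coefficients are assumptions made for the sketch, not taken from any of the surveyed systems.

        # Minimal sketch of the generic pipeline: control data analysis ->
        # mapping to performance parameters -> modification -> feedback.
        from dataclasses import dataclass

        @dataclass
        class PerformanceParams:
            tempo_scale: float   # 1.0 = nominal tempo
            level_db: float      # overall sound-level offset in dB
            legato: float        # 0 = staccato, 1 = fully legato

        def analyze_control(raw_gesture):
            # Step 1: reduce raw control data to a high-level feature (here: "energy").
            energy = sum(abs(v) for v in raw_gesture) / max(len(raw_gesture), 1)
            return min(max(energy, 0.0), 1.0)

        def map_to_params(energy):
            # Step 2: map the high-level feature to low-level performance parameters.
            return PerformanceParams(tempo_scale=0.8 + 0.4 * energy,
                                     level_db=-6.0 + 12.0 * energy,
                                     legato=1.0 - 0.5 * energy)

        def modify_material(params):
            # Step 3: apply the parameters to the music material (stub).
            return f"render score/audio with {params}"

        def feedback(rendered):
            # Step 4: audiovisual feedback to the user (stub).
            print(rendered)

        feedback(modify_material(map_to_params(analyze_control([0.2, -0.4, 0.6]))))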

  • 42. Finkel, Sebastian
    et al.
    Veit, Ralf
    Lotze, Martin
    Friberg, Anders
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Tal, musik och hörsel, TMH.
    Vuust, Peter
    Soekadar, Surjo
    Birbaumer, Niels
    Kleber, Boris
    Intermittent theta burst stimulation over right somatosensory larynx cortex enhances vocal pitch‐regulation in nonsingers2019In: Human Brain Mapping, ISSN 1065-9471, E-ISSN 1097-0193Article in journal (Refereed)
    Abstract [en]

    While the significance of auditory cortical regions for the development and maintenance of speech motor coordination is well established, the contribution of somatosensory brain areas to learned vocalizations such as singing is less well understood. To address these mechanisms, we applied intermittent theta burst stimulation (iTBS), a facilitatory repetitive transcranial magnetic stimulation (rTMS) protocol, over right somatosensory larynx cortex (S1) and a nonvocal dorsal S1 control area in participants without singing experience. A pitch‐matching singing task was performed before and after iTBS to assess corresponding effects on vocal pitch regulation. When participants could monitor auditory feedback from their own voice during singing (Experiment I), no difference in pitch‐matching performance was found between iTBS sessions. However, when auditory feedback was masked with noise (Experiment II), only larynx‐S1 iTBS enhanced pitch accuracy (50–250 ms after sound onset) and pitch stability (>250 ms after sound onset until the end). Results indicate that somatosensory feedback plays a dominant role in vocal pitch regulation when acoustic feedback is masked. The acoustic changes moreover suggest that right larynx‐S1 stimulation affected the preparation and involuntary regulation of vocal pitch accuracy, and that kinesthetic‐proprioceptive processes play a role in the voluntary control of pitch stability in nonsingers. Together, these data provide evidence for a causal involvement of right larynx‐S1 in vocal pitch regulation during singing.
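    The distinction drawn above between pitch accuracy (early window) and pitch stability (late window) can be expressed with simple summary statistics. The sketch below is an assumption made for illustration (mean absolute error in cents for accuracy, standard deviation in cents for stability, split at 250 ms); it is not necessarily the exact analysis used in the paper, and all names and data are hypothetical.

        # Minimal sketch: accuracy and stability of a sung tone's f0 track
        # relative to a target pitch, split at 250 ms after onset.
        import numpy as np

        def pitch_accuracy_stability(f0_hz, target_hz, frame_rate_hz, split_s=0.25):
            cents = 1200.0 * np.log2(np.asarray(f0_hz, dtype=float) / target_hz)
            split = int(split_s * frame_rate_hz)
            accuracy = float(np.mean(np.abs(cents[:split])))   # early window: accuracy
            stability = float(np.std(cents[split:]))           # late window: stability
            return accuracy, stability

        # Hypothetical f0 track (100 frames at 100 frames/s) converging on a 220 Hz target.
        track = 220.0 * (1.0 + 0.02 * np.exp(-np.arange(100) / 10.0))
        print(pitch_accuracy_stability(track, target_hz=220.0, frame_rate_hz=100.0))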

  • 43.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    A fuzzy analyzer of emotional expression in music performance and body motion2005In: Proceedings of Music and Music Science, Stockholm 2004 / [ed] Brunson, W.; Sundberg, J., 2005Conference paper (Refereed)
  • 44.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    A Quantitative Rule System for Musical Performance1995Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    A rule system is described that translates an input score file to a musical performance. The rules model different principles of interpretation used by real musicians, such as phrasing, punctuation, harmonic and melodic structure, micro timing, accents, intonation, and final ritard. These rules have been applied primarily to Western classical music but also to contemporary music, folk music and jazz. The rules consider mainly melodic aspects, i.e., they look primarily at pitch and duration relations, disregarding repetitive rhythmic patterns. A complete description and discussion of each rule is presented. The effect of each rule applied to a music example is demonstrated on the CD-ROM. A complete implementation is found in the program Director Musices, also included on the CD-ROM.

    The smallest deviations that can be perceived in a musical performance, i.e., the JND, were measured in three experiments. In one experiment the JND for displacement of a single tone in an isochronous sequence was found to be 6 ms for short tones and 2.5% for tones longer than 250 ms. In two other experiments the JND for rule-generated deviations was measured. Rather similar values were found despite different musical situations, provided that the deviations were expressed in terms of the maximum span, MS. This is a measure of a parameter's maximum deviation from a deadpan performance in a specific music excerpt. The JND values obtained were typically 3-4 times higher than the corresponding JNDs previously observed in psychophysical experiments.

    Evaluation, i.e., the testing of the generality of the rules and the principles they reflect, has been carried out using four different methods: (1) listening tests with fixed quantities, (2) preference tests where each subject adjusted the rule quantity, (3) tracing of the rules in measured performances, and (4) matching of rule quantities to measured performances. The results confirmed the validity of many rules and suggested modifications of others that were later realized.

    Music is often described by means of motion words. The origin of such analogies was pursued in three experiments. The force envelope of the foot while walking or dancing was transferred to sound level envelopes of tones. Sequences of such tones, repeated at different tempi, were perceived by expert listeners as possessing motion character, particularly when presented at the original walking tempo. Also, some of the character of the original walking or dancing could be mediated to the listeners by means of these tone sequences. These results suggest that musical expressivity might be increased in rule-generated performances if rules reflecting locomotion patterns are implemented.
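    The JND figures quoted in the abstract above (6 ms for short tones, 2.5% of the duration for tones longer than 250 ms) amount to a simple piecewise function; the crossover handling in the sketch below is an assumption made for illustration, not part of the thesis.

        # Minimal sketch: approximate JND for displacing one tone in an
        # isochronous sequence, following the figures quoted in the abstract.
        def displacement_jnd_ms(tone_duration_ms: float) -> float:
            return 6.0 if tone_duration_ms <= 250.0 else 0.025 * tone_duration_ms

        # Hypothetical examples: a short tone and a 400 ms tone.
        print(displacement_jnd_ms(120.0), displacement_jnd_ms(400.0))   # 6.0, 10.0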

  • 45.
    Friberg, Anders
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Tal, musik och hörsel, TMH.
    Commentary on Polak How short is the shortest metric subdivision?2017In: Empirical Musicology Review, ISSN 1559-5749, E-ISSN 1559-5749, Vol. 12, no. 3-4, pp. 227-228Article in journal (Other academic)
    Abstract [en]

    This commentary relates to the target paper by Polak, which addresses the shortest metric subdivision by presenting measurements of West African drum music. It provides new evidence that the perceptual lower limit of tone duration is within the range 80-100 ms. Using fairly basic measurement techniques in combination with a musical analysis of the content, the original results in this study represent a valuable addition to the literature. Considering the relevance for music listening, further research would be valuable for determining and understanding the nature of this perceptual limit.

  • 46.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Digital audio emotions: An overview of computer analysis and synthesis of emotions in music2008In: Proc. of the 11th Int. Conference on Digital Audio Effects (DAFx-08), Espoo, Finland, 2008, pp. 1-6Conference paper (Refereed)
    Abstract [en]

    Research on emotions and music has increased substantially in recent years. Emotional expression is one of the most important aspects of music and has been shown to be reliably communicated to the listener, given a restricted set of emotion categories. From these results it is evident that automatic analysis and synthesis systems can be constructed. In this paper, general aspects of the analysis and synthesis of emotional expression are discussed, and prototype applications are described.

  • 47.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Generative Rules for Music Performance: A Formal Description of a Rule System1991In: Computer music journal, ISSN 0148-9267, E-ISSN 1531-5169, Vol. 15, no. 2, pp. 56-71Article in journal (Refereed)
  • 48.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Home conducting: Control the overall musical expression with gestures2005In: Proceedings of the 2005 International Computer Music Conference, San Francisco: International Computer Music Association, 2005, pp. 479-482Conference paper (Refereed)
    Abstract [en]

    In previous computer systems for "conducting" a score, the control is usually limited to tempo and overall dynamics. We suggest a home conducting system allowing indirect control of the expressive musical details at the note level. In this system, the expressive content of human gestures is mapped to semantic expressive descriptions. These descriptions are then mapped to performance rule parameters using a real-time version of the KTH rule system for music performance. The resulting system is intuitive and easy to use, even for people without formal musical education, making it a tool for the listener rather than the professional performer.
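    A minimal sketch of the second mapping stage described above (semantic descriptions to performance parameters) is given below. The descriptor set and all numeric values are illustrative placeholders and do not reproduce the actual mapping of the KTH rule system.

        # Minimal sketch (hypothetical values): look up low-level performance
        # parameters for a semantic expressive description derived from gestures.
        SEMANTIC_TO_PARAMS = {
            "happy":  {"tempo_scale": 1.15, "level_db": +3.0, "legato": 0.6},
            "sad":    {"tempo_scale": 0.85, "level_db": -4.0, "legato": 1.0},
            "angry":  {"tempo_scale": 1.20, "level_db": +6.0, "legato": 0.4},
            "tender": {"tempo_scale": 0.90, "level_db": -6.0, "legato": 0.9},
        }

        def params_for(description: str) -> dict:
            """Return tempo/level/articulation settings for a semantic label."""
            return SEMANTIC_TO_PARAMS[description]

        print(params_for("tender"))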

  • 49.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Matching the rule parameters of PHRASE ARCH to performances of "Träumerei": a preliminary study1995In: STL-QPSR, Vol. 36, no. 2-3, pp. 063-070Article in journal (Other academic)
    Abstract [en]

    In music performance a basic principle is the marking of phrases, which often seems to be achieved by means of tone durations. In our grammar for music performance the rule PHRASE ARCH has been formulated to model this effect. A technical description of the rule is presented. Also presented is an attempt to match the different parameters of this rule to the duration data from 28 performances of the first 9 bars of Robert Schumann's Träumerei as measured by Bruno Repp (1992). The optimisation was based on relative duration measured in percent. On average 44% of the total variation was accounted for by PHRASE ARCH. The discrepancies were mostly at the note level and were largely associated with small musical gestures.
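    As a rough illustration of the kind of matching involved, the sketch below fits a simple parabolic arch to relative tone durations (in percent) over a phrase and reports the variance accounted for. The parabolic shape and the example data are assumptions made for the sketch; the original study optimised the parameters of the actual PHRASE ARCH rule against Repp's measurements.

        # Minimal sketch: least-squares fit of a parabolic phrase arch to
        # relative durations, plus the fraction of variance accounted for (R^2).
        import numpy as np

        def phrase_arch_fit(rel_dur_percent):
            y = np.asarray(rel_dur_percent, dtype=float)
            x = np.linspace(-1.0, 1.0, len(y))      # normalized position within the phrase
            coeffs = np.polyfit(x, y, deg=2)        # arch: lengthening toward the boundaries
            y_hat = np.polyval(coeffs, x)
            ss_res = float(np.sum((y - y_hat) ** 2))
            ss_tot = float(np.sum((y - y.mean()) ** 2))
            return y_hat, 1.0 - ss_res / ss_tot

        # Hypothetical relative durations (%) across a nine-note phrase.
        _, r2 = phrase_arch_fit([104, 101, 99, 98, 98, 99, 101, 103, 108])
        print(round(r2, 2))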

  • 50.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Matching the rule parameters of Phrase arch to performances of "Träumerei": A preliminary study1995In: Proceedings of the KTH symposium on Grammars for music performance May 27, 1995, 1995, pp. 37-44Conference paper (Other academic)