  • 1.
    Bjurling, Johan
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Timing in piano music: Testing a model of melody lead (2008). In: Proc. of the 10th International Conference on Music Perception and Cognition, Sapporo, Japan, 2008. Conference paper (Refereed)
  • 2.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Real-time visualization of musical expression (2004). In: Proceedings of Network of Excellence HUMAINE Workshop "From Signals to Signs of Emotion and Vice Versa", Santorini, Greece, Institute of Communication and Computer Systems, National Technical University of Athens, 2004, p. 19-23. Conference paper (Refereed)
    Abstract [en]

    A system for real-time feedback of expressive music performance is presented. The feedback is provided by using a graphical interface where acoustic cues are presented in an intuitive fashion. The graphical interface presents on the computer screen a three-dimensional object with continuously changing shape, size, position, and colour. Some of the acoustic cues were associated with the shape of the object, others with its position. For instance, articulation was associated with shape: staccato corresponded to an angular shape and legato to a rounded shape. The emotional expression resulting from the combination of cues was mapped in terms of the colour of the object (e.g., sadness/blue). To determine which colours were most suitable for each emotion, a test was run. Subjects rated how well each of 8 colours corresponds to each of 12 music performances expressing different emotions.
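
    The cue-to-visuals idea described above (articulation driving shape, the estimated emotion driving colour) can be illustrated with a small rule-based sketch. The Python snippet below is purely illustrative: the cue names, thresholds and colour table are assumptions, not the actual mapping used in the paper.

```python
# Hypothetical sketch of a cue-to-visuals mapping for real-time feedback.
# Cue names, thresholds and colours are illustrative, not taken from the paper.

def cues_to_visuals(articulation: float, sound_level: float, emotion: str) -> dict:
    """Map acoustic cues to shape, size and colour of a feedback object.

    articulation: 0.0 (staccato) .. 1.0 (legato)
    sound_level:  normalised 0.0 .. 1.0
    emotion:      label estimated from the combination of cues
    """
    # Articulation controls shape: angular for staccato, rounded for legato.
    shape = "rounded" if articulation > 0.5 else "angular"

    # Sound level controls the size of the object.
    size = 0.2 + 0.8 * sound_level

    # The estimated emotion selects the colour (e.g., sadness -> blue).
    emotion_colours = {"sadness": "blue", "happiness": "yellow", "anger": "red"}
    colour = emotion_colours.get(emotion, "grey")

    return {"shape": shape, "size": size, "colour": colour}


print(cues_to_visuals(articulation=0.2, sound_level=0.7, emotion="sadness"))
```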

  • 3.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    What is the color of that music performance? (2005). In: Proceedings of the International Computer Music Conference - ICMC 2005, Barcelona, 2005, p. 367-370. Conference paper (Refereed)
    Abstract [en]

    The representation of expressivity in music is still a fairly unexplored field. Alternative ways of representing musical information are necessary when providing feedback on emotion expression in music, such as in real-time tools for music education or in the display of large music databases. One possible solution could be a graphical, non-verbal representation of expressivity in music performance using color as an index of emotion. To determine which colors are most suitable for an emotional expression, a test was run. Subjects rated how well each of 8 colors and their 3 nuances corresponds to each of 12 music performances expressing different emotions. Performances were played by professional musicians on 3 instruments: saxophone, guitar, and piano. Results show that subjects associated different hues with different emotions. Also, dark colors were associated with music in minor tonality and light colors with music in major tonality. Correspondences between spectrum energy and color hue are discussed in a preliminary fashion.

  • 4.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Favero, Federico
    KTH, School of Architecture and the Built Environment (ABE).
    Annersten, Lars
    Musikverket.
    Berner, David
    Musikverket.
    Morreale, Fabio
    Queen Mary University of London.
    SOUND FOREST/LJUDSKOGEN: A LARGE-SCALE STRING-BASED INTERACTIVE MUSICAL INSTRUMENT (2016). In: Sound and Music Computing 2016, SMC Sound&Music Computing NETWORK, 2016, p. 79-84. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a string-based, interactive, large-scale installation for a new museum dedicated to performing arts, Scenkonstmuseet, which will be inaugurated in 2017 in Stockholm, Sweden. The installation will occupy an entire room that measures 10x5 meters. We aim to create a digital musical instrument (DMI) that facilitates intuitive musical interaction, thereby enabling visitors to quickly start creating music either alone or together. The interface should be able to serve as a pedagogical tool; visitors should be able to learn about concepts related to music and music making by interacting with the DMI. Since the lifespan of the installation will be approximately five years, one main concern is to create an experience that will encourage visitors to return to the museum for continued instrument exploration. In other words, the DMI should be designed to facilitate long-term engagement. Finally, an important aspect in the design of the installation is that the DMI should be accessible and provide a rich experience for all museum visitors, regardless of age or abilities.

  • 5.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Emotion rendering in music: Range and characteristic values of seven musical variables (2011). In: Cortex, ISSN 0010-9452, E-ISSN 1973-8102, Vol. 47, no 9, p. 1068-1081. Article in journal (Refereed)
    Abstract [en]

    Many studies on the synthesis of emotional expression in music performance have focused on the effect of individual performance variables on perceived emotional quality by making a systematic variation of variables. However, most of the studies have used a predetermined small number of levels for each variable, and the selection of these levels has often been done arbitrarily. The main aim of this research work is to improve upon existing methodologies by taking a synthesis approach. In a production experiment, 20 performers were asked to manipulate values of 7 musical variables simultaneously (tempo, sound level, articulation, phrasing, register, timbre, and attack speed) for communicating 5 different emotional expressions (neutral, happy, scary, peaceful, sad) for each of 4 scores. The scores were compositions communicating four different emotions (happiness, sadness, fear, calmness). Emotional expressions and music scores were presented in combination and in random order for each performer, for a total of 5 x 4 stimuli. The experiment allowed for a systematic investigation of the interaction between the emotion of each score and the emotions the performers intended to express. A two-way repeated-measures analysis of variance (ANOVA) with the factors emotion and score was conducted on the participants' values separately for each of the seven musical factors. There are two main results. The first is that musical variables were manipulated in the same direction as reported in previous research on emotionally expressive music performance. The second is the identification, for each of the five emotions, of the mean values and ranges of the five musical variables tempo, sound level, articulation, register, and instrument. These values turned out to be independent of the particular score and its emotion. The results presented in this study therefore allow for both the design and control of emotionally expressive computerized musical stimuli that are more ecologically valid than stimuli without performance variations.
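
    A natural way to use the outcome of such a production experiment is as a table of per-emotion ranges for each musical variable, from which stimuli can then be sampled. The sketch below only illustrates that data structure; the numeric ranges are placeholders, not the values reported in the article.

```python
import random

# Illustrative per-emotion ranges for a few musical variables.
# The numbers are placeholders, NOT the values reported in the article.
EMOTION_SETTINGS = {
    "happy": {"tempo_bpm": (120, 160), "sound_level_db": (-9, -3), "articulation": (0.2, 0.5)},
    "sad":   {"tempo_bpm": (50, 70),   "sound_level_db": (-21, -15), "articulation": (0.8, 1.0)},
}

def sample_performance(emotion: str, rng: random.Random) -> dict:
    """Draw one parameter set within the ranges established for an emotion,
    so synthesized stimuli can vary while staying emotionally consistent."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in EMOTION_SETTINGS[emotion].items()}

print(sample_performance("sad", random.Random(1)))
```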

  • 6.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Influence of Acoustic Cues on the Expressive Performance of Music (2008). In: Proceedings of the 10th International Conference on Music Perception and Cognition, Sapporo, Japan, 2008. Conference paper (Refereed)
  • 7.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Hansen, Kjetil Falkenberg
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Karjalainen, Matti
    Helsinki University of Technology.
    Mäki-Patola, Teemu
    Helsinki University of Technology.
    Kanerva, Aki
    Helsinki University of Technology.
    Huovilainen, Antti
    Helsinki University of Technology.
    Jordá, Sergi
    University Pompeu Fabra.
    Kaltenbrunner, Martin
    University Pompeu Fabra.
    Geiger, Günter
    University Pompeu Fabra.
    Bencina, Ross
    University Pompeu Fabra.
    de Götzen, Amalia
    University of Padua.
    Rocchesso, Davide
    IUAV University of Venice.
    Controlling sound production (2008). In: Sound to Sense, Sense to Sound: A state of the art in Sound and Music Computing / [ed] Polotti, Pietro; Rocchesso, Davide, Berlin: Logos Verlag, 2008, p. 447-486. Chapter in book (Refereed)
  • 8.
    Burger, Birgitta
    et al.
    Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music, University of Jyväskylä.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Communication of Musical Expression by Means of Mobile Robot Gestures (2010). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 3, no 1, p. 109-118. Article in journal (Refereed)
    Abstract [en]

    We developed a robotic system that can behave in an emotional way. A simple 3-wheeled robot with limited degrees of freedom was designed. Our goal was to make the robot display emotions in music performance by performing expressive movements. These movements have been compiled and programmed based on literature about emotion in music, musicians’ movements in expressive performances, and object shapes that convey different emotional intentions. The emotions happiness, anger, and sadness have been implemented in this way. General results from behavioral experiments show that emotional intentions can be synthesized, displayed and communicated by an artificial creature, even in constrained circumstances.

  • 9. Camurri, A.
    et al.
    Bevilacqua, F.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Maestre, E.
    Penttinen, H.
    Seppänen, J.
    Välimäki, V.
    Volpe, G.
    Warusfel, O.
    Embodied music listening and making in context-aware mobile applications: the EU-ICT SAME Project (2009). Conference paper (Refereed)
  • 10.
    Camurri, Antonio
    et al.
    University of Genova.
    Volpe, Gualtiero
    University of Genova.
    Vinet, Hugues
    IRCAM, Paris.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Maestre, Esteban
    Universitat Pompeu Fabra, Barcelona.
    Llop, Jordi
    Universitat Pompeu Fabra, Barcelona.
    Kleimola, Jari
    Oksanen, Sami
    Välimäki, Vesa
    Seppanen, Jarno
    User-centric context-aware mobile applications for embodied music listening (2009). In: User Centric Media / [ed] Akan, Ozgur; Bellavista, Paolo; Cao, Jiannong; Dressler, Falko; Ferrari, Domenico; Gerla, Mario; Kobayashi, Hisashi; Palazzo, Sergio; Sahni, Sartaj; Shen, Xuemin (Sherman); Stan, Mircea; Xiaohua, Jia; Zomaya, Albert; Coulson, Geoffrey; Daras, Petros; Ibarra, Oscar Mayora, Heidelberg: Springer Berlin, 2009, p. 21-30. Chapter in book (Refereed)
    Abstract [en]

    This paper surveys a collection of sample applications for networked, user-centric, context-aware embodied music listening. The applications have been designed and developed in the framework of the EU-ICT Project SAME (www.sameproject.eu) and have been presented at the Agora Festival (IRCAM, Paris, France) in June 2009. All of them address in different ways the concept of embodied, active listening to music, i.e., enabling listeners to interactively operate in real time on the music content by means of their movements and gestures as captured by mobile devices. On the occasion of the Agora Festival the applications were also evaluated by both expert and non-expert users.

  • 11.
    Castellano, Ginevra
    et al.
    InfoMus Lab, DIST, University of Genova.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Camurri, Antonio
    InfoMus Lab, DIST, University of Genova.
    Volpe, Gualtiero
    InfoMus Lab, DIST, University of Genova.
    Expressive Control of Music and Visual Media by Full-Body Movement (2007). In: Proceedings of the 7th International Conference on New Interfaces for Musical Expression, NIME '07, New York, NY, USA: ACM Press, 2007, p. 390-391. Conference paper (Refereed)
    Abstract [en]

    In this paper we describe a system which allows users to use their full body for controlling, in real time, the generation of expressive audio-visual feedback. The system extracts expressive motion features from the user’s full-body movements and gestures. The values of these motion features are mapped both onto acoustic parameters for the real-time expressive rendering of a piece of music, and onto real-time generated visual feedback projected on a screen in front of the user.

  • 12. Castellano, Ginevra
    et al.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Camurri, Antonio
    Volpe, Gualtiero
    User-Centered Control of Audio and Visual Expressive Feedback by Full-Body Movements (2007). In: Affective Computing and Intelligent Interaction / [ed] Paiva, Ana; Prada, Rui; Picard, Rosalind W., Berlin/Heidelberg: Springer, 2007, p. 501-510. Chapter in book (Refereed)
    Abstract [en]

    In this paper we describe a system allowing users to express themselves through their full-body movement and gesture and to control in real time the generation of audio-visual feedback. The system analyses in real time the user’s full-body movement and gesture, extracts expressive motion features and maps the values of the expressive motion features onto real-time control of acoustic parameters for rendering a music performance. At the same time, visual feedback generated in real time is projected on a screen in front of the users, showing their coloured silhouette, depending on the emotion their movement communicates. Human movement analysis and visual feedback generation were done with the EyesWeb software platform and the music performance rendering with pDM. Evaluation tests were done with human participants to test the usability of the interface and the effectiveness of the design.

  • 13.
    Dahl, Sofia
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bevilacqua, Frédéric
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Clayton, Martin
    Leante, Laura
    Poggi, Isabella
    Rasamimanana, Nicolas
    Gestures in performance (2009). In: Musical Gestures: Sound, Movement, and Meaning / [ed] Godøy, Rolf Inge; Leman, Marc, New York: Routledge, 2009, p. 36-68. Chapter in book (Refereed)
    Abstract [en]

    We experience and understand the world, including music, through body movement–when we hear something, we are able to make sense of it by relating it to our body movements, or form an image in our minds of body movements. Musical Gestures is a collection of essays that explore the relationship between sound and movement. It takes an interdisciplinary approach to the fundamental issues of this subject, drawing on ideas, theories and methods from disciplines such as musicology, music perception, human movement science, cognitive psychology, and computer science.

  • 14.
    Eerola, Tuomas
    et al.
    Department of Music, University of Jyväskylä, Jyväskylä, Finland .
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Emotional expression in music: Contribution, linearity, and additivity of primary musical cues (2013). In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 4, p. 487. Article in journal (Refereed)
    Abstract [en]

    The aim of this study is to manipulate musical cues systematically in order to determine which aspects of music contribute to emotional expression, whether these cues operate in an additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues, and the ranked importance of these was established by multiple regression. The most important cue was mode, followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77-89% of the variance in ratings. Quadratic encoding of the cues did lead to minor but significant improvements of the models (0-8%). Finally, interactions between the cues were non-existent, suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music.
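
    The linearity question above is essentially a comparison between a regression model with only linear cue terms and one that also includes quadratic (and interaction) terms. A minimal sketch of that comparison with scikit-learn follows; the synthetic data and cue names are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Synthetic example: emotion ratings predicted from cue levels (illustration only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))          # e.g., tempo, dynamics, register levels
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] ** 2 + rng.normal(0, 0.1, 200)

# Linear encoding of the cues.
linear = LinearRegression().fit(X, y)

# Quadratic encoding adds squared and pairwise interaction terms.
X_quad = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
quadratic = LinearRegression().fit(X_quad, y)

# Compare variance explained by the two encodings.
print(f"linear R^2:    {linear.score(X, y):.2f}")
print(f"quadratic R^2: {quadratic.score(X_quad, y):.2f}")
```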

  • 15.
    Elblaus, Ludvig
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Unander-Scharin, Åsa
    Unander-Scharin, Carl
    Uncanny Materialities: Digital Strategies for Staging Supernatural Themes Drawn from Medieval Ballads (2017). In: Leonardo Music Journal, ISSN 0961-1215, E-ISSN 1531-4812, Vol. 27, p. 62-66. Article in journal (Refereed)
    Abstract [en]

    In the medieval tradition of ballads, a recurring theme is that of transformation. In a staged concert for chamber orchestra, singers and dancers called Varelser och Ballader (Beings and Ballads), we explored this theme using ballads coupled with contemporary poetry and new music. The performance made use of custom-made digital musical instruments, using video analysis and large-scale physical interfaces for transformative purposes. In this article, we describe the piece itself as well as how uncanny qualities of the digital were used to emphasize eerie themes of transformation and deception by the supernatural beings found in the medieval ballads.

  • 16.
    Fabiani, Marco
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Interactive sonification of emotionally expressive gestures by means of music performance (2010). In: Proceedings of ISon 2010, 3rd Interactive Sonification Workshop / [ed] Bresin, Roberto; Hermann, Thomas; Hunt, Andy, Stockholm, Sweden: KTH Royal Institute of Technology, 2010, p. 113-116. Conference paper (Refereed)
    Abstract [en]

    This study presents a procedure for interactive sonification of emotionally expressive hand and arm gestures by affecting a musical performance in real time. Three different mappings are described that translate accelerometer data to a set of parameters that control the expressiveness of the performance by affecting tempo, dynamics and articulation. The first two mappings, tested with a number of subjects during a public event, are relatively simple and were designed by the authors using a top-down approach. According to user feedback, they were not intuitive and limited the usability of the software. A bottom-up approach was taken for the third mapping: a classification tree was trained with features extracted from gesture data from a number of test subjects who were asked to express different emotions with their hand movements. A second set of data, where subjects were asked to make a gesture that corresponded to a piece of expressive music they had just listened to, was used to validate the model. The results were not particularly accurate, but reflected the small differences in the data and the ratings given by the subjects to the different performances they listened to.
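
    The first two (top-down) mappings translate accelerometer data directly into tempo, dynamics and articulation. The snippet below sketches what such a mapping could look like; the features and scaling constants are assumptions, not the mappings used in the study.

```python
import math

def gesture_to_performance(ax: float, ay: float, az: float) -> dict:
    """Map one accelerometer sample (m/s^2) to expressive performance parameters.

    Scaling constants are illustrative assumptions, not those used in the paper.
    """
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    energy = min(magnitude / 20.0, 1.0)          # normalise overall gesture energy

    return {
        "tempo_scale": 0.7 + 0.6 * energy,       # faster playing for energetic gestures
        "sound_level_db": -20.0 + 18.0 * energy, # louder playing for energetic gestures
        "articulation": 1.0 - 0.7 * energy,      # more staccato when energy is high
    }

print(gesture_to_performance(2.0, 9.8, 1.0))
```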

  • 17.
    Fabiani, Marco
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Systems for Interactive Control of Computer Generated Music Performance (2013). In: Guide to Computing for Expressive Music Performance / [ed] Kirke, A. & Miranda, E., Springer Berlin/Heidelberg, 2013, p. 49-73. Chapter in book (Refereed)
    Abstract [en]

    This chapter is a literature survey of systems for real-time interactive control of automatic expressive music performance. A classification is proposed based on two initial design choices: the music material to interact with (i.e., MIDI or audio recordings) and the type of control (i.e., direct control of the low-level parameters such as tempo, intensity, and instrument balance or mapping from high-level parameters, such as emotions, to low-level parameters). Their pros and cons are briefly discussed. Then, a generic approach to interactive control is presented, comprising four steps: control data collection and analysis, mapping from control data to performance parameters, modification of the music material, and audiovisual feedback synthesis. Several systems are then described, focusing on different technical and expressive aspects. For many of the surveyed systems, a formal evaluation is missing. Possible methods for the evaluation of such systems are finally discussed.
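
    The generic four-step approach described above (control data collection and analysis, mapping, modification of the music material, feedback synthesis) can be summarised as a simple processing loop. The skeleton below is only a structural sketch; all class and method names are hypothetical.

```python
# Hypothetical skeleton of the generic interactive-control loop described above.
# All class, attribute and method names are illustrative placeholders.

class InteractivePerformanceSystem:
    def __init__(self, controller, mapper, renderer, feedback):
        self.controller = controller   # step 1: control data collection and analysis
        self.mapper = mapper           # step 2: mapping to performance parameters
        self.renderer = renderer       # step 3: modification of the music material
        self.feedback = feedback       # step 4: audiovisual feedback synthesis

    def run_once(self):
        control_data = self.controller.read()
        parameters = self.mapper.map(control_data)
        audio = self.renderer.render(parameters)
        self.feedback.present(audio, parameters)
```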

  • 18.
    Friberg, Anders
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Battel, G. U.
    Structural Communication (2011). In: The Science & Psychology of Music Performance: Creative Strategies for Teaching and Learning, Oxford University Press, 2011. Chapter in book (Refereed)
    Abstract [en]

    Variations in timing and dynamics play an essential role in music performance. This is easily shown by having a computer perform a classical piece exactly as written in the score. The result is dull and will probably not affect us in any positive manner, although there may be plenty of potentially beautiful passages in the score. A musician can, by changing the performance of a piece, totally change its emotional character, for example, from sad to happy. How is this possible, and what are the basic techniques used to accomplish such a change? The key is how the musical structure is communicated. Therefore, a good understanding of structure - whether theoretic or intuitive - is a prerequisite for a convincing musical performance. This chapter surveys the basic principles and techniques that musicians use to convey and project music structure, focusing on auditory communication.

  • 19.
    Friberg, Anders
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Real-time control of music performance (2008). In: Sound to Sense - Sense to Sound: A state of the art in Sound and Music Computing / [ed] Polotti, Pietro; Rocchesso, Davide, Berlin: Logos Verlag, 2008, p. 279-302. Chapter in book (Refereed)
  • 20.
    Friberg, Anders
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Sundberg, Johan
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Overview of the KTH rule system for musical performance (2006). In: Advances in Cognitive Psychology, ISSN 1895-1171, E-ISSN 1895-1171, Vol. 2, no 2-3, p. 145-161. Article in journal (Refereed)
    Abstract [en]

    The KTH rule system models performance principles used by musicians when performing a musical score, within the realm of Western classical, jazz and popular music. An overview is given of the major rules involving phrasing, micro-level timing, metrical patterns and grooves, articulation, tonal tension, intonation, ensemble timing, and performance noise. By using selections of rules and rule quantities, semantic descriptions such as emotional expressions can be modeled. A recent real-time implementation provides the means for controlling the expressive character of the music. The communicative purpose and meaning of the resulting performance variations are discussed as well as limitations and future improvements.
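
    The core idea of such a rule system, a score processed by a set of rules whose contributions are scaled by rule quantities, can be illustrated with a toy example. The rule below is a made-up stand-in and not one of the actual KTH rules; it only shows how scaled rule contributions can combine into note-level timing deviations.

```python
# Toy illustration of rule-based performance deviation (NOT the actual KTH rules).
# Each rule proposes note-level deviations; a quantity k scales its contribution.

def toy_phrase_arch(notes, k=1.0):
    """Made-up rule: slow down slightly towards the end of the note list."""
    n = len(notes)
    return [{"tempo_factor": 1.0 - k * 0.05 * (i / max(n - 1, 1))} for i in range(n)]

def apply_rules(notes, rules):
    """Combine tempo deviations from all rules multiplicatively, note by note."""
    performed = []
    for i, note in enumerate(notes):
        factor = 1.0
        for rule, k in rules:
            factor *= rule(notes, k)[i]["tempo_factor"]
        performed.append({**note, "duration": note["duration"] / factor})
    return performed

score = [{"pitch": p, "duration": 0.5} for p in (60, 62, 64, 65)]
print(apply_rules(score, [(toy_phrase_arch, 1.5)]))
```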

  • 21. Giordano, Bruno
    et al.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Walking and playing: What's the origin of emotional expressiveness in music? (2006). In: Proceedings of the 9th International Conference on Music Perception & Cognition (ICMPC9), Bologna, Italy, August 22-26, 2006 / [ed] Baroni, M.; Addessi, A. R.; Caterina, R.; Costa, M., Bologna: Bononia University Press, 2006, p. 436. Conference paper (Refereed)
  • 22. Goebl, W.
    et al.
    Bresin, R.
    Measurement and reproduction accuracy of computer-controlled grand pianos (2003). In: Proceedings of SMAC 03, Stockholm Music Acoustics Conference, 2003, Vol. 1, p. 155-158. Conference paper (Refereed)
  • 23. Goebl, W.
    et al.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Galembo, A.
    Once again: The perception of piano touch and tone. Can touch audibly change piano sound independently of intensity? (2004). In: Proceedings of the International Symposium on Musical Acoustics, March 31st to April 3rd 2004 (ISMA2004), Nara, Japan: The Acoustical Society of Japan, CD-ROM, 2004, p. 332-335. Conference paper (Refereed)
    Abstract [en]

    This study addresses the old question of whether the timbre of isolated piano tones can be audibly varied independently of their hammer velocities, solely through the type of touch. A large number of single piano tones were played with two prototypical types of touch: depressing the keys with the finger initially resting on the key surface (pressed), and hitting the keys from a certain distance above (struck). Musicians were asked to identify the type of touch of the recorded samples, in a first block with all attack noises before the tone onsets included, and in a second block without them. Half of the listeners could correctly identify significantly more tones than chance in the first block (up to 86% accuracy), but no one could in block 2. Those who heard no difference tended to give struck ratings for louder tones in both blocks.

  • 24. Goebl, W.
    et al.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Galembo, A.
    Touch and temporal behavior of grand piano actions (2005). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 118, no 2, p. 1154-1165. Article in journal (Refereed)
    Abstract [en]

    This study investigated the temporal behavior of grand piano actions from different manufacturers under different touch conditions and dynamic levels. An experimental setup consisting of accelerometers and a calibrated microphone was used to capture key and hammer movements, as well as the sound signal. Five selected keys were played by pianists with two types of touch (pressed touch versus struck touch) over the entire dynamic range. Discrete measurements were extracted from the accelerometer data for each of the over 2300 recorded tones (e.g., finger-key, hammer-string, and key bottom contact times, maximum hammer velocity). Travel times of the hammer (from finger-key to hammer-string) as a function of maximum hammer velocity varied clearly between the two types of touch, but only slightly between pianos. A travel time approximation used in earlier work [Goebl W., (2001). J. Acoust. Soc. Am. 110, 563-572] derived from a computer-controlled piano was verified. Constant temporal behavior over type of touch and low compression properties of the parts of the action (reflected in key bottom contact times) were hypothesized to be indicators for instrumental quality.

  • 25. Gramming, Patricia
    et al.
    Sundberg, Johan
    Ternström, Sten
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Leanderson, Rolf
    Perkins, William H.
    Relationship between changes in voice pitch and loudness (1988). In: Journal of Voice, ISSN 0892-1997, E-ISSN 1873-4588, Vol. 2, no 2, p. 118-126. Article in journal (Refereed)
    Abstract [en]

    Changes in mean fundamental frequency accompanying changes in loudness of phonation are analyzed in 9 professional singers, 9 nonsingers, and 10 male and 10 female patients suffering from functional vocal dysfunction. The subjects read discursive texts with noise in earphones, and some also at voluntarily varied vocal loudness. The healthy subjects phonated as softly and as loudly as possible at various fundamental frequencies throughout their pitch ranges, and the resulting mean phonetograms are compared. Mean pitch was found to increase by about half a semitone per decibel of sound level. Broadly, the subject groups gave similar results, although the singers changed voice pitch more than the nonsingers. The voice pitch changes may be explained as passive results of changes of the subglottal pressure required for the sound level variation.
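
    The reported slope of roughly half a semitone per decibel allows a quick back-of-the-envelope estimate of the pitch change expected from a given change in vocal sound level. The short calculation below works through an example; the baseline frequency and the rounded slope are illustrative values, not data from the study.

```python
# Back-of-the-envelope pitch change implied by a slope of ~0.5 semitones per dB.
# Baseline frequency and the rounded slope are illustrative values only.

SEMITONES_PER_DB = 0.5

def expected_pitch(f0_hz: float, level_change_db: float) -> float:
    """Expected mean F0 after a change in vocal sound level."""
    semitone_shift = SEMITONES_PER_DB * level_change_db
    return f0_hz * 2 ** (semitone_shift / 12)

# A 6 dB increase from a 200 Hz baseline would then raise mean pitch by ~3 semitones.
print(round(expected_pitch(200.0, 6.0), 1))   # about 238 Hz
```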

  • 26. Hansen, K. F.
    et al.
    Bresin, R.
    DJ scratching performance techniques: Analysis and synthesis (2003). In: Proc. Stockholm Music Acoustics Conference, 2003, Vol. 2, p. 693-696. Conference paper (Refereed)
    Abstract [en]

    Scratching is a popular way of making music, turning the DJ into a musician. Normally scratching is done using a vinyl record, a turntable and a mixer. Vinyl manipulation is built up by a number of specialized techniques that have been analysed in a previous study. The present study has two main objectives. First is to better understand and model turntable scratching as performed by DJs. Second is to design a gesture controller for physical sound models, i.e. models of friction sounds. We attached sensors to a DJ equipment set-up. Then a DJ was asked to perform typical scratch gestures both isolated and in a musical context, i.e. as in a real performance. He also was asked to play with different emotions: sad, angry, happy and fearful. A model of the techniques used by the DJ was built based on the analysis of the collected data. The implementation of the model has been done in pd. The Radio Baton, with specially adapted gesture controllers, has been used for controlling the model. The system has been played by professional DJs in concerts.

  • 27.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, Superseded Departments, Speech, Music and Hearing.
    Analysis of a genuine scratch performance (2004). In: Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349, Vol. 2915, p. 477-478. Article in journal (Refereed)
    Abstract [en]

    The art form of manipulating vinyl records done by disc jockeys (DJs) is called scratching, and has become very popular since its start in the seventies. Since then, turntables have been commonly used as expressive musical instruments in several musical genres. This phenomenon has had a serious impact on the instrument-making industry, as sales of turntables and related equipment have soared. Despite this, the acoustics of scratching has barely been studied until now. In this paper, we illustrate the complexity of scratching by measuring the gestures of one DJ during a performance. The analysis of these measurements is important to consider in the design of a scratch model.

  • 28.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Mapping strategies in DJ scratching (2006). In: Proc. of the Conference on New Interfaces for Musical Expression, IRCAM, 2006, p. 188-191. Conference paper (Refereed)
    Abstract [en]

    For 30 years Disc Jockeys have been expressing their musical ideas with scratching. Unlike many other popular instruments, the equipment used for scratching is not built as one single unit, and it was not intended to be a musical instrument. This paper gives an overview of how DJs use their turntable, vinyl record and audio mixer in conjunction to produce scratch music. Their gestural input to the instrument is explained by looking at the mapping principles between the controller parameters and the audio output parameters. Implications are discussed for the design of new interfaces, with examples of recent innovations and experiments in the field.

  • 29.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    The Skipproof virtual turntable for high-level control of scratching (2010). In: Computer Music Journal, ISSN 0148-9267, E-ISSN 1531-5169, Vol. 34, no 2, p. 39-50. Article in journal (Refereed)
    Abstract [en]

    A background on scratching and disc jockey (DJ) interfaces is presented, the Skipproof application is described, performance situations where Skipproof has been used are presented, and current implementations and possible future uses of Skipproof are discussed. DJing has grown from record players, turntables, and vinyl records to a catalogue of commercial physical controllers for other sound formats and sequencer-based interfaces with non-real-time interaction. Skipproof provides the main functionality of a turntable and a mixer, allowing a user to play different sound samples and alter the speed and amplitude manually. Skipproof is described in terms of its GUI and visual feedback, sensor and parameter mapping, and audio. The use of the Radio Baton as the turntable controller in a public performance featuring the Skipproof software revealed problems due to the lack of beat synchronization of the scratch techniques and the impossibility of setting a general tempo.

  • 30.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Verbal Description of DJ Recordings (2008). In: Proc. of the 10th International Conference on Music Perception and Cognition, Sapporo, 2008, p. 20. Conference paper (Refereed)
    Abstract [en]

    In a recent pilot study, DJs were asked to perform the same composition with different intended emotional expressions (happiness, sadness, etc.). In a subsequent test, these intentions could not be matched by listeners' judgements. One possible explanation is that DJs have a different vocabulary when describing expressivity in their performances. We designed an experiment to understand how DJs and listeners describe the music. The experiment was aimed at identifying a set of descriptors used mainly with scratch music, but possibly also with other genres. In a web questionnaire, subjects were presented with sound stimuli from scratch music recordings. Each participant described the music with words, phrases and terms in a free labelling task. The resulting list of responses was analyzed in several steps and condensed to a set of about 10 labels. Important differences were found between descriptions of scratch music and of other Western genres such as pop, jazz or classical music. For instance, labels such as cocky, cool, amusement and skilled were common. These specific labels seem to be mediated by the characteristic hip-hop culture. The experiment offered some explanation of the problem of verbally describing expressive scratch music. The set of labels found can be used for further experiments, for example when instructing DJs in performances.

  • 31.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Describing the emotional content of hip-hop DJ recordings (2008). In: The Neurosciences and Music III, Montreal: New York Academy of Sciences, 2008, p. 565. Conference paper (Refereed)
  • 32.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Principles for expressing emotional content in turntable scratching (2006). In: Proc. 9th International Conference on Music Perception & Cognition / [ed] Baroni, M.; Addessi, A. R.; Caterina, R.; Costa, M., Bologna: Bononia University Press, 2006, p. 532-533. Conference paper (Refereed)
    Abstract [en]

    Background: Scratching is a novel musical style that introduces the turntable as a musical instrument. Sounds are generated by moving vinyl records with one or two hands on the turntable and controlling amplitude with the crossfader with one hand. With this instrument mapping, complex gestural combinations that produce unique 'tones' can be achieved. These combinations have established a repertoire of playing techniques, and musicians (or DJs) know how to perform most of them. Scratching is normally not a melodically based style of music. It is very hard to produce tones with discrete and constant pitch. The sound is always strongly dependent on the source material on the record, and its timbre is not controllable in any ordinary way. However, tones can be made to sound different by varying the speed of the gesture and thereby creating pitch modulations. Consequently, timing and rhythm remain important candidates for expressive playing when compared to conventional musical instruments, with the additional possibility of modulating the pitch.
    Aims: The experiment presented aims to identify acoustical features that carry emotional content in turntable scratching performances, and to find relationships with how music is expressed with other instruments. An overall aim is to investigate why scratching is growing in popularity even if it a priori seems ineffective as an expressive interface.
    Method: A number of performances by experienced DJs were recorded. Speed of the record, mixer amplitude and the generated sounds were measured. The analysis focuses on finding the underlying principles for expressive playing by examining the musicians' gestures and the musical performance. The principles found are compared to corresponding methods for expressing emotional intentions used with other instruments.
    Results: The data analysis is not completed yet. The results will give an indication of which acoustical features DJs use to play expressively on their instrument with its musically limited possibilities. Preliminary results show that the principles for expressive playing are in accordance with current research on expression.
    Conclusions: The results present some important features of turntable scratching that may help explain why it remains a popular instrument despite its rather unsatisfactory playability both melodically and rhythmically.

  • 33.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Analysis of the acoustics and playing strategies of turntable scratching (2011). In: Acta Acustica united with Acustica, ISSN 1610-1928, E-ISSN 1861-9959, Vol. 97, no 2, p. 303-314. Article in journal (Refereed)
    Abstract [en]

    Scratching performed by a DJ (disk jockey) is a skillful style of playing the turntable with complex musical output. This study focuses on the description of some of the acoustical parameters and playing strategies of typical scratch improvisations, and on how these parameters are typically used for expressive performance. Three professional DJs were instructed to express different emotions through improvisations, and both audio and gestural data were recorded. Feature extraction and analysis of the recordings are based on a combination of audio and gestural data, instrument characteristics, and playing techniques. The acoustical and performance parameters extracted from the recordings give a first approximation of the functional ranges within which DJs normally play. Results from the analysis show that parameters which are important for other solo instrument performances, such as pitch, have less influence in scratching. Both differences and commonalities between the DJs’ playing styles were found. The impact that the findings of this work may have on constructing models of scratch performances is discussed.

  • 34.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Hiraga, Rumi
    Industrial Technology Department, Tsukuba University of Technology.
    The Effects of Musical Experience and Hearing Loss on Solving an Audio-Based Gaming Task (2017). In: Applied Sciences, ISSN 2076-3417, Vol. 7, no 12, article id 1278. Article in journal (Refereed)
    Abstract [en]

    We conducted an experiment using a purposefully designed audio-based game called the Music Puzzle with Japanese university students with different levels of hearing acuity and experience with music, in order to determine the effects of these factors on solving such games. A group of hearing-impaired students (n = 12) was compared with two hearing control groups with the additional characteristic of having high (n = 12) or low (n = 12) engagement in musical activities. The game was played with three sound sets or modes: speech, music, and a mix of the two. The results showed that people with hearing loss had longer processing times for sounds when playing the game. Solving the game task in the speech mode was found particularly difficult for the group with hearing loss, and while they found the game difficult in general, they expressed a fondness for the game and a preference for music. Participants with less musical experience showed difficulties in playing the game with musical material. We were able to explain the impacts of hearing acuity and musical experience; furthermore, we can promote this kind of tool as a viable way to train hearing by focused listening to sound, particularly with music.

  • 35. Hiraga, Rumi
    et al.
    Bresin, Roberto
    KTH, Superseded Departments, Speech Transmission and Music Acoustics.
    Hirata, Keiji
    Katayose, Haruhiro
    Rencon 2004: Turing Test for Musical Expression (2004). In: Proceedings of the 4th international conference on New interfaces for musical expression / [ed] Lyons, Michael J., Hamamatsu, Shizuoka, Japan: National University of Singapore, 2004, p. 120-123. Conference paper (Refereed)
    Abstract [en]

    Rencon is an annual international event that started in 2002. It has the roles of (1) pursuing evaluation methods for systems whose output includes subjective issues, and (2) providing a forum for research from several fields related to musical expression. In the past, Rencon was held as a workshop associated with a musical contest that provided a forum for presenting and discussing the latest research in automatic performance rendering. This year we introduce new evaluation methods of performance expression to Rencon: a Turing Test and a Gnirut Test, which is a reverse Turing Test, for performance expression. We have opened a section of the contests to any instrument and genre of music, including synthesized human voices.

  • 36. Hiraga, Rumi
    et al.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Katayose, Haruhiro
    Rencon 2005 (2006). In: Proceedings of the 20th Annual Conference of the Japanese Society for Artificial Intelligence, 2006, p. 1D2-1. Conference paper (Refereed)
    Abstract [en]

    The contest for performance rendering systems, Rencon, was held concurrently with the panel session entitled "Software Tools for Expressive Music Performance" at the International Computer Music Conference (ICMC) 2005. In this paper, we describe the contest and the panel session. The contest included a compulsory section in which Mozart's Minuet KV 1 (1e) was the set piece. The contest winner was decided by voting prior to the panel session. Five panelists in the panel session introduced Rencon and research on expressive music performance. The panel session was lively, with active discussion and a full audience.

  • 37.
    Holzapfel, André
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    A corpus study on rhythmic modes in Turkish makam music and their interaction with meter (2015). In: Proceedings of the 15th Congress of the Society for Music Theory, 2015. Conference paper (Refereed)
  • 38. Istok, E.
    et al.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Huotilainen, M.
    Tervaniemi, M.
    Expressive timing facilitates the processing of phrase boundaries in music: Evidence from the event-related potential (2012). In: International Journal of Psychophysiology, ISSN 0167-8760, E-ISSN 1872-7697, Vol. 85, no 3, p. 403-404. Article in journal (Refereed)
  • 39. Istók, E.
    et al.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Huotilainen, M.
    Tervaniemi, M.
    Expressive Timing Facilitates the Neural Processing of Phrase Boundaries in Music: Evidence from Event-Related Potentials (2013). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 8, no 1, p. e55150. Article in journal (Refereed)
    Abstract [en]

    The organization of sound into meaningful units is fundamental to the processing of auditory information such as speech and music. In expressive music performance, structural units or phrases may become particularly distinguishable through subtle timing variations highlighting musical phrase boundaries. As such, expressive timing may support the successful parsing of otherwise continuous musical material. By means of the event-related potential (ERP) technique, we investigated whether expressive timing modulates the neural processing of musical phrases. Musicians and laymen listened to short atonal scale-like melodies that were presented either isochronously (deadpan) or with expressive timing cues emphasizing the melodies' two-phrase structure. Melodies were presented in an active and a passive condition. Expressive timing facilitated the processing of phrase boundaries as indicated by decreased N2b amplitude and enhanced P3a amplitude for target phrase boundaries and larger P2 amplitude for non-target boundaries. When timing cues were lacking, task demands increased, especially for laymen, as reflected by reduced P3a amplitude. In line with this, the N2b occurred earlier for musicians in both conditions, indicating generally faster target detection compared to laymen. Importantly, the elicitation of a P3a-like response to phrase boundaries marked by a pitch leap during passive exposure suggests that expressive timing information is automatically encoded and may lead to an involuntary allocation of attention towards significant events within a melody. We conclude that subtle timing variations in music performance prepare the listener for musical key events by directing and guiding attention towards their occurrences. That is, expressive timing facilitates the structuring and parsing of continuous musical material even when the auditory input is unattended.

  • 40. Laukka, P.
    et al.
    Juslin, P. N.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    A dimensional approach to vocal expression of emotion (2005). In: Cognition & Emotion, ISSN 0269-9931, E-ISSN 1464-0600, Vol. 19, no 5, p. 633-653. Article in journal (Refereed)
    Abstract [en]

    This study explored a dimensional approach to vocal expression of emotion. Actors vocally portrayed emotions (anger, disgust, fear, happiness, sadness) with weak and strong emotion intensity. Listeners (30 university students and 6 speech experts) rated each portrayal on four emotion dimensions (activation, valence, potency, emotion intensity). The portrayals were also acoustically analysed with respect to 20 vocal cues (e.g., speech rate, voice intensity, fundamental frequency, spectral energy distribution). The results showed that: (a) there were distinct patterns of ratings of activation, valence, and potency for the different emotions; (b) all four emotion dimensions were correlated with several vocal cues; (c) listeners' ratings could be successfully predicted from the vocal cues for all dimensions except valence; and (d) the intensity dimension was positively correlated with the activation dimension in the listeners' ratings.

  • 41.
    Lindborg, PerMagnus
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. Nanyang Technological University, Singapore.
    About TreeTorika: Rhetoric, CAAC and Mao (2008). In: OM Composer’s Book #2 / [ed] Bresson, J., Agon C. & Assayag G., Paris, France: Éditions Delatour France / IRCAM - Centre Pompidou, 2008, p. 95-116. Chapter in book (Refereed)
    Abstract [en]

    This chapter examines computer-assisted analysis and composition (CAAC) techniques in relation to the composition of my piece TreeTorika for chamber orchestra. I describe methods for analysing the musical features of a recording of a speech by Mao Zedong, in order to extract compositional material such as global form, melody, harmony and rhythm, and for developing rhythmic material. The first part focuses on large-scale segmentation, melody transcription, quantification and quantization. Automatic transcription of the voice was discarded in favour of an aural method using tools in Amadeus and Max/MSP. The data was processed in OpenMusic to optimise the accuracy and readability of the notation. The harmonic context was derived from the transcribed melody and from AudioSculpt partial tracking and chord-sequence analyses. The second part of this chapter describes one aspect of computer-assisted composition, that is, the use of the rhythm constraint library in OpenMusic to develop polyrhythmic textures. The flexibility of these techniques allowed the computer to assist me in all but the final phases of the work. In addition, attention is given to the artistic and political implications of using recordings of such a disputed public figure as Mao.

  • 42.
    Lindborg, PerMagnus
    Nanyang Technological University, Singapore.
    Editorial - Special Issue on Sound Art and Interactivity in Singapore: SI13 and More (2014). In: eContact!, Vol. 16, no 2. Article in journal (Other academic)
    Abstract [en]

    The SI13 NTU/ADM Symposium on Sound and Interactivity in Singapore provided a meeting point for local researchers, artists, scholars and students working creatively with sound and interactivity, as well as the foundation for an issue exploring sound and interactivity in the Southeast Asian country. The School of Art, Design and Media of Singapore’s Nanyang Technological University hosted the Symposium on Sound and Interactivity from 14-16 November 2013, with an accompanying exhibition that could be visited throughout the symposium. A total of 15 artworks and 14 papers were selected by a review committee for presentation by 24 active participants during the three-day symposium. While all but four of the participants are residents of the island, they represent seventeen different countries, thus reflecting the cosmopolitan nature of Singapore in general and of sound artists and researchers in particular. Thanks to funding from Nanyang’s CLASS conference scheme, Roger T. Dean (MARCS Institute, University of New South Wales, Australia) and Diemo Schwarz (IRCAM, France) could be invited as Keynote Speakers; they also performed in the concert that opened the symposium, and contributed to the exhibition. It is a pleasure to collaborate with eContact! in presenting a broad collection of articles emanating from this event, and to use these as a basis for an overview of sound art and related activities in Singapore. Eleven texts from the SI13 Proceedings have been edited for this issue. Joining them are two texts originally written for the catalogue of the “Sound: Latitudes and Attitudes” exhibition held at Singapore’s Institute of Contemporary Arts (7 February - 16 March 2014). Finally, in the guise of a “community report” on sound art activities in Singapore, I have contributed a “constructed multilogue” created from interviews with three sound art colleagues.

  • 43.
    Lindborg, PerMagnus
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. Nanyang Technological University, Singapore.
    Interactive Sonification of Weather Data for The Locust Wrath, a Multimedia Dance Performance2016In: Leonardo: Journal of the International Society for the Arts, Sciences and Technology, ISSN 0024-094X, E-ISSN 1530-9282Article in journal (Refereed)
    Abstract [en]

    To work flexibly with the sound design for The Locust Wrath, a multimedia dance performance on the topic of climate change, we developed software for interactive sonification of climate data. An open-ended approach to parameter mapping allowed tweaking and improvisation during rehearsals, resulting in a large range of musical expression. The sonifications represented weather systems pushing through South-East Asia in complex patterns. The climate was rendered as a piece of electroacoustic music, whose compositional form (gesture, timbre, intensity, harmony, spatiality) was determined by the data. The article discusses aspects of aesthetic sonification, reports on the process of developing the present work, and contextualises the design decisions within theories of crossmodal perception and listening modes.
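    As an illustration of the kind of parameter mapping the abstract describes, the sketch below maps hypothetical weather variables to basic synthesis control parameters. It is not the software built for The Locust Wrath; the variable names, ranges and mappings are assumptions for demonstration only.

```python
# A minimal, hypothetical parameter-mapping sketch: weather readings in,
# synthesis control values out. The mappings (and the idea of exposing
# them for live tweaking) are illustrative, not the production software.

def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale x from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    x = max(in_lo, min(in_hi, x))
    norm = (x - in_lo) / (in_hi - in_lo)
    return out_lo + norm * (out_hi - out_lo)

# An editable mapping table: weather variable -> (input range, output range, target)
MAPPING = {
    "temperature_c": ((22.0, 38.0), (48.0, 84.0), "pitch_midi"),
    "pressure_hpa":  ((995.0, 1020.0), (0.1, 0.9), "amplitude"),
    "humidity_pct":  ((40.0, 100.0), (200.0, 8000.0), "filter_cutoff_hz"),
    "wind_dir_deg":  ((0.0, 360.0), (-1.0, 1.0), "pan"),
}

def sonify(reading):
    """Turn one weather reading (a dict) into a dict of synth parameters."""
    params = {}
    for var, ((in_lo, in_hi), (out_lo, out_hi), target) in MAPPING.items():
        if var in reading:
            params[target] = scale(reading[var], in_lo, in_hi, out_lo, out_hi)
    return params

if __name__ == "__main__":
    reading = {"temperature_c": 31.0, "pressure_hpa": 1008.0,
               "humidity_pct": 85.0, "wind_dir_deg": 90.0}
    print(sonify(reading))
```

    Because the mapping is plain data, it can be edited during rehearsal, which is one way to realise an open-ended approach to parameter mapping.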

  • 44.
    Lindborg, PerMagnus
    Université de Paris IV Sorbonne.
    Le dialogue musicien-machine : Aspects des systèmes d'interactivité musicale2003Licentiate thesis, monograph (Other academic)
    Abstract [fr]

    This text takes as its subject the confluence of musical creation and the cognitive sciences. The main goal of the work has been to carry out a reconnaissance of the terrain. The present text is therefore necessarily incomplete, and will serve only as a starting point for more substantial research.

    I have chosen musical interactivity as my theme, defined here as the musician–machine dialogue. I attempt to approach this phenomenon along multiple, overlapping paths. The theme remains at the centre, and around it I sketch its relation to several connected facts and phenomena, in particular: natural and formal languages, the question of the interface to creation, artificial intelligence, and the notions of memory and meaning. Taken together, these approaches constitute the study of aspects of interactive music systems.

    The vast subject of musical interactivity is embedded in the history of computer music, a history that already goes back at least half a century. It is therefore necessary to circumscribe the core of the subject and to move through concentric or spiralling circles, in order to gain the knowledge that allows us to understand the phenomenon better. The procedure is a little like observing a star with the naked eye: look at it directly and it disappears, since the retina is more sensitive to light at its periphery. The text is thus inevitably a collage of several studies of limited scope. Even so, the important aspects proper to the subject must be respected, the superfluous avoided, and as many connections as possible made. The research is guided by three questions. What is the material, in other words the components and processes, that constitutes the system proper, as used in the music performance situation? Secondly, what is the relation between cognitive research and the technological tools available? Thirdly, what implications have these technologies had, and what implications will they have all the more in the future, for musical creativity?

    For several years, the concepts underlying this text have influenced my work as a composer and performer. I have experimented with them through works employing electroacoustic set-ups of variable configuration: “Beda+” (1995), “Tusalava” (1999), “Leçons pour un apprenti sourd-muet” (1998-9), “gin/gub” (2000), “Manifest”[1] (2000), “Project Time”[2] (2001), “sxfxs” (2001), “Extra Quality” (2001-2), ”D!sturbances 350–500”[3]… These pieces of music grew out of a curiosity about the theoretical foundations of cognition and the workings of the human brain. In particular, I have devoted myself to analysing the playing situation in which an exchange of information and of musical initiatives takes place between musician and machine, acting with an equivalent degree of participation in a complex system. I find that this playful situation can also serve as a research tool; it is a little like a laboratory, or a test bench, for testing hypotheses, whether these are claims limited to music or broader ones reaching into unfamiliar territory.

    Being a composer, I have tried to make the study neither too narrow nor strictly descriptive. I felt the need to analyse contemporary works with scientific components: the three projects studied are in fact still under development. The point of this study was to capture their raison d'être rather than to show their respective forms in a finalised state, which in any case is not their destiny. If musicology contented itself with demonstrating structures in repertoire works that have long been known, or if it shut itself up in a technocratic academicism developing models that explain only things that are obvious to musicians, it would suffer from anaemia. When it proposes a hypothesis, that hypothesis must have predictive aspects. It would be better still if the models developed in support of the hypothesis were easily accessible and could serve in the development of new, innovative tools. This is desirable not only to stimulate creative production, but also to help us better understand how creativity itself works.

    Musical activity in the general sense, for those who produce it as much as for those who appreciate it, is an essentially non-verbal exercise whose aim is the emergence of an understanding of human creativity of an order other than verbal or written. In studying creativity, and above all in formalising it, do we not risk denaturing it? Or is creativity perhaps in no danger of collapsing into research? What will remain of musical creation the day a machine has composed a work capable of moving listeners who know nothing of how it was made? Nevertheless, following William Faulkner's call to “kill your darlings”, let us hope to transcend creativity as we know it and move towards unheard-of musical lands.

  • 45.
    Lindborg, PerMagnus
    Nanyang Technological University, Singapore.
    Leçons : an Approach to a System for Machine Learning, Improvisation and Music Performance2003In: Computer Music Modeling and Retrieval: International Symposium, CMMR 2003, Springer-Verlag New York, 2003Chapter in book (Refereed)
    Abstract [en]

    This paper aims at describing an approach to the music performance situation as a laboratory for investigating interactivity. I would like to present “Leçons pour un apprenti sourd-muet”, where the basic idea is that of two improvisers, a saxophonist and a computer, engaged in a series of musical questions and responses. The situation is inspired by the Japanese shakuhachi tradition, where imitating the master performer is a prime element in the apprentice’s learning process. Through listening and imitation, the computer’s responses get closer to those of its master with each turn. In this sense, the computer’s playing emanates from the saxophonist’s phrases, and the interactivity in “Leçons” takes place on the level of the composition.
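    The abstract does not specify the learning method, so the following sketch is only a loose analogue of the question-and-response idea: the computer listens to a master phrase, accumulates a first-order model of its pitch transitions, and generates a response from what it has heard so far. All names and parameters are hypothetical.

```python
# Hypothetical imitation-by-listening sketch (NOT the system described in
# the paper): accumulate pitch transitions from each master phrase, then
# answer with a phrase generated from the learned transition model.

import random
from collections import defaultdict

class Imitator:
    def __init__(self, seed=0):
        self.transitions = defaultdict(list)   # pitch -> list of next pitches
        self.rng = random.Random(seed)

    def listen(self, phrase):
        """Update the model with one master phrase (a list of MIDI pitches)."""
        for a, b in zip(phrase, phrase[1:]):
            self.transitions[a].append(b)

    def respond(self, start_pitch, length=8):
        """Generate a response phrase by walking the learned transitions."""
        phrase, current = [start_pitch], start_pitch
        for _ in range(length - 1):
            nexts = self.transitions.get(current)
            if not nexts:                       # nothing learned yet: repeat
                nexts = [current]
            current = self.rng.choice(nexts)
            phrase.append(current)
        return phrase

if __name__ == "__main__":
    imitator = Imitator()
    master_phrase = [62, 64, 66, 64, 62, 69, 66, 62]   # a made-up saxophone line
    imitator.listen(master_phrase)
    print(imitator.respond(start_pitch=62))
```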

  • 46.
    Lindborg, PerMagnus
    Nanyang Technological University, Singapore.
    Reflections on aspects of music interactivity in performance situations2008In: eContact!, Vol. 10, no 4Article in journal (Refereed)
    Abstract [en]

    Music interactivity is a sub-field of human-computer interaction studies. Interactive situations have different degrees of structural openness and musical “ludicity”, or playfulness. Discussing music seems inherently impossible, since it is essentially a non-verbal activity. Music can produce an understanding (or at least prepare for an understanding) of creativity that is of an order neither verbal nor written. A human listener might perceive beauty to be of this kind in a particular music. But can machine-generated music be considered creative, and if so, wherein lies the creativity? What are the conceptual limits of notions such as instrument, computer and machine? A work of interactive music might be more pertinently described by the processes involved than by one or several instantiations. While humans spontaneously deal with multiple process descriptions (verbal, visual, kinetic…) and are very good at synthesising, the computer is limited to handling processes describable in a formal language such as computer code. But if the code can be considered a score, does it not make a musician out of the computer? Since the dawn of computer music, composers have created musical systems employing artificial intelligence in different forms as tools for creative stimulus. A large part of music interactivity research concerns interface design, which involves ergonomics and concepts from traditional instrument making. I will show examples of how I work with interactivity in my compositions, from straightforward applications as composition tools to more complex artistic work.

  • 47.
    Lindborg, PerMagnus
    Nanyang Technological University, Singapore.
    Singapore Voices: An interactive installation about languages to (re)(dis)cover the intergenerational distance2011In: National Academy of Screen and Sound, ISSN 1833-0538Article in journal (Refereed)
    Abstract [en]

    Singapore Voices is an interactive installation, integrating sound and image in a series of touch-sensitive displays. Each display shows the portrait of an elderly person, standing with the hand turned outwards, as if saying: “I built this nation”. Two displays can be seen in Figure 1 below. When the visitor touches the hand or shoulder, they hear a recording of the speaker’s voice. Chances are that the visitor will not be able to understand the language spoken, but she or he will indeed grasp much of all that is, in a manner of speaking, “outside” of the words: elements of prosody such as phrasing and speech rhythm, but also voice colour that may hint at the emotional state of the person. Then there is coughing, laughing, a hand clap and so forth. Such paralinguistic elements of vocal communication are extremely important and, furthermore, their meaning is quite universal.

  • 48.
    Lindborg, PerMagnus
    Nanyang Technological University, Singapore.
    Sound Art Singapore: Conversation with Pete Kellock, Zul Mahmod and Mark Wong2014In: eContact!, Vol. 16, no 2Article in journal (Refereed)
    Abstract [en]

    This paper is a “constructed multilogue” oriented around a set of questions about sound art in Singapore. I have lived here since 2007 and felt that a “community report” should aim to probe recent history deeper than what I could possibly do on my own, in order to give a rich perspective of what is happening here today. I was very happy when Pete Kellock, Zul Mahmod and Mark Wong agreed to be interviewed. Each has a long-time involvement in the Singapore sound scene, in a different capacity. Pete is an electroacoustic music composer who has worked in research and entrepreneurship, and is a founder of muvee technologies. Zul is a multimedia artist and performer who has developed a rich personal expression, mixing sonic electronics, sculpture and robotics in playful ways. Mark is a writer and sound artist who has followed Singapore’s experimental scenes closely since the 1990s.

    I sent the three of them a letter containing a range of observations I had made (which may or may not be entirely accurate) and questions (admittedly thorny and intended to provoke), including the following:

    The geographical location and Singapore’s historic reason-to-be as a trading post has instilled a sense of ephemerality — people come and go, ideas and traditions too — as well as a need to develop contacts with the exterior. The arts scene in general seems to be largely a reflection of whatever the current trading priorities demand. In what way does the current local sound art reflect the larger forces within Singaporean society? Since art is mostly orally traded, how are its traditions nurtured and developed?

    Around 2010, the Government seems to have indicated a new task for cultural workers, including sound artists and musicians: to define (create or discover, stitch up or steal) a “Singapore identity”. The Singapore Art Festival shut down for two years while the think tanks were brewing. Will this funnel taxpayer money and (more importantly) people’s attention towards folkloristic or museal music, rather than to radical and/or intellectual sound art? At the same time, there is considerable commercial pressure to subsume music / sound listening into an experiential, multimodal, game-like and socially mediated lifestyle product. Are commercialization and identity-seeking two sides of the same coin: one side inflation-prone, the other a possible counterfeit? Is there room for a “pure listening experience”, for example to electroacoustic music? Or is the future of sound art ineluctably intertwined with sculptural and visual elements?

    The creative people involved in sound art include entrepreneurs, programmers, academics, educators, curators and journalists. Which institutions nurture talent and bring audiences to meet new experiences? Where are the hothouses for developing ideas, craft, artistry, innovation and business?

    The interviews, loosely structured around these themes, were conducted in January and February 2014. Our conversations often took unexpected turns (mostly for the better). I diligently transcribed the recordings, and each interviewee made corrections and additions before we gently nudged the spoken language a little closer to prose. I then brought out a pair of big scissors and a large pot of coffee, and made a cut-out collage, weaving the texts into the multilogue that follows. The idea has been to create an illusion of four people conversing with each other under the same roof. Deceit or not, at the very least we all live and work on the same small island, somewhere in the deep southeast. I hope you will enjoy reading Sound Art Singapore.

  • 49.
    Lindborg, PerMagnus
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. Nanyang Technological University, Singapore.
    Sound perception and design in multimodal environments2015Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This dissertation is about sound in context. Since sensory processing is inherently multimodal, research in sound is necessarily multidisciplinary. The present work has been guided by principles of systematicity, ecological validity, complementarity of methods, and integration of science and art. The main tools used to investigate the mediating relationship between people and environment through sound have been empiricism and psychophysics. Four of the seven included papers focus on perception. In Paper A, urban soundscapes were reproduced in a 3D installation. Analysis of results from an experiment revealed correlations between acoustic features and physiological indicators of stress and relaxation. Paper B evaluated soundscapes of different types. Perceived quality was predicted not only by psychoacoustic descriptors but also by personality traits. Sound reproduction quality was manipulated in Paper D, causing two effects on source localisation which were explained by spatial and semantic crossmodal correspondences. Crossmodal correspondence was central in Paper C, a study of colour association with music. A response interface employing the CIE Lab colour space, a novelty in music emotion research, was developed. A mixed-method approach supported an emotion mediation hypothesis, evidenced in regression models and participant interviews. Three papers focus on design. Field surveys and acoustic measurements were carried out in restaurants. Paper E charted relations between acoustic, physical, and perceptual features, focussing on designable elements and materials. This investigation was pursued in Paper F, where a taxonomy of sound sources was developed. Analysis of questionnaire data revealed perceptual and crossmodal effects. Lastly, Paper G discussed how crossmodal correspondences facilitated the creation of meaning in music by infusing ecologically founded sonification parameters with visual and spatial metaphors. The seven papers constitute an investigation into how sound affects us, and what sound means to us.
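    The CIE Lab response interface mentioned above rests on a standard colour-space conversion. The sketch below shows the usual sRGB-to-CIELAB (D65) conversion as a generic textbook implementation; it is not code from the thesis or from the interface itself.

```python
# Standard sRGB (0-255) -> CIELAB (D65 white point) conversion.
# Generic implementation for illustration; not taken from the thesis.

def srgb_to_lab(r, g, b):
    # 1) sRGB to linear RGB
    def to_linear(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = to_linear(r), to_linear(g), to_linear(b)

    # 2) linear RGB to CIE XYZ (sRGB primaries, D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # 3) XYZ to CIELAB, normalised by the D65 reference white
    xn, yn, zn = 0.95047, 1.00000, 1.08883
    def f(t):
        eps = (6 / 29) ** 3
        return t ** (1 / 3) if t > eps else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b_lab = 200 * (fy - fz)
    return L, a, b_lab

if __name__ == "__main__":
    print(srgb_to_lab(255, 0, 0))   # a saturated red: roughly L* 53, a* 80, b* 67
```

    Distances between colour responses can then be computed in this perceptually more uniform space, for example as Euclidean ΔE, which is one reason such studies may prefer CIELAB over raw RGB.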

  • 50.
    Lindborg, PerMagnus
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. Nanyang Technological University, Singapore.
    Kwan, Nicholas A
    Nanyang Technological University.
    Audio Quality Moderates Localisation Accuracy: Two Distinct Perceptual Effects2015In: Proc. 138th Convention of the Audio Engineering Society, Warsaw, Poland: Audio Engineering Society, Inc., 2015Conference paper (Refereed)
    Abstract [en]

    Audio quality is known to cross-modally influence reaction speed, sense of presence, and visual quality. We designed an experiment to test the effect of audio quality on source localisation. Stimuli with different MP3 compression rates, as a proxy for audio quality, were generated from drum samples. Participants (n = 18) estimated the position of a snare drum target while compression rate, masker, and target position were systematically manipulated in a full-factorial repeated-measures experimental design. Analysis of variance revealed that localisation accuracy was better for wide target positions than for narrow ones, with a medium effect size, and that the effect of target position was moderated by compression rate in different directions for wide and narrow targets. The results suggest that there might be two perceptual effects at play: one whereby increased audio quality causes a widening of the soundstage, possibly via a SMARC-like mechanism, and another whereby it enables higher localisation accuracy. In the narrow target positions in this experiment, the two effects acted in opposite directions and largely cancelled each other out. In the wide target presentations, their effects were compounded and led to significant correlations between compression rate and localisation error.
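    To show the general shape of such an analysis, the sketch below runs a two-way repeated-measures ANOVA with two of the factors named in the abstract (compression rate and target position) on a synthetic data frame. The column names, factor levels and data are hypothetical; this is a generic statsmodels illustration, not the study's analysis script.

```python
# Hypothetical sketch of a two-way repeated-measures ANOVA on localisation
# error, with compression rate and target position as within-subject factors.
# Synthetic data for illustration only; not the study's actual analysis.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
subjects = range(1, 19)                      # n = 18 participants
compressions = ["64kbps", "128kbps", "320kbps"]
positions = ["narrow", "wide"]

rows = []
for s in subjects:
    for c in compressions:
        for p in positions:
            base = 6.0 if p == "narrow" else 4.0          # degrees of error
            rows.append({"subject": s, "compression": c, "position": p,
                         "error_deg": base + rng.normal(0, 1.5)})
df = pd.DataFrame(rows)

# One observation per subject per cell, as AnovaRM requires
model = AnovaRM(df, depvar="error_deg", subject="subject",
                within=["compression", "position"])
print(model.fit())
```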
