  • 1.
    Bjurling, Johan
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Timing in piano music: Testing a model of melody lead (2008). In: Proc. of the 10th International Conference on Music Perception and Cognition, Sapporo, Japan, 2008. Conference paper (Refereed)
  • 2. Bolíbar, Jordi
    et al.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Sound feedback for the optimization of performance in running (2012). In: TMH-QPSR special issue: Proceedings of SMC Sweden 2012 Sound and Music Computing, Understanding and Practicing in Sweden, ISSN 1104-5787, Vol. 52, no 1, p. 39-40. Article in journal (Refereed)
  • 3.
    Bresin, R.
    et al.
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    Friberg, Anders
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    Synthesis and decoding of emotionally expressive music performance (1999). In: Proceedings of the IEEE 1999 Systems, Man and Cybernetics Conference - SMC’99, 1999, Vol. 4, p. 317-322. Conference paper (Refereed)
    Abstract [en]

    A recently developed application of Director Musices (DM) is presented. The DM is a rule-based software tool for automatic music performance developed at the Speech, Music and Hearing Department at the Royal Institute of Technology, Stockholm. It is written in Common Lisp and is available for both Windows and Macintosh. It is demonstrated that particular combinations of rules defined in the DM can be used for synthesizing performances that differ in emotional quality. Different performances of two pieces of music were synthesized so as to elicit listeners’ associations with six different emotions (fear, anger, happiness, sadness, tenderness, and solemnity). Performance rules and their parameters were selected so as to match previous findings about emotional aspects of music performance. Variations of the performance variables IOI (Inter-Onset Interval), OOI (Offset-Onset Interval) and L (Sound Level) are presented for each rule setup. In a forced-choice listening test, 20 listeners were asked to classify the performances with respect to emotions. The results showed that the listeners, with very few exceptions, recognized the intended emotions correctly. This shows that a proper selection of rules and rule parameters in DM can indeed produce a wide variety of meaningful, emotional performances, even extending the scope of the original rule definition.
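    The abstract describes emotion-specific rule setups as coordinated deviations of IOI, OOI and sound level. Director Musices itself is a Common Lisp system; the following Python sketch only illustrates the general idea of applying such a rule setup to a note list, and all numeric parameter values are hypothetical rather than taken from the paper.

```python
# Illustrative sketch only: emotion "rule setups" as multiplicative deviations
# applied to nominal note parameters (values are hypothetical, not from the paper).

from dataclasses import dataclass

@dataclass
class Note:
    ioi_ms: float     # nominal inter-onset interval
    ooi_ms: float     # nominal offset-onset interval (articulation gap)
    level_db: float   # nominal sound level

# Hypothetical rule setups: scale factors for IOI/OOI and an offset for level.
RULE_SETUPS = {
    "happiness": {"ioi": 0.9, "ooi": 1.5, "level": +2.0},   # faster, more detached, louder
    "sadness":   {"ioi": 1.2, "ooi": 0.3, "level": -4.0},   # slower, more legato, softer
}

def apply_setup(notes, emotion):
    """Apply one rule setup to every note of a melody."""
    s = RULE_SETUPS[emotion]
    return [Note(n.ioi_ms * s["ioi"], n.ooi_ms * s["ooi"], n.level_db + s["level"])
            for n in notes]

melody = [Note(500, 50, 70), Note(500, 50, 72), Note(1000, 100, 68)]
print(apply_setup(melody, "sadness"))
```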

  • 4.
    Bresin, Roberto
    KTH, Superseded Departments, Speech, Music and Hearing.
    Artificial neural networks based models for automatic performance of musical scores (1998). In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 27, no 3, p. 239-270. Article in journal (Refereed)
    Abstract [en]

    This article briefly summarises the author's research on automatic performance, started at CSC (Centro di Sonologia Computazionale, University of Padua) and continued at TMH-KTH (Department of Speech, Music and Hearing at the Royal Institute of Technology, Stockholm). The focus is on the evolution of the architecture of an artificial neural network (ANN) framework, from the first simple model, able to learn the KTH performance rules, to the final one, which accurately simulates the style of a real pianist, including time and loudness deviations. The task was to analyse and synthesise the performance process of a professional pianist playing on a Disklavier. An automatic analysis extracts all performance parameters of the pianist, starting from the KTH rule system. The system possesses good generalisation properties: applying the same ANN, it is possible to perform different scores in the performing style used for training the networks. Brief descriptions of the program Melodia and of the two Java applets Japer and Jalisper are given in the Appendix. In Melodia, developed at the CSC, the user can run either rules or ANNs and study their different effects. Japer and Jalisper, developed at TMH, implement in real time on the web the performance rules developed at TMH plus new features achieved by using ANNs.

  • 5.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Real-time visualization of musical expression (2004). In: Proceedings of Network of Excellence HUMAINE Workshop "From Signals to Signs of Emotion and Vice Versa", Santorini, Greece, Institute of Communication and Computer Systems, National Technical University of Athens, 2004, p. 19-23. Conference paper (Refereed)
    Abstract [en]

    A system for real-time feedback of expressive music performance is presented. The feedback is provided by using a graphical interface where acoustic cues are presented in an intuitive fashion. The graphical interface presents on the computer screen a three-dimensional object with continuously changing shape, size, position, and colour. Some of the acoustic cues were associated with the shape of the object, others with its position. For instance, articulation was associated with shape: staccato corresponded to an angular shape and legato to a rounded shape. The emotional expression resulting from the combination of cues was mapped in terms of the colour of the object (e.g., sadness/blue). To determine which colours were most suitable for each emotion, a test was run. Subjects rated how well each of 8 colours corresponds to each of 12 music performances expressing different emotions.
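    The abstract associates articulation with the object's shape and the overall emotion with its colour. The sketch below only illustrates that kind of cue-to-visual mapping; the numeric threshold and the colours other than sadness/blue are hypothetical and not taken from the paper.

```python
# Minimal sketch of a cue-to-visual mapping in the spirit of the abstract.
# Only the sadness -> blue association comes from the text; the other colours
# and the articulation threshold are invented for illustration.

def visual_feedback(articulation_ratio, emotion):
    """articulation_ratio: pause/IOI ratio per note; 0 = legato, larger = more staccato."""
    shape = "angular" if articulation_ratio > 0.3 else "rounded"  # hypothetical threshold
    colour = {"sadness": "blue", "happiness": "yellow", "anger": "red"}.get(emotion, "grey")
    return {"shape": shape, "colour": colour}

print(visual_feedback(0.45, "sadness"))   # {'shape': 'angular', 'colour': 'blue'}
```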

  • 6.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    SMC Sweden 2014: Sound and Music Computing: Bridging science, art, and industry (2014). Conference proceedings (editor) (Refereed)
  • 7.
    Bresin, Roberto
    KTH, Superseded Departments, Speech, Music and Hearing.
    Virtual virtuosity (2000). Doctoral thesis, comprehensive summary (Other scientific)
    Abstract [en]

    This dissertation presents research in the field of automatic music performance with a special focus on piano.

    A system is proposed for automatic music performance, based on artificial neural networks (ANNs). A complex, ecological-predictive ANN was designed that listens to the last played note, predicts the performance of the next note, looks three notes ahead in the score, and plays the current tone. This system was able to learn a professional pianist's performance style at the structural micro-level. In a listening test, performances by the ANN were judged clearly better than deadpan performances and slightly better than performances obtained with generative rules.

    The behavior of an ANN was compared with that of a symbolic rule system with respect to musical punctuation at the micro-level. The rule system mostly gave better results, but some segmentation principles of an expert musician were only generalized by the ANN.

    Measurements of professional pianists' performances revealed interesting properties in the articulation of notes marked staccato and legato in the score. Performances were recorded on a grand piano connected to a computer. Staccato was realized by a micropause of about 60% of the inter-onset-interval (IOI), while legato was realized by keeping two keys depressed simultaneously; the relative key overlap time was dependent on IOI: the larger the IOI, the shorter the relative overlap. The magnitudes of these effects changed with the pianists' coloring of their performances and with the pitch contour. These regularities were modeled in a set of rules for articulation in automatic piano music performance.

    Emotional coloring of performances was realized by means of macro-rules implemented in the Director Musices performance system. These macro-rules are groups of rules that were combined such that they reflected previous observations on musical expression of specific emotions. Six emotions were simulated. A listening test revealed that listeners were able to recognize the intended emotional colorings.

    In addition, some possible future applications are discussed in the fields of automatic music performance, music education, automatic music analysis, virtual reality and sound synthesis.

  • 8.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    What is the color of that music performance? (2005). In: Proceedings of the International Computer Music Conference - ICMC 2005, Barcelona, 2005, p. 367-370. Conference paper (Refereed)
    Abstract [en]

    The representation of expressivity in music is still a fairly unexplored field. Alternative ways of representing musical information are necessary when providing feedback on emotion expression in music, such as in real-time tools for music education or in the display of large music databases. One possible solution could be a graphical non-verbal representation of expressivity in music performance using color as an index of emotion. To determine which colors are most suitable for an emotional expression, a test was run. Subjects rated how well each of 8 colors and their 3 nuances corresponds to each of 12 music performances expressing different emotions. Performances were played by professional musicians on 3 instruments: saxophone, guitar, and piano. Results show that subjects associated different hues with different emotions. Also, dark colors were associated with music in minor tonality and light colors with music in major tonality. The correspondence between spectrum energy and color hue is discussed in a preliminary fashion.

  • 9.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Askenfelt, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Hansen, Kjetil
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Ternström, Sten
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Sound and Music Computing at KTH (2012). In: Trita-TMH, ISSN 1104-5787, Vol. 52, no 1, p. 33-35. Article in journal (Other academic)
    Abstract [en]

    The SMC Sound and Music Computing group at KTH (formerly the Music Acoustics group) is part of the Department of Speech, Music and Hearing, School of Computer Science and Communication. In this short report we present the current status of the group, mainly focusing on its research.

  • 10.
    Bresin, Roberto
    et al.
    KTH, Superseded Departments, Speech, Music and Hearing.
    Battel, Giovanni Umberto
    Articulation strategies in expressive piano performance - Analysis of legato, staccato, and repeated notes in performances of the Andante movement of Mozart's Sonata in G major (K 545) (2000). In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 29, no 3, p. 211-224. Article in journal (Refereed)
    Abstract [en]

    Articulation strategies applied by pianists in expressive performances of the same score are analysed. Measurements of key overlap time and its relation to the inter-onset-interval are collected for notes marked legato and staccato in the first sixteen bars of the Andante movement of W.A. Mozart's Piano Sonata in G major, K 545. Five pianists played the piece nine times. First, they played in a way that they considered "optimal". In the remaining eight performances they were asked to represent different expressive characters, as specified in terms of different adjectives. Legato, staccato, and repeated-note articulation applied by the right hand were examined by means of statistical analysis. Although the results varied considerably between pianists, some trends could be observed. The pianists generally used similar strategies in the renderings intended to represent different expressive characters. Legato was played with a key overlap ratio that depended on the inter-onset-interval (IOI). Staccato tones had an approximate duration of 40% of the IOI. Repeated notes were played with a duration of about 60% of the IOI. The results seem useful as a basis for articulation rules in grammars for automatic piano performance.
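    The duration figures quoted in this abstract (staccato tones lasting roughly 40% of the IOI, repeated notes about 60%, legato overlap depending on IOI) translate naturally into simple articulation rules. The sketch below is only an illustration of that reading; the exact legato overlap function is not given in the abstract, so the decreasing linear form used here is an assumption.

```python
# Illustrative articulation rules derived from the figures quoted in the abstract.
# Durations are expressed relative to the inter-onset interval (IOI).

def tone_duration_ms(ioi_ms, articulation):
    """Return a key-down duration for one tone, given its IOI and articulation mark."""
    if articulation == "staccato":
        return 0.40 * ioi_ms                 # ~40% of IOI (from the abstract)
    if articulation == "repeated":
        return 0.60 * ioi_ms                 # ~60% of IOI (from the abstract)
    if articulation == "legato":
        # The abstract only says the key overlap ratio depends on IOI;
        # this decreasing linear overlap is a hypothetical stand-in.
        overlap_ratio = max(0.05, 0.20 - 0.0001 * ioi_ms)
        return (1.0 + overlap_ratio) * ioi_ms
    return ioi_ms                            # default: nominal value

for mark in ("staccato", "repeated", "legato"):
    print(mark, round(tone_duration_ms(500, mark), 1))
```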

  • 11.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    de Witt, Anna
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Papetti, Stefano
    University of Verona.
    Civolani, Marco
    University of Verona.
    Fontana, Federico
    University of Verona.
    Expressive sonification of footstep sounds (2010). In: Proceedings of ISon 2010: 3rd Interactive Sonification Workshop / [ed] Bresin, Roberto; Hermann, Thomas; Hunt, Andy, Stockholm, Sweden: KTH Royal Institute of Technology, 2010, p. 51-54. Conference paper (Refereed)
    Abstract [en]

    In this study we present the evaluation of a model for the interactive sonification of footsteps. The sonification is achieved by means of specially designed sensor-equipped shoes which control the expressive parameters of novel sound synthesis models capable of reproducing continuous auditory feedback for walking. In a previous study, sounds corresponding to different grounds were associated with different emotions and with gender. In this study, we used an interactive sonification actuated by the sensor-equipped shoes for providing auditory feedback to walkers. In an experiment we asked subjects to walk (using the sensor-equipped shoes) with four different emotional intentions (happy, sad, aggressive, tender), and for each emotion we manipulated the ground texture sound four times (wood panels, linoleum, muddy ground, and iced snow). Preliminary results show that walkers used a more active walking style (faster pace) when the sound of the walking surface was characterized by a higher spectral centroid (e.g. iced snow), and a less active style (slower pace) when the spectral centroid was low (e.g. muddy ground). Harder texture sounds led to more aggressive walking patterns, while softer ones led to more tender and sad walking styles.
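    The reported effect is framed in terms of the spectral centroid of the ground-texture sound. As a reference point, a spectral centroid can be computed from a short audio frame as the magnitude-weighted mean frequency; the snippet below is a generic sketch of that standard computation, not code from the study.

```python
import numpy as np

def spectral_centroid(frame, sample_rate):
    """Magnitude-weighted mean frequency of one audio frame (standard definition)."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    if spectrum.sum() == 0:
        return 0.0
    return float((freqs * spectrum).sum() / spectrum.sum())

# Toy example: a bright (high-frequency) texture has a higher centroid than a dull one.
sr = 44100
t = np.arange(sr // 10) / sr
bright = np.sin(2 * np.pi * 6000 * t)
dull = np.sin(2 * np.pi * 200 * t)
print(spectral_centroid(bright, sr), spectral_centroid(dull, sr))
```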

  • 12.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Delle Monache, Stefano
    University of Verona.
    Fontana, Federico
    University of Verona.
    Papetti, Stefano
    University of Verona.
    Polotti, Pietro
    University of Verona.
    Visell, Yon
    McGill University.
    Auditory feedback through continuous control of crumpling sound synthesis (2008). In: Proceedings of Sonic Interaction Design: Sound, Information and Experience. A CHI 2008 Workshop organized by COST Action IC0601, IUAV University of Venice, 2008, p. 23-28. Conference paper (Refereed)
    Abstract [en]

    A realtime model for the synthesis of crumpling sounds is presented. By capturing the statistics of short sonic transients which give rise to crackling noise, it allows for a consistent description of a broad spectrum of audible physical processes which emerge in several everyday interaction contexts. The model drives a nonlinear impactor that sonifies every transient, and it can be parameterized depending on the physical attributes of the crumpling material. Three different scenarios are described, respectively simulating the foot interaction with aggregate ground materials, augmenting a dining scenario, and affecting the emotional content of a footstep sequence. Taken altogether, they emphasize the potential generalizability of the model to situations in which a precise control of auditory feedback can significantly increase the enactivity and ecological validity of an interface.
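    Crackling noise of this kind is commonly modelled as a stochastic sequence of micro-impacts whose energies follow a heavy-tailed distribution. The sketch below generates such a sequence of transient events; it mimics the general idea described in the abstract rather than the actual synthesis model, and all distribution parameters are hypothetical.

```python
import random

def crumpling_events(duration_s, rate_hz=80.0, energy_exponent=1.5, seed=0):
    """Generate (time, energy) pairs for micro-impact transients.

    Event times follow a Poisson process; energies follow a power-law
    (Pareto-like) distribution, a common model for crackling noise.
    Each event would then trigger a nonlinear impact sound in a synthesizer.
    """
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(rate_hz)                # waiting time between transients
        if t >= duration_s:
            break
        energy = rng.paretovariate(energy_exponent)  # heavy-tailed impact energy
        events.append((round(t, 4), round(energy, 3)))
    return events

print(crumpling_events(0.1)[:5])
```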

  • 13.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Falkenberg Hansen, Kjetil
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Månsson, Lisa
    Tardat, Bruno
    Musikcyklarna/Music bikes: An installation for enabling children to investigate the relationship between expressive music performance and body motion (2014). In: Proceedings of the Sound and Music Computing Sweden Conference 2014, KTH Royal Institute of Technology, 2014, p. 1-2. Conference paper (Refereed)
  • 14.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Favero, Federico
    KTH, School of Architecture and the Built Environment (ABE).
    Annersten, Lars
    Musikverket.
    Berner, David
    Musikverket.
    Morreale, Fabio
    Queen Mary University of London.
    SOUND FOREST/LJUDSKOGEN: A LARGE-SCALE STRING-BASED INTERACTIVE MUSICAL INSTRUMENT (2016). In: Sound and Music Computing 2016, SMC Sound&Music Computing Network, 2016, p. 79-84. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a string-based, interactive, large-scale installation for a new museum dedicated to performing arts, Scenkonstmuseet, which will be inaugurated in 2017 in Stockholm, Sweden. The installation will occupy an entire room that measures 10x5 meters. We aim to create a digital musical instrument (DMI) that facilitates intuitive musical interaction, thereby enabling visitors to quickly start creating music either alone or together. The interface should be able to serve as a pedagogical tool; visitors should be able to learn about concepts related to music and music making by interacting with the DMI. Since the lifespan of the installation will be approximately five years, one main concern is to create an experience that will encourage visitors to return to the museum for continued instrument exploration. In other words, the DMI should be designed to facilitate long-term engagement. Finally, an important aspect in the design of the installation is that the DMI should be accessible and provide a rich experience for all museum visitors, regardless of age or abilities.

  • 15.
    Bresin, Roberto
    et al.
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    Friberg, Anders
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    A multimedia environment for interactive music performance (1997). In: TMH-QPSR, Vol. 38, no 2-3, p. 029-032. Article in journal (Other academic)
    Abstract [en]

    We propose a music performance tool based on the Java programming language. This software runs in any Java applet viewer (i.e. a WWW browser) and interacts with the local MIDI equipment by means of a multi-task software module for MIDI applications (MidiShare). Two main ideas are at the basis of our project: one is to realise an easy, intuitive, hardware- and software-independent tool for performance, and the other is to achieve an easier development of the tool itself. At the moment there are two projects under development: a system based only on a Java applet, called Japer (Java performer), and a hybrid system based on a Java user interface and a Lisp kernel for the development of the performance tools. In this paper, the first of the two projects is presented.

  • 16.
    Bresin, Roberto
    et al.
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing. KTH, Superseded Departments (pre-2005), Speech Transmission and Music Acoustics.
    Friberg, Anders
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    A multimedia environment for interactive music performance (1997). In: Proceedings of KANSEI - The Technology of Emotion, AIMI International Workshop, 1997, p. 64-67. Conference paper (Refereed)
  • 17.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Emotion rendering in music: Range and characteristic values of seven musical variables (2011). In: Cortex, ISSN 0010-9452, E-ISSN 1973-8102, Vol. 47, no 9, p. 1068-1081. Article in journal (Refereed)
    Abstract [en]

    Many studies on the synthesis of emotional expression in music performance have focused on the effect of individual performance variables on perceived emotional quality by making a systematic variation of variables. However, most of the studies have used a predetermined small number of levels for each variable, and the selection of these levels has often been done arbitrarily. The main aim of this research work is to improve upon existing methodologies by taking a synthesis approach. In a production experiment, 20 performers were asked to manipulate the values of 7 musical variables simultaneously (tempo, sound level, articulation, phrasing, register, timbre, and attack speed) for communicating 5 different emotional expressions (neutral, happy, scary, peaceful, sad) for each of 4 scores. The scores were compositions communicating four different emotions (happiness, sadness, fear, calmness). Emotional expressions and music scores were presented in combination and in random order to each performer, for a total of 5 x 4 stimuli. The experiment allowed for a systematic investigation of the interaction between the emotion of each score and the emotions intended by the performers. A two-way repeated-measures analysis of variance (ANOVA) with factors emotion and score was conducted on the participants' values separately for each of the seven musical factors. There are two main results. The first is that the musical variables were manipulated in the same direction as reported in previous research on emotionally expressive music performance. The second is the identification, for each of the five emotions, of the mean values and ranges of the five musical variables tempo, sound level, articulation, register, and instrument. These values turned out to be independent of the particular score and its emotion. The results presented in this study therefore allow for both the design and control of emotionally expressive computerized musical stimuli that are more ecologically valid than stimuli without performance variations.

  • 18.
    Bresin, Roberto
    et al.
    KTH, Superseded Departments, Speech, Music and Hearing.
    Friberg, Anders
    KTH, Superseded Departments, Speech, Music and Hearing.
    Emotional coloring of computer controlled music performance (2000). In: Computer music journal, ISSN 0148-9267, E-ISSN 1531-5169, Vol. 24, no 4, p. 44-63. Article in journal (Refereed)
  • 19.
    Bresin, Roberto
    et al.
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    Friberg, Anders
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    Emotional coloring of computer controlled music performance (2000). In: Computer music journal, ISSN 0148-9267, E-ISSN 1531-5169, Vol. 24, no 4, p. 44-61. Article in journal (Refereed)
  • 20.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Evaluation of computer systems for expressive music performance (2013). In: Guide to Computing for Expressive Music Performance / [ed] Kirke, Alexis; Miranda, Eduardo R., Springer, 2013, p. 181-203. Chapter in book (Refereed)
    Abstract [en]

    In this chapter, we review and summarize different methods for the evaluation of CSEMPs. The main categories of evaluation methods are (1) comparisons with measurements from real performances, (2) listening experiments, and (3) production experiments. Listening experiments can be of different types. For example, in some experiments, subjects may be asked to rate a particular expressive characteristic (such as the emotion conveyed or the overall expression) or to rate the effect of a particular acoustic cue. In production experiments, subjects actively manipulate system parameters to achieve a target performance. Measures for estimating the difference between performances are discussed in relation to the objectives of the model and the objectives of the evaluation. There is also a section presenting and discussing Rencon (the Performance Rendering Contest), a contest for comparing expressive musical performances of the same score generated by different CSEMPs. Practical examples from previous works are presented, commented on, and analysed.

  • 21.
    Bresin, Roberto
    et al.
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    Friberg, Anders
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    Expressive musical icons (2001). In: Proceedings of the International Conference on Auditory Display - ICAD 2001, 2001, p. 141-143. Conference paper (Refereed)
    Abstract [en]

    Recent research on the analysis and synthesis of music performance has resulted in tools for the control of the expressive content in automatic music performance [1]. These results can be relevant for applications other than the performance of music by a computer. In this work we present how the techniques for enhancing the expressive character of music performance can also be used in the design of sound logos, in the control of synthesis algorithms, and for achieving better ringing tones in mobile phones.

  • 22.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Influence of Acoustic Cues on the Expressive Performance of Music (2008). In: Proceedings of the 10th International Conference on Music Perception and Cognition, Sapporo, Japan, 2008. Conference paper (Refereed)
  • 23.
    Bresin, Roberto
    et al.
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    Friberg, Anders
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    Rule-based emotional colouring of music performance (2000). In: Proceedings of the International Computer Music Conference - ICMC 2000 / [ed] Zannos, I., San Francisco: ICMA, 2000, p. 364-367. Conference paper (Refereed)
  • 24.
    Bresin, Roberto
    et al.
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    Friberg, Anders
    KTH, Superseded Departments (pre-2005), Speech, Music and Hearing.
    Software tools for musical expression (2000). In: Proceedings of the International Computer Music Conference 2000 / [ed] Zannos, Ioannis, San Francisco, USA: Computer Music Association, 2000, p. 499-502. Conference paper (Refereed)
  • 25.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Hansen, Kjetil Falkenberg
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Dahl, Sofia
    The Radio Baton as configurable musical instrument and controller (2003). In: Proc. Stockholm Music Acoustics Conference, 2003, Vol. 2, p. 689-691. Conference paper (Refereed)
    Abstract [en]

    About 40 units of the Max Mathews radio baton (RB) have been produced to date. It has usually been applied as an orchestra conducting system, as an interactive music composition controller using typical percussionist gestures, and as a controller for sound synthesis models. In the framework of the EU-funded Sounding Object project, the RB has found new application scenarios. Three applications were based on this controller. This was achieved by changing the gesture controls: instead of the default batons, a new radio sender that fits the fingertips was developed. This new radio sender allows musicians to interact using hand gestures, and it can also fit different devices. A Pd model of DJ scratching techniques (submitted to SMAC03) was controlled with the RB and the fingertip radio sender. This controller allows DJs direct control of sampled sounds while maintaining hand gestures similar to those used on vinyl. The sound model of a bodhran (submitted to SMAC03) was controlled with a traditional playing approach: the RB was controlled with a traditional bodhran double beater with one fingertip radio sender at each end. This allowed detection of the beater position on the RB surface, the surface corresponding to the membrane in the sound model. In a third application the fingertip controller was used to move a virtual ball rolling along the elastic surface of a box placed over the surface of the RB. The DJ console and the virtual bodhran were played in concerts.

  • 26.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Hansen, Kjetil Falkenberg
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Dahl, Sofia
    Rath, Mathias
    Marshall, Mark
    Moynihan, Breege
    Devices for manipulation and control of sounding objects: the Vodhran and the Invisiball (2003). In: The Sounding Object / [ed] Rocchesso, Davide; Fontana, Federico, Mondo Estremo, 2003, p. 271-295. Chapter in book (Other academic)
  • 27.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Hansen, Kjetil Falkenberg
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Karjalainen, Matti
    Helsinki University of Technology.
    Mäki-Patola, Teemu
    Helsinki University of Technology.
    Kanerva, Aki
    Helsinki University of Technology.
    Huovilainen, Antti
    Helsinki University of Technology.
    Jordá, Sergi
    University Pompeu Fabra.
    Kaltenbrunner, Martin
    University Pompeu Fabra.
    Geiger, Günter
    University Pompeu Fabra.
    Bencina, Ross
    University Pompeu Fabra.
    de Götzen, Amalia
    University of Padua.
    Rocchesso, Davide
    IUAV University of Venice.
    Controlling sound production (2008). In: Sound to Sense, Sense to Sound: A state of the art in Sound and Music Computing / [ed] Polotti, Pietro; Rocchesso, Davide, Berlin: Logos Verlag, 2008, p. 447-486. Chapter in book (Refereed)
  • 28.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Hermann, T.
    Hunt, A.
    Interactive sonification (2012). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 5, no 3-4, p. 85-86. Article in journal (Refereed)
    Abstract [en]

    In October 2010, Roberto Bresin, Thomas Hermann and Andy Hunt launched a call for papers for a special issue on Interactive Sonification of the Journal on Multimodal User Interfaces (JMUI). The call was published in eight major mailing lists in the field of Sound and Music Computing and on related websites. Twenty manuscripts were submitted for review, and eleven of them have been accepted for publication after further improvements. Three of the papers are further developments of works presented at ISon 2010, the Interactive Sonification Workshop. Most of the papers went through a three-stage review process.

    The papers give an interesting overview of the field of Interactive Sonification as it is today. Their topics include the sonification of data exploration and of motion, a new sound synthesis model suitable for interactive sonification applications, a study on perception in the everyday periphery of attention, and the proposal of a conceptual framework for interactive sonification. 

  • 29.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Hermann, Thomas
    Bielefeld University, Bielefeld, Germany.
    Hunt, Andy
    University of York, York, UK.
    Proceedings of ISon 2010 - Interactive Sonification Workshop: Human Interaction with Auditory Displays (2010). Conference proceedings (editor) (Other academic)
    Abstract [en]

    Introduction

    These are the proceedings of the ISon 2010 meeting, which is the 3rd international Interactive Sonification Workshop. The first ISon workshop was held in Bielefeld (Germany) in 2004, and a second one was held in York (UK) in 2007. These meetings:

    • focus on the link between auditory displays and human‐computer interaction
    • bring together experts in sonification to exchange ideas and work‐in‐progress
    • strengthen networking in sonification research

    High quality work is assured by a peer‐reviewing process, and the successful papers were presented at the conference and are published here.

    ISon 2010 was supported by COST IC0601 Action on Sonic Interaction Design (SID) (http://www.cost‐sid.org/).

     

    About Interactive Sonification

    Sonification & Auditory Displays are increasingly becoming an established technology for exploring data, monitoring complex processes, or assisting exploration and navigation of data spaces. Sonification addresses the auditory sense by transforming data into sound, allowing the human user to get valuable information from data by using their natural listening skills.

    The main differences between sound displays and visual displays are that sound can:

    • Represent frequency responses in an instant (as timbral characteristics)
    • Represent changes over time, naturally
    • Allow microstructure to be perceived
    • Rapidly portray large amounts of data
    • Alert listener to events outside the current visual focus
    • Holistically bring together many channels of information

    Auditory displays typically evolve over time since sound is inherently a temporal phenomenon. Interaction thus becomes an integral part of the process in order to select, manipulate, excite or control the display, and this has implications for the interface between humans and computers. In recent years it has become clear that there is an important need for research to address the interaction with auditory displays more explicitly. Interactive Sonification is the specialized research topic concerned with the use of sound to portray data, but where there is a human being at the heart of an interactive control loop. Specifically it deals with:

    • interfaces between humans and auditory displays
    • mapping strategies and models for creating coherency between action and reaction (e.g. acoustic feedback, but also combined with haptic or visual feedback)
    • perceptual aspects of the display (how to relate actions and sound, e.g. cross‐modal effects, importance of synchronisation)
    • applications of Interactive Sonification
    • evaluation of performance, usability and multi‐modal interactive systems including auditory feedback

    Although ISon shines a spotlight on the particular situations where there is real‐time interaction with sonification systems, the usual community for exploring all aspects of auditory display is ICAD (http://www.icad.org/).

     

    Contents

    These proceedings contain the conference versions of all contributions to the 3rd International interactive Sonification Workshop. Where papers have audio or audiovisual examples, these are listed in the paper and will help to illustrate the multimedia content more clearly.

    We very much hope that the proceedings provide an inspiration for your work and extend your perspective on the new emerging research field of interactive sonification.

    Roberto Bresin, Thomas Hermann, Andy Hunt, ISon 2010 Organisers

  • 30.
    Bresin, Roberto
    et al.
    KTH, Superseded Departments, Speech, Music and Hearing.
    Widmer, Gerhard
    Production of staccato articulation in Mozart sonatas played on a grand piano: Preliminary results (2000). In: Speech Music and Hearing Quarterly Progress and Status Report, ISSN 1104-5787, Vol. 41, no 4, p. 001-006. Article in journal (Refereed)
  • 31.
    Burger, Birgitta
    et al.
    Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music, University of Jyväskylä.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Communication of Musical Expression by Means of Mobile Robot Gestures (2010). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 3, no 1, p. 109-118. Article in journal (Refereed)
    Abstract [en]

    We developed a robotic system that can behave in an emotional way. A simple 3-wheeled robot with limited degrees of freedom was designed. Our goal was to make the robot display emotions in music performance by performing expressive movements. These movements have been compiled and programmed based on literature about emotion in music, musicians' movements in expressive performances, and object shapes that convey different emotional intentions. The emotions happiness, anger, and sadness have been implemented in this way. General results from behavioral experiments show that emotional intentions can be synthesized, displayed and communicated by an artificial creature, even in constrained circumstances.

  • 32.
    Burger, Birgitta
    et al.
    University of Cologne, Dept. of Systematic Musicology, Germany.
    Bresin, Roberto
    KTH, Superseded Departments, Speech, Music and Hearing.
    Displaying expression in musical performance by means of a mobile robot (2007). In: Affective Computing And Intelligent Interaction, Proceedings, 2007, Vol. 4738, p. 753-754. Conference paper (Refereed)
  • 33. Camurri, A.
    et al.
    Bevilacqua, F.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Maestre, E.
    Penttinen, H.
    Seppänen, J.
    Välimäki, V.
    Volpe, G.
    Warusfel, O.
    Embodied music listening and making in context-aware mobile applications: the EU-ICT SAME Project (2009). Conference paper (Refereed)
  • 34.
    Camurri, Antonio
    et al.
    University of Genova.
    Volpe, Gualtiero
    University of Genova.
    Vinet, Hugues
    IRCAM, Paris.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Maestre, Esteban
    Universitat Pompeu Fabra, Barcelona.
    Llop, Jordi
    Universitat Pompeu Fabra, Barcelona.
    Kleimola, Jari
    Oksanen, Sami
    Välimäki, Vesa
    Seppanen, Jarno
    User-centric context-aware mobile applications for embodied music listening (2009). In: User Centric Media / [ed] Akan, Ozgur; Bellavista, Paolo; Cao, Jiannong; Dressler, Falko; Ferrari, Domenico; Gerla, Mario; Kobayashi, Hisashi; Palazzo, Sergio; Sahni, Sartaj; Shen, Xuemin (Sherman); Stan, Mircea; Xiaohua, Jia; Zomaya, Albert; Coulson, Geoffrey; Daras, Petros; Ibarra, Oscar Mayora, Heidelberg: Springer Berlin, 2009, p. 21-30. Chapter in book (Refereed)
    Abstract [en]

    This paper surveys a collection of sample applications for networked user-centric context-aware embodied music listening. The applications have been designed and developed in the framework of the EU-ICT Project SAME (www.sameproject.eu) and were presented at the Agora Festival (IRCAM, Paris, France) in June 2009. All of them address in different ways the concept of embodied, active listening to music, i.e., enabling listeners to interactively operate in real time on the music content by means of their movements and gestures as captured by mobile devices. On the occasion of the Agora Festival, the applications were also evaluated by both expert and non-expert users.

  • 35.
    Castellano, Ginevra
    et al.
    InfoMus Lab, DIST, University of Genova.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Camurri, Antonio
    InfoMus Lab, DIST, University of Genova.
    Volpe, Gualtiero
    InfoMus Lab, DIST, University of Genova.
    Expressive Control of Music and Visual Media by Full-Body Movement (2007). In: Proceedings of the 7th International Conference on New Interfaces for Musical Expression, NIME '07, New York, NY, USA: ACM Press, 2007, p. 390-391. Conference paper (Refereed)
    Abstract [en]

    In this paper we describe a system which allows users to use their full body for controlling in real time the generation of expressive audio-visual feedback. The system extracts expressive motion features from the user's full-body movements and gestures. The values of these motion features are mapped both onto acoustic parameters for the real-time expressive rendering of a piece of music, and onto real-time generated visual feedback projected on a screen in front of the user.

  • 36. Castellano, Ginevra
    et al.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Camurri, Antonio
    Volpe, Gualtiero
    User-Centered Control of Audio and Visual Expressive Feedback by Full-Body Movements (2007). In: Affective Computing and Intelligent Interaction / [ed] Paiva, Ana; Prada, Rui; Picard, Rosalind W., Berlin / Heidelberg: Springer Berlin/Heidelberg, 2007, p. 501-510. Chapter in book (Refereed)
    Abstract [en]

    In this paper we describe a system allowing users to express themselves through their full-body movement and gesture and to control in real time the generation of audio-visual feedback. The system analyses in real time the user's full-body movement and gesture, extracts expressive motion features and maps the values of the expressive motion features onto real-time control of acoustic parameters for rendering a music performance. At the same time, visual feedback generated in real time, showing the users' coloured silhouette, is projected on a screen in front of them, with the colour depending on the emotion their movement communicates. Human movement analysis and visual feedback generation were done with the EyesWeb software platform and the music performance rendering with pDM. Evaluation tests were done with human participants to test the usability of the interface and the effectiveness of the design.

  • 37.
    Dahl, Sofia
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bevilacqua, Frédéric
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Clayton, Martin
    Leante, Laura
    Poggi, Isabella
    Rasamimanana, Nicolas
    Gestures in performance (2009). In: Musical Gestures: Sound, Movement, and Meaning / [ed] Godøy, Rolf Inge; Leman, Marc, New York: Routledge, 2009, p. 36-68. Chapter in book (Refereed)
    Abstract [en]

    We experience and understand the world, including music, through body movement: when we hear something, we are able to make sense of it by relating it to our body movements, or by forming an image in our minds of body movements. Musical Gestures is a collection of essays that explore the relationship between sound and movement. It takes an interdisciplinary approach to the fundamental issues of this subject, drawing on ideas, theories and methods from disciplines such as musicology, music perception, human movement science, cognitive psychology, and computer science.

  • 38.
    De Witt, Anna
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Sound design for affective interaction (2007). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Paiva, A; Prada, R; Picard, RW, 2007, Vol. 4738, p. 523-533. Conference paper (Refereed)
    Abstract [en]

    Different design approaches have contributed to what we see today as the prevalent design paradigm for Human-Computer Interaction, though they have mostly been applied to the visual aspect of interaction. In this paper we present a proposal for sound design strategies that can be used in applications involving affective interaction. For testing our approach we propose the sonification of the Affective Diary, a digital diary with a focus on emotions, affects, and the bodily experience of the user. We applied results from studies in music and emotion to sonic interaction design. This is one of the first attempts at introducing different physics-based models for the complete real-time sonification of an interactive user interface in portable devices.

  • 39.
    Dubus, Gaël
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    A Systematic Review of Mapping Strategies for the Sonification of Physical Quantities (2013). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 8, no 12, p. e82491. Article in journal (Refereed)
    Abstract [en]

    The field of sonification has progressed greatly over the past twenty years and currently constitutes an established area of research. This article aims at exploiting and organizing the knowledge accumulated in previous experimental studies to build a foundation for future sonification works. A systematic review of these studies may reveal trends in sonification design, and therefore support the development of design guidelines. To this end, we have reviewed and analyzed 179 scientific publications related to sonification of physical quantities. Using a bottom-up approach, we set up a list of conceptual dimensions belonging to both physical and auditory domains. Mappings used in the reviewed works were identified, forming a database of 495 entries. Frequency of use was analyzed among these conceptual dimensions as well as higher-level categories. Results confirm two hypotheses formulated in a preliminary study: pitch is by far the most used auditory dimension in sonification applications, and spatial auditory dimensions are almost exclusively used to sonify kinematic quantities. To detect successful as well as unsuccessful sonification strategies, assessment of mapping efficiency conducted in the reviewed works was considered. Results show that a proper evaluation of sonification mappings is performed only in a marginal proportion of publications. Additional aspects of the publication database were investigated: historical distribution of sonification works is presented, projects are classified according to their primary function, and the sonic material used in the auditory display is discussed. Finally, a mapping-based approach for characterizing sonification is proposed.
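    The review's central finding, that pitch is by far the most used auditory dimension and that kinematic quantities are typically mapped to a small set of auditory dimensions, can be pictured as a simple parameter-mapping sonification. The following generic sketch maps a physical quantity onto pitch; it illustrates the mapping concept surveyed in the article and is not code from it, and the frequency range chosen is an arbitrary assumption.

```python
def map_to_pitch_hz(value, value_min, value_max, f_low=220.0, f_high=880.0):
    """Map a physical quantity onto pitch (parameter-mapping sonification).

    The value range [value_min, value_max] is mapped onto two octaves between
    f_low and f_high; exponential interpolation keeps equal value steps
    roughly equal in perceived pitch.
    """
    x = (value - value_min) / (value_max - value_min)
    x = min(max(x, 0.0), 1.0)                      # clamp to the valid range
    return f_low * (f_high / f_low) ** x

# Example: sonifying a velocity stream in m/s.
for v in (0.0, 1.5, 3.0):
    print(v, round(map_to_pitch_hz(v, 0.0, 3.0), 1), "Hz")
```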

  • 40.
    Dubus, Gaël
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Evaluation of a system for the sonification of elite rowing in an interactive context. Manuscript (preprint) (Other academic)
  • 41.
    Dubus, Gaël
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Exploration and evaluation of a system for interactive sonification of elite rowing (2015). In: Sports Engineering, ISSN 1369-7072, E-ISSN 1460-2687, Vol. 18, no 1, p. 29-41. Article in journal (Refereed)
    Abstract [en]

    In recent years, many solutions based on interactive sonification have been introduced for enhancing sport training. Few of them have been assessed in terms of efficiency or design. In a previous study, we performed a quantitative evaluation of four models for the sonification of elite rowing in a non-interactive context. For the present article, we conducted on-water experiments to investigate the effects of some of these models on two kinematic quantities: stroke rate and fluctuations in boat velocity. To this end, elite rowers interacted with discrete and continuous auditory displays in two experiments. A method for computing an average rowing cycle is introduced, together with a measure of velocity fluctuations. Participants answered questionnaires and took part in interviews to assess the degree of acceptance of the different models and to reveal common trends and individual preferences. No significant effect of sonification could be determined in either of the two experiments. The measure of velocity fluctuations was found to depend linearly on stroke rate. Participants provided feedback about their aesthetic preferences and functional needs during the interviews, allowing us to improve the models for future experiments to be conducted over longer periods.
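    The abstract mentions an average rowing cycle and a per-cycle measure of velocity fluctuations without detailing either. One plausible reading, sketched below purely as an assumption, is to segment the boat-velocity signal at stroke onsets, resample each cycle to a common length before averaging, and quantify fluctuation as the peak-to-peak variation within the averaged cycle; the definitions actually used in the paper may differ.

```python
import numpy as np

def average_cycle(velocity, stroke_onsets, n_points=100):
    """Resample each stroke cycle to n_points and average them (assumed method)."""
    cycles = []
    for start, end in zip(stroke_onsets[:-1], stroke_onsets[1:]):
        segment = velocity[start:end]
        resampled = np.interp(np.linspace(0, len(segment) - 1, n_points),
                              np.arange(len(segment)), segment)
        cycles.append(resampled)
    return np.mean(cycles, axis=0)

def velocity_fluctuation(avg_cycle):
    """Peak-to-peak velocity variation over the averaged cycle (assumed measure)."""
    return float(avg_cycle.max() - avg_cycle.min())

# Toy signal: sinusoidal velocity fluctuation around a mean boat speed of 5 m/s.
t = np.arange(0, 10, 0.01)
velocity = 5.0 + 0.4 * np.sin(2 * np.pi * 0.5 * t)        # one stroke every 2 s
onsets = [int(i / 0.01) for i in np.arange(0, 10, 2.0)]   # sample indices of stroke onsets
cycle = average_cycle(velocity, onsets)
print(round(velocity_fluctuation(cycle), 3))
```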

  • 42.
    Dubus, Gaël
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Sonification of physical quantities throughout history: a meta-study of previous mapping strategies (2011). In: Proceedings of the 17th International Conference on Auditory Display (ICAD 2011), Budapest, Hungary: OPAKFI Egyesület, 2011. Conference paper (Refereed)
    Abstract [en]

    We introduce a meta-study of previous sonification designs taking physical quantities as input data. The aim is to build a solid foundation for future sonification works, so that auditory display researchers can benefit from former studies and avoid starting from scratch when beginning new sonification projects. This work is at an early stage, and the objective of this paper is to introduce the methodology rather than to come to definitive conclusions. After a historical introduction, we explain how to collect a large number of articles and extract useful information about mapping strategies. Then, we present the physical quantities grouped according to conceptual dimensions, as well as the sound parameters used in sonification designs, and we summarize the current state of the study by listing the couplings extracted from the article database. A total of 54 articles have been examined for the present article. Finally, a preliminary analysis of the results is performed.

  • 43.
    Dubus, Gaël
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Sonification of sculler movements, development of preliminary methods (2010). In: Proceedings of ISon 2010, 3rd Interactive Sonification Workshop / [ed] Bresin, Roberto; Hermann, Thomas; Hunt, Andy, Stockholm, Sweden: KTH Royal Institute of Technology, 2010, p. 39-43. Conference paper (Refereed)
    Abstract [en]

    Sonification is a widening field of research with many possibilities for practical applications in various scientific domains. The rapid development of mobile technology capable of efficiently handling numerical information offers new opportunities for interactive auditory display. In this scope, the SONEA project (SONification of Elite Athletes) aims at improving performances of Olympic-level athletes by enhancing their training techniques, taking advantage of both the strong coupling between auditory and sensorimotor systems, and the efficient learning and memorizing abilities pertaining to the sense of hearing. An application to rowing is presented in this article. Rough estimates of the position and mean velocity of the craft are given by a GPS receiver embedded in a smartphone taken onboard. An external accelerometer provides boat acceleration data with higher temporal resolution. The development of preliminary methods for sonifying the collected data has been carried out under the specific constraints of a mobile device platform. The sonification is either performed by the phone as real-time feedback or by a computer using data files as input for an a posteriori analysis of the training. In addition, environmental sounds recorded during training can be synchronized with the sonification to perceive the coherence of the sequence of sounds throughout the rowing cycle. First results show that sonification using a parameter-mapping method over
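    The setup described here combines a low-rate GPS speed estimate with higher-rate accelerometer data. A minimal sketch of how such readings might be turned into a single parameter-mapping control stream (boat speed setting a base pitch, each acceleration sample modulating it) is shown below; the mapping choices and value ranges are assumptions for illustration, not the methods of the SONEA project.

```python
def control_stream(gps_speed_mps, accel_samples_ms2, f_base=220.0):
    """Turn one GPS speed reading and a burst of accelerometer samples into pitches.

    Boat speed sets a base frequency (one octave span over 0-6 m/s, assumed);
    each accelerometer sample modulates that base by up to +/- half an octave.
    """
    base = f_base * 2 ** (min(max(gps_speed_mps, 0.0), 6.0) / 6.0)
    pitches = []
    for a in accel_samples_ms2:
        a = min(max(a, -5.0), 5.0)              # clamp to an assumed +/- 5 m/s^2 range
        pitches.append(base * 2 ** (a / 10.0))  # +/- half an octave of modulation
    return pitches

print([round(p, 1) for p in control_stream(4.2, [-2.0, 0.0, 3.5])])
```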

  • 44.
    Dubus, Gaël
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Hansen, Kjetil Falkenberg
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    An overview of sound and music applications for Android available on the market (2012). In: Proceedings of the 9th Sound and Music Computing Conference, SMC 2012 / [ed] Serafin, Stefania, Sound and Music Computing Network, 2012, p. 541-546. Conference paper (Refereed)
    Abstract [en]

    This paper introduces a database of sound-based applications running on the Android mobile platform. The long-term objective is to provide a state of the art of mobile applications dealing with sound and music interaction. After exposing the method used to build up and maintain the database, using a non-hierarchical structure based on tags, we present a classification according to various categories of applications, and we conduct a preliminary analysis of the distribution of these categories, reflecting the current state of the database.
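    The database described here is organized with free-form tags rather than a fixed hierarchy. The sketch below shows a minimal structure of that kind, with invented example apps and tags (none of them taken from the paper); an inverted index then lets any tag be used as a category.

```python
from collections import defaultdict

# Hypothetical entries: app name -> set of descriptive tags (invented examples).
apps = {
    "SynthToy": {"synthesis", "touch-control"},
    "RunBeat":  {"sonification", "sport", "sensors"},
    "LoopPad":  {"sequencer", "touch-control"},
}

def index_by_tag(app_tags):
    """Build an inverted index so each tag lists the apps carrying it."""
    index = defaultdict(set)
    for app, tags in app_tags.items():
        for tag in tags:
            index[tag].add(app)
    return index

index = index_by_tag(apps)
print(sorted(index["touch-control"]))   # apps sharing a tag, no hierarchy needed
```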

  • 45.
    Eerola, Tuomas
    et al.
    Department of Music, University of Jyväskylä, Jyväskylä, Finland .
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Emotional expression in music: Contribution, linearity, and additivity of primary musical cues2013In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 4, p. 487-Article in journal (Refereed)
    Abstract [en]

    The aim of this study is to manipulate musical cues systematically in order to determine which aspects of music contribute to emotional expression, whether these cues operate in an additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues, and their ranked importance was established by multiple regression. The most important cue was mode, followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77-89% of the variance in ratings. Quadratic encoding of the cues led to minor but significant improvements of the models (0-8%). Finally, interactions between the cues were non-existent, suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music.
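
    The comparison of linear and quadratic cue encodings reported above can be illustrated with a small regression sketch on synthetic data. The cue names follow the abstract, but the data, coefficients, and model code are purely illustrative and are not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: three cue levels coded as ordinal values,
# and a simulated emotion rating driven linearly by the cues plus noise.
n = 200
tempo = rng.integers(1, 6, n)
dynamics = rng.integers(1, 6, n)
register = rng.integers(1, 6, n)
rating = 0.8 * tempo + 0.5 * dynamics + 0.2 * register + rng.normal(0, 1, n)

def r_squared(X, y):
    """Ordinary least squares fit; return the proportion of variance explained."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

linear = np.column_stack([tempo, dynamics, register])
quadratic = np.column_stack([linear, linear ** 2])  # add squared cue terms

print(f"linear R^2:    {r_squared(linear, rating):.3f}")
print(f"quadratic R^2: {r_squared(quadratic, rating):.3f}")
```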

  • 46.
    Elblaus, Ludvig
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Goina, Maurizio
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Robitaille, Marie Andree
    Stockholm University of the Arts.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Modes of sonic interaction in circus: Three proofs of concept2014In: Proceedings of Sound and Music Computing Conference 2014, Athens, 2014, p. 1698-1706Conference paper (Refereed)
    Abstract [en]

    The art of circus is a vibrant and competitive culture that embraces new tools and technology. In this paper, a series of exploratory design processes resulting in proofs of concept are presented, showing strategies for effective use of three different modes of sonic interaction in contemporary circus. Each design process is based on participatory studio work involving professional circus artists. All of the proofs of concept have been evaluated, both with studio studies and public circus performances, taking the work beyond theoretical laboratory projects and properly engaging the practice and culture of contemporary circus. The first exploration uses a contortionist’s extreme bodily manipulation as inspiration for sonic manipulations in an accompanying piece of music. The second exploration uses electric amplification of acoustic sounds as a transformative enhancement of existing elements of circus performance. Finally, a sensor-based system for real-time sonification of body gestures is explored, and ideas from the sonification of dance are translated into the realm of circus.

  • 47.
    Elblaus, Ludvig
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Hansen, Kjetil Falkenberg
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    NIME Design and Contemporary Music Practice: Benefits and Challenges2014Conference paper (Refereed)
    Abstract [en]

    This paper deals with the question of how the development of new musical artifacts can benefit from deeply engaging with contemporary musical practice. With the novel ideas produced by the NIME community manifested in musical instruments in continuous use, new research questions can be answered and new sources of knowledge can be explored. This can also be very helpful in evaluation, as it is possible to evaluate the qualities of an instrument in a specified context, rather than evaluating a prototyped instrument on the basis of its unrealised potential. The information from such evaluation can then be fed back into the development process, allowing researchers to probe musical practice itself with their designs.

  • 48.
    Elblaus, Ludvig
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Tsaknaki, Vasiliki
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Lewandowski, Vincent
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Nebula: An Interactive Garment Designed for Functional Aesthetics2015In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA: ACM , 2015, p. 275-278Conference paper (Refereed)
    Abstract [en]

    In this paper we present Nebula, a prototype for examining the properties of textiles, fashion accessories, and digital technologies in order to arrive at a garment design that brings these elements together in a cohesive manner. Bridging the gap between everyday performativity and enactment, we discuss aspects of the making process, interaction, and functional aesthetics that emerged. Nebula is part of the Sound Clothes project, which explores the expressive potential of wearable technologies creating sound from motion.

  • 49.
    Eriksson, Martin
    et al.
    KTH, School of Technology and Health (STH), Medical sensors, signals and systems (MSSS) (Closed 20130701).
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Improving running mechanics by use of interactive sonification2010In: Proceedings of the Interaction Sonification workshop (ISon) 2010 / [ed] Bresin, Roberto; Hermann, Thomas; Hunt, Andy, Stockholm, Sweden: KTH Royal Institute of Technology, 2010, p. 95-98Conference paper (Refereed)
    Abstract [en]

    Running technique has a large effect on running economy in terms of the amount of oxygen consumed. Changing one's natural running technique, though, is a difficult task. In this paper, a method based on sonification is presented that assists the runner in obtaining a more efficient running style. The system is based on an accelerometer sending data to a mobile phone; thus the system is non-obtrusive and possible to use in everyday training. Specifically, the feedback given is based on the runner's vertical displacement of the center of mass. As this is the main source of energy expenditure during running, it is conjectured that a reduced vertical displacement should improve running economy.
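
    One plausible way to estimate the vertical displacement of the center of mass from accelerometer data is double integration of the vertical acceleration. The sketch below shows this under simplifying assumptions (crude drift removal, an invented 7 cm target); it is not the algorithm used in the paper.

```python
import numpy as np

def vertical_displacement(accel_z, fs=100.0):
    """Estimate vertical displacement (m) from vertical acceleration (m/s^2).

    Double integration with mean removal at each stage to limit drift;
    a real implementation would need proper high-pass filtering.
    """
    a = np.asarray(accel_z, dtype=float)
    a = a - a.mean()          # crude gravity/drift removal
    v = np.cumsum(a) / fs     # velocity
    v = v - v.mean()
    d = np.cumsum(v) / fs     # displacement
    return d

def feedback_level(displacement, target=0.07):
    """Map peak-to-peak displacement per stride to a 0..1 feedback intensity.

    `target` (7 cm) is an illustrative threshold, not a value from the paper.
    """
    p2p = displacement.max() - displacement.min()
    return float(np.clip((p2p - target) / target, 0.0, 1.0))
```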

  • 50.
    Fabiani, Marco
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Interactive sonification of expressive hand gestures on a handheld device2012In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 6, no 1-2, p. 49-57Article in journal (Refereed)
    Abstract [en]

    We present here a mobile phone application called MoodifierLive, which aims to use expressive music performances for the sonification of expressive gestures by mapping the phone’s accelerometer data to performance parameters (i.e., tempo, sound level, and articulation). The application, and in particular the sonification principle, is described in detail. An experiment was carried out to evaluate the perceived matching between the gesture and the music performance that it produced, using two distinct mappings between gestures and performance. The results show that the application produces consistent performances, and that the mapping based on data collected from real gestures works better than one defined a priori by the authors.
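
    A gesture-to-performance mapping of the kind described above could look like the sketch below, where the deviation of the acceleration magnitude from 1 g drives tempo, sound level, and articulation. The parameter ranges and the energy measure are assumptions for illustration, not MoodifierLive's actual mapping.

```python
import math

def gesture_to_performance(ax, ay, az,
                           tempo_range=(60.0, 180.0),       # BPM
                           level_range=(-18.0, 0.0),        # dB
                           articulation_range=(0.5, 1.0)):  # legato ratio
    """Map one accelerometer sample to illustrative performance parameters.

    The gesture 'energy' is the deviation of the acceleration magnitude
    from 1 g, normalised to [0, 1]; all ranges are invented assumptions.
    """
    g = 9.81
    magnitude = math.sqrt(ax ** 2 + ay ** 2 + az ** 2)
    energy = min(abs(magnitude - g) / g, 1.0)

    def interp(lo, hi):
        return lo + energy * (hi - lo)

    return {
        "tempo_bpm": interp(*tempo_range),
        "sound_level_db": interp(*level_range),
        "articulation": interp(*articulation_range),
    }

print(gesture_to_performance(0.0, 0.0, 14.0))  # energetic gesture -> faster, louder
```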
