1 - 17 of 17
  • 1.
    Camurri, Antonio
    et al.
    University of Genova.
    Volpe, Gualtiero
    University of Genova.
    Vinet, Hugues
    IRCAM, Paris.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Maestre, Esteban
    Universitat Pompeu Fabra, Barcelona.
    Llop, Jordi
    Universitat Pompeu Fabra, Barcelona.
    Kleimola, Jari
    Oksanen, Sami
    Välimäki, Vesa
    Seppanen, Jarno
    User-centric context-aware mobile applications for embodied music listening (2009). In: User Centric Media / [ed] Akan, Ozgur; Bellavista, Paolo; Cao, Jiannong; Dressler, Falko; Ferrari, Domenico; Gerla, Mario; Kobayashi, Hisashi; Palazzo, Sergio; Sahni, Sartaj; Shen, Xuemin (Sherman); Stan, Mircea; Xiaohua, Jia; Zomaya, Albert; Coulson, Geoffrey; Daras, Petros; Ibarra, Oscar Mayora. Heidelberg: Springer Berlin, 2009, p. 21-30. Chapter in book (Refereed)
    Abstract [en]

    This paper surveys a collection of sample applications for networked user-centric context-aware embodied music listening. The applications have been designed and developed in the framework of the EU-ICT Project SAME (www.sameproject.eu) and have been presented at the Agora Festival (IRCAM, Paris, France) in June 2009. All of them address, in different ways, the concept of embodied, active listening to music, i.e., enabling listeners to interactively operate in real time on the music content by means of their movements and gestures as captured by mobile devices. At the Agora Festival, the applications were also evaluated by both expert and non-expert users.

  • 2.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA (closed 2012-06-30).
    Convergence improvement and qualification of a model for fission gas behaviour in nuclear fuel (2007). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    During the irradiation in a fuel assembly of a Pressurized Water Reactor (PWR), thermal diffusion processes are not the only active mechanisms: the effects of irradiation are significant, involving variations of the diffusion coefficients as well as the creation and evolution of point defects and cavities in the material. In this document we present the work we have done on MOGADOR, a numerical fission gas behaviour model for nuclear fuel under irradiation in a PWR, focusing on the modelling of the behaviour of irradiation defects, fission gases and as-fabricated pores. This work was accomplished within the frame of a six-month internship, from September 2006 to March 2007, at the LSC laboratory, located at the Cadarache research centre. The first task was to optimize the model and to improve its convergence in a simplified case, a necessary condition to go further with the study of a complete case. We then started to tackle the physical qualification of MOGADOR, that is to say, to verify the behaviour of some physical quantities and their dependence on some parameters. We also present a short review of numerical methods commonly used for solving ordinary differential equations.

  • 3.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Evaluation of four models for the sonification of elite rowing (2012). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 5, no 3-4, p. 143-156. Article in journal (Refereed)
    Abstract [en]

    Many aspects of sonification represent potential benefits for the practice of sports. Taking advantage of the characteristics of auditory perception, interactive sonification offers promising opportunities for enhancing the training of athletes. The efficient learning and memorizing abilities pertaining to the sense of hearing, together with the strong coupling between auditory and sensorimotor systems, make the use of sound a natural field of investigation in the quest for efficiency optimization in high-level individual sports. This study presents an application of sonification to elite rowing, introducing and evaluating four sonification models. The rapid development of mobile technology capable of efficiently handling numerical information offers new possibilities for interactive auditory display. These models have therefore been developed under the specific constraints of a mobile platform, from data acquisition to the generation of meaningful sound feedback. To evaluate the models, two listening experiments were carried out with elite rowers. Results show a good ability of the participants to efficiently extract basic characteristics of the sonified data, even in a non-interactive context. Qualitative assessment of the models highlights the need for a balance between function and aesthetics in interactive sonification design; particular attention to usability is required for future displays to become widespread.

  • 4.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Interactive sonification of motion: Design, implementation and control of expressive auditory feedback with mobile devices (2013). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Sound and motion are intrinsically related, by their physical nature and through the link between auditory perception and motor control. If sound provides information about the characteristics of a movement, a movement can also be influenced or triggered by a sound pattern. This thesis investigates how this link can be reinforced by means of interactive sonification. Sonification, the use of sound to communicate, perceptualize and interpret data, can be used in many different contexts. It is particularly well suited for time-related tasks such as monitoring and synchronization, and is therefore an ideal candidate to support the design of applications related to physical training. Our objectives are to develop and investigate computational models for the sonification of motion data with a particular focus on expressive movement and gesture, and for the sonification of elite athletes' movements. We chose to develop our applications on a mobile platform in order to make use of advanced interaction modes using an easily accessible technology. In addition, networking capabilities of modern smartphones potentially allow for adding a social dimension to our sonification applications by extending them to several collaborating users. The sport of rowing was chosen to illustrate the assistance that an interactive sonification system can provide to elite athletes. Bringing into play complex interactions between various kinematic and kinetic quantities, studies on rowing kinematics provide guidelines to optimize rowing efficiency, e.g. by minimizing velocity fluctuations around average velocity. However, rowers can only rely on sparse cues to get information relative to boat velocity, such as the sound made by the water splashing on the hull. We believe that an interactive augmented feedback communicating the dynamic evolution of some kinematic quantities could represent a promising way of enhancing the training of elite rowers. Since only limited space is available on a rowing boat, the use of mobile phones appears appropriate for handling streams of incoming data from various sensors and generating an auditory feedback simultaneously. The development of sonification models for rowing and their design evaluation in offline conditions are presented in Paper I. In Paper II, three different models for sonifying the synchronization of the movements of two users holding a mobile phone are explored. Sonification of expressive gestures by means of expressive music performance is tackled in Paper III. In Paper IV, we introduce a database of mobile applications related to sound and music computing. An overview of the field of sonification is presented in Paper V, along with a systematic review of mapping strategies for sonifying physical quantities. Physical and auditory dimensions were both classified into generic conceptual dimensions, and proportion of use was analyzed in order to identify the most popular mappings. Finally, Paper VI summarizes experiments conducted with the Swedish national rowing team in order to assess sonification models in an interactive context.

  • 5.
    Dubus, Gaël
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    A Systematic Review of Mapping Strategies for the Sonification of Physical Quantities (2013). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 8, no 12, e82491. Article in journal (Refereed)
    Abstract [en]

    The field of sonification has progressed greatly over the past twenty years and currently constitutes an established area of research. This article aims at exploiting and organizing the knowledge accumulated in previous experimental studies to build a foundation for future sonification works. A systematic review of these studies may reveal trends in sonification design, and therefore support the development of design guidelines. To this end, we have reviewed and analyzed 179 scientific publications related to sonification of physical quantities. Using a bottom-up approach, we set up a list of conceptual dimensions belonging to both physical and auditory domains. Mappings used in the reviewed works were identified, forming a database of 495 entries. Frequency of use was analyzed among these conceptual dimensions as well as higher-level categories. Results confirm two hypotheses formulated in a preliminary study: pitch is by far the most used auditory dimension in sonification applications, and spatial auditory dimensions are almost exclusively used to sonify kinematic quantities. To detect successful as well as unsuccessful sonification strategies, assessment of mapping efficiency conducted in the reviewed works was considered. Results show that a proper evaluation of sonification mappings is performed only in a marginal proportion of publications. Additional aspects of the publication database were investigated: historical distribution of sonification works is presented, projects are classified according to their primary function, and the sonic material used in the auditory display is discussed. Finally, a mapping-based approach for characterizing sonification is proposed.
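
    The frequency analysis described in the abstract reduces to counting how often each physical-to-auditory mapping occurs across the reviewed works. A minimal sketch of such a tally in Python; the dimensions and sample entries are invented for illustration, not taken from the paper's 495-entry database:

        # Count (physical dimension -> auditory dimension) pairs and report
        # each pair's share of all mappings. Entries are illustrative only.
        from collections import Counter

        mappings = [
            ("velocity", "pitch"),
            ("position", "azimuth"),      # a spatial auditory dimension
            ("temperature", "pitch"),
            ("velocity", "loudness"),
            ("position", "azimuth"),
        ]

        counts = Counter(mappings)
        total = sum(counts.values())
        for (phys, aud), n in counts.most_common():
            print(f"{phys} -> {aud}: {n} ({100 * n / total:.0f}% of mappings)")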

  • 6.
    Dubus, Gaël
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Evaluation of a system for the sonification of elite rowing in an interactive context. Manuscript (preprint) (Other academic)
  • 7.
    Dubus, Gaël
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Exploration and evaluation of a system for interactive sonification of elite rowing (2015). In: Sports Engineering, ISSN 1369-7072, E-ISSN 1460-2687, Vol. 18, no 1, p. 29-41. Article in journal (Refereed)
    Abstract [en]

    In recent years, many solutions based on interactive sonification have been introduced for enhancing sport training. Few of them have been assessed in terms of efficiency or design. In a previous study, we performed a quantitative evaluation of four models for the sonification of elite rowing in a non-interactive context. For the present article, we conducted on-water experiments to investigate the effects of some of these models on two kinematic quantities: stroke rate value and fluctuations in boat velocity. To this end, elite rowers interacted with discrete and continuous auditory displays in two experiments. A method for computing an average rowing cycle is introduced, together with a measure of velocity fluctuations. Participants completed questionnaires and interviews to assess the degree of acceptance of the different models and to reveal common trends and individual preferences. No significant effect of sonification could be determined in either of the two experiments. The measure of velocity fluctuations was found to depend linearly on stroke rate. Participants provided feedback about their aesthetic preferences and functional needs during interviews, allowing us to improve the models for future experiments to be conducted over longer periods.
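
    The abstract mentions a method for computing an average rowing cycle and a measure of velocity fluctuations without detailing either. The sketch below shows one plausible approach on synthetic data, assuming stroke boundaries (catches) are already detected and using RMS deviation from mean velocity as the fluctuation measure; it is an illustration, not the paper's published method:

        import numpy as np

        fs = 50.0                                   # sample rate in Hz (assumed)
        t = np.arange(0, 30, 1 / fs)
        stroke_rate = 0.5                           # 30 strokes/min, in Hz
        velocity = 4.5 + 0.4 * np.sin(2 * np.pi * stroke_rate * t)  # synthetic boat velocity

        # Stroke boundaries assumed detected elsewhere (e.g. from acceleration minima).
        period = int(fs / stroke_rate)
        catches = np.arange(0, len(velocity) - period, period)

        # Resample every stroke to 100 points and average them point by point.
        cycles = [np.interp(np.linspace(0, 1, 100),
                            np.linspace(0, 1, period),
                            velocity[c:c + period]) for c in catches]
        average_cycle = np.mean(cycles, axis=0)

        # One possible fluctuation measure: RMS deviation from mean velocity.
        fluctuation = np.sqrt(np.mean((velocity - velocity.mean()) ** 2))
        print(f"mean {velocity.mean():.2f} m/s, fluctuation {fluctuation:.2f} m/s")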

  • 8.
    Dubus, Gaël
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Sonification of physical quantities throughout history: a meta-study of previous mapping strategies (2011). In: Proceedings of the 17th International Conference on Auditory Display (ICAD 2011), Budapest, Hungary: OPAKFI Egyesület, 2011. Conference paper (Refereed)
    Abstract [en]

    We introduce a meta-study of previous sonification designs taking physical quantities as input data. The aim is to build a solid foundation for future sonification works, so that auditory display researchers can benefit from former studies instead of starting from scratch when beginning new sonification projects. This work is at an early stage, and the objective of this paper is to introduce the methodology rather than to reach definitive conclusions. After a historical introduction, we explain how to collect a large number of articles and extract useful information about mapping strategies. Then, we present the physical quantities grouped according to conceptual dimensions, as well as the sound parameters used in sonification designs, and we summarize the current state of the study by listing the couplings extracted from the article database. A total of 54 articles were examined for the present article. Finally, a preliminary analysis of the results is performed.

  • 9.
    Dubus, Gaël
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Sonification of sculler movements, development of preliminary methods (2010). In: Proceedings of ISon 2010, 3rd Interactive Sonification Workshop / [ed] Bresin, Roberto; Hermann, Thomas; Hunt, Andy. Stockholm, Sweden: KTH Royal Institute of Technology, 2010, p. 39-43. Conference paper (Refereed)
    Abstract [en]

    Sonification is a widening field of research with many possibilities for practical applications in various scientific domains. The rapid development of mobile technology capable of efficiently handling numerical information offers new opportunities for interactive auditory display. In this scope, the SONEA project (SONification of Elite Athletes) aims at improving the performances of Olympic-level athletes by enhancing their training techniques, taking advantage of both the strong coupling between auditory and sensorimotor systems, and the efficient learning and memorizing abilities pertaining to the sense of hearing. An application to rowing is presented in this article. Rough estimates of the position and mean velocity of the craft are given by a GPS receiver embedded in a smartphone taken onboard. An external accelerometer provides boat acceleration data with higher temporal resolution. The development of preliminary methods for sonifying the collected data has been carried out under the specific constraints of a mobile device platform. The sonification is either performed by the phone as real-time feedback or by a computer using data files as input for an a posteriori analysis of the training. In addition, environmental sounds recorded during training can be synchronized with the sonification to perceive the coherence of the sequence of sounds throughout the rowing cycle. First results show that sonification using a parameter-mapping method over
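
    The parameter-mapping method the entry refers to can be sketched as a direct mapping from a kinematic quantity to a synthesis parameter. Below, boat velocity drives the frequency of a sine oscillator; the value ranges and the sine synthesis are assumptions for illustration and do not reproduce the SONEA implementation:

        import numpy as np

        def velocity_to_frequency(v, v_min=2.0, v_max=6.0, f_min=220.0, f_max=880.0):
            """Linearly map boat velocity (m/s) to oscillator frequency (Hz)."""
            v = np.clip(v, v_min, v_max)
            return f_min + (v - v_min) / (v_max - v_min) * (f_max - f_min)

        fs = 44100
        velocities = [4.2, 4.8, 5.1, 4.6]        # one synthetic reading per 0.5 s
        samples, phase = [], 0.0
        for v in velocities:
            f = velocity_to_frequency(v)
            n = int(0.5 * fs)
            tt = np.arange(n) / fs
            samples.append(np.sin(phase + 2 * np.pi * f * tt))
            phase += 2 * np.pi * f * n / fs      # keep phase continuous across blocks
        audio = np.concatenate(samples)          # ready to write to a WAV file or device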

  • 10.
    Dubus, Gaël
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Hansen, Kjetil Falkenberg
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    An overview of sound and music applications for Android available on the market (2012). In: Proceedings of the 9th Sound and Music Computing Conference, SMC 2012 / [ed] Serafin, Stefania. Sound and Music Computing Network, 2012, p. 541-546. Conference paper (Refereed)
    Abstract [en]

    This paper introduces a database of sound-based applications running on the Android mobile platform. The long-term objective is to provide a state-of-the-art survey of mobile applications dealing with sound and music interaction. After describing the method used to build up and maintain the database using a non-hierarchical structure based on tags, we present a classification according to various categories of applications, and we conduct a preliminary analysis of the distribution of these categories, reflecting the current state of the database.
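
    The non-hierarchical, tag-based structure mentioned above can be pictured as each application carrying a free set of tags, with categories derived by querying tags rather than by a fixed hierarchy. App names and tags in this sketch are invented:

        from collections import Counter

        apps = {
            "SynthPad":  {"synthesis", "multitouch"},
            "RunTunes":  {"sonification", "sports", "gps"},
            "LoopMaker": {"sequencer", "synthesis"},
        }

        def with_tag(tag):
            """Return the applications labelled with a given tag."""
            return [name for name, tags in apps.items() if tag in tags]

        # Distribution of tags across the database, as in the preliminary analysis.
        tag_counts = Counter(tag for tags in apps.values() for tag in tags)
        print(with_tag("synthesis"), tag_counts.most_common(3))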

  • 11.
    Fabiani, Marco
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Interactive sonification of expressive hand gestures on a handheld device (2012). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 6, no 1-2, p. 49-57. Article in journal (Refereed)
    Abstract [en]

    We present here a mobile phone application called MoodifierLive which aims at using expressive music performances for the sonification of expressive gestures through the mapping of the phone’s accelerometer data to the performance parameters (i.e. tempo, sound level, and articulation). The application, and in particular the sonification principle, is described in detail. An experiment was carried out to evaluate the perceived matching between the gesture and the music performance that it produced, using two distinct mappings between gestures and performance. The results show that the application produces consistent performances, and that the mapping based on data collected from real gestures works better than one defined a priori by the authors.
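
    As a rough illustration of the sonification principle, the sketch below maps one accelerometer sample to tempo, sound level and articulation through a simple energy feature. The feature and parameter ranges are assumptions; neither of the paper's two mappings (one defined a priori, one derived from recorded gestures) is reproduced here:

        import math

        def map_gesture(ax, ay, az):
            """Map one accelerometer sample (in g) to performance parameters."""
            energy = min(math.sqrt(ax**2 + ay**2 + az**2) / 3.0, 1.0)  # rough [0, 1] scale
            tempo = 60 + energy * 120            # BPM: calm -> 60, energetic -> 180
            level = -20 + energy * 20            # sound level in dB below full scale
            articulation = 1.0 - 0.7 * energy    # 1.0 legato down to 0.3 staccato
            return tempo, level, articulation

        print(map_gesture(0.1, 0.0, 1.0))   # gentle movement
        print(map_gesture(2.0, 1.5, 1.0))   # vigorous movement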

  • 12.
    Fabiani, Marco
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Interactive sonification of emotionally expressive gestures by means of music performance (2010). In: Proceedings of ISon 2010, 3rd Interactive Sonification Workshop / [ed] Bresin, Roberto; Hermann, Thomas; Hunt, Andy. Stockholm, Sweden: KTH Royal Institute of Technology, 2010, p. 113-116. Conference paper (Refereed)
    Abstract [en]

    This study presents a procedure for interactive sonification of emotionally expressive hand and arm gestures by affecting a musical performance in real-time. Three different mappings are described that translate accelerometer data to a set of parameters that control the expressiveness of the performance by affecting tempo, dynamics and articulation. The first two mappings, tested with a number of subjects during a public event, are relatively simple and were designed by the authors using a top-down approach. According to user feedback, they were not intuitive and limited the usability of the software. A bottom-up approach was taken for the third mapping: a Classification Tree was trained with features extracted from gesture data from a number of test subjects who were asked to express different emotions with their hand movements. A second set of data, where subjects were asked to make a gesture that corresponded to a piece of expressive music they had just listened to, was used to validate the model. The results were not particularly accurate, but reflected the small differences in the data and the ratings given by the subjects to the different performances they listened to.
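
    The bottom-up third mapping trains a classification tree on features extracted from labelled gesture data. A minimal scikit-learn sketch with invented features (mean acceleration and jerk) and toy labels; the paper's actual feature set and data are not reproduced:

        from sklearn.tree import DecisionTreeClassifier

        # Rows: [mean acceleration, mean jerk]; labels: intended emotion.
        X = [[0.2, 0.1], [0.3, 0.2], [1.5, 1.2], [1.7, 1.4], [0.8, 0.3], [0.9, 0.4]]
        y = ["sad", "sad", "angry", "angry", "happy", "happy"]

        clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
        print(clf.predict([[1.6, 1.3]]))    # -> ['angry'] on this toy data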

  • 13.
    Fabiani, Marco
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    MoodifierLive: Interactive and collaborative music performance on mobile devices (2011). In: Proceedings of the International Conference on New Interfaces for Musical Expression (NIME11), 2011. Conference paper (Refereed)
  • 14.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Using modern smartphones to create interactive listening experiences for hearing impaired (2012). In: TMH-QPSR special issue: Proceedings of SMC Sweden 2012 Sound and Music Computing, Understanding and Practicing in Sweden, ISSN 1104-5787, Vol. 52, no 1, p. 42. Article in journal (Refereed)
  • 15.
    Lungaro, Pietro
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Tollmar, Konrad
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Saeik, Firdose
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Mateu Gisbert, Conrado
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Dubus, Gaël
    Demonstration of a low-cost hyper-realistic testbed for designing future onboard experiences (2018). In: Adjunct Proceedings - 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018. Association for Computing Machinery, Inc., 2018, p. 235-238. Conference paper (Refereed)
    Abstract [en]

    This demo presents DriverSense, a novel experimental platform for designing and validating onboard user interfaces for self-driving and remotely controlled vehicles. Most currently existing vehicular testbeds and simulators are designed to reproduce with high fidelity the ergonomic aspects of the driving experience. However, with the increasing deployment of self-driving and remotely controlled or monitored vehicles, the digital components of the driving experience are expected to become more relevant, because users will be less engaged in the actual driving task and more involved in oversight activities. In this respect, high visual testbed fidelity becomes an important prerequisite for supporting the design and evaluation of future interfaces. DriverSense, which is based on the hyper-realistic video game GTA V, has been developed to satisfy this need. To showcase its experimental flexibility, a set of self-driving interfaces has been implemented, including Heads-Up Displays (HUDs), Augmented Reality (AR) and directional audio.

  • 16.
    Lungaro, Pietro
    et al.
    Tollmar, Konrad
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Saeik, Firdose
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Mateu Gisbert, Conrado
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Dubus, Gaël
    DriverSense: A hyper-realistic testbed for the design and evaluation of novel user interfaces in self-driving vehicles (2018). In: Adjunct Proceedings - 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018. Association for Computing Machinery, Inc., 2018, p. 127-131. Conference paper (Refereed)
    Abstract [en]

    This paper presents DriverSense, a novel experimental platform for designing and validating onboard user interfaces for self-driving and remotely controlled vehicles. Most currently existing academic and industrial testbeds and vehicular simulators are designed to reproduce with high fidelity the ergonomic aspects of the driving experience. However, with the increasing deployment of self-driving and remotely controlled vehicles, the digital components of the driving experience are expected to become increasingly relevant, because users will be less engaged in the actual driving task and more involved in oversight activities. In this respect, high visual testbed fidelity becomes an important prerequisite for supporting the design and evaluation of future onboard interfaces. DriverSense, which is based on the hyper-realistic video game GTA V, has been developed to satisfy this need. To showcase its experimental flexibility, a set of selected case studies is presented, including Heads-Up Displays (HUDs), Augmented Reality (AR) and directional audio solutions.

  • 17.
    Varni, Giovanna
    et al.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Oksanen, Sami
    Volpe, Gualtiero
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Kleimola, Jari
    Välimäki, Vesa
    Camurri, Antonio
    Interactive sonification of synchronisation of motoric behaviour in social active listening to music with mobile devices (2012). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 5, no 3-4, p. 157-173. Article in journal (Refereed)
    Abstract [en]

    This paper evaluates three different interactive sonifications of dyadic coordinated human rhythmic activity. An index of phase synchronisation of gestures was chosen as the coordination metric. The sonifications are implemented as three prototype applications exploiting mobile devices: Sync’n’Moog, Sync’n’Move, and Sync’n’Mood. Sync’n’Moog sonifies the phase synchronisation index by acting directly on the audio signal and applying a nonlinear time-varying filtering technique. Sync’n’Move operates on the multi-track music content by making individual instruments emerge and recede. Sync’n’Mood manipulates the affective features of the music performance. The three sonifications were also tested against a condition without sonification.
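
    The abstract does not specify how the phase synchronisation index is computed. A common choice for rhythmic signals is mean phase coherence obtained via the Hilbert transform, sketched below on synthetic gesture signals; the paper's exact index may differ:

        import numpy as np
        from scipy.signal import hilbert

        fs = 100.0
        t = np.arange(0, 10, 1 / fs)
        g1 = np.sin(2 * np.pi * 1.0 * t)                 # gesture signal, user 1
        g2 = np.sin(2 * np.pi * 1.0 * t + 0.3
                    + 0.1 * np.random.randn(len(t)))     # user 2, noisy phase offset

        phi1 = np.angle(hilbert(g1))
        phi2 = np.angle(hilbert(g2))
        # Mean phase coherence: 1 = perfectly locked, 0 = no synchronisation.
        sync_index = np.abs(np.mean(np.exp(1j * (phi1 - phi2))))
        print(f"phase synchronisation index: {sync_index:.2f}")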
