  • 1.
    Camurri, Antonio
    University of Genova.
    Volpe, Gualtiero
    University of Genova.
    Vinet, Hugues
    IRCAM, Paris.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Maestre, Esteban
    Universitat Pompeu Fabra, Barcelona.
    Llop, Jordi
    Universitat Pompeu Fabra, Barcelona.
    Kleimola, Jari
    Oksanen, Sami
    Välimäki, Vesa
    Seppanen, Jarno
    User-centric context-aware mobile applications for embodied music listening. 2009. In: User Centric Media / [ed] Akan, Ozgur; Bellavista, Paolo; Cao, Jiannong; Dressler, Falko; Ferrari, Domenico; Gerla, Mario; Kobayashi, Hisashi; Palazzo, Sergio; Sahni, Sartaj; Shen, Xuemin (Sherman); Stan, Mircea; Xiaohua, Jia; Zomaya, Albert; Coulson, Geoffrey; Daras, Petros; Ibarra, Oscar Mayora. Heidelberg: Springer Berlin, 2009, p. 21-30. Chapter in book (Refereed)
    Abstract [en]

    This paper surveys a collection of sample applications for networked user-centric context-aware embodied music listening. The applications have been designed and developed in the framework of the EU-ICT Project SAME (www.sameproject.eu) and were presented at the Agora Festival (IRCAM, Paris, France) in June 2009. All of them address, in different ways, the concept of embodied, active listening to music, i.e., enabling listeners to interactively operate in real time on the music content by means of their movements and gestures as captured by mobile devices. On the occasion of the Agora Festival the applications were also evaluated by both expert and non-expert users.

  • 2.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    A method for the modification of acoustic instrument tone dynamics. 2009. In: Proceedings of the 12th International Conference on Digital Audio Effects, DAFx 2009, 2009, p. 359-364. Conference paper (Refereed)
    Abstract [en]

    A method is described for making natural sounding modifications of the dynamic level of tones produced by acoustic instruments. Each tone is first analyzed in the frequency domain and divided into a harmonic and a noise component. The two components are modified separately using filters based on spectral envelopes extracted from recordings of isolated tones played at different dynamic levels. When transforming from low to high dynamics, additional high frequency partials are added to the spectrum to enhance the brightness of the sound. Finally, the two modified components are summed and a time domain signal is synthesized.
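
The processing chain described in the abstract above (harmonic/noise separation, then envelope-based filtering) can be illustrated with a rough sketch. This is not the paper's implementation, only a minimal illustration of the idea under simplified assumptions: the harmonic component is isolated with a crude mask around multiples of a known f0, and the hypothetical `envelope_src`/`envelope_dst` arrays stand in for spectral envelopes measured beforehand from isolated tones at the source and target dynamic levels.

```python
import numpy as np

def modify_dynamics(frame, f0, sr, envelope_src, envelope_dst, width_hz=40.0):
    """Rough sketch: push one windowed frame of an isolated tone
    from one dynamic level towards another.

    frame        : windowed time-domain frame of the tone
    f0           : fundamental frequency of the tone in Hz
    envelope_src : spectral envelope (one value per rFFT bin), source level
    envelope_dst : spectral envelope (one value per rFFT bin), target level
    Both envelopes are assumed to be precomputed from recordings of
    isolated tones played at different dynamic levels.
    """
    n = len(frame)
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(n, 1.0 / sr)

    # Crude harmonic mask: bins within +/- width_hz of a multiple of f0.
    harmonic_mask = np.zeros_like(freqs, dtype=bool)
    for k in range(1, int(freqs[-1] // f0) + 1):
        harmonic_mask |= np.abs(freqs - k * f0) < width_hz

    # Gain curve taking the source envelope to the target envelope.
    gain = envelope_dst / np.maximum(envelope_src, 1e-12)

    # Filter harmonic and residual components separately, then recombine.
    harmonic = spec * harmonic_mask * gain
    residual = spec * ~harmonic_mask * np.sqrt(gain)  # gentler change for the noise part
    return np.fft.irfft(harmonic + residual, n)
```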

  • 3.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Frequency, phase and amplitude estimation of overlapping partials in monaural musical signals. 2010. In: 13th International Conference on Digital Audio Effects, DAFx 2010 Proceedings, 2010, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    A method is described that simultaneously estimates the frequency, phase and amplitude of two overlapping partials in a monaural musical signal from the amplitudes and phases in three frequency bins of the signal's Odd Discrete Fourier Transform (ODFT). From the transform of the analysis window in its analytical form, and given the frequencies of the two partials, an analytical solution for the amplitude and phase of the two overlapping partials was obtained. Furthermore, the frequencies are estimated numerically by solving a system of two equations in two unknowns, since no analytical solution could be found. Although the estimation is done independently frame by frame, particular situations (e.g. extremely close frequencies, same phase in the time window) lead to errors, which can be partly corrected with a moving average filter over several time frames. Results are presented for artificial sinusoids with time-varying frequencies and amplitudes, and with different levels of noise added. The system still performs well with a signal-to-noise ratio down to 30 dB, with moderately modulated frequencies, and time-varying amplitudes.
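
The Odd DFT used above differs from the ordinary DFT only by a half-bin frequency shift, so it can be computed with a standard FFT after modulating the signal. The sketch below shows just that transform and the extraction of the three bins around a spectral peak; it deliberately does not reproduce the paper's closed-form solution for the two overlapping partials.

```python
import numpy as np

def odft(x):
    """Odd Discrete Fourier Transform:
    X(k) = sum_n x[n] * exp(-1j * 2*pi * n * (k + 0.5) / N),
    i.e. a DFT evaluated at frequencies shifted by half a bin."""
    n = len(x)
    shift = np.exp(-1j * np.pi * np.arange(n) / n)  # half-bin modulation
    return np.fft.fft(x * shift)

def bins_around_peak(x, window, k_peak):
    """Return the three ODFT bins (k_peak-1, k_peak, k_peak+1) of the
    windowed frame; these are the inputs to the estimation described above."""
    spectrum = odft(x * window)
    return spectrum[k_peak - 1 : k_peak + 2]
```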

  • 4.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Interactive computer-aided expressive music performance: Analysis, control, modification and synthesis. 2011. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis describes the design and implementation process of two applications (PerMORFer and MoodifierLive) for the interactive manipulation of music performance. Such applications aim at closing the gap between the musicians, who play the music, and the listeners, who passively listen to it. The goal was to create computer programs that allow the user to actively control how the music is performed. This is achieved by modifying such parameters as tempo, dynamics, and articulation, much like a musician does when playing an instrument. An overview of similar systems and the problems related to their development is given in the first of the included papers.

    Four requirements were defined for the applications: (1) to produce a natural, high quality sound; (2) to allow for realistic modifications of the performance parameters; (3) to be easy to control, even for non-musicians; (4) to be portable. Although there are many similarities between PerMORFer and MoodifierLive, the two applications fulfill different requirements. The first two were addressed in PerMORFer, with which the user can manipulate pre-recorded audio performances. The last two were addressed in MoodifierLive, a mobile phone application for gesture-based control of a MIDI score file. The tone-by-tone modifications in both applications are based on the KTH rule system for music performance. The included papers describe studies, methods, and algorithms used in the development of the two applications.

    Audio recordings of real performances have been used in PerMORFer to achieve a natural sound. The tone-by-tone manipulations defined by the KTH rules first require an analysis of the original performance to separate the tones and estimate their parameters (IOI, duration, dynamics). Available methods were combined with novel solutions, such as an approach to the separation of two overlapping sinusoidal components. On the topic of performance analysis, ad-hoc algorithms were also developed to analyze DJ scratching recordings.

    A particularly complex problem is the estimation of a tone’s dynamic level. A study was conducted to identify the perceptual cues that listeners use to determine the dynamics of a tone. The results showed that timbre is as important as loudness. These findings were applied in a partly unsuccessful attempt to estimate dynamics from spectral features.

    The manipulation of tempo is a relatively simple problem, as is that of articulation (i.e. legato-staccato) as long as the tone can be separated. The modification of dynamics on the other hand is more difficult, as was its estimation. Following the findings of the previously mentioned perceptual study, a method to modify both loudness and timbre using a database of spectral models was implemented.

    MoodifierLive was used to experiment with performance control interfaces. In particular, the mobile phone’s built-in accelerometer was used to track, analyze, and interpret the movements of the user. Expressive gestures were then mapped to corresponding expressive music performances. Evaluation showed that modes based on natural gestures were easier to use than those created with a top-down approach.

  • 5.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    PerMORFer: Interactive Rule-Based Modification of Audio Recordings. 2011. In: Computer Music Journal, ISSN 0148-9267, E-ISSN 1531-5169. Article in journal (Other academic)
  • 6.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Interactive sonification of expressive hand gestures on a handheld device. 2012. In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 6, no 1-2, p. 49-57. Article in journal (Refereed)
    Abstract [en]

    We present here a mobile phone application called MoodifierLive which aims at using expressive music performances for the sonification of expressive gestures through the mapping of the phone’s accelerometer data to the performance parameters (i.e. tempo, sound level, and articulation). The application, and in particular the sonification principle, is described in detail. An experiment was carried out to evaluate the perceived matching between the gesture and the music performance that it produced, using two distinct mappings between gestures and performance. The results show that the application produces consistent performances, and that the mapping based on data collected from real gestures works better than one defined a priori by the authors.
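
As a concrete (and deliberately naive) illustration of the kind of mapping described above, the sketch below turns a short window of accelerometer samples into the three performance parameters. All thresholds and scalings are invented for illustration; the published MoodifierLive mappings, including the one derived from recorded gestures, differ from this.

```python
import numpy as np

def map_gesture_to_performance(accel, sr=50.0):
    """Toy mapping from a window of 3-axis accelerometer samples
    (shape: n x 3, in m/s^2) to three performance parameters.
    All ranges and scalings here are illustrative, not the published mapping."""
    magnitude = np.linalg.norm(accel, axis=1)
    energy = float(np.mean(np.abs(magnitude - np.mean(magnitude))))  # gesture "activity"
    jerkiness = float(np.mean(np.abs(np.diff(magnitude)))) * sr      # abruptness of motion

    tempo_scale = np.clip(0.7 + 0.3 * energy, 0.7, 1.4)         # more activity -> faster
    sound_level_db = np.clip(-12.0 + 6.0 * energy, -12.0, 6.0)  # more activity -> louder
    articulation = np.clip(1.0 - 0.02 * jerkiness, 0.3, 1.0)    # jerky motion -> staccato

    return {"tempo_scale": tempo_scale,
            "sound_level_db": sound_level_db,
            "articulation": articulation}
```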

  • 7.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Interactive sonification of emotionally expressive gestures by means of music performance. 2010. In: Proceedings of ISon 2010, 3rd Interactive Sonification Workshop / [ed] Bresin, Roberto; Hermann, Thomas; Hunt, Andy. Stockholm, Sweden: KTH Royal Institute of Technology, 2010, p. 113-116. Conference paper (Refereed)
    Abstract [en]

    This study presents a procedure for interactive sonification of emotionally expressive hand and arm gestures by affecting a musical performance in real time. Three different mappings are described that translate accelerometer data to a set of parameters that control the expressiveness of the performance by affecting tempo, dynamics and articulation. The first two mappings, tested with a number of subjects during a public event, are relatively simple and were designed by the authors using a top-down approach. According to user feedback, they were not intuitive and limited the usability of the software. A bottom-up approach was taken for the third mapping: a Classification Tree was trained with features extracted from gesture data from a number of test subjects who were asked to express different emotions with their hand movements. A second set of data, where subjects were asked to make a gesture that corresponded to a piece of expressive music they had just listened to, was used to validate the model. The results were not particularly accurate, but reflected the small differences in the data and the ratings given by the subjects to the different performances they listened to.
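
The bottom-up mapping described above trains a classification tree on features extracted from recorded gestures. A minimal version of that training step might look like the following; scikit-learn is used here as a stand-in, and the file names and feature set are placeholders, not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical gesture features (one row per recorded gesture), e.g.
# mean acceleration magnitude, jerk, gesture duration, ...
X = np.load("gesture_features.npy")        # placeholder file names
y = np.load("gesture_emotion_labels.npy")  # e.g. "happy", "sad", "angry", "tender"

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)

# At run time, the same feature extraction would be applied to live
# accelerometer data and clf.predict() would select the expressive mapping.
```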

  • 8.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    MoodifierLive: Interactive and collaborative music performance on mobile devices. 2011. In: Proceedings of the International Conference on New Interfaces for Musical Expression (NIME11), 2011. Conference paper (Refereed)
  • 9.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    A prototype system for rule-based expressive modifications of audio recordings. 2007. In: Proc. of the Int. Symp. on Performance Science 2007, Porto, Portugal: AEC (European Conservatories Association), 2007, p. 355-360. Conference paper (Refereed)
    Abstract [en]

    A prototype system is described that aims to modify a musical recording in an expressive way using a set of performance rules controlling tempo, sound level and articulation. The audio signal is aligned with an enhanced score file containing performance rules information. A time-frequency transformation is applied, and the peaks in the spectrogram, representing the harmonics of each tone, are tracked and associated with the corresponding note in the score. New values for tempo, note lengths and sound levels are computed based on rules and user decisions. The spectrogram is modified by adding, subtracting and scaling spectral peaks to change the original tone’s length and sound level. For tempo variations, a time scale modification algorithm is integrated in the time domain re-synthesis process. The prototype is developed in Matlab. An intuitive GUI is provided that allows the user to choose parameters, listen to and visualize the audio signals involved, and perform the modifications. Experiments have been performed on monophonic and simple polyphonic recordings of classical music for piano and guitar.
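
One building block mentioned above, tracking the spectral peaks that represent the harmonics of each tone, can be sketched as greedy frame-to-frame matching of peaks. This is only a bare-bones illustration of partial tracking in general; the prototype's Matlab implementation, score alignment and resynthesis are not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks

def track_partials(mag_spectrogram, freqs, max_jump_hz=30.0, min_db=-60.0):
    """Greedy frame-to-frame partial tracking over a magnitude spectrogram
    (shape: n_bins x n_frames). Returns a list of tracks, each a list of
    (frame_index, frequency_hz, magnitude) triples."""
    tracks, active = [], []
    floor = np.max(mag_spectrogram) * 10 ** (min_db / 20.0)
    for t in range(mag_spectrogram.shape[1]):
        frame = mag_spectrogram[:, t]
        peaks, _ = find_peaks(frame, height=floor)
        used, next_active = set(), []
        for track in active:
            last_f = track[-1][1]
            # nearest unused peak within the allowed frequency jump
            candidates = [p for p in peaks if p not in used
                          and abs(freqs[p] - last_f) < max_jump_hz]
            if candidates:
                p = min(candidates, key=lambda q: abs(freqs[q] - last_f))
                used.add(p)
                track.append((t, freqs[p], frame[p]))
                next_active.append(track)          # track continues
        for p in peaks:                            # unmatched peaks start new tracks
            if p not in used:
                new = [(t, freqs[p], frame[p])]
                tracks.append(new)
                next_active.append(new)
        active = next_active
    return tracks
```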

  • 10.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Expressive modifications of musical audio recordings: preliminary results. 2007. In: Proc. of the 2007 Int. Computer Music Conf. (ICMC07), Copenhagen, Denmark: The International Computer Music Association and Re:New, 2007, p. 21-24. Conference paper (Refereed)
    Abstract [en]

    A system is described that aims to modify the performance of a musical recording (classical music) by changing the basic performance parameters tempo, sound level and tone duration. The input audio file is aligned with the corresponding score, which also contains extra information defining rule-based modifications of these parameters. The signal is decomposed using analysis-synthesis techniques to separate and modify each tone independently. The user can control the performance by changing the quantity of performance rules or by directly modifying the parameter values. A prototype Matlab implementation of the system performs expressive tempo and articulation modifications of monophonic and simple polyphonic audio recordings.

  • 11.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Influence of pitch, loudness, and timbre on the perception of instrument dynamics. 2011. In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 130, no 4, p. EL193-EL199. Article in journal (Refereed)
    Abstract [en]

    The effect of variations in pitch, loudness, and timbre on the perception of the dynamics of isolated instrumental tones is investigated. A full factorial design was used in a listening experiment. The subjects were asked to indicate the perceived dynamics of each stimulus on a scale from pianissimo to fortissimo. Statistical analysis showed that for the instruments included (i.e., clarinet, flute, piano, trumpet, and violin) timbre and loudness had equally large effects, while pitch was relevant mostly for the first three. The results confirmed our hypothesis that loudness alone is not a reliable estimate of the dynamics of musical tones.

  • 12.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Rule-based expressive modifications of tempo in polyphonic audio recordings. 2008. In: Computer Music Modeling and Retrieval: Sense of Sounds / [ed] Kronland-Martinet R; Ystad S; Jensen K, Berlin: Springer-Verlag, 2008, Vol. 4969, p. 288-302. Conference paper (Refereed)
    Abstract [en]

    This paper describes a few aspects of a system for expressive, rule-based modifications of audio recordings regarding tempo, dynamics and articulation. The input audio signal is first aligned with a score containing extra information on how to modify a performance. The signal is then transformed into the time-frequency domain. Each played tone is identified using partial tracking and the score information. Articulation and dynamics are changed by modifying the length and content of the partial tracks. The focus here is on the tempo modification, which is done using a combination of time-frequency techniques and phase reconstruction. Preliminary results indicate that the accuracy of the tempo modification is on average 8.2 ms when comparing inter-onset intervals in the resulting signal with the desired ones. Possible applications of such a system are in music pedagogy, basic perception research, as well as interactive music systems.
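
The 8.2 ms figure above is an average deviation between the inter-onset intervals produced by the system and the desired ones. Given two lists of onset times for the same note sequence, that comparison is straightforward; the sketch below shows only the error measure, not how the onsets were obtained.

```python
import numpy as np

def mean_ioi_error(onsets_result, onsets_target):
    """Mean absolute difference, in seconds, between the inter-onset
    intervals of a processed signal and the desired (rule-given) ones.
    Both inputs are onset times in seconds for the same note sequence."""
    ioi_result = np.diff(np.asarray(onsets_result, dtype=float))
    ioi_target = np.diff(np.asarray(onsets_target, dtype=float))
    if len(ioi_result) != len(ioi_target):
        raise ValueError("onset lists must describe the same note sequence")
    return float(np.mean(np.abs(ioi_result - ioi_target)))

# e.g. mean_ioi_error(detected_onsets, rule_based_onsets) * 1000 -> error in ms
```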

  • 13.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Systems for Interactive Control of Computer Generated Music Performance. 2013. In: Guide to Computing for Expressive Music Performance / [ed] Kirke, A., & Miranda, E., Springer Berlin/Heidelberg, 2013, p. 49-73. Chapter in book (Refereed)
    Abstract [en]

    This chapter is a literature survey of systems for real-time interactive control of automatic expressive music performance. A classification is proposed based on two initial design choices: the music material to interact with (i.e., MIDI or audio recordings) and the type of control (i.e., direct control of the low-level parameters such as tempo, intensity, and instrument balance or mapping from high-level parameters, such as emotions, to low-level parameters). Their pros and cons are briefly discussed. Then, a generic approach to interactive control is presented, comprising four steps: control data collection and analysis, mapping from control data to performance parameters, modification of the music material, and audiovisual feedback synthesis. Several systems are then described, focusing on different technical and expressive aspects. For many of the surveyed systems, a formal evaluation is missing. Possible methods for the evaluation of such systems are finally discussed.

  • 14.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Hansen, Kjetil Falkenberg
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Enabling emotional expression and interaction with new expressive interfaces. 2009. In: Front. Hum. Neurosci. Conference Abstract: Tuning the Brain for Music, 2009, Vol. 9. Conference paper (Refereed)
  • 15.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Schoonderwaldt, Erwin
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. Hanover University, Germany.
    Hedblad, Anton
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Elowsson, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Using listener-based perceptual features as intermediate representations in music information retrieval. 2014. In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 136, no 4, p. 1951-1963. Article in journal (Refereed)
    Abstract [en]

    The notion of perceptual features is introduced for describing general music properties based on human perception. This is an attempt at rethinking the concept of features, aiming to approach the underlying human perception mechanisms. Instead of using concepts from music theory such as tones, pitches, and chords, a set of nine features describing overall properties of the music was selected. They were chosen from qualitative measures used in psychology studies and motivated from an ecological approach. The perceptual features were rated in two listening experiments using two different data sets. They were modeled both from symbolic and audio data using different sets of computational features. Ratings of emotional expression were predicted using the perceptual features. The results indicate that (1) at least some of the perceptual features are reliable estimates; (2) emotion ratings could be predicted by a small combination of perceptual features with an explained variance from 75% to 93% for the emotional dimensions activity and valence; (3) the perceptual features could only to a limited extent be modeled using existing audio features. Results clearly indicated that a small number of dedicated features were superior to a "brute force" model using a large number of general audio features.
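
The explained-variance figures above come from predicting rated emotional dimensions (activity, valence) from a small set of perceptual-feature ratings. A minimal version of such a prediction, using ordinary least-squares regression and cross-validated R² as the explained-variance measure, could look like the following; the file names are placeholders, not the study's data, and the models used in the paper may differ.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Rows: musical excerpts. Columns: listener ratings of the nine
# perceptual features (speed, rhythmic clarity, articulation, ...).
X = np.load("perceptual_feature_ratings.npy")   # placeholder file names
y = np.load("valence_ratings.npy")              # rated valence per excerpt

model = LinearRegression()
r2 = cross_val_score(model, X, y, cv=10, scoring="r2")
print(f"explained variance (cross-validated R^2): {r2.mean():.2f}")
```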

  • 16.
    Friberg, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Schoonderwaldt, Erwin
    Hanover University of Music, Germany.
    Hedblad, Anton
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Elowsson, Anders
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Using perceptually defined music features in music information retrieval. 2014. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this study, the notion of perceptual features is introduced for describing general music properties based on human perception. This is an attempt at rethinking the concept of features, in order to understand the underlying human perception mechanisms. Instead of using concepts from music theory such as tones, pitches, and chords, a set of nine features describing overall properties of the music was selected. They were chosen from qualitative measures used in psychology studies and motivated from an ecological approach. The selected perceptual features were rated in two listening experiments using two different data sets. They were modeled both from symbolic (MIDI) and audio data using different sets of computational features. Ratings of emotional expression were predicted using the perceptual features. The results indicate that (1) at least some of the perceptual features are reliable estimates; (2) emotion ratings could be predicted by a small combination of perceptual features with an explained variance up to 90%; (3) the perceptual features could only to a limited extent be modeled using existing audio features. The results also clearly indicated that a small number of dedicated features were superior to a 'brute force' model using a large number of general audio features.

  • 17.
    Hansen, Kjetil Falkenberg
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Analysis of the acoustics and playing strategies of turntable scratching. 2011. In: Acta Acustica united with Acustica, ISSN 1610-1928, E-ISSN 1861-9959, Vol. 97, no 2, p. 303-314. Article in journal (Refereed)
    Abstract [en]

    Scratching performed by a DJ (disk jockey) is a skillful style of playing the turntable with complex musical output. This study focuses on the description of some of the acoustical parameters and playing strategies of typical scratch improvisations, and how these parameters typically are used for expressive performance. Three professional DJs were instructed to express different emotions through improvisations, and both audio and gestural data were recorded. Feature extraction and analysis of the recordings are based on a combination of audio and gestural data, instrument characteristics, and playing techniques. The acoustical and performance parameters extracted from the recordings give a first approximation of the functional ranges within which DJs normally play. Results from the analysis show that parameters which are important for other solo instrument performances, such as pitch, have less influence in scratching. Both differences and commonalities between the DJs’ playing styles were found. The impact that the findings of this work may have on constructing models for scratch performances is discussed.

  • 18. Rovetta, D.
    Sarti, A.
    De Sanctis, G.
    Fabiani, Marco
    Politecnico di Milano, Milano, Italy.
    Modelling Elastic Wave Propagation In Thin Plates. 2006. In: Proceedings of the 14th European Signal Processing Conference (EUSIPCO 2006), 2006. Conference paper (Refereed)
  • 19. Varni, Giovanna
    Dubus, Gaël
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Oksanen, Sami
    Volpe, Gualtiero
    Fabiani, Marco
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics.
    Kleimola, Jari
    Välimäki, Vesa
    Camurri, Antonio
    Interactive sonification of synchronisation of motoric behaviour in social active listening to music with mobile devices. 2012. In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 5, no 3-4, p. 157-173. Article in journal (Refereed)
    Abstract [en]

    This paper evaluates three different interactive sonifications of dyadic coordinated human rhythmic activity. An index of phase synchronisation of gestures was chosen as coordination metric. The sonifications are implemented as three prototype applications exploiting mobile devices: Sync’n’Moog, Sync’n’Move, and Sync’n’Mood. Sync’n’Moog sonifies the phase synchronisation index by acting directly on the audio signal and applying a nonlinear time-varying filtering technique. Sync’n’Move intervenes on the multi-track music content by making the single instruments emerge and hide. Sync’n’Mood manipulates the affective features of the music performance. The three sonifications were also tested against a condition without sonification.
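
A common way to compute a phase-synchronisation index for two movement signals, as used above as the coordination metric, is the mean phase coherence of their instantaneous phase difference obtained via the Hilbert transform. The sketch below shows that generic index; it is not guaranteed to match the exact definition used in the paper.

```python
import numpy as np
from scipy.signal import hilbert

def phase_sync_index(sig_a, sig_b):
    """Mean phase coherence of two equally sampled 1-D gesture signals:
    1.0 = perfectly phase-locked gestures, 0.0 = no consistent phase relation."""
    phase_a = np.angle(hilbert(sig_a - np.mean(sig_a)))
    phase_b = np.angle(hilbert(sig_b - np.mean(sig_b)))
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

# e.g. drive the Sync'n'Moog filter, the Sync'n'Move track gains, or the
# Sync'n'Mood performance parameters from this index in real time.
```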
