  • 101.
    Friberg, Anders
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Sundström, Andreas
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Preferred swing ratio in jazz as a function of tempo (1997). In: TMH-QPSR, Vol. 38, no. 4, pp. 019-027. Article in journal (Other academic)
    Abstract [en]

    In jazz music it is common to perform consecutive eighth notes with an alternating duration pattern of long-short. The exact duration ratio (the swing ratio) of the long-short pattern has been largely unknown. The first experiment describes measurements of the swing ratio in the ride cymbal from well-known jazz recordings. The second experiment was a production task where subjects adjusted the swing ratio of a computer generated performance to a preferred value. Both these experiments show that the swing ratio varies approximately linearly with tempo. The swing ratio can be as high as 3.5:1 at comparatively slow tempi around 120 bpm. When the tempo is fast the swing ratio reaches 1:1, that is, the eighth notes are performed evenly. The duration of the short note in the long-short pattern is approximately constant (≅ 100 ms) for medium to fast tempi.
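    As a quick illustration of the tempo dependence reported above, the sketch below (our own, not the authors' model) derives the swing ratio from the abstract's single observation that the short note stays roughly constant at 100 ms; it gives 4:1 at 120 bpm (the measured value was about 3.5:1) and reaches 1:1 at 300 bpm, so it is only a first approximation.

```python
# Illustrative sketch (not the authors' model): assume the short note in the
# long-short eighth-note pair is a constant ~100 ms, as reported for medium
# to fast tempi. The swing ratio then follows from the beat duration alone.

def swing_ratio(tempo_bpm: float, short_note_s: float = 0.100) -> float:
    """Swing ratio implied by a constant short-note duration.

    One beat (a pair of swung eighth notes) lasts 60/tempo seconds;
    the long note takes whatever the short note leaves over.
    """
    beat_s = 60.0 / tempo_bpm
    long_s = beat_s - short_note_s
    return long_s / short_note_s

for tempo in (120, 180, 240, 300):
    print(f"{tempo} bpm -> ratio {swing_ratio(tempo):.2f}:1")
# 120 bpm -> 4.00:1 (measured: ~3.5:1); 300 bpm -> 1.00:1 (even eighths)
```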

  • 102.
    Gleiser, Julieta E.
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Granqvist, Svante
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    A method for extracting vibrato parameters applied to violin performance (1998). In: TMH-QPSR, Vol. 39, no. 4, pp. 039-044. Article in journal (Other academic)
    Abstract [en]

    A method is presented which semi-automatically extracts the fundamental frequency and displays as continuous signals vibrato rate, vibrato extent and sound level. The method is tested on specially made recordings of violin music with piano accompaniment, using a small microphone mounted directly on the violin. The fundamental frequency was successfully extracted by means of a waveform correlation program. Likewise, vibrato rate and extent were extracted separately for each tone from the fundamental frequency signal after elimination of its DC component. The results seem promising, offering the opportunity of visual examination and measurement of changes in vibrato characteristics during performances of entire pieces of music. 
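    A minimal sketch of the general idea (not the authors' waveform-correlation program): given an already-extracted F0 contour, remove the DC component and estimate vibrato rate from zero crossings and extent from the RMS of the modulation. The function names and constants here are our own placeholders.

```python
import numpy as np

def vibrato_params(f0_hz: np.ndarray, fs: float):
    """Estimate vibrato rate (Hz) and extent (cents) from an F0 contour
    sampled at fs Hz. Hypothetical re-implementation of the general idea."""
    cents = 1200.0 * np.log2(f0_hz / f0_hz.mean())   # F0 in cents re. mean
    cents -= cents.mean()                            # remove DC component
    # Rate: count sign changes; each vibrato cycle has two zero crossings.
    crossings = np.sum(np.diff(np.signbit(cents).astype(int)) != 0)
    duration_s = len(cents) / fs
    rate_hz = crossings / (2.0 * duration_s)
    # Extent: amplitude of a roughly sinusoidal modulation = sqrt(2) * RMS.
    extent_cents = np.sqrt(2.0) * cents.std()
    return rate_hz, extent_cents

# Synthetic check: 5.5 Hz vibrato, +/-50 cents around 440 Hz.
fs = 100.0
t = np.arange(0, 2.0, 1.0 / fs)
f0 = 440.0 * 2 ** (50 * np.sin(2 * np.pi * 5.5 * t) / 1200)
print(vibrato_params(f0, fs))  # approximately (5.5, 50)
```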

  • 103. Goebl, Werner
    et al.
    Dixon, Simon
    De Poli, Giovanni
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Widmer, Gerhard
    Sense in expressive music performance: Data acquisition, computational studies, and models (2008). In: Sound to Sense - Sense to Sound: A state of the art in Sound and Music Computing / [ed] Polotti, Pietro; Rocchesso, Davide, Berlin: Logos Verlag, 2008, pp. 195-242. Chapter in book, part of anthology (Refereed)
  • 104.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID. KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Describing the emotional content of hip-hop DJ recordings (2008). In: The Neurosciences and Music III, Montreal: New York Academy of Sciences, 2008, p. 565. Conference paper (Refereed)
  • 105.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID. KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Principles for expressing emotional content in turntable scratching (2006). In: Proc. 9th International Conference on Music Perception & Cognition / [ed] Baroni, M.; Addessi, A. R.; Caterina, R.; Costa, M., Bologna: Bonomia University Press, 2006, pp. 532-533. Conference paper (Refereed)
    Abstract [en]

    Background: Scratching is a novel musical style that introduces the turntable as a musical instrument. Sounds are generated by moving vinyl records with one or two hands on the turntable and controlling amplitude with the crossfader with one hand. With this instrument mapping, complex gestural combinations that produce unique 'tones' can be achieved. These combinations have established a repertoire of playing techniques, and musicians (or DJs) know how to perform most of them. Scratching is normally not a melodically based style of music. It is very hard to produce tones with discrete and constant pitch. The sound is always strongly dependent on the source material on the record, and its timbre is not controllable in any ordinary way. However, tones can be made to sound different by varying the speed of the gesture and thereby creating pitch modulations. Consequently, timing and rhythm remain important candidates for expressive playing when compared to conventional musical instruments, with the additional possibility of modulating the pitch.

    Aims: The experiment presented aims to identify acoustical features that carry emotional content in turntable scratching performances, and to find relationships with how music is expressed with other instruments. An overall aim is to investigate why scratching is growing in popularity even if it a priori seems ineffective as an expressive interface.

    Method: A number of performances by experienced DJs were recorded. Speed of the record, mixer amplitude and the generated sounds were measured. The analysis focuses on finding the underlying principles for expressive playing by examining musicians' gestures and the musical performance. The principles found are compared to corresponding methods for expressing emotional intentions used for other instruments.

    Results: The data analysis is not completed yet. The results will give an indication of which acoustical features DJs use to play expressively on their instrument with its musically limited possibilities. Preliminary results show that the principles for expressive playing are in accordance with current research on expression.

    Conclusions: The results present some important features in turntable scratching that may help explain why it remains a popular instrument despite its rather unsatisfactory playability, both melodically and rhythmically.

  • 106. Istok, E.
    et al.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Huotilainen, M.
    Tervaniemi, M.
    Expressive timing facilitates the processing of phrase boundaries in music: Evidence from the event-related potential (2012). In: International Journal of Psychophysiology, ISSN 0167-8760, E-ISSN 1872-7697, Vol. 85, no. 3, pp. 403-404. Article in journal (Refereed)
  • 107. Istok, Eva
    et al.
    Tervaniemi, Mari
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Seifert, Uwe
    Effects of timing cues in music performances on auditory grouping and pleasantness judgments (2008). Conference paper (Refereed)
    Abstract [en]

    By means of varying timing, dynamics, pitch, and timbre, music performers put emphasis on important events of a musical piece and provide their listeners with acoustic cues that facilitate the perceptual and cognitive analysis of the musical structure. Evidence exists that the speed and the accuracy with which stimulus features are being processed contribute to how a stimulus itself is evaluated. In our study, we tested whether expressive timing facilitates auditory grouping and whether these timing variations influence pleasantness judgments. To this aim, participants listened to short atonal melodies containing one or two auditory groups and performed both a cognitive and an evaluative task. The expressive phrasing patterns of the excerpts were gradually modified, ranging from inverted phrasing through deadpan versions to exaggerated timing patterns. Reaction times decreased and hit rates increased with a more pronounced grouping structure, indicating that subtle timing variations alone do facilitate the formation of auditory groups in a musical context. Timing variations also modulated the direction of pleasantness ratings. However, the results suggest that the threshold for an expressive musical performance to become more pleasant than its deadpan counterpart presumably can be exceeded only by the simultaneous covariance of more than one acoustic cue.

  • 108. Istók, E.
    et al.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Huotilainen, M.
    Tervaniemi, M.
    Expressive Timing Facilitates the Neural Processing of Phrase Boundaries in Music: Evidence from Event-Related Potentials (2013). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 8, no. 1, article e55150. Article in journal (Refereed)
    Abstract [en]

    The organization of sound into meaningful units is fundamental to the processing of auditory information such as speech and music. In expressive music performance, structural units or phrases may become particularly distinguishable through subtle timing variations highlighting musical phrase boundaries. As such, expressive timing may support the successful parsing of otherwise continuous musical material. By means of the event-related potential technique (ERP), we investigated whether expressive timing modulates the neural processing of musical phrases. Musicians and laymen listened to short atonal scale-like melodies that were presented either isochronously (deadpan) or with expressive timing cues emphasizing the melodies' two-phrase structure. Melodies were presented in an active and a passive condition. Expressive timing facilitated the processing of phrase boundaries as indicated by decreased N2b amplitude and enhanced P3a amplitude for target phrase boundaries and larger P2 amplitude for non-target boundaries. When timing cues were lacking, task demands increased especially for laymen, as reflected by reduced P3a amplitude. In line with this, the N2b occurred earlier for musicians in both conditions, indicating generally faster target detection compared to laymen. Importantly, the elicitation of a P3a-like response to phrase boundaries marked by a pitch leap during passive exposure suggests that expressive timing information is automatically encoded and may lead to an involuntary allocation of attention towards significant events within a melody. We conclude that subtle timing variations in music performance prepare the listener for musical key events by directing and guiding attention towards their occurrences. That is, expressive timing facilitates the structuring and parsing of continuous musical material even when the auditory input is unattended.

  • 109.
    Juslin, P. N.
    et al.
    Uppsala University.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Bresin, R.
    Computational modelling of different aspects of expressivity: The GERM model (2002). In: Proceedings of ICMPC7 - 7th International Conference on Music Perception & Cognition, 2002, p. 13. Conference paper (Refereed)
  • 110.
    Juslin, Patrik N
    et al.
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Samhällsvetenskapliga fakulteten, Institutionen för psykologi.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Toward a computational model of expression in music performance: The GERM model (2002). In: Musicae scientiae, ISSN 1029-8649, E-ISSN 2045-4147, Vol. Special Issue 2001-2002, pp. 63-122. Article in journal (Refereed)
    Abstract [en]

    This article presents a computational model of expression in music performance: The GERM model. The purpose of the GERM model is to (a) describe the principal sources of variability in music performance, (b) emphasize the need to integrate different aspects of performance in a common model, and (c) provide some preliminaries (germ = a basis from which a thing may develop) for a computational model that simulates the different aspects. Drawing on previous research on performance, we propose that performance expression derives from four main sources of variability: (1) Generative Rules, which function to convey the generative structure in a musical manner (e.g., Clarke, 1988; Sundberg, 1988); (2) Emotional Expression, which is governed by the performer’s expressive intention (e.g., Juslin, 1997a); (3) Random Variations, which reflect internal timekeeper variance and motor delay variance (e.g., Gilden, 2001; Wing & Kristofferson, 1973); and (4) Movement Principles, which prescribe that certain features of the performance are shaped in accordance with biological motion (e.g., Shove & Repp, 1995). A preliminary version of the GERM model was implemented by means of computer synthesis. Synthesized performances were evaluated by musically trained participants in a listening test. The results from the test support a decomposition of expression in terms of the GERM model. Implications for future research on music performance are discussed.
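    For readers who want the decomposition in concrete form, here is a deliberately toy sketch of the GERM idea: each note's duration deviation is the product of four separately modeled components. The component shapes and magnitudes are invented placeholders, not the paper's actual rules.

```python
import random

def germ_deviation(note_index: int, n_notes: int, emotion_gain: float) -> float:
    """Return a per-note duration scale factor (1.0 = nominal).

    Toy stand-ins for the four GERM components; the real model is far
    more elaborate and rule-based.
    """
    phrase_pos = note_index / (n_notes - 1)
    g = 1.0 + 0.08 * phrase_pos ** 2              # Generative: phrase-final lengthening
    e = emotion_gain                              # Emotional: global expressive scaling
    r = random.gauss(1.0, 0.01)                   # Random: timekeeper/motor noise
    m = 1.0 + 0.02 * (1 - abs(2 * phrase_pos - 1))  # Movement: biological-motion-like arch
    return g * e * r * m

durations = [0.5] * 16                            # nominal quarter notes at 120 bpm
performed = [d * germ_deviation(i, len(durations), emotion_gain=1.05)
             for i, d in enumerate(durations)]
```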

  • 111. Juslin, Patrik N.
    et al.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Schoonderwaldt, Erwin
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Karlsson, Jessica
    Feedback learning of musical expressivity (2004). In: Musical excellence - Strategies and techniques to enhance performance / [ed] Aaron Williamon, Oxford University Press, 2004, pp. 247-270. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    Communication of emotion is of fundamental importance to the performance of music. However, recent research indicates that expressive aspects of performance are neglected in music education, with teachers spending more time and effort on technical aspects. Moreover, traditional strategies for teaching expressivity rarely provide informative feedback to the performer. In this chapter we explore the nature of expressivity in music performance and evaluate novel methods for teaching expressivity based on recent advances in musical science, psychology, technology, and acoustics. First, we provide a critical discussion of traditional views on expressivity, and dispel some of the myths that surround the concept of expressivity. Then, we propose a revised view of expressivity based on modern research. Finally, a new and empirically based approach to learning expressivity termed cognitive feedback is described and evaluated. The goal of cognitive feedback is to allow the performer to compare a model of his or her playing to an “optimal” model based on listeners’ judgments of expressivity. This method is being implemented in user-friendly software, which is evaluated in collaboration with musicians and music teachers.

  • 112. Juslin, Patrik N.
    et al.
    Karlsson, Jessika
    Lindström, Erik
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Schoonderwaldt, Erwin
    Play it again with feeling: Computer feedback in musical communication of emotions (2006). In: Journal of experimental psychology. Applied, ISSN 1076-898X, E-ISSN 1939-2192, Vol. 12, no. 2, pp. 79-95. Article in journal (Refereed)
    Abstract [en]

    Communication of emotions is of crucial importance in music performance. Yet research has suggested that this skill is neglected in music education. This article presents and evaluates a computer program that automatically analyzes music performances and provides feedback to musicians in order to enhance their communication of emotions. Thirty-six semiprofessional jazz/rock guitar players were randomly assigned to one of 3 conditions: (1) feedback from the computer program, (2) feedback from music teachers, and (3) repetition without feedback. Performance measures revealed the greatest improvement in communication accuracy for the computer program, but usability measures indicated that certain aspects of the program could be improved. Implications for music education are discussed.

  • 113.
    Karipidou, Kelly
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Ahnlund, Josefin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Alexanderson, Simon
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Tal-kommunikation.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Computer Analysis of Sentiment Interpretation in Musical Conducting (2017). In: Proceedings - 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017, IEEE, 2017, pp. 400-405, article id 7961769. Conference paper (Refereed)
    Abstract [en]

    This paper presents a unique dataset consisting of 20 recordings of the same musical piece, conducted with 4 different musical intentions in mind. The upper body and baton motion of a professional conductor was recorded, as well as the sound of each instrument in a professional string quartet following the conductor. The dataset is made available for benchmarking of motion recognition algorithms. An HMM-based emotion intent classification method is trained with subsets of the data, and classification of other subsets of the data shows, firstly, that the motion of the baton communicates energetic intention to a high degree; secondly, that the conductor’s torso, head and other arm convey calm intention to a high degree; and thirdly, that positive vs negative sentiments are communicated to a high degree through other channels than the body and baton motion – most probably through facial expression and muscle tension conveyed through articulated hand and finger motion. The long-term goal of this work is to develop a computer model of the entire conductor-orchestra communication process; the studies presented here indicate that computer modeling of the conductor-orchestra communication is feasible.
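    The classification scheme described (one HMM per intention, maximum-likelihood labeling) could look roughly like the sketch below. It assumes the third-party hmmlearn package; it is not the authors' code, and the feature dimensions and model sizes are placeholders.

```python
import numpy as np
from hmmlearn import hmm  # assumption: hmmlearn is installed; not from the paper

def train_models(data_by_intent):
    """data_by_intent: {intent: list of (T_i, D) motion-feature arrays}.
    Fits one Gaussian HMM per expressive intention."""
    models = {}
    for intent, seqs in data_by_intent.items():
        X = np.vstack(seqs)                       # concatenated observations
        lengths = [len(s) for s in seqs]          # per-sequence lengths
        m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[intent] = m
    return models

def classify(models, seq):
    """Label a new (T, D) sequence by the model with the highest log-likelihood."""
    return max(models, key=lambda k: models[k].score(seq))
```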

  • 114. Kleber, Boris
    et al.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Zeitouni, Anthony
    Zatorre, Robert
    Experience-dependent modulation of right anterior insula and sensorimotor regions as a function of noise-masked auditory feedback in singers and nonsingers (2016). In: NeuroImage, ISSN 1053-8119, E-ISSN 1095-9572, Vol. 147, pp. 97-110. Article in journal (Refereed)
    Abstract [en]

    Previous studies on vocal motor production in singing suggest that the right anterior insula (AI) plays a role in experience-dependent modulation of feedback integration. Specifically, when somatosensory input was reduced via anesthesia of the vocal fold mucosa, right AI activity was down regulated in trained singers. In the current fMRI study, we examined how masking of auditory feedback affects pitch-matching accuracy and corresponding brain activity in the same participants. We found that pitch-matching accuracy was unaffected by masking in trained singers yet declined in nonsingers. The corresponding brain region with the most differential and interesting activation pattern was the right AI, which was up regulated during masking in singers but down regulated in nonsingers. Likewise, its functional connectivity with inferior parietal, frontal, and voice-relevant sensorimotor areas was increased in singers yet decreased in nonsingers. These results indicate that singers relied more on somatosensory feedback, whereas nonsingers depended more critically on auditory feedback. When comparing auditory vs somatosensory feedback involvement, the right anterior insula emerged as the only region for correcting intended vocal output by modulating what is heard or felt as a function of singing experience. We propose the right anterior insula as a key node in the brain's singing network for the integration of signals of salience across multiple sensory and cognitive domains to guide vocal behavior.

  • 115. Kleber, Boris
    et al.
    Zeitouni, Anthony G.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Zatorre, Robert J.
    Experience-Dependent Modulation of Feedback Integration during Singing: Role of the Right Anterior Insula (2013). In: Journal of Neuroscience, ISSN 0270-6474, E-ISSN 1529-2401, Vol. 33, no. 14, pp. 6070-6080. Article in journal (Refereed)
    Abstract [en]

    Somatosensation plays an important role in the motor control of vocal functions, yet its neural correlate and relation to vocal learning is not well understood. We used fMRI in 17 trained singers and 12 nonsingers to study the effects of vocal-fold anesthesia on the vocal-motor singing network as a function of singing expertise. Tasks required participants to sing musical target intervals under normal conditions and after anesthesia. At the behavioral level, anesthesia altered pitch accuracy in both groups, but singers were less affected than nonsingers, indicating an experience-dependent effect of the intervention. At the neural level, this difference was accompanied by distinct patterns of decreased activation in singers (cortical and subcortical sensory and motor areas) and nonsingers (subcortical motor areas only) respectively, suggesting that anesthesia affected the higher-level voluntary (explicit) motor and sensorimotor integration network more in experienced singers, and the lower-level (implicit) subcortical motor loops in nonsingers. The right anterior insular cortex (AIC) was identified as the principal area dissociating the effect of expertise as a function of anesthesia by three separate sources of evidence. First, it responded differently to anesthesia in singers (decreased activation) and nonsingers (increased activation). Second, functional connectivity between AIC and bilateral A1, M1, and S1 was reduced in singers but augmented in nonsingers. Third, increased BOLD activity in right AIC in singers was correlated with larger pitch deviation under anesthesia. We conclude that the right AIC and sensory-motor areas play a role in experience-dependent modulation of feedback integration for vocal motor control during singing.

  • 116.
    Källblad, Anna
    et al.
    University College of Dance Stockholm, Sweden .
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Svensson, Karl
    SKIMRA ljusproduktion STHLM .
    Sjöstedt Edelholm, Elisabet
    University College of Dance Stockholm, Sweden .
    Hoppsa Universum – An interactive dance installation for children (2008). In: Proceedings of New Interfaces for Musical Expression (NIME), Genova, 2008, pp. 128-133. Conference paper (Refereed)
    Abstract [en]

    It started with an idea to create an empty space in which you activated music and light as you moved around. In responding to the music and lighting, you would activate more or different sounds and thereby communicate with the space through your body. This led to an artistic research project in which children’s spontaneous movement was observed, a choreography was made based on the children’s movements, and music was written and recorded for the choreography. This music was then decomposed and choreographed into an empty space at Botkyrka konsthall, creating an interactive dance installation. It was realized using an interactive sound and light system in which 5 video cameras detected the motion in the room, connected to a 4-channel sound system and a set of 14 light modules. During five weeks, people of all ages came to dance and move around in the installation. The installation attracted a wide range of people, and the tentative evaluation indicates that it was very positively received and that it encouraged free movement in the intended way. Besides observing the activity in the installation, interviews were conducted with schoolchildren aged 7 who had participated in the installation.

  • 117. Lindborg, Per Magnus
    et al.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Personality Traits Bias the Perceived Quality of Sonic Environment (2016). In: Applied Sciences: APPS, ISSN 1454-5101, E-ISSN 1454-5101, Vol. 6, no. 12, article id 405. Article in journal (Refereed)
    Abstract [en]

    There have been few empirical investigations of how individual differences influence the perception of the sonic environment. The present study included the Big Five traits and noise sensitivity as personality factors in two listening experiments (n = 43, n = 45). Recordings of urban and restaurant soundscapes that had been selected based on their type were rated for Pleasantness and Eventfulness using the Swedish Soundscape Quality Protocol. Multivariate multiple regression analysis showed that ratings depended on the type and loudness of both kinds of sonic environments and that the personality factors made a small yet significant contribution. Univariate models explained 48% (cross-validated adjusted R2) of the variation in Pleasantness ratings of urban soundscapes, and 35% of Eventfulness. For restaurant soundscapes the percentages explained were 22% and 21%, respectively. Emotional stability and noise sensitivity were notable predictors whose contribution to explaining the variation in quality ratings was between one-tenth and nearly half of the soundscape indicators, as measured by squared semipartial correlation. Further analysis revealed that 36% of noise sensitivity could be predicted by broad personality dimensions, replicating previous research. Our study lends empirical support to the hypothesis that personality traits have a significant though comparatively small influence on the perceived quality of sonic environments.
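    A sketch of this kind of analysis, using entirely synthetic placeholder data, showing cross-validated R2 for a linear model that combines soundscape descriptors with personality scores (scikit-learn assumed; none of this is the study's actual data or code):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 88                                   # placeholder sample size (43 + 45)
X = np.column_stack([
    rng.normal(60, 8, n),                # loudness (dB-like placeholder)
    rng.integers(0, 2, n),               # soundscape type (urban/restaurant)
    rng.normal(0, 1, n),                 # emotional stability (z-score)
    rng.normal(0, 1, n),                 # noise sensitivity (z-score)
])
# Invented ground truth so the example runs end to end.
y = 5 - 0.05 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.5, n)

r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f}")
```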

  • 118.
    Lindborg, Per Magnus
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH. Nanyang Technological University, Singapore.
    Friberg, Anders K.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Personality traits influence perception of soundscape quality. Manuscript (preprint) (Other academic)
    Abstract [en]

    We are interested in how the perceived soundscape quality varies across everyday environments, and in predicting ratings using psychoacoustic descriptors and individual personality traits. Two listening experiments (n = 43, n = 42) with recordings of rural and urban parks, shops, and restaurants were conducted. Loudness, Fluctuation strength and other descriptors of soundscapes were extracted, and participant Big Five dimensions and Noise sensitivity were estimated. In Experiment 1, quality ratings depended strongly on soundscape type and weakly on traits such as Emotional stability. In Experiment 2, a multivariate regression model explained 25% of Pleasantness and 30% of Eventfulness (cross-validated adjusted R2). The contribution of the personality traits reached about a tenth of that of the psychoacoustic descriptors. 36% of Noise sensitivity could be predicted by Big Five dimensions. The article discusses the results in light of personality theory. Both broad and narrow personality traits might be helpful to understand people's appraisal of sonic environments.

  • 119.
    Lindborg, PerMagnus
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik. Nanyang Technological University, Singapore.
    Friberg, Anders K
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Colour Association with Music is Mediated by Emotion: Evidence from an Experiment using a CIE Lab Interface and Interviews (2015). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 10, no. 12, article id e0144013. Article in journal (Refereed)
    Abstract [en]

    Crossmodal associations may arise at neurological, perceptual, cognitive, or emotional levels of brain processing. Higher-level modal correspondences between musical timbre and visual colour have been previously investigated, though with limited sets of colour. We developed a novel response method that employs a tablet interface to navigate the CIE Lab colour space. The method was used in an experiment where 27 film music excerpts were presented to participants (n = 22) who continuously manipulated the colour and size of an on-screen patch to match the music. Analysis of the data replicated and extended earlier research, for example, that happy music was associated with yellow, music expressing anger with large red colour patches, and sad music with smaller patches towards dark blue. Correlation analysis suggested patterns of relationships between audio features and colour patch parameters. Using partial least squares regression, we tested models for predicting colour patch responses from audio features and ratings of perceived emotion in the music. Parsimonious models that included emotion robustly explained between 60% and 75% of the variation in each of the colour patch parameters, as measured by cross-validated R2. To illuminate the quantitative findings, we performed a content analysis of structured spoken interviews with the participants. This provided further evidence of a significant emotion mediation mechanism, whereby people tended to match colour association with the perceived emotion in the music. The mixed method approach of our study gives strong evidence that emotion can mediate crossmodal association between music and visual colour. The CIE Lab interface promises to be a useful tool in perceptual ratings of music and other sounds.
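    A sketch of the modeling step, assuming scikit-learn's PLSRegression and using random placeholder data in place of the study's audio features, emotion ratings, and colour patch responses:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_excerpts = 27                           # matches the number of excerpts above
X = rng.normal(size=(n_excerpts, 8))      # placeholder audio features + emotion ratings
Y = rng.normal(size=(n_excerpts, 4))      # placeholder mean L*, a*, b*, patch size

pls = PLSRegression(n_components=3)
Y_hat = cross_val_predict(pls, X, Y, cv=5)

# Cross-validated R^2 per colour patch parameter.
for j, name in enumerate(["L*", "a*", "b*", "size"]):
    ss_res = ((Y[:, j] - Y_hat[:, j]) ** 2).sum()
    ss_tot = ((Y[:, j] - Y[:, j].mean()) ** 2).sum()
    print(name, "cross-validated R^2:", 1 - ss_res / ss_tot)
```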

  • 120.
    Lindeberg, Tony
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Idealized computational models for auditory receptive fields (2015). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 10, no. 3, article id e0119032. Article in journal (Refereed)
    Abstract [en]

    We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales.

    For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions.

    When applied to the definition of a second-layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the logspectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions.

    It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals.
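    For reference, the classical Gammatone impulse response that the paper re-derives axiomatically (the generalized family with extra degrees of freedom is not sketched here) can be written down directly: g(t) = t^(n-1) exp(-2πbt) cos(2πf_c t) for t >= 0.

```python
import numpy as np

def gammatone(fs: float, f_c: float, n: int = 4, b: float = 100.0,
              duration_s: float = 0.05) -> np.ndarray:
    """Standard Gammatone impulse response sampled at fs Hz.

    n is the filter order, b the bandwidth parameter (Hz), f_c the
    centre frequency (Hz). Parameter values here are illustrative.
    """
    t = np.arange(0, duration_s, 1.0 / fs)
    g = t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * f_c * t)
    return g / np.abs(g).sum()            # crude normalization

ir = gammatone(fs=16000, f_c=1000)        # 4th-order filter centred at 1 kHz
```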

  • 121.
    Lindeberg, Tony
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Scale-space theory for auditory signals (2015). In: Scale Space and Variational Methods in Computer Vision: 5th International Conference, SSVM 2015, Lège-Cap Ferret, France, May 31 - June 4, 2015, Proceedings / [ed] J.-F. Aujol et al., Springer, 2015, Vol. 9087, pp. 3-15. Conference paper (Refereed)
    Abstract [en]

    We show how the axiomatic structure of scale-space theory can be applied to the auditory domain and be used for deriving idealized models of auditory receptive fields via scale-space principles. For defining a time-frequency transformation of a purely temporal signal, it is shown that the scale-space framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal window functions. Applied to the definition of a second layer of receptive fields from the spectrogram, it is shown that the scale-space framework leads to two canonical families of spectro-temporal receptive fields, using a combination of Gaussian filters over the logspectral domain with either Gaussian filters or a cascade of first-order integrators over the temporal domain. These spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Such idealized models of auditory receptive fields respect auditory invariances, can be used for computing basic auditory features for audio processing and lead to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields in the inferior colliculus (ICC) and the primary auditory cortex (A1).

  • 122. Lindström, E.
    et al.
    Camurri, A.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Volpe, G.
    Rinman, M. L.
    Affect, attitude and evaluation of multisensory performances (2005). In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 34, no. 1, pp. 69-86. Article in journal (Refereed)
    Abstract [en]

    The EU-IST project MEGA (Multisensory Expressive Gesture Applications; see www.megaproject.org) addresses innovative technologies for multimodal interactive systems in artistic scenarios. Basic research on expressive communication in music, gesture and dance has been focused by EU-IST-funded European researchers in psychology, technology and computer engineering. The output from this cooperation with artists has also revealed ideas and innovations for applications in social, artistic and communicative entertainment. However, even the most careful basic research and computer engineering could never estimate the real efficiency and benefit of such new expressive applications. The purpose of this article, therefore, is to get feedback from the audience and the artists/performers/players at three public MEGA events: the interactive music concert Allegoria dell'opinione verbale, a dance performance by Groove Machine and a public game (Ghost in the Cave). General attitude, perceived communication/affect and practical efficiency were evaluated by questionnaires. Results showed that: (a) the performers were able to control the expressive output within the application, (b) the audience was positive to each event,

  • 123.
    Masko, Jonas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Fischer Friberg, Jonathan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Software tools for automatic music performance (2014). In: 1st international workshop on computer and robotic Systems for Automatic Music Performance (SAMP14), Venice, 2014. Conference paper (Refereed)
    Abstract [en]

    Two new computer programs are presented for the purpose of facilitating new research in music performance modeling. Director Musices (DM) is a new implementation of the user interface for the KTH rule system. It includes a new integrated player and several other improvements. The automatic sampler facilitates the sampling of a MIDI-controlled instrument, such as the Disklavier piano from Yamaha. Both programs are open source and cross-platform, written in Java and, in the case of DM, also in different Lisp dialects.

  • 124. Mathews, M. V.
    et al.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Bennett, G.
    Sapp, C.
    Sundberg, Johan
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    A marriage of the Director Musices program and the conductor program (2003). In: Proceedings of the Stockholm Music Acoustics Conference, August 6-9, 2003 (SMAC 03), Stockholm, Sweden, 2003, Vol. 1, pp. 13-16. Conference paper (Refereed)
    Abstract [en]

    This paper will describe an ongoing collaboration between the authors to combine the Director Musices and Conductor programs in order to achieve a more expressive and socially interactive performance of a midi file score by an electronic orchestra. Director Musices processes a “square” midi file, adjusting the dynamics and timing of the notes to achieve the expressive performance of a trained musician. The Conductor program and the Radio-baton allow a conductor, wielding an electronic baton, to follow and synchronize with other musicians, for example to provide an orchestral accompaniment to an operatic singer. These programs may be particularly useful for student soloists who wish to practice concertos with orchestral accompaniments. 

  • 125. Parncutt, R.
    et al.
    Bisesi, E.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    A Preliminary Computational Model of Immanent Accent Salience in Tonal Music (2013). In: Proceedings of the Sound and Music Computing Conference 2013, SMC 2013, Stockholm, Sweden / [ed] Roberto Bresin, 2013, pp. 335-340. Conference paper (Refereed)
    Abstract [en]

    We describe the first stage of a two-stage semialgorithmic approach to music performance rendering. In the first stage, we estimate the perceptual salience of immanent accents (phrasing, metrical, melodic, harmonic) in the musical score. In the second, we manipulate timing, dynamics and other performance parameters in the vicinity of immanent accents (e.g., getting slower and/or louder near an accent). Phrasing and metrical accents emerge from the hierarchical structure of phrasing and meter; their salience depends on the hierarchical levels that they demarcate. Melodic accents follow melodic leaps; they are strongest at contour peaks and (to a lesser extent) valleys; and their salience depends on the leap interval and the distance of the target tone from the local mean pitch. Harmonic accents depend on local dissonance (roughness, non-harmonicity, non-diatonicity) and chord/key changes. The algorithm is under development and is being tested by comparing its predictions with music analyses, recorded performances and listener evaluations.
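    A toy rendering of the melodic-accent component as described above (accents follow leaps, peaks outscore valleys, and salience grows with leap size and with distance from the local mean pitch); the thresholds and weights are invented for illustration:

```python
def melodic_accent_salience(pitches, window=5):
    """Per-note melodic accent salience from MIDI pitch numbers.

    Toy weights and thresholds; the paper's model is more detailed.
    """
    saliences = [0.0] * len(pitches)
    for i in range(1, len(pitches) - 1):
        leap = abs(pitches[i] - pitches[i - 1])          # semitones
        if leap < 3:                                     # ignore small steps
            continue
        lo, hi = max(0, i - window), min(len(pitches), i + window + 1)
        local_mean = sum(pitches[lo:hi]) / (hi - lo)
        distance = abs(pitches[i] - local_mean)
        is_peak = pitches[i] > pitches[i - 1] and pitches[i] >= pitches[i + 1]
        weight = 1.0 if is_peak else 0.6                 # peaks > valleys
        saliences[i] = weight * (0.1 * leap + 0.05 * distance)
    return saliences

print(melodic_accent_salience([60, 62, 69, 67, 65, 64, 60]))
```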

  • 126.
    Rinman, Marie Louise
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Friberg, Anders
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Bendiksen, B.
    Cirotteau, D.
    Dahl, Sofia
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Kjellmo, I.
    Mazzarino, B.
    Camurri, A.
    Ghost in the Cave: an interactive collaborative game using non-verbal communication (2004). In: GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION / [ed] Camurri, A; Volpe, G, Berlin: Springer Verlag, 2004, pp. 549-556. Conference paper (Refereed)
    Abstract [en]

    The interactive game environment, Ghost in the Cave, presented in this short paper, is a work still in progress. The game involves participants in an activity using non-verbal emotional expressions. Two teams use expressive gestures in either voice or body movements to compete. Each team has an avatar controlled either by singing into a microphone or by moving in front of a video camera. Participants/players control their avatars by using acoustical or motion cues. The avatar is navigated in a 3D distributed virtual environment using the Octagon server and player system. The voice input is processed using a musical cue analysis module yielding performance variables such as tempo, sound level and articulation as well as an emotional prediction. Similarly, movements captured from a video camera are analyzed in terms of different movement cues. The target group is young teenagers and the main purpose is to encourage creative expressions through new forms of collaboration.

  • 127. Rinman, M-L
    et al.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Kjellmo, I.
    Camurri, A.
    Cirotteau, D.
    Dahl, S.
    Mazzarino, B.
    Bendiksen, B.
    McCarthy, H.
    EPS - an interactive collaborative game using non-verbal communication (2003). In: Proceedings of the Stockholm Music Acoustics Conference, August 6-9, 2003 (SMAC 03), Stockholm, Sweden / [ed] Bresin, R., 2003, Vol. 2, pp. 561-563. Conference paper (Refereed)
    Abstract [en]

    The interactive game environment EPS (expressive performance space), presented in this short paper, is a work still in progress. EPS involves participants in an activity using non-verbal emotional expressions. Two teams use expressive gestures in either voice or body movements to compete. Each team has an avatar controlled either by singing into a microphone or by moving in front of a video camera. Participants/players control their avatars by using acoustical or motion cues. The avatar is navigated/moved around in a 3D distributed virtual environment using the Octagon server and player system. The voice input is processed using a musical cue analysis module yielding performance variables such as tempo, sound level and articulation as well as an emotional prediction. Similarly, movements captured from the video camera are analyzed in terms of different movement cues. The target group is children aged 13-16 and the purpose is to elaborate new forms of collaboration.

  • 128. Ross, J.
    et al.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Generative Performance Rules and Folksong Performance (2000). In: Sixth International Conference on Music Perception and Cognition, Keele, UK, August 2000 / [ed] Woods, C., Luck, G., Brochard, R., Seddon, F., & Sloboda, J. A., 2000. Conference paper (Refereed)
  • 129.
    Schoonderwaldt, Erwin
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Towards a rule-based model for violin vibrato (2001). In: Proc of the Workshop on Current Research Directions in Computer Music, 2001, pp. 61-64. Conference paper (Refereed)
    Abstract [en]

    Vibrato is one of the most important expressive parameters that players can control when rendering a piece of music. The simulation of vibrato, in systems for automatic music performance, is still an open problem. A mere regular periodic modulation of pitch generally yields unsatisfactory results, sounding both unnatural and mechanical. An appropriate control of vibrato rate and vibrato extent is a major requirement of a successful vibrato model. The goal of the present work was to develop a generative, rule-based model for expressive violin vibrato. Measurements of vibrato as performed by professional violinists were used for this purpose. The model generates vibrato rate and extent envelopes, which are used to control a sampled violin synthesizer.
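    One way such rate and extent envelopes could drive a pitch contour, sketched with placeholder envelopes rather than the rule system's actual output (the paper controls a sampled violin synthesizer; here we only compute the frequency trajectory):

```python
import numpy as np

def vibrato_contour(f0_hz, rate_hz, extent_cents, fs=1000.0):
    """Frequency trajectory from per-sample rate (Hz) and extent (cents)
    envelopes of equal length, sampled at fs Hz."""
    rate_hz = np.asarray(rate_hz)
    extent_cents = np.asarray(extent_cents)
    phase = 2 * np.pi * np.cumsum(rate_hz) / fs      # integrate rate -> phase
    return f0_hz * 2 ** (extent_cents * np.sin(phase) / 1200)

n = 1000                                             # one second at fs = 1000
rate = np.linspace(5.0, 6.5, n)                      # rate rises over the tone
extent = 40 * np.hanning(n)                          # extent fades in and out
f = vibrato_contour(440.0, rate, extent)
```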

  • 130.
    Schoonderwaldt, Erwin
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Bresin, Roberto
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Tal, musik och hörsel, TMH.
    Juslin, P. N.
    Uppsala University.
    A system for improving the communication of emotion in music performance by feedback learning (2002). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 111, no. 5, p. 2471. Article in journal (Refereed)
    Abstract [en]

    Expressivity is one of the most important aspects of music performance. However, in music education, expressivity is often overlooked in favor of technical abilities. This could possibly depend on the difficulty in describing expressivity, which makes it problematic to provide the student with specific feedback. The aim of this project is to develop a computer program, which will improve the students’ ability in communicating emotion in music performance. The expressive intention of a performer can be coded in terms of performance parameters (cues), such as tempo, sound level, timbre, and articulation. Listeners’ judgments can be analyzed in the same terms. An algorithm was developed for automatic cue extraction from audio signals. Using note onset–offset detection, the algorithm yields values of sound level, articulation, IOI, and onset velocity for each note. In previous research, Juslin has developed a method for quantitative evaluation of performer–listener communication. This framework forms the basis of the present program. Multiple regression analysis on performances of the same musical fragment, played with different intentions, determines the relative importance of each cue and the consistency of cue utilization. Comparison with built‐in listener models, simulating perceived expression using a regression equation, provides detailed feedback regarding the performers’ cue utilization.
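    The per-note cues are straightforward once onsets and offsets are known; a minimal sketch (starting from event times in seconds rather than audio, unlike the article's extraction algorithm):

```python
def note_cues(onsets, offsets):
    """Per-note IOI and articulation from note event times (seconds).

    IOI = time to the next onset; articulation = sounding duration
    relative to the IOI (1.0 = fully legato).
    """
    cues = []
    for i in range(len(onsets) - 1):
        ioi = onsets[i + 1] - onsets[i]
        articulation = (offsets[i] - onsets[i]) / ioi
        cues.append({"ioi": ioi, "articulation": articulation})
    return cues

print(note_cues([0.0, 0.5, 1.1], [0.45, 0.8, 1.6]))
# first note near-legato (0.9), second clearly more detached (0.5)
```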

  • 131. Sundberg, J.
    et al.
    Friberg, Anders
    KTH, Tidigare Institutioner, Talöverföring och musikakustik.
    Bresin, Roberto
    KTH, Tidigare Institutioner, Talöverföring och musikakustik.
    Attempts to reproduce a pianist's expressive timing with director musices performance rules (2003). In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 32, no. 3, pp. 317-325. Article in journal (Refereed)
    Abstract [en]

    The Director Musices generative grammar of music performance is a system of context-dependent rules that automatically introduces expressive deviations in performances of input score files. A number of these rules concern timing. In this investigation, the ability of such rules to reproduce a professional pianist's timing deviations from nominal note inter-onset intervals is examined. Rules affecting tone inter-onset intervals were first tested one by one for the various sections of the excerpt, and then in combinations. Results were evaluated in terms of the correlation between the deviations made by the pianist and by the rule system. It is found that rules reflecting the phrase structure produced high correlations in some sections. On the other hand, some rules failed to produce significant correlation with the pianist's deviations, and thus seemed irrelevant to the particular performance analysed. It is concluded that phrasing was a prominent principle in this performance and that rule combinations have to change between sections in order to match this pianist's deviations.
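    The evaluation criterion is a plain correlation between deviation profiles; a minimal sketch with placeholder deviation vectors:

```python
import numpy as np

# Placeholder per-note IOI deviations (ms from nominal); not measured data.
pianist = np.array([12.0, -5.0, 3.0, 30.0, -8.0, 2.0])
rule    = np.array([10.0, -2.0, 1.0, 25.0, -6.0, 0.0])

r = np.corrcoef(pianist, rule)[0, 1]   # Pearson correlation between profiles
print(f"correlation r = {r:.2f}")
```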

  • 132. Sundberg, Johan
    et al.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Stopping running and stopping a piece of music. Comparing locomotion and music performance (1996). In: Proc of NAM 96, Nordic Acoustical Meeting / [ed] Riederer, K., & Lahti, T., 1996, pp. 351-358. Conference paper (Refereed)
  • 133.
    Sundberg, Johan
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Common Secrets of Musicians and Listeners - An analysis-by-synthesis Study of Musical Performance (1991). In: Representing Musical Structure / [ed] Howell, P.; West, R.; Cross, I., London: Academic Press, 1991, pp. 161-197. Chapter in book, part of anthology (Refereed)
  • 134.
    Sundberg, Johan
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Music and locomotion. A study of the perception of tones with level envelopes replicating force patterns of walking (1992). In: STL-QPSR, Vol. 33, no. 4, pp. 109-122. Article in journal (Other academic)
    Abstract [en]

    Music listening often produces associations to locomotion. This suggests that some patterns in music are similar to those perceived during locomotion. The present investigation tests the hypothesis that the sound level envelope of tones alludes to force patterns associated with walking and dancing. Six examples of such force patterns were recorded using a force platform, and the vertical components were translated from kg to dB and used as level envelopes for tones. Sequences of four copies of each of these tones were presented with four different fixed inter-onset times. Music students were asked to characterize these sequences in three tests. In one test, the subjects were free to use any expression, and the occurrence of motion words in the responses was examined. In another test, they were asked to describe, if possible, the motion characteristics of the sequences, and the number of blank responses was studied. In the third test, they were asked to describe the sequences along 24 motion adjective scales, and the responses were submitted to a factor analysis. The results from the three tests showed a reasonable degree of coherence, suggesting that associations to locomotion are likely to occur under these conditions, particularly when (1) the inter-onset time is similar to the inter-step time typical of walking, and (2) the inter-onset time agreed with that observed when the gait patterns were recorded. The latter observation suggests that the different motion patterns thus translated to sound level envelopes also may convey information on the type of motion.
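    The abstract does not give the exact kg-to-dB mapping. Purely as a labeled assumption, treating force as proportional to sound pressure gives a 20·log10 conversion relative to a reference force:

```python
import numpy as np

def force_to_level_db(force_kg: np.ndarray, ref_kg: float) -> np.ndarray:
    """One plausible kg-to-dB mapping (an assumption, not the paper's):
    force taken as proportional to sound pressure, so level follows
    20*log10 of the force relative to a reference force."""
    return 20.0 * np.log10(np.maximum(force_kg, 1e-6) / ref_kg)

steps = np.array([70.0, 85.0, 60.0, 90.0, 75.0])   # placeholder force samples
print(force_to_level_db(steps, ref_kg=70.0))
```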

  • 135.
    Sundberg, Johan
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Music and locomotion. Perception of tones with level envelopes replicating force patterns of walking (1994). In: Proc. of SMAC ’93, Stockholm Music Acoustics Conference, 1994, pp. 136-141. Conference paper (Refereed)
  • 136.
    Sundberg, Johan
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Musicians’ and nonmusicians’ sensitivity to differences in music performance (1988). In: STL-QPSR, Vol. 29, no. 4, pp. 077-081. Article in journal (Other academic)
    Abstract [en]

    A set of ordered context-dependent rules for the automatic transformation of a music score to the corresponding musical performance has been developed, using an analysis-by-synthesis method [Sundberg, J. (1987): "Computer synthesis of music performance," pp. 52-69 in (J. Sloboda, ed.) Generative Processes in Music, Clarendon, Oxford]. The rules are implemented in the LeLisp language on a Macintosh microcomputer that controls a synthesizer via a MIDI interface. The rules manipulate sound level, fundamental frequency, vibrato extent, and duration of the tones. The present experiment was carried out in order to find out if the sensitivity to these effects differed between musicians and nonmusicians. Pairs of performances of the same examples were presented in different series, one for each rule. Between the pairs in a series, the performance differences were varied within wide limits and, in the first pair in each series, the difference was large, so as to catch the subject's attention. Subjects were asked to decide whether the two performances were identical. The results showed that musicians had a clearly greater sensitivity. The pedagogical implications of this finding will be discussed.

  • 137.
    Sundberg, Johan
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Rules for automated performance of ensemble music (1989). In: Contemporary Music Review, ISSN 0749-4467, E-ISSN 1477-2256, Vol. 3, pp. 89-109. Article in journal (Refereed)
    Abstract [en]

    Recently developed parts of a computer program are presented that contain a rule system which automatically converts music scores to musical performance, and which, in a sense, can be regarded as a model of a musically gifted player. The development of the rule system has followed the analysis-by-synthesis strategy; various rules have been formulated according to the suggestions of a professional string quartet violinist and teacher of ensemble playing. The effects of various rules concerning synchronization and timing, and also tuning, in performance of ensemble music are evaluated by a listening panel of professional musicians. Further support for the notion of melodic charge, previously introduced and playing a prominent role in the performance rules, is found in a correlation with fine tuning of intervals.

  • 138.
    Sundberg, Johan
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Rules for automatized performance of ensemble music1987Inngår i: STL-QPSR, Vol. 28, nr 4, s. 057-078Artikkel i tidsskrift (Annet vitenskapelig)
    Abstract [en]

    Recently developed parts of a computer program are presented that contain a rule system which automatically converts music scores to musical performance, and which, in a sense, can be regarded as a model of a musically gifted player. The development of the rule system has followed the analysis-by-synthesis strategy; various rules have been formulated after having been suggested by a professional string quartet violinist and teacher of ensemble playing. The effects of various rules concerning synchronization, timing, and tuning in the performance of ensemble music are evaluated by a listening panel of professional musicians. Further support for the notion of melodic charge, previously introduced and playing a prominent role in the performance rules, is found in a correlation with fine tuning of intervals.

  • 139.
    Sundberg, Johan
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Threshold and Preference Quantities of Rules for Music Performance1991Inngår i: Music Perception, ISSN 0730-7829, E-ISSN 1533-8312, Vol. 9, nr 1, s. 71-92Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    In an analysis-by-synthesis investigation of music performance, rules have been developed that describe when and how expressive deviations are made from the nominal music notation in the score. Two experiments that consider the magnitudes of such deviations are described. In Experiment 1, the musicians' and nonmusicians' sensitivities to expressive deviations generated by seven performance rules are compared. The musicians showed a clearly greater sensitivity. In Experiment 2, professional musicians adjusted to their satisfaction the quantity by which six rules affected the performance. For most rules, there was a reasonable agreement between the musicians regarding preference. The preferred quantities seemed close to the threshold of perceptibility.
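    To illustrate what a "rule quantity" means here, the sketch below applies a hypothetical duration-contrast rule whose effect is scaled by a parameter k, which is the kind of knob the musicians adjusted in Experiment 2. The specific rule, its 10% base deviation, and the 250 ms boundary are invented for the example; only the idea of scaling a rule's deviations by an adjustable quantity comes from the abstract.

    ```python
    # Schematic illustration, not the actual rule-system code: a rule
    # computes nominal deviations, and a quantity k scales their magnitude.
    from dataclasses import dataclass

    @dataclass
    class Note:
        pitch: int          # MIDI note number
        duration_ms: float  # nominal (score) duration
        level_db: float     # nominal sound level

    def duration_contrast(notes, k=1.0):
        """Hypothetical rule: shorten short notes, lengthen long ones.

        The 10% base deviation and the 250 ms boundary are made-up values.
        """
        out = []
        for n in notes:
            sign = -1.0 if n.duration_ms < 250.0 else 1.0
            dev = sign * 0.10 * n.duration_ms * k
            out.append(Note(n.pitch, n.duration_ms + dev, n.level_db))
        return out

    # k = 0 reproduces the deadpan score; larger k exaggerates the effect.
    melody = [Note(60, 200, 0.0), Note(62, 400, 0.0), Note(64, 200, 0.0)]
    performed = duration_contrast(melody, k=1.5)
    ```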

  • 140.
    Sundberg, Johan
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Mathews, M. V.
    Bennett, G.
    Experiences of combining the radio baton with the director musices performance grammar2001Inngår i: MOSART project workshop on current research directions in computer music, 2001Konferansepaper (Fagfellevurdert)
  • 141.
    Sundberg, Johan
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Expressive aspects of instrumental and sung performance1994Inngår i: Proceedings of the Symposium on Psychophysiology and Psychopathology of the Sense of Music / [ed] Steinberg, R., Heidelberg: Springer Berlin/Heidelberg, 1994Konferansepaper (Fagfellevurdert)
  • 142. Sundberg, Johan
    et al.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Expressive aspects of instrumental and vocal performance1995Inngår i: Music and the Mind Machine: Psychophysiology and Psychopathology of the Sense of Music / [ed] Steinberg, R., Heidelberg: Springer Berlin/Heidelberg, 1995Kapittel i bok, del av antologi (Annet vitenskapelig)
    Abstract [en]

    Several music computers can now convert an input note file to a sounding performance. Listening to such performances demonstrates convincingly the significance of the musicians’ contribution to music performance; when the music score is accurately replicated as nominally written, the music sounds dull and nagging. It is the musicians’ contributions that make the performance interesting. In other words, by deviating slightly from what is nominally written in the music score, the musicians add expressivity to the music.

  • 143.
    Sundberg, Johan
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Music communication as studied by means of performance1991Inngår i: STL-QPSR, Vol. 32, nr 1, s. 065-083Artikkel i tidsskrift (Annet vitenskapelig)
    Abstract [en]

    This article presents an overview of a long-term research project on a rule system for the automatic performance of music. The performance rules produce deviations from the durations, sound levels, and pitches nominally specified in the music score. They can be classified according to their apparent musical function: to help the listener (1) in the differentiation of different pitch and duration categories and (2) in the grouping of the tones. Apart from this, some rules serve the purpose of organizing tuning and synchronization in ensemble performance. The rules reveal striking similarities between music performance and speech; for instance, final lengthening occurs in both, and the acoustic codes used for marking emphasis are similar.
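    The speech parallel mentioned above, final lengthening, can be sketched as a simple rule that stretches the last notes of a phrase, much as syllables are lengthened at the ends of spoken utterances. The tail length, the linearly growing weight, and the 20% maximum stretch are illustrative assumptions, not values taken from the rule system.

    ```python
    # Hedged sketch of a final-lengthening rule: the last `tail` notes of a
    # phrase are stretched, with the largest stretch on the very last note.
    def final_lengthening(durations_ms, tail=3, max_stretch=0.20):
        """Lengthen the last `tail` notes of a phrase."""
        out = list(durations_ms)
        n = len(out)
        for i in range(max(0, n - tail), n):
            # weight grows from ~0 at the start of the tail to 1 at the end
            w = (i - (n - tail) + 1) / tail
            out[i] *= 1.0 + max_stretch * w
        return out

    print(final_lengthening([300, 300, 300, 300, 300, 300]))
    # -> last three notes stretched by ~6.7%, ~13.3%, and 20%
    ```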

  • 144.
    Ternström, Sten
    et al.
    KTH, Tidigare Institutioner (före 2005), Talöverföring och musikakustik. KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Analysis and simulation of small variations in the fundamental frequency of sustained vowels1989Inngår i: STL-QPSR, Vol. 30, nr 3, s. 001-014Artikkel i tidsskrift (Annet vitenskapelig)
  • 145.
    Ternström, Sten
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Sundberg, Johan
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Monteverdi’s vespers. A case study in music synthesis1988Inngår i: STL-QPSR, Vol. 29, nr 2-3, s. 093-105Artikkel i tidsskrift (Annet vitenskapelig)
    Abstract [en]

    The article describes the methods used in synthesizing a performance of the first movement of Monteverdi's Vespers from 1610. The synthesis combines results from studies of singing voice acoustics, ensemble acoustics, and rules for music performance. The emphasis is on the synthesis of choir sounds.

  • 146.
    Ternström, Sten
    et al.
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Sundberg, Johan
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Synthesizing choir singing1988Inngår i: Journal of Voice, ISSN 0892-1997, E-ISSN 1873-4588, Vol. 1, nr 4, s. 332-335Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Analysis by synthesis is a method that has been successfully applied in many areas of scientific research. In speech research, it has proven to be an excellent tool for identifying perceptually relevant acoustical properties of sounds. This paper reports on some first attempts at synthesizing choir singing, the aim being to elucidate the importance of factors such as the frequency scatter in the fundamental and the formants. The presentation relies heavily on sound examples.
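    As a rough illustration of the frequency-scatter factor, the sketch below sums several sine-tone "voices" randomly detuned around a common fundamental. The voice count, the scatter magnitude in cents, and the sine-only timbre are assumptions made for the example, not the synthesis method used in the study.

    ```python
    # Hedged sketch: approximate a choir-like tone by mixing voices whose
    # fundamentals scatter randomly around a common target frequency.
    import numpy as np

    def choir_tone(f0=220.0, n_voices=12, scatter_cents=15.0,
                   dur=1.0, sr=44100, seed=0):
        rng = np.random.default_rng(seed)
        t = np.arange(int(dur * sr)) / sr
        mix = np.zeros_like(t)
        for _ in range(n_voices):
            # random detuning in cents -> frequency ratio 2^(cents/1200)
            cents = rng.normal(0.0, scatter_cents)
            f = f0 * 2.0 ** (cents / 1200.0)
            phase = rng.uniform(0.0, 2.0 * np.pi)
            mix += np.sin(2.0 * np.pi * f * t + phase)
        return mix / n_voices

    tone = choir_tone()  # larger scatter_cents -> a more diffuse, choir-like sound
    ```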

  • 147. Thompson, W. F.
    et al.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Sundberg, Johan
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Evaluating rules for the synthetic performance of melodies1986Inngår i: STL-QPSR, Vol. 27, nr 2-3, s. 027-044Artikkel i tidsskrift (Annet vitenskapelig)
    Abstract [en]

    Starting from a text-to-speech conversion program (Carlson & Granström, 1975), a note-to-tone conversion program has been developed (Sundberg & Frydén, 1985). It works with a set of ordered rules affecting the performance of melodies written into the computer. Depending on the musical context, each of these rules manipulates various tone parameters, such as sound level, fundamental frequency, duration, etc. In the present study, the effect of some of the rules developed so far on the musical quality of the performance is tested; various musical excerpts performed according to different combinations and versions of nine performance rules were played to musically trained listeners who rated the musical quality. The results support the assumption that the musical quality of the performance is improved by applying the rules.

  • 148. Thompson, W. F.
    et al.
    Sundberg, Johan
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Friberg, Anders
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Frydén, Lars
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    The Use of Rules for Expression in the Performance of Melodies1989Inngår i: Psychology of Music, ISSN 0305-7356, E-ISSN 1741-3087, Vol. 17, s. 63-82Artikkel i tidsskrift (Fagfellevurdert)
  • 149. Wolff, D.
    et al.
    Bellec, Guillaume
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Friberg, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    MacFarlane, A.
    Weyde, T.
    Creating audio based experiments as social Web games with the CASimIR framework2014Inngår i: Proceedings of the AES International Conference, 2014, s. 288-297Konferansepaper (Fagfellevurdert)
    Abstract [en]

    This article presents the CASimIR framework for online experiments, its use for creating a music audio game, and initial results and analyses of the collected data. Gathering user data is essential to understanding the semantics of music audio, and CASimIR is a new open framework for creating games with a purpose and surveys that collect user data on the Web and in social networks. Its design facilitates collaborative data annotation in dynamic environments by providing modules for managing media and annotation data, as well as functionality for creating and running online real-time multi-player games. The technical benefits of the extensible framework, as well as its cost effectiveness, are discussed. As a case study, we present its use in Spot The Odd Song Out, a multi-player game for collecting annotations on musical similarity and rhythm, and we analyse the quantity and quality of the data obtained to date with this game. The results and lessons for future data collection projects in the context of shared and linked data are discussed.

  • 150.
    Zamorano, A. M.
    et al.
    Research Institute on Health Sciences, University of Balearic Islands, Palma de Mallorca, Spain.
    Zatorre, R. J.
    International Laboratory for Brain, Music and Sound research (BRAMS), Montreal, Canada.
    Vuust, Peter
    Friberg, Anders
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Tal, musik och hörsel, TMH.
    Birbaumer, Niels
    Wyss Center for Bio and Neuroengineering, Chemin des Mines 9, 1202 Geneva, Switzerland.
    Kleber, Boris
    Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, & The Royal Academy of Music Aarhus/Aalborg, Denmark.
    Enhanced insular connectivity with speech sensorimotor regions in trained singers – a resting-state fMRI studyManuskript (preprint) (Annet vitenskapelig)
    Abstract [en]

    The insula contributes to the detection and integration of salient events during goal-directed behavior and facilitates the interaction between motor, multisensory, and cognitive networks. Task-fMRI studies have suggested that experience with singing can enhance access to these resources. However, the long-term effects of vocal motor training on insula-based networks are currently unknown. In this study, we used resting-state fMRI to explore experience-dependent differences in insula co-activation patterns between conservatory-trained singers and non-singers. We found enhanced insula connectivity in singers compared to non-singers with constituents of the speech sensorimotor network, including the cerebellum (lobule VI, crus 2), primary somatosensory cortex, the parietal lobes, and the thalamus. Moreover, accumulated singing training correlated positively with increased co-activation in bilateral primary sensorimotor cortices in the somatotopic representations of the larynx (left dorsal anterior insula, dAI) and the diaphragm (bilateral dAI), crucial regions for motor cortical control of complex vocalizations, together with the thalamus (bilateral posterior insula/left dAI) and the left putamen (left dAI). The results of this study support the view that the insula plays a central role in the experience-dependent modulation of sensory integration within the vocal motor system, possibly by optimizing conscious and non-conscious aspects of salience processing associated with singing-related bodily signals.
