kth.se Publications
1 - 46 of 46
  • 1.
    Bresin, Roberto
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Favero, Federico
    KTH, School of Architecture and the Built Environment (ABE).
    Annersten, Lars
    Musikverket.
    Berner, David
    Musikverket.
    Morreale, Fabio
    Queen Mary University of London.
    SOUND FOREST/LJUDSKOGEN: A LARGE-SCALE STRING-BASED INTERACTIVE MUSICAL INSTRUMENT, 2016. In: Sound and Music Computing 2016, SMC Sound & Music Computing Network, 2016, p. 79-84. Conference paper (Refereed)
    Abstract [en]

     In this paper we present a string-based, interactive, large-scale installation for a new museum dedicated to performing arts, Scenkonstmuseet, which will be inaugurated in 2017 in Stockholm, Sweden. The installation will occupy an entire room that measures 10x5 meters. We aim to create a digital musical instrument (DMI) that facilitates intuitive musical interaction, thereby enabling visitors to quickly start creating music either alone or together. The interface should be able to serve as a pedagogical tool; visitors should be able to learn about concepts related to music and music making by interacting with the DMI. Since the lifespan of the installation will be approximately five years, one main concern is to create an experience that will encourage visitors to return to the museum for continued instrument exploration. In other words, the DMI should be designed to facilitate long-term engagement. Finally, an important aspect in the design of the installation is that the DMI should be accessible and provide a rich experience for all museum visitors, regardless of age or abilities.

    Download full text (pdf)
    fulltext
  • 2.
    Bresin, Roberto
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. IRCAM STMS Lab.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Robust Non-Verbal Expression in Humanoid Robots: New Methods for Augmenting Expressive Movements with Sound, 2021. Conference paper (Refereed)
    Abstract [en]

    The aim of the SONAO project is to establish new methods based on sonification of expressive movements for achieving a robust interaction between users and humanoid robots. We want to achieve this by combining competences of the research team members in the fields of social robotics, sound and music computing, affective computing, and body motion analysis. We want to engineer sound models for implementing effective mappings between stylized body movements and sound parameters that will enable an agent to express high-level body motion qualities through sound. These mappings are paramount for supporting feedback to and understanding of robot body motion. The project will result in the development of new theories, guidelines, models, and tools for the sonic representation of high-level body motion qualities in interactive applications. This work is part of the growing research field known as data sonification, in which we combine methods and knowledge from the fields of interactive sonification, embodied cognition, multisensory perception, and non-verbal and gestural communication in robots.

  • 3.
    Bresin, Roberto
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Mancini, Maurizio
    University College Cork, National University of Ireland, Cork, Ireland.
    Elblaus, Ludvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Sonification of the self vs. sonification of the other: Differences in the sonification of performed vs. observed simple hand movements, 2020. In: International Journal of Human-Computer Studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 144. Article in journal (Refereed)
    Abstract [en]

    Existing works on interactive sonification of movements, i.e., the translation of human movement qualities from the physical to the auditory domain, usually adopt a predetermined approach: the way in which movement features modulate the characteristics of sound is fixed. In our work we want to go one step further and demonstrate that the user role can influence the tuning of the mapping between movement cues and sound parameters. Here, we aim to verify if and how the mapping changes when the user is either the performer or the observer of a series of body movements (tracing a square or an infinite shape with the hand in the air). We asked participants to tune movement sonification while they were directly performing the sonified movement vs. while watching another person performing the movement and listening to its sonification. Results show that the tuning of the sonification chosen by participants is influenced by three variables: role of the user (performer vs observer), movement quality (the amount of Smoothness and Directness in the movement), and physical parameters of the movements (velocity and acceleration). Performers focused more on the quality of their movement, while observers focused more on the sonic rendering, making it more expressive and more connected to low-level physical features.

  • 4.
    Falkenberg, Kjetil
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lindetorp, Hans
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Creating digital musical instruments with and for children: Including vocal sketching as a method for engaging in codesign, 2020. In: Human Technology, E-ISSN 1795-6889, Vol. 16, no 3, p. 348-371. Article in journal (Refereed)
    Abstract [en]

    A class of master of science students and a group of preschool children codesigned new digital musical instruments based on workshop interviews involving vocal sketching, a method for imitating and portraying sounds. The aim of the study was to explore how the students and children would approach vocal sketching as one of several design methods. The children described musical instruments to the students using vocal sketching and other modalities (verbal, drawing, gestures). The resulting instruments built by the students were showcased at the Swedish Museum of Performing Arts in Stockholm. Although all the children tried vocal sketching during preparatory tasks, few employed the method during the workshop. However, the instruments seemed to meet the children’s expectations. Consequently, even though the vocal sketching method alone provided few design directives in the given context, we suggest that vocal sketching, under favorable circumstances, can be an engaging component that complements other modalities in codesign involving children.

  • 5.
    Falkenberg, Kjetil
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Ljungdahl Eriksson, Martin
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Otterbring, Tobias
    Daunfeldt, Sven-Olov
    Auditory notification of customer actions in a virtual retail environment: Sound design, awareness and attention, 2021. In: Proceedings of the International Conference on Auditory Display (ICAD) 2021, 2021. Conference paper (Refereed)
  • 6.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Accessible Digital Musical Instruments: A Review of Musical Interfaces in Inclusive Music Practice, 2019. In: Multimodal Technologies and Interaction, E-ISSN 2414-4088, Vol. 3, no 3, article id 57. Article in journal (Refereed)
    Abstract [en]

    Current advancements in music technology enable the creation of customized Digital Musical Instruments (DMIs). This paper presents a systematic review of Accessible Digital Musical Instruments (ADMIs) in inclusive music practice. The history of research concerned with facilitating inclusion in music-making is outlined, and the current state of developments and trends in the field is discussed. Although the use of music technology in music therapy contexts has attracted more attention in recent years, the topic has been relatively unexplored in Computer Music literature. This review investigates a total of 113 publications focusing on ADMIs. Based on the 83 instruments in this dataset, ten control interface types were identified: tangible controllers, touchless controllers, Brain–Computer Music Interfaces (BCMIs), adapted instruments, wearable controllers or prosthetic devices, mouth-operated controllers, audio controllers, gaze controllers, touchscreen controllers and mouse-controlled interfaces. The majority of the ADMIs were tangible or physical controllers. Although the haptic modality could potentially play an important role in musical interaction for many user groups, relatively few of the ADMIs (15.6%) incorporated vibrotactile feedback. Aspects judged to be important for successful ADMI design were instrument adaptability and customization, user participation, iterative prototyping, and interdisciplinary development teams.

    Download full text (csv)
    Dataset
  • 7.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Accessible Digital Musical Instruments: A Survey of Inclusive Instruments Presented at the NIME, SMC and ICMC Conferences, 2018. In: Proceedings of the International Computer Music Conference 2018: Daegu, South Korea / [ed] Tae Hong Park, Doo-Jin Ahn, San Francisco: The International Computer Music Association, 2018, p. 53-59. Conference paper (Refereed)
    Abstract [en]

    This paper describes a survey of accessible Digital Musical Instruments (ADMIs) presented at the NIME, SMC and ICMC conferences. It outlines the history of research concerned with facilitating inclusion in music-making and discusses advances, the current state of developments, and trends in the field. Based on a systematic analysis of DMIs presented at the three conferences, seven control interface types could be identified: tangible, nontangible, audio, touch-screen, gaze, BCMIs and adapted instruments. Most of the ADMIs were tangible interfaces or physical controllers. Many of the instruments were designed for persons with physical disabilities or children with health conditions or impairments. Little attention was paid to DMIs for blind users. Although the haptic modality could play an important role in musical interaction in this context, relatively few of the ADMIs (26.7%) incorporated vibrotactile feedback. A discussion on future directions for inclusive design of DMIs is presented.

    Download full text (csv)
    dataset
  • 8.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Diverse Sounds: Enabling Inclusive Sonic Interaction, 2019. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This compilation thesis collects a series of publications on designing sonic interactions for diversity and inclusion. The presented papers focus on case studies in which musical interfaces were either developed or reviewed. While the described studies are substantially different in their nature, they all contribute to the thesis by providing reflections on how musical interfaces could be designed to enable inclusion rather than exclusion. Building on this work, I introduce two terms: inclusive sonic interaction design and Accessible Digital Musical Instruments (ADMIs). I also define nine properties to consider in the design and evaluation of ADMIs: expressiveness, playability, longevity, customizability, pleasure, sonic quality, robustness, multimodality and causality. Inspired by the experience of playing an acoustic instrument, I propose to enable musical inclusion for under-represented groups (for example persons with visual and hearing impairments, as well as elderly people) through the design of Digital Musical Instruments (DMIs) in the form of rich multisensory experiences allowing for multiple modes of interaction. At the same time, it is important to enable customization to fit user needs, both in terms of gestural control and provided sonic output. I conclude that the computer music community has the potential to actively engage more people in music-making activities. In addition, I stress the importance of identifying challenges that people face in these contexts, thereby enabling initiatives towards changing practices.

    Download full text (pdf)
    Emma Frid - Diverse Sounds
  • 9.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Erratum: Accessible digital musical instruments—a review of musical interfaces in inclusive music practice (Multimodal Technologies and Interaction, (2019) 3, 57, 10.3390/mti3030057), 2020. In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 4, no 3, p. 1-2, article id 34. Article in journal (Refereed)
    Abstract [en]

    Unfortunately, some errors and imprecise descriptions were made in the final proofreading phase, and the author, therefore, wishes to make the following corrections to this paper [1]: In the Abstract, it is erroneously stated that the percentage of ADMIs that incorporated vibrotactile feedback was 15.6%. The correct percentage should be 14.5%. The same error is replicated in Section 4.4. Output Modalities, on page 11 (13 ADMIs should be 12 ADMIs), and in Section 6. Conclusions, on page 15. The author would like to apologize for any inconvenience caused by these changes. The correct percentage further supports the claim that relatively few of the ADMIs incorporated vibrotactile feedback. Based on guidelines for writing for accessibility [2], the author would like to refrain from using the term “elderly” and instead use the term “older adults” in Sections 4.5 Target User Group (page 11), 5. Discussion (page 13), and Conclusions (page 15). Minor formatting errors were identified in Figure 4, on page 9, where the terms “touchscreen” and “touchless” were mistakenly spelled “touch-screen” and “touch-less”. In Table 2, “Book Sections” should be “Book Chapters”. There were also two errors in Table 3, where “Eyes-web” should be spelled “EyesWeb” and the word “sensor” was misspelled as “senor”. The figure and table were updated to account for these mistakes.

  • 10.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. STMS Science and Technology of Music and Sound, IRCAM Institute for Research and Coordination in Acoustics/Music, Paris, France.
    Musical Robots: Overview and Methods for Evaluation, 2023. In: Sound and Robotics: Speech, Non-Verbal Audio and Robotic Musicianship / [ed] Richard Savery, Boca Raton, FL, USA: Informa UK Limited, 2023, p. 1-42. Chapter in book (Refereed)
    Abstract [en]

    Musical robots are complex systems that require the integration of several different functions to successfully operate. These processes range from sound analysis and music representation to mapping and modeling of musical expression. Recent advancements in Computational Creativity (CC) and Artificial Intelligence (AI) have added yet another level of complexity to these settings, with aspects of Human–AI Interaction (HAI) becoming increasingly important. The rise of intelligent music systems raises questions not only about the evaluation of Human-Robot Interaction (HRI) in robot musicianship but also about the quality of the generated musical output. The topic of evaluation has been extensively discussed and debated in the fields of Human–Computer Interaction (HCI) and New Interfaces for Musical Expression (NIME) throughout the years. However, interactions with robots often have a strong social or emotional component, and the experience of interacting with a robot is therefore somewhat different from that of interacting with other technologies. Since musical robots produce creative output, topics such as creative agency and what is meant by the term "success" when interacting with an intelligent music system should also be considered. The evaluation of musical robots thus expands beyond traditional evaluation concepts such as usability and user experience. To explore which evaluation methodologies might be appropriate for musical robots, this chapter first presents a brief introduction to the field of research dedicated to robotic musicianship, followed by an overview of evaluation methods used in the neighboring research fields of HCI, HRI, HAI, NIME, and CC. The chapter concludes with a review of evaluation methods used in robot musicianship literature and a discussion of prospects for future research.

  • 11.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Sonification of women in sound and music computing - The sound of female authorship in ICMC, SMC and NIME proceedings, 2017. In: 2017 ICMC/EMW - 43rd International Computer Music Conference and the 6th International Electronic Music Week, Shanghai Conservatory of Music, 2017, p. 233-238. Conference paper (Refereed)
    Abstract [en]

    The primary goal of this study was to approximate the number of female authors in the academic field of Sound and Music Computing. This was done through gender prediction from author names for proceedings from the ICMC, SMC and NIME conferences, and by sonifying these results. Although gender classification by first name can only serve as an estimation of the actual number of female authors in the field, some conclusions could be drawn. The total percentage of author names classified as female was 10.3% for ICMC, 11.9% for SMC and 11.9% for NIME. When merging data from all three conferences for years 2004-2016, it could be concluded that names classified as female ranged from 9.5 to 14.3%. Changes in the ratio of female vs. male authors over time were further illustrated by sonifications, allowing the reader to explore, compare and reflect upon the results by listening to sonic representations of the data. The conclusion that can be drawn from this study is that the field of Sound and Music Computing is still far from being gender-balanced.

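A minimal sketch of the kind of first-name-based estimate described in the abstract above (the name-to-gender lookup table, the helper function, and the example names are hypothetical illustrations, not the study's actual classifier or proceedings data):

```python
# Hypothetical first-name -> gender lookup; a real study would use a
# much larger database and handle ambiguous names.
NAME_GENDER = {"emma": "f", "roberto": "m", "ludvig": "m", "sandra": "f"}

def female_share(author_first_names):
    """Return the percentage of names classified as female,
    ignoring names missing from the lookup table."""
    classified = [NAME_GENDER[n.lower()] for n in author_first_names
                  if n.lower() in NAME_GENDER]
    if not classified:
        return 0.0
    return 100.0 * classified.count("f") / len(classified)

print(female_share(["Emma", "Roberto", "Ludvig", "Sandra"]))  # 50.0
```
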
  • 12.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RepMus - Représentations Musicales, STMS - Sciences et Technologies de la Musique et du Son, IRCAM - Institut de Recherche et Coordination Acoustique/Musique.
    The Gender Gap and the Computer Music Narrative: On the Under-Representation of Women at Computer Music Conferences, 2021. In: Array - Journal of the International Computer Music Association, E-ISSN 2590-0056, Vol. 1, p. 43-49. Article in journal (Refereed)
    Download full text (pdf)
    fulltext
  • 13.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Perceptual Evaluation of Blended Sonification of Mechanical Robot Sounds Produced by Emotionally Expressive Gestures: Augmenting Consequential Sounds to Improve Non-verbal Robot Communication, 2021. In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805. Article in journal (Refereed)
    Abstract [en]

    This paper presents two experiments focusing on perception of mechanical sounds produced by expressive robot movement and blended sonifications thereof. In the first experiment, 31 participants evaluated emotions conveyed by robot sounds through free-form text descriptions. The sounds were inherently produced by the movements of a NAO robot and were not specifically designed for communicative purposes. Results suggested no strong coupling between the emotional expression of gestures and how sounds inherent to these movements were perceived by listeners; joyful gestures did not necessarily result in joyful sounds. A word that reoccurred in text descriptions of all sounds, regardless of the nature of the expressive gesture, was “stress”. In the second experiment, blended sonification was used to enhance and further clarify the emotional expression of the robot sounds evaluated in the first experiment. Analysis of quantitative ratings of 30 participants revealed that the blended sonification successfully contributed to enhancement of the emotional message for sound models designed to convey frustration and joy. Our findings suggest that blended sonification guided by perceptual research on emotion in speech and music can successfully improve communication of emotions through robot sounds in auditory-only conditions.

  • 14.
    Frid, Emma
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Alborno, Paolo
    Elblaus, Ludvig
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Interactive Sonification of Spontaneous Movement of Children: Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound, 2016. In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 10, article id 521. Article in journal (Refereed)
    Abstract [en]

    In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with different movement characteristics than a model characterized by abrupt variation in amplitude, and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children’s spontaneous movement in terms of energy, smoothness and directness indices. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g. expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g. energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a cross-modal mapping of body motion qualities from bodily movement to sounds. Sound can be translated and understood from bodily motion, conveyed through sound visualizations in the shape of drawings and translated back from sound visualizations to audio. The work underlines the potential of using interactive sonification to communicate high-level features of human movement data.

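The directness index mentioned in the abstract above is commonly defined as the ratio of the straight-line distance between a trajectory's start and end points to the length of the travelled path; the following sketch assumes that common formulation, which may differ from the exact definition used in the study:

```python
# Directness index of a 2D head trajectory: straight-line distance from
# start to end divided by the travelled path length (1.0 = perfectly
# direct). A common formulation; the paper's exact definition may differ.
import numpy as np

def directness_index(positions):
    positions = np.asarray(positions, dtype=float)  # shape (n, 2)
    straight = np.linalg.norm(positions[-1] - positions[0])
    path = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    return straight / path if path > 0 else 0.0

# A gently curved path is less direct than a straight one.
t = np.linspace(0, 1, 50)
curved = np.column_stack([t, 0.3 * np.sin(np.pi * t)])
print(f"{directness_index(curved):.2f}")  # < 1.0
```
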
  • 15.
    Frid, Emma
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Alexanderson, Simon
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Perception of Mechanical Sounds Inherent to Expressive Gestures of a NAO Robot - Implications for Movement Sonification of Humanoids, 2018. In: Proceedings of the 15th Sound and Music Computing Conference / [ed] Anastasia Georgaki and Areti Andreopoulou, Limassol, Cyprus, 2018. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a pilot study carried out within the project SONAO. The SONAO project aims to compensate for limitations in robot communicative channels with an increased clarity of Non-Verbal Communication (NVC) through expressive gestures and non-verbal sounds. More specifically, the purpose of the project is to use movement sonification of expressive robot gestures to improve Human-Robot Interaction (HRI). The pilot study described in this paper focuses on mechanical robot sounds, i.e. sounds that have not been specifically designed for HRI but are inherent to robot movement. Results indicated a low correspondence between perceptual ratings of mechanical robot sounds and emotions communicated through gestures. In general, the mechanical sounds themselves appeared not to carry much emotional information compared to video stimuli of expressive gestures. However, some mechanical sounds did communicate certain emotions, e.g. frustration. In general, the sounds appeared to communicate arousal more effectively than valence. We discuss potential issues and possibilities for the sonification of expressive robot gestures and the role of mechanical sounds in such a context. Emphasis is put on the need to mask or alter sounds inherent to robot movement, using for example blended sonification.

  • 16.
    Frid, Emma
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Moll, Jonas
    Sallnäs Pysander, Eva-Lotta
    Sonification of haptic interaction in a virtual scene, 2014. In: Sound and Music Computing Sweden 2014, Stockholm, December 4-5, 2014 / [ed] Roberto Bresin, 2014, p. 14-16. Conference paper (Refereed)
    Abstract [en]

    This paper presents a brief overview of work-in-progress for a study on correlations between visual and haptic spatial attention in a multimodal single-user application comparing different modalities. The aim is to gain insight into how auditory and haptic versus visual representations of temporal events may affect task performance and spatial attention. For this purpose, a 3D application involving one haptic model and two different sound models for interactive sonification is developed.

    Download full text (pdf)
    fulltext
  • 17.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Sallnäs Pysander, Eva-Lotta
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Moll, Jonas
    Uppsala University.
    An Exploratory Study on the Effect of Auditory Feedback on Gaze Behavior in a Virtual Throwing Task with and without Haptic Feedback, 2017. In: Proceedings of the 14th Sound and Music Computing Conference / [ed] Tapio Lokki, Jukka Pätynen, and Vesa Välimäki, Espoo, Finland, 2017, p. 242-249. Conference paper (Refereed)
    Abstract [en]

    This paper presents findings from an exploratory study on the effect of auditory feedback on gaze behavior. A total of 20 participants took part in an experiment where the task was to throw a virtual ball into a goal in different conditions: visual only, audiovisual, visuohaptic and audiovisuohaptic. Two different sound models were compared in the audio conditions. Analysis of eye tracking metrics indicated large inter-subject variability; difference between subjects was greater than difference between feedback conditions. No significant effect of condition could be observed, but clusters of similar behaviors were identified. Some of the participants’ gaze behaviors appeared to have been affected by the presence of auditory feedback, but the effect of sound model was not consistent across subjects. We discuss individual behaviors and illustrate gaze behavior through sonification of gaze trajectories. Findings from this study raise intriguing questions that motivate future large-scale studies on the effect of auditory feedback on gaze behavior.

  • 18.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Interactive sonification of a fluid dance movement: an exploratory study, 2019. In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 13, no 3, p. 181-189. Article in journal (Refereed)
    Abstract [en]

    In this paper we present three different experiments designed to explore sound properties associated with fluid movement: (1) an experiment in which participants adjusted parameters of a sonification model developed for a fluid dance movement, (2) a vocal sketching experiment in which participants sketched sounds portraying fluid versus nonfluid movements, and (3) a workshop in which participants discussed and selected fluid versus nonfluid sounds. Consistent findings from the three experiments indicated that sounds expressing fluidity generally occupy a lower register and have less high-frequency content, as well as a lower bandwidth, than sounds expressing nonfluidity. The ideal sound to express fluidity is continuous, calm, slow, pitched, and reminiscent of wind, water or an acoustic musical instrument. The ideal sound to express nonfluidity is harsh, non-continuous, abrupt, dissonant, conceptually associated with metal or wood, unhuman and robotic. Findings presented in this paper can be used as design guidelines for future applications in which the movement property fluidity is to be conveyed through sonification.

  • 19.
    Frid, Emma
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Sonification of fluidity - An exploration of perceptual connotations of a particular movement feature, 2016. In: Proceedings of ISon 2016, 5th Interactive Sonification Workshop, Bielefeld, Germany, 2016, p. 11-17. Conference paper (Refereed)
    Abstract [en]

    In this study we conducted two experiments in order to investigate potential strategies for sonification of the expressive movement quality “fluidity” in dance: one perceptual rating experiment (1) in which five different sound models were evaluated on their ability to express fluidity, and one interactive experiment (2) in which participants adjusted parameters for the most fluid sound model in (1) and performed vocal sketching to two video recordings of contemporary dance. Sounds generated in the fluid condition occupied a low register and had darker, more muffled timbres compared to the non-fluid condition, in which sounds were characterized by a higher spectral centroid and contained more noise. These results were further supported by qualitative data from interviews. The participants conceptualized fluidity as a property related to water, pitched sounds, wind, and continuous flow; non-fluidity had connotations of friction, struggle and effort. The biggest conceptual distinction between fluidity and non-fluidity was the dichotomy of “nature” and “technology”, “natural” and “unnatural”, or even “human” and “unhuman”. We suggest that these distinct connotations should be taken into account in future research focusing on the fluidity quality and its corresponding sonification.

  • 20.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Designing and reporting research on sound design and music for health: Methods and frameworks for impact, 2021. In: Doing Research in Sound Design / [ed] Michael Filimowicz, Focal Press, 2021, p. 125-150. Chapter in book (Refereed)
    Abstract [en]

    This chapter presents key methodological aspects to consider for researchers in the fields of sound design and music computing when evaluating and making strategic choices for conducting research targeting health, accessibility and disability. We present practical suggestions for how to effectively increase the impact in the research community based on existing methods commonly used in evidence-based research. Although many of the described models, frameworks and methods are not novel, they have so far only been extensively applied in music therapy studies and music medicine interventions, but not in sound design research or music computing. The frameworks presented here are gathered primarily from practices concerning systematic reviews. We conclude with a discussion about the current state of the field and provide examples of how the proposed frameworks and guidelines can be used when reporting results, from quantitative research studies to systematic reviews.

  • 21.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. IRCAM, STMS Science and Technology of Music and Sound (UMR 9912), Paris, France.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Agres, Kat
    National University of Singapore, Centre for Music and Health, Yong Siew Toh Conservatory of Music, Singapore.
    Lucas, Alex
    Queen's University Belfast, Sonic Arts Research Centre, Belfast, Northern Ireland.
    Editorial: New advances and novel applications of music technologies for health, well-being, and inclusion, 2024. In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 6, article id 1358454. Article in journal (Refereed)
  • 22.
    Frid, Emma
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Giordano, Marcello
    McGill University.
    Schumacher, Marlon M.
    McGill University.
    Wanderley, Marcelo M.
    McGill University.
    Perceptual Characterization of a Tactile Display for a Live-Electronics Notification System, 2014. In: Proceedings of the ICMC|SMC|2014 Conference, National and Kapodistrian University of Athens, 2014. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a study we conducted to assess physical and perceptual properties of a tactile display for a tactile notification system within the CIRMMT Live Electronics Framework (CLEF), a Max-based modular environment for composition and performance of live electronic music. Our tactile display is composed of two rotating eccentric mass actuators driven by a PWM signal generated from an Arduino microcontroller. We conducted physical measurements using an accelerometer and two user-based studies in order to evaluate: vibrotactile absolute perception threshold, differential threshold and vibration spectral peaks. Results, obtained through the use of a logit regression model, provide us with precise design guidelines. These guidelines will enable us to ensure robust perceptual discrimination between vibrotactile stimuli at different intensities. Along with other characterizations presented in this study, these guidelines will allow us to better design tactile cues for our notification system for live-electronics performance.

    Download full text (pdf)
    fulltext
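
For readers unfamiliar with the threshold-estimation approach mentioned in the abstract above, the following sketch shows how a 50% detection threshold can be derived from a fitted logit model (the synthetic data, variable names, and stimulus scale are hypothetical; this is not the authors' analysis code):

```python
# Hypothetical sketch: estimating an absolute vibrotactile detection
# threshold with a logit model, in the spirit of the study above.
import numpy as np
import statsmodels.api as sm

# Synthetic trial data: stimulus intensity (e.g., PWM duty cycle) and
# whether the participant reported feeling the vibration (1) or not (0).
intensity = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8] * 10)
rng = np.random.default_rng(0)
p_true = 1 / (1 + np.exp(-(intensity - 0.35) / 0.08))
detected = rng.binomial(1, p_true)

# Fit P(detected) = logistic(b0 + b1 * intensity).
X = sm.add_constant(intensity)
fit = sm.Logit(detected, X).fit(disp=False)
b0, b1 = fit.params

# The absolute perception threshold is the intensity at which detection
# probability crosses 50%, i.e. where b0 + b1 * x = 0.
threshold = -b0 / b1
print(f"Estimated 50% detection threshold: {threshold:.3f}")
```
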
  • 23.
    Frid, Emma
    et al.
    Gomes, Celso
    Jin, Zeyu
    Music Creation by Example, 2020. Manuscript (preprint) (Other academic)
    Abstract [en]

    Short online videos have become the dominating media on social platforms. However, finding suitable music to accompany videos can be a challenging task for some video creators, due to copyright constraints, limitations in search engines, and required audio-editing expertise. One possible solution to these problems is to use AI music generation. In this paper we present a user interface (UI) paradigm that allows users to input a song to an AI music engine and then interactively regenerate and mix AI-generated music. To arrive at this design, we conducted user studies with a total of 104 video creators at several stages of our design and development process. User studies supported the effectiveness of our approach and provided valuable insights about human-AI interaction as well as the design and evaluation of mixed-initiative interfaces in creative practice.

  • 24.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Sciences et Technologies de la Musique et du Son Laboratoire, STMS, CNRS, Ircam, Sorbonne Université, Ministère de la Culture, Paris, France.
    Ilsar, Alon
    Reimagining (Accessible) Digital Musical Instruments: A Survey on Electronic Music-Making Tools, 2021. In: Proceedings of the International Conference on New Interfaces for Musical Expression (NIME) 2021, 2021. Conference paper (Refereed)
    Abstract [en]

    This paper discusses findings from a survey on interfaces for making electronic music. We invited electronic music makers of varying experience to reflect on their practice and setup and to imagine and describe their ideal interface for music-making. We also asked them to reflect on the state of gestural controllers, machine learning, and artificial intelligence in their practice. A total of 118 people responded to the survey; 40.68% were professional musicians and 10.17% identified as living with a disability or access requirement. Results highlight limitations of music-making setups as perceived by electronic music makers, reflections on how imagined novel interfaces could address such limitations, and positive attitudes towards ML and AI in general.

    Download (csv)
    Data
    Download (pdf)
    Survey
  • 25.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Adobe Research.
    Jin, Zeyu
    Adobe Research, Seattle, WA, USA.
    Gomes, Celso
    Adobe Research, Seattle, WA, USA.
    Music Creation by Example, 2020. In: Proceedings CHI '20: CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery (ACM), 2020, p. 1-13, article id 387. Conference paper (Refereed)
    Abstract [en]

    Short online videos have become the dominating media on social platforms. However, finding suitable music to accompany videos can be a challenging task for some video creators, due to copyright constraints, limitations in search engines, and required audio-editing expertise. One possible solution to these problems is to use AI music generation. In this paper we present a user interface (UI) paradigm that allows users to input a song to an AI engine and then interactively regenerate and mix AI-generated music. To arrive at this design, we conducted user studies with a total of 104 video creators at several stages of our design and development process. User studies supported the effectiveness of our approach and provided valuable insights about human-AI interaction as well as the design and evaluation of mixed-initiative interfaces in creative practice.

    Download full text (zip)
    supplementary material
  • 26.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lindetorp, Hans
    KMH Royal College of Music.
    Haptic Music: Exploring Whole-Body Vibrations and Tactile Sound for a Multisensory Music Installation, 2020. In: Proceedings of the Sound and Music Computing Conference (SMC) 2020 / [ed] Simone Spagnol and Andrea Valle, Torino, Italy, 2020, p. 68-75. Conference paper (Refereed)
    Abstract [en]

    This paper presents a study on the composition of haptic music for a multisensory installation and how composers could be aided by a preparatory workshop focusing on the perception of whole-body vibrations prior to such a composition task. Five students from a Master’s program in Music Production were asked to create haptic music for the installation Sound Forest. The students were exposed to a set of different sounds producing whole-body vibrations through a wooden platform and asked to describe the perceived sensations for each sound. Results suggested that the workshop helped the composers successfully complete the composition task and that awareness of the haptic possibilities of the multisensory installation could be improved through training. Moreover, the sounds used as stimuli provided a relatively wide range of perceived sensations, ranging from pleasant to unpleasant. Considerable intra-subject differences motivate future large-scale studies on the perception of whole-body vibrations in artistic music practice.

    Download full text (pdf)
    fulltext
  • 27.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lindetorp, Hans
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. KMH Royal College of Music, Stockholm, Sweden.
    Hansen, Kjetil Falkenberg
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Sound Forest - Evaluation of an Accessible Multisensory Music Installation, 2019. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, ACM, 2019, p. 1-12, article id 677. Conference paper (Refereed)
    Abstract [en]

    Sound Forest is a music installation consisting of a room with light-emitting interactive strings, vibrating platforms and speakers, situated at the Swedish Museum of Performing Arts. In this paper we present an exploratory study focusing on evaluation of Sound Forest based on picture cards and interviews. Since Sound Forest should be accessible for everyone, regardless of age or abilities, we invited children, teens and adults with physical and intellectual disabilities to take part in the evaluation. The main contribution of this work lies in its findings suggesting that multisensory platforms such as Sound Forest, providing whole-body vibrations, can be used to provide visitors of different ages and abilities with similar associations to musical experiences. Interviews also revealed positive responses to haptic feedback in this context. Participants of different ages used different strategies and bodily modes of interaction in Sound Forest, with activities ranging from running to synchronized music-making and collaborative play.

  • 28.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Ljungdahl Eriksson, Martin
    Otterbring, Tobias
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lidbo, Håkan
    Daunfeldt, Sven-Olov
    On Designing Sounds to Reduce Shoplifting in Retail Environments, 2021. Conference paper (Refereed)
  • 29.
    Frid, Emma
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Moll, Jonas
    Uppsala University.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Sallnäs Pysander, Eva-Lotta
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task, 2018. In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 13, no 4, p. 279-290. Article in journal (Refereed)
    Abstract [en]

    In this paper we present a study on the effects of auditory and haptic feedback in a virtual throwing task performed with a point-based haptic device. The main research objective was to investigate if and how task performance and perceived intuitiveness are affected when interactive sonification and/or haptic feedback is used to provide real-time feedback about a movement performed in a 3D virtual environment. Emphasis was put on task solving efficiency and subjective accounts of participants’ experiences of the multimodal interaction in different conditions. The experiment used a within-subjects design in which the participants solved the same task in different conditions: visual-only, visuohaptic, audiovisual and audiovisuohaptic. Two different sound models were implemented and compared. Significantly lower error rates were obtained in the audiovisuohaptic condition involving movement sonification based on a physical model of friction, compared to the visual-only condition. Moreover, a significant increase in perceived intuitiveness was observed for most conditions involving haptic and/or auditory feedback, compared to the visual-only condition. The main finding of this study is that multimodal feedback can not only improve the perceived intuitiveness of an interface but that certain combinations of haptic feedback and movement sonification can also contribute performance-enhancing properties. This highlights the importance of carefully designing feedback combinations for interactive applications.

  • 30.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Sciences et Technologies de la Musique et du Son Laboratoire, STMS, CNRS, Ircam, Sorbonne Université, Ministère de la Culture, Paris, France.
    Orini, Michele
    Martinelli, Giampaolo
    Chew, Elaine
    Mapping Inter-Cardiovascular Time-Frequency Coherence to Harmonic Tension in Sonification of Ensemble Interaction Between a COVID-19 Patient and the Medical Team, 2021. In: Proceedings of the International Conference on Auditory Display (ICAD) 2021, 2021. Conference paper (Refereed)
    Abstract [en]

    This paper presents exploratory work on sonic and visual representations of heartbeats of a COVID-19 patient and a medical team. The aim of this work is to sonify heart signals to reflect how a medical team comes together during a COVID-19 treatment, i.e. to highlight other aspects of the COVID-19 pandemic than those usually portrayed through sonification, which often focuses on the number of cases. The proposed framework highlights synergies between sound and heart signals through mapping between time-frequency coherence (TFC) of heart signals and harmonic tension and dissonance in music. Results from a listening experiment suggested that the proposed mapping between TFC and harmonic tension was successful in terms of communicating low versus high coherence between heart signals, with an overall accuracy of 69%, which was significantly higher than chance. In the light of the performed work, we discuss how links between heart- and sound signals can be further explored through sonification to promote understanding of aspects related to cardiovascular health.

    Download full text (pdf)
    Frid et al. ICAD 2021
    Download full text (csv)
    csv
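
As an illustration of the analysis-and-mapping idea described in the abstract above, the following sketch computes sliding-window coherence between two synthetic heart-rate series and maps it to a scalar tension value (the signals, sampling rate, frequency band, and the low-coherence-to-high-tension mapping are all assumptions for illustration, not the paper's actual method):

```python
# Illustrative sketch (not the authors' code): sliding-window coherence
# between two heart-rate series, mapped to a harmonic-tension value.
import numpy as np
from scipy.signal import coherence

fs = 4.0                      # assumed heart-rate sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)
patient = np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.random.randn(t.size)
clinician = np.sin(2 * np.pi * 0.1 * t + 0.3) + 0.5 * np.random.randn(t.size)

win = int(60 * fs)            # 60-second analysis windows
for start in range(0, t.size - win, win):
    f, Cxy = coherence(patient[start:start + win],
                       clinician[start:start + win], fs=fs, nperseg=128)
    band = (f > 0.04) & (f < 0.4)   # typical HRV band of interest
    c = Cxy[band].mean()            # 0 = independent, 1 = coherent
    # Hypothetical mapping: low coherence -> high harmonic tension.
    tension = 1.0 - c
    print(f"t={start / fs:5.0f}s  coherence={c:.2f}  tension={tension:.2f}")
```
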
  • 31.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. STMS IRCAM.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Haptic Music Players for Children with Profound and Multiple Learning Disabilities (PMLD): Exploring Different Modes of Interaction for Felt Sound, 2022. In: Proceedings of the 24th International Congress on Acoustics (ICA2022): A10-05 Physiological Acoustics - Multi-modal solutions to enhance hearing / [ed] Jeremy Marozeau, Sebastian Merchel, Gyeongju, South Korea: Acoustic Society of Korea, 2022, article id ABS-0021. Conference paper (Refereed)
    Abstract [en]

    This paper presents a six-month exploratory case study on the evaluation of three Haptic Music Players (HMPs) with four pre-verbal children with Profound and Multiple Learning Disabilities (PMLD). The evaluated HMPs were 1) a commercially available haptic pillow, 2) a haptic device embedded in a modified plush-toy backpack, and 3) a custom-built plush toy with a built-in speaker and tactile shaker. We evaluated the HMPs through qualitative interviews with a teacher who served as a proxy for the pre-verbal children participating in the study; the teacher augmented the students’ communication by reporting observations from each test session. The interviews explored functionality, accessibility, and user experience aspects of each HMP and revealed significant differences between the devices. Our findings highlighted the influence of the physical affordances provided by the HMP designs and the importance of a playful design in this context. Results suggested that sufficient time should be allocated to HMP familiarization prior to any evaluation procedure, since experiencing musical haptics through objects is a novel experience that might require some time to get used to. We discuss design considerations for Haptic Music Players and provide suggestions for future developments of multimodal systems dedicated to enhancing music listening in special education settings.

    Download full text (pdf)
    fulltext
  • 32.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. IRCAM, STMS Science and Technology of Music and Sound (UMR 9912), 1 Place Igor Stravinsky, F-75004 Paris, France.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Núñez-Pacheco, Claudia
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Customizing and Evaluating Accessible Multisensory Music Experiences with Pre-Verbal Children: A Case Study on the Perception of Musical Haptics Using Participatory Design with Proxies, 2022. In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 6, no 7, article id 55. Article in journal (Refereed)
    Abstract [en]

    Research on Accessible Digital Musical Instruments (ADMIs) has highlighted the need for participatory design methods, i.e., to actively include users as co-designers and informants in the design process. However, very little work has explored how pre-verbal children with Profound and Multiple Learning Disabilities (PMLD) can be involved in such processes. In this paper, we apply in-depth qualitative and mixed methodologies in a case study with four students with PMLD. Using Participatory Design with Proxies (PDwP), we assess how these students can be involved in the customization and evaluation of the design of a multisensory music experience intended for a large-scale ADMI. Results from an experiment focused on communication of musical haptics highlighted the diversity of interaction strategies used by the children, accessibility limitations of the current multisensory experience design, and the importance of using a multifaceted variety of qualitative and quantitative methods to arrive at more informed conclusions when applying a design with proxies methodology.

  • 33.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bouvier, Baptiste
    STMS IRCAM CNRS SU, Paris, France.
    Fraticelli, Matthieu
    Département d’études cognitives ENS, Paris, France.
    A Dual-Task Experimental Methodology for Exploration of Saliency of Auditory Notifications in a Retail Soundscape2023In: Proceedings of the 28th International Conference on Auditory Display (ICAD2023): Sonification for the Masses, 2023, 2023Conference paper (Refereed)
    Abstract [en]

    This paper presents the design of a dual-task experiment aimed at exploring the salience of auditory notifications. The first task is a Sustained Attention to Response Task (SART) and the second task involves listening to a complex store soundscape that includes ambient sounds, background music, and auditory notifications. In the second task, subjects are asked to press a button whenever an auditory notification is detected. The proposed method is based on a triangulation approach in which quantitative variables are combined with perceptual ratings and free-text replies to obtain a holistic picture of how the sound environment is perceived. Results from this study can be used to inform the design of systems presenting music and peripheral auditory notifications in a retail environment.

    Download full text (pdf)
    fulltext
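
    As an illustration of how detection responses in the dual-task procedure described above might be scored, the following sketch matches button presses to notification onsets within a fixed response window and derives a hit rate and mean reaction time. The abstract does not specify the authors' scoring procedure, so the window length, variable names, and event data here are purely hypothetical.

```python
# Hypothetical scoring of auditory-notification detections in a dual task:
# a press counts as a hit if it falls within RESPONSE_WINDOW seconds of a
# notification onset. Window length and event data are illustrative only.
RESPONSE_WINDOW = 2.0  # seconds (assumption, not from the paper)

notification_onsets = [12.4, 47.1, 80.3, 122.9]  # seconds into the soundscape
button_presses = [13.1, 48.0, 124.2, 150.5]      # logged press times

hits, reaction_times = 0, []
for onset in notification_onsets:
    # earliest press inside the response window, if any
    in_window = [p for p in button_presses if onset <= p <= onset + RESPONSE_WINDOW]
    if in_window:
        hits += 1
        reaction_times.append(min(in_window) - onset)

hit_rate = hits / len(notification_onsets)
mean_rt = sum(reaction_times) / len(reaction_times) if reaction_times else float("nan")
print(f"hit rate = {hit_rate:.2f}, mean RT = {mean_rt:.2f} s")
```
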
  • 34. Giordano, M
    et al.
    Hattwick, I
    Franco, I
    Egloff, D
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Lamontagne, V
    TeZ, C
    Salter, C
    Wanderley, M
    Design and Implementation of a Whole-Body Haptic Suit for “Ilinx”, a Multisensory Art Installation2015In: Proc. of the 12th Int. Conference on Sound and Music Computing (SMC-15) / [ed] Joseph Timoney and Thomas Lysaght, Maynooth, Ireland: Maynooth University , 2015, Vol. 1, p. 169-175Conference paper (Refereed)
    Abstract [en]

    Ilinx is a multidisciplinary art/science research project focusing on the development of a multisensory art installation involving sound, visuals, and haptics. In this paper we describe the design choices and technical challenges behind the development of the haptic technology embedded in six augmented garments. Starting from perceptual experiments, conducted to characterize the thirty vibrating actuators used in the garments, we describe the hardware and software design and the development of several haptic effects. The garments were successfully used by over 300 people during the premiere of the installation at the TodaysArt 2014 festival in The Hague.

    Download full text (pdf)
    fulltext
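
    The abstract does not detail how the haptic effects themselves were implemented; as a generic illustration of how an effect for an actuator array of this kind might be parameterized, the sketch below sweeps a pulsing intensity envelope across thirty actuators at a fixed control rate. All names and values are hypothetical and are not taken from the Ilinx system.

```python
# Hypothetical vibrotactile effect: a pulsing intensity envelope swept
# across an array of actuators, evaluated at a fixed control rate.
import math

NUM_ACTUATORS = 30   # as in the garments described above
CONTROL_RATE = 100   # envelope updates per second (assumption)

def pulse_envelope(t, rate_hz=4.0):
    """Intensity in [0, 1]: a raised sine pulsing at rate_hz."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * rate_hz * t))

def sweep(t, period=3.0):
    """Index of the actuator currently emphasized by a circular sweep."""
    return int((t % period) / period * NUM_ACTUATORS)

for step in range(5):  # a few control ticks
    t = step / CONTROL_RATE
    focus = sweep(t)
    levels = [pulse_envelope(t) if i == focus else 0.0
              for i in range(NUM_ACTUATORS)]
    print(f"t={t:.2f}s -> actuator {focus} at {levels[focus]:.2f}")
```
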
  • 35.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lindetorp, Hans
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. KMH Royal Acad Mus, Mus & Media Prod, Stockholm, Sweden.
    Unproved methods from the frontier in the course curriculum: A bidirectional and mutually beneficial research challenge2020In: INTED2020 Proceedings, IATED , 2020, p. 7033-7038Conference paper (Refereed)
  • 36.
    Latupeirissa, Adrian Benigno
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Sonic characteristics of robots in films2019In: Proceedings of the 16th Sound and Music Computing Conference, Malaga, Spain, 2019, p. 1-6, article id P2.7Conference paper (Refereed)
    Abstract [en]

    Robots are increasingly becoming an integral part of our everyday life. Expectations of robots may be influenced by how robots are represented in science fiction films. We hypothesize that sonic interaction design for real-world robots may find inspiration in the sound design of fictional robots. In this paper, we present an exploratory study focusing on the sonic characteristics of robot sounds in films. We believe that findings from this study could be of relevance for future robotic applications involving the communication of internal states through sound, as well as for the sonification of expressive robot movements. Excerpts from five films were annotated and analysed using the Long-Time Average Spectrum (LTAS). As an overall observation, we found that a robot's sonic presence is closely related to its physical appearance. Preliminary results show that most of the robots analysed in this study have “metallic” voice qualities, matching the material of their physical form. Characteristics of robot voices show marked differences compared to the voices of human characters; the fundamental frequency of robotic voices is shifted to either higher or lower values, and the voices span a broader frequency band.
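
    An LTAS of the kind used in this study can be approximated by averaging short-time power spectra over the duration of an excerpt, for instance with Welch's method. Below is a minimal sketch in Python; the analysis parameters (sample rate, window size, overlap) are assumptions for illustration, not the settings used by the authors.

```python
# Minimal LTAS sketch: Welch's method averages the periodograms of many
# overlapping windows, which for a long excerpt approximates the
# Long-Time Average Spectrum. All parameters below are illustrative.
import numpy as np
from scipy.signal import welch

fs = 44100                      # assumed sample rate (Hz)
t = np.arange(fs * 5) / fs      # 5 s stand-in for a film excerpt
x = np.sin(2 * np.pi * 220 * t) + 0.1 * np.random.randn(t.size)

freqs, ltas = welch(x, fs=fs, nperseg=4096, noverlap=2048)
ltas_db = 10 * np.log10(ltas + 1e-12)  # power spectrum in dB

# One simple summary for comparing, e.g., robot and human voices:
centroid = np.sum(freqs * ltas) / np.sum(ltas)
print(f"spectral centroid ~ {centroid:.0f} Hz")
```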

  • 37.
    Lindetorp, Hans
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. KMH Royal Coll Mus, Dept Mus & Media Prod, Stockholm, Sweden.
    Svahn, Maria
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Hölling, Josefine
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Inst Res & Coordinat Acoust Mus IRCAM, Sci & Technol Mus & Sound STMS, Paris, France..
    Collaborative music-making: special educational needs school assistants as facilitators in performances with accessible digital musical instruments2023In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 5, article id 1165442Article in journal (Refereed)
    Abstract [en]

    The field of research dedicated to Accessible Digital Musical Instruments (ADMIs) is growing and there is an increased interest in promoting diversity and inclusion in music-making. We have designed a novel system built into previously tested ADMIs that aims at involving assistants, students with Profound and Multiple Learning Disabilities (PMLD), and a professional musician in playing music together. In this study the system is evaluated in a workshop setting using quantitative as well as qualitative methods. One of the main findings was that the sounds from the ADMIs added to the musical context without introducing errors that negatively impacted the music, even though the assistants mentioned experiencing a split in attention between different tasks and a feeling of insecurity about their musical contribution. We discuss the results in terms of how we perceive them as drivers of, or barriers to, reaching our overarching goal of organizing a joint concert that brings together students from the SEN school with students from a music school with a specific focus on traditional orchestral instruments. Our study highlights how a system of networked and synchronized ADMIs could be conceptualized to include assistants more actively in collaborative music-making, as well as design considerations that support them as facilitators.

  • 38. Ljungdahl Eriksson, Martin
    et al.
    Otterbring, Tobias
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Sounds and Satisfaction: A Novel Conceptualization of the Soundscape in Sales and Service Settings2022In: Proceedings of the Nordic Retail and Wholesale Conference, 2022Conference paper (Refereed)
  • 39.
    Núñez-Pacheco, Claudia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. STMS IRCAM.
    Sharing Earthquake Narratives: Making Space for Others in our Autobiographical Design Process2023In: CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems / [ed] Albrecht Schmidt, Kaisa Väänänen,Tesh Goyal, Per Ola Kristensson,Anicia Peters, Stefanie Mueller, Julie R. Williamson, Max L. Wilson, New York, NY, United States, 2023, article id 685Conference paper (Refereed)
    Abstract [en]

    As interaction designers are venturing to design for others based on autobiographical experiences, it becomes particularly relevant to critically distinguish the designer’s voice from others’ experiences. However, few reports go into detail about how self and others mutually shape the design process and how to incorporate external evaluation into these designs. We describe a one-year process involving the design and evaluation of a prototype combining haptics and storytelling, aiming to materialise and share somatic memories of earthquakes experienced by a designer and her partner. We contribute with three strategies for bringing others into our autobiographical processes, avoiding the dilution of first-person voices while critically addressing design flaws that might hinder the representation of our stories.

  • 40.
    Núñez-Pacheco, Claudia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. STMS Science et Technology of Music and Sound Lab, IRCAM Institute of Research and Coordination in Acoustics/Music, Paris, France.
    Sharing Earthquake Narratives: Making Space for Others in our Autobiographical Design Process2023In: CHI 2023 - Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery (ACM) , 2023, article id 685Conference paper (Refereed)
    Abstract [en]

    As interaction designers are venturing to design for others based on autobiographical experiences, it becomes particularly relevant to critically distinguish the designer's voice from others' experiences. However, few reports go into detail about how self and others mutually shape the design process and how to incorporate external evaluation into these designs. We describe a one-year process involving the design and evaluation of a prototype combining haptics and storytelling, aiming to materialise and share somatic memories of earthquakes experienced by a designer and her partner. We contribute with three strategies for bringing others into our autobiographical processes, avoiding the dilution of first-person voices while critically addressing design flaws that might hinder the representation of our stories.

  • 41.
    Paloranta, Jimmie
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Lundström, Anders
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Interaction with a large sized augmented string instrument intended for a public setting2016In: Sound and Music Computing 2016 / [ed] Großmann, Rolf and Hajdu, Georg, Hamburg: Zentrum für Mikrotonale Musik und Multimediale Komposition (ZM4) , 2016, p. 388-395Conference paper (Refereed)
    Abstract [en]

    In this paper we present a study of the interaction with a large-sized string instrument intended for a large installation in a museum, with a focus on encouraging creativity and learning and on providing engaging user experiences. In the study, nine participants were video recorded while interacting with the string on their own, followed by an interview focusing on their experiences, creativity, and the functionality of the string. In line with previous research, our results highlight the importance of designing for different levels of engagement (exploration, experimentation, challenge). However, results additionally show that these levels need to take into account the user's age and musical background, as these profoundly affect the way the user plays with and experiences the string.

    Download full text (pdf)
    fulltext
  • 42.
    Panariello, Claudio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    SuperOM: a SuperCollider class to generate music scores in OpenMusic2023In: Proceedings of the 8th International Conference on Technologies for Music Notation and Representation (TENOR) / [ed] Anthony Paul De Ritis, Victor Zappi, Jeremy Van Buskirk and John Mallia, Boston, MA, USA: Northeastern University Library , 2023, p. 68-75Conference paper (Refereed)
    Abstract [en]

    This paper introduces SuperOM, a class built for the software SuperCollider in order to create a bridge to OpenMusic and thus facilitate the creation of musical scores from SuperCollider patches. SuperOM is primarily intended as a tool for SuperCollider users who make use of assisted composition techniques and want the output of such processes to be captured through automatic notation transcription. This paper first presents an overview of existing transcription tools for SuperCollider, followed by a detailed description of SuperOM and its implementation, as well as examples of how it can be used in practice. Finally, a case study in which the transcription tool was used as an assistive composition tool to generate the score of a sonification, which was later turned into a piano piece, is discussed.

    Download full text (pdf)
    fulltext
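
    The underlying idea, capturing the output of an algorithmic-composition process as quantized, notation-ready data, can be sketched independently of the class itself. The snippet below is written in Python rather than SuperCollider and does not use the actual SuperOM API; it merely illustrates the kind of event quantization a score-generation bridge has to perform.

```python
# Not the SuperOM API: a language-neutral sketch of the idea behind it,
# snapping raw synthesis events onto a rhythmic grid so that a score
# generator (such as OpenMusic) can render them as notation.
from fractions import Fraction

GRID = Fraction(1, 4)  # quarter-of-a-beat grid (assumption)

def quantize(beats):
    """Snap an onset or duration (in beats) to the nearest grid point."""
    return round(Fraction(beats).limit_denominator(64) / GRID) * GRID

# (onset in beats, duration in beats, MIDI pitch) as a patch might emit them
events = [(0.02, 0.48, 60), (0.51, 0.97, 64), (1.49, 0.26, 67)]

score = [{"onset": quantize(on), "dur": quantize(du), "midi": p}
         for on, du, p in events]
for note in score:
    print(note)
```
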
  • 43.
    Panariello, Claudio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
Sköld, Mattias
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. KMH Royal College of Music.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    From vocal sketching to sound models by means of a sound-based musical transcription system2019In: Proceedings of the Sound and Music Computing Conferences, CERN , 2019, p. 167-173Conference paper (Refereed)
    Abstract [en]

    This paper explores how notation developed for the representation of sound-based musical structures could be used for the transcription of vocal sketches representing expressive robot movements. A mime actor initially produced expressive movements which were translated to a humanoid robot. The same actor was then asked to illustrate these movements using vocal sketching. The vocal sketches were transcribed by two composers using sound-based notation. The same composers later synthesized new sonic sketches from the annotated data. Different transcriptions and synthesized versions of these were compared in order to investigate how the audible outcome changes for different transcriptions and synthesis routines. This method provides a palette of sound models suitable for the sonification of expressive body movements.

  • 44.
    Spiro, Neta
    et al.
    Centre for Performance Science, Royal College of Music, London, UK;Faculty of Medicine, Imperial College London, London, UK.
    Sanfilippo, Katie Rose M.
    Centre for Healthcare Innovation Research, School of Health and Psychological Sciences, City, University of London, UK.
    McConnell, Bonnie B.
    College of Arts and Social Sciences, Australian National University, Canberra, Australia.
    Pike-Rowney, Georgia
    Centre for Classical Studies, Australian National University, Canberra, Australia.
    Bonini Baraldi, Filippo
    Instituto de Etnomusicologia – Centro de Estudos em Música e Dança (INET-md), Faculty of Social and Human Sciences, NOVA University Lisbon, Portugal;Centre de Recherche en Ethnomusicologie (CREM-LESC), Paris Nanterre University, Nanterre, France.
    Brabec, Bernd
    Institute of Musicology, University of Innsbruck, Innsbruck, Austria.
    Van Buren, Kathleen
    Humanities in Medicine, Mayo Clinic.
    Camlin, Dave
    Department of Music Education, Royal College of Music, London, UK.
    Cardoso, Tânya Marques
    Musicoterapia (Music Therapy Undergraduate Course), Universidade Federal de Goiás (Federal University of Goiás), Goiania, Brasil (Brazil).
    Çifdalöz, Burçin Uçaner
    Ankara Haci Bayram Veli University, Ankara, Türkiye.
    Cross, Ian
    Faculty of Music, Centre for Music & Science, Cambridge, UK.
    Dumbauld, Ben
    Marimba Band, New York, New York, USA.
    Ettenberger, Mark
    Music Therapy Service Clínica Colsanitas, Bogotá, Colombia;SONO – Centro de Musicoterapia, Bogotá, Colombia.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Fouché, Sunelle
    University of Pretoria, Pretoria, South Africa;MusicWorks, Clareinch, South Africa.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. STMS Science and Technology of Music and Sound, IRCAM Institute for Research and Coordination in Acoustics/Music, Paris, France.
    Gosine, Jane
    School of Music, Memorial University, St. John's, Canada.
Graham-Jackson, April L.
    Department of Geography, University of California, Berkeley, CA, USA.
    Grahn, Jessica A.
Department of Psychology and Brain and Mind Institute, London, Ontario, Canada.
    Harrison, Klisala
    Department of Musicology and Dramaturgy, School of Communication and Culture, Aarhus University, Aarhus, Denmark.
    Ilari, Beatriz
    University of Southern California, Thornton School of Music, Los Angeles, CA, USA.
    Mollison, Sally
    SALMUTATIONS, Lutruwita/Tasmania, Australia.
    Morrison, Steven J.
    Henry & Leigh Bienen School of Music, Northwestern University, Evanston, IL, USA.
    Pérez-Acosta, Gabriela
    Faculty of Music, National Autonomous University of Mexico, Ciudad de Mexico, Mexico.
    Perkins, Rosie
    Centre for Performance Science, Royal College of Music, London, UK;Faculty of Medicine, Imperial College London, London, UK.
    Pitt, Jessica
    Department of Music Education, Royal College of Music, London, UK.
    Rabinowitch, Tal-Chen
    School of Creative Arts Therapies, University of Haifa, Haifa, Israel.
    Robledo, Juan-Pablo
    Millennium Institute for Care Research (MICARE), Santiago, Chile.
    Roginsky, Efrat
    School of Creative Arts Therapies, University of Haifa, Haifa, Israel.
    Shaughnessy, Caitlin
    Centre for Performance Science, Royal College of Music, London, UK;Faculty of Medicine, Imperial College London, London, UK.
    Sunderland, Naomi
    Creative Arts Research Institute and School of Health Sciences and Social Work, Griffith University, Queensland, Australia.
    Talmage, Alison
    School of Music and Centre for Brain Research, The University of Auckland – Waipapa Taumata Rau, Auckland, New Zealand.
    Tsiris, Giorgos
    Queen Margaret University, Edinburgh, UK;St Columba's Hospice Care, Edinburgh, UK.
    de Wit, Krista
    Music in Context, Hanze University of Applied Sciences, Groningen, The Netherlands.
    Perspectives on Musical Care Throughout the Life Course: Introducing the Musical Care International Network2023In: Music & Science, E-ISSN 2059-2043, Vol. 6Article in journal (Refereed)
    Abstract [en]

    In this paper we report on the inaugural meetings of the Musical Care International Network held online in 2022. The term “musical care” is defined by Spiro and Sanfilippo (2022) as “the role of music—music listening as well as music-making—in supporting any aspect of people's developmental or health needs” (pp. 2–3). Musical care takes varied forms in different cultural contexts and involves people from different disciplines and areas of expertise. Therefore, the Musical Care International Network takes an interdisciplinary and international approach and aims to better reflect the disciplinary, geographic, and cultural diversity relevant to musical care. Forty-two delegates participated in five inaugural meetings over two days, representing 24 countries and numerous disciplines and areas of practice. Based on the meetings, the aims of this paper are to (1) better understand the diverse practices, applications, contexts, and impacts of musical care around the globe and (2) introduce the Musical Care International Network. Transcriptions of the recordings, alongside notes taken by the hosts, were used to summarise the conversations. The discussions developed ideas in three areas: (a) musical care as context-dependent and social, (b) musical care's position within the broader research and practice context, and (c) debates about the impact of and evidence for musical care. We can conclude that musical care refers to context-dependent and social phenomena. The term musical care was seen as useful in talking across boundaries while not minimizing individual disciplinary and professional expertise. The use of the term was seen to help balance the importance and place of multiple disciplines, with a role to play in the development of a collective identity. This collective identity was seen as important in advocacy and in helping to shape policy. The paper closes with proposed future directions for the network and its emerging mission statement.

  • 45.
    Stojanovski, Todor
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Zhang, Hui
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electric Power and Energy Systems.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Chhatre, Kiran
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Peters, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Samuels, Ivor
    Univ Birmingham, Urban Morphol Res Grp, Birmingham, W Midlands, England..
    Sanders, Paul
    Deakin Univ, Melbourne, Vic, Australia..
    Partanen, Jenni
    Tallinn Univ Technol, Tallinn, Estonia..
    Lefosse, Deborah
    Sapienza, Rome, Italy..
    Rethinking Computer-Aided Architectural Design (CAAD) - From Generative Algorithms and Architectural Intelligence to Environmental Design and Ambient Intelligence2022In: Computer-Aided Architectural Design: Design Imperatives: The Future Is Now / [ed] Gerber, D Pantazis, E Bogosian, B Nahmad, A Miltiadis, C, Springer Nature , 2022, Vol. 1465, p. 62-83Conference paper (Refereed)
    Abstract [en]

    Computer-Aided Architectural Design (CAAD) finds its historical precedents in technological enthusiasm for generative algorithms and architectural intelligence. Current developments in Artificial Intelligence (AI) and paradigms in Machine Learning (ML) bring new opportunities for creating innovative digital architectural tools, but in practice this is not happening. CAAD enthusiasts revisit generative algorithms, while professional architects and urban designers remain reluctant to use software that automatically generates architecture and cities. This paper looks at the history of CAAD and digital tools for Computer-Aided Design (CAD), Building Information Modeling (BIM), and Geographic Information Systems (GIS) in order to reflect on the role of AI in future digital tools and professional practices. Architects and urban designers have diagrammatic knowledge and work with design problems on a symbolic level. Digital tools gradually evolved from CAD to BIM software with symbolic architectural elements. BIM software works like CAAD (CAD systems for architects) or a digital drawing board, delivering plans, sections, and elevations, but without AI. AI has the capability to process data and interact with designers. AI in future digital tools for CAAD and Computer-Aided Urban Design (CAUD) can link to big data and develop ambient intelligence. Architects and urban designers can harness the benefits of analytical, ambient-intelligent AIs in creating environmental designs, not only for shaping buildings in isolated virtual cubicles. However, there is a need to prepare frameworks for communication between AIs and professional designers. If the cities of the future are to integrate spatially analytical AI and become smart or even ambient intelligent, AI should be applied to improving the lives of inhabitants and helping with their daily living and sustainability.

  • 46.
    Svahn, Maria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Hölling, Josefine
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lindetorp, Hans
    Royal Academy of Music, Stockholm, Sweden.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Collaborative music-making with Special Education Needs students and their assistants: A study on music playing among preverbal individuals with the Funki instruments2023In: NNDR 16th Research Conference Nordic Network on Disability Research (NNDR), 2023Conference paper (Refereed)
    Abstract [en]

    The field of research dedicated to Accessible Digital Musical Instruments (ADMIs) is growing and there is an increased interest in how different accessible music technologies can be used to promote diversity and inclusion in music-making. Researchers currently voice the need to move away from a techno-centric view of musical expression and to focus more on the sociocultural contexts in which ADMIs are used. In this study, we explore how “Funki”, a set of ADMIs developed for students with Profound and Multiple Learning Disabilities (PMLD), can be used in a collaborative music-making setting in a Special Educational Needs (SEN) school, together with assistants. Previous findings have suggested that the musical interactions taking place, as well as the group dynamics, were highly dependent on the session assistants and their level of participation. It is therefore important to consider the active role of assistants, who may have little or no prior music training. The instruments provided should allow the assistant to not only help the students in making music but also enable the assistants themselves to create sounds without interfering with or disturbing the sounds produced by the students. In the current work, we show how the Funki instruments could be expanded with WebAudioXML (waxml) for mapping user interactions to control music and audio parameters, making it possible for assistants to control musical aspects like the tonality, rhythmic density, or structure of the composition. The system was tested in a case study with four students and their assistants at a SEN school, including semi-structured interviews on how Funki supported inclusive music-making and the assistant's role in this context. The findings of this work highlight how ADMIs could be conceptualized and designed to include special education teachers, teaching assistants, and other carers more actively in collaborative music-making.
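
    As a rough illustration of the kind of mapping layer described above, the sketch below lets an assistant set the tonality while incoming student notes are snapped onto the current scale. It is written in Python, not WebAudioXML, and none of the names are taken from the waxml API; it only demonstrates the general principle of constraining input so that it always fits the ongoing musical context.

```python
# Hypothetical mapping layer in the spirit of the setup described above:
# an assistant selects the tonality, and incoming student notes are snapped
# to the active scale so they cannot clash with the ongoing music.
SCALES = {
    "C_major": [0, 2, 4, 5, 7, 9, 11],          # pitch classes
    "A_minor_harmonic": [9, 11, 0, 2, 4, 5, 8],
}

state = {"scale": "C_major"}  # parameters controlled by the assistant

def set_tonality(name):
    state["scale"] = name

def snap_to_scale(midi_note):
    """Return the nearest MIDI note whose pitch class is in the active scale."""
    pcs = SCALES[state["scale"]]
    # candidate notes in the neighboring octaves, then pick the closest
    candidates = [12 * octave + pc
                  for octave in range(midi_note // 12 - 1, midi_note // 12 + 2)
                  for pc in pcs]
    return min(candidates, key=lambda n: abs(n - midi_note))

set_tonality("A_minor_harmonic")
print([snap_to_scale(n) for n in (61, 63, 66)])  # raw input -> in-scale notes
```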
