kth.se Publications
1 - 50 of 294
  • 1.
    Fernandez-Martín, Claudio
    et al.
    CVBLab, Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, 46022 València, Spain.
    Colomer, Adrian
    CVBLab, Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, 46022 València, Spain; ValgrAI - Valencian Graduate School and Research Network for Artificial Intelligence, Universitat Politècnica de València, 46022 València, Spain.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Naranjo, Valery
    CVBLab, Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, 46022 València, Spain.
    Choosing only the best voice imitators: Top-K many-to-many voice conversion with StarGAN, 2024. In: Speech Communication, ISSN 0167-6393, E-ISSN 1872-7182, Vol. 156, article id 103022. Article in journal (Refereed)
    Abstract [en]

    Voice conversion systems have become increasingly important as the use of voice technology grows. Deep learning techniques, specifically generative adversarial networks (GANs), have enabled significant progress in the creation of synthetic media, including the field of speech synthesis. One of the most recent examples, StarGAN-VC, uses a single pair of generator and discriminator to convert voices between multiple speakers. However, the training stability of GANs can be an issue. The Top-K methodology, which trains the generator using only the best K generated samples that “fool” the discriminator, has been applied to image tasks and simple GAN architectures. In this work, we demonstrate that the Top-K methodology can improve the quality and stability of converted voices in a state-of-the-art voice conversion system like StarGAN-VC. We also explore the optimal time to implement the Top-K methodology and how to reduce the value of K during training. Through both quantitative and qualitative studies, it was found that the Top-K methodology leads to quicker convergence and better conversion quality compared to regular or vanilla training. In addition, human listeners perceived the samples generated using Top-K as more natural and were more likely to believe that they were produced by a human speaker. The results of this study demonstrate that the Top-K methodology can effectively improve the performance of deep learning-based voice conversion systems.
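    The Top-K selection step described in this abstract — updating the generator only on the K generated samples that best "fool" the discriminator, and reducing K during training — can be sketched independently of any GAN framework. The following is a minimal, framework-agnostic illustration of that selection idea, not the authors' implementation; the scores, batch, and decay schedule are hypothetical:

    ```python
    def top_k_mask(disc_scores, k):
        """Boolean mask selecting the k generated samples with the highest
        discriminator ('realness') scores; in Top-K training only these
        samples contribute to the generator's loss."""
        if k >= len(disc_scores):
            return [True] * len(disc_scores)
        # indices of the k highest-scoring samples (ties broken by order)
        best = sorted(range(len(disc_scores)),
                      key=lambda i: disc_scores[i], reverse=True)[:k]
        keep = set(best)
        return [i in keep for i in range(len(disc_scores))]

    def decay_k(k, decay=0.99, k_min=1):
        """Anneal K towards a floor over training steps (the abstract mentions
        reducing the value of K during training; this schedule is hypothetical)."""
        return max(k_min, int(k * decay))

    # Hypothetical batch of 6 generated samples and their discriminator scores
    scores = [0.10, 0.90, 0.40, 0.80, 0.20, 0.70]
    mask = top_k_mask(scores, k=3)  # [False, True, False, True, False, True]
    # In a real GAN update, the generator loss would average only the
    # per-sample losses where mask is True.
    ```

    In an actual training loop the mask would gate the per-sample generator losses before averaging, so gradients flow only from the most convincing samples.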

  • 2.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Institut de Recherche et Coordination Acoustique/Musique (IRCAM), Sciences et Technologies de la Musique et du Son (STMS), UMR 9912, Paris, France.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Agres, Kat
    National University of Singapore, Centre for Music and Health, Yong Siew Toh Conservatory of Music, Singapore.
    Lucas, Alex
    Queen's University Belfast, Sonic Arts Research Centre, Belfast, Northern Ireland.
    Editorial: New advances and novel applications of music technologies for health, well-being, and inclusion, 2024. In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 6, article id 1358454. Article in journal (Refereed)
  • 3.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    From Motion Pictures to Robotic Features: Adopting film sound design practices to foster sonic expression in social robotics through interactive sonification, 2024. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This dissertation investigates the role of sound design in social robotics, drawing inspiration from robot depictions in science-fiction films. It addresses the limitations of robots’ movements and expressive behavior by integrating principles from film sound design, seeking to improve human-robot interaction through expressive gestures and non-verbal sounds.

    The compiled works are structured into two parts. The first part focuses on perceptual studies, exploring how people perceive non-verbal sounds displayed by a Pepper robot related to its movement. These studies highlighted preferences for more refined sound models, subtle sounds that blend with ambient sounds, and sound characteristics matching the robot’s visual attributes. This part also resulted in a programming interface connecting the Pepper robot with sound production tools.

    The second part focuses on a structured analysis of robot sounds in films, revealing three narrative themes related to robot sounds in films with implications for social robotics. The first theme involves sounds associated with the physical attributes of robots, encompassing sub-themes of sound linked to robot size, exposed mechanisms, build quality, and anthropomorphic traits. The second theme delves into sounds accentuating robots’ internal workings, with sub-themes related to learning and decision-making processes. Lastly, the third theme revolves around sounds utilized in robots’ interactions with other characters within the film scenes.

    Based on these works, the dissertation discusses sound design recommendations for social robotics inspired by practices in film sound design. These recommendations encompass selecting the appropriate sound materials and sonic characteristics such as pitch and timbre, employing movement sound for effective communication and emotional expression, and integrating narrative and context into the interaction.

    Download full text (pdf)
    Kappa
  • 4.
    Lindetorp, Hans
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Royal College of Music, Stockholm, Sweden.
    Interactive Sound and Music Technology for Everyone: Designing Inclusive Standards for Web Audio Applications, 2024. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this compilation thesis, I examine how systems and formats can be designed to include more people in creating interactive sound and music applications. I contribute knowledge, aiming to include everyone, but focus specifically on musicians with no programming skills and little interest in technical challenges. 

    I have designed, developed, and evaluated a system – WebAudioXML (WAXML) – for implementing interactive sound and music in web pages using native web technologies. The system’s design is novel, and the work contributes knowledge about how markup language and spreadsheet concepts can describe sound and music structures. The results give insights into how high-level musical representation can be structured, named, and designed to be understood by those without prior programming experience. 

    I also use WAXML to address musical diversity in interactive applications. I identify and solve technical challenges where current systems struggle to implement traditionally performed music. Novel solutions are designed, evaluated, discussed, and presented in the included papers. 

    The system is finally implemented in applications aimed at education and inclusion, where I evaluate them through a series of case studies. The results confirm Web Audio as a solid platform for accessible learning, sharing, and distribution of audio applications and suggest that collective efforts shaping an ecosystem with a universal format would enable even more creators to make interactive sound and music applications. 

    I research FOR the art THROUGH design. The knowledge output is valid for any interactive sound and music system but specifically addresses the design of Web Audio applications. 

    Download (pdf)
    PhD Thesis - Interactive Sound and Music Technology for Everyone
  • 5.
    Favero, Federico
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Light Rhythms: Exploring the Perceptual and Behavioural Effects of Daylight and Artificial Light Conditions in a Scandinavian Context, 2024. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This compilation thesis collects multidisciplinary work on the impact of light rhythms on perception and behaviour. The thesis was structured to answer and discuss the questions: “How does a person feel and behave in an illuminated space?” and “Do variable light conditions influence perception, appraisal and motion?”. To answer these questions, I applied methods from design, psychology and behavioural science, conducted literature reviews and performed two experimental studies. In response to the first question, the outcome of the five papers included in the thesis shows that light and lighting rhythms elicit specific acute and long-term effects. These effects fall into the following categories: visual and perceptual, appraisal and experience, and behavioural and physiological. To structure and visualize these diverse aspects, I introduce the CLAPP framework: Context, Light(ing), Action (behaviour), Perception, Person. The framework highlights the complex interplay between light, environment, and human response by displaying features related to spatial and light rhythms, effects of light on mind and body, and personal features. The framework can provide structure and direction for education and research activities within the scope of Architectural Lighting Design. In response to the second research question, results from the experimental studies reveal that, even after eliminating view and sunlight, variable daylight conditions elicit better mood and higher pleasure, and influence motion, compared to artificial light conditions. The results of this thesis may contribute to achieving the UN sustainability goals, specifically to improve the well-being of the population (SDG 3), to design a built environment that is safe and resilient (SDG 11), and to promote the use of affordable and clean energy (SDG 7).
    Building on the experience gained during this thesis work, I am confident that multidisciplinary collaboration will enable the integration of the diverse aspects included in the CLAPP framework, paving the way for the design of spaces that are both resilient and supportive of health.

    Download full text (pdf)
    Favero_Kappa
  • 6.
    Lindetorp, Hans
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Royal College of Music.
    Towards a Standard for Interactive Music, 2024. In: Innovation in Music: Technology and Creativity / [ed] Jan-Olof Gullö, Russ Hepworth-Sawyer, Justin Paterson, Rob Toulson, Mark Marrington, London: Taylor & Francis, 2024, 1st ed., p. 1-16. Chapter in book (Refereed)
    Abstract [en]

    Interactive Media is a rapidly growing market and an increasingly important target for music production. The field has attracted a lot of focus from both the industry and academia, but there is still a great potential for further development of tools and formats for content creation and implementation.

    Even if music is an important component in media production, there is still no open file format for delivering and sharing interactive musical content between different applications, and the terminology varies between different applications. This study aims at finding useful terminologies and requirements for such a format.

    Over a period of eight years, students, teachers, and researchers from the Royal College of Music in Stockholm (KMH) have contributed artistic visions, prototyping, and testing to the development of a JavaScript framework called “iMusicXML”. In this exploratory design study, the current state of iMusicXML is analyzed to reveal key concepts and features drawn from more than 100 student projects. Several features and solutions that have proven useful are presented, but also critiqued for their limited perspectives. It is also suggested that a wide range of users, genres, and applications should be invited to a continuing discussion about standards for interactive music.

  • 7.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bouvier, Baptiste
    STMS IRCAM CNRS SU, Paris, France.
    Fraticelli, Matthieu
    Département d’études cognitives ENS, Paris, France.
    A Dual-Task Experimental Methodology for Exploration of Saliency of Auditory Notifications in a Retail Soundscape, 2023. In: Proceedings of the 28th International Conference on Auditory Display (ICAD 2023): Sonification for the Masses, 2023. Conference paper (Refereed)
    Abstract [en]

    This paper presents an experimental design of a dual-task experiment aimed at exploring the salience of auditory notifications. The first task is a Sustained Attention to Response Task (SART) and the second task involves listening to a complex store soundscape that includes ambient sounds, background music and auditory notifications. In this task, subjects are asked to press a button when an auditory notification is detected. The proposed method is based on a triangulation approach in which quantitative variables are combined with perceptual ratings and free-text question replies to obtain a holistic picture of how the sound environment is perceived. Results from this study can be used to inform the design of systems presenting music and peripheral auditory notifications in a retail environment.

    Download full text (pdf)
    fulltext
  • 8.
    Misgeld, Olof
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lindetorp, Hans
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Holzapfel, Andre
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Accessible sonification of movement: A case in Swedish folk dance, 2023. In: Proceedings of SMC 2023 - Sound and Music Computing Conference, Sound and Music Computing Network, 2023, p. 201-208. Conference paper (Refereed)
    Abstract [en]

    This study presents a sonification tool – SonifyFOLK – designed for intuitive access by musicians and dancers in their sonic explorations of movements in dance performances. It is implemented as a web-based application to facilitate accessible audio parameter mapping of movement data for non-experts, and is applied and evaluated with Swedish folk musicians and dancers in their exploration of sonifying dance. SonifyFOLK is based on the WebAudioXML Sonification Toolkit and was designed within a group of artists and engineers using artistic goals as drivers for the sound design. The design addresses the challenges of providing an accessible interface for mapping movement data to audio parameters, managing multi-dimensional data, and creating audio mapping templates for a contextually grounded sound design. The evaluation documents a diversity of sonification outcomes, reflections by participants that suggest curiosity for further work on sonification, and the importance of the immediacy of both the visual and acoustic feedback on parameter choices.
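    The core of the audio parameter mapping described here — rescaling a stream of movement data into a perceptually useful audio range — can be illustrated with a small sketch. This is a generic example of the mapping idea, not SonifyFOLK or WebAudioXML code; the value ranges and movement data are hypothetical:

    ```python
    def map_range(x, in_min, in_max, out_min, out_max):
        """Linearly rescale x from an input range (e.g. movement speed)
        to an output range (e.g. an audio parameter such as MIDI pitch),
        clamping out-of-range input to the edges."""
        if in_max == in_min:
            return out_min
        x = min(max(x, in_min), in_max)  # clamp to the input range
        return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

    # Hypothetical normalized dancer-velocity samples mapped to MIDI pitch 48-84
    velocities = [0.0, 0.25, 0.5, 1.0]
    pitches = [round(map_range(v, 0.0, 1.0, 48, 84)) for v in velocities]
    # pitches == [48, 57, 66, 84]
    ```

    In a real sonification tool the same scaling would feed an audio parameter (pitch, filter cutoff, amplitude) on every incoming motion frame; exponential rather than linear mappings are often preferred for frequency-like parameters.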

  • 9.
    Lindetorp, Hans
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. KMH Royal College of Music, Department of Music and Media Production, Stockholm, Sweden.
    Svahn, Maria
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Hölling, Josefine
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Institut de Recherche et Coordination Acoustique/Musique (IRCAM), Sciences et Technologies de la Musique et du Son (STMS), Paris, France.
    Collaborative music-making: special educational needs school assistants as facilitators in performances with accessible digital musical instruments, 2023. In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 5, article id 1165442. Article in journal (Refereed)
    Abstract [en]

    The field of research dedicated to Accessible Digital Musical Instruments (ADMIs) is growing and there is an increased interest in promoting diversity and inclusion in music-making. We have designed a novel system built into previously tested ADMIs that aims at involving assistants, students with Profound and Multiple Learning Disabilities (PMLD), and a professional musician in playing music together. In this study the system is evaluated in a workshop setting using quantitative as well as qualitative methods. One of the main findings was that the sounds from the ADMIs added to the musical context without making errors that impacted the music negatively, even when the assistants mentioned experiencing a split between attending to different tasks and a feeling of insecurity toward their musical contribution. We discuss the results in terms of how we perceive them as drivers or barriers toward reaching our overarching goal of organizing a joint concert that brings together students from the Special Educational Needs (SEN) school with students from a music school with a specific focus on traditional orchestral instruments. Our study highlights how a system of networked and synchronized ADMIs could be conceptualized to include assistants more actively in collaborative music-making, as well as design considerations that support them as facilitators.

  • 10.
    Svahn, Maria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Hölling, Josefine
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lindetorp, Hans
    Royal Academy of Music, Stockholm, Sweden.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Collaborative music-making with Special Education Needs students and their assistants: A study on music playing among preverbal individuals with the Funki instruments, 2023. In: NNDR 16th Research Conference, Nordic Network on Disability Research (NNDR), 2023. Conference paper (Refereed)
    Abstract [en]

    The field of research dedicated to Accessible Digital Musical Instruments (ADMIs) is growing and there is an increased interest in how different accessible music technologies can be used to promote diversity and inclusion in music-making. Researchers currently voice the need to move away from a techno-centric view of musical expression and to focus more on the sociocultural contexts in which ADMIs are used. In this study, we explore how “Funki”, a set of ADMIs developed for students with Profound and Multiple Learning Disabilities (PMLD), can be used in a collaborative music-making setting in a Special Educational Needs (SEN) school, together with assistants. Previous findings have suggested that the musical interactions taking place, as well as the group dynamics, were highly dependent on the session assistants and their level of participation. It is therefore important to consider the active role of assistants, who may have little or no prior music training. The instruments provided should allow the assistant to not only help the students in making music but also enable the assistants themselves to create sounds without interfering or disturbing the sounds produced by the students. In the current work, we show how the Funki instruments could be expanded with WebAudioXML (waxml) for mapping user interactions to control music and audio parameters, making it possible for assistants to control musical aspects like the tonality, rhythmic density, or structure of the composition. The system was tested in a case study with four students and their assistants at a SEN school, including semi-structured interviews on how Funki supported inclusive music-making and the assistant's role in this context. The findings of this work highlight how ADMIs could be conceptualized and designed to include special education teachers, teaching assistants, and other carers more actively in collaborative music-making.

  • 11.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Converging Creativity: Intertwining Music and Code, 2023. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This compilation thesis is a collection of case studies that presents examples of creative coding in various contexts, focusing on how such practice led to the creation and exploration of musical expressions, and how I interact with the design of the code itself. My own experience as a music composer influences this thesis work. By this I mean that although the thesis places itself in the Sound and Music Computing academic tradition, it is also profoundly founded upon a personal artistic perspective. This perspective has been the overarching view that has informed the studies included in the thesis, despite all being quite different. The first part of the thesis describes the practice of creative coding, creativity models, and the interaction between code and coder. I then propose a perspective on creative coding based on the idea of asymptotic convergence of creativity. This is followed by a presentation of five papers and three music works, all inspected through my stance on this creative practice. Finally, I examine and discuss these works in detail, concluding by suggesting that the asymptotic convergence of creativity framework might serve as a useful tool that adds to the literature on creative coding practice, especially for situations in which such work is carried out in an academic research setting.

    Download full text (pdf)
    Claudio Panariello - PhD Thesis
  • 12.
    Falkenberg, Kjetil
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Oestreicher, Lars
    Educating for inclusion: Teaching Design for all in the wild as a motivator, 2023. In: Proceedings of the Nordic Network for Disability Research, 2023. Conference paper (Refereed)
  • 13.
    Telang, Sargam
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Marques, Malin
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Emotional Feedback of Robots: Comparing the perceived emotional feedback by an audience between masculine and feminine voices in robots in popular media, 2023. In: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, Association for Computing Machinery (ACM), 2023, p. 434-436. Conference paper (Refereed)
    Abstract [en]

    The sound design of fantastical elements can tell the audience much about characters and objects. Robots are among the common fantastical characters that need to be sonified to indicate different aspects of their character. Often, one or more of these traits indicate gender and behavior. We investigated these traits in a survey with both quantitative and qualitative questions about the participants' perceptions. We found that participants indicated a bias towards certain robots depending on perceived femininity and masculinity.

  • 14.
    Zhang, Brian J.
    et al.
    Oregon State University, Collaborative Robotics and Intelligent Systems Institute, Corvallis, OR 97331, USA.
    Orthmann, Bastian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Fick, Jason
    Oregon State University, Music Department, Corvallis, OR 97331, USA.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Fitter, Naomi T.
    Oregon State University, Collaborative Robotics and Intelligent Systems Institute, Corvallis, OR 97331, USA.
    Hearing it Out: Guiding Robot Sound Design through Design Thinking, 2023. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 2064-2071. Conference paper (Refereed)
    Abstract [en]

    Sound can benefit human-robot interaction, but little work has explored questions on the design of nonverbal sound for robots. The unique confluence of sound design and robotics expertise complicates these questions, as most roboticists do not have sound design expertise, necessitating collaborations with sound designers. We sought to understand how roboticists and sound designers approach the problem of robot sound design through two qualitative studies. The first study followed discussions by robotics researchers in focus groups, where these experts described motivations to add robot sound for various purposes. The second study guided music technology students through a generative activity for robot sound design; these sound designers in-training demonstrated high variability in design intent, processes, and inspiration. To unify the two perspectives, we structured recommendations through the design thinking framework, a popular design process. The insights provided in this work may aid roboticists in implementing helpful sounds in their robots, encourage sound designers to enter into collaborations on robot sound, and give key tips and warnings to both.

  • 15.
    Rafi, Ayesha Kajol
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Murdeshwar, Akshata
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Investigating the Role of Robot Voices and Sounds in Shaping Perceived Intentions, 2023. In: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, Association for Computing Machinery (ACM), 2023, p. 425-427. Conference paper (Refereed)
    Abstract [en]

    This study explores if, and how, the choices made regarding a robot's speaking voice and characteristic body sounds influence viewers' perceptions of its intent, i.e., whether the robot's intention is positive or negative. The analysis focuses on robot representations and sounds in three films: "Robots" (2005) [1], "NextGen" (2018) [2], and "Love, Death, and Robots - Three Robots" (2019) [3]. In eight qualitative interviews, five parameters (tonality, intonation, volume, pitch, and speed) were used to understand robot sounds and the participants' perception of a robot's attitude and intentions. The study culminates in a set of recommendations and considerations for human-robot interaction designers to apply when sound coding for body, physiology, and movement.

  • 16.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. STMS Science and Technology of Music and Sound, IRCAM Institute for Research and Coordination in Acoustics/Music, Paris, France.
    Musical Robots: Overview and Methods for Evaluation, 2023. In: Sound and Robotics: Speech, Non-Verbal Audio and Robotic Musicianship / [ed] Richard Savery, Boca Raton, FL, USA: Informa UK Limited, 2023, p. 1-42. Chapter in book (Refereed)
    Abstract [en]

    Musical robots are complex systems that require the integration of several different functions to successfully operate. These processes range from sound analysis and music representation to mapping and modeling of musical expression. Recent advancements in Computational Creativity (CC) and Artificial Intelligence (AI) have added yet another level of complexity to these settings, with aspects of Human–AI Interaction (HAI) becoming increasingly important. The rise of intelligent music systems raises questions not only about the evaluation of Human-Robot Interaction (HRI) in robot musicianship but also about the quality of the generated musical output. The topic of evaluation has been extensively discussed and debated in the fields of Human–Computer Interaction (HCI) and New Interfaces for Musical Expression (NIME) throughout the years. However, interactions with robots often have a strong social or emotional component, and the experience of interacting with a robot is therefore somewhat different from that of interacting with other technologies. Since musical robots produce creative output, topics such as creative agency and what is meant by the term "success" when interacting with an intelligent music system should also be considered. The evaluation of musical robots thus expands beyond traditional evaluation concepts such as usability and user experience. To explore which evaluation methodologies might be appropriate for musical robots, this chapter first presents a brief introduction to the field of research dedicated to robotic musicianship, followed by an overview of evaluation methods used in the neighboring research fields of HCI, HRI, HAI, NIME, and CC. The chapter concludes with a review of evaluation methods used in robot musicianship literature and a discussion of prospects for future research.

  • 17.
    Goina, Maurizio
    et al.
    KTH.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Rodela, Romina
    Södertörn University.
    Our sound space (oss): an installation for participatory and interactive exploration of soundscapes. 2023. In: SMC 2023: Proceedings of the Sound and Music Computing Conference 2023, Sound and Music Computing Network, 2023, p. 255-260. Conference paper (Refereed)
    Abstract [en]

    This paper describes the development of an interactive tool that allows playing different soundscapes by mixing diverse environmental sounds on demand. The tool, titled Our Sound Space (OSS), has been developed as part of an ongoing project in which we test methods and tools for the participation of young people in spatial planning. As such, OSS is meant to offer new opportunities to engage youth in conversations about planning, placemaking, and more sustainable living environments. In this paper, we describe an implementation of OSS that we are using as an interactive soundscape installation sited in a public place visited daily by people from a variety of organizations (e.g. a university, a gymnasium, a restaurant, start-ups). The OSS installation is designed to allow the simultaneous activation of several prerecorded sounds broadcast through four loudspeakers. The installation is interactive, meaning that it can be activated and operated by anyone via a smartphone, and it is designed to allow interaction among multiple people in the same time and space.

  • 18.
    Latupeirissa, Adrian B.
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    PepperOSC: enabling interactive sonification of a robot's expressive movement. 2023. In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 17, no 4, p. 231-239. Article in journal (Refereed)
    Abstract [en]

    This paper presents the design and development of PepperOSC, an interface that connects Pepper and NAO robots with sound production tools to enable the development of interactive sonification in human-robot interaction (HRI). The interface uses Open Sound Control (OSC) messages to stream kinematic data from robots to various sound design and music production tools. The goals of PepperOSC are twofold: (i) to provide a tool for HRI researchers in developing multimodal user interfaces through sonification, and (ii) to lower the barrier for sound designers to contribute to HRI. To demonstrate the potential use of PepperOSC, this paper also presents two applications we have conducted: (i) a course project by two master's students who created a robot sound model in Pure Data, and (ii) a museum installation of a Pepper robot, employing sound models developed by a sound designer and a composer/researcher in music technology using MaxMSP and SuperCollider respectively. Furthermore, we discuss the potential use cases of PepperOSC in social robotics and artistic contexts. These applications demonstrate the versatility of PepperOSC and its ability to explore diverse aesthetic strategies for robot movement sonification, offering a promising approach to enhance the effectiveness and appeal of human-robot interactions.
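    The interface builds on the standard Open Sound Control wire format: a null-padded address pattern, a null-padded type-tag string, and big-endian arguments. As a rough sketch of that encoding (the address path below is a made-up example, not PepperOSC's actual namespace), a single joint-angle float could be packed in plain Python as:

    ```python
    import struct

    def osc_message(address: str, value: float) -> bytes:
        """Encode a one-float OSC message: null-padded address,
        null-padded type-tag string ",f", big-endian float32."""
        def pad(b: bytes) -> bytes:
            # OSC strings are null-terminated and padded to a 4-byte boundary
            return b + b"\x00" * (4 - len(b) % 4)
        return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

    # Hypothetical address: one head-yaw sample streamed from the robot
    msg = osc_message("/pepper/joint/HeadYaw", 0.25)
    ```

    Sent as a UDP datagram, such a message can be decoded by any OSC-aware tool (e.g. Pure Data, Max/MSP, SuperCollider).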

  • 19.
    Spiro, Neta
    et al.
    Centre for Performance Science, Royal College of Music, London, UK;Faculty of Medicine, Imperial College London, London, UK.
    Sanfilippo, Katie Rose M.
    Centre for Healthcare Innovation Research, School of Health and Psychological Sciences, City, University of London, UK.
    McConnell, Bonnie B.
    College of Arts and Social Sciences, Australian National University, Canberra, Australia.
    Pike-Rowney, Georgia
    Centre for Classical Studies, Australian National University, Canberra, Australia.
    Bonini Baraldi, Filippo
    Instituto de Etnomusicologia – Centro de Estudos em Música e Dança (INET-md), Faculty of Social and Human Sciences, NOVA University Lisbon, Portugal;Centre de Recherche en Ethnomusicologie (CREM-LESC), Paris Nanterre University, Nanterre, France.
    Brabec, Bernd
    Institute of Musicology, University of Innsbruck, Innsbruck, Austria.
    Van Buren, Kathleen
    Humanities in Medicine, Mayo Clinic.
    Camlin, Dave
    Department of Music Education, Royal College of Music, London, UK.
    Cardoso, Tânya Marques
    Musicoterapia (Music Therapy Undergraduate Course), Universidade Federal de Goiás (Federal University of Goiás), Goiania, Brasil (Brazil).
    Çifdalöz, Burçin Uçaner
    Ankara Haci Bayram Veli University, Ankara, Türkiye.
    Cross, Ian
    Faculty of Music, Centre for Music & Science, Cambridge, UK.
    Dumbauld, Ben
    Marimba Band, New York, New York, USA.
    Ettenberger, Mark
    Music Therapy Service Clínica Colsanitas, Bogotá, Colombia;SONO – Centro de Musicoterapia, Bogotá, Colombia.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Fouché, Sunelle
    University of Pretoria, Pretoria, South Africa;MusicWorks, Clareinch, South Africa.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. STMS Science and Technology of Music and Sound, IRCAM Institute for Research and Coordination in Acoustics/Music, Paris, France.
    Gosine, Jane
    School of Music, Memorial University, St. John's, Canada.
    Graham-Jackson, April L.
    Department of Geography, University of California, Berkeley, CA, USA.
    Grahn, Jessica A.
    Department of Psychology and Brain and Mind Institute, London, Ontario, Canada.
    Harrison, Klisala
    Department of Musicology and Dramaturgy, School of Communication and Culture, Aarhus University, Aarhus, Denmark.
    Ilari, Beatriz
    University of Southern California, Thornton School of Music, Los Angeles, CA, USA.
    Mollison, Sally
    SALMUTATIONS, Lutruwita/Tasmania, Australia.
    Morrison, Steven J.
    Henry & Leigh Bienen School of Music, Northwestern University, Evanston, IL, USA.
    Pérez-Acosta, Gabriela
    Faculty of Music, National Autonomous University of Mexico, Ciudad de Mexico, Mexico.
    Perkins, Rosie
    Centre for Performance Science, Royal College of Music, London, UK;Faculty of Medicine, Imperial College London, London, UK.
    Pitt, Jessica
    Department of Music Education, Royal College of Music, London, UK.
    Rabinowitch, Tal-Chen
    School of Creative Arts Therapies, University of Haifa, Haifa, Israel.
    Robledo, Juan-Pablo
    Millennium Institute for Care Research (MICARE), Santiago, Chile.
    Roginsky, Efrat
    School of Creative Arts Therapies, University of Haifa, Haifa, Israel.
    Shaughnessy, Caitlin
    Centre for Performance Science, Royal College of Music, London, UK;Faculty of Medicine, Imperial College London, London, UK.
    Sunderland, Naomi
    Creative Arts Research Institute and School of Health Sciences and Social Work, Griffith University, Queensland, Australia.
    Talmage, Alison
    School of Music and Centre for Brain Research, The University of Auckland – Waipapa Taumata Rau, Auckland, New Zealand.
    Tsiris, Giorgos
    Queen Margaret University, Edinburgh, UK;St Columba's Hospice Care, Edinburgh, UK.
    de Wit, Krista
    Music in Context, Hanze University of Applied Sciences, Groningen, The Netherlands.
    Perspectives on Musical Care Throughout the Life Course: Introducing the Musical Care International Network. 2023. In: Music & Science, E-ISSN 2059-2043, Vol. 6. Article in journal (Refereed)
    Abstract [en]

    In this paper we report on the inaugural meetings of the Musical Care International Network held online in 2022. The term “musical care” is defined by Spiro and Sanfilippo (2022) as “the role of music—music listening as well as music-making—in supporting any aspect of people's developmental or health needs” (pp. 2–3). Musical care takes varied forms in different cultural contexts and involves people from different disciplines and areas of expertise. Therefore, the Musical Care International Network takes an interdisciplinary and international approach and aims to better reflect the disciplinary, geographic, and cultural diversity relevant to musical care. Forty-two delegates participated in 5 inaugural meetings over 2 days, representing 24 countries and numerous disciplines and areas of practice. Based on the meetings, the aims of this paper are to (1) better understand the diverse practices, applications, contexts, and impacts of musical care around the globe and (2) introduce the Musical Care International Network. Transcriptions of the recordings, alongside notes taken by the hosts, were used to summarise the conversations. The discussions developed ideas in three areas: (a) musical care as context-dependent and social, (b) musical care's position within the broader research and practice context, and (c) debates about the impact of and evidence for musical care. We can conclude that musical care refers to context-dependent and social phenomena. The term musical care was seen as useful in talking across boundaries while not minimizing individual disciplinary and professional expertise. The use of the term was seen to help balance the importance and place of multiple disciplines, with a role to play in the development of a collective identity. This collective identity was seen as important in advocacy and in helping to shape policy. The paper closes with proposed future directions for the network and its emerging mission statement.

  • 20.
    Zojaji, Sahba
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Peters, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Persuasive polite robots in free-standing conversational groups. 2023. In: Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    Politeness is at the core of the common set of behavioral norms that regulate human communication and is therefore of significant interest in the design of human-robot interactions. In this paper, we investigate how the politeness behaviors of a humanoid robot impact human decisions about where to join a group of two robots. We also evaluate the resulting impact on the perception of the robot's politeness. In a study involving 59 participants, the main (Pepper) robot in the group invited participants to join using six politeness behaviors derived from Brown and Levinson's politeness theory. It requested that participants join the group at its furthest side, which involves more effort to reach than a closer side that was also available to the participant but would ignore the robot's request. We evaluated the robot's effectiveness in terms of persuasiveness, politeness, and clarity. We found that the more direct and explicit politeness strategies derived from the theory had a higher rate of success in persuading participants to join at the furthest side of the group. We also evaluated participants' adherence to social norms, i.e., not walking through the center, or "o-space", of the group when joining it. Our results showed that participants tended to adhere to social norms when joining at the furthest side by not walking through the center of the group of robots, even though they were informed that the robots were fully automated.

  • 21. Atienza, Ricardo
    et al.
    Lindetorp, Hans
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Playing the design: Creating soundscapes through playful interaction. 2023. In: SMC 2023 - Proceedings of the Sound and Music Computing Conference 2023, Sound and Music Computing Network, 2023, p. 362-369. Conference paper (Refereed)
    Abstract [en]

    This study takes inspiration from provocative design methods to gain knowledge about sound preferences for future vehicles' designed sounds. A particular population subset was a triggering component of this study: people with hearing impairments. To that end, we developed a public installation that stages a hypothetical futuristic city square. It includes three electric vehicles whose sounds can be designed by the visitor. The interface allows the user to interact and play with a number of provided sonic textures within a real-time web application, thus "playing" the design. This opens a design space of three distinct sounds that are mixed into an overall soundscape presented in a multichannel immersive environment. The paper describes the design processes involved.

  • 22.
    Latupeirissa, Adrian Benigno
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification. 2023. In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522. Article in journal (Refereed)
    Abstract [en]

    This paper presents three studies where we probe aesthetics strategies of sound produced by movement sonification of a Pepper robot by mapping its movements to sound models.

    We developed two sets of sound models. The first set consisted of two sound models, one sawtooth-based and one based on feedback chains, used to investigate how the perception of synthesized robot sounds depends on design complexity. We implemented the second set of sound models to probe the "materiality" of sound made by a robot in motion. This set consisted of an engine sound synthesis highlighting the robot's internal mechanisms, a metallic sound synthesis highlighting the robot's typical appearance, and a whoosh sound synthesis highlighting the movement.

    We conducted three studies. The first study explores how the first set of sound models can influence the perception of expressive gestures of a Pepper robot through an online survey. In the second study, we carried out an experiment in a museum installation with a Pepper robot presented in two scenarios: (1) while welcoming patrons into a restaurant and (2) while providing information to visitors in a shopping center. Finally, in the third study, we conducted an online survey with stimuli similar to those used in the second study.

    Our findings suggest that participants preferred more complex sound models for the sonification of robot movements. Concerning materiality, participants preferred subtle sounds that blend well with the ambient sound (i.e., are less distracting) and soundscapes in which sound sources can be identified. Sound preferences also varied depending on the context in which participants experienced the robot-generated sounds (e.g., a live museum installation vs. an online display).

  • 23. McHugh, Laura
    et al.
    Wu, Chih-Wei
    Xu, Xuanling
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Salient sights and sounds: Comparing visual and auditory stimuli remembrance using Audio Set and sonic mapping. 2023. In: Proceedings of the Sound and Music Computing Conference, 2023. Conference paper (Refereed)
  • 24.
    McHugh, Laura
    et al.
    KTH.
    Wu, Chih Wei
    KTH.
    Xu, Xuanling
    KTH.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Salient sights and sounds: comparing visual and auditory stimuli remembrance using audio set ontology and sonic mapping. 2023. In: SMC 2023: Proceedings of the Sound and Music Computing Conference 2023, Sound and Music Computing Network, 2023, p. 426-432. Conference paper (Refereed)
    Abstract [en]

    In this study, we explore how store customers recall their perceptual experience, focusing on comparing the remembrance of auditory and visual stimuli. The study used a novel mixed-methods approach involving Deep Hanging Out, a field study, and interviews, including the drawing of sonic mind maps. The collected data were analysed using thematic analysis, sound classification with the Audio Set ontology, counts of the different auditory and visual elements attended to in the store, and ratings of the richness of the descriptions. The results showed that sights were more salient than sounds and that participants recalled music more frequently than the Deep Hanging Out observations indicated, but remembered fewer varieties of sounds in general.

  • 25.
    Núñez-Pacheco, Claudia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. STMS IRCAM.
    Sharing Earthquake Narratives: Making Space for Others in our Autobiographical Design Process. 2023. In: CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems / [ed] Albrecht Schmidt, Kaisa Väänänen, Tesh Goyal, Per Ola Kristensson, Anicia Peters, Stefanie Mueller, Julie R. Williamson, Max L. Wilson, New York, NY, United States, 2023, article id 685. Conference paper (Refereed)
    Abstract [en]

    As interaction designers are venturing to design for others based on autobiographical experiences, it becomes particularly relevant to critically distinguish the designer's voice from others' experiences. However, few reports go into detail about how self and others mutually shape the design process and how to incorporate external evaluation into these designs. We describe a one-year process involving the design and evaluation of a prototype combining haptics and storytelling, aiming to materialise and share somatic memories of earthquakes experienced by a designer and her partner. We contribute three strategies for bringing others into our autobiographical processes, avoiding the dilution of first-person voices while critically addressing design flaws that might hinder the representation of our stories.

  • 26.
    Sköld, Mattias
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Kungl. Musikhögskolan, Institutionen för komposition, dirigering och musikteori.
    Sound Notation: The visual representation of sound for composition and analysis. 2023. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This compilation thesis details steps taken to develop and evaluate a new Sound Notation system for composition, analysis, and transcription with the capacity to describe all types of sound. Ideas from electroacoustic music analysis are combined with traditional notation to form a hybrid system. In this notation, all symbols are related to physical qualities in the sound, so that a person or a computer can identify the symbols from their sonification or musical interpretation.

    Pierre Schaeffer identified early on musique concrète's lack of theory and vocabulary as a major problem for its integration with music theory and musicology. Schaeffer, Denis Smalley, and later Lasse Thoresen went a long way toward providing the genre of electroacoustic music with classification, terminology, and graphical symbols for the benefit of its study. But if we are to think of music as a language, it becomes apparent that the lack of a shared inter-subjective notation system is a problem. Such a notation system would provide sound-based music with possibilities previously afforded only to music based on pitch structures, including transcription and re-interpretation of musical works, notation-based ear training, and (computer-aided) composition.

  • 27.
    Orthmann, Bastian
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Sounding Robots: Design and Evaluation of Auditory Displays for Unintentional Human-robot Interaction. 2023. In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no 4, article id 49. Article in journal (Refereed)
    Abstract [en]

    Non-verbal communication is important in HRI, particularly when humans and robots do not need to actively engage in a task together, but rather they co-exist in a shared space. Robots might still need to communicate states such as urgency or availability, and where they intend to go, to avoid collisions and disruptions. Sounds could be used to communicate such states and intentions in an intuitive and non-disruptive way. Here, we propose a multi-layer classification system for displaying various robot information simultaneously via sound. We first conceptualise which robot features could be displayed (robot size, speed, availability for interaction, urgency, and directionality); we then map them to a set of audio parameters. The designed sounds were then evaluated in five online studies, where people listened to the sounds and were asked to identify the associated robot features. The sounds were generally understood as intended by participants, especially when they were evaluated one feature at a time, and partially when they were evaluated two features simultaneously. The results of these evaluations suggest that sounds can be successfully used to communicate robot states and intended actions implicitly and intuitively.
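    The feature-to-parameter mapping described in the abstract can be pictured as a simple parametric function; the ranges below are illustrative assumptions, not the parameter set used by the authors:

    ```python
    def robot_sound_params(size: float, speed: float, urgency: float) -> dict:
        """Hypothetical feature-to-sound mapping (all inputs normalized to 0..1):
        larger robot -> lower pitch, faster movement or higher urgency -> faster
        pulse, higher urgency -> brighter timbre."""
        return {
            "pitch_hz": 440.0 * 2 ** (-2.0 * size),       # 440 Hz down to 110 Hz
            "pulse_hz": 1.0 + 7.0 * max(speed, urgency),  # 1 to 8 pulses per second
            "brightness": 0.2 + 0.8 * urgency,            # filter opening, 0.2 to 1.0
        }

    # A large, moderately fast, non-urgent robot
    params = robot_sound_params(size=1.0, speed=0.5, urgency=0.0)
    ```

    A listener study like the ones reported would then test whether such mappings are decoded as intended, one feature at a time or in combination.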

  • 28.
    Favero, Federico
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Architecture, Lighting Design.
    Lowden, Arne
    Stress Research Institute at the Department of Psychology, Stockholm University.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Ejhed, Jan
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Study of the Effects of Daylighting and Artificial Lighting at 59° Latitude on Mental States, Behaviour and Perception. 2023. In: Sustainability, E-ISSN 2071-1050, Vol. 15, no 2, article id 1144. Article in journal (Refereed)
    Abstract [en]

    Although there is a documented preference for daylighting over artificial electric lighting indoors, there are comparatively few investigations of behaviour and perception in indoor day-lit spaces at high latitudes during winter. We report a pilot study designed to examine the effects of static artificial lighting conditions (ALC) and dynamic daylighting conditions (DLC) on the behaviour and perception of two groups of participants. Each group (n = 9 for ALC and n = 8 for DLC) experienced one of the two conditions for three consecutive days, from sunrise to sunset. The main results of this study show the following: indoor light exposure in February in Stockholm can be maintained above 1000 lx with daylight alone for most of the working day, a value similar to outdoor workers' exposure in Scandinavia; these values can exceed the recommended Melanopic Equivalent Daylight Illuminance threshold; and this exposure reduces sleepiness and increases the amount of activity compared to a static artificial lighting condition. Mood and the feeling of time passing are also affected, but we do not know exactly by which variable: personal or group dynamics, the view, or the variation in lighting exposure. The small sample size does not support inferential statistics; however, the observed effects might be large enough to be of importance in practice. From a sustainability point of view, daylighting can benefit energy-saving strategies and well-being, even in the Scandinavian winter.

  • 29.
    Panariello, Claudio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    SuperOM: a SuperCollider class to generate music scores in OpenMusic. 2023. In: Proceedings of the 8th International Conference on Technologies for Music Notation and Representation (TENOR) / [ed] Anthony Paul De Ritis, Victor Zappi, Jeremy Van Buskirk and John Mallia, Boston, MA, USA: Northeastern University Library, 2023, p. 68-75. Conference paper (Refereed)
    Abstract [en]

    This paper introduces SuperOM, a class built for the software SuperCollider in order to create a bridge to OpenMusic and thus facilitate the creation of musical scores from SuperCollider patches. SuperOM is primarily intended to be used as a tool for SuperCollider users who make use of assisted composition techniques and want the output of such processes to be captured through automatic notation transcription. This paper first presents an overview of existing transcription tools for SuperCollider, followed by a detailed description of SuperOM and its implementation, as well as examples of how it can be used in practice. Finally, a case study in which the transcription tool was used as an assistive composition tool to generate the score of a sonification, which was later turned into a piano piece, is discussed.

  • 30.
    Maranhao, Tiago
    et al.
    KTH.
    Berrez, Philip
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Kihl, Martin
    KTH.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    What is the color of choro?: Color preferences for an instrumental Brazilian popular music genre. 2023. In: SMC 2023: Proceedings of the Sound and Music Computing Conference 2023, Sound and Music Computing Network, 2023, p. 370-376. Conference paper (Refereed)
    Abstract [en]

    This project explores how a synesthetic experience relating music perception to color association varies across cultures, and whether music with more energetic expression elicits richer color responses. A total of 206 participants took part in a survey using a customized web page. The participants listened to excerpts of Brazilian music in the genre Choro and chose one or more colors that best matched the music. The music excerpts were chosen based on their portrayal of the emotions joy, tenderness, and sorrow. The results showed differences in color preferences for each emotional expression studied across different groups. Furthermore, a correlation between the subjective intensity of the excerpt (considering that, in terms of intensity, Joy > Tender > Sorrow) and the variety of colors chosen by the participants was observed. In general, the results support previous research in this field, with happiness or joy often correlated to the color yellow and sorrow to the color blue. For the excerpts that portrayed tenderness, most participants chose the color yellow, although non-Brazilians also chose green. Due to the limits of the study, the results are not conclusive. More research is needed to better understand the impact of utilizing color combinations rather than single colors to match music or emotional expressions.

  • 31.
    Panariello, Claudio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Percivati, Chiara
    University of Antwerp, AP Hogeschool Antwerpen, Antwerp, Belgium.
    “WYPYM”: A Study for Feedback-Augmented Bass Clarinet. 2023. Conference paper (Refereed)
  • 32.
    Favero, Federico
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Architecture, Lighting Design.
    Besenecker, Ute
    KTH, School of Architecture and the Built Environment (ABE), Architecture, Lighting Design.
    Artificial light(ing) or electric light(ing)? 2022. In: The 8th International Light Symposium: Re-thinking Lighting Design in a Sustainable Future, Copenhagen, Denmark / [ed] IOP, Bristol: Institute of Physics Publishing (IOPP), 2022, Vol. 1099, p. 1-11. Conference paper (Refereed)
    Abstract [en]

    Researchers and designers use the words "artificial" or "electric" to describe lighting products, design, or research-related practices, and there appear to be differing opinions about which is the more appropriate term. Generally, there are challenges with a common use of language and vocabulary in interdisciplinary research, and this might also be valid for design and research in lighting design across different disciplines. The authors were educated in opposing practices of using "electric" lighting vs "artificial" lighting; this started a discussion and the conceptualization of this article. Through a literature review and a survey, the paper explores, summarizes, and discusses the concepts described and conveyed by both terms in different disciplines. Interestingly, we found differences among and between disciplines and professional backgrounds. This might indicate that education and nomenclature in the field influence the use of terms. We found a tendency to refer to light sources either in terms of the energy used to generate the light, e.g. electric light or gaslight, or in terms of the effect it evokes, e.g. candlelight being described as natural. A common lighting glossary could be developed through continuous discussion and studies. As today's complex questions are discussed in interdisciplinary teams, a common language might promote effective communication and stimulate sustainable solutions.

  • 33.
    van den Broek, Gion
    et al.
    Eindhoven University of Technology.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Concurrent sonification of different percentage values: the case of database values about statistics of employee engagement2022In: Proceedings of ISon 2022, 7th Interactive Sonification Workshop, BSCC, University of Bremen, Germany, September 22–23, 2022, 2022Conference paper (Refereed)
    Abstract [en]

    The quality of employee engagement at work is an important factor that can have effects on health, give indications of the quality of leadership, and save costs for companies. The Gallup firm has defined three categories of employees that every organization in the world has: Engaged, Not engaged, and Actively disengaged. Data collected by Gallup from about 155,000 interviews across 155 countries show that only 15% of employees worldwide are engaged in their job, 67% are not engaged, and 18% are actively disengaged. This large amount of data provides the context for reflecting on workplace conditions and engagement at work across global regions. In this paper we present a study in which we use interactive sonification strategies for representing the above three employee categories in order to explore, understand, and reflect on workplace conditions. For the sound design we applied principles of communication of emotional expression in music performance. By leveraging the strong emotional component offered by expressive interactive sonification, it was possible to create sonifications which could help participants in an experiment identify the three different employee categories and design the soundscape of their workplace.

  • 34.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. IRCAM, STMS Sci & Technol Mus & Son UMR9912, 1 Pl Igor Stravinsky, F-75004 Paris, France..
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Núñez-Pacheco, Claudia
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Customizing and Evaluating Accessible Multisensory Music Experiences with Pre-Verbal Children: A Case Study on the Perception of Musical Haptics Using Participatory Design with Proxies2022In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 6, no 7, article id 55Article in journal (Refereed)
    Abstract [en]

    Research on Accessible Digital Musical Instruments (ADMIs) has highlighted the need for participatory design methods, i.e., to actively include users as co-designers and informants in the design process. However, very little work has explored how pre-verbal children with Profound and Multiple Learning Disabilities (PMLD) can be involved in such processes. In this paper, we apply in-depth qualitative and mixed methodologies in a case study with four students with PMLD. Using Participatory Design with Proxies (PDwP), we assess how these students can be involved in the customization and evaluation of the design of a multisensory music experience intended for a large-scale ADMI. Results from an experiment focused on communication of musical haptics highlighted the diversity of interaction strategies employed by the children, accessibility limitations of the current multisensory experience design, and the importance of using a multifaceted variety of qualitative and quantitative methods to arrive at more informed conclusions when applying a design-with-proxies methodology.

  • 35.
    Lindetorp, Hans
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Department of Music Production, Royal College of Music, Stockholm, Sweden.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Evaluating Web Audio for learning, accessibility and distribution2022In: Journal of The Audio Engineering Society, ISSN 1549-4950, Vol. 70, no 11, p. 962-978Article in journal (Refereed)
    Abstract [en]

    Web Audio has great potential for interactive audio content; its open standard and easy integration with other web-based tools make it particularly interesting. Earlier studies identified obstacles for students in materializing creative ideas through programming: focus shifted from artistic ambition to solving technical issues. This study builds upon 20 years of experience from teaching sound and music computing and evaluates how Web Audio contributes to the learning experience. Data were collected from different student projects through analysis of source code, reflective texts, group discussions, and online self-evaluation forms. The results indicate that Web Audio serves well as a learning platform and that an XML abstraction of the API helped the students stay focused on the artistic output. It is also concluded that an online tool can reduce the time for getting started with Web Audio to less than 1 h. Although many obstacles have been successfully removed, the authors argue that there is still great potential for new online tools targeting audio application development, in which the accessibility and sharing features contribute to an even better learning experience.

  • 36.
    Larson Holmgren, David
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Särnell, Adam
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Facilitating reflection on climate change using interactive sonification2022In: Proceedings of ISon 2022, 7th Interactive Sonification Workshop, BSCC, University of Bremen, Germany, September 22–23, 2022, 2022Conference paper (Refereed)
    Abstract [en]

    This study explores the possibility of using musical soundscapes to facilitate reflection on the impacts of climate change. By sonifying historic and future climate data, an interactive timeline was created where the user can explore a soundscape changing in time. A prototype was developed and tested in a user study with 15 participants. Results indicate that the prototype successfully elicits the emotions that it was designed to communicate and that it does influence the participants’ reflections. However, it remains uncertain how much the prototype actually helped them while reflecting.

  • 37.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. STMS IRCAM.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Haptic Music Players for Children with Profound and Multiple Learning Disabilities (PMLD): Exploring Different Modes of Interaction for Felt Sound2022In: Proceedings of the 24th International Congress on Acoustics (ICA2022): A10 -05 Physiological Acoustics - Multi-modal solutions to enhance hearing / [ed] Jeremy Marozeau, Sebastian Merchel, Gyeongju, South Korea: Acoustic Society of Korea , 2022, article id ABS-0021Conference paper (Refereed)
    Abstract [en]

    This paper presents a six-month exploratory case study on the evaluation of three Haptic Music Players (HMPs) with four pre-verbal children with Profound and Multiple Learning Disabilities (PMLD). The evaluated HMPs were 1) a commercially available haptic pillow, 2) a haptic device embedded in a modified plush-toy backpack, and 3) a custom-built plush toy with a built-in speaker and tactile shaker. We evaluated the HMPs through qualitative interviews with a teacher who served as a proxy for the pre-verbal children participating in the study; the teacher augmented the students' communication by reporting observations from each test session. The interviews explored functionality, accessibility, and user experience aspects of each HMP and revealed significant differences between the devices. Our findings highlighted the influence of physical affordances provided by the HMP designs and the importance of a playful design in this context. Results suggested that sufficient time should be allocated to HMP familiarization prior to any evaluation procedure, since experiencing musical haptics through objects is a novel experience that might require some time to get used to. We discuss design considerations for Haptic Music Players and provide suggestions for future developments of multimodal systems dedicated to enhancing music listening in special education settings.

  • 38.
    Sköld, Mattias
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Kungl. Musikhögskolan, Institutionen för komposition, dirigering och musikteori.
    Notation as visual representation of sound-based music2022In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 51, no 2-3, p. 186-202Article in journal (Refereed)
    Abstract [en]

    This text describes the musical evaluation of a hybrid music notation system that combines traditional notation with symbols and concepts from spectromorphological analysis. During three academic years from 2017 to 2020, three groups of composition students learned to work with sound notation, recreating and interpreting short electroacoustic music sketches based solely on their notation transcriptions – they had not heard the original sketches. The students' score interpretations bore obvious similarities to the original music sketches, and their written reflections showed no major difficulties in understanding the notation, although some difficulties remained in finding suitable sounds, especially sounds with stable pitch.

  • 39.
    Snarberg, Hanna
    et al.
    KTH.
    Pantigoso Velasquez, Ävelin
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Johansson, Stefan
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Begripsam AB, KTH Royal Institute of Technology (SWEDEN).
    Preparing for the future for all: The state of accessibility education at technical universities2022In: EDULEARN22 Proceedings, IATED , 2022, p. 7799-7805Conference paper (Refereed)
  • 40.
    Stojanovski, Todor
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Zhang, Hui
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electric Power and Energy Systems.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Chhatre, Kiran
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Peters, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Samuels, Ivor
    Univ Birmingham, Urban Morphol Res Grp, Birmingham, W Midlands, England..
    Sanders, Paul
    Deakin Univ, Melbourne, Vic, Australia..
    Partanen, Jenni
    Tallinn Univ Technol, Tallinn, Estonia..
    Lefosse, Deborah
    Sapienza, Rome, Italy..
    Rethinking Computer-Aided Architectural Design (CAAD) - From Generative Algorithms and Architectural Intelligence to Environmental Design and Ambient Intelligence2022In: Computer-Aided Architectural Design: Design Imperatives: The Future Is Now / [ed] Gerber, D Pantazis, E Bogosian, B Nahmad, A Miltiadis, C, Springer Nature , 2022, Vol. 1465, p. 62-83Conference paper (Refereed)
    Abstract [en]

    Computer-Aided Architectural Design (CAAD) finds its historical precedents in technological enthusiasm for generative algorithms and architectural intelligence. Current developments in Artificial Intelligence (AI) and paradigms in Machine Learning (ML) bring new opportunities for creating innovative digital architectural tools, but in practice this is not happening. CAAD enthusiasts revisit generative algorithms, while professional architects and urban designers remain reluctant to use software that automatically generates architecture and cities. This paper looks at the history of CAAD and digital tools for Computer-Aided Design (CAD), Building Information Modeling (BIM) and Geographic Information Systems (GIS) in order to reflect on the role of AI in future digital tools and professional practices. Architects and urban designers have diagrammatic knowledge and work with design problems on a symbolic level. Digital tools gradually evolved from CAD to BIM software with symbolic architectural elements. BIM software works like CAAD (CAD systems for architects) or a digital drawing board and delivers plans, sections and elevations, but without AI. AI has the capability to process data and interact with designers. The AI in future digital tools for CAAD and Computer-Aided Urban Design (CAUD) can link to big data and develop ambient intelligence. Architects and urban designers can harness the benefits of analytical, ambient-intelligent AIs in creating environmental designs, not only for shaping buildings in isolated virtual cubicles. However, there is a need to prepare frameworks for communication between AIs and professional designers. If the cities of the future are to integrate spatially analytical AI and be made smart or even ambient intelligent, AI should be applied to improving the lives of inhabitants and helping with their daily living and sustainability.

  • 41.
    Sköld, Mattias
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. KMH Royal College of Music in Stockholm.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Sonification of Complex Spectral Structures2022In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 16Article, review/survey (Refereed)
    Abstract [en]

    In this article, we present our work on the sonification of notated complex spectral structures. It is part of a larger research project about the design of a new notation system for representing sound-based musical structures. Complex spectral structures are notated with special symbols in the scores, which can be digitally rendered so that the user can hear key aspects of what has been notated. This hearing of the notated data is significantly different from reading the same data, and reveals the complexity hidden in its simplified notation. The digitally played score is not the music itself but can provide essential information about the music in ways that can only be obtained in sounding form. The playback needs to be designed so that the user can make relevant sonic readings of the sonified data. The sound notation system used here is an adaptation of Thoresen and Hedman's spectromorphological analysis notation. Symbols originally developed by Lasse Thoresen from Pierre Schaeffer's typo-morphology have in this system been adapted to display measurable spectral features of timbral structure for the composition and transcription of sound-based musical structures. Spectrum category symbols are placed over a spectral grand staff that combines indications of pitch and frequency values for the combined display of pitch-based and spectral values. Spectral features of a musical structure such as spectral width and density are represented as graphical symbols and sonically rendered. In perceptual experiments we have verified that users can identify spectral notation parameters based on their sonification. This confirms the main principle of sonification, namely that data relations in one domain, in our case the notated representation of spectral features, are transformed into perceived relations in the audio domain, and back.

  • 42.
    Panariello, Claudio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Sonification of Computer Processes: The Cases of Computer Shutdown and Idle Mode2022In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 16, article id 862663Article in journal (Refereed)
    Abstract [en]

    Software is intangible, invisible, and at the same time pervasive in the everyday devices, activities, and services accompanying our lives. Therefore, citizens hardly realize its complexity, power, and impact on many aspects of their daily life. In this study, we report on one experiment that aims at letting citizens make sense of software presence and activity in their everyday lives, through sound: the invisible complexity of the processes involved in the shutdown of a personal computer. We used sonification to map information embedded in software events into the sound domain. The software events involved in a shutdown have names related to the physical world and its actions: write events (information is saved into digital memories), kill events (running processes are terminated), and exit events (running programs are exited). The research study presented in this article has a "double character": it is an artistic realization that develops specific aesthetic choices, and it also has a pedagogical purpose, informing the casual listener about the complexity of software behavior. Two different sound design strategies were applied: one influenced by the sonic characteristics of the Glitch music scene, which makes deliberate use of glitch-based sound materials, distortions, aliasing, quantization noise, and all the "failures" of digital technologies; and a second based on sound samples of a subcontrabass Paetzold recorder, an unusual and special acoustic instrument whose unique sound has been investigated in the contemporary art music scene. Analysis of quantitative ratings and qualitative comments from 37 participants revealed that the sound design strategies succeeded in communicating the nature of the computer processes. Participants also showed, in general, an appreciation of the aesthetics of the peculiar sound models used in this study.

  • 43. Kantan, Prithvi Ravi
    et al.
    Dahl, Sofia
    Spaich, Erika G
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Sonifying Walking: A Perceptual Comparison of Swing Phase Mapping Schemes2022In: Proceedings of ISon 2022, 7th Interactive Sonification Workshop, BSCC, University of Bremen, Germany, September 22–23, 2022, 2022Conference paper (Refereed)
    Abstract [en]

    Past research on the interactive sonification of footsteps has shown that the signal properties of digitally generated or processed footstep sounds can affect the perceived congruence between sensory channel inputs, leading to measurable changes in gait characteristics. In this study, we designed musical and nonmusical swing phase sonification schemes with signal characteristics corresponding to high and low ‘energy’ timbres (in terms of the levels of physical exertion and arousal they expressed), and assessed their perceived arousal, valence, intrusiveness, and congruence with fast (5 km/h) and slow (1.5 km/h) walking. In a web-based perceptual test with 52 participants, we found that the nonmusical high-energy scheme received higher arousal ratings, and the musical equivalent received more positive valence ratings, than the respective low-energy counterparts. All schemes received more positive arousal and valence ratings when applied to fast walking data than to slow walking data. Differences in perceived movement-sound congruence among the schemes were more evident for slow walking than for fast walking. Lastly, the musical schemes were rated as less intrusive to listen to, for both slow and fast walking, than their nonmusical counterparts. With some modifications, the designed schemes will be used during walking to assess their effects on gait qualities.

  • 44. Misdariis, N.
    et al.
    Özcan, E.
    Grassi, M.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Barrass, S.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Susini, P.
    Sound experts’ perspectives on astronomy sonification projects2022In: Nature Astronomy, E-ISSN 2397-3366, Vol. 6, no 11, p. 1249-1255Article in journal (Refereed)
    Abstract [en]

    The Audible Universe project aims to create dialogue between two scientific domains investigating two distinct research objects: stars and sound. It was instantiated within a collaborative workshop that began to mutually acculturate the two communities by sharing and transmitting their respective knowledge, skills and practices. One main outcome of this exchange was a global view of the astronomical data sonification paradigm, observing the diversity of tools, uses and users (including visually impaired people), but also the current limitations and potential methods of improvement. From this viewpoint, here we present basic elements gathered and contextualized by sound experts in their respective fields (sound perception/cognition, sound design, psychoacoustics, experimental psychology), to anchor sonification for astronomy in a better-informed, methodological and creative process.

  • 45. Ljungdahl Eriksson, Martin
    et al.
    Otterbring, Tobias
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Sounds and Satisfaction: A Novel Conceptualization of the Soundscape in Sales and Service Settings2022In: Proceedings of the Nordic Retail and Wholesale Conference, 2022Conference paper (Refereed)
  • 46.
    Sköld, Mattias
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Kungl. Musikhögskolan, Institutionen för komposition, dirigering och musikteori.
    The Visual Representation of Timbre2022In: Organised Sound, ISSN 1355-7718, E-ISSN 1469-8153, Vol. 27, no 3, p. 387-400Article in journal (Refereed)
    Abstract [en]

    This text deals with the difficult task of notating timbre by addressing how timbre can be classified, synthesised, recognised and related to visual correspondences, and then looking at the relevance of these topics for notational purposes. Timbre is understood as dependent on both spectral and time-dependent features that can be notated in ways that make sense in relation to both perception and acoustics. This is achieved by taking Lasse Thoresen's spectromorphological analysis as a starting point. Symbols originally developed for perception-based analysis are adapted for use over a hybrid spectrum-staff system to indicate the spectral qualities of timbre. To test the system, it was used to transcribe excerpts of three classic electroacoustic music works. In addition to the benefit of being able to compare the three excerpts transcribed with the same system, there is the advantage that the visual representation is based on measurable spectral qualities in the music. The notation system's intuitiveness was also explored in listening tests, showing that it was possible to understand spectral notation symbols placed over a staff system, particularly for examples with two sound objects instead of one.

  • 47.
    Misgeld, Olof
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. KMH Royal College of Music.
    Gulz, Torbjörn
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. KMH Royal College of Music.
    Holzapfel, Andre
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Miniotaitė, Jūra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    A case study of deep enculturation and sensorimotor synchronization to real music2021In: Proceedings of the 22nd International Conference on Music Information Retrieval, ISMIR 2021, International Society for Music Information Retrieval, 2021, p. 460-467Conference paper (Refereed)
    Abstract [en]

    Synchronization of movement to music is a behavioural capacity that separates humans from most other species. Whereas such movements have been studied using a wide range of methods, only a few studies have investigated synchronization to real music stimuli in a cross-culturally comparative setting. The present study employs beat tracking evaluation metrics and accent histograms to analyze the differences in the ways participants from two cultural groups synchronize their tapping with either familiar or unfamiliar music stimuli. Instead of choosing two apparently remote cultural groups, we selected two groups of musicians that share cultural backgrounds but differ regarding the music style they specialize in. The employed method of recording tapping responses in audio format facilitates a fine-grained analysis of the metrical accents that emerge from the responses. The identified differences between the groups are related to the metrical structures inherent to the two musical styles, such as non-isochronicity of the beat, and document the influence of the participants' deep enculturation in their style of expertise. Besides these findings, our study sheds light on a conceptual weakness of a common beat tracking evaluation metric when applied to human tapping instead of machine-generated beat estimations.

  • 48.
    Lindetorp, Hans
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Kungl. Musikhögskolan, Institutionen för musik- och medieproduktion.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Audio Parameter Mapping Made Explicit Using WebAudioXML2021In: Proceedings of the Sound and Music Computing Conference / [ed] Sound and Music Computing Conference, Torino, 2021Conference paper (Refereed)
    Abstract [en]

    Sonification using audio parameter mapping involves both aesthetic and technical challenges and requires high-level interdisciplinary skills to produce a successful result. With the aim of lowering the barrier for students to enter the field of sonification, we developed and presented WebAudioXML at SMC2020. Since then, more than 40 student projects have successfully proven that the technology is highly beneficial for non-programmers learning how to create interactive web audio applications. With this study, we present new features for WebAudioXML that make advanced audio parameter mapping, data interpolation and value conversion more accessible and easier to assess. Three student projects act as a basis for the syntax definition, and using an annotated portfolio and video-recorded interviews with experts from the sound and music computing community, we present important insights from the project. The participants contributed critical feedback and questions that helped us better understand the strengths and weaknesses of the proposed syntax. We conclude that the technology is robust and useful, and present new ideas that emerged from this study.

  • 49.
    Falkenberg, Kjetil
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Ljungdahl Eriksson, Martin
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Otterbring, Tobias
    Daunfeldt, Sven-Olov
    Auditory notification of customer actions in a virtual retail environment: Sound design, awareness and attention2021In: Proceedings of International Conference on Auditory Displays ICAD 2021, 2021Conference paper (Refereed)
  • 50.
    Anindita, Puspita Parahita
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Design Approaches to Alert Sounds for Interactions in Shops2021In: Nordic Sound and Music Computing Conference, Zenodo , 2021Conference paper (Refereed)