  • 1.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Favero, Federico
    KTH, School of Architecture and the Built Environment (ABE).
    Annersten, Lars
    Musikverket.
    Berner, David
    Musikverket.
    Morreale, Fabio
    Queen Mary University of London.
    Sound Forest/Ljudskogen: A Large-Scale String-Based Interactive Musical Instrument (2016). In: Sound and Music Computing 2016, SMC Sound & Music Computing Network, 2016, p. 79-84. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a string-based, interactive, large-scale installation for a new museum dedicated to performing arts, Scenkonstmuseet, which will be inaugurated in 2017 in Stockholm, Sweden. The installation will occupy an entire room that measures 10x5 meters. We aim to create a digital musical instrument (DMI) that facilitates intuitive musical interaction, thereby enabling visitors to quickly start creating music either alone or together. The interface should be able to serve as a pedagogical tool; visitors should be able to learn about concepts related to music and music making by interacting with the DMI. Since the lifespan of the installation will be approximately five years, one main concern is to create an experience that will encourage visitors to return to the museum for continued instrument exploration. In other words, the DMI should be designed to facilitate long-term engagement. Finally, an important aspect in the design of the installation is that the DMI should be accessible and provide a rich experience for all museum visitors, regardless of age or abilities.

  • 2.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Accessible Digital Musical Instruments: A Review of Musical Interfaces in Inclusive Music Practice (2019). In: Multimodal Technologies and Interaction, E-ISSN 2414-4088, Vol. 3, no. 3, article id 57. Article in journal (Refereed)
    Abstract [en]

    Current advancements in music technology enable the creation of customized Digital Musical Instruments (DMIs). This paper presents a systematic review of Accessible Digital Musical Instruments (ADMIs) in inclusive music practice. The history of research concerned with facilitating inclusion in music-making is outlined, and the current state of developments and trends in the field is discussed. Although the use of music technology in music therapy contexts has attracted more attention in recent years, the topic has been relatively unexplored in Computer Music literature. This review investigates a total of 113 publications focusing on ADMIs. Based on the 83 instruments in this dataset, ten control interface types were identified: tangible controllers, touchless controllers, Brain–Computer Music Interfaces (BCMIs), adapted instruments, wearable controllers or prosthetic devices, mouth-operated controllers, audio controllers, gaze controllers, touchscreen controllers and mouse-controlled interfaces. The majority of the ADMIs were tangible or physical controllers. Although the haptic modality could potentially play an important role in musical interaction for many user groups, relatively few of the ADMIs (15.6%) incorporated vibrotactile feedback. Aspects judged to be important for successful ADMI design were instrument adaptability and customization, user participation, iterative prototyping, and interdisciplinary development teams.

  • 3. Frid, Emma
    Accessible Digital Musical Instruments: A Survey of Inclusive Instruments Presented at the NIME, SMC and ICMC Conferences (2018). In: Proceedings of the International Computer Music Conference 2018: Daegu, South Korea / [ed] Tae Hong Park, Doo-Jin Ahn, San Francisco: The International Computer Music Association, 2018, p. 53-59. Conference paper (Refereed)
    Abstract [en]

    This paper describes a survey of accessible Digital Musical Instruments (ADMIs) presented at the NIME, SMC and ICMC conferences. It outlines the history of research concerned with facilitating inclusion in music making and discusses advances, current state of developments and trends in the field. Based on a systematic analysis of DMIs presented at the three conferences, seven control interface types could be identified: tangible, nontangible, audio, touch-screen, gaze, BCMIs and adapted instruments. Most of the ADMIs were tangible interfaces or physical controllers. Many of the instruments were designed for persons with physical disabilities or children with health conditions or impairments. Little attention was paid to DMIs for blind users. Although the haptic modality could play an important role in musical interaction in this context, relatively few of the ADMIs (26.7%) incorporated vibrotactile feedback. A discussion on future directions for inclusive design of DMIs is presented.

  • 4.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Diverse Sounds: Enabling Inclusive Sonic Interaction (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This compilation thesis collects a series of publications on designing sonic interactions for diversity and inclusion. The presented papers focus on case studies in which musical interfaces were either developed or reviewed. While the described studies are substantially different in their nature, they all contribute to the thesis by providing reflections on how musical interfaces could be designed to enable inclusion rather than exclusion. Building on this work, I introduce two terms: inclusive sonic interaction design and Accessible Digital Musical Instruments (ADMIs). I also define nine properties to consider in the design and evaluation of ADMIs: expressiveness, playability, longevity, customizability, pleasure, sonic quality, robustness, multimodality and causality. Inspired by the experience of playing an acoustic instrument, I propose to enable musical inclusion for under-represented groups (for example persons with visual and hearing impairments, as well as elderly people) through the design of Digital Musical Instruments (DMIs) in the form of rich multisensory experiences allowing for multiple modes of interaction. At the same time, it is important to enable customization to fit user needs, both in terms of gestural control and provided sonic output. I conclude that the computer music community has the potential to actively engage more people in music-making activities. In addition, I stress the importance of identifying challenges that people face in these contexts, thereby enabling initiatives towards changing practices.

  • 5.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Sonification of women in sound and music computing - The sound of female authorship in ICMC, SMC and NIME proceedings (2017). In: 2017 ICMC/EMW - 43rd International Computer Music Conference and the 6th International Electronic Music Week, Shanghai Conservatory of Music, 2017, p. 233-238. Conference paper (Refereed)
    Abstract [en]

    The primary goal of this study was to approximate the number of female authors in the academic field of Sound and Music Computing. This was done through gender prediction from author names for proceedings from the ICMC, SMC and NIME conferences, and by sonifying these results. Although gender classification by first name can only serve as an estimation of the actual number of female authors in the field, some conclusions could be drawn. The total percentage of author names classified as female was 10.3% for ICMC, 11.9% for SMC and 11.9% for NIME. When merging data from all three conferences for years 2004-2016, it could be concluded that names classified as female ranged from 9.5 to 14.3%. Changes in the ratio of female vs. male authors over time were further illustrated by sonifications, allowing the reader to explore, compare and reflect upon the results by listening to sonic representations of the data. The conclusion that can be drawn from this study is that the field of Sound and Music Computing is still far from being gender-balanced.
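
    The counting step described above can be illustrated with a short Python sketch. The lookup tables and function below are hypothetical stand-ins; the study itself used a dedicated name-based gender-prediction method over full proceedings metadata:

        # Toy lookup tables (assumption) standing in for a real
        # gender-prediction method applied to author first names.
        FEMALE_NAMES = {"emma", "anna", "maria"}
        MALE_NAMES = {"roberto", "ludvig", "marcelo"}

        def female_ratio(first_names):
            """Percentage of names classified as female, ignoring unknowns."""
            female = sum(n.lower() in FEMALE_NAMES for n in first_names)
            male = sum(n.lower() in MALE_NAMES for n in first_names)
            classified = female + male
            return 100.0 * female / classified if classified else 0.0

        print(female_ratio(["Emma", "Roberto", "Ludvig", "Anna"]))  # 50.0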

  • 6.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Alborno, Paolo
    Elblaus, Ludvig
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Interactive Sonification of Spontaneous Movement of Children: Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound (2016). In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 10, article id 521. Article in journal (Refereed)
    Abstract [en]

    In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with different movement characteristics than a model characterized by abrupt variation in amplitude and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children’s spontaneous movement in terms of energy, smoothness and directness indices. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g. expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g. energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a cross-modal mapping of body motion qualities from bodily movement to sounds. Sound can be translated and understood from bodily motion, conveyed through sound visualizations in the shape of drawings and translated back from sound visualizations to audio. The work underlines the potential of using interactive sonification to communicate high-level features of human movement data.
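
    The three movement descriptors named above (energy, smoothness, directness) can be approximated from tracked positions. The Python sketch below uses generic, assumed definitions (kinetic-energy proxy, jerk-based smoothness, displacement-to-path-length ratio) and is not necessarily the paper's exact formulation:

        import numpy as np

        def movement_indices(xy, fs=100.0):
            """Rough proxies for energy, smoothness and directness of a
            2D trajectory xy (shape (N, 2), metres) sampled at fs Hz."""
            v = np.gradient(xy, 1.0 / fs, axis=0)        # velocity per axis
            speed = np.linalg.norm(v, axis=1)
            jerk = np.gradient(np.gradient(v, 1.0 / fs, axis=0),
                               1.0 / fs, axis=0)
            energy = np.mean(speed ** 2)                 # kinetic-energy proxy
            smoothness = -np.log1p(np.mean(np.linalg.norm(jerk, axis=1) ** 2))
            path = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
            displacement = np.linalg.norm(xy[-1] - xy[0])
            directness = displacement / path if path > 0 else 0.0
            return energy, smoothness, directness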

  • 7.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Alexanderson, Simon
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Perception of Mechanical Sounds Inherent to Expressive Gestures of a NAO Robot - Implications for Movement Sonification of Humanoids (2018). In: Proceedings of the 15th Sound and Music Computing Conference / [ed] Anastasia Georgaki and Areti Andreopoulou, Limassol, Cyprus, 2018. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a pilot study carried out within the project SONAO. The SONAO project aims to compensate for limitations in robot communicative channels with an increased clarity of Non-Verbal Communication (NVC) through expressive gestures and non-verbal sounds. More specifically, the purpose of the project is to use movement sonification of expressive robot gestures to improve Human-Robot Interaction (HRI). The pilot study described in this paper focuses on mechanical robot sounds, i.e. sounds that have not been specifically designed for HRI but are inherent to robot movement. Results indicated a low correspondence between perceptual ratings of mechanical robot sounds and emotions communicated through gestures. In general, the mechanical sounds themselves appeared not to carry much emotional information compared to video stimuli of expressive gestures. However, some mechanical sounds did communicate certain emotions, e.g. frustration. In general, the sounds appeared to communicate arousal more effectively than valence. We discuss potential issues and possibilities for the sonification of expressive robot gestures and the role of mechanical sounds in such a context. Emphasis is put on the need to mask or alter sounds inherent to robot movement, using for example blended sonification.

  • 8.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Moll, Jonas
    Sallnäs Pysander, Eva-Lotta
    Sonification of haptic interaction in a virtual scene (2014). In: Sound and Music Computing Sweden 2014, Stockholm, December 4-5, 2014 / [ed] Roberto Bresin, 2014, p. 14-16. Conference paper (Refereed)
    Abstract [en]

    This paper presents a brief overview of work-in-progress for a study on correlations between visual and haptic spatial attention in a multimodal single-user application comparing different modalities. The aim is to gain insight into how auditory and haptic versus visual representations of temporal events may affect task performance and spatial attention. For this purpose, a 3D application involving one haptic model and two different sound models for interactive sonification is developed.

  • 9.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Sallnäs Pysander, Eva-Lotta
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Moll, Jonas
    Uppsala University.
    An Exploratory Study On The Effect Of Auditory Feedback On Gaze Behavior In a Virtual Throwing Task With and Without Haptic Feedback (2017). In: Proceedings of the 14th Sound and Music Computing Conference / [ed] Tapio Lokki, Jukka Pätynen, and Vesa Välimäki, Espoo, Finland, 2017, p. 242-249. Conference paper (Refereed)
    Abstract [en]

    This paper presents findings from an exploratory study on the effect of auditory feedback on gaze behavior. A total of 20 participants took part in an experiment where the task was to throw a virtual ball into a goal in different conditions: visual only, audiovisual, visuohaptic and audiovisuohaptic. Two different sound models were compared in the audio conditions. Analysis of eye tracking metrics indicated large inter-subject variability; the difference between subjects was greater than the difference between feedback conditions. No significant effect of condition could be observed, but clusters of similar behaviors were identified. Some of the participants’ gaze behaviors appeared to have been affected by the presence of auditory feedback, but the effect of sound model was not consistent across subjects. We discuss individual behaviors and illustrate gaze behavior through sonification of gaze trajectories. Findings from this study raise intriguing questions that motivate future large-scale studies on the effect of auditory feedback on gaze behavior.
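
    The clustering of gaze behaviors mentioned above can be illustrated with a generic sketch; the abstract does not state which method or metrics were used, so both the k-means choice and the toy data below are assumptions:

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical per-participant eye-tracking metrics:
        # columns = mean fixation duration (s), saccade rate (1/s).
        metrics = np.array([[0.25, 2.1], [0.27, 2.0], [0.55, 1.1],
                            [0.52, 1.3], [0.30, 1.9], [0.58, 1.0]])
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(metrics)
        print(labels)  # two groups of similar gaze behavior, e.g. [0 0 1 1 0 1]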

  • 10.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Interactive sonification of a fluid dance movement: an exploratory study (2019). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 13, no. 3, p. 181-189. Article in journal (Refereed)
    Abstract [en]

    In this paper we present three different experiments designed to explore sound properties associated with fluid movement: (1) an experiment in which participants adjusted parameters of a sonification model developed for a fluid dance movement, (2) a vocal sketching experiment in which participants sketched sounds portraying fluid versus nonfluid movements, and (3) a workshop in which participants discussed and selected fluid versus nonfluid sounds. Consistent findings from the three experiments indicated that sounds expressing fluidity generally occupy a lower register and have less high-frequency content, as well as a lower bandwidth, than sounds expressing nonfluidity. The ideal sound to express fluidity is continuous, calm, slow, pitched, reminiscent of wind, water or an acoustic musical instrument. The ideal sound to express nonfluidity is harsh, non-continuous, abrupt, dissonant, conceptually associated with metal or wood, unhuman and robotic. Findings presented in this paper can be used as design guidelines for future applications in which the movement property fluidity is to be conveyed through sonification.

  • 11.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Sonification of fluidity - An exploration of perceptual connotations of a particular movement feature (2016). In: Proceedings of ISon 2016, 5th Interactive Sonification Workshop, Bielefeld, Germany, 2016, p. 11-17. Conference paper (Refereed)
    Abstract [en]

    In this study we conducted two experiments in order to investigate potential strategies for sonification of the expressive movement quality “fluidity” in dance: one perceptual rating experiment (1) in which five different sound models were evaluated on their ability to express fluidity, and one interactive experiment (2) in which participants adjusted parameters for the most fluid sound model in (1) and performed vocal sketching to two video recordings of contemporary dance. Sounds generated in the fluid condition occupied a low register and had darker, more muffled, timbres compared to the non-fluid condition, in which sounds were characterized by a higher spectral centroid and contained more noise. These results were further supported by qualitative data from interviews. The participants conceptualized fluidity as a property related to water, pitched sounds, wind, and continuous flow; non-fluidity had connotations of friction, struggle and effort. The biggest conceptual distinction between fluidity and non-fluidity was the dichotomy of “nature” and “technology”, “natural” and “unnatural”, or even “human” and “unhuman”. We suggest that these distinct connotations should be taken into account in future research focusing on the fluidity quality and its corresponding sonification.
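
    The spectral descriptors used in both fluidity studies above are standard. As a minimal Python sketch (not the authors' code), spectral centroid and bandwidth can be computed as magnitude-weighted moments of the spectrum:

        import numpy as np

        def centroid_and_bandwidth(signal, sr):
            """Spectral centroid (Hz) and bandwidth (Hz) of a mono signal,
            using standard magnitude-spectrum definitions."""
            mag = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
            centroid = np.sum(freqs * mag) / np.sum(mag)
            bandwidth = np.sqrt(np.sum(((freqs - centroid) ** 2) * mag) / np.sum(mag))
            return centroid, bandwidth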

  • 12.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Giordano, Marcello
    McGill University.
    Schumacher, Marlon M.
    McGill University.
    Wanderley, Marcelo M.
    McGill University.
    Physical and Perceptual Characterization of a Tactile Display for a Live-Electronics Notification System (2014). In: Proceedings of the ICMC|SMC|2014 Conference, McGill University, 2014. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a study we conducted to assess physical and perceptual properties of a tactile display for a tactile notification system within the CIRMMT Live Electronics Framework (CLEF), a Max-based modular environment for composition and performance of live electronic music. Our tactile display is composed of two rotating eccentric mass actuators driven by a PWM signal generated from an Arduino microcontroller. We conducted physical measurements using an accelerometer and two user-based studies in order to evaluate intensity and spectral peak frequency as a function of duty cycle, as well as perceptual vibrotactile absolute and differential thresholds. Results, obtained through the use of a logit regression model, provide us with precise design guidelines. These guidelines will enable us to ensure robust perceptual discrimination between vibrotactile stimuli at different intensities. Along with other characterizations presented in this study, these guidelines will allow us to better design tactile cues for our notification system for live-electronics performance.
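
    The threshold estimation mentioned above can be illustrated by fitting a logistic psychometric function to detection data. The duty-cycle values and response proportions below are invented for the example, not taken from the study:

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, x0, k):
            """Detection probability as a function of stimulus intensity."""
            return 1.0 / (1.0 + np.exp(-k * (x - x0)))

        # Hypothetical data: PWM duty cycle (%) vs. proportion "felt it".
        duty = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
        p_detect = np.array([0.05, 0.1, 0.3, 0.55, 0.75, 0.9, 0.95, 1.0])

        (x0, k), _ = curve_fit(logistic, duty, p_detect, p0=[40.0, 0.1])
        print(f"50% detection point: {x0:.1f}% duty cycle")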

  • 13. Frid, Emma
    Gomes, Celso
    Jin, Zeyu
    Music Creation by Example (2020). Manuscript (preprint) (Other academic)
    Abstract [en]

    Short online videos have become the dominating media on social platforms. However, finding suitable music to accompany videos can be a challenging task for some video creators, due to copyright constraints, limitations in search engines, and required audio-editing expertise. One possible solution to these problems is to use AI music generation. In this paper we present a user interface (UI) paradigm that allows users to input a song to an AI music engine and then interactively regenerate and mix AI-generated music. To arrive at this design, we conducted user studies with a total of 104 video creators at several stages of our design and development process. User studies supported the effectiveness of our approach and provided valuable insights about human-AI interaction as well as the design and evaluation of mixed-initiative interfaces in creative practice.

  • 14.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Lindetorp, Hans
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID. KMH Royal College of Music, Stockholm, Sweden.
    Hansen, Kjetil Falkenberg
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Sound Forest - Evaluation of an Accessible Multisensory Music Installation (2019). In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, ACM, 2019, p. 1-12, article id 677. Conference paper (Refereed)
    Abstract [en]

    Sound Forest is a music installation consisting of a room with light-emitting interactive strings, vibrating platforms and speakers, situated at the Swedish Museum of Performing Arts. In this paper we present an exploratory study focusing on evaluation of Sound Forest based on picture cards and interviews. Since Sound Forest should be accessible for everyone, regardless of age or abilities, we invited children, teens and adults with physical and intellectual disabilities to take part in the evaluation. The main contribution of this work lies in its findings suggesting that multisensory platforms such as Sound Forest, providing whole-body vibrations, can be used to provide visitors of different ages and abilities with similar associations to musical experiences. Interviews also revealed positive responses to haptic feedback in this context. Participants of different ages used different strategies and bodily modes of interaction in Sound Forest, with activities ranging from running to synchronized music-making and collaborative play.

  • 15.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Moll, Jonas
    Uppsala University.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Sallnäs Pysander, Eva-Lotta
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task (2018). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 13, no. 4, p. 279-290. Article in journal (Refereed)
    Abstract [en]

    In this paper we present a study on the effects of auditory- and haptic feedback in a virtual throwing task performed with a point-based haptic device. The main research objective was to investigate if and how task performance and perceived intuitiveness is affected when interactive sonification and/or haptic feedback is used to provide real-time feedback about a movement performed in a 3D virtual environment. Emphasis was put on task solving efficiency and subjective accounts of participants’ experiences of the multimodal interaction in different conditions. The experiment used a within-subjects design in which the participants solved the same task in different conditions: visual-only, visuohaptic, audiovisual and audiovisuohaptic. Two different sound models were implemented and compared. Significantly lower error rates were obtained in the audiovisuohaptic condition involving movement sonification based on a physical model of friction, compared to the visual-only condition. Moreover, a significant increase in perceived intuitiveness was observed for most conditions involving haptic and/or auditory feedback, compared to the visual-only condition. The main finding of this study is that multimodal feedback can not only improve perceived intuitiveness of an interface but that certain combinations of haptic feedback and movement sonification can also contribute with performance-enhancing properties. This highlights the importance of carefully designing feedback combinations for interactive applications.

  • 16. Giordano, M
    Hattwick, I
    Franco, I
    Egloff, D
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Lamontagne, V
    TeZ, C
    Salter, C
    Wanderley, M
    Design and Implementation of a Whole-Body Haptic Suit for “Ilinx”, a Multisensory Art Installation (2015). In: Proc. of the 12th Int. Conference on Sound and Music Computing (SMC-15) / [ed] Joseph Timoney and Thomas Lysaght, Maynooth, Ireland: Maynooth University, 2015, Vol. 1, p. 169-175. Conference paper (Refereed)
    Abstract [en]

    Ilinx is a multidisciplinary art/science research project focusing on the development of a multisensory art installation involving sound, visuals and haptics. In this paper we describe design choices and technical challenges behind the development of the haptic technology embedded into six augmented garments. Starting from perceptual experiments, conducted to characterize the thirty vibrating actuators used in the garments, we describe hardware and software design, and the development of several haptic effects. The garments have successfully been used by over 300 people during the premiere of the installation at the TodaysArt 2014 festival in The Hague.

  • 17.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Sonic characteristics of robots in films (2019). In: Proceedings of the 16th Sound and Music Computing Conference, Malaga, Spain, 2019, p. 1-6, article id P2.7. Conference paper (Refereed)
    Abstract [en]

    Robots are increasingly becoming an integral part of our everyday life. Expectations on robots could be influenced by how robots are represented in science fiction films. We hypothesize that sonic interaction design for real-world robots may find inspiration from sound design of fictional robots. In this paper, we present an exploratory study focusing on sonic characteristics of robot sounds in films. We believe that findings from the current study could be of relevance for future robotic applications involving the communication of internal states through sounds, as well as for sonification of expressive robot movements. Excerpts from five films were annotated and analysed using Long Time Average Spectrum (LTAS). As an overall observation, we found that robot sonic presence is highly related to the physical appearance of robots. Preliminary results show that most of the robots analysed in this study have “metallic” voice qualities, matching the material of their physical form. Characteristics of robot voices show significant differences compared to voices of human characters; fundamental frequency of robotic voices is either shifted to higher or lower values, and the voices span over a broader frequency band.
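
    The LTAS analysis mentioned above can be approximated by averaging short-time power spectra, for example with Welch's method. A minimal sketch, with segment length as an assumed parameter since the paper's exact analysis settings are not given here:

        import numpy as np
        from scipy.signal import welch

        def ltas_db(signal, sr, seg_seconds=0.5):
            """Long Time Average Spectrum approximated as the Welch-averaged
            power spectral density, returned in dB."""
            nperseg = int(seg_seconds * sr)
            freqs, psd = welch(signal, fs=sr, nperseg=nperseg)
            return freqs, 10.0 * np.log10(psd + 1e-12)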

  • 18.
    Paloranta, Jimmie
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Lundström, Anders
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Elblaus, Ludvig
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Interaction with a large sized augmented string instrument intended for a public setting (2016). In: Sound and Music Computing 2016 / [ed] Großmann, Rolf and Hajdu, Georg, Hamburg: Zentrum für Mikrotonale Musik und Multimediale Komposition (ZM4), 2016, p. 388-395. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a study of the interaction with a large-sized string instrument intended for a large installation in a museum, with a focus on encouraging creativity, learning, and providing engaging user experiences. In the study, nine participants were video recorded while interacting with the string on their own, followed by an interview focusing on their experiences, creativity, and the functionality of the string. In line with previous research, our results highlight the importance of designing for different levels of engagement (exploration, experimentation, challenge). However, results additionally show that these levels need to consider the user's age and musical background, as these profoundly affect the way the user plays with and experiences the string.

  • 19.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Sköld, Mattias
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID. KMH Royal College of Music.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    From vocal sketching to sound models by means of a sound-based musical transcription system (2019). In: Proceedings of the 16th Sound and Music Computing Conference, Malaga, Spain, 2019, p. 1-7, article id S2.5. Conference paper (Refereed)
    Abstract [en]

    This paper explores how notation developed for the representation of sound-based musical structures could be used for the transcription of vocal sketches representing expressive robot movements. A mime actor initially produced expressive movements which were translated to a humanoid robot. The same actor was then asked to illustrate these movements using vocal sketching. The vocal sketches were transcribed by two composers using sound-based notation. The same composers later synthesized new sonic sketches from the annotated data. Different transcriptions and synthesized versions of these were compared in order to investigate how the audible outcome changes for different transcriptions and synthesis routines. This method provides a palette of sound models suitable for the sonification of expressive body movements.
