kth.se Publications
1 - 14 of 14
  • 1.
    Bresin, Roberto
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. IRCAM STMS Lab.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
Robust Non-Verbal Expression in Humanoid Robots: New Methods for Augmenting Expressive Movements with Sound. 2021. Conference paper (Refereed)
    Abstract [en]

The aim of the SONAO project is to establish new methods based on sonification of expressive movements for achieving a robust interaction between users and humanoid robots. We want to achieve this by combining competences of the research team members in the fields of social robotics, sound and music computing, affective computing, and body motion analysis. We want to engineer sound models for implementing effective mappings between stylized body movements and sound parameters that will enable an agent to express high-level body motion qualities through sound. These mappings are paramount for supporting feedback to and understanding of robot body motion. The project will result in the development of new theories, guidelines, models, and tools for the sonic representation of high-level body motion qualities in interactive applications. This work is part of the growing research field known as data sonification, in which we combine methods and knowledge from the fields of interactive sonification, embodied cognition, multisensory perception, and non-verbal and gestural communication in robots.
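The kind of movement-to-sound mapping the abstract describes can be sketched in a few lines. This is an illustrative toy, not the SONAO project's actual models: the specific choices (position drives pitch, velocity magnitude drives amplitude) and all parameter values are assumptions for demonstration.

```python
# Toy sonification mapping: a 1-D joint trajectory is turned into a
# stream of (frequency, amplitude) control pairs for a synthesizer.
# Position maps linearly onto a pitch range; the magnitude of the
# velocity (first difference) drives amplitude, so faster, more
# expressive movement sounds louder.

def sonification_mapping(positions, dt=0.1, base_freq=220.0, span=440.0):
    """Map normalized joint positions to (frequency, amplitude) pairs."""
    params = []
    prev = positions[0]
    for p in positions:
        velocity = (p - prev) / dt
        freq = base_freq + span * max(0.0, min(1.0, p))  # clamp position to [0, 1]
        amp = max(0.0, min(1.0, abs(velocity)))          # clamp amplitude to [0, 1]
        params.append((freq, amp))
        prev = p
    return params

# A slow rise followed by a fast gesture: amplitude tracks movement speed.
trajectory = [0.0, 0.1, 0.2, 0.3, 0.8, 0.9]
for freq, amp in sonification_mapping(trajectory):
    print(f"freq={freq:6.1f} Hz  amp={amp:.2f}")
```

In practice such control pairs would be sent to a sound engine rather than printed; the point is only that a high-level motion quality (speed) becomes an audible parameter (loudness).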

  • 2.
    Falkenberg, Kjetil
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lindetorp, Hans
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS).
Creating digital musical instruments with and for children: Including vocal sketching as a method for engaging in codesign. 2020. In: Human Technology, E-ISSN 1795-6889, Vol. 16, no 3, p. 348-371. Article in journal (Refereed)
    Abstract [en]

    A class of master of science students and a group of preschool children codesigned new digital musical instruments based on workshop interviews involving vocal sketching, a method for imitating and portraying sounds. The aim of the study was to explore how the students and children would approach vocal sketching as one of several design methods. The children described musical instruments to the students using vocal sketching and other modalities (verbal, drawing, gestures). The resulting instruments built by the students were showcased at the Swedish Museum of Performing Arts in Stockholm. Although all the children tried vocal sketching during preparatory tasks, few employed the method during the workshop. However, the instruments seemed to meet the children’s expectations. Consequently, even though the vocal sketching method alone provided few design directives in the given context, we suggest that vocal sketching, under favorable circumstances, can be an engaging component that complements other modalities in codesign involving children.

  • 3.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lindetorp, Hans
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. KMH Royal Acad Mus, Mus & Media Prod, Stockholm, Sweden.
Unproved methods from the frontier in the course curriculum: A bidirectional and mutually beneficial research challenge. 2020. In: INTED2020 Proceedings, IATED, 2020, p. 7033-7038. Conference paper (Refereed)
  • 4.
    Latupeirissa, Adrian B.
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
PepperOSC: enabling interactive sonification of a robot's expressive movement. 2023. In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 17, no 4, p. 231-239. Article in journal (Refereed)
    Abstract [en]

This paper presents the design and development of PepperOSC, an interface that connects Pepper and NAO robots with sound production tools to enable the development of interactive sonification in human-robot interaction (HRI). The interface uses Open Sound Control (OSC) messages to stream kinematic data from robots to various sound design and music production tools. The goals of PepperOSC are twofold: (i) to provide a tool for HRI researchers in developing multimodal user interfaces through sonification, and (ii) to lower the barrier for sound designers to contribute to HRI. To demonstrate the potential use of PepperOSC, this paper also presents two applications we have conducted: (i) a course project by two master's students who created a robot sound model in Pure Data, and (ii) a museum installation of a Pepper robot, employing sound models developed by a sound designer and a composer/researcher in music technology using Max/MSP and SuperCollider respectively. Furthermore, we discuss the potential use cases of PepperOSC in social robotics and artistic contexts. These applications demonstrate the versatility of PepperOSC and its ability to explore diverse aesthetic strategies for robot movement sonification, offering a promising approach to enhance the effectiveness and appeal of human-robot interactions.
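The OSC wire format that carries such kinematic data is simple enough to sketch by hand. The following is an illustration of OSC 1.0 message encoding, not PepperOSC's actual code, and the address pattern "/pepper/joint/HeadYaw" is a hypothetical example rather than PepperOSC's documented address scheme.

```python
# Pack one joint reading as a binary OSC 1.0 message, the format used
# to stream data over UDP to tools such as Pure Data, Max/MSP, or
# SuperCollider. OSC strings are null-terminated and padded to a
# multiple of 4 bytes; float arguments are big-endian float32.
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, as OSC requires."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Build an OSC message with float32 arguments."""
    type_tags = "," + "f" * len(floats)  # e.g. ",f" for one float
    packet = osc_pad(address.encode()) + osc_pad(type_tags.encode())
    for value in floats:
        packet += struct.pack(">f", value)  # big-endian float32
    return packet

# One joint angle reading, ready to send over a UDP socket:
packet = osc_message("/pepper/joint/HeadYaw", 0.5)
```

In a real deployment a library such as python-osc would typically handle this encoding; the sketch just shows why OSC is a low-friction bridge between robot middleware and sound engines.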

  • 5.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
From Motion Pictures to Robotic Features: Adopting film sound design practices to foster sonic expression in social robotics through interactive sonification. 2024. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This dissertation investigates the role of sound design in social robotics, drawing inspiration from robot depictions in science-fiction films. It addresses the limitations of robots’ movements and expressive behavior by integrating principles from film sound design, seeking to improve human-robot interaction through expressive gestures and non-verbal sounds.

    The compiled works are structured into two parts. The first part focuses on perceptual studies, exploring how people perceive non-verbal sounds displayed by a Pepper robot related to its movement. These studies highlighted preferences for more refined sound models, subtle sounds that blend with ambient sounds, and sound characteristics matching the robot’s visual attributes. This part also resulted in a programming interface connecting the Pepper robot with sound production tools.

    The second part focuses on a structured analysis of robot sounds in films, revealing three narrative themes related to robot sounds in films with implications for social robotics. The first theme involves sounds associated with the physical attributes of robots, encompassing sub-themes of sound linked to robot size, exposed mechanisms, build quality, and anthropomorphic traits. The second theme delves into sounds accentuating robots’ internal workings, with sub-themes related to learning and decision-making processes. Lastly, the third theme revolves around sounds utilized in robots’ interactions with other characters within the film scenes.

    Based on these works, the dissertation discusses sound design recommendations for social robotics inspired by practices in film sound design. These recommendations encompass selecting the appropriate sound materials and sonic characteristics such as pitch and timbre, employing movement sound for effective communication and emotional expression, and integrating narrative and context into the interaction.

  • 6.
    Latupeirissa, Adrian Benigno
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
Understanding non-verbal sound of humanoid robots in films. 2020. Conference paper (Refereed)
    Abstract [en]

People’s mental model of robots is of importance since it can influence their expectations of how a robot should appear and behave, which in turn will affect the interaction between human and robot. The current mental model of robots is influenced by the presence of robots in films. Thus, understanding the principles of the design of robots in film would benefit the design of robots in the real world.

This extended abstract presents an ongoing investigation of the use of non-verbal sounds of robots in films. Specifically, the investigation looks into the purpose of the sounds, how they are designed, and how the sound design has changed throughout the history of film. Preliminary results suggest a categorization of robotic sounds in films: inner workings, communication of movement, and expression of emotion.

While further sound design principles are still being formulated, we would argue that this historical perspective benefits the understanding of current expectations of how a robot should sound, thus laying the groundwork for further research on the use of sound in HRI.

  • 7.
    Latupeirissa, Adrian Benigno
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
Sonic characteristics of robots in films. 2019. In: Proceedings of the 16th Sound and Music Computing Conference, Malaga, Spain, 2019, p. 1-6, article id P2.7. Conference paper (Refereed)
    Abstract [en]

Robots are increasingly becoming an integral part of our everyday life. Expectations on robots could be influenced by how robots are represented in science fiction films. We hypothesize that sonic interaction design for real-world robots may find inspiration from sound design of fictional robots. In this paper, we present an exploratory study focusing on sonic characteristics of robot sounds in films. We believe that findings from the current study could be of relevance for future robotic applications involving the communication of internal states through sounds, as well as for sonification of expressive robot movements. Excerpts from five films were annotated and analysed using Long Time Average Spectrum (LTAS). As an overall observation, we found that robot sonic presence is highly related to the physical appearance of robots. Preliminary results show that most of the robots analysed in this study have “metallic” voice qualities, matching the material of their physical form. Characteristics of robot voices show significant differences compared to voices of human characters; fundamental frequency of robotic voices is either shifted to higher or lower values, and the voices span over a broader frequency band.
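The LTAS analysis mentioned above has a simple core: cut the signal into frames, take a magnitude spectrum per frame, and average the spectra over time. The sketch below illustrates that idea only; it is not the study's analysis code, and it uses a naive DFT to stay dependency-free where a real analysis would use a windowed FFT.

```python
# Minimal Long-Time Average Spectrum: average per-frame magnitude
# spectra over the whole signal, so transient detail is smoothed out
# and the overall spectral balance (e.g. a "metallic" timbre) remains.
import cmath

def dft_magnitudes(frame):
    """Magnitudes of the first n/2 DFT bins of one frame (naive O(n^2))."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(frame))) / n
            for k in range(n // 2)]

def ltas(signal, frame_size=8):
    """Average the per-frame magnitude spectra across the signal."""
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, frame_size)]
    spectra = [dft_magnitudes(f) for f in frames]
    return [sum(bins) / len(spectra) for bins in zip(*spectra)]
```

Comparing the LTAS of a robot voice against a human voice is then a matter of comparing two such averaged spectra, e.g. where their energy is centered and how wide the occupied band is.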

  • 8.
    Latupeirissa, Adrian Benigno
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Murdeshwar, Akshata
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
Semiotic analysis of robot sounds in films: implications for sound design in social robotics. Manuscript (preprint) (Other academic)
    Abstract [en]

    This paper investigates the sound design of robots in films and their potential influence on the field of social robotics. Cinematic robot portrayals have inspired researchers and practitioners in Human-Robot Interaction (HRI). While the non-verbal sounds of iconic film robots like R2-D2 and Wall-E have been explored, this study takes a more comprehensive approach. We explore a broader selection of 15 films featuring humanoid robots across decades through a semiotic analysis of their non-verbal communication sounds, including those related to movements and internal mechanisms. Our analysis, guided by Bateman and Schmidt’s multimodal film analysis framework following Saussure’s organization of signs through paradigmatic and syntagmatic relations, interprets the paradigmatic axis as the examination of the sound and the syntagmatic axis as the examination of the events surrounding the sound. The findings uncover two primary film robot sound materials: mechanical and synthetic. Additionally, contextual analysis reveals three narrative themes and several sub-themes related to the physical attributes of robots, their internal workings, and their interactions with other characters. The discussion section explores the implications of these findings for social robotics, including the importance of sound materials, the role of movement sounds in communication and emotional expression, and the significance of narrative and context in human-robot interaction. The paper also acknowledges the challenges in translating film sound design into practical applications in social robotics. This study provides valuable insights for HRI researchers, practitioners, and sound designers seeking to enhance non-verbal auditory expressions in social robots.

  • 9.
    Latupeirissa, Adrian Benigno
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
Exploring emotion perception in sonic HRI. 2020. In: 17th Sound and Music Computing Conference, Torino: Zenodo, 2020, p. 434-441. Conference paper (Refereed)
    Abstract [en]

Despite the fact that sounds produced by robots can affect the interaction with humans, sound design is often an overlooked aspect in Human-Robot Interaction (HRI). This paper explores how different sets of sounds designed for expressive gestures of a humanoid Pepper robot can influence the perception of emotional intentions. In the pilot study presented in this paper, participants were asked to rate different stimuli in terms of perceived affective states. The stimuli were audio, audio-video, and video only, and contained either Pepper's original servomotor noises, sawtooth waves, or more complex designed sounds. The preliminary results show a preference for the use of more complex sounds, thus confirming the necessity of further exploration in sonic HRI.

  • 10.
    Latupeirissa, Adrian Benigno
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification. 2023. In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522. Article in journal (Refereed)
    Abstract [en]

    This paper presents three studies where we probe aesthetics strategies of sound produced by movement sonification of a Pepper robot by mapping its movements to sound models.

    We developed two sets of sound models. The first set was made by two sound models, a sawtooth-based one and another based on feedback chains, for investigating how the perception of synthesized robot sounds would depend on their design complexity. We implemented the second set of sound models for probing the “materiality” of sound made by a robot in motion. This set consisted of a sound synthesis based on an engine highlighting the robot’s internal mechanisms, a metallic sound synthesis highlighting the robot’s typical appearance, and a whoosh sound synthesis highlighting the movement.

    We conducted three studies. The first study explores how the first set of sound models can influence the perception of expressive gestures of a Pepper robot through an online survey. In the second study, we carried out an experiment in a museum installation with a Pepper robot presented in two scenarios: (1) while welcoming patrons into a restaurant and (2) while providing information to visitors in a shopping center. Finally, in the third study, we conducted an online survey with stimuli similar to those used in the second study.

    Our findings suggest that participants preferred more complex sound models for the sonification of robot movements. Concerning the materiality, participants liked better subtle sounds that blend well with the ambient sound (i.e., less distracting) and soundscapes in which sound sources can be identified. Also, sound preferences varied depending on the context in which participants experienced the robot-generated sounds (e.g., as a live museum installation vs. an online display).
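The simplest of the sound models compared above, the sawtooth-based one, can be sketched directly. This is an illustrative generator only; the study's actual models were built in dedicated synthesis environments, and the parameter values here are assumptions.

```python
# A basic (band-unlimited) sawtooth oscillator: the phase ramps from
# 0 to 1 once per period and is rescaled to the range [-1, 1). Its
# bright, buzzy spectrum is what makes it a natural "simple" baseline
# against more complex, designed sound models.
def sawtooth(freq, duration, sample_rate=44100):
    """Generate sawtooth samples in [-1, 1) at the given frequency."""
    n = int(duration * sample_rate)
    return [2.0 * ((i * freq / sample_rate) % 1.0) - 1.0 for i in range(n)]

# 10 ms of a 220 Hz sawtooth, ready to write to a sound buffer:
samples = sawtooth(220.0, 0.01)
```

A production synthesizer would band-limit the waveform to avoid aliasing; the naive ramp suffices to show the shape of the signal.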

  • 11.
    Rafi, Ayesha Kajol
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Murdeshwar, Akshata
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
Investigating the Role of Robot Voices and Sounds in Shaping Perceived Intentions. 2023. In: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, Association for Computing Machinery (ACM), 2023, p. 425-427. Conference paper (Refereed)
    Abstract [en]

This study explores if, and how, the choices made regarding a robot's speaking voice and characteristic body sounds influence viewers' perceptions of its intent, i.e., whether the robot's intention is positive or negative. The analysis focuses on robot representations and sounds in three films: "Robots" (2005) [1], "NextGen" (2018) [2], and "Love, Death, and Robots - Three Robots" (2019) [3]. In eight qualitative interviews, five parameters (tonality, intonation, volume, pitch, and speed) were used to characterize robot sounds and the participants' perception of a robot's attitude and intentions. The study culminates in a set of recommendations and considerations for human-robot interaction designers when sound coding for body, physiology, and movement.

  • 12.
    Telang, Sargam
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Marques, Malin
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
Emotional Feedback of Robots: Comparing the perceived emotional feedback by an audience between masculine and feminine voices in robots in popular media. 2023. In: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, Association for Computing Machinery (ACM), 2023, p. 434-436. Conference paper (Refereed)
    Abstract [en]

The sound design of fantastical elements can tell the audience much about characters and objects. Robots are among the fantastical characters that are commonly sonified to indicate different aspects of their character. Often, one or more of these traits indicate gender and behavior. We investigated these traits in a survey with both quantitative and qualitative questions about the participants' perceptions. We found that participants indicated a bias towards certain robots depending on perceived femininity and masculinity.

  • 13.
    Torre, Ilaria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    McGinn, Conor
    Trinity College Dublin.
How context shapes the appropriateness of a robot's voice. 2020. In: 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020, Institute of Electrical and Electronics Engineers (IEEE), 2020, Vol. 9223449, p. 215-222. Conference paper (Refereed)
    Abstract [en]

    Social robots have a recognizable physical appearance, a distinct voice, and interact with users in specific contexts. Previous research has suggested a 'matching hypothesis', which seeks to rationalise how people judge a robot's appropriateness for a task by its appearance. Other research has extended this to cover combinations of robot voices and appearances. In this paper, we examine the missing connection between robot voice, robot appearance, and deployment context. In so doing, we asked participants to match a robot image to a voice within a defined interaction context. We selected widely available social robots, identified task contexts they are used in, and manipulated the voices in terms of gender, naturalness, and accent. We found that the task context mediates the 'matching hypothesis'. People consistently selected a robot based on a vocal feature for a certain context, and a different robot based on the same vocal feature for another context. We suggest that robot voice design should take advantage of current technology that enables the creation and tuning of custom voices. They are a flexible tool to increase perception of appropriateness, which has a positive influence on Human-Robot Interaction. 

  • 14.
    Zojaji, Sahba
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Peters, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
Persuasive polite robots in free-standing conversational groups. 2023. In: Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1-8. Conference paper (Refereed)
    Abstract [en]

Politeness is at the core of the common set of behavioral norms that regulate human communication and is therefore of significant interest in the design of Human-Robot Interactions. In this paper, we investigate how the politeness behaviors of a humanoid robot impact human decisions about where to join a group of two robots. We also evaluate the resulting impact on the perception of the robot's politeness. In a study involving 59 participants, the main (Pepper) robot in the group invited participants to join using six politeness behaviors derived from Brown and Levinson's politeness theory. It requested participants to join the group at its furthest side, which involves more effort to reach than a closer side that is also available to the participant but would ignore the request of the robot. We evaluated the robot's effectiveness in terms of persuasiveness, politeness, and clarity. We found that more direct and explicit politeness strategies derived from the theory have a higher level of success in persuading participants to join at the furthest side of the group. We also evaluated participants' adherence to social norms, i.e., not walking through the center, or "o-space", of the group when joining it. Our results showed that participants tended to adhere to social norms when joining at the furthest side by not walking through the center of the group of robots, even though they were informed that the robots were fully automated.
