KTH Publications (kth.se)
Latupeirissa, Adrian Benigno (ORCID iD: orcid.org/0000-0003-3572-6429)
Publications (10 of 16)
Zheng, C. Y., Latupeirissa, A. B., Chen, Y., Woodward, K., Balaam, M. & Andrikopoulos, G. (2025). A Wearable E-Textile Force-Sensing Garment for Characterising Caring Touch. IEEE Sensors Journal, 1-1
A Wearable E-Textile Force-Sensing Garment for Characterising Caring Touch
2025 (English). In: IEEE Sensors Journal, ISSN 1530-437X, E-ISSN 1558-1748, p. 1-1. Article in journal (Refereed). Epub ahead of print
Abstract [en]

To enable better human-machine and human-robot interaction in healthcare applications, haptic devices and assistive and companion robots need to touch humans in a way that is perceived as comfortable, safe and trustworthy. Towards this vision, we developed a flexible, wearable matrix force-sensing garment capable of recording the dynamic force behaviour of caring touch from four healthcare experts in their natural therapy environment, while enabling the touch recipient to provide feedback on the perceived quality of the same touches. Instead of assuming that particular touch gestures are caring, we explored the application of the sensing and analysis system to characterise a caring quality of touch across different gestures. We characterised distinctive features both for touch aimed at sensory stimulation and for touch that moves the body. Our findings demonstrate that caring touch is not limited to specific gestures or a single parameter but is rather characterised by how the touch is performed across multiple features, with deliberateness, the care and smoothness with which force is increased and decreased, emerging as a shared characteristic of caring touch regardless of the specific gesture employed. The study also demonstrates the feasibility of using raw tactile sensor data to assess subjective touch quality.
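As a rough illustration of how features like the deliberateness described above could be computed from such recordings, the sketch below derives rise time, release time, and a jerk-based smoothness proxy from the summed force of a matrix sensor. The array layout, sampling rate, and thresholds are assumptions made for this sketch, not the paper's actual processing pipeline.

```python
import numpy as np

def deliberateness_features(frames, fs=50.0):
    """Toy touch features from a (T, rows, cols) force-sensor recording.

    `frames` is a hypothetical time series of taxel readings (arbitrary
    units) and `fs` an assumed sampling rate in Hz. Returns rise time,
    release time, and a jerk-based smoothness proxy of the summed force.
    """
    force = frames.reshape(len(frames), -1).sum(axis=1)   # total force per frame
    peak = force.max()
    plateau = np.where(force >= 0.9 * peak)[0]            # samples near peak force
    onset = np.argmax(force >= 0.1 * peak)                # first crossing of 10% of peak
    rise_time = (plateau[0] - onset) / fs                 # seconds from 10% to 90% of peak
    release_time = (len(force) - 1 - plateau[-1]) / fs    # seconds from last 90% sample to end
    jerk = np.diff(force, n=3) * fs**3                    # third difference ~ jerk of the force
    smoothness = -np.log1p(np.mean(jerk**2))              # higher value = smoother force profile
    return {"rise_time_s": rise_time,
            "release_time_s": release_time,
            "smoothness": smoothness}

# Example with synthetic data: a slow, smooth press on an 8x8 taxel grid.
t = np.linspace(0, 4, 200)
profile = np.clip(np.sin(np.pi * t / 4), 0, None)         # smooth rise and fall of force
frames = profile[:, None, None] * np.ones((200, 8, 8))
print(deliberateness_features(frames, fs=50.0))
```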

Place, publisher, year, edition, pages
IEEE, 2025
Keywords
caring touch, affective haptics, machine learning, assistive robotics, care haptics, force sensing e-textile
National Category
Embedded Systems
Identifiers
urn:nbn:se:kth:diva-376401 (URN)10.1109/jsen.2025.3640512 (DOI)2-s2.0-105025399378 (Scopus ID)
Note

QC 20260204

Available from: 2026-02-04. Created: 2026-02-04. Last updated: 2026-02-04. Bibliographically approved
Zheng, C. Y., Chen, Y., Latupeirissa, A. B., Andrikopoulos, G., Ståhl, A. & Balaam, M. (2025). Towards Caring Touch From Technologies: Knowledge From Healthcare Practitioners. In: CHI 2025: CHI Conference on Human Factors in Computing Systems. Paper presented at The 2025 CHI Conference on Human Factors in Computing Systems, CHI 2025, Yokohama, Japan, 26 April 2025 - 1 May 2025. Association for Computing Machinery (ACM)
Towards Caring Touch From Technologies: Knowledge From Healthcare Practitioners
2025 (English). In: CHI 2025: CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery (ACM), 2025. Conference paper, Published paper (Refereed)
Abstract [en]

We present a qualitative study with five healthcare experts specialised in different types of touch practice to gain insight into how caring touch can be enacted. Through our analysis we focus on how to transfer this learning into design considerations for enacting caring touch from technologies. Despite the rapidly growing expectation for, and design interest in, touch from technologies intended to enhance care and well-being, knowledge on how to design caring touch is still fragmented. How caring touch is enacted in interpersonal touch is under-explored, and such expertise from healthcare practitioners has not been engaged from the perspective of HCI design research. We propose that designers consider caring as an experiential quality rather than a division between instrumental and caring types of touch. When designing for a caring quality in technology-initiated touch, we recommend that designers create a progression of touch with dynamic sensitivity and adapt the materiality of actuating devices to the plural dimensions of the body’s textures.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
National Category
Interdisciplinary Studies in Humanities and Arts
Identifiers
urn:nbn:se:kth:diva-364635 (URN)10.1145/3706598.3713736 (DOI)2-s2.0-105005741195 (Scopus ID)
Conference
The 2025 CHI Conference on Human Factors in Computing Systems, CHI 2025, Yokohama, Japan, 26 April 2025 - 1 May 2025
Note

Part of ISBN 9798400713941

QC 20250702

Available from: 2025-06-16. Created: 2025-06-16. Last updated: 2025-07-02. Bibliographically approved
Latupeirissa, A. B. (2024). From Motion Pictures to Robotic Features: Adopting film sound design practices to foster sonic expression in social robotics through interactive sonification. (Doctoral dissertation). Stockholm, Sweden: KTH Royal Institute of Technology
From Motion Pictures to Robotic Features: Adopting film sound design practices to foster sonic expression in social robotics through interactive sonification
2024 (English). Doctoral thesis, comprehensive summary (Other academic) [Artistic work]
Alternative title [sv]
Från filmer till robotfunktioner : Användning av praxis inom filmljuddesign för att främja ljuduttryck i social robotik genom interaktiv sonifiering
Abstract [en]

This dissertation investigates the role of sound design in social robotics, drawing inspiration from robot depictions in science-fiction films. It addresses the limitations of robots’ movements and expressive behavior by integrating principles from film sound design, seeking to improve human-robot interaction through expressive gestures and non-verbal sounds.

The compiled works are structured into two parts. The first part focuses on perceptual studies, exploring how people perceive non-verbal sounds produced by a Pepper robot in relation to its movement. These studies highlighted preferences for more refined sound models, subtle sounds that blend with ambient sounds, and sound characteristics matching the robot’s visual attributes. This part also resulted in a programming interface connecting the Pepper robot with sound production tools.

The second part focuses on a structured analysis of robot sounds in films, revealing three narrative themes with implications for social robotics. The first theme involves sounds associated with the physical attributes of robots, encompassing sub-themes of sound linked to robot size, exposed mechanisms, build quality, and anthropomorphic traits. The second theme delves into sounds accentuating robots’ internal workings, with sub-themes related to learning and decision-making processes. Lastly, the third theme revolves around sounds used in robots’ interactions with other characters within film scenes.

Based on these works, the dissertation discusses sound design recommendations for social robotics inspired by practices in film sound design. These recommendations encompass selecting the appropriate sound materials and sonic characteristics such as pitch and timbre, employing movement sound for effective communication and emotional expression, and integrating narrative and context into the interaction.

Abstract [sv]

Denna avhandling undersöker ljuddesignens roll i social robotik, med inspiration från robotskildringar i science fiction filmer. Avhandlingen diskuterar begränsningar i robotars uttrycksfulla beteenden genom att integrera principer från filmljuddesign. Arbetet syftar till att främja interaktionen mellan människa och robot genom att förse robotar med uttrycksfulla gester och icke-verbala ljud.

Denna sammanläggningsavhandling inkluderar ett antal artiklar som är strukturerade i två separata delar. Den första delen fokuserar på perceptuella studier och undersöker hur människor uppfattar de icke-verbala ljud som roboten Pepper producerar i samband med sina rörelser. Dessa studier belyste preferenser för mer förfinade ljudmodeller, subtila ljud som blandas med omgivande ljud, och ljudegenskaper som matchar robotens visuella attribut. Denna del resulterade också i ett programmeringsgränssnitt som sammankopplar Pepper-roboten och ljudproduktionsverktyg.

Den andra delen fokuserar på en strukturerad analys av robotljud i filmer och avslöjar tre narrativa teman relaterade till robotljud i filmer med implikationer för social robotik. Det första temat handlar om ljud som förknippas med robotarnas fysiska attribut och omfattar underteman av ljud som är kopplade till robotstorlek, exponerade mekanismer, byggkvalitet, och antropomorfa drag. Det andra temat fördjupar sig i ljud som betonar robotarnas interna arbete, med underteman relaterade till inlärnings- och beslutsprocesser. Slutligen kretsar det tredje temat kring ljud som används i robotarnas interaktion med andra karaktärer i filmscenerna.

Baserat på ovan beskrivna arbeten diskuterar denna avhandling rekommendationer för ljuddesign inom social robotik inspirerade av praxis inom filmljuddesign. Dessa rekommendationer omfattar att välja lämpliga ljudmaterial och ljudegenskaper såsom tonhöjd och klangfärg, att använda rörelseljud för effektiv kommunikation och känslomässiga uttryck, samt att integrera narrativ och sammanhang i interaktionen.

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2024. p. xiii, 54
Series
TRITA-EECS-AVL ; 2024:13
Keywords
human-robot interaction, social robotics, film sound design, robot sound, interactive sonification, människa-robotinteraktion, social robotik, filmljuddesign, robotljud, interaktiv sonifiering
National Category
Robotics and automation; Human Computer Interaction; Studies on Film
Research subject
Media Technology
Identifiers
urn:nbn:se:kth:diva-342759 (URN)978-91-8040-831-8 (ISBN)
Public defence
2024-02-22, https://kth-se.zoom.us/j/61765490226, Kollegiesalen, Brinellvägen 8, Stockholm, 10:00 (English)
Note

QC 20240131

Available from: 2024-01-31. Created: 2024-01-30. Last updated: 2025-02-05. Bibliographically approved
Telang, S., Marques, M., Latupeirissa, A. B. & Bresin, R. (2023). Emotional Feedback of Robots: Comparing the perceived emotional feedback by an audience between masculine and feminine voices in robots in popular media. In: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction. Paper presented at 11th Conference on Human-Agent Interaction, HAI 2023, Gothenburg, Sweden, Dec 4 2023 - Dec 11 2023 (pp. 434-436). Association for Computing Machinery (ACM)
Emotional Feedback of Robots: Comparing the perceived emotional feedback by an audience between masculine and feminine voices in robots in popular media
2023 (English). In: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, Association for Computing Machinery (ACM), 2023, p. 434-436. Conference paper, Published paper (Refereed)
Abstract [en]

The sound design of fantastical elements can tell the audience much about characters and objects. Robots are among the common fantastical characters that need to be sonified to convey different aspects of their character. Often, one or more of these traits indicate gender and behavior. We investigated these traits in a survey with both quantitative and qualitative questions about participants' perceptions. We found that participants showed a bias towards certain robots depending on perceived femininity and masculinity.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Gender, Gender stereotypes, Human-robot interaction, Perception, Robots, Science Fiction, Social Robots
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-341673 (URN)10.1145/3623809.3623953 (DOI)001148034200071 ()2-s2.0-85180130385 (Scopus ID)
Conference
11th Conference on Human-Agent Interaction, HAI 2023, Gothenburg, Sweden, Dec 4 2023 - Dec 11 2023
Note

Part of ISBN 9798400708244

QC 20231229

Available from: 2023-12-29. Created: 2023-12-29. Last updated: 2024-03-05. Bibliographically approved
Rafi, A. K., Murdeshwar, A., Latupeirissa, A. B. & Bresin, R. (2023). Investigating the Role of Robot Voices and Sounds in Shaping Perceived Intentions. In: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction. Paper presented at 11th Conference on Human-Agent Interaction, HAI 2023, Gothenburg, Sweden, Dec 4 2023 - Dec 11 2023 (pp. 425-427). Association for Computing Machinery (ACM)
Investigating the Role of Robot Voices and Sounds in Shaping Perceived Intentions
2023 (English). In: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, Association for Computing Machinery (ACM), 2023, p. 425-427. Conference paper, Published paper (Refereed)
Abstract [en]

This study explores if, and how, the choices made regarding a robot's speaking voice and characteristic body sounds influence viewers' perceptions of its intent, i.e., whether the robot's intention is positive or negative. The analysis focuses on robot representations and sounds in three films: "Robots" (2005) [1], "NextGen" (2018) [2], and "Love, Death, and Robots - Three Robots" (2019) [3]. In eight qualitative interviews, five parameters (tonality, intonation, volume, pitch, and speed) were used to understand robot sounds and participants' perceptions of a robot's attitude and intentions. The study culminates in a set of recommendations for human-robot interaction designers to consider when sound coding for body, physiology, and movement.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Human Perception, Movies, Qualitative Study, Robot sounds, Sound Design
National Category
Robotics and automation; Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-341677 (URN)10.1145/3623809.3623949 (DOI)001148034200068 ()2-s2.0-85180124967 (Scopus ID)
Conference
11th Conference on Human-Agent Interaction, HAI 2023, Gothenburg, Sweden, Dec 4 2023 - Dec 11 2023
Note

Part of ISBN 9798400708244

QC 20231229

Available from: 2023-12-29. Created: 2023-12-29. Last updated: 2025-02-05. Bibliographically approved
Zojaji, S., Latupeirissa, A. B., Leite, I., Bresin, R. & Peters, C. (2023). Persuasive polite robots in free-standing conversational groups. In: Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023). Paper presented at 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) (pp. 1-8). Institute of Electrical and Electronics Engineers (IEEE)
Persuasive polite robots in free-standing conversational groups
2023 (English). In: Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1-8. Conference paper, Published paper (Refereed)
Abstract [en]

Politeness is at the core of the common set of behavioral norms that regulate human communication and is therefore of significant interest in the design of human-robot interactions. In this paper, we investigate how the politeness behaviors of a humanoid robot impact human decisions about where to join a group of two robots, and we evaluate the resulting impact on the perception of the robot's politeness. In a study involving 59 participants, the main (Pepper) robot in the group invited participants to join using six politeness behaviors derived from Brown and Levinson's politeness theory. It requested that participants join at the furthest side of the group, which requires more effort to reach than a closer side that is also available but would ignore the robot's request. We evaluated the robot's effectiveness in terms of persuasiveness, politeness, and clarity. We found that more direct and explicit politeness strategies derived from the theory were more successful in persuading participants to join at the furthest side of the group. We also evaluated participants' adherence to social norms, i.e., not walking through the center, or o-space, of the group when joining it. Our results showed that participants tended to adhere to social norms when joining at the furthest side by not walking through the center of the group of robots, even though they were informed that the robots were fully automated.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Social robotics, Politeness, Persuasiveness, Social norms, Human-Robot interaction, free-standing conversational groups
National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-338180 (URN)10.1109/IROS55552.2023.10341830 (DOI)001133658803003 ()2-s2.0-85182524342 (Scopus ID)
Conference
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)
Note

Part of proceedings ISBN 978-1-6654-9190-7

QC 20231016

Available from: 2023-10-16. Created: 2023-10-16. Last updated: 2024-03-04. Bibliographically approved
Latupeirissa, A. B., Panariello, C. & Bresin, R. (2023). Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification. ACM Transactions on Human-Robot Interaction, 12(4), Article ID 52.
Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification
2023 (English). In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no 4, article id 52. Article in journal (Refereed). Published
Abstract [en]

This paper presents three studies where we probe aesthetics strategies of sound produced by movement sonification of a Pepper robot by mapping its movements to sound models.

We developed two sets of sound models. The first set consisted of two sound models, one sawtooth-based and one based on feedback chains, for investigating how the perception of synthesized robot sounds depends on their design complexity. We implemented the second set of sound models to probe the “materiality” of sound made by a robot in motion. This set consisted of an engine-like sound synthesis highlighting the robot’s internal mechanisms, a metallic sound synthesis highlighting the robot’s typical appearance, and a whoosh sound synthesis highlighting the movement.

We conducted three studies. The first study explores how the first set of sound models can influence the perception of expressive gestures of a Pepper robot through an online survey. In the second study, we carried out an experiment in a museum installation with a Pepper robot presented in two scenarios: (1) while welcoming patrons into a restaurant and (2) while providing information to visitors in a shopping center. Finally, in the third study, we conducted an online survey with stimuli similar to those used in the second study.

Our findings suggest that participants preferred more complex sound models for the sonification of robot movements. Concerning materiality, participants preferred subtle sounds that blend well with the ambient sound (i.e., are less distracting) and soundscapes in which sound sources can be identified. Sound preferences also varied depending on the context in which participants experienced the robot-generated sounds (e.g., a live museum installation vs. an online display).
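As a loose illustration of what mapping robot movements to a simple sound model can look like, the sketch below drives a plain sawtooth oscillator from a joint-speed envelope: faster movement raises both pitch and level. The mapping, frequency range, and sampling rate are assumptions for this sketch; they are not the sawtooth, feedback-chain, or materiality models evaluated in the paper.

```python
import numpy as np
import wave

def sonify_speed(speed, fs=16000, f_lo=120.0, f_hi=480.0):
    """Map a normalised joint-speed envelope (0..1, one value per audio
    sample) to a naive sawtooth: faster movement -> higher pitch and
    louder output. Purely illustrative, not the paper's sound models."""
    freq = f_lo + (f_hi - f_lo) * speed          # speed controls pitch
    phase = np.cumsum(freq) / fs                 # integrate frequency to get phase
    saw = 2.0 * (phase % 1.0) - 1.0              # sawtooth in [-1, 1]
    return 0.2 * speed * saw                     # speed also controls amplitude

# Example: a gesture that accelerates and then decelerates over two seconds.
fs = 16000
t = np.linspace(0, 2, 2 * fs, endpoint=False)
speed = np.sin(np.pi * t / 2) ** 2               # hypothetical speed envelope
audio = sonify_speed(speed, fs)

with wave.open("gesture_sonification.wav", "wb") as f:   # write 16-bit mono WAV
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(fs)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```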

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
SONAO
National Category
Human Computer Interaction; Robotics and automation
Identifiers
urn:nbn:se:kth:diva-324962 (URN)10.1145/3585277 (DOI)001153514400008 ()2-s2.0-85170233153 (Scopus ID)
Note

QC 20260119

Available from: 2023-03-21. Created: 2023-03-21. Last updated: 2026-01-19. Bibliographically approved
Bresin, R., Frid, E., Latupeirissa, A. B. & Panariello, C. (2021). Robust Non-Verbal Expression in Humanoid Robots: New Methods for Augmenting Expressive Movements with Sound. Paper presented at Workshop on Sound in Human-Robot Interaction at HRI 2021.
Robust Non-Verbal Expression in Humanoid Robots: New Methods for Augmenting Expressive Movements with Sound
2021 (English). Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

The aim of the SONAO project is to establish new methods based on sonification of expressive movements for achieving a robust interaction between users and humanoid robots. We want to achieve this by combining competences of the research team members in the fields of social robotics, sound and music computing, affective computing, and body motion analysis. We want to engineer sound models for implementing effective mappings between stylized body movements and sound parameters that will enable an agent to express high-level body motion qualities through sound. These mappings are paramount for supporting feedback to and understanding robot body motion. The project will result in the development of new theories, guidelines, models, and tools for the sonic representation of high-level body motion qualities in interactive applications. This work is part of the growing research field known as data sonification, in which we combine methods and knowledge from the fields of interactive sonification, embodied cognition, multisensory perception, non-verbal and gestural communication in robots.
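To make the idea of mapping a high-level body motion quality to sound parameters concrete, here is a minimal sketch that estimates a movement-energy quality from a joint trajectory and maps it to a few sound parameters. The quality measure and parameter names are invented for illustration; they are not the mappings developed in the SONAO project.

```python
import numpy as np

def movement_energy(positions, fs=100.0):
    """Quantity-of-motion proxy from a hypothetical (T, 3) joint trajectory
    sampled at `fs` Hz: mean speed, squashed into the range 0..1."""
    vel = np.gradient(positions, 1.0 / fs, axis=0)        # finite-difference velocity
    mean_speed = np.mean(np.linalg.norm(vel, axis=1))
    return mean_speed / (1.0 + mean_speed)

def energy_to_sound(energy):
    """Illustrative mapping from the energy quality to sound parameters;
    the parameter names are made up for this sketch."""
    return {
        "loudness_db": -30.0 + 24.0 * energy,              # more energy -> louder
        "event_density_per_s": 1.0 + 9.0 * energy,         # -> denser sonic texture
        "register_semitones": 12.0 * energy,               # -> higher register
    }

# Example with a synthetic circular arm movement.
t = np.linspace(0, 1, 100)[:, None]
positions = np.hstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), 0 * t])
print(energy_to_sound(movement_energy(positions, fs=100.0)))
```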

National Category
Human Computer Interaction; Computer and Information Sciences
Research subject
Human-computer Interaction
Identifiers
urn:nbn:se:kth:diva-293349 (URN)
Conference
Workshop on Sound in Human-Robot Interaction at HRI 2021
Projects
SONAO
Note

QC 20211116

Available from: 2021-04-22. Created: 2021-04-22. Last updated: 2025-02-18. Bibliographically approved
Falkenberg, K., Lindetorp, H., Latupeirissa, A. B. & Frid, E. (2020). Creating digital musical instruments with and for children: Including vocal sketching as a method for engaging in codesign. Human Technology, 16(3), 348-371
Creating digital musical instruments with and for children: Including vocal sketching as a method for engaging in codesign
2020 (English). In: Human Technology, E-ISSN 1795-6889, Vol. 16, no 3, p. 348-371. Article in journal (Refereed). Published
Abstract [en]

A class of master of science students and a group of preschool children codesigned new digital musical instruments based on workshop interviews involving vocal sketching, a method for imitating and portraying sounds. The aim of the study was to explore how the students and children would approach vocal sketching as one of several design methods. The children described musical instruments to the students using vocal sketching and other modalities (verbal, drawing, gestures). The resulting instruments built by the students were showcased at the Swedish Museum of Performing Arts in Stockholm. Although all the children tried vocal sketching during preparatory tasks, few employed the method during the workshop. However, the instruments seemed to meet the children’s expectations. Consequently, even though the vocal sketching method alone provided few design directives in the given context, we suggest that vocal sketching, under favorable circumstances, can be an engaging component that complements other modalities in codesign involving children.

Keywords
Vocal sketching, digital musical instruments, codesign, children, performance, prototype building
National Category
Human Computer Interaction
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-279856 (URN)10.17011/ht/urn.202011256768 (DOI)2-s2.0-85100025862 (Scopus ID)
Note

QC 20200922

Available from: 2020-08-31. Created: 2020-08-31. Last updated: 2023-07-25. Bibliographically approved
Latupeirissa, A. B., Panariello, C. & Bresin, R. (2020). Exploring emotion perception in sonic HRI. In: 17th Sound and Music Computing Conference. Paper presented at Sound and Music Computing Conference, Torino, 24-26 June 2020 (pp. 434-441). Torino: Zenodo
Exploring emotion perception in sonic HRI
2020 (English). In: 17th Sound and Music Computing Conference, Torino: Zenodo, 2020, p. 434-441. Conference paper, Published paper (Refereed)
Abstract [en]

Despite the fact that sounds produced by robots can affect the interaction with humans, sound design is often an overlooked aspect in Human-Robot Interaction (HRI). This paper explores how different sets of sounds designed for expressive gestures of a humanoid Pepper robot can influence the perception of emotional intentions. In the pilot study presented in this paper, participants were asked to rate different stimuli in terms of perceived affective states. The stimuli were audio-only, audio-video, and video-only, and contained either Pepper’s original servomotor noise, sawtooth sounds, or more complex designed sounds. The preliminary results show a preference for more complex sounds, confirming the need for further exploration of sonic HRI.
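For a concrete picture of how such pilot ratings could be summarised per sound-design condition, here is a minimal sketch on invented data; the column names, rating scale, and values are assumptions, not the study's actual data or analysis.

```python
import pandas as pd

# Hypothetical long-format pilot data: one row per participant x stimulus,
# with the sound condition and a 1-7 rating of a perceived affective dimension.
ratings = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "sound":       ["servo", "sawtooth", "complex"] * 3,
    "valence":     [3, 4, 5, 2, 4, 6, 3, 5, 6],
})

# Mean and spread of the perceived rating per sound-design condition.
summary = (ratings.groupby("sound")["valence"]
                  .agg(["mean", "std", "count"])
                  .sort_values("mean", ascending=False))
print(summary)
```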

Place, publisher, year, edition, pages
Torino: Zenodo, 2020
National Category
Computer and Information Sciences; Human Computer Interaction; Computer graphics and computer vision; Other Computer and Information Science
Research subject
Media Technology; Art, Technology and Design; Human-computer Interaction
Identifiers
urn:nbn:se:kth:diva-277947 (URN)10.5281/ZENODO.3898928 (DOI)2-s2.0-85101259342 (Scopus ID)
Conference
Sound and Music Computing Conference, Torino, 24-26 June 2020
Projects
SONAO
Funder
Swedish Research Council, 2017-03979
Note

QC 20200722

Available from: 2020-07-02. Created: 2020-07-02. Last updated: 2025-02-18. Bibliographically approved