Matviienko, Andrii, Assistant Professor. ORCID: orcid.org/0000-0002-6571-0623
Publications (10 of 85)
Al-Taie, A., Matviienko, A., O'Hagan, J., Pollick, F. & Brewster, S. A. (2025). Around the World in 60 Cyclists: Evaluating Autonomous Vehicle-Cyclist Interfaces Across Cultures. In: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, CHI 2025. Paper presented at 2025 Conference on Human Factors in Computing Systems-CHI, APR 26-MAY 01, 2025, Yokohama, JAPAN. Association for Computing Machinery (ACM), Article ID 217.
2025 (English). In: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, CHI 2025, Association for Computing Machinery (ACM), 2025, article id 217. Conference paper, published (peer-reviewed).
Abstract [en]

Cultural differences influence how cyclists and drivers interact, affecting global autonomous vehicle (AV) adoption. AV-cyclist interfaces are needed to clarify AV intentions and resolve ambiguities when no human driver is present. These must adapt across cultures and road infrastructure. We conducted the first cross-cultural AV-cyclist user study across Stockholm (high segregation of cyclists from drivers), Glasgow (some segregation), and Muscat (no segregation). Cyclists used an AR simulator to cycle in physical space and experienced three holistic AV-cyclist interfaces. These integrated multiple interfaces into a larger ecosystem, e.g., a smartwatch synchronised with on-vehicle eHMI. Interfaces communicated AV location, intentions, or both. Riders from all cities preferred combined AV location and intention information but used it differently. Stockholm cyclists focused on location, validating intentions with driving behaviour. Glasgow riders valued both cues equally. Muscat cyclists trusted interfaces, prioritising intentions without relying on driving behaviour. These insights are key for global AV adoption.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
Keywords
Autonomous Vehicle-Cyclist Interaction, Cross-Cultural Study, Augmented Reality
National subject category
Human-Computer Interaction (Interaction Design)
Identifiers
urn:nbn:se:kth:diva-372725 (URN), 10.1145/3706598.3713407 (DOI), 001496957100324 (ISI), 2-s2.0-105005753595 (Scopus ID), 979-8-4007-1394-1 (ISBN)
Conference
2025 Conference on Human Factors in Computing Systems-CHI, APR 26-MAY 01, 2025, Yokohama, JAPAN
Note

QC 20251126

Available from: 2025-11-26. Created: 2025-11-26. Last updated: 2025-11-26. Bibliographically reviewed.
Hedlund, M., Müller, F., Schmitz, M., Bogdan, C. M., Rey, R., Ghavamian, P., . . . Matviienko, A. (2025). BroomBroom! Evaluation of Leaning and Controller-based Locomotion for Flying in Virtual Reality. Paper presented at VRST '25.
2025 (English). Manuscript (preprint) (Other academic).
Abstract [en]

Virtual Reality (VR) locomotion methods are mainly ground-based, room-scale, or discrete, making them ill-suited for flying experiences. Although leaning- and controller-based techniques are promising for flying in VR, we lack empirical evidence of their advantages. We compared combinations of leaning- and controller-based methods for steering and velocity in a user study (N = 24) using a broom metaphor to integrate these methods into an understandable locomotion reference. The steering methods were: 1) controller-pointing (CP) and 2) headset-leaning (HL); and for velocity control: 1) controller linear displacement (CLD) and 2) headset linear displacement (HLD). Results indicate that HL increases presence compared to CP. However, combining HL with CLD worsens coin collection rate, completion time, mental load, control factor ratings, and enjoyment. In contrast, HLD worked well when paired with either steering method. CP-CLD led to the highest coin collection rate and lowest mental load. All methods had comparable feelings of flying.

Keywords
Locomotion, Leaning, Controller, Embodied, Flying, Virtual Reality, Broom
National subject category
Computer and Information Sciences
Research subject
Human-Computer Interaction
Identifiers
urn:nbn:se:kth:diva-371591 (URN)
Conference
VRST '25
Note

Will be published as DOI 10.1145/3756884.3766017 in the 31st ACM Symposium on Virtual Reality Software and Technology (VRST '25), Nov 12-14, 2025, Montreal, QC, Canada.

QC 20251014

Available from: 2025-10-14. Created: 2025-10-14. Last updated: 2025-10-14. Bibliographically reviewed.
Chhatre, K., Guarese, R., Matviienko, A. & Peters, C. (2025). Evaluating Speech and Video Models for Face-Body Congruence. In: I3D Companion '25: Companion Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games: . Paper presented at ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games-I3D 2025, NJIT, Jersey City, NJ, USA, 7-9 May 2025. Association for Computing Machinery (ACM)
2025 (English). In: I3D Companion '25: Companion Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, Association for Computing Machinery (ACM), 2025. Conference paper, poster (with or without abstract) (peer-reviewed).
Abstract [en]

Animations produced by generative models are often evaluated using objective quantitative metrics that do not fully capture perceptual effects in immersive virtual environments. To address this gap, we present a preliminary perceptual evaluation of generative models for animation synthesis, conducted via a VR-based user study (N = 48). Our investigation specifically focuses on animation congruency—ensuring that generated facial expressions and body gestures are both congruent with and synchronized to driving speech. We evaluated two state-of-the-art methods: a speech-driven full-body animation model and a video-driven full-body reconstruction model, assessing their capability to produce congruent facial expressions and body gestures. Our results demonstrate a strong user preference for combined facial and body animations, highlighting that congruent multimodal animations significantly enhance perceived realism compared to animations featuring only a single modality. By incorporating VR-based perceptual feedback into training pipelines, our approach provides a foundation for developing more engaging and responsive virtual characters.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
Keywords
Computer graphics, Animation
National subject category
Computer Graphics and Computer Vision
Identifiers
urn:nbn:se:kth:diva-363248 (URN), 10.1145/3722564.3728374 (DOI), 001502592200005 (ISI)
Conference
ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games-I3D 2025, NJIT, Jersey City, NJ, USA, 7-9 May 2025
Note

Part of ISBN 9798400718335

QC 20250509

Available from: 2025-05-09. Created: 2025-05-09. Last updated: 2025-08-15. Bibliographically reviewed.
Chhatre, K., Guarese, R., Matviienko, A. & Peters, C. (2025). Evaluation of generative models for emotional 3D animation generation in VR. Frontiers in Computer Science, 7, Article ID 1598099.
2025 (English). In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 7, article id 1598099. Journal article (peer-reviewed), published.
Abstract [en]

Introduction: Social interactions incorporate various nonverbal signals to convey emotions alongside speech, including facial expressions and body gestures. Generative models have demonstrated promising results in creating full-body nonverbal animations synchronized with speech; however, evaluations using statistical metrics in 2D settings fail to fully capture user-perceived emotions, limiting our understanding of the effectiveness of these models. Methods: To address this, we evaluate emotional 3D animation generative models within an immersive Virtual Reality (VR) environment, emphasizing user-centric metrics (emotional arousal, realism, naturalness, enjoyment, diversity, and interaction quality) in a real-time human-agent interaction scenario. Through a user study (N = 48), we systematically examine perceived emotional quality for three state-of-the-art speech-driven 3D animation methods across two specific emotions: happiness (high arousal) and neutral (mid arousal). Additionally, we compare these generative models against real human expressions obtained via a reconstruction-based method to assess both their strengths and limitations and how closely they replicate real human facial and body expressions. Results: Our results demonstrate that methods explicitly modeling emotions lead to higher recognition accuracy compared to those focusing solely on speech-driven synchrony. Users rated the realism and naturalness of happy animations significantly higher than those of neutral animations, highlighting the limitations of current generative models in handling subtle emotional states. Discussion: Generative models underperformed compared to reconstruction-based methods in facial expression quality, and all methods received relatively low ratings for animation enjoyment and interaction quality, emphasizing the importance of incorporating user-centric evaluations into generative model development. Finally, participants positively recognized animation diversity across all generative models.

Place, publisher, year, edition, pages
Frontiers Media SA, 2025
Keywords
3D emotional animation, generative models, nonverbal communication, user-centric evaluation, virtual reality
National subject category
Human-Computer Interaction (Interaction Design); Computer Sciences
Identifiers
urn:nbn:se:kth:diva-369923 (URN), 10.3389/fcomp.2025.1598099 (DOI), 001549678200001 (ISI), 2-s2.0-105013367950 (Scopus ID)
Note

QC 20250918

Available from: 2025-09-18. Created: 2025-09-18. Last updated: 2025-09-18. Bibliographically reviewed.
Wang, H. & Matviienko, A. (2025). Experiencing Art Museum with a Generative Artificial Intelligence Chatbot. In: IMX 2025 - Proceedings of the 2025 ACM International Conference on Interactive Media Experiences: . Paper presented at 2025 ACM International Conference on Interactive Media Experiences, IMX 2025, Niteroi, Brazil, Jun 3 2025 - Jun 6 2025 (pp. 430-436). Association for Computing Machinery (ACM)
2025 (English). In: IMX 2025 - Proceedings of the 2025 ACM International Conference on Interactive Media Experiences, Association for Computing Machinery (ACM), 2025, pp. 430-436. Conference paper, published (peer-reviewed).
Abstract [en]

Generative Artificial Intelligence (GenAI) chatbots are starting to change how museum visitors experience art by making it more interactive and engaging. However, it remains underexplored how GenAI chatbots influence visitors' in-field experience and interaction at art museums regarding finding information, engagement, and enjoyment compared to existing museum tour-guide applications. In this paper, we contribute the design and implementation of a smartphone-based chatbot that detects artwork, generates textual and auditory information, and interactively answers visitors' questions. To explore visitors' experience with it, we conducted a field experiment (N=30) at the National Art Museum, comparing it to the existing museum application. Our results indicate that visitors showed higher artwork engagement with the chatbot than with the museum application. Moreover, they enjoyed the interactive experience of using the chatbot to learn about the art collection and equally preferred textual and auditory information representation.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
Keywords
Chatbot, Generative AI, Museum experience, Tour guide
National subject category
Human-Computer Interaction (Interaction Design); Information Systems, Social Aspects
Identifiers
urn:nbn:se:kth:diva-368516 (URN), 10.1145/3706370.3731650 (DOI), 001527547700047 (ISI), 2-s2.0-105007992419 (Scopus ID)
Conference
2025 ACM International Conference on Interactive Media Experiences, IMX 2025, Niteroi, Brazil, Jun 3 2025 - Jun 6 2025
Note

 Part of ISBN 9798400713910

QC 20250818

Available from: 2025-08-18. Created: 2025-08-18. Last updated: 2025-08-18. Bibliographically reviewed.
Zojaji, S., Schiött, J., Ivegren, W., Matviienko, A. & Peters, C. (2025). Influence of Floor Type on Social Navigation with Small Free-Standing Groups in Virtual Reality. In: Virtual, Augmented and Mixed Reality - 17th International Conference, VAMR 2025, Held as Part of the 27th HCI International Conference, HCII 2025, Proceedings: . Paper presented at 17th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2025, held as part of the 27th HCI International Conference, HCII 2025, Gothenburg, Sweden, Jun 22 2025 - Jun 27 2025 (pp. 280-298). Springer Nature
2025 (English). In: Virtual, Augmented and Mixed Reality - 17th International Conference, VAMR 2025, Held as Part of the 27th HCI International Conference, HCII 2025, Proceedings, Springer Nature, 2025, pp. 280-298. Conference paper, published (peer-reviewed).
Abstract [en]

Human footsteps play a significant role in everyday life, allowing individuals to discern the emotions, gender, and intentions of others solely from the sound of their footsteps. However, the influence of footstep sounds made when walking on different floor types in virtual reality (VR) environments when joining conversational groups remains unclear. In this paper, we present a controlled study (N=50) to assess the impact of five different floor types, associated with specific footstep sounds and visuals, on the persuasiveness of Embodied Conversational Agents (ECAs) when inviting participants to join a free-standing conversational group. We analyze the routes taken by participants and the positions at which they join the group, which may or may not comply with the agent's request, as they approach the group while walking on different virtual floor types. Our findings reveal that the type of floor being walked upon, defined by footstep sounds and visual appearance, significantly impacts the persuasiveness of ECAs and the trajectories taken by participants to join the group. Participants took longer paths and joined the group in the presence of more pleasant footstep sounds. Further, they tended to adhere to social norms by avoiding walking through the group's center.

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
floor type, joining behavior, small free-standing groups, sound, virtual reality
National subject category
Human-Computer Interaction (Interaction Design); Computer Sciences
Identifiers
urn:nbn:se:kth:diva-368519 (URN), 10.1007/978-3-031-93712-5_17 (DOI), 001544399900015 (ISI), 2-s2.0-105008003094 (Scopus ID)
Conference
17th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2025, held as part of the 27th HCI International Conference, HCII 2025, Gothenburg, Sweden, Jun 22 2025 - Jun 27 2025
Note

Part of ISBN 9783031937118

QC 20250818

Available from: 2025-08-18. Created: 2025-08-18. Last updated: 2025-12-08. Bibliographically reviewed.
Zhang, Y., Rajabi, N., Taleb, F., Matviienko, A., Ma, Y., Björkman, M. & Kragic, D. (2025). Mind Meets Robots: A Review of EEG-Based Brain-Robot Interaction Systems. International Journal of Human-Computer Interaction, 1-32
2025 (English). In: International Journal of Human-Computer Interaction, ISSN 1044-7318, E-ISSN 1532-7590, pp. 1-32. Journal article (peer-reviewed), published.
Abstract [en]

Brain-robot interaction (BRI) empowers individuals to control (semi-)automated machines through brain activity, either passively or actively. In the past decade, BRI systems have advanced significantly, primarily leveraging electroencephalogram (EEG) signals. This article presents an up-to-date review of 87 curated studies published between 2018 and 2023, identifying the research landscape of EEG-based BRI systems. The review consolidates methodologies, interaction modes, application contexts, system evaluation, existing challenges, and future directions in this domain. Based on our analysis, we propose a BRI system model comprising three entities: Brain, Robot, and Interaction, depicting their internal relationships. We especially examine interaction modes between human brains and robots, an aspect not yet fully explored. Within this model, we scrutinize and classify current research, extract insights, highlight challenges, and offer recommendations for future studies. Our findings provide a structured design space for human-robot interaction (HRI), informing the development of more efficient BRI frameworks.

Place, publisher, year, edition, pages
Informa UK Limited, 2025
Keywords
EEG based, brain-robot interaction, interaction mode, comprehensive review
National subject category
Vehicle and Aerospace Engineering
Identifiers
urn:nbn:se:kth:diva-361866 (URN), 10.1080/10447318.2025.2464915 (DOI), 001446721000001 (ISI), 2-s2.0-105000309480 (Scopus ID)
Note

QC 20250402

Available from: 2025-04-02. Created: 2025-04-02. Last updated: 2025-04-02. Bibliographically reviewed.
Kassem, K., Gietl, P., Michahelles, F. & Matviienko, A. (2025). RoboTeach: How Student Robots' Preexisting Proficiency and Learning Rate Affect Human Teachers Demonstrating Object Placement. In: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, CHI 2025. Paper presented at 2025 Conference on Human Factors in Computing Systems-CHI, APR 26-MAY 01, 2025, Yokohama, JAPAN. Association for Computing Machinery (ACM), Article ID 909.
2025 (English). In: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, CHI 2025, Association for Computing Machinery (ACM), 2025, article id 909. Conference paper, published (peer-reviewed).
Abstract [en]

Social robots are employed as companions, helping in industrial and domestic environments. Adapting robots' capabilities to user needs can be achieved through teaching from human demonstrations. However, the influence of robots' preexisting proficiency and learning rate on human teachers' self-efficacy and perception of the robots is underexplored. In this paper, we simulated four robot performance types that combine: (1) preexisting proficiency (low/high) and (2) learning rate (slow/fast). We conducted a controlled lab experiment studying the impact of robots' performance type on teachers' self-efficacy, willingness to teach the robot, and perception of the robot (N=24), in which robots placed objects in suitable locations. Fast learners were perceived as more intelligent, anthropomorphic, and likable, and this caused higher teaching self-efficacy regardless of preexisting skills. Slow learners caused frustration while teaching. Moreover, participants stopped teaching robots with low preexisting skills sooner, regardless of the learning rate, indicating potential bias caused by expectations.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
Keywords
teaching robots, object placement, learning rate, existing proficiency, self-efficacy, robot perception
National subject category
Robotics and Automation
Identifiers
urn:nbn:se:kth:diva-372720 (URN), 10.1145/3706598.3713113 (DOI), 001496957100031 (ISI), 2-s2.0-105005735434 (Scopus ID), 979-8-4007-1394-1 (ISBN)
Conference
2025 Conference on Human Factors in Computing Systems-CHI, APR 26-MAY 01, 2025, Yokohama, JAPAN
Note

QC 20251127

Available from: 2025-11-27. Created: 2025-11-27. Last updated: 2025-11-27. Bibliographically reviewed.
Ippoliti, H. S., Colley, M., Dey, D., Wintersberger, P., Sadeghian, S., Löcken, A., . . . Boll, S. (2025). SPAT: Situational Prosocial and Aggressive Behavior Perception in Traffic Scale. In: Main Conference Proceedings - 17th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2025: . Paper presented at 17th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2025, Brisbane, Australia, September 21-25, 2025 (pp. 37-54). Association for Computing Machinery (ACM)
2025 (English). In: Main Conference Proceedings - 17th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2025, Association for Computing Machinery (ACM), 2025, pp. 37-54. Conference paper, published (peer-reviewed).
Abstract [en]

Automated vehicles (AVs) have reached technological maturity and will soon arrive on streets as traffic participants. Human traffic participants such as drivers, pedestrians, or cyclists will be increasingly confronted with the presence of AVs within their environment, not necessarily knowing or understanding what to expect and how to interact with them. Although AVs are designed to act safely, effective interaction in mixed traffic scenarios will depend on successful communication, interaction, or even negotiation beyond static rules and regulations. Prosocial behavior, such as yielding one's right of way, will be needed to resolve unclear traffic situations or foster traffic flow. However, what characterizes such prosocial behavior, and how can it be measured, not only for automated vehicles but for all road users? Here, we describe a new scale to measure perceived social behavior in urban traffic scenarios. Through an online survey of N = 318 individuals and a validation study, we developed the Situational Prosocial and Aggressive Behavior in Traffic Scale and assessed it psychometrically.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
Keywords
aggressive behavior, Automated vehicles, measurement methods for automated traffic, measurement of social behavior, prosocial behavior, social behavior in traffic, social perception in traffic, social perception of AVs
National subject category
Transport Systems and Logistics; Applied Psychology
Identifiers
urn:nbn:se:kth:diva-373291 (URN), 10.1145/3744333.3747812 (DOI), 2-s2.0-105021397561 (Scopus ID)
Conference
17th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2025, Brisbane, Australia, September 21-25, 2025
Note

Part of ISBN 9798400720130

QC 20251127

Available from: 2025-11-27. Created: 2025-11-27. Last updated: 2025-11-27. Bibliographically reviewed.
Montoya, M., van Rheden, V., Josh, A., Smith, I., Elvitigala, D. S., Matviienko, A., . . . Zambetta, F. (2025). Surfing the Opportunities for Water Sustainability when Designing Outdoor Water Sports Experiences. In: Nuno Nunes, Valentina Nisi, Ian Oakley, Clement Zheng, Qian Yang (Ed.), Companion Publication of the 2025 ACM Designing Interactive Systems Conference, DIS 2025: . Paper presented at ACM Designing Interactive Systems Conference, DIS 2025, Funchal, Madeira, Portugal, July 5-9, 2025. New York, USA: Association for Computing Machinery (ACM)
2025 (English). In: Companion Publication of the 2025 ACM Designing Interactive Systems Conference, DIS 2025 / [ed] Nuno Nunes, Valentina Nisi, Ian Oakley, Clement Zheng, Qian Yang. New York, USA: Association for Computing Machinery (ACM), 2025. Conference paper, oral presentation with published abstract (Other academic).
Abstract [en]

Oceans, lakes, and rivers, dynamic and vital ecosystems, face increasing threats from climate change. To ensure their sustainability, there is an urgent need for technologies that promote responsible and sustainable human-water interactions. Water sports engagement fosters mental and physical health benefits, as well as environmental care when responsible practices are encouraged. Although prior work has investigated how interactive technology can support sports practice to make it sustainable, water sports are less explored due to the unique technical challenges they pose. Hence, there is an opportunity for human-computer interaction (HCI) to explore how interactive technology can be adapted to the dynamic, unpredictable nature of outdoor water sports to foster water conservation and ocean sustainability. We argue that by exploring the design of interactive water sports experiences through a soma design lens, we will better understand the intricate synergy between our bodies and the felt and lived body of water, hence supporting meaningful body-water interactions. We aim to engage researchers in exploring the potential of soma design in the context of water and water sports, guided by preliminary posthumanist water frameworks. The workshop outcomes include a design framework supporting engagement in outdoor water sports to foster sustainability through soma design. Insights from the workshop will be documented in a future academic publication to advance the WaterHCI field.

Place, publisher, year, edition, pages
New York, USA: Association for Computing Machinery (ACM), 2025
Keywords
Water sports, Ocean sustainability, Soma design, Interactive technology
National subject category
Human-Computer Interaction (Interaction Design)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-369245 (URN), 10.1145/3715668.3734165 (DOI), 001539407400017 (ISI), 2-s2.0-105012207323 (Scopus ID)
Conference
ACM Designing Interactive Systems Conference, DIS 2025, Funchal, Madeira, Portugal, July 5-9, 2025
Note

Part of ISBN 979-8-4007-1486-3

QC 20250911

Available from: 2025-09-01. Created: 2025-09-01. Last updated: 2025-12-08. Bibliographically reviewed.