Publications (10 of 15)
Stower, R., Gautier, A., Wozniak, M. K., Jensfelt, P., Tumova, J. & Leite, I. (2025). Take a Chance on Me: How Robot Performance and Risk Behaviour Affects Trust and Risk-Taking. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction: . Paper presented at 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4 2025 - Mar 6 2025 (pp. 391-399). Institute of Electrical and Electronics Engineers (IEEE)
2025 (English) In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2025, pp. 391-399. Conference paper, Published paper (Refereed)
Abstract [en]

Real-world human-robot interactions often encompass uncertainty. This uncertainty can be handled in different ways, for example by designing robot planners to be more or less risk-tolerant. However, how users actually perceive different risk-taking behaviours in robots has yet to be described. Additionally, in the absence of guarantees on optimal robot performance, the interaction between risk and performance on user perceptions is also unclear. To address this gap, we conducted a user study with 84 participants investigating how robot performance and risk behaviour affects users' trust and risk-taking decisions. Participants collaborated with a Franka robot arm to perform a block-stacking task. We compared a robot which displays consistent but sub-optimal behaviours to a robot displaying risky but occasionally optimal behaviour. Risky robot behaviour led to higher trust than consistent behaviour when the robot was on average good at stacking blocks (high expectation), but lower trust when the robot was on average bad at stacking blocks (low expectation). Individual risk-willingness also predicted likelihood of selecting the risky robot over the consistent robot for future interactions, but only when the average expectation was low. These findings have implications for risk-aware planning and decision-making in mixed human-robot systems.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
collaborative robot, failure, risk-taking, trust, user study
HSV category
Identifiers
urn:nbn:se:kth:diva-363768 (URN), 10.1109/HRI61500.2025.10973966 (DOI), 2-s2.0-105004879443 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4 2025 - Mar 6 2025
Note

Part of ISBN 9798350378931

QC 20250527

Available from: 2025-05-21 Created: 2025-05-21 Last updated: 2025-05-27 Bibliographically approved
Rahimzadagan, N., Vahs, M., Leite, I. & Stower, R. (2024). Drone Fail Me Now: How Drone Failures Affect Trust and Risk-Taking Decisions. In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction: . Paper presented at 19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024 (pp. 862-866). Association for Computing Machinery (ACM)
2024 (English) In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2024, pp. 862-866. Conference paper, Published paper (Refereed)
Abstract [en]

So far, research on drone failures has been mostly limited to understanding the technical causes of failures and recovery strategies. In contrast, there is little work looking at how failures of drones are perceived by users. To address this gap, we conduct a real-world study where participants experience drone failures leading to monetary loss whilst navigating a drone over an obstacle course. We tested 46 participants where they experienced both a failure and failure-free (control) interaction. Participants' trust in the drone, their enjoyment of the interaction, perceived control, and future use intentions were all negatively impacted by drone failures. However, risk-taking decisions during the interaction were not affected. These findings suggest that experiencing a failure whilst operating a drone in real-time is detrimental to participants' subjective experience of the interaction.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
Drone, Failure, Human-Drone Interaction, Trust, Risk-Taking, UAV
HSV category
Identifiers
urn:nbn:se:kth:diva-344808 (URN), 10.1145/3610978.3640609 (DOI), 001255070800183 (), 2-s2.0-85188131674 (Scopus ID)
Conference
19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024
Note

QC 20240402

Part of ISBN 9798400703232

Available from: 2024-03-28 Created: 2024-03-28 Last updated: 2024-09-03 Bibliographically approved
Spitale, M., Stower, R., Parreira, M. T., Yadollahi, E., Leite, I. & Gunes, H. (2024). HRI Wasn't Built In a Day: A Call To Action For Responsible HRI Research. In: 2024 33RD IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, ROMAN 2024: . Paper presented at 33rd IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN) - Embracing Human-Centered HRI, AUG 26-30, 2024, Pasadena, CA (pp. 696-702). Institute of Electrical and Electronics Engineers (IEEE)
2024 (English) In: 2024 33RD IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, ROMAN 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, pp. 696-702. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, the awareness of the academy around responsible research has notably increased. For instance, with advances in machine learning and artificial intelligence, recent efforts have been made to promote ethical, fair, and inclusive AI and robotics. To better understand if and to what extent HRI is incentivizing researchers to engage in responsible research, we conducted an exploratory review of the publishing guidelines for the most popular HRI conference venues. We identified 18 conferences which published at least 7 HRI papers in 2022. From these, we discuss four themes relevant to conducting responsible HRI research in line with the Responsible Research and Innovation framework: ethical and human participant considerations, transparency and reproducibility, accessibility and inclusion, and plagiarism and LLM use. We identify several gaps and room for improvement within HRI regarding responsible research. Finally, we establish a call to action to provoke conversations among HRI researchers about the importance of conducting responsible research within emerging fields like HRI.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Series
IEEE RO-MAN, ISSN 1944-9445
HSV category
Identifiers
urn:nbn:se:kth:diva-358755 (URN), 10.1109/RO-MAN60168.2024.10731210 (DOI), 001348918600092 (), 2-s2.0-85209780900 (Scopus ID)
Conference
33rd IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN) - Embracing Human-Centered HRI, AUG 26-30, 2024, Pasadena, CA
Note

Part of ISBN 979-8-3503-7503-9; 979-8-3503-7502-2

QC 20250121

Available from: 2025-01-21 Created: 2025-01-21 Last updated: 2025-02-25 Bibliographically approved
Civit, A., Stower, R., Leite, I., Andriella, A. & Alenyà, G. (2024). Robots as Mediators to Resolve Multi-User Preference Conflicts - Extended Abstract. In: Proceedings of ALTRUIST Workshop on SociAL RoboTs for PeRsonalized, ContinUous and AdaptIve ASsisTance, BAILAR Workshop on Behavior Adaptation and Learning for Assistive Robotics, SCRITA Workshop on Trust, Acceptance and Social Cues in Human-Robot Interaction, and WARN Workshop on Weighing the Benefits of Autonomous Robot PersoNalisation, ALTRUIST-BAILAR-SCRITA-WARN 2024 - co-located with Ro-MAN 2024: . Paper presented at 2024 ALTRUIST Workshop on SociAL RoboTs for PeRsonalized, ContinUous and AdaptIve ASsisTance, BAILAR Workshop on Behavior Adaptation and Learning for Assistive Robotics, SCRITA Workshop on Trust, Acceptance and Social Cues in Human-Robot Interaction, and WARN Workshop on Weighing the Benefits of Autonomous Robot PersoNalisation, ALTRUIST-BAILAR-SCRITA-WARN 2024, Pasadena, United States of America, Aug 26 2024. CEUR-WS
2024 (English) In: Proceedings of ALTRUIST Workshop on SociAL RoboTs for PeRsonalized, ContinUous and AdaptIve ASsisTance, BAILAR Workshop on Behavior Adaptation and Learning for Assistive Robotics, SCRITA Workshop on Trust, Acceptance and Social Cues in Human-Robot Interaction, and WARN Workshop on Weighing the Benefits of Autonomous Robot PersoNalisation, ALTRUIST-BAILAR-SCRITA-WARN 2024 - co-located with Ro-MAN 2024, CEUR-WS, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

In real-life scenarios, robots will have to make decisions that involve multiple users. The current literature does not consider scenarios where a robot interacts with users who have conflicting preferences. To address this issue, this paper proposes using the robot as a mediator. Different possible conflict resolution actions for the robot are presented, as well as the challenges and open questions arising from this proposal.

Place, publisher, year, edition, pages
CEUR-WS, 2024
Keywords
Conflict Resolution, Multi-User Preferences, Robot Personalisation, Social Robotics
HSV category
Identifiers
urn:nbn:se:kth:diva-360169 (URN), 2-s2.0-85217165383 (Scopus ID)
Conference
2024 ALTRUIST Workshop on SociAL RoboTs for PeRsonalized, ContinUous and AdaptIve ASsisTance, BAILAR Workshop on Behavior Adaptation and Learning for Assistive Robotics, SCRITA Workshop on Trust, Acceptance and Social Cues in Human-Robot Interaction, and WARN Workshop on Weighing the Benefits of Autonomous Robot PersoNalisation, ALTRUIST-BAILAR-SCRITA-WARN 2024, Pasadena, United States of America, Aug 26 2024
Note

QC 20250221

Available from: 2025-02-19 Created: 2025-02-19 Last updated: 2025-02-21 Bibliographically approved
Stower, R., Kappas, A. & Sommer, K. (2024). When is it right for a robot to be wrong? Children trust a robot over a human in a selective trust task. Computers in Human Behavior, 157, Article ID 108229.
2024 (English) In: Computers in Human Behavior, ISSN 0747-5632, E-ISSN 1873-7692, Vol. 157, article id 108229. Article in journal (Refereed) Published
Abstract [en]

Little is known about how children perceive, trust and learn from social robots compared to humans. The goal of this study was to compare a robot and a human agent in a selective trust task across different combinations of reliability (both reliable, only human reliable, or only robot reliable). 111 children, aged 3 to 6 years, participated in an online study where they viewed videos of a human and a robot labelling both familiar and novel objects. We found that, although children preferred to endorse a novel object label from the agent who previously labelled familiar objects correctly, when both the human and the robot were reliable they were biased more towards the robot. Their social evaluations also tended much more strongly towards a general robot preference. Children's conceptualisations of the agents making a mistake also differed, such that an unreliable human was selected as doing things on purpose, but not an unreliable robot. These findings suggest that children's perceptions of a robot's reliability are separate from their evaluation of its desirability as a social interaction partner and its perceived agency. Further, they indicate that a robot making a mistake does not necessarily reduce children's desire to interact with it as a social agent.

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Human–robot-interaction, Liking, Mistakes, Social cognition, Social learning, Trust
HSV category
Identifiers
urn:nbn:se:kth:diva-366548 (URN), 10.1016/j.chb.2024.108229 (DOI), 001239074500001 (), 2-s2.0-85190256145 (Scopus ID)
Note

QC 20250708

Available from: 2025-07-08 Created: 2025-07-08 Last updated: 2025-07-08 Bibliographically approved
Morillo-Mendez, L., Stower, R., Sleat, A., Schreiter, T., Leite, I., Mozos, O. M. & Schrooten, M. G. S. (2023). Can the robot "see" what I see?: Robot gaze drives attention depending on mental state attribution. Frontiers in Psychology, 14, Article ID 1215771.
2023 (English) In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 14, article id 1215771. Article in journal (Refereed) Published
Abstract [en]

Mentalizing, where humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although the gaze from a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head to the screen at its left or right. Their task was to respond to targets that appeared either at the screen the robot gazed at or at the other screen. At the baseline, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets at the screen the robot gazed at than targets at the non-gazed screen (i.e., gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.

Place, publisher, year, edition, pages
Frontiers Media SA, 2023
Keywords
gaze following, cueing effect, attention, mentalizing, intentional stance, social robots
HSV category
Identifiers
urn:nbn:se:kth:diva-333787 (URN), 10.3389/fpsyg.2023.1215771 (DOI), 001037081700001 (), 37519379 (PubMedID), 2-s2.0-85166030431 (Scopus ID)
Note

QC 20230810

Available from: 2023-08-10 Created: 2023-08-10 Last updated: 2023-08-10 Bibliographically approved
Stower, R., Ligthart, M. E. .., Spitale, M., Calvo-Barajas, N. & De Droog, S. M. (2023). CRITTER: Child-Robot Interaction and Interdisciplinary Research. In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023 (pp. 926-928). Association for Computing Machinery (ACM)
2023 (English) In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, pp. 926-928. Conference paper, Published paper (Refereed)
Abstract [en]

Several recent works in human-robot-interaction (HRI) have begun to highlight the importance of the replication crisis and open science practices for our field. Yet, suggestions and recommendations tailored to child-robot-interaction (CRI) research, which poses its own additional set of challenges, remain limited. There is also an increased need within both HRI and CRI for inter and crossdisciplinary collaborations, where input from multiple different domains can contribute to better research outcomes. Consequently, this workshop aims to facilitate discussions between researchers from diverse disciplines within CRI. The workshop will open with a panel discussion between CRI researchers from different disciplines, followed by 3-minute flash talks of the accepted submissions. The second half of the workshop will consist of breakout group discussions, where both senior and junior academics from different disciplines can share their experiences of conducting CRI research. Through this workshop, we hope to create a common ground for addressing shared challenges in CRI, as well as identify a set of possible solutions going forward.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
child robot interaction, experimental design, interaction design, interdisciplinary science, meta-science, open science, qualitative research
HSV category
Identifiers
urn:nbn:se:kth:diva-333367 (URN), 10.1145/3568294.3579955 (DOI), 001054975700206 (), 2-s2.0-85150425710 (Scopus ID)
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023
Note

Part of ISBN 9781450399708

QC 20230801

Available from: 2023-08-01 Created: 2023-08-01 Last updated: 2025-02-05 Bibliographically approved
Rudaz, D., Tatarian, K., Stower, R. & Licoppe, C. (2023). From Inanimate Object to Agent: Impact of Pre-beginnings on the Emergence of Greetings with a Robot. ACM Transactions on Human-Robot Interaction, 12(3), Article ID 29.
2023 (English) In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no. 3, article id 29. Article in journal (Refereed) Published
Abstract [en]

The very first moments of co-presence, during which a robot appears to a participant for the first time, are often "off-the-record" in the data collected from human-robot experiments (video recordings, motion tracking, methodology sections, etc.). Yet, this "pre-beginning" phase, well documented in the case of human-human interactions, is not an interactional vacuum: It is where interactional work from participants can take place so the production of a first speaking turn (like greeting the robot) becomes relevant and expected. We base our analysis on an experiment that replicated the interaction opening delays sometimes observed in laboratory or "in-the-wild" human-robot interaction studies-where robots can require time before springing to life after they are in co-presence with a human. Using an ethnomethodological and multimodal conversation analytic methodology (EMCA), we identify which properties of the robot's behavior were oriented to by participants as creating the adequate conditions to produce a first greeting. Our findings highlight the importance of the state in which the robot originally appears to participants: as an immobile object or, instead, as an entity already involved in preexisting activity. Participants' orientations to the very first behaviors manifested by the robot during this "pre-beginning" phase produced a priori unpredictable sequential trajectories, which configured the timing and the manner in which the robot emerged as a social agent. We suggest that these first instants of co-presence are not peripheral issues with respect to human-robot experiments but should be thought about and designed as an integral part of those.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Pre-beginning, greetings, social agent, conversation analysis, ethnomethodology, robot latencies, computers are social actors, anthropomorphism
HSV category
Identifiers
urn:nbn:se:kth:diva-334324 (URN), 10.1145/3575806 (DOI), 001020331600002 (), 2-s2.0-85161692363 (Scopus ID)
Note

QC 20230818

Available from: 2023-08-18 Created: 2023-08-18 Last updated: 2025-02-07 Bibliographically approved
Wozniak, M. K., Stower, R., Jensfelt, P. & Abelho Pereira, A. T. (2023). Happily Error After: Framework Development and User Study for Correcting Robot Perception Errors in Virtual Reality. In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN: . Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA (pp. 1573-1580). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English) In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 1573-1580. Conference paper, Published paper (Refereed)
Abstract [en]

While we can see robots in more areas of our lives, they still make errors. One common cause of failure stems from the robot perception module when detecting objects. Allowing users to correct such errors can help improve the interaction and prevent the same errors in the future. Consequently, we investigate the effectiveness of a virtual reality (VR) framework for correcting perception errors of a Franka Panda robot. We conducted a user study with 56 participants who interacted with the robot using both VR and screen interfaces. Participants learned to collaborate with the robot faster in the VR interface compared to the screen interface. Additionally, participants found the VR interface more immersive, enjoyable, and expressed a preference for using it again. These findings suggest that VR interfaces may offer advantages over screen interfaces for human-robot interaction in erroneous environments.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
HSV category
Identifiers
urn:nbn:se:kth:diva-341975 (URN), 10.1109/RO-MAN57019.2023.10309446 (DOI), 001108678600198 (), 2-s2.0-85186968933 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings ISBN 979-8-3503-3670-2

QC 20240110

Available from: 2024-01-10 Created: 2024-01-10 Last updated: 2025-02-09 Bibliographically approved
Wozniak, M. K., Stower, R., Jensfelt, P. & Abelho Pereira, A. T. (2023). What You See Is (not) What You Get: A VR Framework For Correcting Robot Errors. In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023 (pp. 243-247). Association for Computing Machinery (ACM)
2023 (English) In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, pp. 243-247. Conference paper, Published paper (Refereed)
Abstract [en]

Many solutions tailored for intuitive visualization or teleoperation of virtual, augmented and mixed (VAM) reality systems are not robust to robot failures, such as the inability to detect and recognize objects in the environment or planning unsafe trajectories. In this paper, we present a novel virtual reality (VR) framework where users can (i) recognize when the robot has failed to detect a real-world object, (ii) correct the error in VR, (iii) modify proposed object trajectories and, (iv) implement behaviors on a real-world robot. Finally, we propose a user study aimed at testing the efficacy of our framework. Project materials can be found in the OSF repository.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
AR, human-robot interaction, perception, robotics, VR
HSV category
Identifiers
urn:nbn:se:kth:diva-333372 (URN), 10.1145/3568294.3580081 (DOI), 001054975700044 (), 2-s2.0-85150432457 (Scopus ID)
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023
Note

Part of ISBN 9781450399708

QC 20230801

Available from: 2023-08-01 Created: 2023-08-01 Last updated: 2025-02-05 Bibliographically approved
Organisations
Identifiers
ORCID iD: orcid.org/0000-0002-6158-4818