Towards the use of mixed reality for HRI design via virtual robots
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). ORCID iD: 0000-0002-7257-0761
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). ORCID iD: 0000-0002-3089-0345
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
2018 (English). In: HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, March 2020, 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Mixed reality, which seeks to better merge virtual objects and their interactions with the real environment, offers numerous potentials for the improved design of robots and our interactions with them. In this paper, we present our ongoing work towards the development of a mixed reality platform for designing social interactions with robots through the use of virtual robots. We present a summary of our work thus far on the use of the platform for investigating proxemics between humans and virtual robots, and also highlight future research directions. These include the consideration of more sophisticated interactions involving verbal behaviours, interaction with small formations of virtual robots, better integration of virtual objects into real environments and experiments comparing the real systems with their virtual counterparts.
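
As a rough illustration of the kind of proxemic measure such a platform could record, the sketch below classifies the distance between a tracked human and a rendered virtual robot into Hall's commonly cited proxemic zones. This is not code from the paper; the Python names, the 2D pose representation and the exact zone thresholds are assumptions made here for illustration.

import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float  # metres, in a world frame shared by the human and the virtual robot
    y: float

def proxemic_zone(human: Pose2D, robot: Pose2D) -> str:
    """Classify the human / virtual-robot distance into Hall's proxemic zones."""
    d = math.hypot(robot.x - human.x, robot.y - human.y)
    if d < 0.45:
        return "intimate"
    if d < 1.2:
        return "personal"
    if d < 3.6:
        return "social"
    return "public"

# Example: a virtual robot rendered 1.0 m in front of the user.
print(proxemic_zone(Pose2D(0.0, 0.0), Pose2D(1.0, 0.0)))  # -> "personal"

In a head-mounted display setup, the human pose would typically come from the headset's tracking and the robot pose from wherever the virtual model is placed in that shared frame.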

Place, publisher, year, edition, pages
2018.
National Category
Human Computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-287336
OAI: oai:DiVA.org:kth-287336
DiVA, id: diva2:1507485
Conference
1st International Workshop on Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI), Cambridge, UK, March 23, 2020
Note

QC 20201208

Available from: 2020-12-07 Created: 2020-12-07 Last updated: 2022-12-07 Bibliographically approved
In thesis
1. Simulating Group Interactions through Machine Learning and Human Perception
2020 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Human-Robot/Agent Interaction is well researched in many areas, but approaches commonly focus either on dyadic interactions or on crowd simulations. The intermediate structure between individuals and crowds, i.e., small groups, has been studied less. In small group situations, it is challenging for mobile robots or agents to approach free-standing conversational groups in a socially acceptable manner: the robot or agent must plan trajectories that avoid collisions with people and consider the perception of group members so that they feel comfortable. Previous methods are mostly procedural, with handcrafted features that limit the realism and adaptability of the simulation. In this thesis, Human-Robot/Agent Interaction is investigated at multiple levels, including individuals, crowds, and small groups. Firstly, the thesis explores proxemics in dyadic interactions in virtual environments, investigating the impact of various embodiments on human perception and sensitivities; a related toolkit is developed as a foundation for simulating virtual characters in the subsequent research. Secondly, the thesis extends proxemics to crowd simulation and trajectory prediction by proposing neighbor perception models. It then focuses on group interactions in which robots/agents approach small groups in order to join them. To address the challenges above, novel procedural models based on social space and machine learning models, including generative adversarial networks, state refinement LSTM, reinforcement learning, and imitation learning, are proposed to generate approach behaviors. A novel dataset of full-body motion-captured markers was also collected to support the machine learning approaches. Finally, these methods are evaluated in scenarios involving humans, virtual agents, and physical robots.
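
As a loose illustration of the "procedural models based on social space" mentioned above, the sketch below picks an approach point just outside a free-standing conversational group, through the widest gap between its members. It is not the thesis implementation; the function names, the circular group model and the 0.8 m clearance value are assumptions for illustration only.

import math
from typing import List, Tuple

Point = Tuple[float, float]

def approach_point(members: List[Point], clearance: float = 0.8) -> Point:
    """Choose a point on the group's circle, in the largest angular gap
    between members, `clearance` metres beyond the group radius."""
    cx = sum(p[0] for p in members) / len(members)
    cy = sum(p[1] for p in members) / len(members)
    angles = sorted(math.atan2(y - cy, x - cx) for x, y in members)
    # Widest angular gap between consecutive members (wrapping around).
    gaps = [(angles[(i + 1) % len(angles)] - a) % (2 * math.pi)
            for i, a in enumerate(angles)]
    i = max(range(len(gaps)), key=gaps.__getitem__)
    mid = angles[i] + gaps[i] / 2.0
    radius = max(math.hypot(x - cx, y - cy) for x, y in members) + clearance
    return (cx + radius * math.cos(mid), cy + radius * math.sin(mid))

# Example: three people standing in a loose circle; the agent approaches
# through the most open side of the formation.
print(approach_point([(0.0, 0.0), (1.2, 0.0), (0.6, 1.0)]))

The machine learning models mentioned in the abstract (e.g. reinforcement or imitation learning) would replace such a handcrafted heuristic with approach behaviors learned from data such as the motion-captured group recordings.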

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2020
National Category
Robotics and automation; Computer graphics and computer vision; Social Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-287337 (URN)
Public defence
2021-01-25, VIC Studio, Lindstedtsvägen 5, plan 4, KTH, 114 28 Stockholm, Stockholm, 10:00 (English)
Opponent
Supervisors
Note

QC 20201208

Available from: 2020-12-08 Created: 2020-12-07 Last updated: 2025-02-05 Bibliographically approved

Open Access in DiVA

fulltext (1895 kB), 285 downloads
File information
File name: FULLTEXT01.pdf
File size: 1895 kB
Checksum (SHA-512): d692603c1b2350b830af1c0398a4a9c8a047ce7efa3a1b8e6582245b6535c936c0e22a193b00dae52cb9f9154e76a99c6b9ebef712f0b22e04942e72c1ad586c
Type: fulltext
Mimetype: application/pdf

Other links

Proceedings webpage

Authority records

Peters, Christopher; Yang, Fangkai; Saikia, Himangshu; Skantze, Gabriel

Total: 286 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 638 hits