Publications (10 of 30)
Saikia, H., Yang, F. & Peters, C. (2019). Priority driven Local Optimization for Crowd Simulation. In: AAMAS '19 Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems: . Paper presented at Autonomous Agents and Multi-Agent Systems (AAMAS), Montreal QC, Canada — May 13 - 17, 2019 (pp. 2180-2182).
Priority driven Local Optimization for Crowd Simulation
2019 (English). In: AAMAS '19 Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 2019, p. 2180-2182. Conference paper, Published paper (Refereed)
Abstract [en]

We provide an initial model and preliminary findings for a lookahead-based local optimization scheme for collision resolution between agents in large goal-directed crowd simulations. Treating crowd simulation as a global optimization problem, we break this large problem down into smaller subproblems in which each potential collision-resolution step is optimized independently, ordered by a criticality measure. Agents resolved earlier in the criticality order keep their optimized velocities while agents later in that order are resolved. The problem is thus reduced to a low-dimensional optimization over one or two agents, with all other obstacles either static or deterministically dynamic. We illustrate the performance of our method on four well-known test scenarios.
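The following is a minimal, self-contained sketch (in Python) of the kind of criticality-ordered local resolution loop the abstract describes; the specific criticality measure (inverse time-to-collision), the sampling-based velocity search and all names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical illustration of priority-driven local collision resolution:
# agents are processed in decreasing order of criticality, and each agent's
# velocity is optimized while already-resolved agents keep their committed
# velocities (i.e., they are treated as deterministically moving obstacles).
import math
import random

class Agent:
    def __init__(self, pos, vel, goal_vel, radius=0.3):
        self.pos, self.vel, self.goal_vel, self.radius = pos, vel, goal_vel, radius

def time_to_collision(a, b, horizon):
    """Earliest time within `horizon` at which two constant-velocity discs touch."""
    px, py = b.pos[0] - a.pos[0], b.pos[1] - a.pos[1]
    vx, vy = b.vel[0] - a.vel[0], b.vel[1] - a.vel[1]
    r = a.radius + b.radius
    A, B, C = vx * vx + vy * vy, 2 * (px * vx + py * vy), px * px + py * py - r * r
    if A < 1e-9:
        return 0.0 if C <= 0 else None
    disc = B * B - 4 * A * C
    if disc < 0:
        return None
    t = (-B - math.sqrt(disc)) / (2 * A)
    return t if 0.0 <= t <= horizon else None

def resolve(agents, horizon=2.0, samples=32):
    """Resolve agents one at a time, most critical (soonest collision) first."""
    def criticality(a):
        ttcs = [time_to_collision(a, b, horizon) for b in agents if b is not a]
        ttcs = [t for t in ttcs if t is not None]
        return 1.0 / (min(ttcs) + 1e-6) if ttcs else 0.0

    committed = []  # agents whose velocities are already fixed
    for agent in sorted(agents, key=criticality, reverse=True):
        best, best_cost = agent.goal_vel, float("inf")
        for _ in range(samples):  # tiny low-dimensional velocity search
            cand = (agent.goal_vel[0] + random.uniform(-1, 1),
                    agent.goal_vel[1] + random.uniform(-1, 1))
            agent.vel = cand
            # Penalize imminent collisions with committed agents and deviation
            # from the preferred, goal-directed velocity.
            collisions = sum(1 for b in committed
                             if time_to_collision(agent, b, horizon) is not None)
            deviation = math.hypot(cand[0] - agent.goal_vel[0],
                                   cand[1] - agent.goal_vel[1])
            cost = 10.0 * collisions + deviation
            if cost < best_cost:
                best, best_cost = cand, cost
        agent.vel = best
        committed.append(agent)
```

Because each agent only optimizes against already-committed agents and static obstacles, every step in this sketch is a one-agent search, which mirrors the low-dimensional decomposition described in the abstract.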

National Category
Engineering and Technology
Research subject
Computer Science; Human-computer Interaction
Identifiers
urn:nbn:se:kth:diva-259706 (URN); 978-1-4503-6309-9 (ISBN)
Conference
Autonomous Agents and Multi-Agent Systems (AAMAS), Montreal QC, Canada — May 13 - 17, 2019
Yang, F., Qureshi, A., Shabo, J. & Peters, C. (2018). Do you see groups?: The impact of crowd density and viewpoint on the perception of groups. In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA 2018: . Paper presented at 18th ACM International Conference on Intelligent Virtual Agents, IVA 2018, Western Sydney University's new Parramatta City Campus Sydney, Australia, 5 November 2018 through 8 November 2018 (pp. 313-318). Association for Computing Machinery (ACM)
Do you see groups?: The impact of crowd density and viewpoint on the perception of groups
2018 (English). In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA 2018, Association for Computing Machinery (ACM), 2018, p. 313-318. Conference paper, Published paper (Refereed)
Abstract [en]

Agent-based crowd simulation in virtual environments is of great utility in a variety of domains, from the entertainment industry to serious applications including mobile robots and swarms. Many crowd behavior simulations do not consider that people tend to congregate in smaller social gatherings, such as friends or families, rather than walking alone. Using a real-time crowd simulator implemented as a unilateral incompressible fluid and augmented with group behaviors, a perceptual study was conducted to determine the impact of groups on the perception of crowds at various densities and from different camera views. If groups cannot be perceived under certain circumstances, then it may not be necessary to simulate them, reducing the amount of computation, an important issue in real-time simulations. This study gives researchers a reference for designing better algorithms to simulate realistic behaviors.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2018
Keywords
Agent-based crowd simulation, Human-computer interaction, Perception, Virtual agents
National Category
Interaction Technologies
Identifiers
urn:nbn:se:kth:diva-241489 (URN); 10.1145/3267851.3267877 (DOI); 2-s2.0-85058449580 (Scopus ID); 9781450360135 (ISBN)
Conference
18th ACM International Conference on Intelligent Virtual Agents, IVA 2018, Western Sydney University's new Parramatta City Campus Sydney, Australia, 5 November 2018 through 8 November 2018
Li, C., Androulakaki, T., Gao, A. Y., Yang, F., Saikia, H., Peters, C. & Skantze, G. (2018). Effects of Posture and Embodiment on Social Distance in Human-Agent Interaction in Mixed Reality. In: Proceedings of the 18th International Conference on Intelligent Virtual Agents: . Paper presented at 18th International Conference on Intelligent Virtual Agents (pp. 191-196). ACM Digital Library
Effects of Posture and Embodiment on Social Distance in Human-Agent Interaction in Mixed Reality
2018 (English). In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, ACM Digital Library, 2018, p. 191-196. Conference paper, Published paper (Refereed)
Abstract [en]

Mixed reality offers new potential for social interaction experiences with virtual agents. In addition, it can be used to experiment with the design of physical robots. However, while previous studies have investigated comfortable social distances between humans and artificial agents in real and virtual environments, there is little data with regard to mixed reality environments. In this paper, we conducted an experiment in which participants were asked to walk up to an agent to ask a question, in order to investigate the social distances maintained, as well as the participants' experience of the interaction. We manipulated both the embodiment of the agent (robot vs. human and virtual vs. physical) and the closed vs. open posture of the agent. The virtual agent was displayed using a mixed reality headset. Our experiment involved 35 participants in a within-subject design. We show that, in the context of social interactions, mixed reality fares well against physical environments, and robots fare well against humans, barring a few technical challenges.

Place, publisher, year, edition, pages
ACM Digital Library, 2018
National Category
Language Technology (Computational Linguistics); Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-241288 (URN); 10.1145/3267851.3267870 (DOI); 2-s2.0-85058440240 (Scopus ID)
Conference
18th International Conference on Intelligent Virtual Agents
Peters, C., Li, C., Yang, F., Avramova, V. & Skantze, G. (2018). Investigating Social Distances between Humans, Virtual Humans and Virtual Robots in Mixed Reality. In: Proceedings of 17th International Conference on Autonomous Agents and MultiAgent Systems: . Paper presented at the 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden — July 10 - 15, 2018 (pp. 2247-2249).
Investigating Social Distances between Humans, Virtual Humans and Virtual Robots in Mixed Reality
2018 (English). In: Proceedings of 17th International Conference on Autonomous Agents and MultiAgent Systems, 2018, p. 2247-2249. Conference paper, Published paper (Refereed)
Abstract [en]

Mixed reality environments offer new potential for the design of compelling social interaction experiences with virtual characters. In this paper, we summarise initial experiments we are conducting in which we measure comfortable social distances between humans, virtual humans and virtual robots in mixed reality environments. We consider a scenario in which participants walk within a comfortable distance of a virtual character whose appearance is varied between a male and a female human, and a standard-height and human-height virtual Pepper robot. Our studies in mixed reality thus far indicate that the social zones humans adopt with artificial agents are similar to those in human-human social interactions and in interactions in virtual reality.

National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-241285 (URN); 2-s2.0-85054717128 (Scopus ID)
Conference
The 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden — July 10 - 15, 2018
Ravichandran, N. B., Yang, F., Peters, C., Lansner, A. & Herman, P. (2018). Pedestrian simulation as multi-objective reinforcement learning. In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA 2018: . Paper presented at 18th ACM International Conference on Intelligent Virtual Agents, IVA 2018; Western Sydney University's new Parramatta City Campus, Sydney; Australia; 5 November 2018 through 8 November 2018 (pp. 307-312).
Pedestrian simulation as multi-objective reinforcement learning
2018 (English). In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA 2018, 2018, p. 307-312. Conference paper, Published paper (Refereed)
Abstract [en]

Modelling and simulation of pedestrian crowds require agents to reach pre-determined goals and avoid collisions with static obstacles and dynamic pedestrians, while maintaining natural gait behaviour. We model pedestrians as autonomous, learning, and reactive agents employing Reinforcement Learning (RL). Typical RL-based agent simulations suffer from poor generalization because handcrafted reward functions are used to ensure realistic behaviour. In this work, we model pedestrians in a modular framework that integrates navigation and collision avoidance as separate tasks. Each module has its own state space and reward, but all modules share a common action space. Empirical results suggest that such modular learning models can achieve satisfactory performance without parameter tuning, and we compare them with state-of-the-art crowd simulation methods.
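As a hypothetical illustration of the modular idea (separate state spaces and rewards per module, shared action space), the sketch below implements modular Q-learning in which action selection is greedy over the sum of per-module Q-values; the action set, the summation rule and all names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: two modules (goal navigation, collision avoidance),
# each learning its own Q-table over its own (discretized) state and reward,
# with a shared action space. Actions are chosen greedily over the sum of
# the modules' Q-values.
import random
from collections import defaultdict

ACTIONS = ["forward", "left", "right", "slow"]  # shared action space

class Module:
    """Independent Q-learner over this module's own state representation."""
    def __init__(self, alpha=0.1, gamma=0.9):
        self.q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
        self.alpha, self.gamma = alpha, gamma

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[next_state].values())
        td_error = reward + self.gamma * best_next - self.q[state][action]
        self.q[state][action] += self.alpha * td_error

def select_action(modules, states, epsilon=0.1):
    """Epsilon-greedy over the summed Q-values of all modules."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    scores = {a: sum(m.q[s][a] for m, s in zip(modules, states)) for a in ACTIONS}
    return max(scores, key=scores.get)

# Sketch of one learning step (environment states and rewards are placeholders):
#   nav, avoid = Module(), Module()
#   action = select_action([nav, avoid], [nav_state, avoid_state])
#   nav.update(nav_state, action, goal_reward, next_nav_state)
#   avoid.update(avoid_state, action, collision_penalty, next_avoid_state)
```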

Keywords
Agent-based simulation, Multi-objective learning, Parallel learning, Reinforcement learning
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:kth:diva-241487 (URN); 10.1145/3267851.3267914 (DOI); 2-s2.0-85058477147 (Scopus ID); 9781450360135 (ISBN)
Conference
18th ACM International Conference on Intelligent Virtual Agents, IVA 2018; Western Sydney University's new Parramatta City Campus, Sydney; Australia; 5 November 2018 through 8 November 2018
Paetzel, M., Castellano, G., Varni, G., Hupont, I., Chetouani, M. & Peters, C. (2018). The Attribution of Emotional State - How Embodiment Features and Social Traits Affect the Perception of an Artificial Agent. In: RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication: . Paper presented at 27th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2018, 27 August 2018 through 31 August 2018 (pp. 495-502). Institute of Electrical and Electronics Engineers Inc.
The Attribution of Emotional State - How Embodiment Features and Social Traits Affect the Perception of an Artificial Agent
2018 (English). In: RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication, Institute of Electrical and Electronics Engineers Inc., 2018, p. 495-502. Conference paper, Published paper (Refereed)
Abstract [en]

Understanding emotional states is a challenging task which frequently leads to misinterpretation even in human observers. While the perception of emotions has been studied extensively in human psychology, little is known about what factors influence the human perception of emotions in robots and virtual characters. In this paper, we build on the Brunswik lens model to investigate the influence of (a) the agent's embodiment using a 2D virtual character, a 3D blended embodiment, a recording of the 3D platform and a recording of a human, as well as (b) the level of human-likeness on people's ability to interpret emotional facial expressions in an agent. In addition, we measure social traits of the human observers and analyze how they correlate to the success in recognizing emotional expressions. We find that interpersonal differences play a minor role in the perception of emotional states. However, both embodiment and human-likeness as well as related perceptual dimensions such as perceived social presence and uncanniness have an effect on the attribution of emotional states.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2018
Keywords
Robots, Virtual reality, Artificial agents, Emotional expressions, Facial Expressions, Human perception of emotions, Human psychology, Perceptual dimensions, Social presence, Virtual character, Behavioral research
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-247070 (URN); 10.1109/ROMAN.2018.8525700 (DOI); 2-s2.0-85058085649 (Scopus ID); 9781538679807 (ISBN)
Conference
27th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2018, 27 August 2018 through 31 August 2018
Paetzel, M., Hupont, I., Varni, G., Chetouani, M., Peters, C. & Castellano, G. (2017). Exploring the link between self-assessed mimicry and embodiment in HRI. In: ACM/IEEE International Conference on Human-Robot Interaction: . Paper presented at 12th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2017, 6 March 2017 through 9 March 2017 (pp. 245-246). IEEE Computer Society
Exploring the link between self-assessed mimicry and embodiment in HRI
2017 (English). In: ACM/IEEE International Conference on Human-Robot Interaction, IEEE Computer Society, 2017, p. 245-246. Conference paper, Published paper (Refereed)
Abstract [en]

This work explores the relationship between a robot's embodiment and people's ability to mimic its behavior. It presents a study in which participants were asked to mimic a 3D mixed-embodied robotic head and a 2D version of the same character. Quantitative and qualitative analyses were performed on questionnaire data. The quantitative results show no significant influence of the character's embodiment on the self-assessed ability to mimic it, while the qualitative results indicate a preference for mimicking the robotic head.

Place, publisher, year, edition, pages
IEEE Computer Society, 2017
Keywords
embodiment, human-robot interaction, mimicry, Man machine systems, Robotics, Robots, Surveys, Embodied robotics, Quantitative and qualitative analysis, Quantitative result, Robotic head, Human robot interaction
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-208032 (URN); 10.1145/3029798.3038317 (DOI); 2-s2.0-85016440163 (Scopus ID); 9781450348850 (ISBN)
Conference
12th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2017, 6 March 2017 through 9 March 2017
Yang, F., Li, C., Palmberg, R., Van der Heide, E. & Peters, C. (2017). Expressive Virtual Characters for Social Demonstration Games. In: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings: . Paper presented at 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), SEP 06-08, 2017, Athens, Greece (pp. 217-224). IEEE
Expressive Virtual Characters for Social Demonstration Games
2017 (English). In: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings, IEEE, 2017, p. 217-224. Conference paper, Published paper (Refereed)
Abstract [en]

Virtual characters are an integral part of many game and learning environments and have practical applications as tutors, demonstrators or even representations of the user. However, creating virtual character behaviors can be a time-consuming and complex task requiring substantial technical expertise. To accelerate and better enable the use of virtual characters in social games, we present a virtual character behavior toolkit for the development of expressive virtual characters. It is a middleware toolkit that sits on top of the game engine, focusing on high-level character behaviors for quickly creating social games. The toolkit can be adapted to a wide range of scenarios involving social interactions with individuals and groups at multiple distances in the virtual environment, and supports customization and control of facial expressions, body animations and group formations. We describe the design of the toolkit, provide an exemplar of a small game being created with it, and outline our intended future work on the system.

Place, publisher, year, edition, pages
IEEE, 2017
Series
International Conference on Games and Virtual Worlds for Serious Applications, ISSN 2474-0470
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:kth:diva-224103 (URN); 10.1109/VS-GAMES.2017.8056604 (DOI); 000425228700038; 2-s2.0-85029005495 (Scopus ID); 978-1-5090-5812-9 (ISBN)
Conference
9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), SEP 06-08, 2017, Athens, Greece
Funder
EU, Horizon 2020, 644204
Paetzel, M., Varni, G., Hupont, I., Chetouani, M., Peters, C. & Castellano, G. (2017). Investigating the Influence of Embodiment on Facial Mimicry in HRI Using Computer Vision-Based Measures. In: Howard, A., Suzuki, K. & Zollo, L. (Eds.), 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN): . Paper presented at 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), AUG 28-SEP 01, 2017, Lisbon, PORTUGAL (pp. 579-586). IEEE
Investigating the Influence of Embodiment on Facial Mimicry in HRI Using Computer Vision-Based Measures
2017 (English). In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) / [ed] Howard, A., Suzuki, K. & Zollo, L., IEEE, 2017, p. 579-586. Conference paper, Published paper (Refereed)
Abstract [en]

Mimicry plays an important role in social interaction. In human communication, it is used to establish rapport and bonding with other humans, as well as with robots and virtual characters. However, little is known about the underlying factors that elicit mimicry in humans when interacting with a robot. In this work, we study the influence of embodiment on participants' ability to mimic a social character. Participants were asked to intentionally mimic the laughing behavior of the Furhat mixed-embodied robotic head and a 2D virtual version of the same character. To explore the effect of embodiment, we present two novel approaches that automatically assess people's ability to mimic based solely on videos of their facial expressions. In contrast to participants' self-assessment, the analysis of the video recordings suggests a better ability to mimic when people interact with the 2D embodiment.

Place, publisher, year, edition, pages
IEEE, 2017
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-225234 (URN); 000427262400091; 2-s2.0-85045834281 (Scopus ID); 978-1-5386-3518-6 (ISBN)
Conference
26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), AUG 28-SEP 01, 2017, Lisbon, PORTUGAL
Palmberg, R., Peters, C. & Qureshi, A. (2017). When Facial Expressions Dominate Emotion Perception in Groups of Virtual Characters. In: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings: . Paper presented at 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), SEP 06-08, 2017, Athens, Greece (pp. 157-160). IEEE
When Facial Expressions Dominate Emotion Perception in Groups of Virtual Characters
2017 (English). In: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings, IEEE, 2017, p. 157-160. Conference paper, Published paper (Refereed)
Abstract [en]

Virtual characters play a central role in populating virtual worlds, whether they act as conduits for human expressions as avatars or are automatically controlled by a machine as agents. In modern game-related scenarios, it is economical to assemble virtual characters from varying sources of appearances and motions. However, doing so may have unintended consequences for how people perceive their expressions. This paper presents an initial study investigating the impact of facial expressions and full-body motions from varying sources on the perception of intense positive and negative emotional expressions in small groups of virtual characters. Twenty-one participants viewed a small group of three virtual characters engaged in intense animated behaviours as their face and body motions were varied between positive, neutral and negative valence expressions. While emotion perception was based on both the bodies and the faces of the characters, we found a strong impact of the valence of facial expressions on the perception of emotions in the group. We discuss these findings in relation to the combination of manually created and automatically defined motion sources, highlighting implications for the animation of virtual characters.

Place, publisher, year, edition, pages
IEEE, 2017
Series
International Conference on Games and Virtual Worlds for Serious Applications, ISSN 2474-0470
National Category
Other Humanities not elsewhere specified
Identifiers
urn:nbn:se:kth:diva-224102 (URN); 10.1109/VS-GAMES.2017.8056588 (DOI); 000425228700024; 2-s2.0-85034629863 (Scopus ID); 978-1-5090-5812-9 (ISBN)
Conference
9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), SEP 06-08, 2017, Athens, Greece
Identifiers
ORCID iD: orcid.org/0000-0002-7257-0761
