Publications (10 of 28)
Yang, F., Qureshi, A., Shabo, J. & Peters, C. (2018). Do you see groups?: The impact of crowd density and viewpoint on the perception of groups. In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA 2018: . Paper presented at 18th ACM International Conference on Intelligent Virtual Agents, IVA 2018, Western Sydney University's new Parramatta City Campus, Sydney, Australia, 5 November 2018 through 8 November 2018 (pp. 313-318). Association for Computing Machinery (ACM)
Do you see groups?: The impact of crowd density and viewpoint on the perception of groups
2018 (English). In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA 2018, Association for Computing Machinery (ACM), 2018, p. 313-318. Conference paper, Published paper (Refereed)
Abstract [en]

Agent-based crowd simulation in virtual environments is of great utility in a variety of domains, from the entertainment industry to serious applications including mobile robots and swarms. Many crowd behavior simulations do not consider the fact that people tend to congregate in smaller social gatherings, such as friends or families, rather than walking alone. Based on a real-time crowd simulator implemented as a unilateral incompressible fluid and augmented with group behaviors, a perceptual study was conducted to determine the impact of groups on the perception of crowds at various densities from different camera views. If groups cannot be perceived under certain circumstances, it may be unnecessary to simulate them, reducing computational cost, an important concern in real-time simulations. This study provides researchers with a reference for designing better algorithms that simulate realistic behaviors.
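The underlying fluid-based simulator is not detailed in this abstract, but the group augmentation being evaluated can be illustrated. The Python sketch below is a minimal, hypothetical illustration (not the authors' code) of partitioning agents into small social groups and steering grouped agents toward their group centroid; the group-size distribution, class names, and the gain parameter are all assumptions for illustration.

```python
import random

class Agent:
    """A 2D crowd agent; group_id is None for agents walking alone."""
    def __init__(self, x, y, group_id=None):
        self.pos = [x, y]
        self.group_id = group_id

def assign_groups(agents, size_probs=((1, 0.40), (2, 0.35), (3, 0.25))):
    """Partition agents into singles, pairs and triples, drawn from an
    assumed group-size distribution (hypothetical values)."""
    sizes = [s for s, _ in size_probs]
    probs = [p for _, p in size_probs]
    i, gid = 0, 0
    while i < len(agents):
        n = min(random.choices(sizes, weights=probs)[0], len(agents) - i)
        for a in agents[i:i + n]:
            a.group_id = gid if n > 1 else None  # singles stay ungrouped
        i, gid = i + n, gid + 1

def cohesion_force(agent, agents, gain=0.5):
    """Steering force pulling a grouped agent toward its group's centroid;
    in a full simulator this would be blended with other steering terms."""
    if agent.group_id is None:
        return (0.0, 0.0)
    members = [a for a in agents if a.group_id == agent.group_id]
    cx = sum(a.pos[0] for a in members) / len(members)
    cy = sum(a.pos[1] for a in members) / len(members)
    return (gain * (cx - agent.pos[0]), gain * (cy - agent.pos[1]))

agents = [Agent(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(100)]
assign_groups(agents)
print(cohesion_force(agents[0], agents))
```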

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2018
Keywords
Agent-based crowd simulation, Human-computer interaction, Perception, Virtual agents
National Category
Interaction Technologies
Identifiers
urn:nbn:se:kth:diva-241489 (URN)10.1145/3267851.3267877 (DOI)2-s2.0-85058449580 (Scopus ID)9781450360135 (ISBN)
Conference
18th ACM International Conference on Intelligent Virtual Agents, IVA 2018, Western Sydney University's new Parramatta City Campus, Sydney, Australia, 5 November 2018 through 8 November 2018
Note

QC 20190123

Available from: 2019-01-23 Created: 2019-01-23 Last updated: 2019-01-23. Bibliographically approved
Li, C., Androulakaki, T., Gao, A. Y., Yang, F., Saikia, H., Peters, C. & Skantze, G. (2018). Effects of Posture and Embodiment on Social Distance in Human-Agent Interaction in Mixed Reality. In: Proceedings of the 18th International Conference on Intelligent Virtual Agents: . Paper presented at 18th International Conference on Intelligent Virtual Agents (pp. 191-196). ACM Digital Library
Effects of Posture and Embodiment on Social Distance in Human-Agent Interaction in Mixed Reality
2018 (English). In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, ACM Digital Library, 2018, p. 191-196. Conference paper, Published paper (Refereed)
Abstract [en]

Mixed reality offers new potential for social interaction experiences with virtual agents. In addition, it can be used to experiment with the design of physical robots. However, while previous studies have investigated comfortable social distances between humans and artificial agents in real and virtual environments, there is little data regarding mixed reality environments. In this paper, we conducted an experiment in which participants were asked to walk up to an agent to ask a question, in order to investigate the social distances maintained as well as the participants' experience of the interaction. We manipulated both the embodiment of the agent (robot vs. human and virtual vs. physical) and the agent's closed vs. open posture. The virtual agent was displayed using a mixed reality headset. Our experiment involved 35 participants in a within-subject design. We show that, in the context of social interactions, mixed reality fares well against physical environments, and robots fare well against humans, barring a few technical challenges.

Place, publisher, year, edition, pages
ACM Digital Library, 2018
National Category
Language Technology (Computational Linguistics); Human-Computer Interaction
Identifiers
urn:nbn:se:kth:diva-241288 (URN)10.1145/3267851.3267870 (DOI)2-s2.0-85058440240 (Scopus ID)
Conference
18th International Conference on Intelligent Virtual Agents
Note

QC 20190122

Available from: 2019-01-18 Created: 2019-01-18 Last updated: 2019-04-09. Bibliographically approved
Peters, C., Li, C., Yang, F., Avramova, V. & Skantze, G. (2018). Investigating Social Distances between Humans, Virtual Humans and Virtual Robots in Mixed Reality. In: Proceedings of 17th International Conference on Autonomous Agents and MultiAgent Systems: . Paper presented at the 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden, July 10-15, 2018 (pp. 2247-2249).
Investigating Social Distances between Humans, Virtual Humans and Virtual Robots in Mixed Reality
2018 (English). In: Proceedings of 17th International Conference on Autonomous Agents and MultiAgent Systems, 2018, p. 2247-2249. Conference paper, Published paper (Refereed)
Abstract [en]

Mixed reality environments offer new potential for the design of compelling social interaction experiences with virtual characters. In this paper, we summarise initial experiments in which we measure comfortable social distances between humans, virtual humans and virtual robots in mixed reality environments. We consider a scenario in which participants walk within a comfortable distance of a virtual character whose appearance is varied between a male and a female human, and between a standard-height and a human-height virtual Pepper robot. Our studies in mixed reality thus far indicate that humans adopt social zones with artificial agents similar to those in human-human social interactions and in virtual reality.

National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-241285 (URN)2-s2.0-85054717128 (Scopus ID)
Conference
The 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden, July 10-15, 2018
Note

QC 20190214

Available from: 2019-01-18 Created: 2019-01-18 Last updated: 2019-03-18. Bibliographically approved
Ravichandran, N. B., Yang, F., Peters, C., Lansner, A. & Herman, P. (2018). Pedestrian simulation as multi-objective reinforcement learning. In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA 2018: . Paper presented at 18th ACM International Conference on Intelligent Virtual Agents, IVA 2018; Western Sydney University's new Parramatta City Campus, Sydney; Australia; 5 November 2018 through 8 November 2018 (pp. 307-312).
Pedestrian simulation as multi-objective reinforcement learning
2018 (English). In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA 2018, 2018, p. 307-312. Conference paper, Published paper (Refereed)
Abstract [en]

Modelling and simulation of pedestrian crowds require agents to reach pre-determined goals and avoid collisions with static obstacles and dynamic pedestrians, while maintaining natural gait behaviour. We model pedestrians as autonomous, learning, and reactive agents employing Reinforcement Learning (RL). Typical RL-based agent simulations suffer from poor generalization due to the handcrafted reward functions needed to ensure realistic behaviour. In this work, we model pedestrians in a modular framework that integrates navigation and collision avoidance as separate tasks. Each module has independent state spaces and rewards but a shared action space. Empirical results suggest that such modular learning models can show satisfactory performance without parameter tuning, and we compare the framework with state-of-the-art crowd simulation methods.
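The abstract describes the architecture (per-task modules with independent states and rewards but a shared action space) without implementation detail. The sketch below is one plausible reading of that idea, assuming tabular Q-learning per module and greedy selection over the summed module Q-values, a standard way of combining modular value functions; it is not necessarily the authors' exact scheme, and all identifiers are hypothetical.

```python
import random
from collections import defaultdict

ACTIONS = ["forward", "left", "right", "stop"]  # shared action space

class Module:
    """One task module (e.g. navigation or collision avoidance) with its
    own state abstraction and reward, learned by tabular Q-learning."""
    def __init__(self, state_fn, reward_fn, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)  # (state, action) -> value
        self.state_fn, self.reward_fn = state_fn, reward_fn
        self.alpha, self.gamma = alpha, gamma

    def update(self, obs, action, next_obs):
        s, s2 = self.state_fn(obs), self.state_fn(next_obs)
        r = self.reward_fn(next_obs)
        best_next = max(self.q[(s2, a)] for a in ACTIONS)
        self.q[(s, action)] += self.alpha * (r + self.gamma * best_next
                                             - self.q[(s, action)])

def select_action(modules, obs, epsilon=0.1):
    """Epsilon-greedy action over the summed Q-values of all modules."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: sum(m.q[(m.state_fn(obs), a)]
                                          for m in modules))

# Toy usage: navigation rewards goal progress; avoidance penalizes proximity.
nav = Module(lambda o: int(o["goal_dist"]), lambda o: -o["goal_dist"])
avoid = Module(lambda o: int(o["obs_dist"]),
               lambda o: -1.0 if o["obs_dist"] < 1.0 else 0.0)
obs = {"goal_dist": 5.0, "obs_dist": 2.0}
nxt = {"goal_dist": 4.0, "obs_dist": 2.5}
act = select_action([nav, avoid], obs)
for m in (nav, avoid):
    m.update(obs, act, nxt)
```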

Keywords
Agent-based simulation, Multi-objective learning, Parallel learning, Reinforcement learning
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:kth:diva-241487 (URN)10.1145/3267851.3267914 (DOI)2-s2.0-85058477147 (Scopus ID)9781450360135 (ISBN)
Conference
18th ACM International Conference on Intelligent Virtual Agents, IVA 2018; Western Sydney University's new Parramatta City Campus, Sydney; Australia; 5 November 2018 through 8 November 2018
Note

QC 20190123

Available from: 2019-01-23 Created: 2019-01-23 Last updated: 2019-01-23. Bibliographically approved
Paetzel, M., Hupont, I., Varni, G., Chetouani, M., Peters, C. & Castellano, G. (2017). Exploring the link between self-assessed mimicry and embodiment in HRI. In: ACM/IEEE International Conference on Human-Robot Interaction: . Paper presented at 12th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2017, 6 March 2017 through 9 March 2017 (pp. 245-246). IEEE Computer Society
Exploring the link between self-assessed mimicry and embodiment in HRI
2017 (English). In: ACM/IEEE International Conference on Human-Robot Interaction, IEEE Computer Society, 2017, p. 245-246. Conference paper, Published paper (Refereed)
Abstract [en]

This work explores the relationship between a robot's embodiment and people's ability to mimic its behavior. It presents a study in which participants were asked to mimic a 3D mixed-embodied robotic head and a 2D version of the same character. Quantitative and qualitative analyses were performed on questionnaire responses. Quantitative results show no significant influence of the character's embodiment on the self-assessed ability to mimic it, while qualitative results indicate a preference for mimicking the robotic head.

Place, publisher, year, edition, pages
IEEE Computer Society, 2017
Keywords
embodiment, human-robot interaction, mimicry, Man machine systems, Robotics, Robots, Surveys, Embodied robotics, Quantitative and qualitative analysis, Quantitative result, Robotic head, Human robot interaction
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-208032 (URN)10.1145/3029798.3038317 (DOI)2-s2.0-85016440163 (Scopus ID)9781450348850 (ISBN)
Conference
12th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2017, 6 March 2017 through 9 March 2017
Note

QC 20170601

Available from: 2017-06-01 Created: 2017-06-01 Last updated: 2017-06-01. Bibliographically approved
Yang, F., Li, C., Palmberg, R., Van der Heide, E. & Peters, C. (2017). Expressive Virtual Characters for Social Demonstration Games. In: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings: . Paper presented at 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), SEP 06-08, 2017, Athens, Greece (pp. 217-224). IEEE
Expressive Virtual Characters for Social Demonstration Games
2017 (English). In: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings, IEEE, 2017, p. 217-224. Conference paper, Published paper (Refereed)
Abstract [en]

Virtual characters are an integral part of many game and learning environments and have practical applications as tutors, demonstrators or even representations of the user. However, creating virtual character behaviors can be a time-consuming and complex task requiring substantial technical expertise. To accelerate and better enable the use of virtual characters in social games, we present a virtual character behavior toolkit for the development of expressive virtual characters. It is a middleware toolkit that sits on top of the game engine, with a focus on providing high-level character behaviors for quickly creating social games. The toolkit can be adapted to a wide range of scenarios related to social interactions with individuals and groups at multiple distances in the virtual environment, and supports customization and control of facial expressions, body animations and group formations. We describe the design of the toolkit, provide an exemplar of a small game being created with it, and outline our intended future work on the system.
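The toolkit's API is not published in this abstract, so the following Python sketch is purely hypothetical: it only illustrates the kind of high-level interface such a middleware layer might expose above the engine (facial expressions, body animations, group formations). Every class and method name here is an assumption for illustration.

```python
import math

class Character:
    """Hypothetical high-level wrapper over an engine character node."""
    def __init__(self, engine_node, name):
        self.node, self.name = engine_node, name
        self.position = (0.0, 0.0)

    def express(self, emotion, intensity=1.0):
        # In a real toolkit this would map the emotion label to facial
        # blendshape weights on the underlying engine node.
        print(f"{self.name}: expression '{emotion}' at intensity {intensity}")

    def play_animation(self, clip):
        # Would trigger a body-animation clip on the engine node.
        print(f"{self.name}: playing body animation '{clip}'")

def circular_formation(characters, radius=1.2):
    """Place characters in a conversational circle around the origin."""
    n = len(characters)
    for i, c in enumerate(characters):
        angle = 2 * math.pi * i / n
        c.position = (radius * math.cos(angle), radius * math.sin(angle))

group = [Character(None, f"agent{i}") for i in range(3)]
circular_formation(group)
group[0].express("happy", 0.8)
group[1].play_animation("wave")
```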

Place, publisher, year, edition, pages
IEEE, 2017
Series
International Conference on Games and Virtual Worlds for Serious Applications, ISSN 2474-0470
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:kth:diva-224103 (URN)10.1109/VS-GAMES.2017.8056604 (DOI)000425228700038 ()2-s2.0-85029005495 (Scopus ID)978-1-5090-5812-9 (ISBN)
Conference
9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), SEP 06-08, 2017, Athens, Greece
Funder
EU, Horizon 2020, 644204
Note

QC 20180312

Available from: 2018-03-12 Created: 2018-03-12 Last updated: 2018-03-12. Bibliographically approved
Paetzel, M., Varni, G., Hupont, I., Chetouani, M., Peters, C. & Castellano, G. (2017). Investigating the Influence of Embodiment on Facial Mimicry in HRI Using Computer Vision-Based Measures. In: Howard, A., Suzuki, K., Zollo, L. (Eds.), 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN): . Paper presented at 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), AUG 28-SEP 01, 2017, Lisbon, PORTUGAL (pp. 579-586). IEEE
Investigating the Influence of Embodiment on Facial Mimicry in HRI Using Computer Vision-Based Measures
2017 (English). In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) / [ed] Howard, A., Suzuki, K., Zollo, L., IEEE, 2017, p. 579-586. Conference paper, Published paper (Refereed)
Abstract [en]

Mimicry plays an important role in social interaction. In human communication, it is used to establish rapport and bonding with other humans as well as with robots and virtual characters. However, little is known about the underlying factors that elicit mimicry in humans when interacting with a robot. In this work, we study the influence of embodiment on participants' ability to mimic a social character. Participants were asked to intentionally mimic the laughing behavior of the Furhat mixed-embodied robotic head and a 2D virtual version of the same character. To explore the effect of embodiment, we present two novel approaches to automatically assess people's ability to mimic based solely on videos of their facial expressions. In contrast to participants' self-assessment, the analysis of video recordings suggests a better ability to mimic when people interact with the 2D embodiment.
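The two video-based measures are not specified in this abstract. One common way to quantify mimicry from video is lagged cross-correlation between the character's and the participant's per-frame expression-intensity signals (for instance, smile or laughter intensity from a facial tracker). The sketch below illustrates that general idea under those assumptions; it is not the paper's method.

```python
import numpy as np

def mimicry_score(stimulus, response, max_lag=15):
    """Peak normalized cross-correlation between a stimulus intensity
    signal and a (possibly delayed) response signal; higher means the
    response tracks the stimulus more closely."""
    s = (stimulus - stimulus.mean()) / (stimulus.std() + 1e-8)
    r = (response - response.mean()) / (response.std() + 1e-8)
    best = -np.inf
    for lag in range(max_lag + 1):  # the response may lag the stimulus
        n = len(s) - lag
        best = max(best, float(np.dot(s[:n], r[lag:lag + n]) / n))
    return best

# Toy usage: the response reproduces the stimulus with a 5-frame delay.
t = np.linspace(0, 10, 300)
stimulus = np.maximum(0.0, np.sin(t))     # bursts of "laughter" intensity
response = np.roll(stimulus, 5) + 0.05 * np.random.randn(300)
print(mimicry_score(stimulus, response))  # close to 1.0
```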

Place, publisher, year, edition, pages
IEEE, 2017
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-225234 (URN)000427262400091 ()2-s2.0-85045834281 (Scopus ID)978-1-5386-3518-6 (ISBN)
Conference
26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), AUG 28-SEP 01, 2017, Lisbon, PORTUGAL
Note

QC 20180404

Available from: 2018-04-04 Created: 2018-04-04 Last updated: 2018-04-11. Bibliographically approved
Palmberg, R., Peters, C. & Qureshi, A. (2017). When Facial Expressions Dominate Emotion Perception in Groups of Virtual Characters. In: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings: . Paper presented at 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), SEP 06-08, 2017, Athens, Greece (pp. 157-160). IEEE
When Facial Expressions Dominate Emotion Perception in Groups of Virtual Characters
2017 (English). In: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings, IEEE, 2017, p. 157-160. Conference paper, Published paper (Refereed)
Abstract [en]

Virtual characters play a central role in populating virtual worlds, whether they act as conduits for human expressions as avatars or are automatically controlled by a machine as agents. In modern game-related scenarios, it is economical to assemble virtual characters from varying sources of appearances and motions. However, doing so may have unintended consequences for how people perceive their expressions. This paper presents an initial study investigating the impact of facial expressions and full-body motions from varying sources on the perception of intense positive and negative emotional expressions in small groups of virtual characters. Twenty-one participants viewed a small group of three virtual characters engaged in intense animated behaviours as their face and body motions were varied between positive, neutral and negative valence expressions. While emotion perception was based on both the bodies and the faces of the characters, we found a strong impact of the valence of facial expressions on the perception of emotions in the group. We discuss these findings in relation to the combination of manually created and automatically defined motion sources, highlighting implications for the animation of virtual characters.

Place, publisher, year, edition, pages
IEEE, 2017
Series
International Conference on Games and Virtual Worlds for Serious Applications, ISSN 2474-0470
National Category
Other Humanities not elsewhere specified
Identifiers
urn:nbn:se:kth:diva-224102 (URN)10.1109/VS-GAMES.2017.8056588 (DOI)000425228700024 ()2-s2.0-85034629863 (Scopus ID)978-1-5090-5812-9 (ISBN)
Conference
9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), SEP 06-08, 2017, Athens, Greece
Note

QC 20180312

Available from: 2018-03-12 Created: 2018-03-12 Last updated: 2018-03-12. Bibliographically approved
Paetzel, M., Peters, C., Nyström, I. & Castellano, G. (2016). Congruency Matters: How Ambiguous Gender Cues Increase a Robot's Uncanniness. In: Social Robotics (ICSR 2016): . Paper presented at 8th International Conference on Social Robotics (ICSR), NOV 01-03, 2016, Kansas City, MO (pp. 402-412). Springer
Congruency Matters: How Ambiguous Gender Cues Increase a Robot's Uncanniness
2016 (English). In: Social Robotics (ICSR 2016), Springer, 2016, p. 402-412. Conference paper, Published paper (Refereed)
Abstract [en]

Most research on the uncanny valley effect is concerned with the influence of human-likeness and realism as triggers of an uncanny feeling in humans, while other dimensions, such as gender, have received little investigation. Back-projected robotic heads allow us to alter visual cues in the appearance of the robot in order to investigate how its perception changes. In this paper, we study the influence of gender on perceived uncanniness. We conducted an experiment with 48 participants in which we used different modalities of interaction to change the strength of the gender cues in the robot. Results show that incongruence in the gender cues of the robot, and not its specific gender, influences the uncanniness of the back-projected robotic head. This finding has potential implications for both the perceptual mismatch and the categorization ambiguity theories as general explanations of the uncanny valley effect.

Place, publisher, year, edition, pages
Springer, 2016
Series
Lecture Notes in Artificial Intelligence, ISSN 0302-9743 ; 9979
Keywords
Uncanny valley, Robot gender, Back-projected robotic head
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-200063 (URN)10.1007/978-3-319-47437-3_39 (DOI)000389816500039 ()2-s2.0-84992488272 (Scopus ID)978-3-319-47437-3 (ISBN)978-3-319-47436-6 (ISBN)
Conference
8th International Conference on Social Robotics (ICSR), NOV 01-03, 2016, Kansas City, MO
Note

QC 20170125

Available from: 2017-01-25 Created: 2017-01-20 Last updated: 2018-01-13. Bibliographically approved
Paetzel, M., Peters, C., Nystrom, I. & Castellano, G. (2016). Effects of Multimodal Cues on Children's Perception of Uncanniness in a Social Robot. In: ICMI'16: Proceedings of the 18th ACM International Conference on Multimodal Interaction. Paper presented at 18th ACM International Conference on Multimodal Interaction (ICMI), NOV 12-16, 2016, Tokyo, JAPAN (pp. 297-301). Association for Computing Machinery (ACM)
Effects of Multimodal Cues on Children's Perception of Uncanniness in a Social Robot
2016 (English). In: ICMI'16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Association for Computing Machinery (ACM), 2016, p. 297-301. Conference paper, Published paper (Refereed)
Abstract [en]

This paper investigates the influence of multimodal incongruent gender cues on children's perception of a robot's uncanniness and gender. The back-projected robot head Furhat was equipped with a female and a male face texture and voice synthesizer, and the voice and facial cues were tested in congruent and incongruent combinations. 106 children between the ages of 8 and 13 participated in the study. Results show that multimodal incongruent cues do not trigger the feeling of uncanniness in children. These results are significant as they support other recent research showing that the perception of uncanniness cannot be triggered by a categorical ambiguity in the robot. In addition, we found that children rely much more strongly on auditory cues than on facial cues when assigning a gender to the robot if presented with incongruent cues. These findings have implications for robot design, as it seems possible to change the gender of a robot by changing only its voice, without creating a feeling of uncanniness in a child.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2016
Keywords
Uncanny valley, child-robot interaction, multimodal voice and facial expressions
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-199809 (URN)10.1145/2993148.2993157 (DOI)000390299900046 ()2-s2.0-85016627866 (Scopus ID)978-1-4503-4556-9 (ISBN)
Conference
18th ACM International Conference on Multimodal Interaction (ICMI), NOV 12-16, 2016, Tokyo, JAPAN
Note

QC 20170119

Available from: 2017-01-19 Created: 2017-01-16 Last updated: 2018-01-13. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-7257-0761
