1 - 30 of 30
  • 1. Castellano, G.
    et al.
    Karpouzis, K.
    Martin, J. -C
    Morency, L. -P
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Riek, L. D.
    5th international workshop on affective interaction in natural environments (AFFINE): Interacting with affective artefacts in the wild (2013). In: Proceedings - 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, ACII 2013, 2013, p. 727. Conference paper (Refereed)
    Abstract [en]

    This workshop covers real-time computational techniques for the recognition and interpretation of human affective and social behaviour, and techniques for synthesis of believable social behaviour supporting real-time adaptive human-agent and human-robot interaction in real-world environments.

  • 2. Corrigan, L. J.
    et al.
    Basedow, C.
    Küster, D.
    Kappas, A.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Castellano, G.
    Mixing implicit and explicit probes: Finding a ground truth for engagement in social human-robot interactions (2014). In: HRI '14 Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction, IEEE Computer Society, 2014, p. 140-141. Conference paper (Refereed)
    Abstract [en]

    In our work we explore the development of a computational model capable of automatically detecting engagement in social human-robot interactions from real-time sensory and contextual input. However, to train the model we need to establish ground truths of engagement from a large corpus of data collected from a study involving task and social-task engagement. Here, we intend to advance the current state of the art by reducing the need for unreliable post-experiment questionnaires and costly, time-consuming annotation through the novel introduction of implicit probes: a non-intrusive, pervasive and embedded method of collecting informative data at different stages of an interaction.

  • 3. Corrigan, L. J.
    et al.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Castellano, G.
    Identifying task engagement: Towards personalised interactions with educational robots (2013). In: Proceedings - 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, ACII 2013, 2013, p. 655-658. Conference paper (Refereed)
    Abstract [en]

    The focus of this project is to design, develop and evaluate a new computational model for automatically detecting change in task engagement. This work will be applied to robotic tutors to enhance and support the learning experience, enabling timely pedagogical and empathic intervention. This work is intended to advance the current state of the art by 1) exploring how to automatically detect engagement with a learning task, 2) designing and developing new approaches to machine learning for adaptive platform-independent modelling, and 3) evaluating its effectiveness for building and maintaining learner engagement across different tutor embodiments, for example a physical and a virtual embodiment.

  • 4. Corrigan, L. J.
    et al.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Küster, D.
    Castellano, G.
    Engagement perception and generation for social robots and virtual agents (2016). In: Toward Robotic Socially Believable Behaving Systems - Volume I, Springer Science+Business Media B.V., 2016, p. 29-51. Chapter in book (Refereed)
    Abstract [en]

    Technology is the future, woven into every aspect of our lives, but how are we to interact with all this technology and what happens when problems arise? Artificial agents, such as virtual characters and social robots, could offer a realistic solution to help facilitate interactions between humans and machines, if only these agents were better equipped and more informed to hold up their end of an interaction. People and machines can interact to do things together, but in order to get the most out of every interaction, the agent must be able to make reasonable judgements regarding your intent and goals for the interaction. We explore the concept of engagement from the different perspectives of the human and the agent. More specifically, we study how the agent perceives the engagement state of the other interactant, and how it generates its own representation of engaging behaviour. In this chapter, we discuss the different stages and components of engagement that have been suggested in the literature from the applied perspective of a case study of engagement for social robotics, as well as in the context of another study that was focused on gaze-related engagement with virtual characters.

  • 5. Corrigan, Lee J.
    et al.
    Basedow, Christina
    Küster, Dennis
    Kappas, Arvid
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Castellano, Ginevra
    Uppsala University.
    Perception matters! Engagement in task orientated social robotics (2015). In: Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication, 2015, p. 375-380. Conference paper (Refereed)
    Abstract [en]

    Engagement in task orientated social robotics is a complex phenomenon, consisting of both task and social elements. Previous work in this area tends to focus on these aspects in isolation, without consideration for the positive or negative effects one might cause the other. We explore both, in an attempt to understand how engagement with the task might affect the social relationship with the robot, and vice versa. In this paper, we describe the analysis of participant self-report data collected during an exploratory pilot study used to evaluate users’ “perception of engagement”. We discuss how the results of our analysis suggest that ultimately, it was the users’ own perception of the robot’s characteristics, such as friendliness, helpfulness and attentiveness, which led to sustained engagement with both the task and the robot.

  • 6.
    Li, Chengjie
    et al.
    KTH.
    Androulakaki, Theofronia
    KTH.
    Gao, Alex Yuan
    Yang, Fangkai
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    Saikia, Himangshu
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    Peters, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    Skantze, Gabriel
    KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH.
    Effects of Posture and Embodiment on Social Distance in Human-Agent Interaction in Mixed Reality (2018). In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, ACM Digital Library, 2018, p. 191-196. Conference paper (Refereed)
    Abstract [en]

    Mixed reality offers new potentials for social interaction experiences with virtual agents. In addition, it can be used to experiment with the design of physical robots. However, while previous studies have investigated comfortable social distances between humans and artificial agents in real and virtual environments, there is little data with regard to mixed reality environments. In this paper, we conducted an experiment in which participants were asked to walk up to an agent to ask a question, in order to investigate the social distances maintained, as well as the participants' experience of the interaction. We manipulated both the embodiment of the agent (robot vs. human and virtual vs. physical) and the closed vs. open posture of the agent. The virtual agent was displayed using a mixed reality headset. Our experiment involved 35 participants in a within-subject design. We show that, in the context of social interactions, mixed reality fares well against physical environments, and robots fare well against humans, barring a few technical challenges.

  • 7. Mancini, M.
    et al.
    Ermilov, A.
    Castellano, G.
    Liarokapis, F.
    Varni, G.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Effects of gender mapping on the perception of emotion from upper body movement in virtual characters (2014). In: 6th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2014 - Held as Part of 16th International Conference on Human-Computer Interaction, HCI International 2014, 2014, no PART 1, p. 263-273. Conference paper (Refereed)
    Abstract [en]

    Despite recent advancements in our understanding of the human perception of the emotional behaviour of embodied artificial entities in virtual reality environments, little remains known about various specifics relating to the effect of gender mapping on the perception of emotion from body movement. In this paper, a pilot experiment is presented investigating the effects of gender congruency on the perception of emotion from upper body movements. Male and female actors were enrolled to conduct a number of gestures within six general categories of emotion. These motions were mapped onto virtual characters with male and female embodiments. According to the gender congruency condition, the motions of male actors were mapped onto male characters (congruent) or onto female characters (incongruent) and vice-versa. A significant effect of gender mapping was found in the ratings of perception of three emotions (anger, fear and happiness), suggesting that gender may be an important aspect to be considered in the perception, and hence generation, of some emotional behaviours.

  • 8. O'Connor, S.
    et al.
    Liarokapis, F.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    A perceptual study into the behaviour of autonomous agents within a virtual urban environment (2013). In: 2013 IEEE 14th International Symposium on a World of Wireless, Mobile and Multimedia Networks, WoWMoM 2013, IEEE, 2013, p. 6583494. Conference paper (Refereed)
    Abstract [en]

    Simulating vast crowds of autonomous agents within a procedurally generated virtual environment is a challenging endeavour from a technical perspective; however, it becomes even more difficult when the subjective nature of perception is also taken into account. Agent behaviour is the product of artificial intelligence systems working in tandem; however, the sophistication of these systems is not a guarantee of achieving believable behaviour. Within locations based upon reality, such as an urban environment, the perceived realism of agent behaviour becomes even harder to achieve. This paper presents the development of a crowd simulation that is based upon a real-life urban environment, which is then subjected to perceptual experimentation to identify features of behaviour which can be linked to perceived realism. This research is expected to feed back into the development processes of inhabited cities, especially those attempting to simulate perceptually realistic agents, as it will highlight features of behaviour that are important to implement. The perceptual experimentation methodologies presented can also be adapted and potentially utilised to test other types of crowd simulation, whether for the purposes of computer games or for urban planning and health and safety.

  • 9. O'Connor, S.
    et al.
    Liarokapis, F.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    An initial study to assess the perceived realism of agent crowd behaviour in a virtual city (2013). In: Int. Conf. Games Virtual Worlds Serious Appl., VS-GAMES, 2013. Conference paper (Refereed)
    Abstract [en]

    This paper examines the development of a crowd simulation in a virtual city, and a perceptual experiment to identify features of behaviour which can be linked to perceived realism. This research is expected to feed back into the development processes of simulating inhabited locations, by identifying the key features which need to be implemented to achieve more perceptually realistic crowd behaviour. The perceptual experimentation methodologies presented can be adapted and potentially utilised to test other types of crowd simulation, for application within computer games or more specific simulations such as for urban planning or health and safety purposes.

  • 10. Paetzel, M.
    et al.
    Castellano, G.
    Varni, G.
    Hupont, I.
    Chetouani, M.
    Peters, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    The Attribution of Emotional State - How Embodiment Features and Social Traits Affect the Perception of an Artificial Agent (2018). In: RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication, Institute of Electrical and Electronics Engineers Inc., 2018, p. 495-502. Conference paper (Refereed)
    Abstract [en]

    Understanding emotional states is a challenging task which frequently leads to misinterpretation even in human observers. While the perception of emotions has been studied extensively in human psychology, little is known about what factors influence the human perception of emotions in robots and virtual characters. In this paper, we build on the Brunswik lens model to investigate the influence of (a) the agent's embodiment (a 2D virtual character, a 3D blended embodiment, a recording of the 3D platform, and a recording of a human) and (b) the level of human-likeness on people's ability to interpret emotional facial expressions in an agent. In addition, we measure social traits of the human observers and analyze how they correlate with success in recognizing emotional expressions. We find that interpersonal differences play a minor role in the perception of emotional states. However, both embodiment and human-likeness, as well as related perceptual dimensions such as perceived social presence and uncanniness, have an effect on the attribution of emotional states.

  • 11. Paetzel, M.
    et al.
    Hupont, I.
    Varni, G.
    Chetouani, M.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz). KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Castellano, G.
    Exploring the link between self-assessed mimicry and embodiment in HRI (2017). In: ACM/IEEE International Conference on Human-Robot Interaction, IEEE Computer Society, 2017, p. 245-246. Conference paper (Refereed)
    Abstract [en]

    This work explores the relationship between a robot's embodiment and people's ability to mimic its behavior. It presents a study in which participants were asked to mimic a 3D mixed-embodied robotic head and a 2D version of the same character. Quantitative and qualitative analyses were performed on questionnaire data. Quantitative results show no significant influence of the character's embodiment on the self-assessed ability to mimic it, while qualitative results indicate a preference for mimicking the robotic head.

  • 12. Paetzel, Maike
    et al.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Nystrom, Ingela
    Castellano, Ginevra
    Effects of Multimodal Cues on Children's Perception of Uncanniness in a Social Robot (2016). In: ICMI'16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Association for Computing Machinery, 2016, p. 297-301. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the influence of multimodal incongruent gender cues on the perception of a robot's uncanniness and gender in children. The back-projected robot head Furhat was equipped with a female and a male face texture and voice synthesizer, and the voice and facial cues were tested in congruent and incongruent combinations. 106 children between the ages of 8 and 13 participated in the study. Results show that multimodal incongruent cues do not trigger the feeling of uncanniness in children. These results are significant as they support other recent research showing that the perception of uncanniness cannot be triggered by a categorical ambiguity in the robot. In addition, we found that children rely on auditory cues much more strongly than on facial cues when assigning a gender to the robot if presented with incongruent cues. These findings have implications for robot design, as it seems possible to change the gender of a robot by changing only its voice, without creating a feeling of uncanniness in a child.

  • 13. Paetzel, Maike
    et al.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Nyström, Ingela
    Castellano, Ginevra
    Congruency Matters: How Ambiguous Gender Cues Increase a Robot's Uncanniness (2016). In: Social Robotics (ICSR 2016), Springer, 2016, p. 402-412. Conference paper (Refereed)
    Abstract [en]

    Most research on the uncanny valley effect is concerned with the influence of human-likeness and realism as a trigger of an uncanny feeling in humans. There has been a lack of investigation on the effect of other dimensions, for example, gender. Back-projected robotic heads allow us to alter visual cues in the appearance of the robot in order to investigate how the perception of it changes. In this paper, we study the influence of gender on the perceived uncanniness. We conducted an experiment with 48 participants in which we used different modalities of interaction to change the strength of the gender cues in the robot. Results show that incongruence in the gender cues of the robot, and not its specific gender, influences the uncanniness of the back-projected robotic head. This finding has potential implications for both the perceptual mismatch and categorization ambiguity theory as a general explanation of the uncanny valley effect.

  • 14. Paetzel, Maike
    et al.
    Varni, Giovanna
    Hupont, Isabelle
    Chetouani, Mohamed
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Castellano, Ginevra
    Investigating the Influence of Embodiment on Facial Mimicry in HRI Using Computer Vision-Based Measures (2017). In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) / [ed] Howard, A., Suzuki, K., Zollo, L., IEEE, 2017, p. 579-586. Conference paper (Refereed)
    Abstract [en]

    Mimicry plays an important role in social interaction. In human communication, it is used to establish rapport and bonding with other humans, as well as with robots and virtual characters. However, little is known about the underlying factors that elicit mimicry in humans when interacting with a robot. In this work, we study the influence of embodiment on participants' ability to mimic a social character. Participants were asked to intentionally mimic the laughing behavior of the Furhat mixed embodied robotic head and a 2D virtual version of the same character. To explore the effect of embodiment, we present two novel approaches to automatically assess people's ability to mimic based solely on videos of their facial expressions. In contrast to participants' self-assessment, the analysis of video recordings suggests a better ability to mimic when people interact with the 2D embodiment.
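
    The paper's two measures are not detailed in this abstract, but the general idea of scoring mimicry from video alone can be sketched. A minimal illustration follows, assuming per-frame facial action-unit intensities (e.g. smile-related AU12, as produced by a tool such as OpenFace) have already been extracted for both interactants; the function name and the lagged-correlation measure are illustrative assumptions, not the paper's method.

    ```python
    # Illustrative sketch: score mimicry as the peak lagged correlation
    # between two facial action-unit intensity time series, one per
    # interactant, extracted frame-by-frame from the videos.
    import numpy as np

    def mimicry_score(stimulus_au, response_au, max_lag=30):
        """Peak normalized cross-correlation over lags 0..max_lag frames,
        allowing the participant's response to trail the stimulus."""
        s = np.asarray(stimulus_au, dtype=float)
        r = np.asarray(response_au, dtype=float)
        t = min(len(s), len(r))          # series assumed roughly aligned
        s, r = s[:t], r[:t]
        s = (s - s.mean()) / (s.std() + 1e-8)
        r = (r - r.mean()) / (r.std() + 1e-8)
        best = -1.0
        for lag in range(max_lag + 1):
            n = t - lag
            if n < 2:
                break
            # Correlate the stimulus with the response shifted back by `lag`.
            best = max(best, float(np.dot(s[:n], r[lag:lag + n]) / n))
        return best  # higher = closer, appropriately delayed imitation
    ```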

  • 15.
    Palmberg, Robin
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Qureshi, Adam
    When Facial Expressions Dominate Emotion Perception in Groups of Virtual Characters (2017). In: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings, IEEE, 2017, p. 157-160. Conference paper (Refereed)
    Abstract [en]

    Virtual characters play a central role in populating virtual worlds, whether they act as conduits for human expressions as avatars or are automatically controlled by a machine as agents. In modern game-related scenarios, it is economical to assemble virtual characters from varying sources of appearances and motions. However, doing so may have unintended consequences with respect to how people perceive their expressions. This paper presents an initial study investigating the impact of facial expressions and full body motions from varying sources on the perception of intense positive and negative emotional expressions in small groups of virtual characters. 21 participants viewed a small group of three virtual characters engaged in intense animated behaviours as their face and body motions were varied between positive, neutral and negative valence expressions. While emotion perception was based on both the bodies and the faces of the characters, we found a strong impact of the valence of facial expressions on the perception of emotions in the group. We discuss these findings in relation to the combination of manually created and automatically defined motion sources, highlighting implications for the animation of virtual characters.

  • 16.
    Peters, Christopher
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Doggett, Michael
    Foreword to special section on SIGGRAD 2015 (2016). In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 57, p. A1-A2. Article in journal (Refereed)
  • 17.
    Peters, Christopher E.
    et al.
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Anderson, Eike Falk
    Bournemouth University.
    The Four I's Recipe for Cooking Up Computer Graphics Exercises and Assessments (2014). Conference paper (Refereed)
    Abstract [en]

    The design of meaningful student activities, such as lab exercises and assignments, is a core element of computer graphics pedagogy. Here, we briefly describe our efforts towards making the process of defining and structuring computer graphics activities more explicit. We focus on four main activity categories that are building blocks for practical course design: Independent, Iterative, Incremental and Integrative. These "Four I's" of computer graphics activity provide the fundamental ingredients for explicitly defining the design of activity-oriented computer graphics courses, with the potential to deliver significant artefacts that may, for example, constitute a portfolio of work for assessment or presentation to employers. The categorisations are intended as the first steps towards more clearly structuring and communicating exercise specifications in collaborative course development settings.

  • 18.
    Peters, Christopher E.
    et al.
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Kjelldahl, Lars
    KTH, School of Computer Science and Communication (CSC).
    SIGRAD 2015: Proceedings of the annual meeting of the Swedish Computer Graphics Association (SIGRAD) (2015). Conference proceedings (editor) (Refereed)
  • 19.
    Peters, Christopher
    et al.
    KTH.
    Li, Chengjie
    KTH.
    Yang, Fangkai
    KTH.
    Avramova, Vanya
    KTH.
    Skantze, Gabriel
    KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH.
    Investigating Social Distances between Humans, Virtual Humans and Virtual Robots in Mixed Reality (2018). In: Proceedings of 17th International Conference on Autonomous Agents and MultiAgent Systems, 2018, p. 2247-2249. Conference paper (Refereed)
    Abstract [en]

    Mixed reality environments offer new potentials for the design of compelling social interaction experiences with virtual characters. In this paper, we summarise initial experiments we are conducting in which we measure comfortable social distances between humans, virtual humans and virtual robots in mixed reality environments. We consider a scenario in which participants walk within a comfortable distance of a virtual character that has its appearance varied between a male and female human, and a standard- and human-height virtual Pepper robot. Our studies in mixed reality thus far indicate that humans adopt social zones with artificial agents that are similar in manner to human-human social interactions and interactions in virtual reality.

  • 20. Qureshi, A.
    et al.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Apperly, I.
    How does varying gaze direction affect interaction between a virtual agent and participant in an on-line communication scenario? (2014). In: 6th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2014 - Held as Part of 16th International Conference on Human-Computer Interaction, HCI International 2014, 2014, Vol. 8525, no PART 1, p. 305-316. Conference paper (Refereed)
    Abstract [en]

    Computer-based perspective-taking tasks in cognitive psychology often utilise static images and auditory instructions to assess online communication. Results are then explained in terms of theory of mind (the ability to understand that other agents have different beliefs, desires and knowledge to oneself). The current study utilises a scenario in which participants were required to select objects in a grid after listening to instructions from an on-screen director. The director was positioned behind the grid from the participants' view. As objects in some slots were concealed from the view of the director, participants needed to take the perspective of the director into account in order to respond accurately. Results showed that participants reliably made errors, attributable to not using the information from the director's perspective efficiently, rather than not being able to take the director's perspective. However, the fact that the director was represented by a static sprite meant that, even for a laboratory-based experiment, the level of realism was low. This could have affected the level of participant engagement with the director and the task. This study, a collaboration between computer science and psychology, advances the static sprite model by incorporating head movement into a more realistic on-screen director, with the aims of a) improving engagement and b) investigating whether gaze direction affects the accuracy and response times of object selection. Results suggest that gaze direction can influence the speed of accurate object selection, but only slightly and in certain situations; specifically, those complex enough to warrant the participant paying additional attention to gaze direction and those that highlight perspective differences between themselves and the director. This in turn suggests that engagement with a virtual agent could be improved by taking these factors into account.

  • 21. Qureshi, A.
    et al.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Apperly, I.
    Interaction and engagement between an agent and participant in an on-line communication paradigm as mediated by gaze direction (2013). In: Proceedings of the 2013 Inputs-Outputs Conference: An Interdisciplinary Conference on Engagement in HCI and Performance, Association for Computing Machinery (ACM), 2013, p. 2557603. Conference paper (Refereed)
    Abstract [en]

    Computer-based perspective-taking tasks in cognitive psychology often utilise static images to assess on-line communication [1], explaining results in terms of theory of mind (the ability to understand that other agents have different beliefs, desires and knowledge to oneself [10]). The current study utilises the method used in [1], in which participants are required to respond correctly to instructions from an on-screen director by taking the perspective of the director into account. Results showed that participants reliably made errors, attributable to not using the information from the director's perspective efficiently, rather than not being able to take the director's perspective. However, the fact that the director was represented by a static sprite could mean that participant engagement with the director and the task was low. This study, a collaboration between computer science and psychology, advances this model by incorporating head movement into a more realistic on-screen director [9], potentially improving engagement. Whether the gaze direction of the director facilitated or hindered participants in object selection was investigated, and results will be discussed in terms of the level of engagement shown by the participant with the director, as measured by their efficiency in object selection, and how this varied with gaze direction. Further adaptations of the model (body movement, blinking) will also be discussed as ways of improving engagement.

  • 22.
    Ramos Carretero, Miguel
    et al.
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Qureshi, Adam
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Evaluating the perception of group emotion from full body movements in the context of virtual crowds (2014). In: Proceedings of the ACM Symposium on Applied Perception, SAP 2014, ACM Digital Library, 2014, p. 7-14. Conference paper (Refereed)
    Abstract [en]

    Simulating the behavior of crowds of artificial entities that have humanoid embodiments has become an important element in computer graphics and special effects. However, many important questions remain in relation to the perception of social behavior and expression of emotions in virtual crowds. Specifically, few studies have considered the role of background context on the perception of the full-body emotion expressed by sub-constituents of the crowd, i.e., individuals and small groups. In this paper, we present the results of perceptual studies in which animated scenes of expressive virtual crowd behavior were rated in terms of their valence by participants. The behaviors of a task-irrelevant crowd in the background were altered between neutral, happy and sad in order to investigate effects on the perception of emotion from task-relevant individuals in the foreground. Effects of the task-irrelevant background on ratings of foreground characters were found, including cases that accompanied negatively valenced stimuli.

  • 23.
    Ravichandran, Naresh Balaji
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for High Performance Computing, PDC.
    Yang, Fangkai
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    Peters, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    Lansner, Anders
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    Herman, Pawel
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    Pedestrian simulation as multi-objective reinforcement learning (2018). In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA 2018, 2018, p. 307-312. Conference paper (Refereed)
    Abstract [en]

    Modelling and simulation of pedestrian crowds require agents to reach pre-determined goals and avoid collisions with static obstacles and dynamic pedestrians, while maintaining natural gait behaviour. We model pedestrians as autonomous, learning, and reactive agents employing Reinforcement Learning (RL). Typical RL-based agent simulations suffer from poor generalization due to the handcrafted reward functions used to ensure realistic behaviour. In this work, we model pedestrians in a modular framework that integrates the navigation and collision-avoidance tasks as separate modules. Each module has its own independent state space and reward, but all modules share a common action space. Empirical results suggest that such modular learning models can show satisfactory performance without parameter tuning, and we compare the framework with state-of-the-art crowd simulation methods.
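
    The modular decomposition described above lends itself to a compact sketch. The following is a minimal, illustrative Python outline, not the authors' implementation: the tabular representation, state sizes, and the additive combination of module Q-values are all assumptions made for the example.

    ```python
    # Illustrative sketch: two Q-learning modules with independent states and
    # rewards but a shared action space, combined by summing Q-values
    # (Q-decomposition) to select a single action per step.
    import numpy as np

    ACTIONS = range(8)  # shared action space: 8 movement directions (assumed)

    class QModule:
        """One task module, e.g. navigation or collision avoidance."""
        def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95):
            self.q = np.zeros((n_states, n_actions))
            self.alpha, self.gamma = alpha, gamma

        def update(self, s, a, reward, s_next):
            # Standard tabular Q-learning update on this module's own reward.
            td_target = reward + self.gamma * self.q[s_next].max()
            self.q[s, a] += self.alpha * (td_target - self.q[s, a])

    def select_action(modules, states, eps=0.1):
        """Greedy over the sum of module Q-values; eps-greedy exploration."""
        if np.random.rand() < eps:
            return int(np.random.choice(len(ACTIONS)))
        combined = sum(m.q[s] for m, s in zip(modules, states))
        return int(np.argmax(combined))

    # Usage: one module scores actions by progress toward the goal, the other
    # by clearance from obstacles/pedestrians; each learns from its own reward
    # while the agent executes one shared action per step.
    navigation = QModule(n_states=100, n_actions=len(ACTIONS))
    avoidance = QModule(n_states=100, n_actions=len(ACTIONS))
    ```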

  • 24.
    Romero, Mario
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Andrée, Jonas
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Thuresson, Björn
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Designing and Evaluating Embodied Sculpting: a Touching Experience (2014). Conference paper (Refereed)
    Abstract [en]

    We discuss the design and evaluation of embodied sculpting, the mediated experience of creating a virtual object with volume which users can see, hear, and touch as they mold the material with their body. Users’ digitized bodies share the virtual space of the digital model through a depth-sensor camera. They can use their hands, bodies, or any object to shape the sculpture. As they mold the model, they see a real-time rendering of it and receive sound and haptic feedback of the interaction. We discuss the opportunities and challenges of both designing for haptic embodiment and evaluating it through haptic experimentation.
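
    One plausible way to realise such depth-camera sculpting can be sketched as voxel carving, although the abstract does not specify the authors' implementation; everything below (the grid size, the carve function, the world_to_voxel mapping) is an illustrative assumption.

    ```python
    # Hypothetical sketch: back-project each depth-camera pixel into the
    # sculpture's voxel grid and carve away voxels the user's body touches.
    import numpy as np

    GRID = 64
    voxels = np.ones((GRID, GRID, GRID), dtype=bool)  # solid block of "clay"

    def carve(depth_m, fx, fy, cx, cy, world_to_voxel):
        """depth_m: HxW depth image in metres; fx, fy, cx, cy: camera
        intrinsics; world_to_voxel: calibration from camera space to
        (possibly fractional) grid indices."""
        h, w = depth_m.shape
        us, vs = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_m
        x = (us - cx) * z / fx  # back-project each pixel to camera space
        y = (vs - cy) * z / fy
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        idx = world_to_voxel(pts)  # one row of grid coordinates per pixel
        keep = (idx >= 0).all(axis=1) & (idx < GRID).all(axis=1)
        i, j, k = idx[keep].astype(int).T
        voxels[i, j, k] = False  # the body has passed through these cells
    ```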

  • 25.
    Romero, Mario
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). Georgia Institute of Technology.
    Thuresson, Björn
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Landazuri, Natalia
    Expo-Based Learning (EBL): Augmenting Project-Based Learning with Large Public Presentations (2015). Conference proceedings (editor) (Refereed)
  • 26.
    Romero, Mario
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Thuresson, Björn
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Kis, Filip
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Coppard, J.
    Andrée, Jenny
    KTH.
    Landázuri, N.
    Augmenting PBL with large public presentations: A case study in interactive graphics pedagogy (2014). In: ITICSE 2014 - Proceedings of the 2014 Innovation and Technology in Computer Science Education Conference, 2014, p. 15-20. Conference paper (Refereed)
    Abstract [en]

    We present a case study analyzing and discussing the effects of introducing a requirement for public outreach of original student work into the project-based learning of Advanced Graphics and Interaction (AGI) at KTH Royal Institute of Technology. We propose Expo-Based Learning as Project-Based Learning augmented with the constructively aligned goal of achieving public outreach beyond the course. We promote this outreach through three challenges: 1) large public presentations; 2) multidisciplinary collaboration; and 3) professional portfolio building. We demonstrate that the introduction of these challenges, especially the public presentations, had a lasting positive impact on the intended technical learning outcomes of AGI, with the added benefit of fostering teamwork, presentation skills, timeliness, accountability, self-motivation, technical expertise, and professionalism.

  • 27. Ruhland, Kerstin
    et al.
    Andrist, Sean
    Badler, Jeremy B.
    Peters, Christopher E.
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Badler, Norman I.
    Gleicher, Michael
    Mutlu, Bilge
    McDonnell, Rachel
    Look me in the Eyes: A Survey of Eye and Gaze Animation for Virtual Agents and Artificial Systems (2014). Conference paper (Refereed)
    Abstract [en]

    A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: 'The face is the portrait of the mind; the eyes, its informers.' This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made on tackling this challenging task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye-gaze, during the expression of emotion or during conversation, and how they are synthesised in computer graphics and robotics.

  • 28.
    Ruhland, Kerstin
    et al.
    Trinity College Dublin.
    Peters, Christopher E.
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Andrist, Sean
    University of Wisconsin–Madison.
    Badler, Jeremy B.
    Université catholique de Louvain.
    Badler, Norman I.
    University of Pennsylvania.
    Gleicher, Michael
    University of Wisconsin–Madison.
    Mutlu, Bilge
    University of Wisconsin–Madison.
    McDonnell, Rachel
    Trinity College Dublin.
    A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception (2015). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no 6, p. 299-326. Article, review/survey (Refereed)
    Abstract [en]

    A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: ‘The face is the portrait of the mind; the eyes, its informers’. This presents a significant challenge for Computer Graphics researchers who generate artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human–human interactions. This review article provides an overview of the efforts made on tackling this demanding task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We begin with a discussion of the movement of the eyeballs, eyelids and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Furthermore, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation. We discuss how these findings are synthesized in computer graphics and can be utilized in the domains of Human–Robot Interaction and Human–Computer Interaction for allowing humans to interact with virtual agents and other artificial entities. We conclude with a summary of guidelines for animating the eye and head from the perspective of a character animator.

  • 29.
    Yang, Fangkai
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Li, Chengjie
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Palmberg, Robin
    KTH, School of Computer Science and Communication (CSC).
    Van der Heide, Ewoud
    KTH, School of Computer Science and Communication (CSC).
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Expressive Virtual Characters for Social Demonstration Games (2017). In: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings, IEEE, 2017, p. 217-224. Conference paper (Refereed)
    Abstract [en]

    Virtual characters are an integral part of many game and learning environments and have practical applications as tutors, demonstrators or even representations of the user. However, creating virtual character behaviours can be a time-consuming and complex task requiring substantial technical expertise. To accelerate and better enable the use of virtual characters in social games, we present a virtual character behaviour toolkit for the development of expressive virtual characters. It is a middleware toolkit which sits on top of the game engine, with a focus on providing high-level character behaviours to quickly create social games. The toolkit can be adapted to a wide range of scenarios related to social interactions with individuals and groups at multiple distances in the virtual environment, and supports customization and control of facial expressions, body animations and group formations. We describe the design of the toolkit, provide an exemplar of a small game that is being created with it, and outline our intended future work on the system.
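
    The abstract does not specify the toolkit's interface, but a middleware layer of this kind typically exposes high-level behaviours as simple calls on top of engine objects. The sketch below is purely hypothetical: every class and method name is an assumption used to illustrate high-level control of expressions, animations and formations.

    ```python
    # Hypothetical API sketch, not the toolkit's actual interface.
    class ExpressiveCharacter:
        """Wraps a game-engine entity and exposes high-level social behaviours."""

        def __init__(self, engine_entity):
            self.entity = engine_entity  # handle to the underlying engine object

        def set_facial_expression(self, emotion: str, intensity: float) -> None:
            """Map a named emotion (e.g. 'happy') to engine blendshape weights."""
            ...

        def play_body_animation(self, name: str, loop: bool = False) -> None:
            """Trigger a full-body animation clip via the engine's animator."""
            ...

    class GroupFormation:
        """Arrange several characters in a conversational formation."""

        def __init__(self, members: list, spacing_m: float = 1.2):
            self.members, self.spacing_m = members, spacing_m

        def face_point(self, x: float, y: float) -> None:
            """Orient all members toward a shared focus point, e.g. the user."""
            ...
    ```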

  • 30.
    Yang, Fangkai
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    Qureshi, A.
    Shabo, Jack
    KTH.
    Peters, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    Do you see groups?: The impact of crowd density and viewpoint on the perception of groups (2018). In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA 2018, Association for Computing Machinery (ACM), 2018, p. 313-318. Conference paper (Refereed)
    Abstract [en]

    Agent-based crowd simulation in virtual environments is of great utility in a variety of domains, from the entertainment industry to serious applications including mobile robots and swarms. Many studies of crowd behavior simulation do not consider the fact that people tend to congregate in smaller social gatherings, such as friends or families, rather than walking alone. Based on a real-time crowd simulator, implemented as a unilateral incompressible fluid and augmented with group behaviors, a perceptual study was conducted to determine the impact of groups on the perception of crowds at various densities from different camera views. If groups cannot be perceived under certain circumstances, then it may not be necessary to simulate them, which reduces the amount of computation, an important issue in real-time simulations. This study provides researchers with a reference for designing better algorithms to simulate realistic group behaviors.
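
    The optimisation the abstract motivates, skipping group simulation whenever groups cannot be perceived anyway, can be illustrated with a short sketch. The thresholds and names below are placeholder assumptions; a perceptual study such as this one is precisely what would supply empirically grounded values for them (and for additional terms such as camera viewpoint, omitted here).

    ```python
    # Illustrative sketch: gate group-behavior updates on perceivability.
    def should_simulate_groups(density_per_m2: float,
                               camera_distance_m: float,
                               density_cutoff: float = 2.0,
                               distance_cutoff_m: float = 50.0) -> bool:
        """Return False when groups are unlikely to be visible to the viewer."""
        if density_per_m2 > density_cutoff:        # dense crowds mask grouping
            return False
        if camera_distance_m > distance_cutoff_m:  # too far away to resolve
            return False
        return True

    # Per frame: fall back to cheaper individual steering when groups are hidden.
    if should_simulate_groups(density_per_m2=1.2, camera_distance_m=20.0):
        pass  # run group-aware steering (cohesion, shared goals, formations)
    else:
        pass  # run individual steering only
    ```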
