1 - 8 of 8
  • 1.
    Correia, Filipa
    et al.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Mascarenhas, Samuel F.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Gomes, Samuel
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Arriaga, Patricia
    CIS IUL, Inst Univ Lisboa ISCTE IUL, Lisbon, Portugal.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Prada, Rui
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Melo, Francisco S.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Paiva, Ana
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Exploring Prosociality in Human-Robot Teams. 2019. In: HRI '19: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction, IEEE, 2019, p. 143-151. Conference paper (Refereed)
    Abstract [en]

    This paper explores the role of prosocial behaviour when people team up with robots in a collaborative game that presents a social dilemma similar to a public goods game. An experiment was conducted with the proposed game in which each participant joined a team with a prosocial robot and a selfish robot. In each of the 5 rounds of the game, every player chooses whether to contribute to the team goal (cooperate) or to their own individual goal (defect). The robots' prosociality level affects only their playing strategies: one always cooperates and the other always defects. We conducted a user study with 70 participants at the office of a large corporation, manipulating the game result (winning or losing) in a between-subjects design. Results revealed two important findings: (1) the prosocial robot was rated more positively in terms of its social attributes than the selfish robot, regardless of the game result; (2) the perception of competence, the responsibility attribution (blame/credit), and the preference for a future partner revealed significant differences only in the losing condition. These results raise important considerations for the creation of robotic partners, the understanding of group dynamics and, from a more general perspective, the promotion of a prosocial society.
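
    To make the social-dilemma structure concrete, here is a minimal, illustrative sketch of a public-goods-style round as described in the abstract above. The payoff values, the multiplier, and the function names are assumptions for illustration only and are not taken from the paper.

        # Illustrative sketch of one round of the public-goods-style game:
        # cooperating adds to a shared team goal, defecting adds to the
        # player's own goal. All numbers are assumed for illustration.

        def play_round(choices, multiplier=1.5):
            """choices maps player name -> 'cooperate' or 'defect'."""
            cooperators = [p for p, c in choices.items() if c == "cooperate"]
            team_gain = multiplier * len(cooperators)            # shared team goal
            individual_gain = {p: (1 if c == "defect" else 0)    # private goal
                               for p, c in choices.items()}
            return team_gain, individual_gain

        # A participant teamed with a prosocial robot (always cooperates)
        # and a selfish robot (always defects), over 5 rounds.
        team_total = 0
        for _ in range(5):
            gain, _ = play_round({
                "participant": "cooperate",       # the human may choose either option
                "prosocial_robot": "cooperate",
                "selfish_robot": "defect",
            })
            team_total += gain
        print("Team goal progress after 5 rounds:", team_total)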

  • 2.
    Irfan, Bahar
    et al.
    Univ Plymouth, Ctr Robot & Neural Syst, Plymouth, Devon, England.
    Ramachandran, Aditi
    Yale Univ, Social Robot Lab, New Haven, CT 06520, USA.
    Spaulding, Samuel
    MIT, Personal Robots Grp, Media Lab, Cambridge, MA 02139, USA.
    Glas, Dylan F.
    Huawei, Futurewei Technol, Santa Clara, CA, USA.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Koay, Kheng Lee
    Univ Hertfordshire, Adapt Syst Res Grp, Hatfield, Herts, England.
    Personalization in Long-Term Human-Robot Interaction. 2019. In: HRI '19: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction, IEEE, 2019, p. 685-686. Conference paper (Refereed)
    Abstract [en]

    For practical reasons, most human-robot interaction (HRI) studies focus on short-term interactions between humans and robots. However, such studies do not capture the difficulty of sustaining engagement and interaction quality across long-term interactions. Many real-world robot applications will require repeated interactions and relationship-building over the long term, and personalization and adaptation to users will be necessary to maintain user engagement and to build rapport and trust between the user and the robot. This full-day workshop brings together perspectives from a variety of research areas, including companion robots, elderly care, and educational robots, in order to provide a forum for sharing and discussing innovations, experiences, works-in-progress, and best practices which address the challenges of personalization in long-term HRI.

  • 3.
    Li, Rui
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    van Almkerk, Marc
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    van Waveren, Sanne
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Carter, Elizabeth
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Comparing Human-Robot Proxemics between Virtual Reality and the Real World. 2019. In: HRI '19: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction, IEEE, 2019, p. 431-439. Conference paper (Refereed)
    Abstract [en]

    Virtual Reality (VR) can greatly benefit Human-Robot Interaction (HRI) as a tool for iterating effectively across robot designs. However, possible system limitations of VR could influence the results such that they do not fully reflect real-life encounters with robots. In order to better deploy VR in HRI, we need to establish a basic understanding of the differences between HRI studies in the real world and in VR. This paper investigates the differences between real life and VR with a focus on proxemic preferences, in combination with exploring the effects of visual familiarity and spatial sound within the VR experience. Results suggested that people prefer closer interaction distances with a real, physical robot than with a virtual robot in VR. Additionally, the virtual robot was perceived as more discomforting than the real robot, which could explain the differences in proxemics. Overall, these results indicate that the perception of the robot has to be evaluated before the interaction can be studied. However, the results also suggested that VR settings with different visual familiarities are consistent with each other in how they affect HRI proxemics and virtual robot perceptions, indicating the freedom to study HRI in various scenarios in VR. The effect of spatial sound in VR drew a more complex picture and thus calls for more in-depth research to understand its influence on HRI in VR.

  • 4.
    Sibirtseva, Elena
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Ghadirzadeh, Ali
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL. Intelligent Robotics Research Group, Aalto University, Espoo, Finland.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Björkman, Mårten
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality. 2019. In: Virtual, Augmented and Mixed Reality. Multimodal Interaction: 11th International Conference, VAMR 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings, Springer Verlag, 2019, p. 108-123. Conference paper (Refereed)
    Abstract [en]

    In collaborative tasks, people rely on verbal and non-verbal cues simultaneously to communicate with each other. For human-robot interaction to run smoothly and naturally, a robot should be equipped with the ability to robustly disambiguate referring expressions. In this work, we propose a model that can disambiguate multimodal fetching requests using modalities such as head movements, hand gestures, and speech. We analysed data acquired from mixed reality experiments and formulated the hypothesis that modelling temporal dependencies of events in these three modalities increases the model's predictive power. We evaluated our model, built on a Bayesian framework for interpreting referring expressions, with and without exploiting the temporal prior.
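
    As a rough illustration of how such a Bayesian framework might combine cues, the sketch below fuses per-modality likelihoods over candidate objects with and without a temporal prior. The modalities, candidate objects, and all numbers are invented for illustration and do not come from the paper.

        import numpy as np

        # Toy Bayesian fusion over candidate objects: each modality provides a
        # likelihood vector, combined under a conditional-independence assumption.
        # All values are illustrative, not from the paper.

        def posterior(likelihoods, prior):
            post = prior.copy()
            for lik in likelihoods:
                post = post * lik
            return post / post.sum()

        candidates = ["red_block", "green_block", "blue_block"]
        speech  = np.array([0.6, 0.2, 0.2])   # e.g. "the red one"
        gaze    = np.array([0.5, 0.4, 0.1])   # head movement toward a region
        gesture = np.array([0.4, 0.5, 0.1])   # pointing direction

        uniform_prior  = np.ones(3) / 3
        temporal_prior = np.array([0.2, 0.7, 0.1])  # hypothetical prior from event timing

        print(posterior([speech, gaze, gesture], uniform_prior))
        print(posterior([speech, gaze, gesture], temporal_prior))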

  • 5.
    Sibirtseva, Elena
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Kontogiorgos, Dimosthenis
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Nykvist, Olov
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Karaoguz, Hakan
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Gustafson, Joakim
    KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction. 2018. In: 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2018. Conference paper (Refereed)
    Abstract [en]

    Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods to disambiguate natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions - head-mounted display, projector, and a monitor as the baseline - using objective measures such as time and accuracy, and subjective measures like engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the head-mounted display condition, participants found that modality more engaging than the other two, but overall showed a preference for the projector condition over the monitor and head-mounted display conditions.

  • 6.
    van Waveren, Sanne
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Björklund, Linnéa
    KTH.
    Carter, Elizabeth
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Knock on Wood: The Effects of Material Choice on the Perception of Social Robots. 2019. In: Lecture Notes in Artificial Intelligence series (LNAI), 2019. Conference paper (Refereed)
  • 7.
    van Waveren, Sanne
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Carter, Elizabeth J.
    KTH.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Take one for the team: The effects of error severity in collaborative tasks with social robots. 2019. In: IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery (ACM), 2019, p. 151-158. Conference paper (Refereed)
    Abstract [en]

    We explore the effects of robot failure severity (no failure vs. low-impact vs. high-impact) on people's subjective ratings of the robot. We designed an escape room scenario in which one participant teams up with a remotely-controlled Pepper robot. We manipulated the robot's performance at the end of the game: the robot would either correctly follow the participant's instructions (control condition), fail while still allowing people to complete the task of escaping the room (low-impact condition), or fail in a way that caused the game to be lost (high-impact condition). Results showed no difference across conditions for people's ratings of the robot in terms of warmth, competence, and discomfort. However, people in the low-impact condition had significantly less faith in the robot's robustness in future escape room scenarios. Open-ended questions revealed interesting trends that are worth pursuing in the future: people may view task performance as a team effort and may blame their team or themselves more for a robot failure when it is high-impact than when it is low-impact.

  • 8.
    Vijayan, Aravind Elanjimattathil
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Alexanderson, Simon
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Beskow, Jonas
    KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Using Constrained Optimization for Real-Time Synchronization of Verbal and Nonverbal Robot Behavior. 2018. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, p. 1955-1961. Conference paper (Refereed)
    Abstract [en]

    Most motion re-targeting techniques are grounded in virtual character animation research, which means that they typically assume that the target embodiment has unconstrained joint angular velocities. However, because robots often do have such constraints, traditional re-targeting approaches can introduce irregular delays in the robot's motion. With the goal of ensuring synchronization between verbal and nonverbal behavior, this paper proposes an optimization framework for processing re-targeted motion sequences that addresses constraints such as joint angle and angular velocity limits. The proposed framework was evaluated on a humanoid robot using both objective and subjective metrics. The analysis of the joint motion trajectories provides evidence that our framework successfully performs the desired modifications to ensure verbal and nonverbal behavior synchronization, and results from a perceptual study showed that participants found the robot motion generated by our method more natural, elegant and lifelike than a control condition.
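
    To illustrate what enforcing joint limits on a re-targeted trajectory can look like, the sketch below projects a reference trajectory onto angle and angular-velocity bounds step by step while keeping the original time stamps (and hence the alignment with speech). This is a simplified per-step clipping scheme under assumed limits, not the optimization framework proposed in the paper.

        import numpy as np

        # Simplified sketch: clip a re-targeted joint trajectory to angle and
        # angular-velocity limits without changing its timing. All limits and
        # the example trajectory are assumed for illustration.

        def constrain_trajectory(q_ref, dt, q_min, q_max, v_max):
            """q_ref: array of shape (T, J), reference joint angles sampled every dt seconds."""
            q = np.empty_like(q_ref)
            q[0] = np.clip(q_ref[0], q_min, q_max)
            for t in range(1, len(q_ref)):
                step = np.clip(q_ref[t] - q[t - 1], -v_max * dt, v_max * dt)  # velocity limit
                q[t] = np.clip(q[t - 1] + step, q_min, q_max)                 # angle limit
            return q

        # Example: a single joint asked to move faster than its velocity limit allows.
        q_ref = np.array([[0.0], [1.0], [1.0], [0.0]])
        print(constrain_trajectory(q_ref, dt=0.1, q_min=-1.5, q_max=1.5, v_max=2.0))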
