1 - 11 of 11
  • 1.
    Gross, James
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Teknisk informationsvetenskap.
    Törngren, Martin
    KTH, Skolan för industriell teknik och management (ITM), Maskinkonstruktion, Mekatronik och inbyggda styrsystem.
    Dán, György
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Datavetenskap, Nätverk och systemteknik.
    Broman, David
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Datavetenskap, Programvaruteknik och datorsystem, SCS.
    Herzog, Erik
    Leite, Iolanda
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Ramakrishna, Raksha
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Datavetenskap, Nätverk och systemteknik.
    Stower, Rebecca
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Thompson, Haydn
    TECoSA – Trends, Drivers, and Strategic Directions for Trustworthy Edge Computing in Industrial Applications. 2022. In: INSIGHT, ISSN 2156-485X, Vol. 25, no. 4, pp. 29-34. Article in journal (Refereed)
    Abstract [en]

    TECoSA – a university-based research center in collaboration with industry – was established early in 2020, focusing on Trustworthy Edge Computing Systems and Applications. This article summarizes and assesses the current trends and drivers regarding edge computing. In our analysis, edge computing provided by mobile network operators will initially be the dominant form of this new computing paradigm for the coming decade. These insights form the basis for the research agenda of the TECoSA center, highlighting more advanced use cases, including AR/VR/Cognitive Assistance, cyber-physical systems, and distributed machine learning. The article further elaborates on the identified strategic directions given these trends, emphasizing testbeds and collaborative multidisciplinary research.

    Full text (pdf)
    TECoSA position paper
  • 2.
    Melsión, Gaspar Isaac
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Stower, Rebecca
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Winkle, Katie
    Uppsala Univ, Dept Informat Technol, Uppsala, Sweden.
    Leite, Iolanda
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    What's at Stake?: Robot explanations matter for high but not low-stake scenarios. 2023. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 2421-2426. Conference paper (Refereed)
    Abstract [en]

    Although the field of Explainable Artificial Intelligence (XAI) in Human-Robot Interaction is gathering increasing attention, how well different explanations compare across HRI scenarios is still not well understood. We conducted an exploratory online study with 335 participants analysing the interaction between type of explanation (counterfactual, feature-based, and no explanation), the stake of the scenario (high, low) and the application scenario (healthcare, industry). Participants viewed one of 12 different vignettes depicting a combination of these three factors and rated their system understanding and trust in the robot. Compared to no explanation, both counterfactual and feature-based explanations improved system understanding and performance trust (but not moral trust). Additionally, when no explanation was present, high-stake scenarios led to significantly worse performance trust and system understanding. These findings suggest that explanations can be used to calibrate users' perceptions of the robot in high-stake scenarios.

  • 3.
    Morillo-Mendez, Lucas
    et al.
    Örebro Univ, Ctr Appl Autonomous Sensor Syst, Örebro, Sweden.
    Stower, Rebecca
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Sleat, Alex
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Schreiter, Tim
    Örebro Univ, Ctr Appl Autonomous Sensor Syst, Örebro, Sweden.
    Leite, Iolanda
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Mozos, Oscar Martinez
    Örebro Univ, Ctr Appl Autonomous Sensor Syst, Örebro, Sweden.
    Schrooten, Martien G. S.
    Örebro Univ, Sch Behav Social & Legal Sci, Örebro, Sweden.
    Can the robot "see" what I see?: Robot gaze drives attention depending on mental state attribution. 2023. In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 14, article id 1215771. Article in journal (Refereed)
    Abstract [en]

    Mentalizing, where humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although the gaze from a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head to the screen at its left or right. Their task was to respond to targets that appeared either at the screen the robot gazed at or at the other screen. In the baseline condition, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets at the screen the robot gazed at than to targets at the non-gazed screen (i.e., the gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions, although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.

  • 4.
    Rahimzadagan, Noah
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    Vahs, Matti
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Leite, Iolanda
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Stower, Rebecca
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Drone Fail Me Now: How Drone Failures Affect Trust and Risk-Taking Decisions. 2024. In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2024, pp. 862-866. Conference paper (Refereed)
    Abstract [en]

    So far, research on drone failures has been mostly limited to understanding the technical causes of failures and recovery strategies. In contrast, there is little work looking at how failures of drones are perceived by users. To address this gap, we conducted a real-world study in which participants experienced drone failures leading to monetary loss whilst navigating a drone over an obstacle course. We tested 46 participants, each of whom experienced both a failure and a failure-free (control) interaction. Participants' trust in the drone, their enjoyment of the interaction, perceived control, and future use intentions were all negatively impacted by drone failures. However, risk-taking decisions during the interaction were not affected. These findings suggest that experiencing a failure whilst operating a drone in real time is detrimental to participants' subjective experience of the interaction.

  • 5.
    Rudaz, Damien
    et al.
    Telecom Paris, Dept Econ & Social Sci, Paris, France; Inst Polytech Paris, Paris, France.
    Tatarian, Karen
    Sorbonne Univ, Inst Intelligent Syst & Robot, Paris, France.
    Stower, Rebecca
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. Sorbonne Univ, Inst Intelligent Syst & Robot, Paris, France.
    Licoppe, Christian
    Sorbonne Univ, Inst Intelligent Syst & Robot, Paris, France.
    From Inanimate Object to Agent: Impact of Pre-beginnings on the Emergence of Greetings with a Robot. 2023. In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no. 3, article id 29. Article in journal (Refereed)
    Abstract [en]

    The very first moments of co-presence, during which a robot appears to a participant for the first time, are often "off-the-record" in the data collected from human-robot experiments (video recordings, motion tracking, methodology sections, etc.). Yet, this "pre-beginning" phase, well documented in the case of human-human interactions, is not an interactional vacuum: it is where interactional work from participants can take place so that the production of a first speaking turn (like greeting the robot) becomes relevant and expected. We base our analysis on an experiment that replicated the interaction opening delays sometimes observed in laboratory or "in-the-wild" human-robot interaction studies, where robots can require time before springing to life once they are in co-presence with a human. Using an ethnomethodological and multimodal conversation analytic methodology (EMCA), we identify which properties of the robot's behavior were oriented to by participants as creating the adequate conditions to produce a first greeting. Our findings highlight the importance of the state in which the robot originally appears to participants: as an immobile object or, instead, as an entity already involved in a preexisting activity. Participants' orientations to the very first behaviors manifested by the robot during this "pre-beginning" phase produced a priori unpredictable sequential trajectories, which configured the timing and the manner in which the robot emerged as a social agent. We suggest that these first instants of co-presence are not peripheral issues with respect to human-robot experiments but should be thought about and designed as an integral part of them.

  • 6.
    Stower, Rebecca
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Abdelghani, Rania
    Ctr INRIA Bordeaux, Talence, France.
    Tschopp, Marisa
    SCIP, Zurich, Switzerland.
    Evangelista, Keegan
    Univ Zurich, Zurich, Switzerland.
    Chetouani, Mohamed
    Sorbonne Univ, Paris, France.
    Kappas, Arvid
    Jacobs Univ, Bremen, Germany.
    Exploring space for robot mistakes in child robot interactions. 2022. In: Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems, ISSN 1572-0373, E-ISSN 1572-0381, Vol. 23, no. 2, pp. 243-288. Article in journal (Refereed)
    Abstract [en]

    Understanding the impact of robot errors in child-robot-interactions (CRI) is critical, as current technological systems are still limited and may randomly present a variety of mistakes during interactions with children. In this study we manipulate a task-based error of a NAO robot during a semi-autonomous computational thinking task implemented with the Cozmo robot. Data from 72 children aged 7-10 were analysed regarding their attitudes towards NAO (social trust, competency trust, liking, and perceived agency), their behaviour towards the robot (self-disclosure, following recommendations), as well as their task performance. We did not find quantitative effects of the robot's error on children's self-reported attitudes, behaviour, or task performance. Age was also not significantly related to either social attitudes or behaviours towards NAO, although there were some age-related differences in task performance. Potential reasons behind the lack of statistical effects, as well as limitations of the study with regard to the manipulation of robot errors, are discussed, and insights into the design of future CRI studies are provided.

  • 7.
    Stower, Rebecca
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Ligthart, Mike E.U.
    Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.
    Spitale, Micol
    Cambridge University, Cambridge, United Kingdom.
    Calvo-Barajas, Natalia
    Uppsala University, Uppsala, Sweden.
    De Droog, Simone M.
    HU University of Applied Sciences, Utrecht, The Netherlands.
    CRITTER: Child-Robot Interaction and Interdisciplinary Research. 2023. In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, pp. 926-928. Conference paper (Refereed)
    Abstract [en]

    Several recent works in human-robot-interaction (HRI) have begun to highlight the importance of the replication crisis and open science practices for our field. Yet, suggestions and recommendations tailored to child-robot-interaction (CRI) research, which poses its own additional set of challenges, remain limited. There is also an increased need within both HRI and CRI for inter- and cross-disciplinary collaborations, where input from multiple different domains can contribute to better research outcomes. Consequently, this workshop aims to facilitate discussions between researchers from diverse disciplines within CRI. The workshop will open with a panel discussion between CRI researchers from different disciplines, followed by 3-minute flash talks of the accepted submissions. The second half of the workshop will consist of breakout group discussions, where both senior and junior academics from different disciplines can share their experiences of conducting CRI research. Through this workshop, we hope to create a common ground for addressing shared challenges in CRI, as well as identify a set of possible solutions going forward.

  • 8.
    Stower, Rebecca
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Tatarian, Karen
    SoftBank Robot Europe, Paris, France.
    Rudaz, Damien
    SoftBank Robot Europe, Paris, France; Inst Polytechn Paris, Paris, France.
    Chamoux, Marine
    SoftBank Robot Europe, Paris, France.
    Chetouani, Mohamed
    Sorbonne Univ, Inst Syst Intelligents & Robot, CNRS 7222, Paris, France.
    Kappas, Arvid
    Jacobs Univ, Dept Psychol & Methods, Bremen, Germany.
    Does what users say match what they do?: Comparing self-reported attitudes and behaviours towards a social robot. 2022. In: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2022), Institute of Electrical and Electronics Engineers (IEEE), 2022, pp. 1429-1434. Conference paper (Refereed)
    Abstract [en]

    Constructs intended to capture social attitudes and behaviour towards social robots are incredibly varied, with little overlap or consistency in how they may be related. In this study we conduct exploratory analyses between participants' self-reported attitudes and behaviour towards a social robot. We designed an autonomous interaction where 102 participants interacted with a social robot (Pepper) in a hypothetical travel planning scenario, during which the robot displayed various multi-modal social behaviours. Several behavioural measures were embedded throughout the interaction, followed by a self-report questionnaire targeting participants' social attitudes towards the robot (social trust, liking, rapport, competency trust, technology acceptance, mind perception, social presence, and social information processing). Several relationships were identified between participants' behaviour and self-reported attitudes towards the robot. Implications for how to conceptualise and measure interactions with social robots are discussed.

  • 9.
    Stower, Rebecca
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Zibetti, Elisabetta
    Paris 8 Univ, Dept Psychol, CHArt LUTIN Lab, Paris, France.
    St-Onge, David
    Ecole Technol Super, Dept Mech Engn, Montreal, PQ, Canada.
    Bots of a Feather: Exploring User Perceptions of Group Cohesiveness for Application in Robotic Swarms. 2022. In: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2022), Institute of Electrical and Electronics Engineers (IEEE), 2022, pp. 95-100. Conference paper (Refereed)
    Abstract [en]

    Behaviours of robot swarms often take inspiration from biological models, such as ant colonies and bee hives. Yet, understanding how these behaviours are actually perceived by human users has so far received limited attention. In this paper, we use animations to represent different kinds of possible swarm motions intended to communicate specific messages to a human. We explore how these animations relate to the perceived group cohesiveness of the swarm, comprising five parameters: synchronising, grouping, following, reacting, and shape forming. We conducted an online user study where 98 participants viewed nine animations of a swarm displaying different behaviours and rated them for perceived group cohesiveness. We found that the parameters of group cohesiveness correlated with the messages the swarm was perceived as communicating. In particular, the message of initiating communication was highly positively correlated with all group parameters, whereas broken communication was negatively correlated. In addition, the importance of specific group parameters differed within each animation. For example, the parameter of grouping was most associated with animations signalling that an intervention is needed. These findings are discussed within the context of designing intuitive behaviour for robot swarms.

  • 10.
    Wozniak, Maciej K.
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Stower, Rebecca
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Jensfelt, Patric
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Abelho Pereira, André Tiago
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Tal, musik och hörsel, TMH.
    Happily Error After: Framework Development and User Study for Correcting Robot Perception Errors in Virtual Reality. 2023. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 1573-1580. Conference paper (Refereed)
    Abstract [en]

    While robots are appearing in more areas of our lives, they still make errors. One common cause of failure stems from the robot perception module when detecting objects. Allowing users to correct such errors can help improve the interaction and prevent the same errors in the future. Consequently, we investigate the effectiveness of a virtual reality (VR) framework for correcting perception errors of a Franka Panda robot. We conducted a user study with 56 participants who interacted with the robot using both VR and screen interfaces. Participants learned to collaborate with the robot faster in the VR interface compared to the screen interface. Additionally, participants found the VR interface more immersive and enjoyable, and expressed a preference for using it again. These findings suggest that VR interfaces may offer advantages over screen interfaces for human-robot interaction in error-prone environments.

  • 11.
    Wozniak, Maciej K.
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Stower, Rebecca
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Jensfelt, Patric
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Abelho Pereira, André Tiago
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Tal, musik och hörsel, TMH.
    What You See Is (not) What You Get: A VR Framework For Correcting Robot Errors. 2023. In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, pp. 243-247. Conference paper (Refereed)
    Abstract [en]

    Many solutions tailored for intuitive visualization or teleoperation of virtual, augmented and mixed (VAM) reality systems are not robust to robot failures, such as the inability to detect and recognize objects in the environment or the planning of unsafe trajectories. In this paper, we present a novel virtual reality (VR) framework where users can (i) recognize when the robot has failed to detect a real-world object, (ii) correct the error in VR, (iii) modify proposed object trajectories, and (iv) implement behaviors on a real-world robot. Finally, we propose a user study aimed at testing the efficacy of our framework. Project materials can be found in the OSF repository.
