kth.se Publications
1 - 13 of 13
  • 1.
    Bütepage, Judith
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Poklukar, Petra
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Modeling assumptions and evaluation schemes: On the assessment of deep latent variable models. 2019. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, IEEE Computer Society, 2019, p. 9-12. Conference paper (Refereed)
    Abstract [en]

    Recent findings indicate that deep generative models can assign unreasonably high likelihoods to out-of-distribution data points. Especially in applications such as autonomous driving, medicine and robotics, these overconfident ratings can have detrimental effects. In this work, we argue that two points contribute to these findings: 1) modeling assumptions such as the choice of the likelihood, and 2) the evaluation under local posterior distributions vs global prior distributions. We demonstrate experimentally how these mechanisms can bias the likelihood estimates of variational autoencoders. 
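The likelihood-choice point above can be seen even without a deep model. The toy NumPy sketch below (my own illustration, not the paper's code) fits a factorized Gaussian likelihood to synthetic "images" and shows that a low-complexity out-of-distribution point near the mean receives a higher likelihood than a genuine sample:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 784  # pixel count of a flattened image
# "In-distribution" data: pixel intensities around 0.5 with spread 0.1
data = rng.normal(0.5, 0.1, size=(10_000, d))

# Fit a factorized Gaussian likelihood (a common modeling assumption)
mu, sigma = data.mean(axis=0), data.std(axis=0)

def log_likelihood(x):
    # Sum of per-pixel Gaussian log-densities
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - 0.5 * ((x - mu) / sigma) ** 2)

typical = data[0]      # a genuine in-distribution sample
ood = np.full(d, 0.5)  # an out-of-distribution constant image

# The OOD point sits near the per-pixel mean, so the factorized Gaussian
# assigns it a HIGHER likelihood than a typical training sample.
assert log_likelihood(ood) > log_likelihood(typical)
```

The gap grows with dimensionality (roughly d/2 nats here), which is why the effect is so pronounced for images.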

  • 2.
    Engel, Andreas K.
    et al.
    Univ Med Ctr Hamburg Eppendorf, Dept Neurophysiol & Pathophysiol, Hamburg, Germany.
    Verschure, Paul F. M. J.
    Fundacio Inst Bioengn Catalunya, Synthet Percept Emot Cognit Syst Lab, Barcelona, Spain.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Polani, Daniel
    Univ Hertfordshire, Ctr Comp Sci & Informat Res, Sch Comp Sci, Hatfield, Herts, England.
    Effenberg, Alfred O.
    Leibniz Univ Hannover, Inst Sports Sci, Hannover, Germany.
    Koenig, Peter
    Osnabruck Univ, Inst Cognit Sci, Osnabruck, Germany.
    Editorial: Sensorimotor Foundations of Social Cognition. 2022. In: Frontiers in Human Neuroscience, E-ISSN 1662-5161, Vol. 16, article id 971133. Article in journal (Other academic)
  • 3.
    Ingelhag, Nils
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Munkeby, Jesper
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    van Haastregt, Jonne
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Varava, Anastasiia
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Welle, Michael C.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    A Robotic Skill Learning System Built Upon Diffusion Policies and Foundation Models. 2024. In: 2024 33rd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 748-754. Conference paper (Refereed)
    Abstract [en]

    In this paper, we build upon two major recent developments in the field, Diffusion Policies for visuomotor manipulation and large pre-trained multimodal foundation models, to obtain a robotic skill learning system. The system acquires new skills via the behavioral cloning approach of visuomotor diffusion policies, given teleoperated demonstrations. Foundation models are used to perform skill selection given the user's prompt in natural language. Before executing a skill, the foundation model performs a precondition check given an observation of the workspace. We compare the performance of different foundation models for this purpose and give a detailed experimental evaluation of the skills taught by the user in simulation and the real world. Finally, we showcase the combined system in a challenging food-serving scenario in the real world. Videos of all experimental executions, as well as of the process of teaching new skills in simulation and the real world, are available on the project's website.
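The control flow the abstract describes (language prompt selects a skill, a precondition check gates execution) can be sketched as a toy pipeline. All names below are hypothetical placeholders; the real system uses foundation models where this sketch uses simple lookups:

```python
# Toy pipeline sketch of the described system (all names hypothetical):
# a language query selects a learned skill; a precondition check gates execution.
skills = {
    "pick up the cup": lambda obs: "grasp-trajectory",
    "pour water": lambda obs: "pour-trajectory",
}

def select_skill(prompt):
    # Stand-in for foundation-model skill selection: exact-match lookup here
    return skills.get(prompt)

def precondition_ok(obs):
    # Stand-in for the foundation model's workspace precondition check
    return obs.get("cup_visible", False)

def run(prompt, obs):
    skill = select_skill(prompt)
    if skill is None or not precondition_ok(obs):
        return None  # refuse to execute when preconditions fail
    return skill(obs)

assert run("pick up the cup", {"cup_visible": True}) == "grasp-trajectory"
assert run("pick up the cup", {"cup_visible": False}) is None
```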

  • 4.
    Lippi, Martina
    et al.
    Roma Tre Univ, Rome, Italy.
    Welle, Michael C.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Gasparri, Andrea
    Roma Tre Univ, Rome, Italy.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Low-Cost Teleoperation with Haptic Feedback through Vision-based Tactile Sensors for Rigid and Soft Object Manipulation. 2024. In: 2024 33rd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 1963-1969. Conference paper (Refereed)
    Abstract [en]

    Haptic feedback is essential for humans to successfully perform complex and delicate manipulation tasks. A recent rise in tactile sensors has enabled robots to leverage the sense of touch and expand their capability drastically. However, many tasks still need human intervention/guidance. For this reason, we present a teleoperation framework designed to provide haptic feedback to human operators based on the data from camera-based tactile sensors mounted on the robot gripper. Partial autonomy is introduced to prevent slippage of grasped objects during task execution. Notably, we rely exclusively on low-cost off-the-shelf hardware to realize an affordable solution. We demonstrate the versatility of the framework on nine different objects ranging from rigid to soft and fragile ones, using three different operators on real hardware.

  • 5.
    Longhini, Alberta
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Moletta, Marco
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Reichlin, Alfredo
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Welle, Michael C.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kravberg, Alexander
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Chemistry, Organic chemistry.
    Wang, Yufei
    Held, David
    Erickson, Zackory
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Elastic Context: Encoding Elasticity for Data-driven Models of Textiles. 2023. In: Proceedings - ICRA 2023: IEEE International Conference on Robotics and Automation, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1764-1770. Conference paper (Refereed)
    Abstract [en]

    Physical interaction with textiles, such as assistive dressing or household tasks, requires advanced dexterous skills. The complexity of textile behavior during stretching and pulling is influenced by the material properties of the yarn and by the textile's construction technique, which are often unknown in real-world settings. Moreover, identification of physical properties of textiles through sensing commonly available on robotic platforms remains an open problem. To address this, we introduce Elastic Context (EC), a method to encode the elasticity of textiles using stress-strain curves adapted from textile engineering for robotic applications. We employ EC to learn generalized elastic behaviors of textiles and examine the effect of EC dimension on accurate force modeling of real-world non-linear elastic behaviors.
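One plausible reading of "encoding elasticity using stress-strain curves" is resampling a measured curve into a fixed-size vector. The sketch below is a hypothetical illustration under that assumption (the function name and encoding scheme are mine, not the paper's):

```python
import numpy as np

def elastic_context(strain, stress, dim=8):
    """Hypothetical EC-style encoding: resample a measured stress-strain
    curve at `dim` evenly spaced strain values, yielding a fixed-size
    vector usable as input to a data-driven force model."""
    grid = np.linspace(strain.min(), strain.max(), dim)
    return np.interp(grid, strain, stress)

# Toy non-linear elastic curve (stress grows super-linearly with strain)
strain = np.linspace(0.0, 0.3, 100)
stress = 2.0 * strain + 40.0 * strain**2

ec = elastic_context(strain, stress, dim=8)
assert ec.shape == (8,)
assert np.all(np.diff(ec) > 0)  # monotone for this toy curve
```

Varying `dim` here corresponds to the "EC dimension" whose effect on force-modeling accuracy the paper examines.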

  • 6.
    Lundell, Jens
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Verdoja, Francesco
    School of Electrical Engineering, Aalto University, Intelligent Robotics Group, Department of Electrical Engineering and Automation, Finland.
    Le, Tran Nguyen
    School of Electrical Engineering, Aalto University, Intelligent Robotics Group, Department of Electrical Engineering and Automation, Finland.
    Mousavian, Arsalan
    NVIDIA Corporation, USA.
    Fox, Dieter
    NVIDIA Corporation, USA; Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, USA.
    Kyrki, Ville
    School of Electrical Engineering, Aalto University, Intelligent Robotics Group, Department of Electrical Engineering and Automation, Finland.
    Constrained Generative Sampling of 6-DoF Grasps. 2023. In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 2940-2946. Conference paper (Refereed)
    Abstract [en]

    Most state-of-the-art data-driven grasp sampling methods propose stable and collision-free grasps uniformly on the target object. For bin-picking, executing any of those reachable grasps is sufficient. However, for completing specific tasks, such as squeezing out liquid from a bottle, we want the grasp to be on a specific part of the object's body while avoiding other locations, such as the cap. This work presents a generative grasp sampling network, VCGS, capable of constrained 6-Degrees of Freedom (DoF) grasp sampling. In addition, we also curate a new dataset designed to train and evaluate methods for constrained grasping. The new dataset, called CONG, consists of over 14 million training samples of synthetically rendered point clouds and grasps at random target areas on 2889 objects. VCGS is benchmarked against GraspNet, a state-of-the-art unconstrained grasp sampler, in simulation and on a real robot. The results demonstrate that VCGS achieves a 10-15% higher grasp success rate than the baseline while being 2-3 times as sample efficient. Supplementary material is available on our project website.

  • 7.
    Marchetti, Giovanni Luca
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Hillar, Christopher
    Redwood Center for Theoretical Neuroscience.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Sanborn, Sophia
    UC Santa Barbara.
    Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this work, we formally prove that, under certain conditions, if a neural network is invariant to a finite group then its weights recover the Fourier transform on that group. This provides a mathematical explanation for the emergence of Fourier features -- a ubiquitous phenomenon in both biological and artificial learning systems. The results hold even for non-commutative groups, in which case the Fourier transform encodes all the irreducible unitary group representations. Our findings have consequences for the problem of symmetry discovery. Specifically, we demonstrate that the algebraic structure of an unknown group can be recovered from the weights of a network that is at least approximately invariant within certain bounds. Overall, this work contributes to a foundation for an algebraic learning theory of invariant neural network representations.
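The cyclic-group case of the invariance-Fourier connection can be checked numerically (my own minimal illustration, not the paper's proof): a linear map equivariant to cyclic shifts is circulant, and the discrete Fourier transform diagonalizes it, i.e., its weights are "explained" by the Fourier basis of the group Z_n:

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
w = rng.normal(size=n)

# A shift-equivariant linear map on Z_n is circulant: row k is w rolled by k
C = np.stack([np.roll(w, k) for k in range(n)])

# Check equivariance: C @ shift(x) equals shift(C @ x)
x = rng.normal(size=n)
assert np.allclose(C @ np.roll(x, 1), np.roll(C @ x, 1))

# The DFT matrix diagonalizes C: F C F^{-1} is diagonal, so the map's
# independent degrees of freedom live in the Fourier basis of Z_n
F = np.fft.fft(np.eye(n))
D = F @ C @ np.linalg.inv(F)
off_diag = D - np.diag(np.diag(D))
assert np.max(np.abs(off_diag)) < 1e-9
```

For non-commutative groups the diagonal blocks are replaced by the irreducible unitary representations, as the abstract states.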

  • 8.
    Moletta, Marco
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Tadiello, Matteo
    KTH.
    Comparison of Collision Avoidance Algorithms for Autonomous Multi-agent Systems. 2020. In: Proceedings - 2020 IEEE 44th Annual Computers, Software, and Applications Conference, COMPSAC 2020, Institute of Electrical and Electronics Engineers Inc., 2020, p. 1-9. Conference paper (Refereed)
    Abstract [en]

    Autonomous multi-agent systems have risen in popularity in recent years. More specifically, Unmanned Aerial Vehicles (UAVs) are involved in modern solutions for surveillance, delivery, and film shooting. To carry out these tasks, avoiding any possible collision is a crucial matter, especially when agents need to cooperate. In this paper, different collision avoidance algorithms are compared and analyzed for distributed multi-agent holonomic systems. Our purpose is to identify and clarify the different classes of reciprocal collision avoidance algorithms and then to compare them using meaningful metrics and tests for the evaluation.
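The core predicate underlying all the compared algorithm classes is a pairwise collision check under predicted motion. A minimal constant-velocity version for holonomic disc agents (an illustrative primitive, not a specific algorithm from the paper) looks like this:

```python
import numpy as np

def will_collide(p1, v1, p2, v2, radius, horizon):
    """Illustrative pairwise check for two holonomic disc agents:
    do they come within `radius` (sum of their radii) of each other
    within `horizon` seconds, assuming constant velocities?"""
    dp, dv = p2 - p1, v2 - v1
    # Minimize |dp + t*dv| over t in [0, horizon] (closed form for a line)
    if np.dot(dv, dv) == 0.0:
        t = 0.0  # identical velocities: separation is constant
    else:
        t = np.clip(-np.dot(dp, dv) / np.dot(dv, dv), 0.0, horizon)
    return np.linalg.norm(dp + t * dv) < radius

# Head-on agents collide; parallel agents at a safe offset do not
assert will_collide(np.array([0., 0.]), np.array([1., 0.]),
                    np.array([10., 0.]), np.array([-1., 0.]), 1.0, 10.0)
assert not will_collide(np.array([0., 0.]), np.array([1., 0.]),
                        np.array([0., 10.]), np.array([1., 0.]), 1.0, 10.0)
```

Reciprocal methods such as velocity-obstacle variants build on exactly this predicate, additionally splitting the avoidance effort between the two agents.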

  • 9.
    Pérez Rey, Luis Armando
    et al.
    Eindhoven University of Technology, Eindhoven, The Netherlands; Eindhoven Artificial Intelligence Systems Institute, Eindhoven, The Netherlands; Prosus, Amsterdam, The Netherlands.
    Marchetti, Giovanni Luca
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Jarnikov, Dmitri
    Eindhoven University of Technology, Eindhoven, The Netherlands; Prosus, Amsterdam, The Netherlands.
    Holenderski, Mike
    Eindhoven University of Technology, Eindhoven, The Netherlands.
    Equivariant Representation Learning in the Presence of Stabilizers. 2023. In: Machine Learning and Knowledge Discovery in Databases: Research Track - European Conference, ECML PKDD 2023, Proceedings, Springer Nature, 2023, p. 693-708. Conference paper (Refereed)
    Abstract [en]

    We introduce Equivariant Isomorphic Networks (EquIN) – a method for learning representations that are equivariant with respect to general group actions over data. Differently from existing equivariant representation learners, EquIN is suitable for group actions that are not free, i.e., that stabilize data via nontrivial symmetries. EquIN is theoretically grounded in the orbit-stabilizer theorem from group theory. This guarantees that an ideal learner infers isomorphic representations while trained on equivariance alone and thus fully extracts the geometric structure of data. We provide an empirical investigation on image datasets with rotational symmetries and show that taking stabilizers into account improves the quality of the representations.
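The orbit-stabilizer theorem the method rests on is easy to verify on a toy non-free action (my own illustration): the cyclic group C4 acting on integer points by 90-degree rotations, where the origin is stabilized by every rotation.

```python
# Cyclic group C4 acting on 2D integer points by 90-degree rotations
def rot(p, k):
    x, y = p
    for _ in range(k % 4):
        x, y = -y, x  # rotate 90 degrees counterclockwise
    return (x, y)

def orbit(p):
    return {rot(p, k) for k in range(4)}

def stabilizer(p):
    return {k for k in range(4) if rot(p, k) == p}

# A generic point has trivial stabilizer and a full orbit ...
assert len(orbit((1, 2))) == 4 and stabilizer((1, 2)) == {0}
# ... while the origin is fixed by every rotation: the action is not free
assert len(orbit((0, 0))) == 1 and stabilizer((0, 0)) == {0, 1, 2, 3}
# Orbit-stabilizer theorem: |orbit| * |stabilizer| = |G|
for p in [(1, 2), (0, 0), (3, 0)]:
    assert len(orbit(p)) * len(stabilizer(p)) == 4
```

Points with nontrivial stabilizers are exactly the cases where earlier equivariant learners break down and where EquIN's formulation applies.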

  • 10.
    Reichlin, Alfredo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Marchetti, Giovanni Luca
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Yin, Hang
    University of Copenhagen, Copenhagen, Denmark.
    Varava, Anastasiia
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Learning Geometric Representations of Objects via Interaction. 2023. In: Machine Learning and Knowledge Discovery in Databases: Research Track - European Conference, ECML PKDD 2023, Proceedings, Springer Nature, 2023, p. 629-644. Conference paper (Refereed)
    Abstract [en]

    We address the problem of learning representations from observations of a scene involving an agent and an external object the agent interacts with. To this end, we propose a representation learning framework that extracts the location in physical space of both the agent and the object from unstructured observations of arbitrary nature. Our framework relies on the actions performed by the agent as the only source of supervision, while assuming that the object is displaced by the agent via unknown dynamics. We provide a theoretical foundation and formally prove that an ideal learner is guaranteed to infer an isometric representation, disentangling the agent from the object and correctly extracting their locations. We empirically evaluate our framework on a variety of scenarios, showing that it outperforms vision-based approaches such as a state-of-the-art keypoint extractor. We moreover demonstrate how the extracted representations enable the agent to solve downstream tasks efficiently via reinforcement learning.

  • 11.
    Santos, Pedro P.
    et al.
    Instituto Superior Técnico, INESC-ID Lisbon, Portugal.
    Carvalho, Diogo S.
    Instituto Superior Técnico, INESC-ID Lisbon, Portugal.
    Vasco, Miguel
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Sardinha, Alberto
    Pontifical Catholic University of Rio de Janeiro, INESC-ID Rio de Janeiro, Brazil.
    Santos, Pedro A.
    Instituto Superior Técnico, INESC-ID Lisbon, Portugal.
    Paiva, Ana
    Instituto Superior Técnico, INESC-ID Lisbon, Portugal.
    Melo, Francisco S.
    Instituto Superior Técnico, INESC-ID Lisbon, Portugal.
    Centralized Training with Hybrid Execution in Multi-Agent Reinforcement Learning. 2024. In: AAMAS 2024 - Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2024, p. 2453-2455. Conference paper (Refereed)
    Abstract [en]

    We introduce hybrid execution in multi-agent reinforcement learning (MARL), a new paradigm in which agents aim to successfully complete cooperative tasks with arbitrary communication levels at execution time by taking advantage of information-sharing among the agents. Under hybrid execution, the communication level can range from a setting in which no communication is allowed between agents (fully decentralized), to a setting featuring full communication (fully centralized), but the agents do not know beforehand which communication level they will encounter at execution time. To formalize our setting, we define a new class of multi-agent partially observable Markov decision processes (POMDPs) that we name hybrid-POMDPs, which explicitly model a communication process between the agents.
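The communication spectrum described above can be sketched as an observation-sharing mask, with the identity matrix at the fully decentralized end and the all-ones matrix at the fully centralized end. This is a toy sketch of the idea only, not the paper's hybrid-POMDP formalism:

```python
import numpy as np

def joint_observation(obs, comm_mask):
    """Toy hybrid-execution sketch: agent i receives agent j's observation
    only if comm_mask[i, j] is 1; dropped messages are zero-padded so the
    policy input keeps a fixed size regardless of communication level."""
    n = len(obs)
    return [np.concatenate([obs[j] if comm_mask[i, j] else np.zeros_like(obs[j])
                            for j in range(n)])
            for i in range(n)]

obs = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]

decentralized = np.eye(2, dtype=int)      # agents see only themselves
centralized = np.ones((2, 2), dtype=int)  # full communication

dec = joint_observation(obs, decentralized)
cen = joint_observation(obs, centralized)
assert np.allclose(dec[0], [1.0, 2.0, 0.0, 0.0])  # agent 0 sees only itself
assert np.allclose(cen[0], [1.0, 2.0, 3.0, 4.0])  # full communication
```

Any mask between these two extremes corresponds to one of the intermediate communication levels the agents must handle without knowing it in advance.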

  • 12.
    Weng, Zehang
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Zhou, Peng
    Hong Kong Polytech Univ PolyU, Kowloon, Hong Kong, Peoples R China.
    Yin, Hang
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kravchenko, Alexander
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Chemistry, Organic chemistry.
    Varava, Anastasiia
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Navarro-Alarcon, David
    Hong Kong Polytech Univ PolyU, Kowloon, Hong Kong, Peoples R China.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Interactive Perception for Deformable Object Manipulation. 2024. In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no 9, p. 7763-7770. Article in journal (Refereed)
    Abstract [en]

    Interactive perception enables robots to manipulate the environment and objects to bring them into states that benefit the perception process. Deformable objects pose challenges to this due to manipulation difficulty and occlusion in vision-based perception. In this work, we address such a problem with a setup involving both an active camera and an object manipulator. Our approach is based on a sequential decision-making framework and explicitly considers the motion regularity and structure in coupling the camera and manipulator. We contribute a method for constructing and computing a subspace, called Dynamic Active Vision Space (DAVS), for effectively utilizing the regularity in motion exploration. The effectiveness of the framework and approach are validated in both a simulation and a real dual-arm robot setup. Our results confirm the necessity of an active camera and coordinative motion in interactive perception for deformable objects.

  • 13.
    Zhang, Yuchong
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Vasco, Miguel
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems.
    Björkman, Mårten
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Will You Participate? Exploring the Potential of Robotics Competitions on Human-Centric Topics. 2024. In: Human-Computer Interaction - Thematic Area, HCI 2024, Held as Part of the 26th HCI International Conference, HCII 2024, Proceedings, Springer Nature, 2024, p. 240-255. Conference paper (Refereed)
    Abstract [en]

    This paper presents findings from an exploratory need-finding study investigating the current state of research in the robotics community, and its potential participation in competitions, on four human-centric topics: safety, privacy, explainability, and federated learning. We conducted a survey with 34 participants across three distinguished European robotics consortia, nearly 60% of whom possessed over five years of research experience in robotics. Our qualitative and quantitative analysis revealed that current mainstream robotics researchers prioritize safety and explainability, expressing a greater willingness to invest in further research in these areas. Conversely, our results indicate that privacy and federated learning garner less attention and are perceived to have lower potential. Additionally, the study suggests a lack of enthusiasm within the robotics community for participating in competitions related to these topics. Based on these findings, we recommend targeting other communities, such as the machine learning community, for future competitions on these four human-centric topics.
