kth.se Publications
1 - 12 of 12
  • 1.
    Khoche, Ajinkya
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Scania CV AB, S-15187 Södertälje, Sweden.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Duberg, Daniel
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Semantic 3D Grid Maps for Autonomous Driving (2022). In: 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 2681-2688. Conference paper (Refereed)
    Abstract [en]

    Maps play a key role in the rapidly developing area of autonomous driving. We survey the literature for different map representations and find that while the world is three-dimensional, it is common to rely on 2D map representations in order to meet real-time constraints. We believe that high levels of situation awareness require a 3D representation as well as the inclusion of semantic information. We demonstrate that our recently presented hierarchical 3D grid mapping framework UFOMap meets the real-time constraints. Furthermore, we show how it can be used to efficiently support more complex functions such as calculating the occluded parts of space and accumulating the output from a semantic segmentation network.
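    The abstract above describes accumulating the output of a semantic segmentation network in a 3D grid. As a rough, hedged illustration of that idea (a plain Python sketch, not the UFOMap API; voxel size and class count are assumptions), per-point class probabilities can be summed into a sparse voxel map and queried by taking the most likely class per voxel:

    # Minimal sketch (not the UFOMap API): fusing per-point semantic
    # segmentation probabilities into a sparse 3D voxel grid.
    # Voxel size and class count are illustrative assumptions.
    from collections import defaultdict
    import numpy as np

    VOXEL_SIZE = 0.2   # meters per voxel side (assumed)
    NUM_CLASSES = 10   # number of semantic classes (assumed)

    # voxel key -> accumulated per-class evidence
    grid = defaultdict(lambda: np.zeros(NUM_CLASSES))

    def voxel_key(point):
        """Map a 3D point (x, y, z) to an integer voxel index."""
        return tuple(np.floor(np.asarray(point) / VOXEL_SIZE).astype(int))

    def integrate_scan(points, class_probs):
        """Accumulate per-point class probabilities into the grid.

        points:      (N, 3) array of points in the map frame
        class_probs: (N, NUM_CLASSES) softmax output of a segmentation network
        """
        for p, probs in zip(points, class_probs):
            grid[voxel_key(p)] += probs  # simple additive evidence accumulation

    def semantic_class(point):
        """Return the most likely class for the voxel containing `point`."""
        scores = grid.get(voxel_key(point))
        return None if scores is None else int(np.argmax(scores))

    # Example with one synthetic scan of 100 random points
    pts = np.random.rand(100, 3) * 10.0
    probs = np.random.dirichlet(np.ones(NUM_CLASSES), size=100)
    integrate_scan(pts, probs)
    print(semantic_class(pts[0]))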

  • 2.
    Mkhitaryan, Samvel
    Department of Health Promotion, CAPHRI, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands.
    Giabbanelli, Philippe J.
    Department of Computer Science & Software Engineering, Miami University, Oxford, OH, USA.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    de Vries, Nanne K.
    Department of Health Promotion, CAPHRI, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands.
    Oenema, Anke
    Department of Health Promotion, CAPHRI, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands.
    Crutzen, Rik
    Department of Health Promotion, CAPHRI, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands.
    How to use machine learning and fuzzy cognitive maps to test hypothetical scenarios in health behavior change interventions: a case study on fruit intake (2023). In: BMC Public Health, E-ISSN 1471-2458, Vol. 23, no. 1, article id 2478. Article in journal (Refereed)
    Abstract [en]

    Background: Intervention planners use logic models to design evidence-based health behavior interventions. Logic models that capture the complexity of health behavior necessitate additional computational techniques to inform decisions with respect to the design of interventions. Objective: Using empirical data from a real intervention, the present paper demonstrates how machine learning can be used together with fuzzy cognitive maps to assist in designing health behavior change interventions. Methods: A modified Real Coded Genetic algorithm was applied to longitudinal data from a real intervention study. The dataset contained information about 15 determinants of fruit intake among 257 adults in the Netherlands. Fuzzy cognitive maps were used to analyze the effect of two hypothetical intervention scenarios designed by domain experts. Results: Simulations showed that the specified hypothetical interventions would have a small impact on fruit intake. The results are consistent with the empirical evidence used in this paper. Conclusions: Machine learning together with fuzzy cognitive maps can assist in building health behavior interventions with complex logic models. Testing hypothetical scenarios may help interventionists fine-tune the intervention components, thus increasing their potential effectiveness.
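    As a rough, hedged illustration of the what-if analysis described above (a toy model, not the fruit-intake map or the genetic-algorithm training from the paper), a fuzzy cognitive map can be simulated by iterating a squashing function over weighted concept activations while clamping the intervention concept:

    # Toy FCM what-if simulation; concepts and weights are made up for
    # illustration and are not the fruit-intake model from the paper.
    import numpy as np

    def sigmoid(x, lam=1.0):
        return 1.0 / (1.0 + np.exp(-lam * x))

    def simulate(W, state, clamp=None, steps=50, tol=1e-5):
        """Iterate A(t+1) = f(A(t) + A(t) @ W) until convergence.

        W:     (n, n) causal weight matrix, W[i, j] = influence of concept i on j
        state: initial concept activations in [0, 1]
        clamp: dict {concept index: value} holding intervention concepts fixed
        """
        state = np.asarray(state, dtype=float)
        for _ in range(steps):
            new_state = sigmoid(state + state @ W)
            if clamp:
                for idx, val in clamp.items():
                    new_state[idx] = val
            if np.max(np.abs(new_state - state)) < tol:
                break
            state = new_state
        return state

    # Toy 3-concept map: attitude -> intention -> fruit intake (assumed weights)
    W = np.array([[0.0, 0.6, 0.0],
                  [0.0, 0.0, 0.7],
                  [0.0, 0.0, 0.0]])
    baseline = simulate(W, [0.5, 0.5, 0.5])
    scenario = simulate(W, [0.5, 0.5, 0.5], clamp={0: 0.9})  # intervention on attitude
    print("change in fruit intake:", scenario[2] - baseline[2])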

  • 3.
    Mkhitaryan, Samvel
    Maastricht University, Health Promotion, Maastricht, Netherlands.
    Giabbanelli, Philippe
    Miami University, Computer Science & Software Engineering, Oxford, OH, USA.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Napoles, Gonzalo
    Tilburg University, Cognitive Science & Artificial Intelligence, Tilburg, Netherlands.
    De Vries, Nanne
    Maastricht University, Health Promotion, Maastricht, Netherlands.
    Crutzen, Rik
    Maastricht University, Health Promotion, Maastricht, Netherlands.
    FCMpy: a Python module for constructing and analyzing fuzzy cognitive maps (2022). In: PeerJ Computer Science, E-ISSN 2376-5992, Vol. 8, p. e1078, article id 1078. Article in journal (Refereed)
    Abstract [en]

    FCMpy is an open-source Python module for building and analyzing Fuzzy Cognitive Maps (FCMs). The module provides tools for end-to-end projects involving FCMs. It can derive fuzzy causal weights from qualitative data and simulate the system's behavior. Additionally, it includes machine learning algorithms (e.g., Nonlinear Hebbian Learning, Active Hebbian Learning, Genetic Algorithms, and Deterministic Learning) to adjust the FCM causal weight matrix and to solve classification problems. Finally, users can easily implement scenario analysis by simulating hypothetical interventions (i.e., analyzing what-if scenarios). FCMpy is the first open-source module that contains all the functionalities necessary for FCM-oriented projects. This work aims to enable researchers from different areas, such as psychology, cognitive science, or engineering, to easily and efficiently develop and test their FCM models without the need for extensive programming knowledge.

  • 4.
    Moletta, Marco
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Welle, Michael C.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    A Virtual Reality Framework for Human-Robot Collaboration in Cloth Folding (2023). In: 2023 IEEE-RAS 22nd International Conference on Humanoid Robots, IEEE, 2023. Conference paper (Refereed)
    Abstract [en]

    We present a virtual reality (VR) framework to automate the data collection process in cloth folding tasks. The framework uses skeleton representations to help the user define the folding plans for different classes of garments, allowing for replicating the folding on unseen items of the same class. We evaluate the framework in the context of automating garment folding tasks. A quantitative analysis is performed on three classes of garments, demonstrating that the framework reduces the need for intervention by the user. We also compare skeleton representations with RGB images in a classification task on a large dataset of clothing items, motivating the use of the proposed framework for other classes of garments.

  • 5.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Enhancing Robot Perception with Real-World HRI (2024). In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2024, p. 160-162. Conference paper (Refereed)
    Abstract [en]

    Robot perception often fails in uncontrolled environments due to unfamiliar object classes, different domains, or hardware issues. This poses significant challenges for human-robot interaction (HRI) outside of lab or user-study settings. My work focuses on two separate approaches: improving robot perception models and developing systems where users can directly correct robot errors. My research strives to improve HRI in real-world scenarios by reducing vision errors and empowering users to address them.

  • 6.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Chang, Christine T.
    U. of Colorado Boulder, Colorado, USA.
    Luebbers, Matthew B.
    U. of Colorado Boulder, Colorado, USA.
    Ikeda, Bryce
    U. of North Carolina Chapel Hill, North Carolina, USA.
    Walker, Michael
    U. of North Carolina Chapel Hill, North Carolina, USA.
    Rosen, Eric
    Brown University, Rhode Island, USA.
    Groechel, Thomas Roy
    U. of Southern California, California, USA.
    Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) (2023). In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, p. 938-940. Conference paper (Refereed)
    Abstract [en]

    The 6th International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) will bring together HRI, robotics, and mixed reality researchers to address challenges in mixed reality interactions between humans and robots. Topics relevant to the workshop include the development of robots that can interact with humans in mixed reality, the use of virtual reality for developing interactive robots, the design of augmented reality interfaces that mediate communication between humans and robots, investigations of mixed reality interfaces for robot learning, comparisons of the capabilities and perceptions of robots and virtual agents, and best design practices. VAM-HRI 2023 will follow the success of VAM-HRI 2018-22 and advance the cause of this nascent research community. Website: https://vam-hri.github.io.

  • 7.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kårefjärd, Viktor
    KTH.
    Hansson, Mattias
    KTH.
    Thiel, Marko
    Hamburg University of Technology, Hamburg, Germany.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Applying 3D Object Detection from Self-Driving Cars to Mobile Robots: A Survey and Experiments (2023). In: 2023 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) / [ed] Lopes, A. C.; Pires, G.; Pinto, V. H.; Lima, J. L.; Fonseca, P., Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 3-9. Conference paper (Refereed)
    Abstract [en]

    3D object detection is crucial for the safety and reliability of mobile robots. Mobile robots must understand dynamic environments to operate safely and successfully carry out their tasks. However, most of the open-source datasets and methods are built for autonomous driving. In this paper, we present a detailed review of available 3D object detection methods, focusing on the ones that could be easily adapted and used on mobile robots. We show that these methods do not perform well if used off-the-shelf on mobile robots or are too computationally expensive to run on mobile robotic platforms. Therefore, we propose a domain adaptation approach that uses publicly available data to retrain the perception modules of mobile robots, resulting in higher performance. Finally, we run tests on a real-world robot and provide data for testing our approach.

  • 8.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kårefjärd, Viktor
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Thiel, Marko
    Hamburg University of Technology, Institute of Technical Logistics, D-21073 Hamburg, Germany.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Toward a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data (2023). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no. 11, p. 7018-7025. Article in journal (Refereed)
    Abstract [en]

    Multimodal sensor fusion methods for 3D object detection have been revolutionizing the autonomous driving research field. Nevertheless, most of these methods heavily rely on dense LiDAR data and accurately calibrated sensors, which is often not the case in real-world scenarios. Data from LiDAR and cameras often come misaligned due to miscalibration, decalibration, or different sensor frequencies. Additionally, parts of the LiDAR data may be occluded or missing due to hardware malfunction or weather conditions. This work presents a novel fusion step that addresses data corruptions and makes sensor fusion for 3D object detection more robust. Through extensive experiments, we demonstrate that our method performs on par with state-of-the-art approaches on normal data and outperforms them on misaligned data.
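    To make the misalignment problem concrete: camera-LiDAR fusion typically projects LiDAR points into the image using extrinsic and intrinsic calibration, so any calibration error shifts the pixel from which each point's image features are sampled. The sketch below is a generic pinhole projection with placeholder calibration values, not the fusion step proposed in the paper:

    # Generic LiDAR-to-camera projection with placeholder calibration values;
    # this is background for the misalignment problem, not the paper's method.
    import numpy as np

    def project_to_image(points, T_cam_lidar, K):
        """Project (N, 3) LiDAR points to pixel coordinates.

        T_cam_lidar: 4x4 extrinsic transform from the LiDAR to the camera frame
        K:           3x3 camera intrinsic matrix
        """
        pts_h = np.hstack([points, np.ones((len(points), 1))])
        pts_cam = (T_cam_lidar @ pts_h.T)[:3]   # points in the camera frame
        valid = pts_cam[2] > 0                  # keep points in front of the camera
        uvw = K @ pts_cam[:, valid]
        return (uvw[:2] / uvw[2]).T             # (M, 2) pixel coordinates

    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])             # made-up intrinsics
    T = np.eye(4)                               # assume frames already aligned
    points = np.array([[1.0, 0.2, 5.0],
                       [-0.5, 0.1, 8.0]])       # toy points, z = depth

    # Simulate a small decalibration: 2 degrees of rotation about the optical axis
    yaw = np.deg2rad(2.0)
    T_bad = np.eye(4)
    T_bad[:3, :3] = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                              [np.sin(yaw),  np.cos(yaw), 0.0],
                              [0.0,          0.0,         1.0]])
    # Pixel offset caused by the decalibration, i.e. where fusion samples the wrong features
    print(project_to_image(points, T, K) - project_to_image(points, T_bad, K))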

  • 9.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Liang, Luke
    Miami University, Department of Computer Science & Software Engineering, Oxford, OH, USA.
    Phan, Hieu
    Miami University, Department of Computer Science & Software Engineering, Oxford, OH, USA.
    Giabbanelli, Philippe J.
    Miami University, Department of Computer Science & Software Engineering, Oxford, OH, USA.
    A New Application of Machine Learning: Detecting Errors in Network Simulations (2022). In: Proceedings of the 2022 Winter Simulation Conference, WSC 2022, Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 653-664. Conference paper (Refereed)
    Abstract [en]

    After designing a simulation and running it locally on a small network instance, the implementation can be scaled up via parallel and distributed computing (e.g., a cluster) to cope with massive networks. However, implementation changes can create errors (e.g., parallelism errors), which are difficult to identify since the aggregate behavior of an incorrect implementation of a stochastic network simulation can fall within the distributions expected from correct implementations. In this paper, we propose the first approach that applies machine learning to traces of network simulations to detect errors. Our technique transforms simulation traces into images by reordering the network's adjacency matrix and then trains supervised machine learning models on these images. Our evaluation on three simulation models shows that we can easily detect previously encountered types of errors and even confidently detect new errors. This work opens up numerous opportunities by examining other simulation models, representations (i.e., matrix reordering algorithms), or machine learning techniques.
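    The core preprocessing idea above, turning a simulation's network into an image by reordering its adjacency matrix, can be sketched with a standard bandwidth-reducing permutation. The snippet below uses SciPy's reverse Cuthill-McKee ordering as one possible reordering (the paper may use different algorithms) and omits the supervised classifier:

    # Sketch: reorder a network's adjacency matrix with a bandwidth-reducing
    # permutation and treat the result as a 2D image for a supervised model.
    # Reverse Cuthill-McKee is one possible ordering; the classifier is omitted.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    def adjacency_to_image(adj):
        """Return a reordered dense adjacency matrix usable as a 2D image."""
        adj = sp.csr_matrix(adj)
        perm = reverse_cuthill_mckee(adj, symmetric_mode=True)
        return adj[perm][:, perm].toarray().astype(np.float32)

    # Toy undirected network with 5 nodes
    A = np.array([[0, 1, 0, 0, 1],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [1, 0, 0, 1, 0]], dtype=float)
    img = adjacency_to_image(A)
    print(img.shape)  # a (5, 5) "image" that a CNN or other classifier could consume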

  • 10.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Pascher, Max
    University of Duisburg-Essen, Essen, Germany; TU Dortmund University, Dortmund, Germany.
    Ikeda, Bryce
    U. of North Carolina Chapel Hill, Chapel Hill, United States.
    Luebbers, Matthew B.
    U. of Colorado Boulder, Boulder, United States.
    Jena, Ayesha
    Lund University, Lund, Sweden.
    Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) (2024). In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2024, p. 1361-1363. Conference paper (Refereed)
    Abstract [en]

    The 7th International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) seeks to bring together researchers from human-robot interaction (HRI), robotics, and mixed reality (MR) to address the challenges related to mixed reality interactions between humans and robots. Key topics include the development of robots capable of interacting with humans in mixed reality, the use of virtual reality for creating interactive robots, designing augmented reality interfaces for communication between humans and robots, exploring mixed reality interfaces for enhancing robot learning, comparative analysis of the capabilities and perceptions of robots and virtual agents, and sharing best design practices. VAM-HRI 2024 will build on the success of VAM-HRI workshops held from 2018 to 2023, advancing research in this specialized community. This year's website is located at https://vamhri.github.io.

  • 11.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Stower, Rebecca
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Abelho Pereira, André Tiago
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Happily Error After: Framework Development and User Study for Correcting Robot Perception Errors in Virtual Reality (2023). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1573-1580. Conference paper (Refereed)
    Abstract [en]

    While robots are appearing in more and more areas of our lives, they still make errors. One common cause of failure stems from the robot perception module when detecting objects. Allowing users to correct such errors can help improve the interaction and prevent the same errors in the future. Consequently, we investigate the effectiveness of a virtual reality (VR) framework for correcting perception errors of a Franka Panda robot. We conducted a user study with 56 participants who interacted with the robot using both VR and screen interfaces. Participants learned to collaborate with the robot faster in the VR interface compared to the screen interface. Additionally, participants found the VR interface more immersive and enjoyable, and expressed a preference for using it again. These findings suggest that VR interfaces may offer advantages over screen interfaces for human-robot interaction in error-prone environments.

  • 12.
    Wozniak, Maciej K.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Stower, Rebecca
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Abelho Pereira, André Tiago
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    What You See Is (not) What You Get: A VR Framework For Correcting Robot Errors (2023). In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, p. 243-247. Conference paper (Refereed)
    Abstract [en]

    Many solutions tailored for intuitive visualization or teleoperation of virtual, augmented and mixed (VAM) reality systems are not robust to robot failures, such as the inability to detect and recognize objects in the environment or planning unsafe trajectories. In this paper, we present a novel virtual reality (VR) framework where users can (i) recognize when the robot has failed to detect a real-world object, (ii) correct the error in VR, (iii) modify proposed object trajectories, and (iv) implement behaviors on a real-world robot. Finally, we propose a user study aimed at testing the efficacy of our framework. Project materials can be found in the OSF repository.
