Publications (10 of 12)
Wozniak, M. K. (2024). Enhancing Robot Perception with Real-World HRI. In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024 (pp. 160-162). Association for Computing Machinery (ACM)
2024 (English). In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2024, pp. 160-162. Conference paper, Published paper (Refereed)
Abstract [en]

Robot perception often fails in uncontrolled environments due to unfamiliar object classes, domain shifts, or hardware issues. This poses significant challenges for human-robot interaction (HRI) outside of lab or user-study settings. My work focuses on two separate approaches: improving robot perception models and developing systems in which users can directly correct robot errors. My research strives to improve HRI in real-world scenarios by reducing vision errors and empowering users to address them.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
failures, multimodality, perception, real world interaction, robotics
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-344810 (URN), 10.1145/3610978.3638363 (DOI), 2-s2.0-85188112646 (Scopus ID)
Conference
19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024
Note

QC 20240409

Part of ISBN 9798400703232

Available from: 2024-03-28. Created: 2024-03-28. Last updated: 2024-04-09. Bibliographically approved.
Wozniak, M. K., Pascher, M., Ikeda, B., Luebbers, M. B. & Jena, A. (2024). Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI). In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024 (pp. 1361-1363). Association for Computing Machinery (ACM)
2024 (English). In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2024, pp. 1361-1363. Conference paper, Published paper (Refereed)
Abstract [en]

The 7th International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) seeks to bring together researchers from human-robot interaction (HRI), robotics, and mixed reality (MR) to address the challenges related to mixed reality interactions between humans and robots. Key topics include the development of robots capable of interacting with humans in mixed reality, the use of virtual reality for creating interactive robots, designing augmented reality interfaces for communication between humans and robots, exploring mixed reality interfaces for enhancing robot learning, comparative analysis of the capabilities and perceptions of robots and virtual agents, and sharing best design practices. VAM-HRI 2024 will build on the success of VAM-HRI workshops held from 2018 to 2023, advancing research in this specialized community. This year's website is located at https://vamhri.github.io.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
AR, human-robot interaction, MR, robotics, VR
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-344805 (URN), 10.1145/3610978.3638158 (DOI), 2-s2.0-85188106497 (Scopus ID)
Conference
19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024
Note

Part of ISBN 9798400703232

QC 20240405

Available from: 2024-03-28. Created: 2024-03-28. Last updated: 2024-04-05. Bibliographically approved.
Moletta, M., Wozniak, M. K., Welle, M. C. & Kragic, D. (2023). A Virtual Reality Framework for Human-Robot Collaboration in Cloth Folding. In: 2023 IEEE-RAS 22nd International Conference on Humanoid Robots. Paper presented at IEEE-RAS 22nd International Conference on Humanoid Robots (Humanoids), DEC 12-14, 2023, Austin, TX. IEEE
2023 (English). In: 2023 IEEE-RAS 22nd International Conference on Humanoid Robots, IEEE, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

We present a virtual reality (VR) framework to automate the data collection process in cloth folding tasks. The framework uses skeleton representations to help the user define the folding plans for different classes of garments, allowing for replicating the folding on unseen items of the same class. We evaluate the framework in the context of automating garment folding tasks. A quantitative analysis is performed on three classes of garments, demonstrating that the framework reduces the need for intervention by the user. We also compare skeleton representations with RGB images in a classification task on a large dataset of clothing items, motivating the use of the proposed framework for other classes of garments.
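As a toy illustration of the skeleton-plan idea, a class-level folding plan can be encoded as an ordered sequence of grasp/place keypoint pairs and then instantiated on the detected skeleton of an unseen garment. The keypoint names, plan, and helper below are hypothetical assumptions for illustration, not the paper's actual data model:

```python
# Hypothetical sketch only: encoding a class-level folding plan over
# skeleton keypoints so one plan can be replayed on unseen garments
# of the same class. All names below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FoldStep:
    grasp: str   # skeleton keypoint to pick up, e.g. "left_sleeve_tip"
    target: str  # skeleton keypoint to place it onto

# One plan per garment class, defined once (e.g., by a user in VR).
TSHIRT_PLAN = [
    FoldStep("left_sleeve_tip", "right_hem"),
    FoldStep("right_sleeve_tip", "left_hem"),
    FoldStep("collar_center", "hem_center"),
]

def instantiate(plan, keypoints_3d):
    """Map the class-level plan onto one garment's detected keypoints,
    yielding concrete (pick, place) 3D coordinates for the robot."""
    return [(keypoints_3d[s.grasp], keypoints_3d[s.target]) for s in plan]

# Example: keypoints detected on a specific, previously unseen t-shirt.
detected = {
    "left_sleeve_tip": (0.10, 0.45, 0.02), "right_sleeve_tip": (0.50, 0.45, 0.02),
    "left_hem": (0.15, 0.05, 0.02), "right_hem": (0.45, 0.05, 0.02),
    "collar_center": (0.30, 0.50, 0.02), "hem_center": (0.30, 0.05, 0.02),
}
print(instantiate(TSHIRT_PLAN, detected))
```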

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE-RAS International Conference on Humanoid Robots, ISSN 2164-0572
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:kth:diva-344957 (URN), 10.1109/HUMANOIDS57100.2023.10375184 (DOI), 001156965200045 (ISI), 2-s2.0-85164729488 (Scopus ID)
Conference
IEEE-RAS 22nd International Conference on Humanoid Robots (Humanoids), DEC 12-14, 2023, Austin, TX
Note

QC 20240408

Part of ISBN 979-8-3503-0327-8

Available from: 2024-04-08. Created: 2024-04-08. Last updated: 2024-04-08. Bibliographically approved.
Wozniak, M. K., Kårefjärd, V., Hansson, M., Thiel, M. & Jensfelt, P. (2023). Applying 3D Object Detection from Self-Driving Cars to Mobile Robots: A Survey and Experiments. In: Lopes, A. C., Pires, G., Pinto, V. H., Lima, J. L. & Fonseca, P. (Eds.), 2023 IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC. Paper presented at IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), APR 26-27, 2023, Tomar, PORTUGAL (pp. 3-9). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC / [ed] Lopes, A. C., Pires, G., Pinto, V. H., Lima, J. L. & Fonseca, P., Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 3-9. Conference paper, Published paper (Refereed)
Abstract [en]

3D object detection is crucial for the safety and reliability of mobile robots. Mobile robots must understand dynamic environments to operate safely and carry out their tasks successfully. However, most open-source datasets and methods are built for autonomous driving. In this paper, we present a detailed review of available 3D object detection methods, focusing on those that could easily be adapted for mobile robots. We show that these methods either do not perform well when used off-the-shelf on mobile robots or are too computationally expensive to run on mobile robotic platforms. Therefore, we propose a domain adaptation approach that uses publicly available data to retrain the perception modules of mobile robots, resulting in higher performance. Finally, we run tests on a real-world robot and provide data for testing our approach.
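A minimal sketch of the retraining idea, under stated assumptions: a detector pretrained on driving data exposes a `backbone` and a `head`, the backbone is frozen, and only the head is fine-tuned on robot-domain samples. The model, dataset, and loss below are illustrative stand-ins, not the paper's actual pipeline:

```python
# Illustrative domain-adaptation sketch: freeze driving-domain features,
# fine-tune the detection head on robot-domain data. All classes and
# tensors here are placeholders, not a real detector API.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

class PretrainedDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stand-in feature extractor
        self.head = nn.Linear(64, 8)  # stand-in for box-parameter regression

    def forward(self, x):
        return self.head(self.backbone(x))

model = PretrainedDetector()  # assume weights loaded from a driving dataset
for p in model.backbone.parameters():
    p.requires_grad = False   # keep driving-domain features, adapt only the head

robot_data = DataLoader(
    TensorDataset(torch.randn(256, 128), torch.randn(256, 8)),  # placeholder robot-domain samples
    batch_size=32, shuffle=True)
opt = torch.optim.Adam(model.head.parameters(), lr=1e-4)
loss_fn = nn.SmoothL1Loss()  # a common choice for box regression

for feats, targets in robot_data:
    opt.zero_grad()
    loss = loss_fn(model(feats), targets)
    loss.backward()
    opt.step()
```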

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE International Conference on Autonomous Robot Systems and Competitions ICARSC, ISSN 2573-9360
Keywords
perception, mobile robots, object detection
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-333736 (URN), 10.1109/ICARSC58346.2023.10129637 (DOI), 001011040500003 (ISI), 2-s2.0-85161974029 (Scopus ID)
Conference
IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), APR 26-27, 2023, Tomar, PORTUGAL
Note

QC 20230810

Available from: 2023-08-10. Created: 2023-08-10. Last updated: 2023-08-10. Bibliographically approved.
Wozniak, M. K., Stower, R., Jensfelt, P. & Abelho Pereira, A. T. (2023). Happily Error After: Framework Development and User Study for Correcting Robot Perception Errors in Virtual Reality. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN. Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA (pp. 1573-1580). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 1573-1580. Conference paper, Published paper (Refereed)
Abstract [en]

While robots appear in ever more areas of our lives, they still make errors. One common cause of failure is the robot's perception module misdetecting objects. Allowing users to correct such errors can improve the interaction and prevent the same errors in the future. Consequently, we investigate the effectiveness of a virtual reality (VR) framework for correcting perception errors of a Franka Panda robot. We conducted a user study with 56 participants who interacted with the robot using both VR and screen interfaces. Participants learned to collaborate with the robot faster with the VR interface than with the screen interface. Additionally, participants found the VR interface more immersive and enjoyable, and expressed a preference for using it again. These findings suggest that VR interfaces may offer advantages over screen interfaces for human-robot interaction in error-prone settings.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-341975 (URN), 10.1109/RO-MAN57019.2023.10309446 (DOI), 001108678600198 (ISI), 2-s2.0-85186968933 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings ISBN 979-8-3503-3670-2

QC 20240110

Available from: 2024-01-10. Created: 2024-01-10. Last updated: 2024-03-22. Bibliographically approved.
Mkhitaryan, S., Giabbanelli, P. J., Wozniak, M. K., de Vries, N. K., Oenema, A. & Crutzen, R. (2023). How to use machine learning and fuzzy cognitive maps to test hypothetical scenarios in health behavior change interventions: a case study on fruit intake. BMC Public Health, 23(1), Article ID 2478.
2023 (English). In: BMC Public Health, E-ISSN 1471-2458, Vol. 23, no. 1, article id 2478. Article in journal (Refereed), Published
Abstract [en]

Background: Intervention planners use logic models to design evidence-based health behavior interventions. Logic models that capture the complexity of health behavior necessitate additional computational techniques to inform decisions about intervention design. Objective: Using empirical data from a real intervention, the present paper demonstrates how machine learning can be used together with fuzzy cognitive maps to assist in designing health behavior change interventions. Methods: A modified Real-Coded Genetic Algorithm was applied to longitudinal data from a real intervention study. The dataset contained information about 15 determinants of fruit intake among 257 adults in the Netherlands. Fuzzy cognitive maps were used to analyze the effect of two hypothetical intervention scenarios designed by domain experts. Results: Simulations showed that the specified hypothetical interventions would have a small impact on fruit intake. The results are consistent with the empirical evidence used in this paper. Conclusions: Machine learning together with fuzzy cognitive maps can assist in building health behavior interventions with complex logic models. Testing hypothetical scenarios may help interventionists fine-tune intervention components, thus increasing their potential effectiveness.
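For readers unfamiliar with fuzzy cognitive maps, the scenario testing described above boils down to iterating a simple state update while clamping the concepts an intervention targets. The sketch below uses common FCM conventions (sigmoid squashing, a self-memory term) and is not the paper's exact formulation; the weight matrix would come from the genetic-algorithm fit to the intervention data:

```python
# Generic FCM scenario simulation (illustrative; not the paper's exact model).
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def simulate_fcm(W, a0, clamp=None, steps=100, tol=1e-5):
    """Iterate the FCM until concept activations stabilize.

    W[i, j] is the learned causal weight from concept i to concept j;
    `clamp` maps concept indices to fixed values, which is how a
    hypothetical intervention scenario is imposed on the map."""
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a_next = sigmoid(a + W.T @ a)  # memory term plus weighted incoming influences
        for idx, val in (clamp or {}).items():
            a_next[idx] = val          # hold intervened determinants fixed
        if np.max(np.abs(a_next - a)) < tol:
            break
        a = a_next
    return a

# Toy example: three determinants driving concept 3 ("fruit intake").
W = np.zeros((4, 4))
W[0, 3], W[1, 3], W[2, 3] = 0.4, 0.3, -0.2
baseline = simulate_fcm(W, [0.5, 0.5, 0.5, 0.5])
scenario = simulate_fcm(W, [0.5, 0.5, 0.5, 0.5], clamp={0: 0.9})  # boost determinant 0
print(scenario[3] - baseline[3])  # predicted change in fruit intake
```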

Place, publisher, year, edition, pages
Springer Nature, 2023
Keywords
Complex interventions, Fuzzy cognitive maps, Genetic algorithms, Machine learning
National Category
Public Health, Global Health, Social Medicine and Epidemiology; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-341603 (URN), 10.1186/s12889-023-17367-z (DOI), 001123738400002 (ISI), 38082297 (PubMedID), 2-s2.0-85179365439 (Scopus ID)
Note

QC 20231227

Available from: 2023-12-27. Created: 2023-12-27. Last updated: 2024-02-29. Bibliographically approved.
Wozniak, M. K., Kårefjärd, V., Thiel, M. & Jensfelt, P. (2023). Toward a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data. IEEE Robotics and Automation Letters, 8(11), 7018-7025
2023 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no. 11, pp. 7018-7025. Article in journal (Refereed), Published
Abstract [en]

Multimodal sensor fusion methods for 3D object detection have been revolutionizing autonomous driving research. Nevertheless, most of these methods rely heavily on dense LiDAR data and accurately calibrated sensors, which is often not the case in real-world scenarios. Data from LiDAR and cameras often come misaligned due to miscalibration, decalibration, or differing sensor frequencies. Additionally, parts of the LiDAR data may be occluded or missing due to hardware malfunction or weather conditions. This work presents a novel fusion step that addresses data corruptions and makes sensor fusion for 3D object detection more robust. Through extensive experiments, we demonstrate that our method performs on par with state-of-the-art approaches on normal data and outperforms them on misaligned data.
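To make the corruption setting concrete, here is a small sketch of how LiDAR-camera misalignment is typically emulated for robustness testing: perturb the extrinsic calibration before projecting points into the image. This is an assumption-laden illustration of the general practice, not the paper's code:

```python
# Illustrative only: emulating LiDAR-camera decalibration for robustness tests.
import numpy as np

def project(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points to pixels via 4x4 extrinsics T and 3x3 intrinsics K."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

def perturb_extrinsics(T, rot_deg=1.0, trans_m=0.05, rng=None):
    """Apply a small random rotation/translation to emulate decalibration."""
    rng = rng or np.random.default_rng(0)
    wx, wy, wz = np.deg2rad(rot_deg) * rng.standard_normal(3)
    # small-angle rotation built from a skew-symmetric matrix
    R = np.eye(3) + np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])
    T_noisy = T.copy()
    T_noisy[:3, :3] = R @ T[:3, :3]
    T_noisy[:3, 3] += trans_m * rng.standard_normal(3)
    return T_noisy

# A fusion model is then evaluated on points projected with clean vs. perturbed
# extrinsics to measure (or train for) robustness to misalignment.
```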

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Object detection, segmentation and categorization, sensor fusion, deep learning for visual perception
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-344105 (URN), 10.1109/LRA.2023.3313924 (DOI), 001157878900002 (ISI), 2-s2.0-85171591218 (Scopus ID)
Note

QC 20240301

Available from: 2024-03-01. Created: 2024-03-01. Last updated: 2024-03-01. Bibliographically approved.
Wozniak, M. K., Chang, C. T., Luebbers, M. B., Ikeda, B., Walker, M., Rosen, E. & Groechel, T. R. (2023). Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI). In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023 (pp. 938-940). Association for Computing Machinery (ACM)
2023 (English). In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, pp. 938-940. Conference paper, Published paper (Refereed)
Abstract [en]

The 6th International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) will bring together HRI, robotics, and mixed reality researchers to address challenges in mixed reality interactions between humans and robots. Topics relevant to the workshop include the development of robots that can interact with humans in mixed reality, the use of virtual reality for developing interactive robots, the design of augmented reality interfaces that mediate communication between humans and robots, investigations of mixed reality interfaces for robot learning, comparisons of the capabilities and perceptions of robots and virtual agents, and best design practices. VAM-HRI 2023 will follow the success of VAM-HRI 2018-22 and advance the cause of this nascent research community. Website: https://vam-hri.github.io.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
AR, human-robot interaction, MR, robotics, VR
National Category
Robotics; Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-333374 (URN), 10.1145/3568294.3579959 (DOI), 001054975700210 (ISI), 2-s2.0-85150436087 (Scopus ID)
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023
Note

Part of ISBN 9781450399708

QC 20230801

Available from: 2023-08-01. Created: 2023-08-01. Last updated: 2023-10-16. Bibliographically approved.
Wozniak, M. K., Stower, R., Jensfelt, P. & Abelho Pereira, A. T. (2023). What You See Is (not) What You Get: A VR Framework For Correcting Robot Errors. In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023 (pp. 243-247). Association for Computing Machinery (ACM)
2023 (English). In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, pp. 243-247. Conference paper, Published paper (Refereed)
Abstract [en]

Many solutions tailored for intuitive visualization or teleoperation of virtual, augmented, and mixed (VAM) reality systems are not robust to robot failures, such as the inability to detect and recognize objects in the environment or the planning of unsafe trajectories. In this paper, we present a novel virtual reality (VR) framework where users can (i) recognize when the robot has failed to detect a real-world object, (ii) correct the error in VR, (iii) modify proposed object trajectories, and (iv) implement behaviors on a real-world robot. Finally, we propose a user study aimed at testing the efficacy of our framework. Project materials can be found in the OSF repository.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
AR, human-robot interaction, perception, robotics, VR
National Category
Robotics; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-333372 (URN), 10.1145/3568294.3580081 (DOI), 001054975700044 (ISI), 2-s2.0-85150432457 (Scopus ID)
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023
Note

Part of ISBN 9781450399708

QC 20230801

Available from: 2023-08-01. Created: 2023-08-01. Last updated: 2023-10-16. Bibliographically approved.
Wozniak, M. K., Liang, L., Phan, H. & Giabbanelli, P. J. (2022). A New Application of Machine Learning: Detecting Errors in Network Simulations. In: Proceedings of the 2022 Winter Simulation Conference, WSC 2022. Paper presented at 2022 Winter Simulation Conference, WSC 2022, Guilin, China, Dec 11 2022 - Dec 14 2022 (pp. 653-664). Institute of Electrical and Electronics Engineers (IEEE)
2022 (English). In: Proceedings of the 2022 Winter Simulation Conference, WSC 2022, Institute of Electrical and Electronics Engineers (IEEE), 2022, pp. 653-664. Conference paper, Published paper (Refereed)
Abstract [en]

After designing a simulation and running it locally on a small network instance, the implementation can be scaled up via parallel and distributed computing (e.g., a cluster) to cope with massive networks. However, implementation changes can introduce errors (e.g., parallelism errors), which are difficult to identify since the aggregate behavior of an incorrect implementation of a stochastic network simulation can fall within the distributions expected from correct implementations. In this paper, we propose the first approach that applies machine learning to traces of network simulations to detect errors. Our technique transforms simulation traces into images by reordering the network's adjacency matrix and then trains supervised machine learning models on those images. Our evaluation on three simulation models shows that we can easily detect previously encountered types of errors and even confidently detect new errors. This work opens up numerous opportunities for examining other simulation models, representations (i.e., matrix reordering algorithms), or machine learning techniques.
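The core trick, reordering the adjacency matrix so that structurally similar traces yield similar images, can be sketched as follows. This is a hedged reconstruction of the general idea using Reverse Cuthill-McKee as a stand-in reordering algorithm and a random forest as a stand-in classifier; the paper's actual choices may differ:

```python
# Illustrative sketch: turn a trace's adjacency matrix into a fixed-size
# image via matrix reordering, then train an error classifier on the images.
# RCM and RandomForest are stand-in choices, not necessarily the paper's.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee
from sklearn.ensemble import RandomForestClassifier

def trace_to_image(adj, size=32):
    """Reorder the adjacency matrix and pool it down to a size x size grid."""
    perm = reverse_cuthill_mckee(csr_matrix(adj))
    A = np.asarray(adj)[np.ix_(perm, perm)]
    bins = np.linspace(0, A.shape[0], size + 1).astype(int)
    img = np.array([[A[bins[i]:bins[i+1], bins[j]:bins[j+1]].mean()
                     if bins[i+1] > bins[i] and bins[j+1] > bins[j] else 0.0
                     for j in range(size)] for i in range(size)])
    return img.ravel()

# Synthetic demo: label 1 marks "buggy" traces with randomly dropped edges.
rng = np.random.default_rng(0)
def random_adj(n=100, p=0.05, drop=0.0):
    A = (rng.random((n, n)) < p).astype(float)
    A[rng.random((n, n)) < drop] = 0.0  # emulate a parallelism bug losing updates
    return np.maximum(A, A.T)           # symmetrize

X = np.stack([trace_to_image(random_adj(drop=d)) for d in [0.0] * 20 + [0.5] * 20])
y = np.array([0] * 20 + [1] * 20)
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))
```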

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-333413 (URN), 10.1109/WSC57314.2022.10015484 (DOI), 000991872900054 (ISI), 2-s2.0-85147416696 (Scopus ID)
Conference
2022 Winter Simulation Conference, WSC 2022, Guilin, China, Dec 11 2022 - Dec 14 2022
Note

Part of ISBN 9798350309713

QC 20230801

Available from: 2023-08-01. Created: 2023-08-01. Last updated: 2023-09-04. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-3432-6151
