Publications (10 of 15)
Zhou, S., Hernandez, A. C., Gomez, C., Yin, W. & Björkman, M. (2025). SmartTBD: Smart Tracking for Resource-constrained Object Detection. ACM Transactions on Embedded Computing Systems, 24(2), Article ID 24.
2025 (English). In: ACM Transactions on Embedded Computing Systems, ISSN 1539-9087, E-ISSN 1558-3465, Vol. 24, no. 2, article id 24. Article in journal (Refereed). Published
Abstract [en]

With the growing demand for video analysis on mobile devices, object tracking has proven to be a suitable complement to object detection under the Tracking-By-Detection (TBD) paradigm, reducing computational overhead and power demands. However, performing TBD with fixed hyper-parameters leads to computational inefficiency and ignores perceptual dynamics, as fixed setups tend to run suboptimally given the variability of scenarios. In this article, we propose SmartTBD, a scheduling strategy for TBD based on multi-objective optimization of accuracy-latency metrics. SmartTBD is a novel deep reinforcement learning based scheduling architecture that computes appropriate TBD configurations for video sequences to improve both speed and detection accuracy. This is a challenging optimization problem due to the intrinsic relation between video characteristics and TBD performance. We therefore leverage video characteristics, frame information, and past TBD results to drive the optimization. Our approach surpasses baselines with fixed TBD configurations as well as recent research, achieving accuracy comparable to pure detection while significantly reducing latency. Moreover, it enables performance analysis of tracking and detection in diverse scenarios. The method proves generalizable and highly practical on common video analytics datasets and resource-constrained devices.
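
To make the scheduling idea concrete, here is a minimal, self-contained sketch of a multi-objective, reinforcement-learning-style scheduler that picks a tracking-by-detection configuration (here just the detection interval) to trade accuracy against latency. The reward shape, the configuration space, and all names (reward, choose_config) are illustrative assumptions, not the SmartTBD implementation; the simulated accuracy/latency values stand in for measurements from an actual TBD pipeline.

```python
import random

# Hypothetical TBD configurations: run the detector every k-th frame,
# and let the tracker fill in the remaining frames.
DETECTION_INTERVALS = [1, 2, 4, 8]

def reward(accuracy, latency_ms, lam=0.01):
    """Multi-objective reward: trade detection accuracy against latency.
    lam is an assumed weighting factor, not a value from the paper."""
    return accuracy - lam * latency_ms

# Toy epsilon-greedy scheduler over a single discrete state; the real
# SmartTBD agent conditions on video/frame features and past TBD results.
q_values = {k: 0.0 for k in DETECTION_INTERVALS}
counts = {k: 0 for k in DETECTION_INTERVALS}

def choose_config(eps=0.1):
    if random.random() < eps:
        return random.choice(DETECTION_INTERVALS)
    return max(q_values, key=q_values.get)

def update(config, r):
    counts[config] += 1
    q_values[config] += (r - q_values[config]) / counts[config]

# Simulated rollout: accuracy and latency here are stand-ins for values
# that would come from running tracking-by-detection on a video segment.
for _ in range(200):
    k = choose_config()
    accuracy = 0.9 - 0.03 * k + random.gauss(0, 0.02)   # longer intervals lose accuracy
    latency = 40.0 / k + 5.0                            # fewer detections -> lower latency
    update(k, reward(accuracy, latency))

print("preferred detection interval:", max(q_values, key=q_values.get))
```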

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
Keywords
Mobile vision, tracking-by-detection, scheduling
National Category
Telecommunications
Identifiers
urn:nbn:se:kth:diva-362957 (URN), 10.1145/3703912 (DOI), 001454951000008 (), 2-s2.0-105003605284 (Scopus ID)
Note

QC 20250505

Available from: 2025-05-05. Created: 2025-05-05. Last updated: 2025-05-27. Bibliographically approved.
Yin, W. (2024). Developing Data-Driven Models for Understanding Human Motion. (Doctoral dissertation). Stockholm: KTH Royal Institute of Technology
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Humans are the primary subjects of interest in the realm of computer vision. Specifically, perceiving, generating, and understanding human activities have long been a core pursuit of machine intelligence. Over the past few decades, data-driven methods for modeling human motion have demonstrated great potential across various interactive media and social robotics domains. Despite these impressive achievements, challenges remain in analyzing multi-agent/multi-modal behaviors and in producing high-fidelity and highly varied motions. This complexity arises because human motion is inherently dynamic, uncertain, and intertwined with its environment. This thesis aims to introduce challenges and data-driven methods of understanding human motion and then elaborate on the contributions of the included papers. We present this thesis mainly in ascending order of complexity: recognition, synthesis, and transfer, which include the tasks of perceiving, generating, and understanding human activities.

Firstly, we present methods to recognize human motion (Paper A). We consider a conversational group scenario where people gather and stand in an environment to converse. Based on transformer-based networks and graph convolutional neural networks, we demonstrate how spatial-temporal group dynamics can be modeled and perceived on both the individual and group levels. Secondly, we investigate probabilistic autoregressive approaches to generate controllable human locomotion. We employ deep generative models, namely normalizing flows (Paper B) and diffusion models (Paper C), to generate and reconstruct the 3D skeletal poses of humans over time. Finally, we deal with the problem of motion style transfer. We propose style transfer systems, based on GAN-based (Paper D) and diffusion-based (Paper E) methods, that transform motion styles while attempting to preserve the motion context. Compared with previous research, which mainly focuses on simple locomotion or exercise, we consider more complex dance movements and multimodal information.

In summary, this thesis aims to propose methods that can effectively perceive, generate, and transfer 3D human motion. In terms of network architectures, we employ a graph formulation to exploit the correlation of human skeletons, thereby introducing inductive bias through graph structures. Additionally, we leverage transformers to handle long-term data dependencies and to weigh the importance of varying data components. In terms of learning frameworks, we adopt generative models to represent joint distributions over relevant variables and multiple modalities, which are flexible enough to cover a wide range of tasks. Our experiments demonstrate the effectiveness of the proposed frameworks by evaluating the methods on our own collected dataset and on public datasets. We show how these methods are applied to various challenging tasks.

Abstract [sv]

Humans are of primary interest for studies in the field of computer vision. More specifically, perceiving, generating, and understanding human activities have long been a central pursuit of machine intelligence. Over the past decades, data-driven methods for modeling human motion have shown great potential in various interactive media and social robotics domains. Despite these impressive successes, challenges remain in analyzing multi-agent/multimodal behaviors and in producing high-fidelity and highly varied motions. This complexity arises because human motion is inherently dynamic, uncertain, and intertwined with its environment. This thesis aims to introduce challenges and data-driven methods for understanding human motion and then to describe the contributions of the included papers. We present this thesis mainly in ascending order of complexity: recognition, synthesis, and transfer, which include the tasks of perceiving, generating, and understanding human activities.

First, we present methods for recognizing human motion (Paper A). We consider a conversational group scenario where people gather and stand in an environment to converse. Based on transformer-based networks and graph convolutional neural networks, we show how spatial-temporal group dynamics can be modeled and perceived at both the individual and group levels. Second, we investigate probabilistic autoregressive methods for generating controllable human locomotion. We use deep generative models, namely normalizing flows (Paper B) and diffusion models (Paper C), to generate and reconstruct 3D skeletal poses of humans over time. Finally, we address the problem of motion style transfer. We propose style transfer systems that enable the transformation of motion styles while attempting to preserve the motion context, using GAN-based (Paper D) and diffusion-based (Paper E) methods. Compared with previous research, which mainly focuses on simple locomotion or exercise, we consider more complex dance movements and multimodal information.

In summary, this thesis aims to propose methods that can effectively perceive, generate, and transfer 3D human motion. In terms of network architectures, we use a graph formulation to exploit the correlation of human skeletons, thereby introducing inductive bias through graph structures. In addition, we leverage transformers to handle long-term data dependencies and to weigh the importance of varying components of the data. In terms of learning frameworks, we adopt generative models to represent joint distributions over relevant variables and multiple modalities, which are flexible enough to cover a wide range of tasks. Our experiments demonstrate the effectiveness of the proposed frameworks by evaluating the methods on our own collected dataset and on public datasets. We show how these methods are applied to a range of challenging tasks.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2024. p. xiii, 68
Series
TRITA-EECS-AVL ; 2024:9
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-342366 (URN), 978-91-8040-815-8 (ISBN)
Public defence
2024-02-16, https://kth-se.zoom.us/j/62347635904, F3, Lindstedtsvägen 26, Stockholm, 14:00 (English)
Note

QC 20240117

Available from: 2024-01-17. Created: 2024-01-16. Last updated: 2024-02-05. Bibliographically approved.
Yin, W., Yu, Y., Yin, H., Kragic, D. & Björkman, M. (2024). Scalable Motion Style Transfer with Constrained Diffusion Generation. In: Proceedings of the 38th AAAI Conference on Artificial Intelligence. Paper presented at The 38th Annual AAAI Conference on Artificial Intelligence, February 20-27, 2024, Vancouver, Canada (pp. 10234-10242). Association for the Advancement of Artificial Intelligence (AAAI), 38
2024 (English). In: Proceedings of the 38th AAAI Conference on Artificial Intelligence, Association for the Advancement of Artificial Intelligence (AAAI), 2024, Vol. 38, p. 10234-10242. Conference paper, Published paper (Refereed)
Abstract [en]

Current training of motion style transfer systems relies on consistency losses across style domains to preserve content, which hinders scalable application to a large number of domains and to private data. Recent image transfer works show the potential of independent training on each domain by leveraging implicit bridging between diffusion models; however, content preservation is limited to simple data patterns. We address this by imposing biased sampling in the backward diffusion process while maintaining domain independence in the training stage. We construct the bias from source-domain keyframes and apply it as the gradient of content constraints, yielding a framework with keyframe manifold constraint gradients (KMCGs). Our validation demonstrates the success of training separate models to transfer between as many as ten dance motion styles. Comprehensive experiments show a significant improvement in preserving motion content compared to baseline and ablative diffusion-based style transfer models. In addition, we perform a human study for a subjective assessment of the quality of the generated dance motions. The results validate the competitiveness of KMCGs.
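
The core mechanism, biasing the backward diffusion process with a gradient computed from source-domain keyframes, can be sketched as follows. This is a minimal illustration under assumed choices (linear noise schedule, a placeholder denoiser, a hand-picked guidance scale), not the paper's KMCG implementation.

```python
import torch

T = 50
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def denoiser(x_t, t):
    """Placeholder for the trained motion diffusion model's noise prediction."""
    return torch.zeros_like(x_t)

def guided_reverse_step(x_t, t, keyframes, key_idx, scale=0.5):
    """One backward diffusion step biased toward source-domain keyframes.
    `scale` is an assumed guidance weight, not a value from the paper."""
    x_t = x_t.detach().requires_grad_(True)
    eps = denoiser(x_t, t)
    a_bar = alpha_bars[t]
    # Predicted clean motion (x0) from the current noisy sample.
    x0_hat = (x_t - torch.sqrt(1 - a_bar) * eps) / torch.sqrt(a_bar)
    # Content constraint: keep the predicted keyframes close to the source keyframes.
    constraint = ((x0_hat[:, key_idx] - keyframes) ** 2).mean()
    grad = torch.autograd.grad(constraint, x_t)[0]
    # Standard ancestral mean, then bias it against the constraint gradient.
    mean = (x_t - betas[t] / torch.sqrt(1 - a_bar) * eps) / torch.sqrt(alphas[t])
    mean = mean - scale * grad
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return (mean + torch.sqrt(betas[t]) * noise).detach()

# Toy usage: 2 sequences, 60 frames, 63-dim poses; keyframes every 15 frames.
x = torch.randn(2, 60, 63)
key_idx = torch.arange(0, 60, 15)
keyframes = torch.randn(2, len(key_idx), 63)
for t in reversed(range(T)):
    x = guided_reverse_step(x, t, keyframes, key_idx)
```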

Place, publisher, year, edition, pages
Association for the Advancement of Artificial Intelligence (AAAI), 2024
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-342365 (URN), 10.1609/aaai.v38i9.28889 (DOI), 001241512400092 (), 2-s2.0-85189340183 (Scopus ID)
Conference
The 38th Annual AAAI Conference on Artificial Intelligence, February 20-27, 2024, Vancouver, Canada
Note

QC 20241112

Available from: 2024-01-16. Created: 2024-01-16. Last updated: 2024-11-12. Bibliographically approved.
Fu, J., Tan, J., Yin, W., Pashami, S. & Björkman, M. (2023). Component attention network for multimodal dance improvisation recognition. In: PROCEEDINGS OF THE 25TH INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, ICMI 2023. Paper presented at 25th International Conference on Multimodal Interaction (ICMI), OCT 09-13, 2023, Sorbonne Univ, Paris, FRANCE (pp. 114-118). Association for Computing Machinery (ACM)
2023 (English). In: PROCEEDINGS OF THE 25TH INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, ICMI 2023, Association for Computing Machinery (ACM), 2023, p. 114-118. Conference paper, Published paper (Refereed)
Abstract [en]

Dance improvisation is an active research topic in the arts. Motion analysis of improvised dance can be challenging due to its unique dynamics. Data-driven dance motion analysis, including recognition and generation, is often limited to skeletal data. However, data of other modalities, such as audio, can be recorded and benefit downstream tasks. This paper explores the application and performance of multimodal fusion methods for human motion recognition in the context of dance improvisation. We propose an attention-based model, component attention network (CANet), for multimodal fusion on three levels: 1) feature fusion with CANet, 2) model fusion with CANet and graph convolutional network (GCN), and 3) late fusion with a voting strategy. We conduct thorough experiments to analyze the impact of each modality in different fusion methods and distinguish critical temporal or component features. We show that our proposed model outperforms the two baseline methods, demonstrating its potential for analyzing improvisation in dance.
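
A minimal sketch of the feature-fusion level described above: an attention module scores each modality component (e.g., a skeleton embedding and an audio embedding), and the weighted sum feeds a classifier. The module name, dimensions, and class count are illustrative assumptions, not the published CANet code. Model fusion and late fusion with voting would wrap several such models, as the abstract outlines.

```python
import torch
import torch.nn as nn

class ComponentAttentionFusion(nn.Module):
    """Illustrative attention-based feature fusion over modality components
    (e.g., skeleton and audio embeddings); not the published CANet code."""
    def __init__(self, dim=128, n_classes=4):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, components):
        # components: (batch, n_components, dim), one embedding per modality/component
        weights = torch.softmax(self.score(components).squeeze(-1), dim=1)  # (B, C)
        fused = (weights.unsqueeze(-1) * components).sum(dim=1)             # (B, dim)
        return self.classifier(fused), weights

# Toy usage with random skeleton and audio embeddings.
skeleton_emb = torch.randn(8, 128)
audio_emb = torch.randn(8, 128)
model = ComponentAttentionFusion()
logits, attn = model(torch.stack([skeleton_emb, audio_emb], dim=1))
print(logits.shape, attn.shape)  # torch.Size([8, 4]) torch.Size([8, 2])
```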

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Dance Recognition, Multimodal Fusion, Attention Network
National Category
Other Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-343780 (URN), 10.1145/3577190.3614114 (DOI), 001147764700016 (), 2-s2.0-85175844284 (Scopus ID)
Conference
25th International Conference on Multimodal Interaction (ICMI), OCT 09-13, 2023, Sorbonne Univ, Paris, FRANCE
Note

Part of proceedings ISBN 979-8-4007-0055-2

QC 20240222

Available from: 2024-02-22. Created: 2024-02-22. Last updated: 2024-03-05. Bibliographically approved.
Yin, W., Tu, R., Yin, H., Kragic, D., Kjellström, H. & Björkman, M. (2023). Controllable Motion Synthesis and Reconstruction with Autoregressive Diffusion Models. In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN. Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA (pp. 1102-1108). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1102-1108. Conference paper, Published paper (Refereed)
Abstract [en]

Data-driven and controllable human motion synthesis and prediction are active research areas with various applications in interactive media and social robotics. Challenges remain in these fields in generating diverse motions given past observations and in dealing with imperfect poses. This paper introduces MoDiff, an autoregressive probabilistic diffusion model over motion sequences conditioned on control contexts of other modalities. Our model integrates a cross-modal Transformer encoder and a Transformer-based decoder, which we find effective in capturing temporal correlations in the motion and control modalities. We also introduce a new data dropout method based on the diffusion forward process to provide richer data representations and robust generation. We demonstrate the superior performance of MoDiff in controllable motion synthesis for locomotion relative to two baselines, and we show the benefits of diffusion data dropout for robust synthesis and reconstruction of high-fidelity motion close to recorded data.
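
The diffusion-based data dropout idea can be illustrated with a short sketch: instead of zeroing out conditioning poses, they are corrupted with the diffusion forward process at a randomly drawn noise level. The schedule, shapes, and the way the corrupted context is used downstream are assumptions for illustration, not the MoDiff implementation.

```python
import torch

T = 100
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def diffusion_dropout(poses, max_step=None):
    """Corrupt conditioning poses with the diffusion forward process instead of
    zeroing them out; the step range and usage are assumptions for illustration."""
    max_step = max_step or T
    b = poses.shape[0]
    t = torch.randint(0, max_step, (b,))              # per-sequence noise level
    a_bar = alpha_bars[t].view(b, 1, 1)
    noise = torch.randn_like(poses)
    return torch.sqrt(a_bar) * poses + torch.sqrt(1 - a_bar) * noise

# Toy usage: a batch of past-pose windows used as autoregressive context.
context = torch.randn(16, 10, 63)        # (batch, past frames, pose dim)
noisy_context = diffusion_dropout(context)
print(noisy_context.shape)
```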

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-341978 (URN), 10.1109/RO-MAN57019.2023.10309317 (DOI), 001108678600131 (), 2-s2.0-85186990309 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings ISBN 979-8-3503-3670-2

QC 20240110

Available from: 2024-01-10. Created: 2024-01-10. Last updated: 2025-02-07. Bibliographically approved.
Yin, W., Yin, H., Baraka, K., Kragic, D. & Björkman, M. (2023). Dance Style Transfer with Cross-modal Transformer. In: 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV). Paper presented at 23rd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), JAN 03-07, 2023, Waikoloa, HI (pp. 5047-5056). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 5047-5056. Conference paper, Published paper (Refereed)
Abstract [en]

We present CycleDance, a dance style transfer system that transforms an existing motion clip in one dance style into a motion clip in another dance style while attempting to preserve the motion context of the dance. Our method extends an existing CycleGAN architecture for modeling audio sequences and integrates multimodal transformer encoders to account for music context. We adopt sequence length-based curriculum learning to stabilize training. Our approach captures rich and long-term intra-relations between motion frames, which is a common challenge in motion transfer and synthesis work. We further introduce new metrics for gauging transfer strength and content preservation in the context of dance movements. We perform an extensive ablation study as well as a human study including 30 participants with 5 or more years of dance experience. The results demonstrate that CycleDance generates realistic movements with the target style, significantly outperforming the baseline CycleGAN on naturalness, transfer strength, and content preservation.
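
Two ingredients mentioned above, the cycle-consistency objective and the sequence-length curriculum, can be sketched briefly. The per-frame linear "generators" and the schedule values are placeholders for illustration only; the actual CycleDance generators are cross-modal transformer networks conditioned on music.

```python
import torch
import torch.nn as nn

# Minimal sketch of a cycle-consistency objective for motion style transfer;
# the generators below are placeholders, not the CycleDance architecture.
G_ab = nn.Linear(63, 63)   # style A -> style B (placeholder per-frame mapping)
G_ba = nn.Linear(63, 63)   # style B -> style A

def cycle_loss(motion_a, motion_b):
    """L1 cycle-consistency: A -> B -> A and B -> A -> B should reconstruct the input."""
    rec_a = G_ba(G_ab(motion_a))
    rec_b = G_ab(G_ba(motion_b))
    return (rec_a - motion_a).abs().mean() + (rec_b - motion_b).abs().mean()

def curriculum_length(epoch, start=16, step=16, max_len=256):
    """Sequence-length curriculum: train on short clips first, then longer ones.
    The schedule values are assumptions for illustration."""
    return min(start + step * epoch, max_len)

motion_a = torch.randn(4, curriculum_length(0), 63)
motion_b = torch.randn(4, curriculum_length(0), 63)
print(cycle_loss(motion_a, motion_b).item())
```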

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE Winter Conference on Applications of Computer Vision, ISSN 2472-6737
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-333220 (URN), 10.1109/WACV56688.2023.00503 (DOI), 000971500205016 (), 2-s2.0-85149044034 (Scopus ID)
Conference
23rd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), JAN 03-07, 2023, Waikoloa, HI
Note

QC 20230731

Available from: 2023-07-31. Created: 2023-07-31. Last updated: 2025-02-07. Bibliographically approved.
Yang, F., Yin, W., Wang, L., Li, T., Zhao, P., Liu, B., . . . Zhang, D. (2023). Diffusion-Based Time Series Data Imputation for Cloud Failure Prediction at Microsoft 365. In: ESEC/FSE 2023 - Proceedings of the 31st ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering. Paper presented at 31st ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2023, San Francisco, United States of America, Dec 3 2023 - Dec 9 2023 (pp. 2050-2055). Association for Computing Machinery (ACM)
2023 (English). In: ESEC/FSE 2023 - Proceedings of the 31st ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Association for Computing Machinery (ACM), 2023, p. 2050-2055. Conference paper, Published paper (Refereed)
Abstract [en]

Ensuring reliability in large-scale cloud systems like Microsoft 365 is crucial. Cloud failures, such as disk and node failures, threaten service reliability, causing service interruptions and financial loss. Existing works focus on failure prediction and on proactively taking action before failures happen. However, they suffer from poor data quality, such as missing data in model training and prediction, which limits performance. In this paper, we focus on enhancing data quality through data imputation with the proposed Diffusion+, a sample-efficient diffusion model that efficiently imputes missing data conditioned on the observed data. Experiments with industrial datasets and application practice show that our model contributes to improving the performance of downstream failure prediction.
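
The conditioning idea behind diffusion-based imputation can be sketched as follows: during reverse diffusion, observed entries of the time series window are re-injected at the matching noise level while missing entries are filled by the model. This illustrates the general technique under assumed choices (linear schedule, placeholder denoiser), not the Diffusion+ model itself.

```python
import torch

T = 50
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def denoiser(x_t, t):
    """Placeholder for a trained noise-prediction network over time series windows."""
    return torch.zeros_like(x_t)

def impute(series, observed_mask):
    """Sketch of diffusion-based imputation: at every reverse step, generated values
    fill the missing entries while observed entries are re-injected at the matching
    noise level. This illustrates the general conditioning idea, not Diffusion+ itself."""
    x = torch.randn_like(series)
    for t in reversed(range(T)):
        eps = denoiser(x, t)
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
        # Re-noise the observed values to step t-1 and keep them fixed.
        if t > 0:
            a_bar = alpha_bars[t - 1]
            x_obs = torch.sqrt(a_bar) * series + torch.sqrt(1 - a_bar) * torch.randn_like(series)
        else:
            x_obs = series
        x = observed_mask * x_obs + (1 - observed_mask) * x
    return x

window = torch.randn(1, 48, 8)                       # (batch, time, metrics)
mask = (torch.rand_like(window) > 0.3).float()       # 1 = observed, 0 = missing
print(impute(window, mask).shape)
```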

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Diffusion model, disk failure prediction, missing data imputation
National Category
Software Engineering; Other Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-341954 (URN), 10.1145/3611643.3613866 (DOI), 001148157800169 (), 2-s2.0-85180547809 (Scopus ID)
Conference
31st ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2023, San Francisco, United States of America, Dec 3 2023 - Dec 9 2023
Note

Part of ISBN 9798400703270

QC 20240108

Available from: 2024-01-08. Created: 2024-01-08. Last updated: 2025-05-27. Bibliographically approved.
Yin, W., Yin, H., Baraka, K., Kragic, D. & Björkman, M. (2023). Multimodal dance style transfer. Machine Vision and Applications, 34(4), Article ID 48.
2023 (English). In: Machine Vision and Applications, ISSN 0932-8092, E-ISSN 1432-1769, Vol. 34, no. 4, article id 48. Article in journal (Refereed). Published
Abstract [en]

This paper first presents CycleDance, a novel dance style transfer system that transforms an existing motion clip in one dance style into a motion clip in another dance style while attempting to preserve the motion context of the dance. CycleDance extends existing CycleGAN architectures with multimodal transformer encoders to account for the music context. We adopt a sequence length-based curriculum learning strategy to stabilize training. Our approach captures rich and long-term intra-relations between motion frames, which is a common challenge in motion transfer and synthesis work. Building upon CycleDance, we further propose StarDance, which enables many-to-many mappings across different styles using a single generator network. Additionally, we introduce new metrics for gauging transfer strength and content preservation in the context of dance movements. To evaluate the performance of our approach, we perform an extensive ablation study and a human study with 30 participants, each with 5 or more years of dance experience. Our experimental results show that our approach can generate realistic movements with the target style, outperforming the baseline CycleGAN and its variants on naturalness, transfer strength, and content preservation. Our proposed approach has potential applications in choreography, gaming, animation, and tool development for artistic and scientific innovations in the field of dance.
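
The many-to-many extension can be illustrated with a sketch of a single generator conditioned on a target-style label, in the spirit of StarDance. The architecture, sizes, and names below are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class StyleConditionedGenerator(nn.Module):
    """Sketch of a single generator handling many-to-many style mappings by
    conditioning on a target-style label; sizes are illustrative assumptions."""
    def __init__(self, pose_dim=63, n_styles=5, hidden=128):
        super().__init__()
        self.n_styles = n_styles
        self.net = nn.Sequential(
            nn.Linear(pose_dim + n_styles, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, motion, target_style):               # motion: (B, T, pose_dim)
        onehot = nn.functional.one_hot(target_style, self.n_styles).float()
        onehot = onehot.unsqueeze(1).expand(-1, motion.shape[1], -1)
        return self.net(torch.cat([motion, onehot], dim=-1))

gen = StyleConditionedGenerator()
out = gen(torch.randn(4, 60, 63), torch.tensor([0, 1, 2, 3]))
print(out.shape)   # torch.Size([4, 60, 63])
```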

Place, publisher, year, edition, pages
Springer Nature, 2023
Keywords
Style transfer, Dance motion, Multimodal learning, Generative models
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-328307 (URN), 10.1007/s00138-023-01399-x (DOI), 000984951800001 (), 2-s2.0-85158999932 (Scopus ID)
Note

QC 20230607

Available from: 2023-06-07. Created: 2023-06-07. Last updated: 2024-01-17. Bibliographically approved.
Demir Kanik, S. U., Yin, W., Güneysu Özgür, A., Ghadirzadeh, A., Björkman, M. & Kragic, D. (2022). Improving EEG-based Motor Execution Classification for Robot Control. In: Proceedings 14th International Conference, SCSM 2022, Held as Part of the 24th HCI International Conference, HCII 2022: Social Computing and Social Media: Design, User Experience and Impact. Paper presented at Social Computing and Social Media: Design, User Experience and Impact - 14th International Conference, SCSM 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 - July 1, 2022 (pp. 65-82). Springer Nature
2022 (English). In: Proceedings 14th International Conference, SCSM 2022, Held as Part of the 24th HCI International Conference, HCII 2022: Social Computing and Social Media: Design, User Experience and Impact, Springer Nature, 2022, p. 65-82. Conference paper, Published paper (Refereed)
Abstract [en]

Brain Computer Interface (BCI) systems have the potential to provide a communication tool using non-invasive signals, which can be applied to various fields including neuro-rehabilitation and entertainment. Interpreting multi-class movement intentions in a real-time setting to control external devices, such as robotic arms, remains one of the main challenges in the BCI field. We propose a learning framework to decode upper limb movement intentions before and during movement execution (ME), with the inclusion of motor imagery (MI) trials. The design of the framework allows the system to evaluate the uncertainty of the classification output and respond accordingly. The EEG signals collected during MI and ME trials are fed into a hybrid architecture consisting of Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks, with limited pre-processing. The proposed approach shows the potential to anticipate the intended movement direction before the onset of the movement, while waiting to reach a certainty level, potentially by observing more EEG data from the beginning of the actual movement, before sending control commands to the robot, thereby avoiding undesired outcomes. The presented results indicate that both the accuracy and the confidence level of the model improve with the introduction of MI trials right before the movement execution. Our results confirm that the proposed model can contribute to real-time and continuous decoding of movement directions for robotic applications.
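
A minimal sketch of the described pipeline, a hybrid CNN + LSTM classifier over EEG windows followed by a confidence-thresholded decision that waits for more data when uncertain, is shown below. Layer sizes, the sampling rate, and the threshold are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class EEGConvLSTM(nn.Module):
    """Illustrative hybrid CNN + LSTM classifier for EEG windows; layer sizes are
    assumptions, not the paper's architecture."""
    def __init__(self, n_channels=32, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, channels, samples)
        h = self.conv(x).transpose(1, 2)        # (batch, time, features)
        _, (h_n, _) = self.lstm(h)
        return self.head(h_n[-1])

def decide(logits, threshold=0.8):
    """Send a command only once the class probability exceeds a confidence threshold;
    otherwise wait for more EEG data. The threshold is an assumed value."""
    probs = torch.softmax(logits, dim=-1)
    conf, cls = probs.max(dim=-1)
    return [int(c) if p >= threshold else None for p, c in zip(conf, cls)]

model = EEGConvLSTM()
window = torch.randn(2, 32, 250)        # e.g. 1 s of 250 Hz EEG from 32 channels
print(decide(model(window)))
```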

Place, publisher, year, edition, pages
Springer Nature, 2022
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13315
Keywords
brain computer interface
National Category
Neurosciences; Signal Processing; Robotics and automation
Identifiers
urn:nbn:se:kth:diva-318297 (URN), 10.1007/978-3-031-05061-9_5 (DOI), 000911435700005 (), 2-s2.0-85133032331 (Scopus ID)
Conference
Social Computing and Social Media: Design, User Experience and Impact - 14th International Conference, SCSM 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 - July 1, 2022
Note

QC 20230307

Available from: 2022-09-19. Created: 2022-09-19. Last updated: 2025-02-05. Bibliographically approved.
Yin, W., Yin, H., Kragic, D. & Björkman, M. (2021). Graph-based Normalizing Flow for Human Motion Generation and Reconstruction. In: 2021 30th IEEE international conference on robot and human interactive communication (RO-MAN). Paper presented at 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 08-12, 2021, ELECTR NETWORK (pp. 641-648). Institute of Electrical and Electronics Engineers (IEEE)
2021 (English). In: 2021 30th IEEE international conference on robot and human interactive communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 641-648. Conference paper, Published paper (Refereed)
Abstract [en]

Data-driven approaches for modeling human skeletal motion have found various applications in interactive media and social robotics. Challenges remain in these fields in generating high-fidelity samples and in robustly reconstructing motion from imperfect input data, due to, e.g., missed marker detections. In this paper, we propose a probabilistic generative model to synthesize and reconstruct long-horizon motion sequences conditioned on past information and control signals, such as the path along which an individual is moving. Our method adapts the existing work MoGlow by introducing a new graph-based model. The model leverages the spatial-temporal graph convolutional network (ST-GCN) to effectively capture the spatial structure and temporal correlation of skeletal motion data at multiple scales. We evaluate the models on a mixture of motion capture datasets of human locomotion, with foot-step and bone-length analysis. The results demonstrate the advantages of our model in reconstructing missing markers while achieving comparable results in generating realistic future poses. When the inputs are imperfect, our model shows improved robustness of generation.
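
A minimal sketch of the spatial-temporal graph convolution used to encode skeletal context (which, in the paper, conditions a MoGlow-style normalizing flow) is given below. The joint count, adjacency, and layer sizes are illustrative assumptions, not the paper's ST-GCN configuration.

```python
import torch
import torch.nn as nn

class STGraphConv(nn.Module):
    """Minimal spatial-temporal graph convolution over skeleton joints, of the kind
    used to condition the flow; joint count, adjacency, and sizes are illustrative."""
    def __init__(self, in_dim, out_dim, adjacency):
        super().__init__()
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
        self.register_buffer("A", adjacency / deg)       # row-normalized adjacency
        self.spatial = nn.Linear(in_dim, out_dim)
        self.temporal = nn.Conv1d(out_dim, out_dim, kernel_size=5, padding=2)

    def forward(self, x):                                  # x: (batch, time, joints, dim)
        x = torch.einsum("jk,btkd->btjd", self.A, x)       # aggregate neighboring joints
        x = self.spatial(x)
        b, t, j, d = x.shape
        x = x.permute(0, 2, 3, 1).reshape(b * j, d, t)     # temporal conv per joint
        x = self.temporal(x).reshape(b, j, d, t).permute(0, 3, 1, 2)
        return torch.relu(x)

# Toy usage: a 5-joint chain skeleton, 3D positions, 30 frames.
A = torch.eye(5)
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
layer = STGraphConv(3, 16, A)
print(layer(torch.randn(2, 30, 5, 3)).shape)   # torch.Size([2, 30, 5, 16])
```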

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-305502 (URN), 10.1109/RO-MAN50785.2021.9515316 (DOI), 000709817200093 (), 2-s2.0-85115049506 (Scopus ID)
Conference
30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 08-12, 2021, ELECTR NETWORK
Note

QC 20211201

Part of proceedings: ISBN 978-1-6654-0492-1

Available from: 2021-12-01. Created: 2021-12-01. Last updated: 2024-01-17. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-7189-1336
