Component attention network for multimodal dance improvisation recognition
2023 (English) In: Proceedings of the 25th International Conference on Multimodal Interaction, ICMI 2023, Association for Computing Machinery (ACM), 2023, p. 114-118. Conference paper, Published paper (Refereed)
Abstract [en]
Dance improvisation is an active research topic in the arts. Motion analysis of improvised dance can be challenging due to its unique dynamics. Data-driven dance motion analysis, including recognition and generation, is often limited to skeletal data. However, data of other modalities, such as audio, can be recorded and benefit downstream tasks. This paper explores the application and performance of multimodal fusion methods for human motion recognition in the context of dance improvisation. We propose an attention-based model, component attention network (CANet), for multimodal fusion on three levels: 1) feature fusion with CANet, 2) model fusion with CANet and graph convolutional network (GCN), and 3) late fusion with a voting strategy. We conduct thorough experiments to analyze the impact of each modality in different fusion methods and distinguish critical temporal or component features. We show that our proposed model outperforms the two baseline methods, demonstrating its potential for analyzing improvisation in dance.
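To make the three fusion levels described in the abstract concrete, the sketch below shows a minimal, hypothetical example of attention-weighted feature fusion over per-modality features (e.g., skeleton and audio) followed by a voting-based late fusion step. This is not the authors' CANet implementation; all names (AttentionFusion, late_fusion_vote), dimensions, and layer choices are illustrative assumptions.

```python
# Hypothetical sketch of attention-based feature fusion plus late-fusion voting.
# NOT the CANet implementation from the paper; dimensions and layers are illustrative.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuse per-modality feature vectors with learned attention weights."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)            # scores each modality's features
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_modalities, feat_dim)
        weights = torch.softmax(self.score(feats), dim=1)  # (batch, num_modalities, 1)
        fused = (weights * feats).sum(dim=1)               # weighted sum over modalities
        return self.classifier(fused)                      # (batch, num_classes)


def late_fusion_vote(logits_per_model: list[torch.Tensor]) -> torch.Tensor:
    """Combine several models' predictions by majority vote over class labels."""
    preds = torch.stack([l.argmax(dim=-1) for l in logits_per_model])  # (models, batch)
    # torch.mode resolves ties in favor of the smallest class index
    return preds.mode(dim=0).values


if __name__ == "__main__":
    batch, modalities, dim, classes = 4, 2, 64, 5
    model = AttentionFusion(dim, classes)
    x = torch.randn(batch, modalities, dim)  # e.g., pooled skeleton + audio features
    logits = model(x)
    print(late_fusion_vote([logits, torch.randn(batch, classes)]))
```

A model-level fusion variant, as in the paper's second level, would replace one branch here with a graph convolutional network over the skeleton and fuse its output with the attention branch before classification.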
Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023. p. 114-118
Keywords [en]
Dance Recognition, Multimodal Fusion, Attention Network
National Category
Other Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-343780
DOI: 10.1145/3577190.3614114
ISI: 001147764700016
Scopus ID: 2-s2.0-85175844284
OAI: oai:DiVA.org:kth-343780
DiVA, id: diva2:1840232
Conference
25th International Conference on Multimodal Interaction (ICMI), OCT 09-13, 2023, Sorbonne Univ, Paris, FRANCE
Note
Part of proceedings ISBN 979-8-4007-0055-2
QC 20240222
Available from: 2024-02-22 Created: 2024-02-22 Last updated: 2024-03-05 Bibliographically approved