kth.se Publications
Cross-attention Masked Auto-Encoder for Human 3D Motion Infilling and Denoising
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Tracab, Tegeluddsvägen 3, Stockholm, Sweden. ORCID iD: 0000-0002-1463-6590
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-2784-7300
Tracab, Tegeluddsvägen 3, Stockholm, Sweden.
Tracab, Tegeluddsvägen 3, Stockholm, Sweden.
2023 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Human 3D pose and motion capture have numerous applications in fields such as augmented and virtual reality, animation, robotics and sports. However, even the best capture methods suffer from artifacts such as missed joints and noisy or inaccurate joint positions. To address this, we propose the Cross-attention Masked Auto-Encoder (XMAE) for human 3D motion infilling and denoising. XMAE extends the original Masked Auto-Encoder design by introducing cross-attention in the decoder to deal with the train-test gap common in methods utilizing masking and mask tokens. Furthermore, we introduce joint displacement as an additional noise source during training, enabling XMAE to learn to correct incorrect joint positions. Through extensive experiments, we show XMAE's effectiveness compared to state-of-the-art approaches across three public datasets, and its ability to denoise real-world data, reducing limb length standard deviation by 28% when applied to our in-the-wild professional soccer dataset.
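The abstract describes two training-time corruptions applied to motion sequences: masking joints (so the model learns infilling) and displacing joints (so it learns denoising). A minimal sketch of such a corruption step, assuming hypothetical values for the mask ratio and noise scale — the paper's actual parameters and model architecture are not given in this record:

```python
import numpy as np

def corrupt_motion(motion, mask_ratio=0.5, noise_std=0.02, rng=None):
    """Sketch of the two corruptions described in the abstract.

    motion: array of shape (frames, joints, 3) with 3D joint positions.
    mask_ratio, noise_std: illustrative values only (assumptions).
    Returns (corrupted, mask), where mask is True for masked joints.
    """
    rng = np.random.default_rng() if rng is None else rng
    frames, joints, _ = motion.shape
    # Randomly select a fraction of (frame, joint) positions to mask out.
    mask = rng.random((frames, joints)) < mask_ratio
    # Displace every joint by small Gaussian noise (the denoising target).
    corrupted = motion + rng.normal(0.0, noise_std, motion.shape)
    # Zero out the masked joints (the infilling target).
    corrupted[mask] = 0.0
    return corrupted, mask
```

The model would then be trained to reconstruct the clean `motion` from `corrupted`, with the mask available to the decoder; in XMAE that decoder uses cross-attention rather than the original MAE's mask-token scheme.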

Place, publisher, year, edition, pages
BMVC, 2023.
Keywords [en]
3D Human pose estimation
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-352209
OAI: oai:DiVA.org:kth-352209
DiVA, id: diva2:1892367
Conference
The 34th British Machine Vision Conference, 20th - 24th November 2023, Aberdeen, UK
Note

QC 20240829

Available from: 2024-08-26. Created: 2024-08-26. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

fulltext (561 kB)
File information
File name: FULLTEXT01.pdf
File size: 561 kB
Checksum: SHA-512
84da6b29bb2677eb369799f89635a0555ff85504ade936594f4f79cec3bd271a0dedf2f1ba3603576932d96770ec722be1b5c8ef3f5daf80061ee16ce204f2a5
Type: fulltext. Mimetype: application/pdf


Authority records

Björkstrand, David; Sullivan, Josephine
