Real-time labeling of non-rigid motion capture marker sets
Alexanderson, Simon. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-7801-7617
Beskow, Jonas. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
2017 (English). In: Computers & Graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 69, no. Supplement C, p. 59-67. Article in journal (Refereed). Published
Abstract [en]

Passive optical motion capture is one of the predominant technologies for capturing high fidelity human motion, and is a workhorse in a large number of areas such as bio-mechanics, film and video games. While most state-of-the-art systems can automatically identify and track markers on the larger parts of the human body, the markers attached to the fingers and face provide unique challenges and usually require extensive manual cleanup. In this work we present a robust online method for identification and tracking of passive motion capture markers attached to non-rigid structures. The method is especially suited for large capture volumes and sparse marker sets. Once trained, our system can automatically initialize and track the markers, and the subject may exit and enter the capture volume at will. By using multiple assignment hypotheses and soft decisions, it can robustly recover from a difficult situation with many simultaneous occlusions and false observations (ghost markers). In three experiments, we evaluate the method for labeling a variety of marker configurations for finger and facial capture. We also compare the results with two of the most widely used motion capture platforms: Motion Analysis Cortex and Vicon Blade. The results show that our method is better at attaining correct marker labels and is especially beneficial for real-time applications.
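As a rough intuition for the labeling problem described above, the sketch below frames a single time step as a global assignment between predicted label positions and incoming marker observations, with a distance gate to reject ghost markers. This is a minimal illustration only, not the multiple-hypothesis, soft-decision method of the paper; the function name, units, and gating threshold are assumptions made for the example.

```python
# Conceptual sketch only: per-frame marker labeling as a global assignment
# between predicted label positions and new observations.
# This is NOT the paper's multiple-hypothesis algorithm; the gating
# threshold and names are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment


def label_frame(predicted, observed, max_dist=30.0):
    """Assign observed 3D marker positions to labels.

    predicted : (L, 3) array of predicted positions, one per label
    observed  : (M, 3) array of observed positions (may include ghosts)
    max_dist  : gating distance (here assumed to be mm); matches above it
                are rejected
    Returns a dict {label_index: observation_index} of accepted matches.
    """
    # Pairwise Euclidean distances between every label and observation.
    cost = np.linalg.norm(predicted[:, None, :] - observed[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # globally optimal matching
    return {int(r): int(c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist}


# Example: three labels, one ghost observation, one occluded marker.
pred = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [0.0, 100.0, 0.0]])
obs = np.array([[2.0, 1.0, 0.0], [99.0, 3.0, 0.0], [500.0, 500.0, 0.0]])
print(label_frame(pred, obs))  # {0: 0, 1: 1}; label 2 stays unassigned
```

In a real pipeline this hard per-frame assignment would be replaced by the kind of hypothesis tracking the abstract describes, so that a temporarily ambiguous match can be revised once more evidence arrives.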

Place, publisher, year, edition, pages
Elsevier, 2017. Vol. 69, no. Supplement C, p. 59-67
Keywords [en]
Animation, Motion capture, Hand capture, Labeling
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-218254
DOI: 10.1016/j.cag.2017.10.001
ISI: 000418980500008
Scopus ID: 2-s2.0-85032454324
OAI: oai:DiVA.org:kth-218254
DiVA, id: diva2:1160124
Note

QC 20171127

Available from: 2017-11-24 Created: 2017-11-24 Last updated: 2018-01-16. Bibliographically approved
Part of thesis
1. Performance, Processing and Perception of Communicative Motion for Avatars and Agents
2017 (English). Doctoral thesis, with papers (Other academic)
Abstract [en]

Artificial agents and avatars are designed with a large variety of face and body configurations. Some of these (such as virtual characters in films) may be highly realistic and human-like, while others (such as social robots) have considerably more limited expressive means. In both cases, human motion serves as the model and inspiration for the non-verbal behavior displayed. This thesis focuses on increasing the expressive capacities of artificial agents and avatars using two main strategies: 1) improving the automatic capturing of the most communicative areas for human communication, namely the face and the fingers, and 2) increasing communication clarity by proposing novel ways of eliciting clear and readable non-verbal behavior.

The first part of the thesis covers automatic methods for capturing and processing motion data. In paper A, we propose a novel dual sensor method for capturing hands and fingers using optical motion capture in combination with low-cost instrumented gloves. The approach circumvents the main problems with marker-based systems and glove-based systems, and it is demonstrated and evaluated on a key-word signing avatar. In paper B, we propose a robust method for automatic labeling of sparse, non-rigid motion capture marker sets, and we evaluate it on a variety of marker configurations for finger and facial capture. In paper C, we propose an automatic method for annotating hand gestures using Hierarchical Hidden Markov Models (HHMMs).
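To give a flavour of the kind of gesture annotation paper C describes, the sketch below segments a synthetic motion feature stream with a flat Gaussian HMM from the hmmlearn library. This is only a simplified stand-in: paper C uses Hierarchical HMMs, and the feature choice, synthetic data, and model settings here are assumptions for illustration.

```python
# Conceptual sketch: segmenting a motion feature stream into discrete states
# with a flat Gaussian HMM (hmmlearn). Paper C uses Hierarchical HMMs; this
# simplified stand-in only illustrates the general annotation idea.
# All data below is synthetic and chosen purely for illustration.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Synthetic 1-D "wrist speed" stream: rest (low), stroke (high), rest (low).
stream = np.concatenate([
    rng.normal(0.1, 0.05, 100),   # rest
    rng.normal(1.0, 0.20, 60),    # gesture stroke
    rng.normal(0.1, 0.05, 100),   # rest
]).reshape(-1, 1)

model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=50, random_state=0)
model.fit(stream)                   # unsupervised fit via EM
states = model.predict(stream)      # most likely state per frame (Viterbi)
print(states[:5], states[120:125])  # rest and stroke frames get different labels
```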

The second part of the thesis covers studies on creating and evaluating multimodal databases with clear and exaggerated motion. The main idea is that this type of motion is appropriate for agents under certain communicative situations (such as noisy environments) or for agents with reduced expressive degrees of freedom (such as humanoid robots). In paper D, we record motion capture data for a virtual talking head with variable articulation style (normal-to-over articulated). In paper E, we use techniques from mime acting to generate clear non-verbal expressions custom tailored for three agent embodiments (face-and-body, face-only and body-only).

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2017. p. 73
Series
TRITA-CSC-A, ISSN 1653-5723 ; 24
HSV category
Research subject
Speech and Music Communication
Identifiers
urn:nbn:se:kth:diva-218272 (URN)
978-91-7729-608-9 (ISBN)
Public defence
2017-12-15, F3, Lindstedtsvägen 26, Stockholm, 14:00 (English)
Opponent
Supervisors
Note

QC 20171127

Available from: 2017-11-27 Created: 2017-11-24 Last updated: 2018-01-13. Bibliographically approved

Open Access in DiVA

fulltext (5322 kB), 277 downloads
File information
File: FULLTEXT01.pdf. File size: 5322 kB. Checksum: SHA-512
ae1aa153ca0d87e5a6b352a9e4e48a842e715fd701d37705189738c37fd5399d93e73d0967da22f99273d5ef43fb442c7181f60d3ba1ea5a3d4de19548cdb3b7
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text
Scopus
