Real-time Face Reconstruction from a Single Depth Image
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. (Computer Vision) ORCID iD: 0000-0003-4181-2753
Microsoft Research, United States.
Microsoft Research, United States.
Microsoft Research, United States.
2014 (English). In: Proceedings - 2014 International Conference on 3D Vision, 3DV 2014, IEEE conference proceedings, 2014, 369-376 p. Conference paper, Published paper (Refereed)
Abstract [en]

This paper contributes a real-time method for recovering facial shape and expression from a single depth image. The method also estimates an accurate and dense correspondence field between the input depth image and a generic face model. Both outputs are a result of minimizing the error in reconstructing the depth image, achieved by applying a set of identity and expression blend shapes to the model. Traditionally, such a generative approach has been shown to be computationally expensive and non-robust because of the non-linear nature of the reconstruction error. To overcome this problem, we use a discriminatively trained prediction pipeline that employs random forests to generate an initial dense but noisy correspondence field. Our method then exploits a fast ICP-like approximation to update these correspondences, allowing us to quickly obtain a robust initial fit of our model. The model parameters are then fine-tuned to minimize the true reconstruction error using a stochastic optimization technique. The correspondence field resulting from our hybrid generative-discriminative pipeline is accurate and useful for a variety of applications such as mesh deformation and retexturing. Our method works in real time on a single depth image, i.e., without temporal tracking, is free from per-user calibration, and works in low-light conditions.
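A key property the refinement stage exploits is that once the correspondences are fixed, the reconstruction error becomes linear in the identity and expression blend-shape weights, so they can be recovered by a single least-squares solve. The following is a toy sketch of that idea only; all shapes, sizes, and variable names are hypothetical stand-ins, not the paper's actual model or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a "generic face model" as a mean shape plus
# identity and expression blend shapes.
n_verts, n_id, n_expr = 200, 4, 3
mean_shape = rng.normal(size=(n_verts, 3))
id_shapes = rng.normal(scale=0.1, size=(n_id, n_verts, 3))
expr_shapes = rng.normal(scale=0.1, size=(n_expr, n_verts, 3))

def reconstruct(alpha, beta):
    """Apply identity (alpha) and expression (beta) blend-shape weights."""
    return (mean_shape
            + np.tensordot(alpha, id_shapes, axes=1)
            + np.tensordot(beta, expr_shapes, axes=1))

# Synthesize an observed point cloud from ground-truth weights, plus noise,
# standing in for the depth image with known correspondences.
alpha_true = rng.normal(size=n_id)
beta_true = rng.normal(size=n_expr)
observed = reconstruct(alpha_true, beta_true) \
    + rng.normal(scale=0.01, size=(n_verts, 3))

# With correspondences fixed (here trivially the identity mapping, as a
# discriminative predictor would provide), the reconstruction error is
# linear in the weights: one least-squares solve recovers them.
A = np.concatenate([id_shapes.reshape(n_id, -1),
                    expr_shapes.reshape(n_expr, -1)]).T  # (3*n_verts, n_id+n_expr)
b = (observed - mean_shape).ravel()
w, *_ = np.linalg.lstsq(A, b, rcond=None)
alpha_hat, beta_hat = w[:n_id], w[n_id:]

residual = np.linalg.norm(reconstruct(alpha_hat, beta_hat) - observed)
```

In the paper's setting the correspondences themselves are noisy and iteratively updated, which is what the ICP-like approximation and the stochastic fine-tuning address; this sketch isolates only the linear inner solve.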

Place, publisher, year, edition, pages
IEEE conference proceedings, 2014. 369-376 p.
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-150827
DOI: 10.1109/3DV.2014.93
Scopus ID: 2-s2.0-84925299755
ISBN: 9781479970018 (print)
OAI: oai:DiVA.org:kth-150827
DiVA: diva2:745602
Conference
2014 2nd International Conference on 3D Vision, 3DV 2014; The University of Tokyo, Tokyo, Japan; 8 December 2014 through 11 December 2014
Note

QC 20140911

Available from: 2014-09-10. Created: 2014-09-10. Last updated: 2015-05-29. Bibliographically approved.
In thesis
1. Correspondence Estimation in Human Face and Posture Images
2014 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Many computer vision tasks such as object detection, pose estimation, and alignment are directly related to the estimation of correspondences over instances of an object class. Other tasks such as image classification and verification, if not completely solved, can largely benefit from correspondence estimation. This thesis presents practical approaches for tackling the correspondence estimation problem with an emphasis on deformable objects.

The methods presented in this thesis vary greatly in their details, but they all use a combination of generative and discriminative modeling to estimate the correspondences from input images in an efficient manner. While the methods described in this work are generic and can be applied to any object, two classes of objects of high importance, namely human bodies and faces, are the subjects of our experimentation.

When dealing with the human body, we are mostly interested in estimating a sparse set of landmarks; specifically, we are interested in locating the body joints. We use pictorial structures to model the articulation of the body parts generatively and learn efficient discriminative models to localize the parts in the image. This is a common approach explored by many previous works. We further extend this hybrid approach by introducing higher-order terms to deal with the double-counting problem, and provide an algorithm for solving the resulting non-convex problem efficiently. In another work we explore the area of multi-view pose estimation, where we have multiple calibrated cameras and are interested in determining the pose of a person in 3D by aggregating 2D information. This is done efficiently by discretizing the 3D search space and using the 3D pictorial structures model to perform the inference.

In contrast to the human body, faces have a much more rigid structure, and it is relatively easy to detect the major parts of the face such as the eyes, nose, and mouth, but performing dense correspondence estimation on faces under various poses and lighting conditions is still challenging. In a first work we deal with this variation by partitioning the face into multiple parts and learning separate regressors for each part. In another work we take a fully discriminative approach and learn a global regressor from image to landmarks, but to deal with the insufficiency of training data we augment it with a large number of synthetic images. While we have shown strong performance on the standard face datasets for correspondence estimation, in many scenarios the RGB signal becomes distorted as a result of poor lighting conditions and is almost unusable. This problem is addressed in another work where we explore the use of the depth signal for dense correspondence estimation. Here again, a hybrid generative/discriminative approach is used to perform accurate correspondence estimation in real time.
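Pictorial-structures inference of the kind described above is exact on tree-structured part models and can be solved by min-sum dynamic programming over the discretized locations. Below is a minimal sketch on a chain of parts; all sizes, costs, and names are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: a chain of body parts, each placed at one of n_loc discrete
# image locations (hypothetical sizes).
n_parts, n_loc = 4, 50
locations = np.linspace(0.0, 1.0, n_loc)

# Unary terms: appearance cost of placing part p at each location. Random
# here; a learned discriminative part detector would supply these.
unary = rng.normal(size=(n_parts, n_loc))

# Pairwise term: quadratic "spring" deformation cost between consecutive
# parts' locations.
pairwise = 5.0 * (locations[:, None] - locations[None, :]) ** 2

# Min-sum dynamic programming along the chain (exact for tree models).
cost = unary[0].copy()
backptr = np.zeros((n_parts, n_loc), dtype=int)
for p in range(1, n_parts):
    total = cost[:, None] + pairwise      # (prev_loc, cur_loc)
    backptr[p] = np.argmin(total, axis=0)
    cost = unary[p] + np.min(total, axis=0)

# Backtrack the jointly optimal configuration of all parts.
best = np.zeros(n_parts, dtype=int)
best[-1] = int(np.argmin(cost))
for p in range(n_parts - 1, 0, -1):
    best[p - 1] = backptr[p, best[p]]
```

The same message-passing generalizes from a chain to any tree of parts, and discretizing a 3D volume instead of the image plane gives the multi-view variant mentioned above, at the cost of a larger label space.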

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2014. vii, 32 p.
Series
TRITA-CSC-A, ISSN 1653-5723 ; 2014:14
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-150115
ISBN: 978-91-7595-261-1
Public defence
2014-10-10, Kollegiesalen, Brinellvägen 8, KTH, Stockholm, 10:00 (English)
Note

QC 20140919

Available from: 2014-09-19. Created: 2014-08-29. Last updated: 2014-09-19. Bibliographically approved.

Open Access in DiVA

fulltext (12468 kB), 652 downloads
File information
File name: FULLTEXT01.pdf
File size: 12468 kB
Checksum (SHA-512): 7a902fed6a0b9fc5edf301246536b6091ac08460e341f9a89f6164092c386d66125860dcfcb2877fdd93181e31fdabc6f08be1fd5d4de67fc376d22a79579a09
Type: fulltext
Mimetype: application/pdf
