2014 (English) In: Proceedings - 2014 International Conference on 3D Vision, 3DV 2014, IEEE conference proceedings, 2014, p. 369-376. Conference paper, Published paper (Refereed)
Abstract [en]
This paper contributes a real-time method for recovering facial shape and expression from a single depth image. The method also estimates an accurate and dense correspondence field between the input depth image and a generic face model. Both outputs result from minimizing the error in reconstructing the depth image, achieved by applying a set of identity and expression blend shapes to the model. Traditionally, such a generative approach has been shown to be computationally expensive and non-robust because of the non-linear nature of the reconstruction error. To overcome this problem, we use a discriminatively trained prediction pipeline that employs random forests to generate an initial dense but noisy correspondence field. Our method then exploits a fast ICP-like approximation to update these correspondences, allowing us to quickly obtain a robust initial fit of our model. The model parameters are then fine-tuned to minimize the true reconstruction error using a stochastic optimization technique. The correspondence field resulting from our hybrid generative-discriminative pipeline is accurate and useful for a variety of applications such as mesh deformation and retexturing. Our method works in real time on a single depth image, i.e., without temporal tracking, is free from per-user calibration, and works in low-light conditions.
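The blend-shape fitting the abstract describes can be illustrated with a minimal sketch. The Python example below is not the paper's implementation: the toy model sizes, the linear least-squares solve for the blend-shape weights, and the random-perturbation loop standing in for the paper's stochastic optimization are all assumptions, and the random-forest correspondence predictor and the ICP-like correspondence update are replaced by fixed, synthetic correspondences.

```python
# Hypothetical sketch of fitting identity/expression blend-shape weights to
# depth correspondences. All names and sizes are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

N_VERTS, N_ID, N_EXP = 500, 4, 6                          # toy sizes, far below a real model
MEAN = rng.normal(size=(N_VERTS, 3))                      # generic face model (mean shape)
B_ID = rng.normal(scale=0.1, size=(N_ID, N_VERTS, 3))     # identity blend shapes
B_EXP = rng.normal(scale=0.1, size=(N_EXP, N_VERTS, 3))   # expression blend shapes

def reconstruct(alpha, beta):
    """Apply identity and expression blend-shape weights to the mean shape."""
    return MEAN + np.tensordot(alpha, B_ID, axes=1) + np.tensordot(beta, B_EXP, axes=1)

def residual(alpha, beta, depth_pts, corr):
    """Reconstruction error between model vertices and their depth correspondences."""
    return reconstruct(alpha, beta)[corr] - depth_pts

def fit(depth_pts, corr, iters=30, sigma=0.05):
    """Coarse linear fit followed by a stochastic fine-tuning stage."""
    # Stage 1: with correspondences held fixed, the blend-shape weights solve a
    # linear least-squares problem (an ICP-like step would re-estimate `corr`
    # between such solves; omitted here for brevity).
    A = np.concatenate([B_ID, B_EXP])[:, corr].reshape(N_ID + N_EXP, -1).T
    b = (depth_pts - MEAN[corr]).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    alpha, beta = w[:N_ID], w[N_ID:]
    # Stage 2: random-perturbation hill climbing, a stand-in for the paper's
    # (unspecified here) stochastic optimization of the true reconstruction error.
    best = np.sum(residual(alpha, beta, depth_pts, corr) ** 2)
    for _ in range(iters):
        da = rng.normal(scale=sigma, size=N_ID)
        db = rng.normal(scale=sigma, size=N_EXP)
        err = np.sum(residual(alpha + da, beta + db, depth_pts, corr) ** 2)
        if err < best:
            alpha, beta, best = alpha + da, beta + db, err
    return alpha, beta

# Toy usage: synthesize "depth" points from known weights and recover them.
true_a, true_b = rng.normal(size=N_ID), rng.normal(size=N_EXP)
corr = rng.choice(N_VERTS, size=200, replace=False)       # stand-in for the forest output
depth_pts = reconstruct(true_a, true_b)[corr]
alpha, beta = fit(depth_pts, corr)
print("identity weight error:", np.linalg.norm(alpha - true_a))
```

The design point the sketch tries to capture is that once correspondences are fixed, the model is linear in its blend-shape weights, so a cheap closed-form solve gives the robust initial fit that the abstract's stochastic fine-tuning can then refine.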
Place, publisher, year, edition, pages
IEEE conference proceedings, 2014
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-150827 (URN)
10.1109/3DV.2014.93 (DOI)
2-s2.0-84925299755 (Scopus ID)
9781479970018 (ISBN)
Conference
2014 2nd International Conference on 3D Vision, 3DV 2014; The University of Tokyo, Tokyo, Japan; 8 December 2014 through 11 December 2014
Note
QC 20140911
Available from: 2014-09-10 Created: 2014-09-10 Last updated: 2022-06-23 Bibliographically approved