Denoising of volumetric depth confidence for view rendering
2012 (English). In: 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2012, IEEE, 2012, 1-4 p. Conference paper (Refereed)
In this paper, we define volumetric depth confidence and propose a method to denoise this data by performing adaptive wavelet thresholding using three-dimensional (3D) wavelet transforms. Depth information is relevant for emerging interactive multimedia applications such as 3D TV and free-viewpoint television (FTV). These emerging applications require high-quality virtual view rendering to enable viewers to move freely in a dynamic real-world scene. Depth information of a real-world scene from different viewpoints is used to render an arbitrary number of novel views. Usually, depth estimates of 3D object points from different viewpoints are inconsistent, and this inconsistency negatively affects the quality of view rendering. Based on the superposition principle, we define a volumetric depth confidence description of the underlying geometry of natural 3D scenes by using these inconsistent depth estimates from different viewpoints. Our method denoises this noisy volumetric description, and with this, we enhance the quality of view rendering by up to 0.45 dB when compared to rendering with conventional MPEG depth maps.
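The abstract's pipeline (3D wavelet transform of the confidence volume, thresholding of detail subbands, inverse transform) can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the single-level separable Haar filter, the soft-thresholding rule, and the fixed threshold value are all assumptions standing in for the paper's adaptive thresholding and unspecified wavelet basis.

```python
import numpy as np

def haar_1d(x, axis):
    # Single-level orthonormal Haar analysis along one axis (even length assumed).
    x = np.moveaxis(x, axis, 0)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return np.moveaxis(a, 0, axis), np.moveaxis(d, 0, axis)

def ihaar_1d(a, d, axis):
    # Inverse of haar_1d: interleave reconstructed even/odd samples.
    a = np.moveaxis(a, axis, 0)
    d = np.moveaxis(d, axis, 0)
    x = np.empty((2 * a.shape[0],) + a.shape[1:], dtype=a.dtype)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return np.moveaxis(x, 0, axis)

def dwt3(vol):
    # Separable one-level 3D DWT: eight subbands keyed by 'a'/'d' per axis,
    # e.g. 'aaa' is the lowpass band, 'add' mixes lowpass and detail.
    bands = {'': vol}
    for axis in range(3):
        bands = {key + tag: sub
                 for key, data in bands.items()
                 for tag, sub in zip('ad', haar_1d(data, axis))}
    return bands

def idwt3(bands):
    # Invert axis by axis in reverse order, merging 'a'/'d' subband pairs.
    for axis in reversed(range(3)):
        prefixes = {k[:-1] for k in bands}
        bands = {p: ihaar_1d(bands[p + 'a'], bands[p + 'd'], axis)
                 for p in prefixes}
    return bands['']

def soft(x, t):
    # Soft thresholding: shrink coefficients toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise_volume(vol, thresh):
    # Threshold only the detail subbands; keep the lowpass 'aaa' band intact.
    bands = dwt3(vol)
    for key in bands:
        if 'd' in key:
            bands[key] = soft(bands[key], thresh)
    return idwt3(bands)
```

With `thresh=0` the round trip `idwt3(dwt3(vol))` reconstructs the volume exactly (the Haar transform is orthonormal); a nonzero threshold suppresses small detail coefficients, which is where most of the noise energy in an inconsistent depth-confidence volume would be expected to lie.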
Place, publisher, year, edition, pages
IEEE, 2012. 1-4 p.
Keywords
Volumetric depth confidence, denoising, superposition, discrete wavelet transforms, adaptive thresholding, view rendering
Identifiers
URN: urn:nbn:se:kth:diva-105840
DOI: 10.1109/3DTV.2012.6365470
ScopusID: 2-s2.0-84872065368
ISBN: 978-146734905-5
OAI: oai:DiVA.org:kth-105840
DiVA: diva2:572469
2012 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video, 3DTV-CON 2012; Zurich; 15 October 2012 through 17 October 2012
QC 20130115. Available from: 2012-11-27. Created: 2012-11-27. Last updated: 2013-01-29. Bibliographically approved.