Denoising of volumetric depth confidence for view rendering
KTH, School of Electrical Engineering (EES), Sound and Image Processing.
2012 (English). In: 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2012. IEEE, 2012, 1-4 p. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, we define volumetric depth confidence and propose a method to denoise this data by performing adaptive wavelet thresholding using three-dimensional (3D) wavelet transforms. The depth information is relevant for emerging interactive multimedia applications such as 3D TV and free-viewpoint television (FTV). These emerging applications require high-quality virtual view rendering to enable viewers to move freely in a dynamic real-world scene. Depth information of a real-world scene from different viewpoints is used to render an arbitrary number of novel views. Usually, depth estimates of 3D object points from different viewpoints are inconsistent, and this inconsistency negatively affects the quality of view rendering. Based on the superposition principle, we define a volumetric depth confidence description of the underlying geometry of natural 3D scenes by using these inconsistent depth estimates from different viewpoints. Our method denoises this noisy volumetric description and thereby enhances the quality of view rendering by up to 0.45 dB compared to rendering with conventional MPEG depth maps.

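The core operation named in the abstract, thresholding the detail coefficients of a 3D discrete wavelet transform of the confidence volume, can be sketched as follows. This is a minimal illustration assuming the PyWavelets library; the function name denoise_volume, the db2 wavelet, and the MAD-based universal threshold are assumptions made for the sketch and do not reproduce the paper's adaptive thresholding scheme.

```python
# Minimal sketch: denoise a 3D confidence volume by soft-thresholding the
# detail subbands of its 3D discrete wavelet transform (assumes PyWavelets).
import numpy as np
import pywt

def denoise_volume(volume, wavelet="db2", level=2):
    """Threshold the detail coefficients of a 3D DWT and reconstruct."""
    coeffs = pywt.wavedecn(volume, wavelet=wavelet, level=level)
    # coeffs[0] is the approximation subband; it is left untouched.
    for detail in coeffs[1:]:
        for key, band in detail.items():
            # Per-subband noise estimate (assumption: MAD-based sigma) and a
            # universal, VisuShrink-style threshold; the paper's adaptive rule
            # is not specified in the abstract, so this is only illustrative.
            sigma = np.median(np.abs(band)) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(max(band.size, 2)))
            detail[key] = pywt.threshold(band, thr, mode="soft")
    return pywt.waverecn(coeffs, wavelet=wavelet)

if __name__ == "__main__":
    # Synthetic example: a blocky "confident" region corrupted by noise.
    rng = np.random.default_rng(0)
    clean = np.zeros((32, 32, 32))
    clean[8:24, 8:24, 8:24] = 1.0
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    denoised = denoise_volume(noisy)[:32, :32, :32]  # crop padding, if any
    print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
    print("denoised MSE:", np.mean((denoised - clean) ** 2))
```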
Place, publisher, year, edition, pages
IEEE, 2012. 1-4 p.
Keyword [en]
Volumetric depth confidence, denoising, superposition, discrete wavelet transforms, adaptive thresholding, view rendering
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:kth:diva-105840
DOI: 10.1109/3DTV.2012.6365470
Scopus ID: 2-s2.0-84872065368
ISBN: 978-146734905-5 (print)
OAI: oai:DiVA.org:kth-105840
DiVA: diva2:572469
Conference
2012 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video, 3DTV-CON 2012; Zurich; 15 October 2012 through 17 October 2012
Note

QC 20130115

Available from: 2012-11-27. Created: 2012-11-27. Last updated: 2013-01-29. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus
http://www.ee.kth.se/~prara/publications/pdfs/Parthasarathy12_3DTVCON.pdf

Authority records

Flierl, Markus

Search in DiVA

By author/editor
Parthasarathy, Srinivas; Chopra, Akul; Baudin, Emilie; Rana, Pravin Kumar; Flierl, Markus
By organisation
Sound and Image Processing
Signal Processing

Search outside of DiVA

Google
Google Scholar
