A Variational Bayesian Inference Framework for Multiview Depth Image Enhancement
KTH, School of Electrical Engineering (EES), Sound and Image Processing (Sound and Image Processing Lab)
ORCID iD: 0000-0002-7807-5681
2012 (English). In: Proceedings - 2012 IEEE International Symposium on Multimedia, ISM 2012. IEEE, 2012, pp. 183-190. Conference paper, published paper (refereed).
Abstract [en]

In this paper, a general model-based framework for multiview depth image enhancement is proposed. Depth imagery plays a pivotal role in emerging free-viewpoint television, which requires high-quality virtual view synthesis so that viewers can move freely through a dynamic real-world scene. Depth imagery from different viewpoints is used to synthesize an arbitrary number of novel views. Usually, the depth imagery is estimated individually by stereo-matching algorithms and therefore lacks inter-view consistency. This inconsistency degrades the quality of view synthesis. This paper enhances the inter-view consistency of multiview depth imagery with a variational Bayesian inference framework. First, our approach classifies the color information in the multiview color imagery. Second, using the resulting color clusters, we classify the corresponding depth values in the multiview depth imagery. Each clustered depth image is then sub-clustered further. Finally, the means of the sub-clusters are used to enhance the depth imagery at multiple viewpoints. Experiments show that our approach improves the quality of virtual views by up to 0.25 dB.

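The abstract's pipeline (color clustering, per-cluster depth sub-clustering, replacement by sub-cluster means) can be sketched in code. This is a hedged illustration, not the authors' implementation: the function name `enhance_depth`, the component counts, and the use of scikit-learn's `BayesianGaussianMixture` (a standard variational-Bayes GMM, matching the paper's keywords) are all assumptions for illustration only.

```python
# Sketch of the three-stage enhancement idea, assuming pixels from all
# views are stacked into flat arrays (not the paper's actual code):
# 1) cluster multiview color samples with a variational Bayesian GMM,
# 2) within each color cluster, sub-cluster the corresponding depths,
# 3) replace each depth with its sub-cluster mean for consistency.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def enhance_depth(colors, depths, n_color=2, n_sub=2, seed=0):
    """colors: (N, 3) RGB samples stacked from all views; depths: (N,)."""
    color_gmm = BayesianGaussianMixture(n_components=n_color, random_state=seed)
    color_labels = color_gmm.fit_predict(colors)

    enhanced = depths.astype(float).copy()
    for k in np.unique(color_labels):
        idx = np.where(color_labels == k)[0]
        # Sub-cluster the depth values belonging to this color cluster.
        n = min(n_sub, len(idx))  # guard against tiny clusters
        sub_gmm = BayesianGaussianMixture(n_components=n, random_state=seed)
        sub_labels = sub_gmm.fit_predict(depths[idx].reshape(-1, 1))
        for s in np.unique(sub_labels):
            sel = idx[sub_labels == s]
            enhanced[sel] = depths[sel].mean()  # sub-cluster mean
    return enhanced
```

Replacing each depth by its sub-cluster mean can only remove within-cluster variance (law of total variance), which is one way to read the paper's claim of improved inter-view consistency.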
Place, publisher, year, edition, pages
IEEE, 2012, pp. 183-190.
Keywords [en]
Depth enhancement, Gaussian mixture model, Multiview video, Variational Bayesian inference
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-102704
DOI: 10.1109/ISM.2012.44
ISI: 000317430600036
Scopus ID: 2-s2.0-84874244618
ISBN: 978-076954875-3 (print)
OAI: oai:DiVA.org:kth-102704
DiVA: diva2:555934
Conference
14th IEEE International Symposium on Multimedia, ISM 2012; Irvine, CA; 10-12 December 2012
Funder
ICT - The Next Generation
Note

QC 20130308

Available from: 2012-09-22. Created: 2012-09-22. Last updated: 2013-05-23. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text; Scopus

By author/editor: Rana, Pravin Kumar; Taghia, Jalil; Flierl, Markus
By organisation: Sound and Image Processing