Statistical methods for inter-view depth enhancement
Rana, Pravin Kumar (KTH, School of Electrical Engineering (EES), Communication Theory)
Taghia, Jalil (KTH, School of Electrical Engineering (EES), Communication Theory)
Flierl, Markus (KTH, School of Electrical Engineering (EES), Communication Theory), ORCID iD: 0000-0002-7807-5681
2014 (English). In: 2014 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), IEEE, 2014, article no. 6874755. Conference paper, published paper (refereed).
Abstract [en]

This paper briefly presents and evaluates recent advances in statistical methods for reducing inter-view inconsistency in multiview depth imagery. View synthesis is vital in free-viewpoint television, as it allows viewers to move freely in a dynamic scene. Here, depth image-based rendering plays a pivotal role by synthesizing an arbitrary number of novel views from only a subset of captured views and their corresponding depth maps. Usually, each depth map is estimated individually at a different viewpoint by stereo matching and hence lacks inter-view consistency, which degrades the quality of view synthesis. This paper discusses two approaches to enhancing inter-view depth consistency. The first uses generative models based on multiview color and depth classification to assign a probabilistic weight to each depth pixel; the weighted depth pixels are then used to enhance the depth maps. The second performs inter-view consistency testing in depth-difference space to enhance the depth maps at multiple viewpoints. We comparatively evaluate these two methods and discuss their pros and cons for future work.
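The abstract only sketches the two approaches in words. As a rough illustration of the general idea behind the second one, the snippet below shows a simple inter-view depth consistency test for a rectified stereo pair: left-view depth is converted to disparity, projected into the right view, and the per-pixel depth difference is thresholded. This is a minimal sketch under assumed conditions (rectified cameras, metric depth, a relative threshold tau), not the authors' algorithm; all function and parameter names are hypothetical.

import numpy as np

def consistency_mask(depth_left, depth_right, focal, baseline, tau=0.05):
    """Return a boolean mask that is True where the left-view depth agrees
    with the right-view depth it projects onto (illustrative sketch only)."""
    h, w = depth_left.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # For rectified cameras, depth maps to a purely horizontal disparity shift.
    disparity = focal * baseline / np.maximum(depth_left, 1e-6)
    xr = np.round(xs - disparity).astype(int)
    valid = (xr >= 0) & (xr < w)

    # Depth difference between each left pixel and its match in the right view.
    diff = np.full((h, w), np.inf)
    diff[valid] = np.abs(depth_left[valid] - depth_right[ys[valid], xr[valid]])

    # Consistent if the relative depth difference stays below tau.
    return diff < tau * np.maximum(depth_left, 1e-6)

Pixels flagged as inconsistent could then be repaired, for example by a weighted combination of depth candidates from neighbouring views, loosely in the spirit of the probabilistic weighting described for the first approach.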

Place, publisher, year, edition, pages
IEEE, 2014. Article no. 6874755.
Series
3DTV Conference, ISSN 2161-2021
Keyword [en]
Multiview depth maps, depth map enhancement, inter-view consistency, variational Bayesian inference
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-158300
DOI: 10.1109/3DTV.2014.6874755
ISI: 000345738600045
Scopus ID: 2-s2.0-84906569216
ISBN: 978-1-4799-4758-4 (print)
OAI: oai:DiVA.org:kth-158300
DiVA: diva2:777923
Conference
3DTV-Conference on True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), July 2-4, 2014, Budapest, Hungary
Note

QC 20150109

Available from: 2015-01-09. Created: 2015-01-07. Last updated: 2015-02-09. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Authority records

Flierl, Markus

Search in DiVA

By author/editor
Rana, Pravin Kumar; Taghia, Jalil; Flierl, Markus
By organisation
Communication Theory
On the subject
Electrical Engineering, Electronic Engineering, Information Engineering
