1 - 13 of 13
  • 1.
    Georgakis, Apostolos
    et al.
    Ericsson.
    Rana, Pravin Kumar
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    Radulovic, Ivana
    Ericsson.
    3DTV Exploration Experiments (EE1 & EE4) on the Lovebird1 Data Set, 2009. Report (Other (popular science, discussion, etc.))
    Abstract [en]

    This contribution describes the results of two sets of 3DTV exploration experiments undertaken by Ericsson using the Lovebird 1 sequence defined at the last MPEG meeting in London (see w10720). These sets cover both EE1 on depth estimation and view synthesis and EE4 on coding efficiency.

  • 2.
    Girdzijauskas, Ivana
    et al.
    Ericsson.
    Flierl, Markus
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    Georgakis, Apostolos
    Ericsson.
    Kumar Rana, Pravin
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    Methods and arrangements for 3D scene representation, 2010. Patent (Other (popular science, discussion, etc.))
  • 3.
    Girdzijauskas, Ivana
    et al.
    Ericsson.
    Flierl, Markus
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    Kumar Rana, Pravin
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    Method and processor for 3D scene representation, 2012. Patent (Other (popular science, discussion, etc.))
  • 4.
    Ma, Zhanyu
    et al.
    Rana, Pravin Kumar
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Taghia, Jalil
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Leijon, Arne
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Bayesian estimation of Dirichlet mixture model with variational inference, 2014. In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 47, no. 9, p. 3143-3157. Article in journal (Refereed)
    Abstract [en]

    In statistical modeling, parameter estimation is an essential and challenging task. Estimation of the parameters in the Dirichlet mixture model (DMM) is analytically intractable due to the integral expressions of the gamma function and its corresponding derivatives. We introduce a Bayesian estimation strategy to estimate the posterior distribution of the parameters in the DMM. By assuming the gamma distribution as the prior of each parameter, we approximate both the prior and the posterior distribution of the parameters with a product of several mutually independent gamma distributions. The extended factorized approximation method is applied to introduce a single lower bound to the variational objective function, and an analytically tractable estimation solution is derived. Moreover, only one function is maximized during iterations and, therefore, the convergence of the proposed algorithm is theoretically guaranteed. On synthesized data, the proposed method shows advantages over the EM-based method and a previously proposed Bayesian estimation method. The good performance of the proposed Bayesian estimation method is further demonstrated in two important multimedia signal processing applications.
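    As an illustrative sketch only: the Dirichlet mixture model whose parameters the paper estimates can be written down directly, with component responsibilities computed for points on the probability simplex. The paper's actual contribution (variational Bayesian estimation with gamma priors on the parameters) is not reproduced here, and all function names below are made up for illustration.

```python
import numpy as np
from math import lgamma

def dirichlet_logpdf(x, alpha):
    # log density of Dirichlet(alpha) at a point x on the probability simplex
    return (lgamma(alpha.sum()) - sum(lgamma(a) for a in alpha)
            + np.sum((alpha - 1.0) * np.log(x)))

def dmm_responsibilities(X, alphas, weights):
    # posterior probability of each mixture component for each sample,
    # computed in log space for numerical stability
    logp = np.array([[np.log(w) + dirichlet_logpdf(x, a)
                      for a, w in zip(alphas, weights)] for x in X])
    logp -= logp.max(axis=1, keepdims=True)
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)
```

    In an EM- or VB-style fitting loop, these responsibilities would drive the parameter updates; the paper replaces the intractable exact posterior over `alpha` with a product of gamma factors.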

  • 5.
    Parthasarathy, Srinivas
    et al.
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    Chopra, Akul
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    Baudin, Emilie
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    Rana, Pravin Kumar
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    Denoising of volumetric depth confidence for view rendering, 2012. In: 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2012, IEEE, 2012, p. 1-4. Conference paper (Refereed)
    Abstract [en]

    In this paper, we define volumetric depth confidence and propose a method to denoise this data by performing adaptive wavelet thresholding using three-dimensional (3D) wavelet transforms. The depth information is relevant for emerging interactive multimedia applications such as 3D TV and free-viewpoint television (FTV). These emerging applications require high quality virtual view rendering to enable viewers to move freely in a dynamic real world scene. Depth information of a real world scene from different viewpoints is used to render an arbitrary number of novel views. Usually, depth estimates of 3D object points from different viewpoints are inconsistent. This inconsistency of depth estimates affects the quality of view rendering negatively. Based on the superposition principle, we define a volumetric depth confidence description of the underlying geometry of natural 3D scenes by using these inconsistent depth estimates from different viewpoints. Our method denoises this noisy volumetric description and, with this, enhances the quality of view rendering by up to 0.45 dB when compared to rendering with conventional MPEG depth maps.
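    The 3-D wavelet thresholding idea can be sketched with a one-level Haar transform in plain NumPy. This is a minimal sketch under stated assumptions (even volume dimensions, a single fixed soft threshold); the paper's actual filter bank and its adaptive threshold selection are not reproduced.

```python
import numpy as np

def haar1(a, axis):
    # one-level Haar analysis along one axis (axis length must be even)
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / np.sqrt(2)
    hi = (a[0::2] - a[1::2]) / np.sqrt(2)
    return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

def ihaar1(lo, hi, axis):
    # exact inverse of haar1
    lo = np.moveaxis(lo, axis, 0)
    hi = np.moveaxis(hi, axis, 0)
    a = np.empty((2 * lo.shape[0],) + lo.shape[1:])
    a[0::2] = (lo + hi) / np.sqrt(2)
    a[1::2] = (lo - hi) / np.sqrt(2)
    return np.moveaxis(a, 0, axis)

def denoise3d(vol, t):
    # one-level 3-D Haar transform, soft-threshold the seven detail
    # subbands, keep the all-lowpass subband, then invert
    bands = [vol]
    for ax in range(3):
        bands = [b for pair in (haar1(b, ax) for b in bands) for b in pair]
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
    bands = [bands[0]] + [soft(b) for b in bands[1:]]
    for ax in reversed(range(3)):
        bands = [ihaar1(bands[i], bands[i + 1], ax)
                 for i in range(0, len(bands), 2)]
    return bands[0]
```

    With `t = 0` the transform round-trips exactly; a positive `t` shrinks small (noise-like) detail coefficients in the volumetric confidence data.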

  • 6.
    Rana, Pravin Kumar
    et al.
    KTH, School of Electrical Engineering (EES), Sound and Image Processing (Closed 130101).
    Dash, Mihir Kumar
    Routray, A.
    Pandey, Prem Chand
    Prediction of Sea Ice Edge in the Antarctic Using GVF Snake Model, 2011. In: Journal of the Geological Society of India, ISSN 0016-7622, E-ISSN 0974-6889, Vol. 78, no. 2, p. 99-108. Article in journal (Refereed)
    Abstract [en]

    Antarctic sea ice cover plays an important role in shaping the earth's climate, primarily by insulating the ocean from the atmosphere and increasing the surface albedo. The convective processes that accompany sea ice formation result in bottom water formation. The cold and dense bottom water moves towards the equator along the ocean basins and takes part in the global thermohaline circulation. The sea ice edge is a potential indicator of climate change. Additionally, fishing and commercial shipping activities as well as military submarine operations in the polar seas need reliable ice edge information. However, as the sea ice edge is unstable in time, the temporal validity of the estimated ice edge is often shorter than the time required to transfer the information to the operational user. Hence, accurate sea ice edge determination and prediction are crucial for fine-scale geophysical modeling and for near-real-time operations. In this study, active contour modelling (known as the Snake model) and non-rigid motion estimation techniques have been used for predicting the sea ice edge (SIE) in the Antarctic. For this purpose, the SIE has been detected from sea ice concentration derived from special sensor microwave imager (SSM/I) observations. Pixels at 15% sea ice concentration are taken as the edge between ice and water. The external force of the SIE, the gradient vector flow (GVF), is parameterised for the total Antarctic region for daily as well as weekly data sets. The SIE is predicted at certain points using a statistical technique, and these predicted points are then used to constitute an SIE using the GVF technique. The predicted edge has been validated against that of SSM/I. It is found that all the major curvatures are captured by the predicted edge, and it is in good agreement with the SSM/I observation.
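    As a toy illustration of the edge-detection step only (not the GVF snake or the prediction itself), pixels at the 15% concentration level can be marked as follows. The grid layout and the 4-neighbour rule are assumptions made for this sketch.

```python
import numpy as np

def ice_edge_pixels(conc, level=15.0):
    # mark ice pixels (concentration >= level, in percent) that have at
    # least one open-water 4-neighbour: these form the ice edge
    ice = conc >= level
    edge = np.zeros_like(ice)
    inner = ice[1:-1, 1:-1]
    nb_water = (~ice[:-2, 1:-1] | ~ice[2:, 1:-1]
                | ~ice[1:-1, :-2] | ~ice[1:-1, 2:])
    edge[1:-1, 1:-1] = inner & nb_water
    return edge
```

    In the paper, pixels like these would seed the snake contour, whose evolution under the GVF external force then tracks and predicts the edge.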

  • 7.
    Rana, Pravin Kumar
    et al.
    KTH, School of Electrical Engineering (EES), Sound and Image Processing (Closed 130101). KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Sound and Image Processing (Closed 130101). KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Depth consistency testing for improved view interpolation, 2010. In: , 2010, p. 384-389. Conference paper (Refereed)
    Abstract [en]

    Multiview video will play a pivotal role in the next generation visual communication media services like three-dimensional (3D) television and free-viewpoint television. These advanced media services provide natural 3D impressions and enable viewers to move freely in a dynamic real world scene by changing the viewpoint. High quality virtual view interpolation is required to support free viewpoint viewing. Usually, depth maps of different viewpoints are used to reconstruct a novel view. As these depth maps are usually estimated individually by stereo-matching algorithms, they have very weak spatial consistency. The inconsistency of depth maps affects the quality of view interpolation. In this paper, we propose a method for depth consistency testing to improve view interpolation. The method addresses the problem by warping more than two depth maps from multiple reference viewpoints to the virtual viewpoint. We test the consistency among warped depth values and improve the depth value information of the virtual view. With that, we enhance the quality of the interpolated virtual view.
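    A minimal sketch of the consistency test described above, under the assumption that the reference depth maps have already been warped to the virtual viewpoint and that a fixed tolerance is used (the paper's warping step and its exact test are not reproduced):

```python
import numpy as np

def fuse_depths(warped, tol):
    # warped: (V, H, W) depth values warped from V reference views to the
    # virtual viewpoint; per pixel, keep only values within tol of the
    # median (the "consistent" set) and average them
    med = np.median(warped, axis=0)
    consistent = np.abs(warped - med) <= tol
    w = consistent.astype(float)
    return (warped * w).sum(axis=0) / np.maximum(w.sum(axis=0), 1.0)
```

    Outlying depth estimates from a single view are thereby excluded before the depth value of the virtual view is formed, which is the core of the consistency idea.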

  • 8.
    Rana, Pravin Kumar
    et al.
    KTH, School of Electrical Engineering (EES), Sound and Image Processing (Closed 130101). KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Sound and Image Processing (Closed 130101). KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Depth Pixel Clustering for Consistency Testing of Multiview Depth, 2012. In: European Signal Processing Conference, 2012, p. 1119-1123. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a depth pixel clustering algorithm for consistency testing of multiview depth imagery. The testing addresses the inconsistencies among estimated depth maps of real world scenes by validating depth pixel connection evidence based on a hard connection threshold. With the proposed algorithm, we test the consistency among depth values generated from multiple depth observations using cluster-adaptive connection thresholds. The connection threshold is based on statistical properties of the depth pixels in a cluster or sub-cluster. This approach can improve the depth information of real world scenes at a given viewpoint, which allows us to enhance the quality of synthesized virtual views when compared to depth maps obtained by fixed thresholding. Depth-image-based virtual view synthesis is widely used for upcoming multimedia services like three-dimensional television and free-viewpoint television.
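    The gap-based clustering with a cluster-adaptive threshold can be sketched as below. The parameters `gap` and `k` are illustrative choices, not values from the paper, and the per-pixel bookkeeping is omitted.

```python
import numpy as np

def cluster_depths(values, gap=0.5):
    # split sorted depth samples into clusters wherever the jump between
    # consecutive values exceeds `gap` (a hard connection threshold)
    v = np.sort(values)
    cuts = np.where(np.diff(v) > gap)[0] + 1
    return np.split(v, cuts)

def consistent_depth(values, gap=0.5, k=1.5):
    # take the largest cluster, keep samples within k standard deviations
    # of its mean (the cluster-adaptive connection threshold), and return
    # their mean as the consistent depth value
    clusters = cluster_depths(values, gap)
    c = max(clusters, key=len)
    keep = np.abs(c - c.mean()) <= k * max(c.std(), 1e-9)
    return c[keep].mean()
```

    Making the inner threshold depend on the cluster's own spread is what distinguishes this from the fixed-threshold baseline the abstract compares against.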

  • 9.
    Rana, Pravin Kumar
    et al.
    KTH, School of Electrical Engineering (EES), Sound and Image Processing (Closed 130101). KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Sound and Image Processing (Closed 130101). KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    View Interpolation with structured depth from multiview video, 2011. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a method for interpolating multiview imagery which uses structured depth maps and multiview video plus inter-view connection information to represent a three-dimensional (3D) scene. The structured depth map consists of an inter-view consistent principal depth map and auxiliary depth information. The structured depth maps address the inconsistencies among estimated depth maps which may degrade the quality of rendered virtual views. Generated from multiple depth observations, the structuring of the depth maps is based on tested and adaptively chosen inter-view connections. Further, the use of connection information on the multiview video minimizes distortion due to varying illumination in the interpolated virtual views. Our approach first obtains the structured depth maps and the corresponding connection information; second, it exploits the inter-view connection information when interpolating virtual views. It improves the quality of rendered virtual views by up to 4 dB when compared to the reference MPEG view synthesis software for emerging multimedia services like 3D television and free-viewpoint television.

  • 10.
    Rana, Pravin Kumar
    et al.
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Ma, Zhanyu
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Taghia, Jalil
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Multiview Depth Map Enhancement by Variational Bayes Inference Estimation of Dirichlet Mixture Models, 2013. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2013, p. 1528-1532. Conference paper (Refereed)
    Abstract [en]

    High quality view synthesis is a prerequisite for future free-viewpoint television. It will enable viewers to move freely in a dynamic real world scene. Depth image based rendering algorithms will play a pivotal role when synthesizing an arbitrary number of novel views by using a subset of captured views and corresponding depth maps only. Usually, each depth map is estimated individually by stereo-matching algorithms and, hence, shows lack of inter-view consistency. This inconsistency affects the quality of view synthesis negatively. This paper enhances the inter-view consistency of multiview depth imagery. First, our approach classifies the color information in the multiview color imagery by modeling color with a mixture of Dirichlet distributions, where the model parameters are estimated in a Bayesian framework with variational inference. Second, using the resulting color clusters, we classify the corresponding depth values in the multiview depth imagery. Each clustered depth image is subject to further sub-clustering. Finally, the resulting mean of each sub-cluster is used to enhance the depth imagery at multiple viewpoints. Experiments show that our approach improves the average quality of virtual views by up to 0.8 dB when compared to views synthesized by using conventionally estimated depth maps.
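    The color-then-depth classification pipeline can be sketched with a stand-in clusterer. The paper fits a Dirichlet mixture with variational Bayes; this sketch substitutes plain k-means for the color step and skips the sub-clustering, so it illustrates the data flow rather than the paper's estimator.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # minimal k-means, standing in for the paper's Dirichlet mixture
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return lab

def enhance_depth(colors, depths, k=2):
    # classify pixels by color, then replace each pixel's depth with the
    # mean depth of its color cluster (sub-clustering omitted for brevity)
    lab = kmeans(colors, k)
    out = depths.astype(float).copy()
    for j in range(k):
        m = lab == j
        if m.any():
            out[m] = depths[m].mean()
    return out
```

    Pixels with similar color thus receive a common, smoothed depth value, which is the mechanism by which the clustering enforces consistency across views.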

  • 11.
    Rana, Pravin Kumar
    et al.
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Taghia, Jalil
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Statistical methods for inter-view depth enhancement, 2014. In: 2014 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), IEEE, 2014, p. 6874755. Conference paper (Refereed)
    Abstract [en]

    This paper briefly presents and evaluates recent advances in statistical methods for reducing inter-view inconsistency in multiview depth imagery. View synthesis is vital in free-viewpoint television in order to allow viewers to move freely in a dynamic scene. Here, depth image-based rendering plays a pivotal role by synthesizing an arbitrary number of novel views by using a subset of captured views and corresponding depth maps only. Usually, each depth map is estimated individually at different viewpoints by stereo matching and, hence, lacks inter-view consistency. This lack of consistency affects the quality of view synthesis negatively. This paper discusses two different approaches to enhance the inter-view depth consistency. The first one uses generative models based on multiview color and depth classification to assign a probabilistic weight to each depth pixel; the weighted depth pixels are then utilized to enhance the depth maps. The second one performs inter-view consistency testing in depth difference space to enhance the depth maps at multiple viewpoints. We comparatively evaluate these two methods and discuss their pros and cons for future work.

  • 12.
    Rana, Pravin Kumar
    et al.
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    Taghia, Jalil
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Sound and Image Processing.
    A Variational Bayesian Inference Framework for Multiview Depth Image Enhancement, 2012. In: Proceedings - 2012 IEEE International Symposium on Multimedia, ISM 2012, IEEE, 2012, p. 183-190. Conference paper (Refereed)
    Abstract [en]

    In this paper, a general model-based framework for multiview depth image enhancement is proposed. Depth imagery plays a pivotal role in emerging free-viewpoint television. This technology requires high quality virtual view synthesis to enable viewers to move freely in a dynamic real world scene. Depth imagery of different viewpoints is used to synthesize an arbitrary number of novel views. Usually, the depth imagery is estimated individually by stereo-matching algorithms and, hence, shows lack of inter-view consistency. This inconsistency affects the quality of view synthesis negatively. This paper enhances the inter-view consistency of multiview depth imagery by using a variational Bayesian inference framework. First, our approach classifies the color information in the multiview color imagery. Second, using the resulting color clusters, we classify the corresponding depth values in the multiview depth imagery. Each clustered depth image is subject to further subclustering. Finally, the resulting mean of the sub-clusters is used to enhance the depth imagery at multiple viewpoints. Experiments show that our approach improves the quality of virtual views by up to 0.25 dB.

  • 13.
    Rana, Pravin Kumar
    et al.
    KTH, School of Electrical Engineering (EES), Communication Theory. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Taghia, Jalil
    KTH, School of Electrical Engineering (EES), Communication Theory. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Ma, Zhanyu
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Communication Theory. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Probabilistic Multiview Depth Image Enhancement Using Variational Inference, 2015. In: IEEE Journal on Selected Topics in Signal Processing, ISSN 1932-4553, E-ISSN 1941-0484, Vol. 9, no. 3, p. 435-448. Article in journal (Refereed)
    Abstract [en]

    An inference-based multiview depth image enhancement algorithm is introduced and investigated in this paper. Multiview depth imagery plays a pivotal role in free-viewpoint television. This technology requires high-quality virtual view synthesis to enable viewers to move freely in a dynamic real world scene. Depth imagery of different viewpoints is used to synthesize an arbitrary number of novel views. Usually, the depth imagery is estimated individually by stereo-matching algorithms and, hence, shows inter-view inconsistency. This inconsistency affects the quality of view synthesis negatively. This paper enhances the multiview depth imagery at multiple viewpoints by probabilistic weighting of each depth pixel. First, our approach classifies the color pixels in the multiview color imagery. Second, using the resulting color clusters, we classify the corresponding depth values in the multiview depth imagery. Each clustered depth image is subject to further subclustering. Clustering based on generative models is used for assigning probabilistic weights to each depth pixel. Finally, these probabilistic weights are used to enhance the depth imagery at multiple viewpoints. Experiments show that our approach consistently improves the quality of virtual views by 0.2 dB to 1.6 dB, depending on the quality of the input multiview depth imagery.
