Bayesian learning of probability density functions: A Markov chain Monte Carlo approach
2012 (English). In: Decision and Control (CDC), 2012 IEEE 51st Annual Conference on, IEEE, 2012, pp. 1512-1517. Conference paper (refereed)
The paper considers the problem of reconstructing a probability density function from a finite set of samples independently drawn from it. We cast the problem in a Bayesian setting where the unknown density is modeled via a nonlinear transformation of a Bayesian prior placed on a Reproducing Kernel Hilbert Space. Learning the unknown density function is then formulated as a minimum variance estimation problem. Since this requires the solution of analytically intractable integrals, we propose a novel algorithm based on the Markov chain Monte Carlo framework. Simulations corroborate the effectiveness of the new approach.
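The approach described in the abstract can be illustrated with a minimal sketch. All modeling choices below are assumptions for illustration, not the paper's exact construction: the unknown log-density is represented by a kernel expansion with a Gaussian prior on the coefficients, the density is recovered through the nonlinear map p(x) ∝ exp(f(x)), and the minimum variance (posterior mean) estimate is approximated with random-walk Metropolis-Hastings.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(x, centers, width=0.5):
    # Gaussian RBF evaluations k(x, c) for every (x, center) pair
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)

def log_density(c, grid, centers):
    # f(x) = sum_j c_j k(x, x_j); normalize on the grid so p integrates to 1
    f = kernel(grid, centers) @ c
    dx = grid[1] - grid[0]
    logZ = np.log(np.sum(np.exp(f)) * dx)
    return f - logZ

def log_posterior(c, samples, grid, centers, prior_var=1.0):
    # Gaussian prior on the coefficients plus the data log-likelihood,
    # with the log-density interpolated at the observed sample locations
    lp = -0.5 * np.sum(c ** 2) / prior_var
    logp = log_density(c, grid, centers)
    ll = np.sum(np.interp(samples, grid, logp))
    return lp + ll

# data independently drawn from an (unknown to the estimator) standard normal
samples = rng.normal(0.0, 1.0, size=200)
grid = np.linspace(-4.0, 4.0, 200)
centers = np.linspace(-4.0, 4.0, 15)

# random-walk Metropolis-Hastings over the kernel coefficients
c = np.zeros(len(centers))
curr = log_posterior(c, samples, grid, centers)
chain = []
for it in range(2000):
    prop = c + 0.05 * rng.standard_normal(len(centers))
    cand = log_posterior(prop, samples, grid, centers)
    if np.log(rng.uniform()) < cand - curr:  # accept/reject step
        c, curr = prop, cand
    if it >= 1000:  # keep post-burn-in densities
        chain.append(np.exp(log_density(c, grid, centers)))

# minimum variance estimate: average the sampled densities
p_hat = np.mean(chain, axis=0)
```

The posterior mean is computed by averaging the normalized densities visited by the chain, which is how MCMC sidesteps the analytically intractable integrals mentioned in the abstract.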
Place, publisher, year, edition, pages
IEEE, 2012, pp. 1512-1517.
Series: Proceedings of the IEEE Conference on Decision and Control, ISSN 0191-2216
Keywords: Metropolis-Hastings algorithm, regularization parameter, Reproducing Kernel Hilbert Spaces, stochastic processes, stochastic regularization
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-119170
DOI: 10.1109/CDC.2012.6426785
Scopus ID: 2-s2.0-84874274846
OAI: oai:DiVA.org:kth-119170
DiVA: diva2:610327
Conference: 51st IEEE Conference on Decision and Control, CDC 2012, 10-13 December 2012, Maui, HI
Funder: EU, Seventh Framework Programme (FP7), FP7/2007-2013
QC 20130311. Available from: 2013-03-11. Created: 2013-03-08. Last updated: 2013-03-11. Bibliographically approved.