Bayesian structure learning in graphical models using sequential Monte Carlo
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics. ORCID iD: 0000-0003-0772-846X
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics. ORCID iD: 0000-0002-0633-5579
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics. ORCID iD: 0000-0002-6886-5436
(English) Manuscript (preprint) (Other academic)
Abstract [en]

In this paper we present a family of algorithms, the junction tree expanders, for expanding junction trees in the sense that the number of nodes in the underlying decomposable graph is increased by one. The family of junction tree expanders is equipped with a number of theoretical results, including a characterization stating that every junction tree, and consequently every decomposable graph, can be constructed by iteratively applying a junction tree expander. Further, an important feature of a stochastic implementation of a junction tree expander is the Markovian property inherent to the tree propagation dynamics. Using this property, a sequential Monte Carlo algorithm for approximating a probability distribution defined on the space of decomposable graphs is developed with the junction tree expander as a proposal kernel. Specifically, we apply the sequential Monte Carlo algorithm to structure learning in decomposable Gaussian graphical models, where the target distribution is a junction tree posterior distribution. In this setting, posterior parametric inference on the underlying decomposable graph is a direct by-product of the suggested methodology; working with the G-Wishart family of conjugate priors, we derive a closed-form expression for the Bayesian estimator of the precision matrix of Gaussian graphical models Markov with respect to a decomposable graph. The accuracy of the graph and parameter estimators is illustrated through a collection of numerical examples demonstrating the feasibility of the suggested approach in high-dimensional domains.
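The sampling scheme described above follows the generic sequential Monte Carlo pattern: particles are propagated through a proposal kernel (in the paper, a junction tree expander that grows the underlying graph by one node), reweighted against the target, and resampled. The sketch below illustrates only that generic skeleton; the `grow` kernel and `prefer_even` target are hypothetical stand-ins, not the paper's junction tree construction or posterior.

```python
import math
import random

def smc(n_particles, n_steps, propose, log_weight, seed=0):
    """Generic SMC skeleton: propagate, weight, resample.

    propose(state, rng) -> new state (stand-in for a junction tree expander)
    log_weight(state)   -> incremental log importance weight
    """
    rng = random.Random(seed)
    particles = [None] * n_particles            # empty initial states
    for _ in range(n_steps):
        # 1. propagate each particle through the proposal kernel
        particles = [propose(p, rng) for p in particles]
        # 2. compute normalized importance weights (log-sum-exp for stability)
        logw = [log_weight(p) for p in particles]
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        total = sum(w)
        probs = [x / total for x in w]
        # 3. multinomial resampling
        particles = rng.choices(particles, weights=probs, k=n_particles)
    return particles

# Toy example: states are integers grown by +1 or +2; the target
# prefers even values, so resampling concentrates mass on them.
def grow(state, rng):
    return (0 if state is None else state) + rng.choice([1, 2])

def prefer_even(state):
    return 0.0 if state % 2 == 0 else -2.0

result = smc(200, 10, grow, prefer_even)
```

After a few steps, most surviving particles carry even values, mirroring how particles whose graphs fit the posterior well survive resampling.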

Keyword [en]
Structure learning, Bayesian statistics, Gaussian graphical models
National Category
Probability Theory and Statistics
Research subject
Mathematics
Identifiers
URN: urn:nbn:se:kth:diva-180326
OAI: oai:DiVA.org:kth-180326
DiVA: diva2:892652
Note

QC 20160524

Available from: 2016-01-11. Created: 2016-01-11. Last updated: 2016-05-24. Bibliographically approved.
In thesis
1. Bayesian structure learning in graphical models
2016 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis consists of two papers studying structure learning in probabilistic graphical models, covering both undirected graphs and directed acyclic graphs (DAGs).

Paper A presents a novel family of graph-theoretical algorithms, called junction tree expanders, that incrementally construct junction trees for decomposable graphs. Owing to their Markovian property, the junction tree expanders are shown to be suitable as proposal kernels in a sequential Monte Carlo (SMC) sampling scheme for approximating a graph posterior distribution. A simulation study for Gaussian decomposable graphical models demonstrates the efficiency of the suggested unified approach for both structural and parametric Bayesian inference.

Paper B develops a novel prior distribution over DAGs with the ability to express prior knowledge in terms of graph layerings. In conjunction with the prior, a search-and-score algorithm based on the layering property of DAGs is developed for structure learning in Bayesian networks. A simulation study shows that the search-and-score algorithm, together with the prior, outperforms other priors when learning graphs with a clearly layered structure.
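Search-and-score structure learning in general alternates between proposing local graph moves and scoring the resulting DAG. The sketch below shows the plainest greedy variant, adding one edge at a time under an acyclicity check, with a hypothetical score function; it is a generic illustration, not Paper B's layering-based algorithm or prior.

```python
def is_acyclic(edges, n):
    """Kahn's algorithm: True if the directed graph on n nodes has no cycle."""
    indeg = [0] * n
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        indeg[v] += 1
        adj[u].append(v)
    stack = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == n

def hill_climb(n, score, max_iter=100):
    """Greedy search-and-score over DAGs: repeatedly add the single edge
    that most improves the score while keeping the graph acyclic."""
    edges = set()
    best = score(edges)
    for _ in range(max_iter):
        candidates = []
        for u in range(n):
            for v in range(n):
                if u != v and (u, v) not in edges:
                    trial = edges | {(u, v)}
                    if is_acyclic(trial, n):
                        candidates.append((score(trial), trial))
        if not candidates:
            break
        s, g = max(candidates, key=lambda t: t[0])
        if s <= best:           # no single edge improves the score: stop
            break
        best, edges = s, g
    return edges, best

# Toy score: reward edges of a hypothetical "true" DAG, penalize others.
true_edges = {(0, 1), (1, 2)}
score = lambda e: sum(1 if x in true_edges else -1 for x in e)
found, s = hill_climb(3, score)   # recovers {(0, 1), (1, 2)} with score 2
```

Real scores would be decomposable model fit criteria (e.g. a marginal likelihood times a structure prior), and practical searches also consider edge deletions and reversals; the greedy loop above only conveys the basic shape.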

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2016. viii, 19 p.
Series
TRITA-MAT-A, 2015:16
Keyword
Bayesian statistics, graphical models, Bayesian networks, Markov networks, structure learning
National Category
Probability Theory and Statistics
Research subject
Applied and Computational Mathematics
Identifiers
URN: urn:nbn:se:kth:diva-179852
ISBN: 978-91-7595-832-3
Presentation
2016-01-28, Room 3418, Department of Mathematics (Institutionen för matematik), Lindstedtsvägen 25, KTH Royal Institute of Technology, Stockholm, 14:00 (English)
Opponent
Supervisors
Note

QC 20160111

Available from: 2016-01-11. Created: 2016-01-04. Last updated: 2016-01-11. Bibliographically approved.

Open Access in DiVA

fulltext (876 kB), 92 downloads
File information
File name: FULLTEXT01.pdf
File size: 876 kB
Checksum: SHA-512
7c7863c8b359740349eefe27b17e33757fe6089e7a02a08236c21f8ef534890151776b97b8cbc434c3242843f320253788ca259a7d191b656277d7e8cd4c7246
Type: fulltext. Mimetype: application/pdf

Authority records

Olsson, Jimmy; Pavlenko, Tatjana; Rios, Felix
