Publications (4 of 4)
Sundin, M., Venkitaraman, A., Jansson, M. & Chatterjee, S. (2017). A Connectedness Constraint for Learning Sparse Graphs. In: 2017 25TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO): . Paper presented at 25th European Signal Processing Conference (EUSIPCO), AUG 28-SEP 02, 2017, GREECE (pp. 151-155). IEEE
A Connectedness Constraint for Learning Sparse Graphs
2017 (English). In: 2017 25TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), IEEE, 2017, p. 151-155. Conference paper, Published paper (Refereed)
Abstract [en]

Graphs are naturally sparse objects that are used to study many problems involving networks, for example, distributed learning and graph signal processing. In some cases, the graph is not given but must be learned from the problem and available data. Often it is desirable to learn sparse graphs. However, making a graph highly sparse can split it into several disconnected components, leading to several separate networks. The main difficulty is that connectedness is often treated as a combinatorial property, making it hard to enforce in, e.g., convex optimization problems. In this article, we show how connectedness of undirected graphs can be formulated as an analytical property and enforced as a convex constraint. In particular, we show how the constraint relates to the distributed consensus problem and graph Laplacian learning. Using simulated and real data, we perform experiments to learn sparse and connected graphs from data.
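As background, a minimal sketch of this idea in CVXPY (not necessarily the paper's exact formulation): for a valid Laplacian L with L1 = 0, the convex constraint L + (1/n)11^T >= eps*I is equivalent to lambda_2(L) >= eps, i.e. a connected graph. The data, and the values of alpha and eps, are illustrative assumptions.

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 10                                   # number of graph nodes
X = rng.standard_normal((n, 200))        # toy data: n nodes, 200 samples
S = X @ X.T / X.shape[1]                 # empirical covariance of the data

L = cp.Variable((n, n), symmetric=True)  # graph Laplacian to learn
J = np.ones((n, n)) / n                  # rank-one term aligned with the all-ones vector
alpha, eps = 0.1, 1e-2                   # sparsity weight, connectivity margin (assumed values)

constraints = [
    L @ np.ones(n) == 0,                 # Laplacian rows sum to zero
    L - cp.diag(cp.diag(L)) <= 0,        # non-positive off-diagonal entries
    L + J >> eps * np.eye(n),            # lambda_2(L) >= eps, i.e. the graph stays connected
]
# Gaussian log-likelihood fit plus an l1 penalty that promotes a sparse edge set
obj = cp.Minimize(cp.trace(L @ S) - cp.log_det(L + J)
                  + alpha * cp.sum(cp.abs(L - cp.diag(cp.diag(L)))))
cp.Problem(obj, constraints).solve()
print(np.round(L.value, 3))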

Place, publisher, year, edition, pages
IEEE, 2017
Series
European Signal Processing Conference, ISSN 2076-1465
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-226274 (URN), 000426986000031, 2-s2.0-85041483337 (Scopus ID), 978-0-9928-6267-1 (ISBN)
Conference
25th European Signal Processing Conference (EUSIPCO), AUG 28-SEP 02, 2017, GREECE
Note

QC 20180419

Available from: 2018-04-19. Created: 2018-04-19. Last updated: 2018-04-19. Bibliographically approved.
Sundin, M., Chatterjee, S. & Jansson, M. (2016). Bayesian Cramer-Rao bounds for factorized model based low rank matrix reconstruction. In: 2016 24TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO): . Paper presented at 24th European Signal Processing Conference (EUSIPCO), AUG 28-SEP 02, 2016, Budapest, HUNGARY (pp. 1227-1231). Institute of Electrical and Electronics Engineers (IEEE)
Bayesian Cramer-Rao bounds for factorized model based low rank matrix reconstruction
2016 (English). In: 2016 24TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 1227-1231. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

The low-rank matrix reconstruction (LRMR) problem considers estimation (or reconstruction) of an underlying low-rank matrix from under-sampled linear measurements. A low-rank matrix can be represented using a factorized model. In this article, we derive Bayesian Cramer-Rao bounds for LRMR where a factorized model is used. We first show a general informative bound, and then derive several Bayesian Cramer-Rao bounds for different scenarios. Throughout, the low-rank matrix to be reconstructed is treated as a random matrix, while its model hyper-parameters are considered for three cases: deterministic known, deterministic unknown, and random. Finally, we compare the bounds with existing practical algorithms through numerical simulations.
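For context, such derivations build on the generic Bayesian Cramer-Rao (Van Trees) inequality; the textbook form below is background, not the paper's factorized-model bounds themselves. For any estimator \hat{\theta}(y) of a random parameter \theta,

\[
\mathbb{E}\big[(\hat{\theta}-\theta)(\hat{\theta}-\theta)^{\top}\big] \succeq J_B^{-1},
\qquad
J_B = \mathbb{E}_{\theta,y}\big[-\nabla_{\theta}^{2}\log p(y\mid\theta)\big]
    + \mathbb{E}_{\theta}\big[-\nabla_{\theta}^{2}\log p(\theta)\big],
\]

where the first term measures information contributed by the data and the second information contributed by the prior.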

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Keywords
Low-rank matrices, matrix completion, Bayesian estimation, Cramer-Rao bounds.
National Category
Signal Processing
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-190111 (URN), 000391891900234, 2-s2.0-85005949749 (Scopus ID)
Conference
24th European Signal Processing Conference (EUSIPCO), AUG 28-SEP 02, 2016, Budapest, HUNGARY
Note

QC 20160810

Available from: 2016-08-09. Created: 2016-08-09. Last updated: 2017-02-13. Bibliographically approved.
Sundin, M. (2016). Bayesian methods for sparse and low-rank matrix problems. (Doctoral dissertation). KTH Royal Institute of Technology
Bayesian methods for sparse and low-rank matrix problems
2016 (English). Doctoral thesis, monograph (Other academic)
Abstract [en]

Many scientific and engineering problems require us to process measurements and data in order to extract information. Since we base decisions on information, it is important to design accurate and efficient processing algorithms. This is often done by modeling the signal of interest and the noise in the problem. One type of modeling is Compressed Sensing, where the signal has a sparse or low-rank representation. In this thesis we study different approaches to designing algorithms for sparse and low-rank problems.

Greedy methods are fast methods for sparse problems which iteratively detect and estimate the non-zero components. By modeling the detection problem as an array processing problem and a Bayesian filtering problem, we improve the detection accuracy. Bayesian methods approximate the sparsity by probability distributions which are iteratively modified. We show one approach to making the Relevance Vector Machine, a Bayesian method, robust against sparse noise.
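A minimal sketch of the generic detect-then-estimate loop that such greedy methods follow, here as textbook orthogonal matching pursuit rather than the improved detectors developed in the thesis:

import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A x by greedy support detection."""
    m, n = A.shape
    residual, support = y.copy(), []
    for _ in range(k):
        # Detection: pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Estimation: least squares restricted to the detected support.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x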

Bayesian methods for low-rank matrix estimation typically use probability distributions which depend only on the singular values, or a factorization approach. Here we introduce a new method, the Relevance Singular Vector Machine, which uses precision matrices with prior distributions to promote low rank. The method is also applied to the robust Principal Component Analysis (PCA) problem, where a low-rank matrix is contaminated by sparse noise.
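The robust PCA decomposition mentioned above can be written, for reference, as the classical convex "principal component pursuit" program; this baseline is not the thesis's Bayesian Relevance Singular Vector Machine, only the standard optimization formulation of the same low-rank-plus-sparse split. A sketch, with illustrative sizes:

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 30))  # low-rank part
idx = rng.choice(M.size, 40, replace=False)
M_corrupt = M.copy()
M_corrupt.flat[idx] += 10.0 * rng.standard_normal(40)            # sparse corruptions

L = cp.Variable(M.shape)
S = cp.Variable(M.shape)
lam = 1.0 / np.sqrt(max(M.shape))     # standard weight from the robust PCA literature
prob = cp.Problem(cp.Minimize(cp.norm(L, 'nuc') + lam * cp.sum(cp.abs(S))),
                  [L + S == M_corrupt])
prob.solve()                          # L.value ~ low-rank part, S.value ~ sparse noise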

In many estimation problems, there exist theoretical lower bounds on how well an algorithm can perform. When the performance of an algorithm matches a lower bound, we know that the algorithm has optimal performance and that the lower bound is tight. When no algorithm matches a lower bound, there is room for better algorithms and/or tighter bounds. In this thesis we derive lower bounds for three different Bayesian low-rank matrix models.

In some problems, only the amplitudes of the measurements are recorded. Despite being non-linear, some such problems can be transformed into linear problems. Earlier works have shown how sparsity can be utilized in this setting; here we show how low rank can be used.
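A standard instance of such a transformation, assuming a phase-retrieval-type problem (this example is background, not the thesis's specific construction), is lifting: with amplitude-only measurements |a_i^T x|,

\[
|a_i^{\top} x|^{2} \;=\; a_i^{\top} x x^{\top} a_i \;=\; \operatorname{trace}\big(a_i a_i^{\top} X\big),
\qquad X = x x^{\top} \succeq 0,\ \operatorname{rank}(X) = 1,
\]

so the squared amplitudes are linear functions of the lifted rank-one matrix X, and low-rank methods apply.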

In some situations, the number of measurements and/or the number of parameters is very large. Such Big Data problems require us to design new algorithms. We show how the Basis Pursuit algorithm can be modified for problems with a very large number of parameters.
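The thesis's specific modification is not reproduced here; as a point of reference, a first-order scheme such as ISTA solves the closely related l1-regularized least-squares (LASSO) relaxation using only matrix-vector products per iteration, which is what makes very large parameter counts tractable:

import numpy as np

def ista(A, y, lam, n_iter=500):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the smooth data-fit term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x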


Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2016. p. 204
Series
TRITA-EE, ISSN 1653-5146 ; 2016:087
National Category
Signal Processing
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-191139 (URN), 978-91-7729-044-5 (ISBN)
Public defence
2016-09-14, F3, Lindstedtsvägen 26, Stockholm, 10:15 (English)
Note

QC 20160825

Available from: 2016-08-25. Created: 2016-08-24. Last updated: 2016-08-25. Bibliographically approved.
Sundin, M., Chatterjee, S. & Jansson, M. (2014). COMBINED MODELING OF SPARSE AND DENSE NOISE IMPROVES BAYESIAN RVM. In: 2014 PROCEEDINGS OF THE 22ND EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO): . Paper presented at 22nd European Signal Processing Conference (EUSIPCO), SEP 01-05, 2014, Lisbon, PORTUGAL (pp. 1841-1845). IEEE
COMBINED MODELING OF SPARSE AND DENSE NOISE IMPROVES BAYESIAN RVM
2014 (English). In: 2014 PROCEEDINGS OF THE 22ND EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), IEEE, 2014, p. 1841-1845. Conference paper, Published paper (Refereed)
Abstract [en]

Using a Bayesian approach, we consider the problem of recovering sparse signals under additive sparse and dense noise. Typically, sparse noise models outliers, impulse bursts or data loss. To handle sparse noise, existing methods simultaneously estimate the sparse noise and the sparse signal of interest. For estimating the sparse signal without estimating the sparse noise, we construct a Relevance Vector Machine (RVM) in which sparse noise and ever-present dense noise are treated through a combined noise model. Through simulations, we show the efficiency of the new RVM for three applications: kernel regression, housing price prediction and compressed sensing.
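A toy instance of the measurement model described above, with illustrative sizes and noise levels (the abstract does not specify the combined model's exact parameterization, so the closing comment is one natural choice, not the paper's):

import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 256
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)   # sparse signal
e = np.zeros(m)
e[rng.choice(m, 5, replace=False)] = 10 * rng.standard_normal(5)  # sparse noise: outliers / bursts
w = 0.01 * rng.standard_normal(m)                                 # ever-present dense noise
y = A @ x + e + w
# A combined noise model treats e + w jointly, e.g. through a covariance of
# the form beta**-1 * I + diag(gamma), so x can be estimated by an RVM
# without explicitly estimating the sparse noise e.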

Place, publisher, year, edition, pages
IEEE, 2014
Series
European Signal Processing Conference, ISSN 2076-1465
Keywords
Robust regression, Bayesian learning, Relevance vector machine, Compressed sensing
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-243801 (URN), 000393420200370, 978-0-9928626-1-9 (ISBN)
Conference
22nd European Signal Processing Conference (EUSIPCO), SEP 01-05, 2014, Lisbon, PORTUGAL
Note

QC 20190206

Available from: 2019-02-06. Created: 2019-02-06. Last updated: 2019-08-21. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-6992-5771