kth.se Publications

Publications (10 of 15)
Sundin, M., Venkitaraman, A., Jansson, M. & Chatterjee, S. (2017). A Connectedness Constraint for Learning Sparse Graphs. In: 2017 25TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO): . Paper presented at 25th European Signal Processing Conference (EUSIPCO), AUG 28-SEP 02, 2017, GREECE (pp. 151-155). IEEE
A Connectedness Constraint for Learning Sparse Graphs
2017 (English) In: 2017 25TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), IEEE, 2017, p. 151-155. Conference paper, Published paper (Refereed)
Abstract [en]

Graphs are naturally sparse objects that are used to study many problems involving networks, for example, distributed learning and graph signal processing. In some cases, the graph is not given, but must be learned from the problem and available data. Often it is desirable to learn sparse graphs. However, making a graph highly sparse can split the graph into several disconnected components, leading to several separate networks. The main difficulty is that connectedness is often treated as a combinatorial property, making it hard to enforce in e.g. convex optimization problems. In this article, we show how connectedness of undirected graphs can be formulated as an analytical property and can be enforced as a convex constraint. We especially show how the constraint relates to the distributed consensus problem and graph Laplacian learning. Using simulated and real data, we perform experiments to learn sparse and connected graphs from data.
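The analytical fact behind the constraint can be checked numerically: an undirected graph is connected if and only if the second-smallest eigenvalue of its Laplacian (the algebraic connectivity) is strictly positive. A minimal numpy sketch of that characterization (an illustration only, not the paper's convex optimization formulation):

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A of a symmetric adjacency matrix."""
    return np.diag(A.sum(axis=1)) - A

def is_connected(A, tol=1e-9):
    """Connected iff the second-smallest Laplacian eigenvalue
    (the algebraic connectivity) is positive."""
    eigvals = np.linalg.eigvalsh(laplacian(A))  # sorted ascending
    return bool(eigvals[1] > tol)

# Path graph 0-1-2 (connected) vs. the same graph with edge 1-2 removed.
P = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
Q = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
print(is_connected(P), is_connected(Q))  # True False
```

Because positivity of this eigenvalue is an analytical (spectral) condition rather than a combinatorial one, it can be turned into a convex constraint, which is the route the paper takes.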

Place, publisher, year, edition, pages
IEEE, 2017
Series
European Signal Processing Conference, ISSN 2076-1465
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-226274 (URN), 10.23919/EUSIPCO.2017.8081187 (DOI), 000426986000031 (ISI), 2-s2.0-85041483337 (Scopus ID), 978-0-9928-6267-1 (ISBN)
Conference
25th European Signal Processing Conference (EUSIPCO), AUG 28-SEP 02, 2017, GREECE
Note

QC 20180419

Available from: 2018-04-19. Created: 2018-04-19. Last updated: 2024-03-18. Bibliographically approved.
Li, K., Sundin, M., Rojas, C., Chatterjee, S. & Jansson, M. (2016). Alternating strategies with internal ADMM for low-rank matrix reconstruction. Signal Processing, 121, 153-159
Alternating strategies with internal ADMM for low-rank matrix reconstruction
2016 (English) In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 121, p. 153-159. Article in journal (Refereed) Published
Abstract [en]

This paper focuses on the problem of reconstructing low-rank matrices from underdetermined measurements using alternating optimization strategies. We combine an alternating least-squares based estimation strategy with ideas from the alternating direction method of multipliers (ADMM) to recover low-rank matrices with linearly parameterized structures, such as Hankel matrices. ADMM helps to improve the estimate in each iteration because it incorporates information about the direction of estimates obtained in previous iterations. We show that merging these two alternating strategies leads to better performance and lower computation time than the existing alternating least squares (ALS) strategy. The improved performance is verified via numerical simulations with varying sampling rates and in real applications.
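For reference, the plain ALS baseline that the paper improves on can be sketched in a few lines of numpy; the ADMM refinement step is omitted, and the fully observed, exactly low-rank toy matrix is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: a fully observed, exactly rank-r matrix.
m, n, r = 20, 15, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((n, r)).T

# Plain ALS: alternately solve the two linear least-squares problems
# for the factors of M ~ U @ V.T.
V = rng.standard_normal((n, r))
for _ in range(10):
    U = M @ V @ np.linalg.pinv(V.T @ V)    # fix V, solve for U
    V = M.T @ U @ np.linalg.pinv(U.T @ U)  # fix U, solve for V

rel_err = np.linalg.norm(M - U @ V.T) / np.linalg.norm(M)
print(rel_err)  # essentially zero in this easy fully observed case
```

With undersampled measurements and structural constraints, the subproblems are harder and the iterates can stall, which is where the paper's internal ADMM step comes in.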

Place, publisher, year, edition, pages
Elsevier, 2016
Keywords
ADMM, Alternating strategies, Least squares, Low-rank matrix reconstruction
National Category
Control Engineering, Signal Processing
Identifiers
urn:nbn:se:kth:diva-180899 (URN), 10.1016/j.sigpro.2015.11.002 (DOI), 000369193600013 (ISI), 2-s2.0-84949761064 (Scopus ID)
Funder
Swedish Research Council, 621-2011-5847
Note

QC 20160202. QC 20160304

Available from: 2016-02-02. Created: 2016-01-25. Last updated: 2024-03-18. Bibliographically approved.
Sundin, M., Chatterjee, S. & Jansson, M. (2016). Bayesian Cramer-Rao bounds for factorized model based low rank matrix reconstruction. In: 2016 24TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO): . Paper presented at 24th European Signal Processing Conference (EUSIPCO), AUG 28-SEP 02, 2016, Budapest, HUNGARY (pp. 1227-1231). Institute of Electrical and Electronics Engineers (IEEE)
Bayesian Cramer-Rao bounds for factorized model based low rank matrix reconstruction
2016 (English) In: 2016 24TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 1227-1231. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

The low-rank matrix reconstruction (LRMR) problem considers estimation (or reconstruction) of an underlying low-rank matrix from under-sampled linear measurements. A low-rank matrix can be represented using a factorized model. In this article, we derive Bayesian Cramer-Rao bounds for LRMR where a factorized model is used. We first show a general informative bound, and then derive several Bayesian Cramer-Rao bounds for different scenarios. We always consider the low-rank matrix to be reconstructed as a random matrix, but treat its model hyper-parameters in three cases: deterministic known, deterministic unknown, and random. Finally, we compare the bounds with existing practical algorithms through numerical simulations.
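For intuition about what a Bayesian Cramer-Rao bound asserts, a scalar toy model (not the paper's factorized low-rank setting) already shows the bound being attained by the MMSE estimator:

```python
import numpy as np

# Toy Bayesian CRB illustration: estimate x ~ N(0, tau^2) from
# y = x + n with n ~ N(0, sigma^2).  For this linear-Gaussian model
# the Bayesian CRB equals (1/tau^2 + 1/sigma^2)^{-1}, and the MMSE
# estimator x_hat = y * tau^2 / (tau^2 + sigma^2) attains it.
rng = np.random.default_rng(1)
tau, sigma, N = 2.0, 1.0, 200_000
x = tau * rng.standard_normal(N)
y = x + sigma * rng.standard_normal(N)
x_hat = y * tau**2 / (tau**2 + sigma**2)

bcrb = 1.0 / (1.0 / tau**2 + 1.0 / sigma**2)  # = 0.8 here
mse = np.mean((x - x_hat) ** 2)
print(mse, bcrb)  # empirical MSE matches the bound
```

In the paper's setting the prior is over a factorized matrix and the bound generally need not be tight, which is exactly what the comparison with practical algorithms probes.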

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Keywords
Low-rank matrices, matrix completion, Bayesian estimation, Cramer-Rao bounds
National Category
Signal Processing
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-190111 (URN), 10.1109/EUSIPCO.2016.7760444 (DOI), 000391891900234 (ISI), 2-s2.0-85005949749 (Scopus ID)
Conference
24th European Signal Processing Conference (EUSIPCO), AUG 28-SEP 02, 2016, Budapest, HUNGARY
Note

QC 20160810

Available from: 2016-08-09. Created: 2016-08-09. Last updated: 2024-03-15. Bibliographically approved.
Sundin, M. (2016). Bayesian methods for sparse and low-rank matrix problems. (Doctoral dissertation). KTH Royal Institute of Technology
Bayesian methods for sparse and low-rank matrix problems
2016 (English) Doctoral thesis, monograph (Other academic)
Abstract [en]

Many scientific and engineering problems require us to process measurements and data in order to extract information. Since we base decisions on information, it is important to design accurate and efficient processing algorithms. This is often done by modeling the signal of interest and the noise in the problem. One type of modeling is Compressed Sensing, where the signal has a sparse or low-rank representation. In this thesis we study different approaches to designing algorithms for sparse and low-rank problems.

Greedy methods are fast methods for sparse problems which iteratively detect and estimate the non-zero components. By modeling the detection problem as an array processing problem and a Bayesian filtering problem, we improve the detection accuracy. Bayesian methods approximate the sparsity by probability distributions which are iteratively modified. We show one approach to making the Relevance Vector Machine, a Bayesian method, robust against sparse noise.

Bayesian methods for low-rank matrix estimation typically use probability distributions which depend only on the singular values, or a factorization approach. Here we introduce a new method, the Relevance Singular Vector Machine, which uses precision matrices with prior distributions to promote low rank. The method is also applied to the robust Principal Component Analysis (PCA) problem, where a low-rank matrix is contaminated by sparse noise.

In many estimation problems, there exist theoretical lower bounds on how well an algorithm can perform. When the performance of an algorithm matches a lower bound, we know that the algorithm is optimal and that the lower bound is tight. When no algorithm matches a lower bound, there is room for better algorithms and/or tighter bounds. In this thesis we derive lower bounds for three different Bayesian low-rank matrix models.

In some problems, only the amplitudes of the measurements are recorded. Despite being non-linear, some such problems can be transformed into linear problems. Earlier works have shown how sparsity can be utilized in this setting; here we show how low rank can be used.

In some situations, the number of measurements and/or the number of parameters is very large. Such Big Data problems require us to design new algorithms. We show how the Basis Pursuit algorithm can be modified for problems with a very large number of parameters.

Abstract [sv]

Many scientific and engineering problems require us to process measurements and data in order to extract information. Since we base decisions on information, it is important to design accurate and efficient processing algorithms. This is often done by modeling the signal we seek and the noise in the problem. One type of modeling is Compressed Sensing, where the signal has a sparse or low-rank representation. In this thesis we study different ways of designing algorithms for sparse and low-rank problems.

Greedy methods are fast methods for sparse problems which iteratively detect and estimate the non-zero components. By modeling the detection problem as an array processing problem and a Bayesian filtering problem, we improve the performance of the algorithms. Bayesian methods approximate the sparsity with probability distributions that are iteratively modified. We show one way of making the Bayesian method Relevance Vector Machine robust against sparse noise.

Bayesian methods for estimating low-rank matrices typically use probability distributions that depend only on the singular values of the matrix, or a factorization approach. We introduce a new method, the Relevance Singular Vector Machine, which uses precision matrices with prior distributions to promote low rank. The method is also applied to robust Principal Component Analysis (PCA), where a low-rank matrix has been corrupted by sparse noise.

In many estimation problems there exist theoretical lower bounds on how well an algorithm can perform. When an algorithm meets a lower bound, we know that the algorithm is optimal and that the lower bound is the best possible. When no algorithm meets a lower bound, there is room for better algorithms and/or better lower bounds. In this thesis we derive lower bounds for three different Bayesian low-rank models.

In some problems, only the amplitudes of the measurements are recorded. Some problems can be transformed into linear problems, even though they are non-linear. Earlier works have shown how sparsity can be used in such problems; here we show how low rank can be used.

In some situations the number of measurements and/or the number of parameters is very large. Such Big Data problems require us to design new algorithms. We show how the Basis Pursuit algorithm can be modified when the number of parameters is very large.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2016. p. 204
Series
TRITA-EE, ISSN 1653-5146 ; 2016:087
National Category
Signal Processing
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-191139 (URN), 978-91-7729-044-5 (ISBN)
Public defence
2016-09-14, F3, Lindstedtsvägen 26, Stockholm, 10:15 (English)
Note

QC 20160825

Available from: 2016-08-25. Created: 2016-08-24. Last updated: 2022-06-22. Bibliographically approved.
Sundin, M., Rojas, C. R., Jansson, M. & Chatterjee, S. (2016). Relevance Singular Vector Machine for Low-Rank Matrix Reconstruction. IEEE Transactions on Signal Processing, 64(20), 5327-5339
Relevance Singular Vector Machine for Low-Rank Matrix Reconstruction
2016 (English) In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 64, no 20, p. 5327-5339. Article in journal (Refereed) Published
Abstract [en]

We develop Bayesian learning methods for low-rank matrix reconstruction and completion from linear measurements. For under-determined systems, the developed methods reconstruct low-rank matrices when neither the rank nor the noise power is known a priori. We derive relations between the proposed Bayesian models and low-rank promoting penalty functions. The relations justify the use of Kronecker structured covariance matrices in a Gaussian-based prior. In the methods, we use expectation maximization to learn the model parameters. The performance of the methods is evaluated through extensive numerical simulations on synthetic and real data.
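The Kronecker-structured covariance mentioned above can be illustrated with a small Monte Carlo check; the matrices A and B below are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[2.0, 0.5], [0.5, 1.0]])   # illustrative row covariance
B = np.array([[1.0, 0.3], [0.3, 2.0]])   # illustrative column covariance

def msqrt(S):
    """Symmetric matrix square root via the eigendecomposition."""
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

# If X = A^{1/2} G B^{1/2} with G i.i.d. N(0,1), the covariance of the
# column-major vec(X) is the Kronecker product B (x) A.
N = 200_000
G = rng.standard_normal((N, 2, 2))
X = np.einsum('ij,njk,kl->nil', msqrt(A), G, msqrt(B))
vecs = X.transpose(0, 2, 1).reshape(N, 4)  # column-major vec of each X

emp_cov = vecs.T @ vecs / N
err = np.max(np.abs(emp_cov - np.kron(B, A)))
print(err)  # small sampling error only
```

This matrix-normal structure is what lets a Gaussian prior couple both the column space and the row space of the matrix under estimation.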

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Keywords
Machine learning, Bayes methods, compressed sensing
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-193155 (URN), 10.1109/TSP.2016.2597121 (DOI), 000382677100011 (ISI), 2-s2.0-84985021194 (Scopus ID)
Note

QC 20161017

Available from: 2016-10-17. Created: 2016-09-30. Last updated: 2024-03-15. Bibliographically approved.
Sundin, M., Chatterjee, S. & Jansson, M. (2015). Bayesian learning for robust principal component analysis. In: 2015 23rd European Signal Processing Conference, EUSIPCO 2015: . Paper presented at 23rd European Signal Processing Conference, EUSIPCO 2015; Nice Congress Center, Nice, France; 31 August 2015 through 4 September 2015 (pp. 2361-2365). IEEE
Bayesian learning for robust principal component analysis
2015 (English) In: 2015 23rd European Signal Processing Conference, EUSIPCO 2015, IEEE, 2015, p. 2361-2365. Conference paper, Published paper (Refereed)
Abstract [en]

We develop a Bayesian learning method for robust principal component analysis where the main task is to estimate a low-rank matrix from noisy and outlier contaminated measurements. To promote low-rank, we use a structured Gaussian prior that induces correlations among column vectors as well as row vectors of the matrix under estimation. In our method, the noise and outliers are modeled by a combined noise model. The method is evaluated and compared to other methods using synthetic data as well as data from the MovieLens 100K dataset. Comparisons show that the method empirically provides a significant performance improvement over existing methods.
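Even a crude alternating heuristic (hard thresholding plus rank truncation, not the paper's Bayesian method) conveys the low-rank-plus-sparse decomposition the abstract describes; all sizes and thresholds below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 40, 40, 2
L0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # low rank
S0 = np.zeros((m, n))
mask = rng.random((m, n)) < 0.05                 # 5% outlier positions
S0[mask] = 10.0 * rng.choice([-1.0, 1.0], size=int(mask.sum()))
M = L0 + S0

# Alternate: move large residuals into S, rank-r project the rest into L.
L = np.zeros((m, n))
for _ in range(20):
    R = M - L
    S = np.where(np.abs(R) > 4.0, R, 0.0)        # keep outlier-sized entries
    U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = (U[:, :r] * s[:r]) @ Vt[:r]              # best rank-r fit

rel_err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
print(rel_err)  # well below 1: the outliers are largely separated out
```

A Bayesian treatment replaces the hand-tuned rank and threshold with priors and a combined noise model, which is the point of the paper.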

Place, publisher, year, edition, pages
IEEE, 2015
Keywords
Robust principal component analysis, matrix completion, Bayesian learning
National Category
Signal Processing
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-185726 (URN), 10.1109/EUSIPCO.2015.7362807 (DOI), 000377943800474 (ISI), 2-s2.0-84963956159 (Scopus ID)
Conference
23rd European Signal Processing Conference, EUSIPCO 2015; Nice Congress Center, Nice, France; 31 August 2015 through 4 September 2015
Funder
Swedish Research Council, 621-2011-5847
Note

QC 20160426

Available from: 2016-04-25. Created: 2016-04-25. Last updated: 2024-03-18. Bibliographically approved.
Casamitjana, A., Sundin, M., Ghosh, P. & Chatterjee, S. (2015). Bayesian learning for time-varying linear prediction of speech. In: 2015 23rd European Signal Processing Conference, EUSIPCO 2015: . Paper presented at 23rd European Signal Processing Conference, EUSIPCO 2015, 31 August 2015 through 4 September 2015 (pp. 325-329). Institute of Electrical and Electronics Engineers (IEEE)
Bayesian learning for time-varying linear prediction of speech
2015 (English) In: 2015 23rd European Signal Processing Conference, EUSIPCO 2015, Institute of Electrical and Electronics Engineers (IEEE), 2015, p. 325-329. Conference paper, Published paper (Refereed)
Abstract [en]

We develop Bayesian learning algorithms for estimation of time-varying linear prediction (TVLP) coefficients of speech. Estimation of TVLP coefficients is a naturally underdetermined problem. We consider sparsity and subspace based approaches for dealing with the corresponding underdetermined system. Bayesian learning algorithms are developed to achieve better estimation performance. The expectation-maximization (EM) framework is employed to develop the Bayesian learning algorithms, where we use a combined prior to model a driving noise (glottal signal) that has both sparse and dense statistical properties. The efficiency of the Bayesian learning algorithms is shown for synthetic signals using a spectral distortion measure, and through formant tracking of real speech signals.
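When the TVLP coefficients are expanded in a known basis, the model becomes linear in the expansion coefficients; a toy least-squares fit (a hypothetical first-order setup, not the paper's Bayesian EM algorithm) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 5000

def a_true(n):
    # Prediction coefficient drifting linearly over time (illustrative).
    return 0.5 + 0.3 * (n / N)

# First-order time-varying AR signal driven by white noise.
x = np.zeros(N)
for n in range(1, N):
    x[n] = a_true(n) * x[n - 1] + 0.1 * rng.standard_normal()

# TVLP model: a(n) = c0 * 1 + c1 * (n/N), linear in (c0, c1),
# so ordinary least squares recovers the expansion coefficients.
n_idx = np.arange(1, N)
Phi = np.column_stack([x[n_idx - 1], x[n_idx - 1] * (n_idx / N)])
c, *_ = np.linalg.lstsq(Phi, x[n_idx], rcond=None)
print(c)  # close to [0.5, 0.3]
```

With higher prediction orders and richer bases the system becomes underdetermined, which motivates the sparsity and subspace priors in the paper.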

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2015
Keywords
Bayesian learning, expectation-maximization, sparsity, time-varying linear prediction, algorithms, forecasting, maximum principle, signal processing, estimation performance, linear prediction, spectral distortions, statistical properties, learning algorithms
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-186794 (URN), 10.1109/EUSIPCO.2015.7362398 (DOI), 000377943800066 (ISI), 2-s2.0-84963983947 (Scopus ID), 9780992862633 (ISBN)
Conference
23rd European Signal Processing Conference, EUSIPCO 2015, 31 August 2015 through 4 September 2015
Note

QC 20160516

Available from: 2016-05-16. Created: 2016-05-13. Last updated: 2024-03-18. Bibliographically approved.
Sundin, M., Chatterjee, S. & Jansson, M. (2015). Greedy minimization of l1-norm with high empirical success. In: : . Paper presented at 40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2015.
Greedy minimization of l1-norm with high empirical success
2015 (English) Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

We develop a greedy algorithm for the basis-pursuit problem. The algorithm is empirically found to provide the same solution as convex optimization based solvers. The method uses only a subset of the optimization variables in each iteration and iterates until an optimality condition is satisfied. In simulations, the algorithm converges faster than standard methods when the number of measurements is small and the number of variables large.
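The underlying basis-pursuit problem (minimize ||x||_1 subject to Ax = y) can be posed as a linear program. The sketch below uses SciPy's generic LP solver as a baseline, not the paper's greedy method, and the problem sizes are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
m, n, k = 25, 50, 3
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = [1.5, -2.0, 3.0]
y = A @ x0                                   # noiseless measurements

# Basis pursuit as an LP: write x = u - v with u, v >= 0 and
# minimize sum(u) + sum(v) subject to A(u - v) = y.
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print(np.sum(np.abs(x_hat)), np.sum(np.abs(x0)))  # l1 norm vs. planted x0
```

Since the planted x0 is feasible, the LP optimum never exceeds ||x0||_1, and in regimes like this one exact recovery of x0 is typical; the paper's contribution is reaching that solution faster with a greedy scheme.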

Keywords
Convex optimization, basis-pursuit, greedy algorithms.
National Category
Signal Processing
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-163700 (URN), 10.1109/ICASSP.2015.7178685 (DOI), 000427402903186 (ISI), 2-s2.0-84946038019 (Scopus ID)
Conference
40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2015
Note

Presented at the 40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2015 in Brisbane, Australia. QC 20150415

Available from: 2015-04-10. Created: 2015-04-10. Last updated: 2024-03-18. Bibliographically approved.
Sundin, M., Chatterjee, S. & Jansson, M. (2014). COMBINED MODELING OF SPARSE AND DENSE NOISE IMPROVES BAYESIAN RVM. In: 2014 PROCEEDINGS OF THE 22ND EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO): . Paper presented at 22nd European Signal Processing Conference (EUSIPCO), SEP 01-05, 2014, Lisbon, PORTUGAL (pp. 1841-1845). IEEE
COMBINED MODELING OF SPARSE AND DENSE NOISE IMPROVES BAYESIAN RVM
2014 (English) In: 2014 PROCEEDINGS OF THE 22ND EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), IEEE, 2014, p. 1841-1845. Conference paper, Published paper (Refereed)
Abstract [en]

Using a Bayesian approach, we consider the problem of recovering sparse signals under additive sparse and dense noise. Typically, sparse noise models outliers, impulse bursts or data loss. To handle sparse noise, existing methods simultaneously estimate the sparse noise and the sparse signal of interest. We construct a Relevance Vector Machine (RVM) that estimates the sparse signal without estimating the sparse noise. In the RVM, sparse noise and ever-present dense noise are treated through a combined noise model. Through simulations, we show the efficiency of the new RVM for three applications: kernel regression, housing price prediction and compressed sensing.
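The benefit of modeling sparse outliers separately from dense noise can be seen even with a much simpler robust estimator; the IRLS/Huber fit below is a stand-in illustration, not the paper's RVM, and all data and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 200
t = np.linspace(0.0, 1.0, N)
Phi = np.column_stack([np.ones(N), t])            # fit the line w0 + w1*t
w_true = np.array([1.0, 2.0])
y = Phi @ w_true + 0.05 * rng.standard_normal(N)  # dense noise
y[rng.choice(N, size=10, replace=False)] += 8.0   # sparse outliers

w_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]     # plain least squares

# IRLS for the Huber loss: downweight large (outlier-like) residuals.
w = w_ls.copy()
for _ in range(30):
    resid = y - Phi @ w
    wts = np.minimum(1.0, 0.1 / np.maximum(np.abs(resid), 1e-12))
    Wp = Phi * wts[:, None]
    w = np.linalg.solve(Phi.T @ Wp, Wp.T @ y)     # weighted normal eqs.

print(w_ls, w)  # the robust fit is much closer to [1, 2]
```

The RVM replaces the fixed robust loss with a learned combined noise model, so no outlier threshold has to be tuned by hand.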

Place, publisher, year, edition, pages
IEEE, 2014
Series
European Signal Processing Conference, ISSN 2076-1465
Keywords
Robust regression, Bayesian learning, Relevance vector machine, Compressed sensing
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-243801 (URN), 000393420200370 (ISI), 2-s2.0-84911921506 (Scopus ID), 978-0-9928626-1-9 (ISBN)
Conference
22nd European Signal Processing Conference (EUSIPCO), SEP 01-05, 2014, Lisbon, PORTUGAL
Note

QC 20190206

Available from: 2019-02-06. Created: 2019-02-06. Last updated: 2022-06-26. Bibliographically approved.
Sundin, M., Chatterjee, S. & Jansson, M. (2014). Combined Modelling of Sparse and Dense noise improves Bayesian RVM. In: Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), 2014: . Paper presented at The 22nd European Signal Processing Conference (EUSIPCO), 1-5 Sep. 2014, Lisbon (pp. 1841-1845). IEEE conference proceedings
Combined Modelling of Sparse and Dense noise improves Bayesian RVM
2014 (English) In: Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), 2014, IEEE conference proceedings, 2014, p. 1841-1845. Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

Using a Bayesian approach, we consider the problem of recovering sparse signals under additive sparse and dense noise. Typically, sparse noise models outliers, impulse bursts or data loss. To handle sparse noise, existing methods simultaneously estimate the sparse noise and the sparse signal of interest. We construct a Relevance Vector Machine (RVM) that estimates the sparse signal without estimating the sparse noise. In the RVM, sparse noise and ever-present dense noise are treated through a combined noise model. Through simulations, we show the efficiency of the new RVM for three applications: kernel regression, housing price prediction and compressed sensing.

Place, publisher, year, edition, pages
IEEE conference proceedings, 2014
Keywords
Robust regression, Bayesian learning, Relevance vector machine, Compressed sensing
National Category
Signal Processing
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-165603 (URN)
Conference
The 22nd European Signal Processing Conference (EUSIPCO), 1-5 Sep. 2014, Lisbon
Note

QC 20150512

Available from: 2015-04-29. Created: 2015-04-29. Last updated: 2024-03-18. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-6992-5771