Publications (10 of 15)
Sundin, M., Venkitaraman, A., Jansson, M. & Chatterjee, S. (2017). A Connectedness Constraint for Learning Sparse Graphs. In: 2017 25TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO). Paper presented at 25th European Signal Processing Conference (EUSIPCO), AUG 28-SEP 02, 2017, GREECE (pp. 151-155). IEEE
2017 (English) In: 2017 25TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), IEEE, 2017, pp. 151-155. Conference paper, Published paper (Refereed)
Abstract [en]

Graphs are naturally sparse objects that are used to study many problems involving networks, for example, distributed learning and graph signal processing. In some cases, the graph is not given, but must be learned from the problem and available data. Often it is desirable to learn sparse graphs. However, making a graph highly sparse can split the graph into several disconnected components, leading to several separate networks. The main difficulty is that connectedness is often treated as a combinatorial property, making it hard to enforce in e.g. convex optimization problems. In this article, we show how connectedness of undirected graphs can be formulated as an analytical property and can be enforced as a convex constraint. We especially show how the constraint relates to the distributed consensus problem and graph Laplacian learning. Using simulated and real data, we perform experiments to learn sparse and connected graphs from data.
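The analytical property the abstract refers to can be illustrated numerically: an undirected graph is connected exactly when the second-smallest eigenvalue of its Laplacian (the algebraic connectivity, or Fiedler value) is strictly positive. The sketch below only checks this property on toy graphs; turning it into a convex constraint for graph learning is the paper's contribution and is not reproduced here.

```python
import numpy as np

def algebraic_connectivity(W):
    """Second-smallest eigenvalue of the graph Laplacian L = D - W.
    For an undirected graph, lambda_2 > 0 iff the graph is connected."""
    L = np.diag(W.sum(axis=1)) - W
    return np.sort(np.linalg.eigvalsh(L))[1]

# Connected path graph on 3 nodes: 0 - 1 - 2
W_conn = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
# Disconnected graph: edge 0 - 1 only, node 2 isolated
W_disc = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]])

print(algebraic_connectivity(W_conn))  # positive: connected
print(algebraic_connectivity(W_disc))  # zero: disconnected
```

Because lambda_2 is a concave function of the weight matrix, a lower bound on it is the kind of constraint that fits into convex graph-learning formulations.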

Place, publisher, year, edition, pages
IEEE, 2017
Series
European Signal Processing Conference, ISSN 2076-1465
National subject category
Electrical Engineering and Electronics
Identifiers
urn:nbn:se:kth:diva-226274 (URN), 10.23919/EUSIPCO.2017.8081187 (DOI), 000426986000031 (), 2-s2.0-85041483337 (Scopus ID), 978-0-9928-6267-1 (ISBN)
Conference
25th European Signal Processing Conference (EUSIPCO), AUG 28-SEP 02, 2017, GREECE
Note

QC 20180419

Available from: 2018-04-19 Created: 2018-04-19 Last updated: 2024-03-18. Bibliographically approved
Li, K., Sundin, M., Rojas, C., Chatterjee, S. & Jansson, M. (2016). Alternating strategies with internal ADMM for low-rank matrix reconstruction. Signal Processing, 121, 153-159
2016 (English) In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 121, pp. 153-159. Article in journal (Refereed) Published
Abstract [en]

This paper focuses on the problem of reconstructing low-rank matrices from underdetermined measurements using alternating optimization strategies. We endeavour to combine an alternating least-squares based estimation strategy with ideas from the alternating direction method of multipliers (ADMM) to recover low-rank matrices with linearly parameterized structures, such as Hankel matrices. The use of ADMM helps to improve the estimate in each iteration, since it incorporates information about the direction of estimates achieved in previous iterations. We show that merging these two alternating strategies leads to better performance and shorter computation time than the existing alternating least squares (ALS) strategy. The improved performance is verified via numerical simulations with varying sampling rates and real applications.
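As background to the ALS baseline discussed in the abstract, here is a minimal alternating least-squares sketch for the fully observed, noiseless case: each factor of X ≈ UV is updated in closed form while the other is held fixed. This is an illustration of plain ALS only, not the paper's ALS+ADMM method, and it imposes no Hankel structure.

```python
import numpy as np

rng = np.random.default_rng(0)
r, m, n = 2, 8, 6
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # exact rank-2 target

# Alternating least squares: fix one factor, solve for the other in closed form.
U = rng.standard_normal((m, r))
V = rng.standard_normal((r, n))
for _ in range(20):
    V = np.linalg.lstsq(U, X, rcond=None)[0]        # min_V ||X - U V||_F
    U = np.linalg.lstsq(V.T, X.T, rcond=None)[0].T  # min_U ||X - U V||_F

err = np.linalg.norm(X - U @ V) / np.linalg.norm(X)
print(err)  # near machine precision in this noiseless, fully observed case
```

With missing entries or structural constraints the subproblems stop having this simple closed form, which is where the ADMM inner iterations of the paper come in.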

Place, publisher, year, edition, pages
Elsevier, 2016
Keywords
ADMM, Alternating strategies, Least squares, Low-rank matrix reconstruction
National subject category
Control Engineering; Signal Processing
Identifiers
urn:nbn:se:kth:diva-180899 (URN), 10.1016/j.sigpro.2015.11.002 (DOI), 000369193600013 (), 2-s2.0-84949761064 (Scopus ID)
Funder
Vetenskapsrådet, 621-2011-5847
Note

QC 20160202. QC 20160304

Available from: 2016-02-02 Created: 2016-01-25 Last updated: 2024-03-18. Bibliographically approved
Sundin, M., Chatterjee, S. & Jansson, M. (2016). Bayesian Cramer-Rao bounds for factorized model based low rank matrix reconstruction. In: 2016 24TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO). Paper presented at 24th European Signal Processing Conference (EUSIPCO), AUG 28-SEP 02, 2016, Budapest, HUNGARY (pp. 1227-1231). Institute of Electrical and Electronics Engineers (IEEE)
2016 (English) In: 2016 24TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 1227-1231. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

The low-rank matrix reconstruction (LRMR) problem considers estimation (or reconstruction) of an underlying low-rank matrix from under-sampled linear measurements. A low-rank matrix can be represented using a factorized model. In this article, we derive Bayesian Cramer-Rao bounds for LRMR where a factorized model is used. We first show a general informative bound, and then derive several Bayesian Cramer-Rao bounds for different scenarios. Throughout, the low-rank matrix to be reconstructed is treated as a random matrix, while its model hyperparameters are considered in three cases: deterministic and known, deterministic and unknown, and random. Finally, we compare the bounds with existing practical algorithms through numerical simulations.
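For orientation, the generic template such derivations build on is the Bayesian (Van Trees) Cramér-Rao bound: for a joint density p(y, θ) of measurements and random parameters, any estimator θ̂(y) satisfies

```latex
\mathbb{E}\!\left[\bigl(\hat{\theta}(\mathbf{y})-\theta\bigr)\bigl(\hat{\theta}(\mathbf{y})-\theta\bigr)^{T}\right]
\succeq \mathbf{J}_{B}^{-1},
\qquad
\mathbf{J}_{B} \;=\; \mathbb{E}\!\left[-\,\frac{\partial^{2}\ln p(\mathbf{y},\theta)}{\partial\theta\,\partial\theta^{T}}\right],
```

where the expectation is over both y and θ. The paper specializes J_B to factorized low-rank models under the three hyperparameter scenarios listed above; the expression here is only the standard starting point, not the paper's derived bounds.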

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Keywords
Low-rank matrices, matrix completion, Bayesian estimation, Cramer-Rao bounds.
National subject category
Signal Processing
Research subject
Electrical and Systems Engineering
Identifiers
urn:nbn:se:kth:diva-190111 (URN), 10.1109/EUSIPCO.2016.7760444 (DOI), 000391891900234 (), 2-s2.0-85005949749 (Scopus ID)
Conference
24th European Signal Processing Conference (EUSIPCO), AUG 28-SEP 02, 2016, Budapest, HUNGARY
Note

QC 20160810

Available from: 2016-08-09 Created: 2016-08-09 Last updated: 2024-03-15. Bibliographically approved
Sundin, M. (2016). Bayesian methods for sparse and low-rank matrix problems. (Doctoral dissertation). KTH Royal Institute of Technology
2016 (English) Doctoral thesis, monograph (Other academic)
Abstract [en]

Many scientific and engineering problems require us to process measurements and data in order to extract information. Since we base decisions on information, it is important to design accurate and efficient processing algorithms. This is often done by modeling the signal of interest and the noise in the problem. One type of modeling is Compressed Sensing, where the signal has a sparse or low-rank representation. In this thesis we study different approaches to designing algorithms for sparse and low-rank problems.

Greedy methods are fast methods for sparse problems that iteratively detect and estimate the non-zero components. By modeling the detection problem as an array processing problem and a Bayesian filtering problem, we improve the detection accuracy. Bayesian methods approximate the sparsity by probability distributions which are iteratively modified. We show one approach to making the Relevance Vector Machine, a Bayesian method, robust against sparse noise.

Bayesian methods for low-rank matrix estimation typically use probability distributions that depend only on the singular values, or a factorization approach. Here we introduce a new method, the Relevance Singular Vector Machine, which uses precision matrices with prior distributions to promote low rank. The method is also applied to the robust Principal Component Analysis (PCA) problem, where a low-rank matrix is contaminated by sparse noise.

In many estimation problems, there exist theoretical lower bounds on how well an algorithm can perform. When the performance of an algorithm matches a lower bound, we know that the algorithm is optimal and that the lower bound is tight. When no algorithm matches a lower bound, there is room for better algorithms and/or tighter bounds. In this thesis we derive lower bounds for three different Bayesian low-rank matrix models.

In some problems, only the amplitudes of the measurements are recorded. Despite being non-linear, some such problems can be transformed to linear problems. Earlier works have shown how sparsity can be utilized in the problem; here we show how low rank can be used.

In some situations, the number of measurements and/or the number of parameters is very large. Such Big Data problems require us to design new algorithms. We show how the Basis Pursuit algorithm can be modified for problems with a very large number of parameters.


Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2016. p. 204
Series
TRITA-EE, ISSN 1653-5146 ; 2016:087
National subject category
Signal Processing
Research subject
Electrical and Systems Engineering
Identifiers
urn:nbn:se:kth:diva-191139 (URN), 978-91-7729-044-5 (ISBN)
External cooperation:
Public defence
2016-09-14, F3, Lindstedtsvägen 26, Stockholm, 10:15 (English)
Opponent
Supervisors
Note

QC 20160825

Available from: 2016-08-25 Created: 2016-08-24 Last updated: 2022-06-22. Bibliographically approved
Sundin, M., Rojas, C. R., Jansson, M. & Chatterjee, S. (2016). Relevance Singular Vector Machine for Low-Rank Matrix Reconstruction. IEEE Transactions on Signal Processing, 64(20), 5327-5339
2016 (English) In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 64, no. 20, pp. 5327-5339. Article in journal (Refereed) Published
Abstract [en]

We develop Bayesian learning methods for low-rank matrix reconstruction and completion from linear measurements. For under-determined systems, the developed methods reconstruct low-rank matrices when neither the rank nor the noise power is known a priori. We derive relations between the proposed Bayesian models and low-rank promoting penalty functions. The relations justify the use of Kronecker structured covariance matrices in a Gaussian-based prior. In the methods, we use expectation maximization to learn the model parameters. The performance of the methods is evaluated through extensive numerical simulations on synthetic and real data.
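The Kronecker-structured covariance mentioned in the abstract rests on the standard identity vec(AXB) = (Bᵀ ⊗ A) vec(X), which is what lets a prior of the form vec(X) ~ N(0, C_r ⊗ C_c) model row and column correlations separately. A quick numerical check of that identity (illustration only, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 5))
X = rng.standard_normal((3, 4))

# vec() in column-major ("F") order, as in the usual Kronecker identity.
lhs = (A @ X @ B).flatten(order="F")
rhs = np.kron(B.T, A) @ X.flatten(order="F")
print(np.allclose(lhs, rhs))  # True
```

The identity is also what keeps expectation-maximization updates tractable: operations on the mn x mn covariance C_r ⊗ C_c reduce to operations on the small factors C_r and C_c.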

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Keywords
Machine learning, Bayes methods, compressed sensing
National subject category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-193155 (URN), 10.1109/TSP.2016.2597121 (DOI), 000382677100011 (), 2-s2.0-84985021194 (Scopus ID)
Note

QC 20161017

Available from: 2016-10-17 Created: 2016-09-30 Last updated: 2024-03-15. Bibliographically approved
Sundin, M., Chatterjee, S. & Jansson, M. (2015). Bayesian learning for robust principal component analysis. In: 2015 23rd European Signal Processing Conference, EUSIPCO 2015. Paper presented at 23rd European Signal Processing Conference, EUSIPCO 2015; Nice Congress Center, Nice, France; 31 August 2015 through 4 September 2015 (pp. 2361-2365). IEEE
2015 (English) In: 2015 23rd European Signal Processing Conference, EUSIPCO 2015, IEEE, 2015, pp. 2361-2365. Conference paper, Published paper (Refereed)
Abstract [en]

We develop a Bayesian learning method for robust principal component analysis where the main task is to estimate a low-rank matrix from noisy and outlier contaminated measurements. To promote low-rank, we use a structured Gaussian prior that induces correlations among column vectors as well as row vectors of the matrix under estimation. In our method, the noise and outliers are modeled by a combined noise model. The method is evaluated and compared to other methods using synthetic data as well as data from the MovieLens 100K dataset. Comparisons show that the method empirically provides a significant performance improvement over existing methods.
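A crude numerical illustration of the robust PCA setting (a low-rank matrix plus sparse outliers): alternately truncate to a known rank and hard-threshold to a known outlier count. This GoDec-style sketch assumes oracle knowledge of both the rank and the number of outliers, which the paper's Bayesian method does not need, and it ignores dense noise entirely.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, r, k = 40, 40, 2, 4
L0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # low-rank part
S0 = np.zeros((m, n))
idx = rng.choice(m * n, k, replace=False)
S0.flat[idx] = 12.0 * np.sign(rng.standard_normal(k))           # sparse outliers
M = L0 + S0                                                     # observed matrix

S = np.zeros((m, n))
for _ in range(10):
    # L-step: best rank-r approximation of M - S via truncated SVD.
    U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = (U[:, :r] * s[:r]) @ Vt[:r]
    # S-step: keep the k largest-magnitude entries of the residual M - L.
    R = M - L
    S = np.zeros_like(R)
    keep = np.argsort(np.abs(R), axis=None)[-k:]
    S.flat[keep] = R.flat[keep]

err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
print(err)  # small when the outlier support is identified correctly
```

Once the outlier support is identified, the truncated SVD of M - S recovers the low-rank part exactly in this noiseless toy setting.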

Place, publisher, year, edition, pages
IEEE, 2015
Keywords
Robust principal component analysis, matrix completion, Bayesian learning
National subject category
Signal Processing
Research subject
Electrical and Systems Engineering
Identifiers
urn:nbn:se:kth:diva-185726 (URN), 10.1109/EUSIPCO.2015.7362807 (DOI), 000377943800474 (), 2-s2.0-84963956159 (Scopus ID)
Conference
23rd European Signal Processing Conference, EUSIPCO 2015; Nice Congress Center, Nice, France; 31 August 2015 through 4 September 2015
Funder
Vetenskapsrådet, 621-2011-5847
Note

QC 20160426

Available from: 2016-04-25 Created: 2016-04-25 Last updated: 2024-03-18. Bibliographically approved
Casamitjana, A., Sundin, M., Ghosh, P. & Chatterjee, S. (2015). Bayesian learning for time-varying linear prediction of speech. In: 2015 23rd European Signal Processing Conference, EUSIPCO 2015. Paper presented at 23rd European Signal Processing Conference, EUSIPCO 2015, 31 August 2015 through 4 September 2015 (pp. 325-329). Institute of Electrical and Electronics Engineers (IEEE)
2015 (English) In: 2015 23rd European Signal Processing Conference, EUSIPCO 2015, Institute of Electrical and Electronics Engineers (IEEE), 2015, pp. 325-329. Conference paper, Published paper (Refereed)
Abstract [en]

We develop Bayesian learning algorithms for estimation of time-varying linear prediction (TVLP) coefficients of speech. Estimation of TVLP coefficients is a naturally underdetermined problem. We consider sparsity and subspace based approaches for dealing with the corresponding underdetermined system. Bayesian learning algorithms are developed to achieve better estimation performance. An expectation-maximization (EM) framework is employed to develop the Bayesian learning algorithms, where we use a combined prior to model a driving noise (glottal signal) that has both sparse and dense statistical properties. The efficiency of the Bayesian learning algorithms is shown using a spectral distortion measure for synthetic signals and formant tracking for real speech signals.
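TVLP replaces constant prediction coefficients with time-varying ones, typically expanded in a small basis, which turns estimation into a linear least-squares problem in the expansion coefficients (underdetermined when the basis is rich). A toy sketch with a hypothetical constant-plus-linear basis and a first-order predictor; the paper's Bayesian/EM machinery and glottal-signal prior are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic AR(1) signal with a fixed coefficient a = 0.8.
N, a_true = 2000, 0.8
x = np.zeros(N)
for t in range(1, N):
    x[t] = a_true * x[t - 1] + 0.1 * rng.standard_normal()

# TVLP model: a(t) = c0 * 1 + c1 * (t/N). Stacking the basis-weighted past
# samples as regressors gives an ordinary least-squares problem in (c0, c1).
t = np.arange(1, N) / N
Phi = np.column_stack([x[:-1], t * x[:-1]])
c, *_ = np.linalg.lstsq(Phi, x[1:], rcond=None)
print(c)  # c[0] near 0.8, c[1] near 0 for this time-invariant test signal
```

Since the test signal has a constant coefficient, the linear-basis term should come out near zero; with real speech, higher prediction orders and richer bases make the system underdetermined, motivating the sparsity and subspace priors of the paper.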

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2015
Keywords
Bayesian learning, expectation-maximization, sparsity, time-varying linear prediction, linear prediction, spectral distortion, learning algorithms
National subject category
Electrical Engineering and Electronics
Identifiers
urn:nbn:se:kth:diva-186794 (URN), 10.1109/EUSIPCO.2015.7362398 (DOI), 000377943800066 (), 2-s2.0-84963983947 (Scopus ID), 9780992862633 (ISBN)
Conference
23rd European Signal Processing Conference, EUSIPCO 2015, 31 August 2015 through 4 September 2015
Note

QC 20160516

Available from: 2016-05-16 Created: 2016-05-13 Last updated: 2024-03-18. Bibliographically approved
Sundin, M., Chatterjee, S. & Jansson, M. (2015). Greedy minimization of l1-norm with high empirical success. Paper presented at 40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2015.
2015 (English) Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

We develop a greedy algorithm for the basis-pursuit problem. The algorithm is empirically found to provide the same solution as convex optimization based solvers. The method uses only a subset of the optimization variables in each iteration and iterates until an optimality condition is satisfied. In simulations, the algorithm converges faster than standard methods when the number of measurements is small and the number of variables is large.
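For context, the basis-pursuit problem the abstract refers to is min ||x||_1 subject to Ax = b. The sketch below solves a small instance with a standard ADMM splitting (the formulation from Boyd et al.'s ADMM monograph), i.e., the kind of convex-solver baseline the paper compares against, not the paper's greedy algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 20, 30, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# ADMM for basis pursuit: min ||x||_1  s.t.  Ax = b.
AAt_inv = np.linalg.inv(A @ A.T)

def project(v):
    """Euclidean projection onto the affine set {x : Ax = b}."""
    return v - A.T @ (AAt_inv @ (A @ v - b))

def soft(v, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rho = 1.0
x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)
for _ in range(1000):
    x = project(z - u)        # x-update: feasibility
    z = soft(x + u, 1.0 / rho)  # z-update: sparsity
    u = u + x - z             # scaled dual update

print(np.linalg.norm(x - x_true))  # near zero when basis pursuit recovers x_true
```

With m = 20 random measurements of a 3-sparse vector in dimension 30, basis pursuit recovers x_true with high probability, so the ADMM iterate converges to the true sparse solution.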

Keywords
Convex optimization, basis-pursuit, greedy algorithms.
National subject category
Signal Processing
Research subject
Electrical and Systems Engineering
Identifiers
urn:nbn:se:kth:diva-163700 (URN), 10.1109/ICASSP.2015.7178685 (DOI), 000427402903186 (), 2-s2.0-84946038019 (Scopus ID)
Conference
40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2015
Note

Presented at the 40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2015 in Brisbane, Australia. QC 20150415

Available from: 2015-04-10 Created: 2015-04-10 Last updated: 2024-03-18. Bibliographically approved
Sundin, M., Chatterjee, S. & Jansson, M. (2014). COMBINED MODELING OF SPARSE AND DENSE NOISE IMPROVES BAYESIAN RVM. In: 2014 PROCEEDINGS OF THE 22ND EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO). Paper presented at 22nd European Signal Processing Conference (EUSIPCO), SEP 01-05, 2014, Lisbon, PORTUGAL (pp. 1841-1845). IEEE
2014 (English) In: 2014 PROCEEDINGS OF THE 22ND EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), IEEE, 2014, pp. 1841-1845. Conference paper, Published paper (Refereed)
Abstract [en]

Using a Bayesian approach, we consider the problem of recovering sparse signals under additive sparse and dense noise. Typically, sparse noise models outliers, impulse bursts or data loss. To handle sparse noise, existing methods simultaneously estimate the sparse noise and the sparse signal of interest. For estimating the sparse signal, without estimating the sparse noise, we construct a Relevance Vector Machine (RVM). In the RVM, sparse noise and ever-present dense noise are treated through a combined noise model. Through simulations, we show the efficiency of the new RVM for three applications: kernel regression, housing price prediction and compressed sensing.
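To see why a combined sparse-plus-dense noise model matters, compare ordinary least squares with a simple robust alternative on data corrupted by both noise types. This sketch uses Huber-weighted iteratively reweighted least squares as a stand-in robustifier, not the paper's RVM, and all data-generation parameters here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 3
Phi = rng.standard_normal((n, p))
w_true = np.array([1.0, -2.0, 0.5])
y = Phi @ w_true + 0.05 * rng.standard_normal(n)  # dense Gaussian noise
out = rng.choice(n, 5, replace=False)
y[out] += 10.0 * rng.standard_normal(5)           # sparse outlier noise

# Ordinary least squares is pulled toward the outliers.
w_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]

# Huber-weighted IRLS: downweight samples with large residuals.
w = w_ls.copy()
for _ in range(20):
    r = y - Phi @ w
    wts = np.minimum(1.0, 0.5 / np.maximum(np.abs(r), 1e-12))
    Wp = Phi * wts[:, None]
    w = np.linalg.solve(Phi.T @ Wp, Wp.T @ y)

print(np.linalg.norm(w_ls - w_true), np.linalg.norm(w - w_true))
```

The outlier-contaminated samples end up with near-zero weight, so the robust estimate stays close to w_true while plain least squares is biased; the RVM of the paper achieves this effect within a Bayesian model rather than by reweighting.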

Place, publisher, year, edition, pages
IEEE, 2014
Series
European Signal Processing Conference, ISSN 2076-1465
Keywords
Robust regression, Bayesian learning, Relevance vector machine, Compressed sensing
National subject category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-243801 (URN), 000393420200370 (), 2-s2.0-84911921506 (Scopus ID), 978-0-9928626-1-9 (ISBN)
Conference
22nd European Signal Processing Conference (EUSIPCO), SEP 01-05, 2014, Lisbon, PORTUGAL
Note

QC 20190206

Available from: 2019-02-06 Created: 2019-02-06 Last updated: 2022-06-26. Bibliographically approved
Sundin, M., Chatterjee, S. & Jansson, M. (2014). Combined Modelling of Sparse and Dense noise improves Bayesian RVM. In: Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), 2014. Paper presented at The 22nd European Signal Processing Conference (EUSIPCO), 1-5 Sep. 2014, Lisbon (pp. 1841-1845). IEEE conference proceedings
2014 (English) In: Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), 2014, IEEE conference proceedings, 2014, pp. 1841-1845. Conference paper, Oral presentation with published abstract (Refereed)

Place, publisher, year, edition, pages
IEEE conference proceedings, 2014
Keywords
Robust regression, Bayesian learning, Relevance vector machine, Compressed sensing
National subject category
Signal Processing
Research subject
Electrical and Systems Engineering
Identifiers
urn:nbn:se:kth:diva-165603 (URN)
Conference
The 22nd European Signal Processing Conference (EUSIPCO), 1-5 Sep. 2014, Lisbon
Note

QC 20150512

Available from: 2015-04-29 Created: 2015-04-29 Last updated: 2024-03-18. Bibliographically approved
Organisations
Identifiers
ORCID iD: orcid.org/0000-0001-6992-5771
