Publications (10 of 93)
Ghayem, F., Sadeghi, M., Babaie-Zadeh, M., Chatterjee, S., Skoglund, M. & Jutten, C. (2018). Sparse Signal Recovery Using Iterative Proximal Projection. IEEE Transactions on Signal Processing, 66(4), 879-894
Sparse Signal Recovery Using Iterative Proximal Projection
2018 (English). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 66, no 4, p. 879-894. Article in journal (Refereed). Published.
Abstract [en]

This paper is concerned with designing efficient algorithms for recovering sparse signals from noisy underdetermined measurements. More precisely, we consider minimization of a nonsmooth and nonconvex sparsity promoting function subject to an error constraint. To solve this problem, we use an alternating minimization penalty method, which ends up with an iterative proximal-projection approach. Furthermore, inspired by accelerated gradient schemes for solving convex problems, we equip the obtained algorithm with a so-called extrapolation step to boost its performance. Additionally, we prove its convergence to a critical point. Our extensive simulations on synthetic as well as real data verify that the proposed algorithm considerably outperforms some well-known and recently proposed algorithms.
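The proximal step plus extrapolation idea described in the abstract can be illustrated with a generic accelerated proximal-gradient (FISTA-style) loop for the ℓ1-regularized case. This is a sketch of the general scheme, not the paper's exact IPP algorithm; all names and parameters are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def accelerated_prox_grad(A, y, lam, n_iter=200):
    # Gradient step on ||Ax - y||^2 / 2, proximal (soft-threshold) step,
    # then the extrapolation step that boosts convergence.
    _, n = A.shape
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(n)
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # extrapolation
        x, t = x_new, t_new
    return x

# Demo: recover a 3-sparse vector from 40 noiseless measurements in 100 dims.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = accelerated_prox_grad(A, y, lam=0.01)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```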

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Sparse signal recovery, compressed sensing, SL0, proximal splitting algorithms, iterative sparsification-projection
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-223260 (URN), 10.1109/TSP.2017.2778695 (DOI), 000423703600003 (ISI), 2-s2.0-85037644363 (Scopus ID)
Note

QC 20180216

Available from: 2018-02-16. Created: 2018-02-16. Last updated: 2018-02-16. Bibliographically approved.
Sundin, M., Venkitaraman, A., Jansson, M. & Chatterjee, S. (2017). A Connectedness Constraint for Learning Sparse Graphs. In: 2017 25th European Signal Processing Conference (EUSIPCO). Paper presented at the 25th European Signal Processing Conference (EUSIPCO), Aug 28 - Sep 2, 2017, Greece (pp. 151-155). IEEE
A Connectedness Constraint for Learning Sparse Graphs
2017 (English). In: 2017 25th European Signal Processing Conference (EUSIPCO), IEEE, 2017, p. 151-155. Conference paper, Published paper (Refereed).
Abstract [en]

Graphs are naturally sparse objects that are used to study many problems involving networks, for example, distributed learning and graph signal processing. In some cases, the graph is not given, but must be learned from the problem and available data. Often it is desirable to learn sparse graphs. However, making a graph highly sparse can split the graph into several disconnected components, leading to several separate networks. The main difficulty is that connectedness is often treated as a combinatorial property, making it hard to enforce in e.g. convex optimization problems. In this article, we show how connectedness of undirected graphs can be formulated as an analytical property and can be enforced as a convex constraint. We especially show how the constraint relates to the distributed consensus problem and graph Laplacian learning. Using simulated and real data, we perform experiments to learn sparse and connected graphs from data.
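The analytical property the paper builds on can be illustrated via the graph Laplacian: an undirected graph is connected exactly when the second-smallest Laplacian eigenvalue (the Fiedler value) is positive. The sketch below only checks that property; it is not the paper's convex constraint formulation, and all names are illustrative:

```python
import numpy as np

def is_connected(W, tol=1e-9):
    # An undirected weighted graph is connected iff the second-smallest
    # eigenvalue of its Laplacian L = D - W (the Fiedler value) is > 0,
    # which turns connectedness into an analytical, non-combinatorial test.
    L = np.diag(W.sum(axis=1)) - W
    return np.sort(np.linalg.eigvalsh(L))[1] > tol

# Path graph on 4 nodes (connected) vs. two disjoint edges (disconnected).
W_path = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    W_path[i, j] = W_path[j, i] = 1.0
W_split = np.zeros((4, 4))
for i, j in [(0, 1), (2, 3)]:
    W_split[i, j] = W_split[j, i] = 1.0

connected_path = is_connected(W_path)
connected_split = is_connected(W_split)
```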

Place, publisher, year, edition, pages
IEEE, 2017
Series
European Signal Processing Conference, ISSN 2076-1465
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-226274 (URN), 000426986000031 (ISI), 2-s2.0-85041483337 (Scopus ID), 978-0-9928-6267-1 (ISBN)
Conference
25th European Signal Processing Conference (EUSIPCO), Aug 28 - Sep 2, 2017, Greece
Note

QC 20180419

Available from: 2018-04-19. Created: 2018-04-19. Last updated: 2018-04-19. Bibliographically approved.
Zaki, A., Venkitaraman, A., Chatterjee, S. & Rasmussen, L. K. (2017). Distributed greedy sparse learning over doubly stochastic networks. In: 25th European Signal Processing Conference, EUSIPCO 2017. Paper presented at the 25th European Signal Processing Conference, EUSIPCO 2017, Kos International Convention Center, Kos, Greece, 28 August 2017 through 2 September 2017 (pp. 361-364). Institute of Electrical and Electronics Engineers (IEEE)
Distributed greedy sparse learning over doubly stochastic networks
2017 (English). In: 25th European Signal Processing Conference, EUSIPCO 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, Vol. 2017, p. 361-364. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, we develop a greedy algorithm for sparse learning over a doubly stochastic network. In the proposed algorithm, nodes of the network perform sparse learning by exchanging their individual intermediate variables. The algorithm is iterative in nature. We provide a restricted isometry property (RIP)-based theoretical guarantee both on the performance of the algorithm and the number of iterations required for convergence. Using simulations, we show that the proposed algorithm provides good performance.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
National Category
Computer Engineering
Identifiers
urn:nbn:se:kth:diva-224301 (URN), 10.23919/EUSIPCO.2017.8081229 (DOI), 2-s2.0-85041494941 (Scopus ID), 978-0-9928-6267-1 (ISBN)
Conference
25th European Signal Processing Conference, EUSIPCO 2017, Kos International Convention Center, Kos, Greece, 28 August 2017 through 2 September 2017
Note

QC 20180316

Available from: 2018-03-16. Created: 2018-03-16. Last updated: 2018-03-16. Bibliographically approved.
Zaki, A., Chatterjee, S. & Rasmussen, L. K. (2017). Generalized fusion algorithm for compressive sampling reconstruction and RIP-based analysis. Signal Processing, 139, 36-48
Generalized fusion algorithm for compressive sampling reconstruction and RIP-based analysis
2017 (English). In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 139, p. 36-48. Article in journal (Refereed). Published.
Abstract [en]

We design a Generalized Fusion Algorithm for Compressive Sampling (gFACS) reconstruction. In the gFACS algorithm, several individual compressive sampling (CS) reconstruction algorithms participate to achieve a better performance than the individual algorithms. The gFACS algorithm is iterative in nature and its convergence is proved under certain conditions using Restricted Isometry Property (RIP) based theoretical analysis. The theoretical analysis allows for the participation of any off-the-shelf or new CS reconstruction algorithm with simple modifications, and still guarantees convergence. We show modifications of some well-known CS reconstruction algorithms for their seamless use in the gFACS algorithm. Simulation results show that the proposed gFACS algorithm indeed provides better performance than the participating individual algorithms.

Place, publisher, year, edition, pages
Elsevier Science BV, 2017
Keywords
Compressive sampling, Greedy algorithm, RIP analysis, Fusion strategy
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-208710 (URN), 10.1016/j.sigpro.2017.03.021 (DOI), 000402214200004 (ISI), 2-s2.0-85017439915 (Scopus ID)
Note

QC 2017-06-12

Available from: 2017-06-12. Created: 2017-06-12. Last updated: 2017-06-15. Bibliographically approved.
Li, K., Sundin, M., Rojas, C., Chatterjee, S. & Jansson, M. (2016). Alternating strategies with internal ADMM for low-rank matrix reconstruction. Signal Processing, 121, 153-159
Alternating strategies with internal ADMM for low-rank matrix reconstruction
2016 (English). In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 121, p. 153-159. Article in journal (Refereed). Published.
Abstract [en]

This paper focuses on the problem of reconstructing low-rank matrices from underdetermined measurements using alternating optimization strategies. We endeavour to combine an alternating least-squares based estimation strategy with ideas from the alternating direction method of multipliers (ADMM) to recover low-rank matrices with linear parameterized structures, such as Hankel matrices. The use of ADMM helps to improve the estimate in each iteration due to its capability of incorporating information about the direction of estimates achieved in previous iterations. We show that merging these two alternating strategies leads to better performance and shorter run time than the existing alternating least squares (ALS) strategy. The improved performance is verified via numerical simulations with varying sampling rates and real applications.
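The plain ALS baseline the paper improves on can be sketched as follows: with one factor fixed, each row of the other factor is a small regularized least-squares problem over the observed entries. This is a generic low-rank completion sketch, not the paper's ADMM-augmented method; all names are illustrative:

```python
import numpy as np

def als_complete(M_obs, mask, rank, n_iter=50, reg=1e-3):
    # Alternating least squares: fix V and solve for each row of U over the
    # observed entries, then fix U and solve for each row of V.
    m, n = M_obs.shape
    rng = np.random.default_rng(1)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    R = reg * np.eye(rank)                      # small ridge for stability
    for _ in range(n_iter):
        for i in range(m):
            obs = mask[i]                       # observed columns in row i
            Vi = V[obs]
            U[i] = np.linalg.solve(Vi.T @ Vi + R, Vi.T @ M_obs[i, obs])
        for j in range(n):
            obs = mask[:, j]                    # observed rows in column j
            Uj = U[obs]
            V[j] = np.linalg.solve(Uj.T @ Uj + R, Uj.T @ M_obs[obs, j])
    return U @ V.T

# Demo: rank-2 matrix with 60% of entries observed.
rng = np.random.default_rng(0)
M_true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random((30, 30)) < 0.6
M_hat = als_complete(M_true * mask, mask, rank=2)
rel_err = np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true)
```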

Place, publisher, year, edition, pages
Elsevier, 2016
Keywords
ADMM, Alternating strategies, Least squares, Low-rank matrix reconstruction
National Category
Control Engineering; Signal Processing
Identifiers
urn:nbn:se:kth:diva-180899 (URN), 10.1016/j.sigpro.2015.11.002 (DOI), 000369193600013 (ISI), 2-s2.0-84949761064 (Scopus ID)
Funder
Swedish Research Council, 621-2011-5847
Note

QC 20160202. QC 20160304

Available from: 2016-02-02. Created: 2016-01-25. Last updated: 2017-11-30. Bibliographically approved.
Vehkaperä, M., Kabashima, Y. & Chatterjee, S. (2016). Analysis of Regularized LS Reconstruction and Random Matrix Ensembles in Compressed Sensing. IEEE Transactions on Information Theory, 62(4), 2100-2124
Analysis of Regularized LS Reconstruction and Random Matrix Ensembles in Compressed Sensing
2016 (English). In: IEEE Transactions on Information Theory, ISSN 0018-9448, E-ISSN 1557-9654, Vol. 62, no 4, p. 2100-2124. Article in journal (Refereed). Published.
Abstract [en]

The performance of regularized least-squares estimation in noisy compressed sensing is analyzed in the limit when the dimensions of the measurement matrix grow large. The sensing matrix is considered to be from a class of random ensembles that encloses as special cases standard Gaussian, row-orthogonal, geometric, and so-called T-orthogonal constructions. Source vectors that have non-uniform sparsity are included in the system model. Regularization based on the ℓ1-norm and leading to LASSO estimation, or basis pursuit denoising, is given the main emphasis in the analysis. Extensions to ℓ2-norm and zero-norm regularization are also briefly discussed. The analysis is carried out using the replica method in conjunction with some novel matrix integration results. Numerical experiments for LASSO are provided to verify the accuracy of the analytical results. The numerical experiments show that for noisy compressed sensing, the standard Gaussian ensemble is a suboptimal choice for the measurement matrix. Orthogonal constructions provide a superior performance in all considered scenarios and are easier to implement in practical applications. It is also discovered that for non-uniform sparsity patterns, the T-orthogonal matrices can further improve the mean square error behavior of the reconstruction when the noise level is not too high. However, as the additive noise becomes more prominent in the system, the simple row-orthogonal measurement matrix appears to be the best choice out of the considered ensembles.
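A row-orthogonal sensing matrix of the kind compared above can be generated from a QR factorization of a square Gaussian matrix; the sketch below pairs it with the ℓ2-regularized (ridge) LS estimator, the regularization variant with a closed form. This is an illustrative construction, not the paper's replica analysis; names and scalings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 200, 100, 0.1        # signal dim, measurements, noise level

# Row-orthogonal ensemble: keep m rows of a random orthogonal matrix
# (from QR of a Gaussian matrix), so that A @ A.T = I_m exactly.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q[:m]

x = rng.standard_normal(n)
y = A @ x + sigma * rng.standard_normal(m)

# l2-regularized LS (ridge) in its closed form for a wide matrix.
lam = sigma ** 2
x_hat = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), y)
mse = np.mean((x_hat - x) ** 2)    # dominated by the n - m unseen directions
```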

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Compressed sensing, eigenvalues of random matrices, compressed sensing matrices, noisy linear measurements, ℓ1 minimization
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-185615 (URN), 10.1109/TIT.2016.2525824 (DOI), 000372744300039 (ISI), 2-s2.0-84963759120 (Scopus ID)
Funder
Swedish Research Council, 621-2011-1024
Note

QC 20160428

Available from: 2016-04-28. Created: 2016-04-25. Last updated: 2018-01-10. Bibliographically approved.
Fotedar, G., Aditya Gaonkar, P., Chatterjee, S. & Ghosh, P. K. (2016). Automatic recognition of social roles using long term role transitions in small group interactions. In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Paper presented at the 17th Annual Conference of the International Speech Communication Association, INTERSPEECH 2016, 8 September 2016 through 16 September 2016 (pp. 2065-2069).
Automatic recognition of social roles using long term role transitions in small group interactions
2016 (English). In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2016, p. 2065-2069. Conference paper, Published paper (Refereed).
Abstract [en]

Recognition of social roles in small group interactions is challenging because of the presence of disfluency in speech, frequent overlaps between speakers, short speaker turns and the need for reliable data annotation. In this work, we consider the problem of recognizing four roles, namely Gatekeeper, Protagonist, Neutral, and Supporter, in small group interactions in the AMI corpus. In general, Gatekeeper and Protagonist roles occur less frequently compared to Neutral and Supporter. We exploit role transitions across segments in a meeting by incorporating role transition probabilities and formulating the role recognition as a decoding problem over the sequence of segments in an interaction. Experiments are performed in a five-fold cross validation setup using acoustic, lexical and structural features, with precision, recall and F-score as the performance metrics. The results reveal that precision averaged across all folds and different feature combinations improves for Gatekeeper and Protagonist by 13.64% and 12.75%, respectively, when the role transition information is used, which in turn improves the F-score for Gatekeeper by 6.58%, while the F-scores for the other roles do not change significantly.
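Decoding a role sequence over segments with transition probabilities, as described above, is a classic Viterbi dynamic program. The sketch below is a generic Viterbi decoder over the four role labels, not the paper's feature pipeline; the scores and names are illustrative:

```python
import numpy as np

ROLES = ["Gatekeeper", "Protagonist", "Neutral", "Supporter"]

def viterbi(log_emit, log_trans, log_init):
    # Most likely role sequence: log_emit[t, k] scores role k at segment t,
    # log_trans[i, j] is the log-probability of moving from role i to role j.
    T, K = log_emit.shape
    delta = log_init + log_emit[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans        # K x K candidate moves
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(K)] + log_emit[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):                  # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy check: with uniform transitions, decoding reduces to the
# per-segment argmax of the role scores.
log_emit = np.log(np.array([[0.7, 0.1, 0.1, 0.1],
                            [0.1, 0.7, 0.1, 0.1],
                            [0.1, 0.1, 0.7, 0.1]]))
path = viterbi(log_emit, np.zeros((4, 4)), np.zeros(4))
decoded = [ROLES[k] for k in path]
```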

Keywords
Dynamic programming, Small group interaction, Social computing, Social roles, Reconfigurable hardware, Speech communication, Speech processing, Automatic recognition, Feature combination, Group interaction, Performance metrics, Structural feature, Transition probabilities, Speech recognition
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-202007 (URN), 10.21437/Interspeech.2016-202 (DOI), 000409394401115 (ISI), 2-s2.0-84994365873 (Scopus ID)
Conference
17th Annual Conference of the International Speech Communication Association, INTERSPEECH 2016, 8 September 2016 through 16 September 2016
Note

QC 20170224

Available from: 2017-02-24. Created: 2017-02-24. Last updated: 2018-01-13. Bibliographically approved.
Sundman, D., Chatterjee, S. & Skoglund, M. (2016). Design and Analysis of a Greedy Pursuit for Distributed Compressed Sensing. IEEE Transactions on Signal Processing, 64(11), 2803-2818
Design and Analysis of a Greedy Pursuit for Distributed Compressed Sensing
2016 (English). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 64, no 11, p. 2803-2818. Article in journal (Refereed). Published.
Abstract [en]

We consider a distributed compressed sensing scenario where many sensors measure correlated sparse signals and the sensors are connected through a network. Correlation between sparse signals is modeled by a partial common support-set. For such a scenario, the main objective of this paper is to develop a greedy pursuit algorithm. We develop a distributed parallel pursuit (DIPP) algorithm based on exchange of information about estimated support-sets at sensors. The exchange of information helps to improve estimation of the partial common support-set, which in turn gradually improves estimation of the support-sets at all sensors, leading to better reconstruction quality. We provide restricted isometry property (RIP) based theoretical analysis of the algorithm's convergence and reconstruction performance. Under certain theoretical requirements (i.e., under certain assumptions) on the quality of information exchange over the network and the RIP parameters of sensor nodes, we show that the DIPP algorithm converges to a performance level that depends on a scaled additive measurement noise power (convergence in theory), where the scaling coefficient is a function of RIP parameters and information processing quality parameters. Using simulations, we show the practical reconstruction performance of DIPP as a function of the amount of undersampling, signal-to-measurement-noise ratio and network-connectivity conditions.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Keywords
Compressed sensing, restricted isometry property, distributed estimation
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-187771 (URN), 10.1109/TSP.2016.2523462 (DOI), 000374993300006 (ISI), 2-s2.0-84969261390 (Scopus ID)
Note

QC 20160530

Available from: 2016-05-30. Created: 2016-05-30. Last updated: 2017-11-30. Bibliographically approved.
Koslicki, D., Chatterjee, S., Shahrivar, D., Walker, A. W., Francis, S. C., Fraser, L. J., . . . Corander, J. (2015). ARK: Aggregation of Reads by K-Means for Estimation of Bacterial Community Composition. PLoS ONE, 10(10), Article ID e0140644.
ARK: Aggregation of Reads by K-Means for Estimation of Bacterial Community Composition
2015 (English). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 10, no 10, article id e0140644. Article in journal (Refereed). Published.
Abstract [en]

Motivation: Estimation of bacterial community composition from high-throughput sequenced 16S rRNA gene amplicons is a key task in microbial ecology. Since the sequence data from each sample typically consist of a large number of reads and are adversely impacted by different levels of biological and technical noise, accurate analysis of such large datasets is challenging.

Results: There has been a recent surge of interest in using compressed sensing inspired and convex-optimization based methods to solve the estimation problem for bacterial community composition. These methods typically rely on summarizing the sequence data by frequencies of low-order k-mers and matching this information statistically with a taxonomically structured database. Here we show that the accuracy of the resulting community composition estimates can be substantially improved by aggregating the reads from a sample with an unsupervised machine learning approach prior to the estimation phase. The aggregation of reads is a pre-processing approach in which we use a standard K-means clustering algorithm that partitions a large set of reads into subsets with reasonable computational cost, providing several vectors of first-order statistics instead of only a single statistical summary in terms of k-mer frequencies. The output of the clustering is then processed further to obtain the final estimate for each sample. The resulting method is called Aggregation of Reads by K-means (ARK), and it is based on a statistical argument via a mixture density formulation. ARK is found to improve the fidelity and robustness of several recently introduced methods, with only a modest increase in computational complexity.

Availability: An open source, platform-independent implementation of the method in the Julia programming language is freely available at https://github.com/dkoslicki/ARK. A Matlab implementation is available at http://www.ee.kth.se/ctsoftware.
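The pre-processing step described above — summarizing reads by k-mer frequencies and partitioning them with K-means — can be sketched in a few lines. This is a toy illustration of the idea, not the ARK implementation; the reads, deterministic initialization and all names are illustrative:

```python
import numpy as np
from itertools import product

def kmer_freq(read, k=2):
    # First-order summary of one read: normalized k-mer frequency vector.
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    idx = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(read) - k + 1):
        v[idx[read[i:i + k]]] += 1
    return v / max(v.sum(), 1.0)

def kmeans(X, C0, n_iter=20):
    # Plain Lloyd's algorithm: assign each read to the nearest centroid,
    # then recompute centroids; a fixed init keeps the demo deterministic.
    C = C0.copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for k in range(len(C)):
            if np.any(labels == k):
                C[k] = X[labels == k].mean(axis=0)
    return labels, C

# Two AC-rich reads and two GG/GT-rich reads should split into two clusters,
# giving two first-order summaries instead of one for the whole sample.
reads = ["ACACACACAC", "ACACACACGT", "GGGTGGGTGG", "GGGTGGGTGT"]
X = np.array([kmer_freq(r) for r in reads])
labels, centroids = kmeans(X, X[[0, 2]])
```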

Place, publisher, year, edition, pages
Public Library of Science, 2015
Keywords
Split Vector Quantization, LSF Parameters, Sequences, Megan
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-176956 (URN), 10.1371/journal.pone.0140644 (DOI), 000363309200025 (ISI), 26496191 (PubMedID), 2-s2.0-84949460421 (Scopus ID)
Note

QC 20151216

Available from: 2015-12-16. Created: 2015-11-13. Last updated: 2017-12-01. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-2638-6047