Publications (10 of 95)
Liang, X., Javid, A. M., Skoglund, M. & Chatterjee, S. (2018). Distributed large neural network with centralized equivalence. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Paper presented at the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2976-2980). IEEE
Distributed Large Neural Network with Centralized Equivalence
2018 (English). In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2018, p. 2976-2980. Conference paper, Published paper (Refereed)
Abstract [en]

In this article, we develop a distributed algorithm for learning a large neural network that is both deep and wide. We consider a scenario where the training dataset is not available at a single processing node but is distributed among several nodes. We show that a recently proposed large neural network architecture, the progressive learning network (PLN), can be trained in a distributed setup with centralized equivalence: the distributed training yields the same result as if all the data were available at a single node. Using a distributed convex optimization method, the alternating direction method of multipliers (ADMM), we train the PLN in this distributed setup.
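As a toy illustration of centralized equivalence (this is not the paper's PLN training code; the ridge objective and all names below are stand-ins), the sketch runs consensus ADMM for a regularized least-squares problem with data split across three nodes and compares the result against the centralized solution:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, nodes = 120, 8, 3
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)
lam = 0.1  # ridge weight, a stand-in for one convex PLN subproblem

# centralized solution of min_w 0.5*||X w - y||^2 + 0.5*lam*||w||^2
w_central = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# split rows across nodes: each node keeps (X_i, y_i) locally
Xs, ys = np.array_split(X, nodes), np.array_split(y, nodes)

rho = 30.0                 # ADMM penalty, chosen near the local Hessian scale
w = np.zeros((nodes, d))   # local primal variables
u = np.zeros((nodes, d))   # scaled dual variables
z = np.zeros(d)            # shared consensus variable

for _ in range(1000):
    for i in range(nodes):  # local updates touch only local data
        w[i] = np.linalg.solve(Xs[i].T @ Xs[i] + rho * np.eye(d),
                               Xs[i].T @ ys[i] + rho * (z - u[i]))
    # consensus update, carrying the ridge regularizer
    z = rho * (w + u).sum(axis=0) / (lam + nodes * rho)
    u += w - z              # dual update
```

Each node solves a small local system using only its own data; the consensus variable `z` converges to the same weights a single node holding all the data would compute.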

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
Distributed learning, neural networks, data parallelism, convex optimization
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-237152 (URN) 10.1109/ICASSP.2018.8462179 (DOI) 000446384603029 (ISI) 2-s2.0-85054237028 (Scopus ID)
Conference
2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)
Note

QC 20181025

Available from: 2018-10-25 Created: 2018-10-25 Last updated: 2018-10-25. Bibliographically approved
Venkitaraman, A., Chatterjee, S. & Händel, P. (2018). Multi-kernel regression for graph signal processing. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Paper presented at the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4644-4648). IEEE
Multi-kernel Regression for Graph Signal Processing
2018 (English). In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2018, p. 4644-4648. Conference paper, Published paper (Refereed)
Abstract [en]

We develop a multi-kernel regression method for graph signal processing in which the target signal is assumed to be smooth over a graph. In multi-kernel regression, an effective kernel function is expressed as a linear combination of many basis kernel functions. We estimate the linear weights that define the effective kernel by appropriate regularization based on graph smoothness. We show that the resulting optimization problem is convex and propose an accelerated projected gradient descent solution. Simulation results using real-world graph signals show the efficiency of the multi-kernel approach over a standard single-kernel approach.
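A minimal sketch of the multi-kernel idea under stated assumptions: the effective kernel is a nonnegative combination of RBF basis kernels, and the weights are fit by projected gradient descent. The objective here is a simple kernel-alignment surrogate standing in for the paper's graph-smoothness regularization, and the data, widths, and step size are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = np.sort(rng.uniform(-3, 3, n))
y = np.sin(x) + 0.1 * rng.standard_normal(n)

def rbf_kernel(x, width):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

widths = [0.2, 0.5, 1.0, 2.0]                 # basis kernels K_m
Ks = np.stack([rbf_kernel(x, w) for w in widths])
T = np.outer(y, y)                            # target Gram matrix

# projected gradient descent on f(a) = ||sum_m a_m K_m - T||_F^2, subject to a >= 0
a = np.full(len(widths), 1.0 / len(widths))   # start from uniform weights
step = 5e-5                                   # below 1/L for this quadratic
for _ in range(2000):
    K = np.tensordot(a, Ks, axes=1)
    grad = 2.0 * np.array([np.sum((K - T) * Km) for Km in Ks])
    a = np.maximum(a - step * grad, 0.0)      # projection onto the feasible set

# kernel ridge regression with the learned effective kernel
K = np.tensordot(a, Ks, axes=1)
alpha = np.linalg.solve(K + 1e-2 * np.eye(n), y)
y_hat = K @ alpha
```

The projection step is just clipping to the nonnegative orthant; swapping in an accelerated (Nesterov-style) update, as the paper does, changes only the gradient step.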

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
Graph signal processing, kernel regression, convex optimization
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-237154 (URN) 10.1109/ICASSP.2018.8461643 (DOI) 000446384604162 (ISI) 2-s2.0-85054280684 (Scopus ID)
Conference
2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)
Note

QC 20181025

Available from: 2018-10-25 Created: 2018-10-25 Last updated: 2018-10-25. Bibliographically approved
Ghayem, F., Sadeghi, M., Babaie-Zadeh, M., Chatterjee, S., Skoglund, M. & Jutten, C. (2018). Sparse Signal Recovery Using Iterative Proximal Projection. IEEE Transactions on Signal Processing, 66(4), 879-894
Sparse Signal Recovery Using Iterative Proximal Projection
2018 (English). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 66, no. 4, p. 879-894. Article in journal (Refereed), Published
Abstract [en]

This paper is concerned with designing efficient algorithms for recovering sparse signals from noisy underdetermined measurements. More precisely, we consider minimization of a nonsmooth, nonconvex sparsity-promoting function subject to an error constraint. To solve this problem, we use an alternating minimization penalty method, which leads to an iterative proximal-projection approach. Furthermore, inspired by accelerated gradient schemes for convex problems, we equip the algorithm with a so-called extrapolation step to boost its performance. We also prove its convergence to a critical point. Extensive simulations on synthetic as well as real data verify that the proposed algorithm considerably outperforms several well-known and recently proposed algorithms.
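A minimal noiseless (error constraint with eps = 0) sketch of the sparsification-projection loop: alternate a proximal shrinkage step with an exact projection onto the measurement-consistent set {x : Ax = y}. A convex soft-threshold prox stands in for the paper's nonconvex surrogate, there is no extrapolation step, and the sizes and decay rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = (1.0 + rng.random(k)) * rng.choice([-1.0, 1.0], k)
y = A @ x_true                       # noiseless, so the constraint is Ax = y

Apinv = np.linalg.pinv(A)
x = Apinv @ y                        # least-norm feasible starting point
tau = 0.5 * np.max(np.abs(x))        # shrinkage level, decreased over iterations
for _ in range(300):
    x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)  # prox of tau*||.||_1
    x = x + Apinv @ (y - A @ x)                        # project onto {x : Ax = y}
    tau *= 0.98                                        # gradually relax shrinkage
```

The loop always ends with a projection, so the iterate stays exactly feasible while the decaying threshold drives it toward a sparse solution.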

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Sparse signal recovery, compressed sensing, SL0, proximal splitting algorithms, iterative sparsification-projection
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-223260 (URN) 10.1109/TSP.2017.2778695 (DOI) 000423703600003 (ISI) 2-s2.0-85037644363 (Scopus ID)
Note

QC 20180216

Available from: 2018-02-16 Created: 2018-02-16 Last updated: 2018-02-16. Bibliographically approved
Sundin, M., Venkitaraman, A., Jansson, M. & Chatterjee, S. (2017). A Connectedness Constraint for Learning Sparse Graphs. In: 2017 25th European Signal Processing Conference (EUSIPCO). Paper presented at the 25th European Signal Processing Conference (EUSIPCO), Aug 28 - Sep 2, 2017, Greece (pp. 151-155). IEEE
A Connectedness Constraint for Learning Sparse Graphs
2017 (English). In: 2017 25th European Signal Processing Conference (EUSIPCO), IEEE, 2017, p. 151-155. Conference paper, Published paper (Refereed)
Abstract [en]

Graphs are naturally sparse objects used to study many problems involving networks, for example, distributed learning and graph signal processing. In some cases, the graph is not given but must be learned from the problem and the available data. It is often desirable to learn sparse graphs. However, making a graph highly sparse can split it into several disconnected components, leading to several separate networks. The main difficulty is that connectedness is usually treated as a combinatorial property, making it hard to enforce in, e.g., convex optimization problems. In this article, we show how connectedness of undirected graphs can be formulated as an analytical property and enforced as a convex constraint. In particular, we show how the constraint relates to the distributed consensus problem and to graph Laplacian learning. Using simulated and real data, we perform experiments to learn sparse and connected graphs from data.
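The standard analytical handle on connectedness is spectral: an undirected graph is connected if and only if the second-smallest eigenvalue of its Laplacian (the algebraic connectivity, or Fiedler value) is strictly positive. The sketch below only illustrates this spectral characterization, not the paper's specific convex constraint formulation:

```python
import numpy as np

def algebraic_connectivity(W):
    """Second-smallest eigenvalue of L = D - W; positive iff the graph is connected."""
    L = np.diag(W.sum(axis=1)) - W
    return np.sort(np.linalg.eigvalsh(L))[1]

# path graph on 4 nodes: connected
W_path = np.zeros((4, 4))
for i in range(3):
    W_path[i, i + 1] = W_path[i + 1, i] = 1.0

# two disjoint edges: sparse but disconnected
W_split = np.zeros((4, 4))
W_split[0, 1] = W_split[1, 0] = 1.0
W_split[2, 3] = W_split[3, 2] = 1.0
```

Because the eigenvalue is a concave function of the Laplacian entries, a lower bound on it can be imposed as a convex constraint during graph learning, which is the kind of analytical reformulation the paper exploits.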

Place, publisher, year, edition, pages
IEEE, 2017
Series
European Signal Processing Conference, ISSN 2076-1465
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-226274 (URN) 000426986000031 (ISI) 2-s2.0-85041483337 (Scopus ID) 978-0-9928-6267-1 (ISBN)
Conference
25th European Signal Processing Conference (EUSIPCO), Aug 28 - Sep 2, 2017, Greece
Note

QC 20180419

Available from: 2018-04-19 Created: 2018-04-19 Last updated: 2018-04-19. Bibliographically approved
Zaki, A., Venkitaraman, A., Chatterjee, S. & Rasmussen, L. K. (2017). Distributed greedy sparse learning over doubly stochastic networks. In: 25th European Signal Processing Conference, EUSIPCO 2017. Paper presented at the 25th European Signal Processing Conference, EUSIPCO 2017, Kos International Convention Center, Kos, Greece, 28 August 2017 through 2 September 2017 (pp. 361-364). Institute of Electrical and Electronics Engineers (IEEE), 2017
Distributed Greedy Sparse Learning over Doubly Stochastic Networks
2017 (English). In: 25th European Signal Processing Conference, EUSIPCO 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, Vol. 2017, p. 361-364. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we develop a greedy algorithm for sparse learning over a doubly stochastic network. In the proposed algorithm, nodes of the network perform sparse learning by exchanging their individual intermediate variables. The algorithm is iterative in nature. We provide a restricted isometry property (RIP)-based theoretical guarantee both on the performance of the algorithm and the number of iterations required for convergence. Using simulations, we show that the proposed algorithm provides good performance.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
National Category
Computer Engineering
Identifiers
urn:nbn:se:kth:diva-224301 (URN) 10.23919/EUSIPCO.2017.8081229 (DOI) 2-s2.0-85041494941 (Scopus ID) 9780992862671 (ISBN)
Conference
25th European Signal Processing Conference, EUSIPCO 2017, Kos International Convention Center, Kos, Greece, 28 August 2017 through 2 September 2017
Note

QC 20180316

Available from: 2018-03-16 Created: 2018-03-16 Last updated: 2018-03-16. Bibliographically approved
Zaki, A., Chatterjee, S. & Rasmussen, L. K. (2017). Generalized fusion algorithm for compressive sampling reconstruction and RIP-based analysis. Signal Processing, 139, 36-48
Generalized fusion algorithm for compressive sampling reconstruction and RIP-based analysis
2017 (English). In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 139, p. 36-48. Article in journal (Refereed), Published
Abstract [en]

We design a Generalized Fusion Algorithm for Compressive Sampling (gFACS) reconstruction. In the gFACS algorithm, several individual compressive sampling (CS) reconstruction algorithms participate in order to achieve better performance than any individual algorithm alone. The gFACS algorithm is iterative in nature, and its convergence is proved under certain conditions using a Restricted Isometry Property (RIP) based theoretical analysis. The analysis allows any off-the-shelf or new CS reconstruction algorithm to participate with simple modifications while still guaranteeing convergence. We show how to modify some well-known CS reconstruction algorithms for seamless use in the gFACS algorithm. Simulation results show that the proposed gFACS algorithm indeed provides better performance than the participating individual algorithms.
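A minimal sketch of the fusion idea under stated assumptions (not the published gFACS pseudocode): two simple participating algorithms each propose a candidate support, and the fusion step solves least squares over the union of supports before pruning back to sparsity k. Since the union contains each candidate support, the fused least-squares residual can never exceed any participant's:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 30, 80, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
supp = rng.choice(n, k, replace=False)
x_true[supp] = (1.0 + rng.random(k)) * rng.choice([-1.0, 1.0], k)
y = A @ x_true

def top_k(v, k):
    return np.argsort(np.abs(v))[-k:]

def ls_on(S):  # least squares restricted to a candidate support S
    coef = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
    return coef, np.linalg.norm(y - A[:, S] @ coef)

# participant 1: one-step matched filter (thresholded correlation)
s1 = top_k(A.T @ y, k)

# participant 2: iterative hard thresholding with a safe step size
mu = 1.0 / np.linalg.norm(A, ord=2) ** 2
x = np.zeros(n)
for _ in range(50):
    x = x + mu * A.T @ (y - A @ x)
    pruned = np.zeros(n)
    pruned[top_k(x, k)] = x[top_k(x, k)]
    x = pruned
s2 = top_k(x, k)

# fusion: least squares over the union of candidate supports, then prune to k
union = np.union1d(s1, s2)
z, r_union = ls_on(union)
final_supp = union[top_k(z, k)]
coef, _ = ls_on(final_supp)
x_hat = np.zeros(n)
x_hat[final_supp] = coef
```

The monotonicity of the fused residual over the union is what the paper's RIP analysis builds on to guarantee convergence of the iterated fusion.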

Place, publisher, year, edition, pages
Elsevier, 2017
Keywords
Compressive sampling, Greedy algorithm, RIP analysis, Fusion strategy
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-208710 (URN) 10.1016/j.sigpro.2017.03.021 (DOI) 000402214200004 (ISI) 2-s2.0-85017439915 (Scopus ID)
Note

QC 2017-06-12

Available from: 2017-06-12 Created: 2017-06-12 Last updated: 2017-06-15. Bibliographically approved
Li, K., Sundin, M., Rojas, C., Chatterjee, S. & Jansson, M. (2016). Alternating strategies with internal ADMM for low-rank matrix reconstruction. Signal Processing, 121, 153-159
Alternating strategies with internal ADMM for low-rank matrix reconstruction
2016 (English). In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 121, p. 153-159. Article in journal (Refereed), Published
Abstract [en]

This paper focuses on the problem of reconstructing low-rank matrices from underdetermined measurements using alternating optimization strategies. We combine an alternating least-squares estimation strategy with ideas from the alternating direction method of multipliers (ADMM) to recover low-rank matrices with linearly parameterized structures, such as Hankel matrices. ADMM helps improve the estimate in each iteration because it can incorporate information about the direction of estimates obtained in previous iterations. We show that merging these two alternating strategies leads to better performance and lower computation time than the existing alternating least squares (ALS) strategy. The improved performance is verified via numerical simulations with varying sampling rates and in real applications.
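For reference, the plain ALS baseline the paper improves on can be sketched as follows for matrix completion (the internal ADMM coupling is omitted; the sizes, rank, observation rate, and small ridge term are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n1, n2, r = 20, 15, 2
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # rank-2 truth
mask = rng.random((n1, n2)) < 0.6   # about 60% of entries observed

U = rng.standard_normal((n1, r))
V = rng.standard_normal((n2, r))
lam = 1e-3                          # tiny ridge keeps local LS problems well posed

for _ in range(50):
    # update each row of U from the observed entries in that row
    for i in range(n1):
        idx = mask[i]
        Vi = V[idx]
        U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(r), Vi.T @ M[i, idx])
    # update each row of V symmetrically from the observed entries in that column
    for j in range(n2):
        idx = mask[:, j]
        Uj = U[idx]
        V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(r), Uj.T @ M[idx, j])

M_hat = U @ V.T
```

Each half-step is a closed-form least-squares solve, which is why the paper can slot ADMM dual information into the same alternating skeleton.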

Place, publisher, year, edition, pages
Elsevier, 2016
Keywords
ADMM, Alternating strategies, Least squares, Low-rank matrix reconstruction
National Category
Control Engineering; Signal Processing
Identifiers
urn:nbn:se:kth:diva-180899 (URN) 10.1016/j.sigpro.2015.11.002 (DOI) 000369193600013 (ISI) 2-s2.0-84949761064 (Scopus ID)
Funder
Swedish Research Council, 621-2011-5847
Note

QC 20160202. QC 20160304

Available from: 2016-02-02 Created: 2016-01-25 Last updated: 2017-11-30. Bibliographically approved
Vehkaperä, M., Kabashima, Y. & Chatterjee, S. (2016). Analysis of Regularized LS Reconstruction and Random Matrix Ensembles in Compressed Sensing. IEEE Transactions on Information Theory, 62(4), 2100-2124
Analysis of Regularized LS Reconstruction and Random Matrix Ensembles in Compressed Sensing
2016 (English). In: IEEE Transactions on Information Theory, ISSN 0018-9448, E-ISSN 1557-9654, Vol. 62, no. 4, p. 2100-2124. Article in journal (Refereed), Published
Abstract [en]

The performance of regularized least-squares estimation in noisy compressed sensing is analyzed in the limit when the dimensions of the measurement matrix grow large. The sensing matrix is considered to be from a class of random ensembles that includes as special cases standard Gaussian, row-orthogonal, geometric, and so-called T-orthogonal constructions. Source vectors with non-uniform sparsity are included in the system model. The main emphasis of the analysis is on ℓ1-norm regularization, leading to LASSO estimation, or basis pursuit denoising. Extensions to ℓ2-norm and zero-norm regularization are also briefly discussed. The analysis is carried out using the replica method in conjunction with some novel matrix integration results. Numerical experiments for LASSO are provided to verify the accuracy of the analytical results. The numerical experiments show that for noisy compressed sensing, the standard Gaussian ensemble is a suboptimal choice for the measurement matrix. Orthogonal constructions provide superior performance in all considered scenarios and are easier to implement in practical applications. It is also discovered that for non-uniform sparsity patterns, T-orthogonal matrices can further improve the mean square error of the reconstruction when the noise level is not too high. However, as the additive noise becomes more prominent, the simple row-orthogonal measurement matrix appears to be the best choice among the considered ensembles.
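The row-orthogonal ensemble favored by the analysis is easy to generate in practice: orthonormalize a Gaussian draw with a QR factorization. The sketch below contrasts its conditioning with the standard Gaussian ensemble; this is only a crude conditioning intuition, not a substitute for the paper's replica analysis:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 50, 100

# standard Gaussian ensemble
A_gauss = rng.standard_normal((m, n)) / np.sqrt(n)

# row-orthogonal ensemble: orthonormalize the rows of a Gaussian draw
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))  # Q: n x m, orthonormal columns
A_orth = Q.T                                      # rows of A_orth are orthonormal

cond_gauss = np.linalg.cond(A_gauss)  # > 1 for a Gaussian draw
cond_orth = np.linalg.cond(A_orth)    # exactly 1: all singular values equal
```

With all singular values equal, the row-orthogonal matrix does not amplify measurement noise unevenly across directions, which is one intuition for its robustness at higher noise levels.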

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Compressed sensing, eigenvalues of random matrices, compressed sensing matrices, noisy linear measurements, ℓ1 minimization
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-185615 (URN) 10.1109/TIT.2016.2525824 (DOI) 000372744300039 (ISI) 2-s2.0-84963759120 (Scopus ID)
Funder
Swedish Research Council, 621-2011-1024
Note

QC 20160428

Available from: 2016-04-28 Created: 2016-04-25 Last updated: 2018-01-10. Bibliographically approved
Fotedar, G., Aditya Gaonkar, P., Chatterjee, S. & Ghosh, P. K. (2016). Automatic recognition of social roles using long term role transitions in small group interactions. In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Paper presented at the 17th Annual Conference of the International Speech Communication Association, INTERSPEECH 2016, 8 September 2016 through 16 September 2016 (pp. 2065-2069).
Automatic recognition of social roles using long term role transitions in small group interactions
2016 (English). In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2016, p. 2065-2069. Conference paper, Published paper (Refereed)
Abstract [en]

Recognition of social roles in small group interactions is challenging because of disfluent speech, frequent overlaps between speakers, short speaker turns, and the need for reliable data annotation. In this work, we consider the problem of recognizing four roles, namely Gatekeeper, Protagonist, Neutral, and Supporter, in small group interactions in the AMI corpus. In general, the Gatekeeper and Protagonist roles occur less frequently than Neutral and Supporter. We exploit role transitions across segments in a meeting by incorporating role transition probabilities and formulating role recognition as a decoding problem over the sequence of segments in an interaction. Experiments are performed in a five-fold cross-validation setup using acoustic, lexical, and structural features, with precision, recall, and F-score as the performance metrics. The results reveal that, when role transition information is used, precision averaged across all folds and feature combinations improves by 13.64% for Gatekeeper and 12.75% for Protagonist. This in turn improves the F-score for Gatekeeper by 6.58%, while the F-scores for the remaining roles do not change significantly.
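The decoding step can be sketched as standard Viterbi dynamic programming over the segment sequence. The role set matches the paper, but the per-segment scores and the sticky transition matrix below are invented for illustration (they are not AMI statistics):

```python
import numpy as np

roles = ["Gatekeeper", "Protagonist", "Neutral", "Supporter"]

def viterbi(log_emit, log_trans, log_prior):
    """Most likely role sequence over segments, given per-segment role scores
    and role-transition log probabilities (standard dynamic programming)."""
    T, R = log_emit.shape
    score = log_prior + log_emit[0]
    back = np.zeros((T, R), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans        # cand[i, j]: come from i, move to j
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(R)] + log_emit[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):                # backtrack the best sequence
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# hypothetical per-segment classifier log-likelihoods for three segments
log_emit = np.log(np.array([[0.70, 0.10, 0.10, 0.10],
                            [0.25, 0.25, 0.25, 0.25],
                            [0.10, 0.10, 0.10, 0.70]]))
sticky = np.full((4, 4), 0.1) + np.eye(4) * 0.6  # roles tend to persist
decoded = [roles[r] for r in viterbi(log_emit, np.log(sticky), np.zeros(4))]
```

With uniform transitions the decoder reduces to a per-segment argmax; the learned transition probabilities are what let the sequence context lift precision for the rare Gatekeeper and Protagonist roles.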

Keywords
Dynamic programming, Small group interaction, Social computing, Social roles, Reconfigurable hardware, Speech communication, Speech processing, Automatic recognition, Feature combination, Group interaction, Performance metrics, Structural feature, Transition probabilities, Speech recognition
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-202007 (URN) 10.21437/Interspeech.2016-202 (DOI) 000409394401115 (ISI) 2-s2.0-84994365873 (Scopus ID)
Conference
17th Annual Conference of the International Speech Communication Association, INTERSPEECH 2016, 8 September 2016 through 16 September 2016
Note

QC 20170224

Available from: 2017-02-24 Created: 2017-02-24 Last updated: 2018-01-13. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-2638-6047
