Jansson, Magnus, Professor (ORCID iD: orcid.org/0000-0002-6855-5868)
Publications (10 of 149)
Owrang, A., Bresler, Y. & Jansson, M. (2020). Model selection with covariance matching based non-negative lasso. Signal Processing, 170, Article ID 107431.
2020 (English). In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 170, article id 107431. Article in journal (Refereed). Published.
Abstract [en]

We consider the problem of model selection for high-dimensional linear regressions in the context of support recovery with multiple measurement vectors available. Here, we assume that the regression coefficient vectors have a common support and the elements of the additive noise vector are potentially correlated. Accordingly, to estimate the support, we propose a non-negative Lasso estimator that is based on covariance matching techniques. We provide deterministic conditions under which the support estimate of our method is guaranteed to match the true support. Further, we use the extended Fisher information criterion to select the tuning parameter in our non-negative Lasso. We also prove that the extended Fisher information criterion can find the true support with probability one as the number of rows in the design matrix grows to infinity. The numerical simulations confirm that our support estimate is asymptotically consistent. Finally, the simulations also show that the proposed method is robust to high correlation between columns of the design matrix.
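As a rough illustration of the non-negative Lasso machinery this abstract builds on (a generic projected-gradient solver, not the paper's covariance-matching formulation; all problem sizes and data below are made up), a minimal sketch:

```python
import numpy as np

def nonneg_lasso(A, y, lam, n_iter=500):
    """Projected-gradient solver for min_{x >= 0} 0.5*||y - Ax||^2 + lam*sum(x)."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam           # gradient of smooth part plus l1 term
        x = np.maximum(0.0, x - eta * grad)      # project onto the nonnegative orthant
    return x

# Toy support-recovery example with a sparse nonnegative ground truth
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.5, 2.0]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = nonneg_lasso(A, y, lam=0.1)
support = np.where(x_hat > 0.1)[0]
```

With low noise and a well-conditioned design, the thresholded estimate recovers the true support {2, 7}; the paper's contribution lies in the covariance-matching criterion and the tuning-parameter selection, which this sketch omits.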

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Covariance matching, Extended Bayesian information criterion, Generalized least squares, High-dimensional inference, Model selection, Non-negative lasso, Regularization, Sparse multiple measurement vector model
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-267783 (URN); 10.1016/j.sigpro.2019.107431 (DOI); 000515206600030 (); 2-s2.0-85077691312 (Scopus ID)
Note

QC 20200304

Available from: 2020-03-04. Created: 2020-03-04. Last updated: 2020-03-16. Bibliographically approved.
Owrang, A. & Jansson, M. (2018). A Model Selection Criterion for High-Dimensional Linear Regression. IEEE Transactions on Signal Processing, 66(13), 3436-3446
2018 (English). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 66, no 13, p. 3436-3446. Article in journal (Refereed). Published.
Abstract [en]

Statistical model selection is a major challenge when the number of available measurements is much smaller than the dimension of the parameter space. We study the problem of model selection in the context of subset selection for high-dimensional linear regressions. Accordingly, we propose a new model selection criterion based on the Fisher information that selects a parsimonious model from all combinatorial models up to some maximum level of sparsity. We analyze the performance of our criterion as the number of measurements grows to infinity, as well as when the noise variance tends to zero. In each case, we prove that the proposed criterion selects the true model with probability approaching one. Additionally, we devise a computationally affordable algorithm to conduct model selection with the proposed criterion in practice. Interestingly, as a side product, our algorithm can provide the ideal regularization parameter for the Lasso estimator such that Lasso selects the true variables. Finally, numerical simulations are included to support our theoretical findings.

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
Model selection, high-dimensional inference, subset selection, Bayesian information criterion, Lasso, sparse estimation, regularization
National Category
Control Engineering; Probability Theory and Statistics
Identifiers
urn:nbn:se:kth:diva-231694 (URN); 10.1109/TSP.2018.2821628 (DOI); 000435193800003 (); 2-s2.0-85044784662 (Scopus ID)
Note

QC 20180824

Available from: 2018-08-24. Created: 2018-08-24. Last updated: 2019-08-20. Bibliographically approved.
Ali Khan, N., Ali, S. & Jansson, M. (2018). Direction of arrival estimation using adaptive directional time-frequency distributions. Multidimensional systems and signal processing, 29(2), 503-521
2018 (English). In: Multidimensional Systems and Signal Processing, ISSN 0923-6082, E-ISSN 1573-0824, Vol. 29, no 2, p. 503-521. Article in journal (Refereed). Published.
Abstract [en]

Time-frequency distributions (TFDs) allow direction of arrival (DOA) estimation algorithms to be used in scenarios where the total number of sources exceeds the number of sensors. The performance of such time-frequency (t-f) based DOA estimation algorithms depends on the resolution of the underlying TFD, as a higher resolution TFD leads to better separation of sources in the t-f domain. This paper presents a novel DOA estimation algorithm that uses the adaptive directional t-f distribution (ADTFD) for the analysis of closely spaced signal components. The ADTFD optimizes the direction of the kernel at each point in the t-f domain to obtain a clear t-f representation, which is then exploited for DOA estimation. Moreover, the proposed methodology can also be applied to DOA estimation of sparse signals. Experimental results indicate that the proposed DOA algorithm based on the ADTFD outperforms other fixed and adaptive kernel based DOA algorithms.
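For readers unfamiliar with the subspace DOA machinery the abstract mentions (MUSIC appears among the keywords), here is a sketch of classical MUSIC on a uniform linear array; the ADTFD preprocessing that is the paper's actual contribution is omitted, and the array geometry, angles, and noise level below are illustrative assumptions:

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg):
    """Classical MUSIC pseudospectrum for a uniform linear array with
    half-wavelength spacing. X: sensors x snapshots data matrix."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
    _, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = eigvecs[:, : m - n_sources]             # noise-subspace eigenvectors
    spec = np.empty(len(angles_deg))
    for i, th in enumerate(np.deg2rad(angles_deg)):
        a = np.exp(1j * np.pi * np.arange(m) * np.sin(th))   # steering vector
        spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return spec

# Two uncorrelated sources at -20 and 30 degrees, 8-sensor ULA, 200 snapshots
rng = np.random.default_rng(2)
m, N = 8, 200
doas_true = np.deg2rad([-20.0, 30.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(doas_true)))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
grid = np.arange(-90.0, 90.5, 0.5)
spec = music_spectrum(X, n_sources=2, angles_deg=grid)
# pick the two strongest local maxima of the pseudospectrum as DOA estimates
locmax = [i for i in range(1, len(grid) - 1) if spec[i - 1] < spec[i] > spec[i + 1]]
est = sorted(grid[i] for i in sorted(locmax, key=lambda i: spec[i])[-2:])
```

The t-f based methods in the paper effectively build covariance matrices from selected t-f points before a step like the above, which is what lets them handle more sources than sensors.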

Place, publisher, year, edition, pages
Springer-Verlag New York, 2018
Keywords
Adaptive directional Time-frequency distribution, Direction of arrival estimation, High resolution TFDs, Instantaneous frequency estimation, MUSIC, Algorithms, Frequency estimation, Robustness (control systems), Source separation, Time-frequency distributions, Direction of arrival
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-197217 (URN); 10.1007/s11045-016-0435-y (DOI); 000427294900004 (); 2-s2.0-84975231543 (Scopus ID)
Note

QC 20180405

Available from: 2016-12-05. Created: 2016-11-30. Last updated: 2018-04-05. Bibliographically approved.
Hou, J., Liu, T., Wahlberg, B. & Jansson, M. (2018). Subspace Hammerstein Model Identification under Periodic Disturbance. Paper presented at 18th IFAC Symposium on System Identification SYSID 2018 (pp. 335-340). Elsevier B.V., 51(15)
2018 (English). Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, a subspace identification method is proposed for Hammerstein systems under periodic disturbance. By using the linear superposition principle to decompose the periodic disturbance response from the deterministic system response, an orthogonal projection is established to eliminate the disturbance effect. The unknown disturbance period can be estimated by defining an objective function of output prediction error for minimization. Correspondingly, a singular value decomposition (SVD) based algorithm is given to estimate the observability matrix and the lower triangular block-Toeplitz matrix. The state matrices A and C are subsequently retrieved from the estimated observability matrix via a shift-invariant algorithm, while the input matrix B and the nonlinear input function parameters are retrieved from the estimated lower triangular block-Toeplitz matrix by an SVD approach. Consistent estimation of the observability matrix and the lower triangular block-Toeplitz matrix is analyzed. An illustrative example is shown to demonstrate the effectiveness of the proposed identification method. 

Place, publisher, year, edition, pages
Elsevier B.V., 2018
Keywords
Consistent estimation, Hammerstein system, Periodic disturbance, Subspace identification, Identification (control systems), Nonlinear systems, Observability, Block Toeplitz matrices, Linear superposition principles, Output prediction errors, Periodic disturbances, Subspace identification methods, Singular value decomposition
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-247499 (URN); 10.1016/j.ifacol.2018.09.157 (DOI); 000446599200058 (); 2-s2.0-85054358180 (Scopus ID)
Conference
18th IFAC Symposium on System Identification SYSID 2018
Note

QC 20190403

Available from: 2019-04-03. Created: 2019-04-03. Last updated: 2019-05-20. Bibliographically approved.
Sundin, M., Venkitaraman, A., Jansson, M. & Chatterjee, S. (2017). A Connectedness Constraint for Learning Sparse Graphs. In: 2017 25th European Signal Processing Conference (EUSIPCO). Paper presented at 25th European Signal Processing Conference (EUSIPCO), Aug 28 - Sep 02, 2017, Greece (pp. 151-155). IEEE
2017 (English). In: 2017 25th European Signal Processing Conference (EUSIPCO), IEEE, 2017, p. 151-155. Conference paper, Published paper (Refereed).
Abstract [en]

Graphs are naturally sparse objects used to study many problems involving networks, for example, distributed learning and graph signal processing. In some cases, the graph is not given but must be learned from the problem and the available data. Often it is desirable to learn sparse graphs. However, making a graph highly sparse can split it into several disconnected components, leading to several separate networks. The main difficulty is that connectedness is usually treated as a combinatorial property, making it hard to enforce in, e.g., convex optimization problems. In this article, we show how connectedness of undirected graphs can be formulated as an analytical property and enforced as a convex constraint. In particular, we show how the constraint relates to the distributed consensus problem and to graph Laplacian learning. Using simulated and real data, we perform experiments to learn sparse and connected graphs from data.
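The analytical view of connectedness mentioned in the abstract rests on a standard spectral fact: an undirected graph is connected if and only if the second-smallest eigenvalue of its Laplacian (the algebraic connectivity) is strictly positive. A minimal check of that fact, on two tiny made-up graphs (this illustrates the underlying property, not the paper's convex constraint itself):

```python
import numpy as np

def algebraic_connectivity(W):
    """Second-smallest eigenvalue of the graph Laplacian L = D - W for a
    symmetric weight matrix W. Positive iff the graph is connected."""
    L = np.diag(W.sum(axis=1)) - W
    return np.sort(np.linalg.eigvalsh(L))[1]

# Connected path graph on 4 nodes: 0-1-2-3
W_path = np.array([[0, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [0, 0, 1, 0]], dtype=float)
# Disconnected graph: two separate edges 0-1 and 2-3
W_split = np.array([[0, 1, 0, 0],
                    [1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
lam2_conn = algebraic_connectivity(W_path)    # 2 - sqrt(2), strictly positive
lam2_disc = algebraic_connectivity(W_split)   # zero: two components
```

Lower-bounding this eigenvalue away from zero is one way connectedness becomes an analytical, convex-friendly condition, which is the spirit of the constraint studied in the paper.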

Place, publisher, year, edition, pages
IEEE, 2017
Series
European Signal Processing Conference, ISSN 2076-1465
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-226274 (URN); 10.23919/EUSIPCO.2017.8081187 (DOI); 000426986000031 (); 2-s2.0-85041483337 (Scopus ID); 978-0-9928-6267-1 (ISBN)
Conference
25th European Signal Processing Conference (EUSIPCO), Aug 28 - Sep 02, 2017, Greece
Note

QC 20180419

Available from: 2018-04-19. Created: 2018-04-19. Last updated: 2020-03-05. Bibliographically approved.
Zachariah, D., Stoica, P. & Jansson, M. (2017). Comments on "Enhanced PUMA for Direction-of-Arrival Estimation and Its Performance Analysis". IEEE Transactions on Signal Processing, 65(22), 6113-6114
2017 (English). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 65, no 22, p. 6113-6114. Article in journal, Editorial material (Other academic). Published.
Abstract [en]

We show that the recently proposed (enhanced) principal-singular-vector utilization for modal analysis (PUMA) estimator for array processing [C. Qian, L. Huang, N. Sidiropoulos, and H. C. So, "Enhanced PUMA for direction-of-arrival estimation and its performance analysis," IEEE Trans. Signal Process., vol. 64, no. 16, pp. 4127-4137, Aug. 2016], minimizes the same criterion function as the well-established method of direction estimation (MODE) estimator.

Place, publisher, year, edition, pages
IEEE, 2017
Keywords
Array signal processing, direction-of-arrival estimation
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-215431 (URN); 10.1109/TSP.2017.2742982 (DOI); 000411680100021 (); 2-s2.0-85028514157 (Scopus ID)
Note

QC 20171020

Available from: 2017-10-20. Created: 2017-10-20. Last updated: 2017-10-20. Bibliographically approved.
Li, K., Sundin, M., Rojas, C., Chatterjee, S. & Jansson, M. (2016). Alternating strategies with internal ADMM for low-rank matrix reconstruction. Signal Processing, 121, 153-159
2016 (English). In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 121, p. 153-159. Article in journal (Refereed). Published.
Abstract [en]

This paper focuses on the problem of reconstructing low-rank matrices from underdetermined measurements using alternating optimization strategies. We combine an alternating least-squares based estimation strategy with ideas from the alternating direction method of multipliers (ADMM) to recover low-rank matrices with linearly parameterized structures, such as Hankel matrices. The use of ADMM helps to improve the estimate in each iteration because it incorporates information about the direction of estimates obtained in previous iterations. We show that merging these two alternating strategies leads to better performance and shorter running time than the existing alternating least squares (ALS) strategy. The improved performance is verified via numerical simulations with varying sampling rates and in real applications.
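For context, here is a sketch of the plain ALS baseline that the paper improves upon: a low-rank matrix is recovered from partially observed entries by alternately solving regularized least-squares problems for the two factors. The ADMM augmentation of each sub-step (the paper's contribution) is omitted, and the problem sizes, rank, and regularization weight are illustrative assumptions:

```python
import numpy as np

def als_completion(Y, mask, rank, lam=0.1, n_iter=50):
    """Alternating least squares for M ~ U V^T from entries where mask == 1.
    Plain ALS baseline; the paper runs ADMM inside each sub-step."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    R = lam * np.eye(rank)                       # ridge term keeps sub-problems solvable
    for _ in range(n_iter):
        for i in range(m):                       # update row i of U from its observed entries
            idx = mask[i] == 1
            U[i] = np.linalg.solve(V[idx].T @ V[idx] + R, V[idx].T @ Y[i, idx])
        for j in range(n):                       # update row j of V symmetrically
            idx = mask[:, j] == 1
            V[j] = np.linalg.solve(U[idx].T @ U[idx] + R, U[idx].T @ Y[idx, j])
    return U @ V.T

rng = np.random.default_rng(3)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))   # rank-2 ground truth
mask = (rng.random((30, 30)) < 0.6).astype(int)                   # ~60% of entries observed
M_hat = als_completion(M * mask, mask, rank=2)
rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

Each sub-step is a small ridge regression, which is what makes ALS cheap per iteration; the paper's point is that injecting ADMM direction information into these sub-steps improves accuracy and run time.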

Place, publisher, year, edition, pages
Elsevier, 2016
Keywords
ADMM, Alternating strategies, Least squares, Low-rank matrix reconstruction
National Category
Control Engineering; Signal Processing
Identifiers
urn:nbn:se:kth:diva-180899 (URN); 10.1016/j.sigpro.2015.11.002 (DOI); 000369193600013 (); 2-s2.0-84949761064 (Scopus ID)
Funder
Swedish Research Council, 621-2011-5847
Note

QC 20160202. QC 20160304

Available from: 2016-02-02. Created: 2016-01-25. Last updated: 2017-11-30. Bibliographically approved.
Gholami, M. R., Jansson, M., Strom, E. G. & Sayed, A. H. (2016). Diffusion Estimation Over Cooperative Multi-Agent Networks With Missing Data. IEEE Transactions on Signal and Information Processing over Networks, 2(3), 276-289
2016 (English). In: IEEE Transactions on Signal and Information Processing over Networks, ISSN 2373-776X, Vol. 2, no 3, p. 276-289. Article in journal (Refereed). Published.
Abstract [en]

In many fields, and especially in the medical and social sciences and in recommender systems, data are gathered through clinical studies or targeted surveys. Participants are generally reluctant to respond to all questions in a survey or they may lack information to respond adequately to some questions. The data collected from these studies tend to lead to linear regression models where the regression vectors are only known partially: some of their entries are either missing completely or replaced randomly by noisy values. In this work, assuming missing positions are replaced by noisy values, we examine how a connected network of agents, with each one of them subjected to a stream of data with incomplete regression information, can cooperate with each other through local interactions to estimate the underlying model parameters in the presence of missing data. We explain how to adjust the distributed diffusion strategy through (de)regularization in order to eliminate the bias introduced by the incomplete model. We also propose a technique to recursively estimate the (de)regularization parameter and examine the performance of the resulting strategy. We illustrate the results by considering two applications: one dealing with a mental health survey and the other dealing with a household consumption survey.
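The distributed diffusion strategy the abstract adjusts is, in its basic form, an adapt-then-combine (ATC) scheme: each agent takes a local LMS step on its own data stream, then averages estimates with its neighbors. A minimal sketch of that baseline follows; the missing-data (de)regularization that is the paper's contribution is omitted, and the network, data, and step size are invented for illustration:

```python
import numpy as np

def diffusion_lms(regressors, outputs, C, mu=0.05, n_iter=400):
    """Adapt-then-combine (ATC) diffusion LMS over a network of K agents.
    regressors[k]: (T, d) data stream of agent k; outputs[k]: (T,) responses;
    C: K x K combination matrix whose rows sum to one."""
    K, d = len(regressors), regressors[0].shape[1]
    w = np.zeros((K, d))
    for t in range(n_iter):
        psi = np.empty_like(w)
        for k in range(K):
            u = regressors[k][t % len(outputs[k])]
            y = outputs[k][t % len(outputs[k])]
            psi[k] = w[k] + mu * u * (y - u @ w[k])   # local LMS adaptation step
        w = C @ psi                                   # combine neighbors' estimates
    return w

rng = np.random.default_rng(4)
w_true = np.array([1.0, -2.0, 0.5])
K, T = 4, 400
regs = [rng.standard_normal((T, 3)) for _ in range(K)]
outs = [r @ w_true + 0.05 * rng.standard_normal(T) for r in regs]
C = np.full((K, K), 1.0 / K)            # fully connected network, uniform weights
W = diffusion_lms(regs, outs, C)
err = np.linalg.norm(W - w_true, axis=1).max()
```

When regressor entries are missing or noisy, plain updates like the above become biased; the paper's (de)regularization modifies the adaptation step to remove that bias.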

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Missing data, linear regression, mean-square-error, regularization, distributed estimation, diffusion strategy
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-194280 (URN); 10.1109/TSIPN.2016.2570679 (DOI); 000384248200004 (); 2-s2.0-85040123059 (Scopus ID)
Note

QC 20161024

Available from: 2016-10-24. Created: 2016-10-21. Last updated: 2020-03-09. Bibliographically approved.
Malek-Mohammadi, M., Koochakzadeh, A., Babaie-Zadeh, M., Jansson, M. & Rojas, C. R. (2016). Successive Concave Sparsity Approximation for Compressed Sensing. IEEE Transactions on Signal Processing, 64(21), 5657-5671
2016 (English). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 64, no 21, p. 5657-5671. Article in journal (Refereed). Published.
Abstract [en]

In this paper, based on a successively accuracy-increasing approximation of the l(0) norm, we propose a new algorithm for recovery of sparse vectors from underdetermined measurements. The approximations are realized with a certain class of concave functions that aggressively induce sparsity and their closeness to the l(0) norm can be controlled. We prove that the series of the approximations asymptotically coincides with the l(1) and l(0) norms when the approximation accuracy changes from the worst fitting to the best fitting. When measurements are noise-free, an optimization scheme is proposed that leads to a number of weighted l(1) minimization programs, whereas, in the presence of noise, we propose two iterative thresholding methods that are computationally appealing. A convergence guarantee for the iterative thresholding method is provided, and, for a particular function in the class of the approximating functions, we derive the closed-form thresholding operator. We further present some theoretical analyses via the restricted isometry, null space, and spherical section properties. Our extensive numerical simulations indicate that the proposed algorithm closely follows the performance of the oracle estimator for a range of sparsity levels wider than those of the state-of-the-art algorithms.
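As a baseline for the iterative thresholding methods the abstract mentions, here is plain ISTA for the ℓ1-regularized problem, the loosest member of the spectrum between ℓ1 and ℓ0; the paper's concave surrogates replace the soft-thresholding operator with closed-form variants this sketch does not implement, and the data below are invented:

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding (ISTA) for min 0.5*||y - Ax||^2 + lam*||x||_1."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2            # step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - eta * A.T @ (A @ x - y)              # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 25))
x_true = np.zeros(25)
x_true[[3, 11, 19]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, y, lam=0.2)
support = np.where(np.abs(x_hat) > 0.1)[0]
```

Replacing the soft threshold with the thresholding operator of a concave sparsity surrogate, and tightening the surrogate over the iterations, is the essence of the successive approximation scheme studied in the paper.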

Keywords
Compressed sensing (CS), Iterative thresholding, Nonconvex optimization, Oracle estimator, The LASSO estimator
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-194254 (URN); 10.1109/TSP.2016.2585096 (DOI); 000384291000016 (); 2-s2.0-84990954599 (Scopus ID)
Note

QC 20161025

Available from: 2016-10-25. Created: 2016-10-21. Last updated: 2017-11-29. Bibliographically approved.
Sundin, M., Chatterjee, S. & Jansson, M. (2015). Bayesian learning for robust principal component analysis. In: 2015 23rd European Signal Processing Conference, EUSIPCO 2015. Paper presented at 23rd European Signal Processing Conference, EUSIPCO 2015; Nice Congress Center, Nice, France; 31 August 2015 through 4 September 2015 (pp. 2361-2365). IEEE
2015 (English). In: 2015 23rd European Signal Processing Conference, EUSIPCO 2015, IEEE, 2015, p. 2361-2365. Conference paper, Published paper (Refereed).
Abstract [en]

We develop a Bayesian learning method for robust principal component analysis where the main task is to estimate a low-rank matrix from noisy and outlier contaminated measurements. To promote low-rank, we use a structured Gaussian prior that induces correlations among column vectors as well as row vectors of the matrix under estimation. In our method, the noise and outliers are modeled by a combined noise model. The method is evaluated and compared to other methods using synthetic data as well as data from the MovieLens 100K dataset. Comparisons show that the method empirically provides a significant performance improvement over existing methods.

Place, publisher, year, edition, pages
IEEE, 2015
Keywords
Robust principal component analysis, matrix completion, Bayesian learning
National Category
Signal Processing
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-185726 (URN); 10.1109/EUSIPCO.2015.7362807 (DOI); 000377943800474 (); 2-s2.0-84963956159 (Scopus ID)
Conference
23rd European Signal Processing Conference, EUSIPCO 2015; Nice Congress Center, Nice, France; 31 August 2015 through 4 September 2015
Funder
Swedish Research Council, 621-2011-5847
Note

QC 20160426

Available from: 2016-04-25. Created: 2016-04-25. Last updated: 2016-07-15. Bibliographically approved.