1 - 4 of 4
  • 1.
    Sundin, Martin
    KTH, School of Electrical Engineering (EES), Signal Processing. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Bayesian methods for sparse and low-rank matrix problems, 2016. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Many scientific and engineering problems require us to process measurements and data in order to extract information. Since we base decisions on information, it is important to design accurate and efficient processing algorithms. This is often done by modeling the signal of interest and the noise in the problem. One type of modeling is Compressed Sensing, where the signal has a sparse or low-rank representation. In this thesis we study different approaches to designing algorithms for sparse and low-rank problems.

    Greedy methods are fast methods for sparse problems which iteratively detect and estimate the non-zero components. By modeling the detection problem as an array processing problem and a Bayesian filtering problem, we improve the detection accuracy. Bayesian methods approximate the sparsity by probability distributions which are iteratively modified. We show one approach to making the Relevance Vector Machine, a Bayesian method, robust against sparse noise.

    Bayesian methods for low-rank matrix estimation typically use probability distributions which depend only on the singular values, or a factorization approach. Here we introduce a new method, the Relevance Singular Vector Machine, which uses precision matrices with prior distributions to promote low rank. The method is also applied to the robust Principal Component Analysis (PCA) problem, where a low-rank matrix is contaminated by sparse noise.

    In many estimation problems, there exist theoretical lower bounds on how well an algorithm can perform. When the performance of an algorithm matches a lower bound, we know that the algorithm has optimal performance and that the lower bound is tight. When no algorithm matches a lower bound, there is room for better algorithms and/or tighter bounds. In this thesis we derive lower bounds for three different Bayesian low-rank matrix models.

    In some problems, only the amplitudes of the measurements are recorded. Despite being non-linear, some such problems can be transformed into linear problems. Earlier works have shown how sparsity can be utilized in this setting; here we show how low rank can be used.

    In some situations, the number of measurements and/or the number of parameters is very large. Such Big Data problems require us to design new algorithms. We show how the Basis Pursuit algorithm can be modified for problems with a very large number of parameters.
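The Basis Pursuit problem referred to above can be sketched as a small linear program. A minimal sketch, assuming a random Gaussian measurement setup; the split x = u - v with u, v >= 0 is the standard LP reformulation, not the thesis's large-scale modification:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 25, 50, 3                        # measurements, parameters, non-zeros
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# Basis Pursuit: min ||x||_1  subject to  A x = b,
# rewritten as an LP via x = u - v with u, v >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_hat - x_true))      # small when recovery succeeds
```

With enough measurements relative to the sparsity level, the l1 minimizer typically coincides with the true sparse vector in this noiseless setting.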

  • 2.
    Sundin, Martin
    et al.
    KTH, School of Electrical Engineering (EES), Signal Processing. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Chatterjee, Saikat
    KTH, School of Electrical Engineering (EES), Signal Processing. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering (EES), Communication Theory.
    Jansson, Magnus
    KTH, School of Electrical Engineering (EES), Signal Processing. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Bayesian Cramer-Rao bounds for factorized model based low rank matrix reconstruction, 2016. In: 2016 24th European Signal Processing Conference (EUSIPCO), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 1227-1231. Conference paper (Refereed)
    Abstract [en]

    The low-rank matrix reconstruction (LRMR) problem considers estimation (or reconstruction) of an underlying low-rank matrix from under-sampled linear measurements. A low-rank matrix can be represented using a factorized model. In this article, we derive Bayesian Cramer-Rao bounds for LRMR where a factorized model is used. We first show a general informative bound, and then derive several Bayesian Cramer-Rao bounds for different scenarios. Throughout, we consider the low-rank matrix to be reconstructed as a random matrix, and treat its model hyper-parameters in three cases: deterministic known, deterministic unknown, and random. Finally, we compare the bounds with existing practical algorithms through numerical simulations.

  • 3.
    Sundin, Martin
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering (EES).
    Chatterjee, Saikat
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering (EES).
    Jansson, Magnus
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering (EES).
    Combined modeling of sparse and dense noise improves Bayesian RVM, 2014. In: 2014 Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), IEEE, 2014, p. 1841-1845. Conference paper (Refereed)
    Abstract [en]

    Using a Bayesian approach, we consider the problem of recovering sparse signals under additive sparse and dense noise. Typically, sparse noise models outliers, impulse bursts or data loss. To handle sparse noise, existing methods simultaneously estimate the sparse noise and the sparse signal of interest. For estimating the sparse signal without estimating the sparse noise, we construct a Relevance Vector Machine (RVM). In this RVM, sparse noise and ever-present dense noise are treated through a combined noise model. Through simulations, we show the efficiency of the new RVM for three applications: kernel regression, housing price prediction and compressed sensing.
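The "simultaneously estimate sparse noise and sparse signal" baseline that the abstract contrasts against can be sketched as a lasso on the stacked system [A, I][x; s] ~ y, solved here with plain iterative soft thresholding (ISTA). The dimensions, regularization weight, and iteration count are illustrative assumptions; the paper's RVM itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 40, 80, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
s = np.zeros(m)
s[rng.choice(m, size=3, replace=False)] = 5.0    # sparse noise: few large outliers
y = A @ x + s + 0.01 * rng.standard_normal(m)    # plus dense Gaussian noise

# Joint sparse estimation: min 0.5||y - B z||^2 + lam ||z||_1 with B = [A, I],
# so z stacks the sparse signal x and the sparse noise s.
B = np.hstack([A, np.eye(m)])
lam = 0.05
step = 1.0 / np.linalg.norm(B, 2) ** 2           # 1/L for the smooth quadratic part
z = np.zeros(n + m)
for _ in range(500):
    z = z - step * (B.T @ (B @ z - y))           # gradient step on the quadratic
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
x_hat, s_hat = z[:n], z[n:]
```

The RVM alternative described in the abstract instead absorbs s into the noise model, so only x is estimated explicitly.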

  • 4.
    Sundin, Martin
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Venkitaraman, Arun
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Jansson, Magnus
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Chatterjee, Saikat
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    A Connectedness Constraint for Learning Sparse Graphs, 2017. In: 2017 25th European Signal Processing Conference (EUSIPCO), IEEE, 2017, p. 151-155. Conference paper (Refereed)
    Abstract [en]

    Graphs are naturally sparse objects that are used to study many problems involving networks, for example, distributed learning and graph signal processing. In some cases, the graph is not given, but must be learned from the problem and available data. Often it is desirable to learn sparse graphs. However, making a graph highly sparse can split it into several disconnected components, leading to several separate networks. The main difficulty is that connectedness is often treated as a combinatorial property, making it hard to enforce in, e.g., convex optimization problems. In this article, we show how connectedness of undirected graphs can be formulated as an analytical property and enforced as a convex constraint. In particular, we show how the constraint relates to the distributed consensus problem and graph Laplacian learning. Using simulated and real data, we perform experiments to learn sparse and connected graphs from data.
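The analytical view of connectedness mentioned above is commonly expressed through the graph Laplacian: an undirected graph is connected if and only if the second-smallest Laplacian eigenvalue (the algebraic connectivity) is strictly positive. A minimal check of that standard spectral-graph fact, not the paper's specific convex constraint:

```python
import numpy as np

def laplacian(W):
    """Combinatorial Laplacian L = D - W of a symmetric weighted adjacency matrix."""
    return np.diag(W.sum(axis=1)) - W

def is_connected(W, tol=1e-9):
    # Connected iff the second-smallest Laplacian eigenvalue is positive;
    # keeping lambda_2(L) bounded away from zero is a convex condition on W.
    eig = np.linalg.eigvalsh(laplacian(W))
    return bool(eig[1] > tol)

path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)   # connected path graph on 4 nodes
split = np.array([[0, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)  # two disconnected 2-node components
print(is_connected(path), is_connected(split))  # True False
```

Because lambda_2 is a concave function of the (symmetric) weight matrix, a lower bound on it can be imposed as a convex constraint in graph learning problems.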
