Mattila, Robert
Publications (7 of 7)
Ramírez-Chavarría, R. G., Quintana-Carapia, G., Müller, M. C., Mattila, R., Matatagui, D. & Sánchez-Pérez, C. (2018). Bioimpedance Parameter Estimation using Fast Spectral Measurements and Regularization. IFAC-PapersOnLine, 51(15), 521-526
2018 (English). In: IFAC-PapersOnLine, E-ISSN 2405-8963, Vol. 51, no. 15, p. 521-526. Article in journal (Refereed). Published
Abstract [en]

This work proposes an alternative framework for parametric bioimpedance estimation as a powerful tool to characterize biological media. We model the bioimpedance as an electrical network of parallel RC circuits, and transform the frequency-domain estimation problem into a time-constant-domain estimation problem by means of the distribution of relaxation times (DRT) method. The Fredholm integral equation of the first kind is employed to pose the problem in a regularized least squares (RLS) form. We validate the proposed methodology by numerical simulations for a synthetic biological electrical circuit, using a multisine signal in the frequency range of 1 kHz to 853 kHz and considering an errors-in-variables (EIV) problem. Results show that the proposed method outperforms state-of-the-art techniques for spectral bioimpedance analysis. We also illustrate its potential in terms of accurate spectral measurements and precise data interpretation, for further use in biological applications.
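The pipeline in the abstract (a network of parallel RC circuits, transformed to the time-constant domain and solved by regularized least squares) can be sketched as follows. The frequency grid, time-constant grid, and the simple Tikhonov penalty are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frequency grid matching the abstract's 1 kHz - 853 kHz multisine band
freqs = np.logspace(3, np.log10(853e3), 40)
omega = 2 * np.pi * freqs
taus = np.logspace(-7, -3, 30)          # candidate relaxation time constants

# Kernel of parallel-RC elements: Z(w) = sum_k g_k / (1 + j*w*tau_k)
K = 1.0 / (1.0 + 1j * np.outer(omega, taus))

# Synthetic "biological" impedance: two RC branches plus measurement noise
g_true = np.zeros(len(taus))
g_true[8], g_true[20] = 100.0, 50.0
z = K @ g_true + 0.01 * rng.standard_normal(len(omega))

# Tikhonov-regularized least squares on stacked real/imaginary parts
A = np.vstack([K.real, K.imag])
b = np.concatenate([z.real, z.imag])
lam = 1e-3
g_hat = np.linalg.solve(A.T @ A + lam * np.eye(len(taus)), A.T @ b)
```

Stacking real and imaginary parts keeps the problem real-valued; the regularization term stabilizes the inversion, since neighboring columns of the RC kernel are strongly correlated.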

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Biomedical Systems, Frequency Measurements, Impedance Spectroscopy, Parameter Estimation, Regression Algorithm, Spectrum Analysis
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-246551 (URN), 10.1016/j.ifacol.2018.09.198 (DOI), 2-s2.0-85054370558 (Scopus ID)
Note

QC 20190403

Available from: 2019-04-03. Created: 2019-04-03. Last updated: 2019-04-03. Bibliographically approved
Mattila, R., Rojas, C. R., Krishnamurthy, V. & Wahlberg, B. (2018). Inverse Filtering for Linear Gaussian State-Space Models. In: 2018 IEEE Conference on Decision and Control (CDC). Paper presented at 57th IEEE Conference on Decision and Control, CDC 2018; Fontainebleau, Miami Beach, Miami; United States; 17 December 2018 through 19 December 2018 (pp. 5556-5561). Institute of Electrical and Electronics Engineers (IEEE), Article ID 8619013.
2018 (English). In: 2018 IEEE Conference on Decision and Control (CDC), Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 5556-5561, article id 8619013. Conference paper, Published paper (Refereed)
Abstract [en]

This paper considers inverse filtering problems for linear Gaussian state-space systems. We consider three problems of increasing generality in which the aim is to reconstruct the measurements and/or certain unknown sensor parameters, such as the observation likelihood, given posteriors (i.e., the sample path of mean and covariance). The paper is motivated by applications where one wishes to calibrate a Bayesian estimator based on remote observations of the posterior estimates, e.g., to determine how accurate an adversary's sensors are. We propose inverse filtering algorithms and evaluate their robustness with respect to noise (e.g., measurement or quantization errors) in numerical simulations.
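The first problem (reconstructing the measurements from the posterior sequence, with the model known) has a closed-form inverse in the scalar case, since the Kalman update is affine in the measurement. A minimal sketch, with illustrative parameter values that are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
a, c, q, r = 0.9, 1.0, 0.1, 0.2          # scalar state-space model
T = 50

# simulate x_{t+1} = a x_t + w_t,  y_t = c x_t + v_t
x = 0.0
ys = np.empty(T)
for t in range(T):
    x = a * x + np.sqrt(q) * rng.standard_normal()
    ys[t] = c * x + np.sqrt(r) * rng.standard_normal()

# forward Kalman filter, logging the posterior (mean, variance) pairs
means, covs = np.empty(T), np.empty(T)
m, Pv = 0.0, 1.0
for t in range(T):
    mp, Pp = a * m, a * a * Pv + q        # predict
    Kg = Pp * c / (c * c * Pp + r)        # Kalman gain
    m = mp + Kg * (ys[t] - c * mp)        # measurement update
    Pv = (1 - Kg * c) * Pp
    means[t], covs[t] = m, Pv

# inverse filter: m = mp + Kg (y - c mp)  =>  y = c mp + (m - mp) / Kg
y_rec = np.empty(T)
m_prev = 0.0
for t in range(T):
    mp = a * m_prev
    Pp = a * a * (covs[t - 1] if t > 0 else 1.0) + q
    Kg = Pp * c / (c * c * Pp + r)
    y_rec[t] = c * mp + (means[t] - mp) / Kg
    m_prev = means[t]
```

Because the covariance recursion does not depend on the data, the gain sequence can be replayed exactly, and the measurements are recovered without error in the noise-free-posterior case.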

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Series
IEEE Conference on Decision and Control, ISSN 0743-1546
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-245114 (URN), 10.1109/CDC.2018.8619013 (DOI), 000458114805022 (ISI), 2-s2.0-85062188998 (Scopus ID), 978-1-5386-1395-5 (ISBN)
Conference
57th IEEE Conference on Decision and Control, CDC 2018; Fontainebleau, Miami Beach, Miami; United States; 17 December 2018 through 19 December 2018
Note

QC 20190306

Available from: 2019-03-06. Created: 2019-03-06. Last updated: 2019-03-06. Bibliographically approved
Mattila, R., Rojas, C. R., Krishnamurthy, V. & Wahlberg, B. (2017). Asymptotically Efficient Identification of Known-Sensor Hidden Markov Models. IEEE Signal Processing Letters, 24(12), 1813-1817
2017 (English). In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 24, no. 12, p. 1813-1817. Article in journal (Refereed). Published
Abstract [en]

We consider estimating the transition probability matrix of a finite-state, finite-observation-alphabet hidden Markov model with known observation probabilities. We propose a two-step algorithm: a method of moments estimator (formulated as a convex optimization problem) followed by a single iteration of a Newton-Raphson maximum-likelihood estimator. The two-fold contribution of this letter is, first, to theoretically show that the proposed estimator is consistent and asymptotically efficient, and, second, to numerically show that the method is computationally less demanding than conventional methods, in particular for large datasets.
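The moment step can be sketched for a toy two-state model: with known observation probabilities B and stationary distribution pi, the pairwise observation moments satisfy M = B' diag(pi) P B, which can be inverted for the transition matrix P. The direct matrix inversion below (in place of the letter's convex program and Newton step) and the simulation size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.8, 0.2], [0.3, 0.7]])   # true transition matrix (unknown)
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # known observation probabilities

# stationary distribution of the chain (left eigenvector of P for eigenvalue 1)
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# simulate a long observation sequence
T = 200_000
x = 0
ys = np.empty(T, dtype=np.int64)
for t in range(T):
    ys[t] = int(rng.random() > B[x, 0])   # emit symbol 1 w.p. B[x, 1]
    x = int(rng.random() > P[x, 0])       # jump to state 1 w.p. P[x, 1]

# empirical pairwise moments M[i, j] ~ Pr(y_t = i, y_{t+1} = j)
M = np.zeros((2, 2))
np.add.at(M, (ys[:-1], ys[1:]), 1.0)
M /= T - 1

# invert the moment relation M = B^T diag(pi) P B, then project to a
# stochastic matrix by clipping and row normalization
P_hat = np.linalg.inv(B.T @ np.diag(pi)) @ M @ np.linalg.inv(B)
P_hat = np.clip(P_hat, 0.0, None)
P_hat /= P_hat.sum(axis=1, keepdims=True)
```

Only second-order moments of the observations are touched, which is what makes such estimators cheap for long sequences; the letter's added Newton-Raphson step is what restores asymptotic efficiency.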

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Keywords
Hidden Markov models (HMM), maximum-likelihood (ML), method of moments, system identification
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-217930 (URN), 10.1109/LSP.2017.2759902 (DOI), 000413962800006 (ISI), 2-s2.0-85031786373 (Scopus ID)
Note

QC 20171121

Available from: 2017-11-21. Created: 2017-11-21. Last updated: 2017-11-21. Bibliographically approved
Mattila, R., Rojas, C., Krishnamurthy, V. & Wahlberg, B. (2017). Computing monotone policies for Markov decision processes: a nearly-isotonic penalty approach. IFAC-PapersOnLine, 50(1), 8429-8434
2017 (English). In: IFAC-PapersOnLine, ISSN 2405-8963, Vol. 50, no. 1, p. 8429-8434. Article in journal (Refereed). Published
Abstract [en]

This paper discusses algorithms for solving Markov decision processes (MDPs) that have monotone optimal policies. We propose a two-stage alternating convex optimization scheme that can accelerate the search for an optimal policy by exploiting the monotone property. The first stage is a linear program formulated in terms of the joint state-action probabilities. The second stage is a regularized problem formulated in terms of the conditional probabilities of actions given states. The regularization uses techniques from nearly-isotonic regression. While a variety of iterative methods can be used in the first formulation of the problem, we show in numerical simulations that, in particular, the alternating direction method of multipliers (ADMM) can be significantly accelerated using the regularization step.
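The first-stage linear program over joint state-action probabilities (occupation measures) can be sketched for a small random MDP and cross-checked against value iteration. The instance, discount factor, and the use of scipy's `linprog` are illustrative assumptions; the paper's second, nearly-isotonic stage is omitted here:

```python
import numpy as np
from scipy.optimize import linprog

S, A, gamma = 3, 2, 0.9
rng = np.random.default_rng(3)
Pt = rng.dirichlet(np.ones(S), size=(A, S))   # Pt[a, s] = P(. | s, a)
r = rng.random((S, A))                        # rewards r[s, a]

# LP over mu(s, a): maximize sum mu*r subject to the flow constraints
#   sum_a mu(j, a) - gamma * sum_{s,a} P(j|s,a) mu(s, a) = alpha(j)
Aeq = np.zeros((S, S * A))
for j in range(S):
    for s in range(S):
        for a in range(A):
            Aeq[j, s * A + a] = (j == s) - gamma * Pt[a, s, j]
beq = np.full(S, 1.0 / S)                     # uniform initial distribution

res = linprog(c=-r.reshape(-1), A_eq=Aeq, b_eq=beq, bounds=(0, None))
mu = res.x.reshape(S, A)
policy_lp = mu.argmax(axis=1)                 # policy from occupation measure

# cross-check with value iteration
V = np.zeros(S)
for _ in range(500):
    Q = r + gamma * np.einsum('asj,j->sa', Pt, V)
    V = Q.max(axis=1)
policy_vi = Q.argmax(axis=1)
```

Conditional action probabilities, the variables of the paper's second stage, are obtained by normalizing each row of `mu`; the nearly-isotonic penalty would then push those conditionals toward monotonicity in the state.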

Place, publisher, year, edition, pages
Elsevier, 2017
Keywords
alternating direction method of multipliers (ADMM), isotonic regression, l1-regularization, Markov decision process (MDP), monotone policy, sparsity, stochastic control
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-223069 (URN), 10.1016/j.ifacol.2017.08.1575 (DOI), 000423964900394 (ISI), 2-s2.0-85031809673 (Scopus ID)
Funder
Swedish Research Council, 2016-06079
Note

QC 20180213, Funding Agency: Linnaeus Center ACCESS at KTH 

Available from: 2018-02-13. Created: 2018-02-13. Last updated: 2018-03-05. Bibliographically approved
Mattila, R., Rojas, C. R., Krishnamurthy, V. & Wahlberg, B. (2017). Identification of Hidden Markov Models Using Spectral Learning with Likelihood Maximization. In: 2017 IEEE 56th Annual Conference on Decision and Control, CDC 2017. Paper presented at IEEE 56th Annual Conference on Decision and Control (CDC), DEC 12-15, 2017, Melbourne, Australia (pp. 5859-5864). Institute of Electrical and Electronics Engineers (IEEE)
2017 (English). In: 2017 IEEE 56th Annual Conference on Decision and Control, CDC 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 5859-5864. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we consider identifying a hidden Markov model (HMM) with the purpose of computing estimates of joint and conditional (posterior) probabilities over observation sequences. The classical maximum likelihood estimation algorithm (via the Baum-Welch/expectation-maximization algorithm) has recently been challenged by methods of moments. Such methods employ low-order moments to provide parameter estimates and have several benefits, including consistency and low computational cost. This paper aims to reduce the gap in statistical efficiency that results from restricting to only low-order moments of the training data. In particular, we propose a two-step procedure that combines spectral learning with a single Newton-like iteration for maximum likelihood estimation. We demonstrate an improved statistical performance using the proposed algorithm in numerical simulations.
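The "cheap consistent estimate plus a single Newton iteration" idea can be illustrated outside the HMM setting with a Gamma shape parameter: a method-of-moments estimate seeds one Newton-Raphson step on the profile log-likelihood. The distribution and sample size are illustrative assumptions, not the paper's spectral-learning algorithm:

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(4)
k_true, theta = 3.0, 2.0
x = rng.gamma(k_true, theta, size=50_000)

# step 1: method-of-moments estimate of the shape k
k0 = x.mean() ** 2 / x.var()

# step 2: one Newton-Raphson iteration on the profile log-likelihood,
# whose per-sample score is  log(k) - psi(k) - (log(mean x) - mean(log x))
s = np.log(x.mean()) - np.log(x).mean()
score = np.log(k0) - digamma(k0) - s
hess = 1.0 / k0 - polygamma(1, k0)        # second derivative (negative)
k1 = k0 - score / hess
```

One Newton step from any root-n-consistent starting point is asymptotically equivalent to the full maximum-likelihood estimator, which is the statistical rationale behind the paper's two-step procedure.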

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
IEEE Conference on Decision and Control, ISSN 0743-1546
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-223860 (URN), 10.1109/CDC.2017.8264545 (DOI), 000424696905103 (ISI), 2-s2.0-85046135167 (Scopus ID), 978-1-5090-2873-3 (ISBN)
Conference
IEEE 56th Annual Conference on Decision and Control (CDC), DEC 12-15, 2017, Melbourne, Australia
Funder
Swedish Research Council, 2016-06079
Note

QC 20180306

Available from: 2018-03-06. Created: 2018-03-06. Last updated: 2018-06-01. Bibliographically approved
Mattila, R., Rojas, C. R., Krishnamurthy, V. & Wahlberg, B. (2017). Inverse filtering for hidden Markov models. In: Advances in Neural Information Processing Systems. Paper presented at 31st Annual Conference on Neural Information Processing Systems, NIPS 2017, 4 December 2017 through 9 December 2017 (pp. 4205-4214). Neural Information Processing Systems Foundation, 2017
2017 (English). In: Advances in Neural Information Processing Systems, Neural Information Processing Systems Foundation, 2017, Vol. 2017, p. 4205-4214. Conference paper, Published paper (Refereed)
Abstract [en]

This paper considers a number of related inverse filtering problems for hidden Markov models (HMMs). In particular, given a sequence of state posteriors and the system dynamics: i) estimate the corresponding sequence of observations; ii) estimate the observation likelihoods; and iii) jointly estimate the observation likelihoods and the observation sequence. We show how to avoid a computationally expensive mixed integer linear program (MILP) by exploiting the algebraic structure of the HMM filter using simple linear algebra operations, and provide conditions for when the quantities can be uniquely reconstructed. We also propose a solution to the more general case where the posteriors are noisily observed. Finally, the proposed inverse filtering algorithms are evaluated on real-world polysomnographic data used for automatic sleep segmentation.
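Problem i) admits a direct algebraic sketch: each HMM filter update multiplies the predicted distribution elementwise by a column of the observation matrix B, so the normalized ratio of posterior to prediction identifies that column, and hence the observation. The two-state, three-symbol model below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.7, 0.3], [0.2, 0.8]])       # transition matrix
B = np.array([[0.8, 0.1, 0.1],               # observation likelihoods B[x, y]
              [0.1, 0.3, 0.6]])

# simulate the chain and its observations
T = 100
x = 0
obs = np.empty(T, dtype=np.int64)
for t in range(T):
    obs[t] = rng.choice(3, p=B[x])
    x = rng.choice(2, p=P[x])

# forward HMM filter: pi_t ~ B[:, y_t] * (P^T pi_{t-1}), normalized
pis = np.empty((T, 2))
pi = np.array([0.5, 0.5])
for t in range(T):
    pred = P.T @ pi
    pi = B[:, obs[t]] * pred
    pi /= pi.sum()
    pis[t] = pi

# inverse filter: recover y_t from consecutive posteriors by matching the
# normalized posterior/prediction ratio against normalized columns of B
Bn = B / B.sum(axis=0)
rec = np.empty(T, dtype=np.int64)
pi_prev = np.array([0.5, 0.5])
for t in range(T):
    pred = P.T @ pi_prev
    ratio = pis[t] / pred
    ratio /= ratio.sum()
    rec[t] = np.argmin(np.abs(Bn.T - ratio).sum(axis=1))
    pi_prev = pis[t]
```

Uniqueness of the reconstruction hinges on the normalized columns of B being distinct, a simplified analogue of the identifiability conditions the paper derives.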

Place, publisher, year, edition, pages
Neural information processing systems foundation, 2017
Series
Advances in Neural Information Processing Systems, ISSN 1049-5258 ; 2017
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-228586 (URN), 000452649404027 (ISI), 2-s2.0-85047014120 (Scopus ID)
Conference
31st Annual Conference on Neural Information Processing Systems, NIPS 2017, 4 December 2017 through 9 December 2017
Funder
Swedish Research Council, 2016-06079
Note

QC 20180528

Available from: 2018-05-28. Created: 2018-05-28. Last updated: 2019-01-04. Bibliographically approved
Mattila, R., Siika, A., Roy, J. & Wahlberg, B. (2016). A Markov Decision Process Model to Guide Treatment of Abdominal Aortic Aneurysms. In: 2016 IEEE Conference on Control Applications (CCA). Paper presented at IEEE Conference on Control Applications (CCA), September 19-22, 2016, Buenos Aires, Argentina. IEEE
2016 (English). In: 2016 IEEE Conference on Control Applications (CCA), IEEE, 2016. Conference paper, Published paper (Refereed)
Abstract [en]

An abdominal aortic aneurysm (AAA) is an enlargement of the abdominal aorta which, if left untreated, can progressively widen and may rupture with fatal consequences. In this paper, we determine an optimal treatment policy using Markov decision process modeling. The policy is optimal with respect to the number of quality-adjusted life-years (QALYs) that are expected to be accumulated during the remaining life of a patient. The new policy takes into account factors that are ignored by the current clinical policy (e.g., life expectancy and age-dependent surgical mortality). The resulting optimal policy is structurally different from the current policy. In particular, the policy suggests that young patients with small aneurysms should undergo surgery. The robustness of the policy structure is demonstrated using simulations. A gain in the number of expected QALYs is shown, which indicates a possibility of improved care for patients with AAAs.
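A heavily simplified sketch of the modeling idea: aneurysm sizes as MDP states, watch/operate actions, QALY-like rewards, and value iteration for the optimal policy. All numbers (rupture risks, surgical mortality, discount factor) are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

# states: 0 = small AAA, 1 = medium, 2 = large, 3 = dead (absorbing)
S, gamma = 4, 0.95
p_rupture = np.array([0.0, 0.05, 0.30])   # yearly rupture risk per size
p_grow, p_surg_death = 0.30, 0.05

# transition tensor T[a, s, s'] and QALY-like rewards r[s, a]
T = np.zeros((2, S, S))
r = np.zeros((S, 2))
for s in range(3):
    # action 0: watchful waiting -- may rupture (death) or grow one size
    T[0, s, 3] = p_rupture[s]
    T[0, s, min(s + 1, 2)] += (1 - p_rupture[s]) * p_grow
    T[0, s, s] += (1 - p_rupture[s]) * (1 - p_grow)
    r[s, 0] = 1.0                         # one quality-adjusted year
    # action 1: surgery -- small mortality risk, success resets to state 0
    T[1, s, 3] = p_surg_death
    T[1, s, 0] = 1 - p_surg_death
    r[s, 1] = 0.9                         # reduced quality during surgery year
T[:, 3, 3] = 1.0                          # death is absorbing, zero reward

# value iteration for the QALY-maximizing policy
V = np.zeros(S)
for _ in range(1000):
    Q = r + gamma * np.einsum('asj,j->sa', T, V)
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)[:3]             # decision only matters while alive
```

Even in this toy instance the optimal policy is a threshold in aneurysm size: watch while small, operate once the rupture risk dominates the surgical risk, mirroring the threshold structure discussed in the abstract.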

Place, publisher, year, edition, pages
IEEE, 2016
Series
IEEE International Conference on Control Applications, ISSN 1085-1992
Keywords
Abdominal aortic aneurysm (AAA), biosystems, decision making, Markov decision process (MDP), public health-care, treatment policy
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-197033 (URN), 000386696600057 (ISI), 978-1-5090-0755-4 (ISBN)
Conference
IEEE Conference on Control Applications (CCA), September 19-22, 2016, Buenos Aires, Argentina
Note

QC 20161208

Available from: 2016-12-08. Created: 2016-11-28. Last updated: 2016-12-08. Bibliographically approved