kth.se Publications
Publications (10 of 26)
Mochaourab, R., Venkitaraman, A., Samsten, I., Papapetrou, P. & Rojas, C. R. (2022). Post Hoc Explainability for Time Series Classification: Toward a signal processing perspective. IEEE signal processing magazine (Print), 39(4), 119-129
Post Hoc Explainability for Time Series Classification: Toward a signal processing perspective
2022 (English). In: IEEE signal processing magazine (Print), ISSN 1053-5888, E-ISSN 1558-0792, Vol. 39, no. 4, p. 119-129. Article in journal (Refereed), Published
Abstract [en]

Time series data correspond to observations of phenomena that are recorded over time. Such data are encountered regularly in a wide range of applications, such as speech and music recognition, health monitoring and medical diagnosis, financial analysis, motion tracking, and shape identification, to name a few. With such a diversity of applications and the large variations in their characteristics, time series classification is a complex and challenging task. One of the fundamental steps in the design of time series classifiers is that of defining or constructing the discriminant features that help differentiate between classes. This is typically achieved by designing novel representation techniques that transform the raw time series data to a new data domain, where a classifier, such as one-nearest neighbors or a random forest, is subsequently trained on the transformed data. Recent time series classification approaches have employed deep neural network models that jointly learn a representation of the time series and perform classification. In many of these sophisticated approaches, the discriminant features tend to be complicated to analyze and interpret, given the high degree of nonlinearity.
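The transform-then-classify pipeline summarized above can be sketched in a few lines. The FFT-magnitude representation and the toy sinusoid data below are illustrative assumptions, not choices taken from the article; they merely stand in for "transform to a new domain, then train a simple classifier such as one-nearest neighbors":

```python
import numpy as np

def fft_features(series):
    """Illustrative transformed domain: magnitude spectrum of each series."""
    return np.abs(np.fft.rfft(series, axis=-1))

def one_nn_classify(train_X, train_y, test_X):
    """Assign each test series the label of its nearest training series
    in the transformed domain (one-nearest-neighbor rule)."""
    F_tr, F_te = fft_features(train_X), fft_features(test_X)
    dists = np.linalg.norm(F_te[:, None, :] - F_tr[None, :, :], axis=-1)
    return train_y[np.argmin(dists, axis=1)]
```

Because the discriminant features here are explicit spectral magnitudes, they remain easy to inspect, in contrast to the learned deep representations the article discusses.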

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-315692 (URN), 10.1109/MSP.2022.3155955 (DOI), 000818887300011 (), 2-s2.0-85133840717 (Scopus ID)
Note

QC 20220715

Available from: 2022-07-15. Created: 2022-07-15. Last updated: 2023-06-08. Bibliographically approved.
Winqvist, R., Venkitaraman, A. & Wahlberg, B. (2021). Learning Models of Model Predictive Controllers using Gradient Data. In: IFAC PAPERSONLINE. Paper presented at 19th IFAC Symposium on System Identification (SYSID), JUL 13-16, 2021, Padova, ITALY (pp. 7-12). Elsevier BV, 54(7)
Learning Models of Model Predictive Controllers using Gradient Data
2021 (English). In: IFAC PAPERSONLINE, Elsevier BV, 2021, Vol. 54, no. 7, p. 7-12. Conference paper, Published paper (Refereed)
Abstract [en]

This paper investigates the problem of controller identification given data from a linear quadratic Model Predictive Controller (MPC) with constraints. We propose an approach for learning MPC that explicitly uses gradient information in the training process. This is motivated by the observation that recent differentiable convex optimization MPC solvers can provide both the optimal feedback law from state to control input and the corresponding gradient. As a proof of concept, we apply this approach to explicit MPC (eMPC), for which the feedback law is a piecewise affine function of the state, but the number of pieces grows rapidly with the state dimension. Controller identification can here be used to find a low-complexity functional approximation of the controller. The eMPC is modelled using a Neural Network (NN) with Rectified Linear Units (ReLUs), since such NNs can represent any piecewise affine function. A key motivation is to replace online solvers with neural networks to implement MPC and to simplify the evaluation of the feedback law in larger input dimensions. We also study experimental design and model evaluation in this framework, and propose a hit-and-run sampling algorithm for input design. The proposed algorithms are illustrated and numerically evaluated on a second-order MPC problem.
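As a concrete instance of the representability claim: in one dimension, a constrained linear quadratic feedback law is a saturated affine function, and two ReLU units reproduce it exactly. The gain and saturation bounds below are arbitrary illustration values, not parameters from the paper:

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def saturated_feedback(x, k=-0.5, lo=-1.0, hi=1.0):
    # A 1-D explicit-MPC-style law u = clip(k*x, lo, hi), written as a
    # two-unit ReLU network using clip(v) = lo + relu(v - lo) - relu(v - hi).
    v = k * x
    return lo + relu(v - lo) - relu(v - hi)
```

In higher dimensions the number of affine pieces grows rapidly, which is why the paper fits a ReLU network to the controller instead of enumerating the pieces.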

Place, publisher, year, edition, pages
Elsevier BV, 2021
Keywords
Identification for control, data-driven control, neural networks relevant to control and identification, input and excitation design, model predictive control, modeling for control optimization
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-303782 (URN), 10.1016/j.ifacol.2021.08.326 (DOI), 000696396200003 (), 2-s2.0-85118192164 (Scopus ID)
Conference
19th IFAC Symposium on System Identification (SYSID), JUL 13-16, 2021, Padova, ITALY
Note

QC 20211022

Available from: 2021-10-22. Created: 2021-10-22. Last updated: 2022-06-25. Bibliographically approved.
Venkitaraman, A., Chatterjee, S. & Händel, P. (2020). Gaussian Processes over Graphs. In: 2020 IEEE International Conference on Acoustics Speech and Signal Processing ICASSP. Paper presented at 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020; Barcelona; Spain; 4 May 2020 through 8 May 2020 (pp. 5640-5644). Institute of Electrical and Electronics Engineers (IEEE), Article ID 9053859.
Gaussian Processes over Graphs
2020 (English). In: 2020 IEEE International Conference on Acoustics Speech and Signal Processing ICASSP, Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 5640-5644, article id 9053859. Conference paper, Published paper (Refereed)
Abstract [en]

Kernel Regression over Graphs (KRG) was recently proposed for predicting graph signals in a supervised learning setting, where the inputs are agnostic to the graph. Given the input, the KRG model predicts targets that are smooth graph signals over the given graph, with all signals treated as deterministic. In this work, we develop a stochastic, or Bayesian, variant of KRG. Using priors and likelihood functions, our goal is to systematically derive a predictive distribution for the smooth graph-signal target given the training data and a new input. We show that this naturally results in a Gaussian process formulation, which we call Gaussian Processes over Graphs (GPG). Experiments with real-world datasets show that the performance of GPG is superior to that of a conventional Gaussian process (without the graph structure) for small training data sizes and under noisy training conditions.
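One plausible sketch of a GP over graph signals, consistent with the abstract but not the paper's exact derivation: an input kernel couples training points, while (L + beta*I)^{-1} shapes the covariance across graph nodes so that smooth graph signals are favored. The squared-exponential kernel, the Kronecker prior structure, and all parameter names here are illustrative assumptions:

```python
import numpy as np

def gpg_predict(X_train, Y_train, x_new, L, beta=1.0, ell=1.0, noise=1e-6):
    """Predict the graph-signal output at x_new from n training pairs.
    Y_train is (n, m): one m-node graph signal per training input."""
    n, m = Y_train.shape
    k = lambda a, b: np.exp(-np.sum((a - b) ** 2) / (2 * ell ** 2))
    K = np.array([[k(xi, xj) for xj in X_train] for xi in X_train])
    k_star = np.array([k(x_new, xi) for xi in X_train])
    B = np.linalg.inv(L + beta * np.eye(m))    # graph-smoothness node covariance
    C = np.kron(K, B) + noise * np.eye(n * m)  # covariance of stacked outputs
    c_star = np.kron(k_star, B)                # cross-covariance, shape (m, n*m)
    return c_star @ np.linalg.solve(C, Y_train.reshape(-1))
```

The Kronecker prior is a standard way to couple an input kernel with an output (here, graph) covariance; with negligible noise the predictor interpolates the training outputs.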

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020
Series
International Conference on Acoustics Speech and Signal Processing ICASSP, ISSN 1520-6149
Keywords
Graph signal processing, Gaussian processes, Bayesian estimation, kernel regression, graph-Laplacian
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:kth:diva-292363 (URN), 10.1109/ICASSP40776.2020.9053859 (DOI), 000615970405180 (), 2-s2.0-85089226492 (Scopus ID)
Conference
2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020; Barcelona; Spain; 4 May 2020 through 8 May 2020
Note

QC 20210401

Available from: 2021-04-01. Created: 2021-04-01. Last updated: 2022-06-25. Bibliographically approved.
Javid, A. M., Venkitaraman, A., Skoglund, M. & Chatterjee, S. (2020). High-dimensional neural feature design for layer-wise reduction of training cost. EURASIP Journal on Advances in Signal Processing, 2020(1), Article ID 40.
High-dimensional neural feature design for layer-wise reduction of training cost
2020 (English). In: EURASIP Journal on Advances in Signal Processing, ISSN 1687-6172, E-ISSN 1687-6180, Vol. 2020, no. 1, article id 40. Article in journal (Refereed), Published
Abstract [en]

We design a rectified linear unit (ReLU)-based multilayer neural network by mapping the feature vectors to a higher-dimensional space in every layer. We design the weight matrices in every layer to ensure a reduction of the training cost as the number of layers increases. Linear projection to the target in the higher-dimensional space leads to a lower training cost if a convex cost is minimized. An ℓ2-norm convex constraint is used in the minimization to reduce the generalization error and avoid overfitting. The regularization hyperparameters of the network are derived analytically to guarantee a monotonic decrement of the training cost, thereby eliminating the need for cross-validation to find the regularization hyperparameter in each layer. We show that the proposed architecture is norm-preserving and provides an invertible feature vector, and can therefore be used to reduce the training cost of any other learning method that employs linear projection to estimate the target.
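One reading of the norm-preserving, invertible ReLU feature map mentioned in the abstract: stacking ReLU(z) and ReLU(-z) doubles the dimension while exactly preserving the norm, and z is recovered by subtraction. This is a sketch of that single property only; the paper's weight-matrix design and analytically derived regularization are omitted:

```python
import numpy as np

def lift(z):
    # Norm-preserving lifting: each coordinate lands in exactly one half,
    # so ||lift(z)||^2 = ||z||^2.
    return np.concatenate([np.maximum(z, 0.0), np.maximum(-z, 0.0)])

def unlift(f):
    # Invertibility: ReLU(z) - ReLU(-z) = z.
    half = len(f) // 2
    return f[:half] - f[half:]
```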

Place, publisher, year, edition, pages
Springer Nature, 2020
Keywords
Rectified linear unit, Feature design, Neural network, Convex cost function
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-282234 (URN), 10.1186/s13634-020-00695-2 (DOI), 000567866600001 (), 2-s2.0-85090410372 (Scopus ID)
Note

QC 20201103

Available from: 2020-11-03. Created: 2020-11-03. Last updated: 2022-06-25. Bibliographically approved.
Javid, A. M., Venkitaraman, A., Skoglund, M. & Chatterjee, S. (2020). High-dimensional neural feature using rectified linear unit and random matrix instance. In: 2020 IEEE international conference on acoustics, speech, and signal processing. Paper presented at IEEE International Conference on Acoustics, Speech, and Signal Processing, MAY 04-08, 2020, Barcelona, SPAIN (pp. 4237-4241). Institute of Electrical and Electronics Engineers (IEEE)
High-dimensional neural feature using rectified linear unit and random matrix instance
2020 (English). In: 2020 IEEE international conference on acoustics, speech, and signal processing, Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 4237-4241. Conference paper, Published paper (Refereed)
Abstract [en]

We design a ReLU-based multilayer neural network to generate a rich high-dimensional feature vector. The feature guarantees a monotonically decreasing training cost as the number of layers increases. We design the weight matrix in each layer to extend the feature vectors to a higher-dimensional space while providing a richer representation in the sense of training cost. Linear projection to the target in the higher-dimensional space leads to a lower training cost if a convex cost is minimized. An ℓ2-norm convex constraint is used in the minimization to improve the generalization error and avoid overfitting. The regularization hyperparameters of the network are derived analytically to guarantee a monotonic decrement of the training cost, thereby eliminating the need for cross-validation to find the regularization hyperparameter in each layer.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020
Series
International Conference on Acoustics Speech and Signal Processing ICASSP, ISSN 1520-6149
Keywords
Rectified linear unit, random matrix, convex cost function
National Category
Control Engineering; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-292709 (URN), 10.1109/ICASSP40776.2020.9054736 (DOI), 000615970404097 (), 2-s2.0-85091295366 (Scopus ID)
Conference
IEEE International Conference on Acoustics, Speech, and Signal Processing, MAY 04-08, 2020, Barcelona, SPAIN
Note

QC 20210710

Available from: 2021-04-14. Created: 2021-04-14. Last updated: 2022-06-25. Bibliographically approved.
Venkitaraman, A., Hjalmarsson, H. & Wahlberg, B. (2020). Learning sparse linear dynamic networks in a hyper-parameter free setting. In: IFAC PAPERSONLINE. Paper presented at 21st IFAC World Congress on Automatic Control - Meeting Societal Challenges, JUL 11-17, 2020, ELECTR NETWORK (pp. 82-86). Elsevier, 53(2)
Learning sparse linear dynamic networks in a hyper-parameter free setting
2020 (English). In: IFAC PAPERSONLINE, Elsevier, 2020, Vol. 53, no. 2, p. 82-86. Conference paper, Oral presentation only (Refereed)
Abstract [en]

We address the issue of estimating the topology and dynamics of sparse linear dynamic networks in a hyperparameter-free setting. We propose a method to estimate the network dynamics in a computationally efficient, tuning-free iterative framework known as SPICE (Sparse Iterative Covariance Estimation). Our approach does not assume that the network is undirected and is applicable even with varying noise levels across the modules of the network. We also do not assume any explicit prior knowledge of the network dynamics. Numerical experiments with realistic dynamic networks illustrate the usefulness of our method.

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Dynamic networks
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-298007 (URN), 10.1016/j.ifacol.2020.12.095 (DOI), 000652592500015 (), 2-s2.0-85105102779 (Scopus ID)
Conference
21st IFAC World Congress on Automatic Control - Meeting Societal Challenges, JUL 11-17, 2020, ELECTR NETWORK
Note

QC 20210624

Available from: 2021-06-24. Created: 2021-06-24. Last updated: 2022-06-25. Bibliographically approved.
Venkitaraman, A., Chatterjee, S. & Wahlberg, B. (2020). Recursive Prediction of Graph Signals with Incoming Nodes. In: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing. Paper presented at 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020; Barcelona; Spain; 4 May 2020 through 8 May 2020 (pp. 5565-5569). Institute of Electrical and Electronics Engineers (IEEE), Article ID 9053145.
Recursive Prediction of Graph Signals with Incoming Nodes
2020 (English). In: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 5565-5569, article id 9053145. Conference paper, Published paper (Refereed)
Abstract [en]

Kernel and linear regression have recently been explored for the prediction of graph signals as the output, given arbitrary input signals that are agnostic to the graph. In many real-world problems, the graph expands over time as new nodes are introduced. With this premise in mind, we propose a method to recursively obtain the optimal prediction or regression coefficients for the recently proposed Linear Regression over Graphs (LRG) as the graph expands with incoming nodes. This comes as a natural consequence of the structure of the regression problem, and obviates the need to solve a new regression problem each time a new node is added. Experiments with real-world graph signals show that our approach yields good prediction performance that tends to be close to that obtained from knowing the entire graph a priori.
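The recursive machinery behind such coefficient updates is rank-one recursive least squares; a generic sketch follows, not the paper's LRG-specific update, which additionally carries the graph-regularization structure. Folding in one new observation costs O(d^2) instead of re-solving the full regression:

```python
import numpy as np

def rls_update(P, w, x, y):
    """Sherman-Morrison rank-one update: fold one new pair (x, y) into the
    inverse Gram matrix P and coefficient vector w without re-solving."""
    Px = P @ x
    g = Px / (1.0 + x @ Px)        # gain vector
    w_new = w + g * (y - x @ w)    # correct with the innovation
    P_new = P - np.outer(g, Px)
    return P_new, w_new
```

Initializing P as a large multiple of the identity corresponds to a vanishingly small ridge penalty, so the recursion converges to the batch least-squares solution.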

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020
Series
International Conference on Acoustics Speech and Signal Processing ICASSP, ISSN 1520-6149
Keywords
Linear regression, graph expansion, graph signal processing, recursive least squares, convex optimization
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-292373 (URN), 10.1109/ICASSP40776.2020.9053145 (DOI), 000615970405165 (), 2-s2.0-85089231874 (Scopus ID)
Conference
2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020; Barcelona; Spain; 4 May 2020 through 8 May 2020
Note

QC 20210416

Available from: 2021-04-16. Created: 2021-04-16. Last updated: 2023-04-05. Bibliographically approved.
Venkitaraman, A., Frossard, P. & Chatterjee, S. (2019). KERNEL REGRESSION FOR GRAPH SIGNAL PREDICTION IN PRESENCE OF SPARSE NOISE. In: 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP). Paper presented at 44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), MAY 12-17, 2019, Brighton, ENGLAND (pp. 5426-5430). IEEE
KERNEL REGRESSION FOR GRAPH SIGNAL PREDICTION IN PRESENCE OF SPARSE NOISE
2019 (English). In: 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), IEEE, 2019, p. 5426-5430. Conference paper, Published paper (Refereed)
Abstract [en]

In the presence of sparse noise, we propose kernel regression for predicting output vectors that are smooth over a given graph. Sparse noise models training outputs corrupted either by missing samples or by large perturbations. The sparse noise is handled through appropriate use of an ℓ1-norm alongside an ℓ2-norm in a convex cost function. To optimize the cost function, we propose an iteratively reweighted least-squares (IRLS) approach that is suitable for kernel substitution, or the kernel trick, owing to the availability of a closed-form solution. Simulations using real-world temperature data show the efficacy of our proposed method, mainly for limited-size training datasets.
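A simplified sketch of the IRLS idea, without the graph-Laplacian term of the paper: an ℓ1 data fit (robust to sparse outliers) is approximated by reweighted ℓ2 solves, and each iteration is a closed-form weighted kernel ridge step, which is why the kernel trick applies throughout. The regularization and reweighting constants are illustrative assumptions:

```python
import numpy as np

def irls_robust_kernel_fit(K, y, lam=0.1, iters=30, eps=1e-6):
    """Fit coefficients alpha so that K @ alpha approximates y while
    down-weighting observations with large (outlier) residuals."""
    n = len(y)
    w = np.ones(n)
    alpha = np.zeros(n)
    for _ in range(iters):
        # Closed-form weighted kernel ridge step for the current weights.
        alpha = np.linalg.solve(np.diag(w) @ K + lam * np.eye(n), w * y)
        r = y - K @ alpha                      # residuals
        w = 1.0 / np.maximum(np.abs(r), eps)   # l1-style reweighting
    return alpha
```

Observations hit by sparse noise get large residuals, hence small weights, so they barely influence the next solve; plain kernel ridge, by contrast, is pulled toward the outliers.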

Place, publisher, year, edition, pages
IEEE, 2019
Series
International Conference on Acoustics Speech and Signal Processing ICASSP, ISSN 1520-6149
Keywords
Kernel regression, graph signal processing, Sparse noise, graph-Laplacian, iteratively reweighted least squares
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-261058 (URN), 10.1109/ICASSP.2019.8682979 (DOI), 000482554005132 (), 2-s2.0-85068972923 (Scopus ID)
Conference
44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), MAY 12-17, 2019, Brighton, ENGLAND
Note

QC 20191002

Part of ISBN 978-1-4799-8131-1

Available from: 2019-10-02. Created: 2019-10-02. Last updated: 2024-10-25. Bibliographically approved.
Venkitaraman, A. & Zachariah, D. (2019). Learning Sparse Graphs for Prediction of Multivariate Data Processes. IEEE Signal Processing Letters, 26(3), 495-499
Learning Sparse Graphs for Prediction of Multivariate Data Processes
2019 (English). In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 26, no. 3, p. 495-499. Article in journal (Refereed), Published
Abstract [en]

We address the problem of predicting a multivariate data process using an underlying graph model. We develop a method that learns a sparse partial correlation graph in a tuning-free and computationally efficient manner. Specifically, the graph structure is learned recursively, without the need for cross-validation or parameter tuning, by building upon a hyperparameter-free framework. Our approach does not require the graph to be undirected and also accommodates varying noise levels across different nodes. Experiments using real-world datasets show that the proposed method offers significant performance gains in prediction, in comparison with the graphs frequently associated with these datasets.

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Partial correlation graphs, multivariate process, sparse graphs, prediction, hyperparameter-free
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-245903 (URN), 10.1109/LSP.2019.2896435 (DOI), 000458852100005 (), 2-s2.0-85061748115 (Scopus ID)
Note

QC 20190315

Available from: 2019-03-15. Created: 2019-03-15. Last updated: 2022-06-26. Bibliographically approved.
Venkitaraman, A., Chatterjee, S. & Händel, P. (2019). On Hilbert transform, analytic signal, and modulation analysis for signals over graphs. Signal Processing, 156, 106-115
On Hilbert transform, analytic signal, and modulation analysis for signals over graphs
2019 (English). In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 156, p. 106-115. Article in journal (Refereed), Published
Abstract [en]

We propose a Hilbert transform and analytic signal construction for signals over graphs. This is motivated by the popularity of the Hilbert transform, the analytic signal, and modulation analysis in conventional signal processing, and by the observation that complementary insight is often obtained by viewing conventional signals in the graph setting. Our definitions of the Hilbert transform and analytic signal use a conjugate-symmetry-like property exhibited by the graph Fourier transform (GFT), resulting in a 'one-sided' spectrum for the graph analytic signal. The resulting graph Hilbert transform is shown to possess many interesting mathematical properties and to highlight anomalies or discontinuities in the graph signal, along with the nodes across which they occur. Using the graph analytic signal, we further define amplitude, phase, and frequency modulations for a graph signal. We illustrate the proposed concepts with applications to synthesized and real-world signals. For example, we show that the graph Hilbert transform can indicate the presence of anomalies, and that the graph analytic signal and its associated amplitude and frequency modulations reveal complementary information in speech signals.
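On a cycle graph the GFT reduces to the ordinary DFT, so the construction specializes to the classical discrete analytic signal, sketched below; the general graph case replaces the DFT with the GFT and relies on the paper's conjugate-symmetry argument. This is the standard textbook one-sided-spectrum recipe, not code from the paper:

```python
import numpy as np

def analytic_signal(x):
    """Classical discrete analytic signal via a one-sided DFT spectrum:
    keep DC (and Nyquist for even n), double positive frequencies, zero
    negative frequencies, then invert. real(z) recovers x; |z| is the
    amplitude envelope."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(h * X)
```

For a pure cosine at an integer frequency this yields exactly the complex exponential, so the envelope is constant, which mirrors the amplitude-modulation analysis the paper defines for graph signals.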

Place, publisher, year, edition, pages
Elsevier, 2019
Keywords
Graph signal processing, Analytic signal, Hilbert transform, Demodulation, Anomaly detection
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-240988 (URN), 10.1016/j.sigpro.2018.10.016 (DOI), 000453494200011 (), 2-s2.0-85056192636 (Scopus ID)
Note

QC 20190110

Available from: 2019-01-10. Created: 2019-01-10. Last updated: 2022-06-26. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-1285-8947