KTH Publications (kth.se)

Publications (10 of 164)
Qin, H., Garbulowski, M., Sonnhammer, E. L. L. & Chatterjee, S. (2025). BiGSM: Bayesian inference of gene regulatory network via sparse modelling. Bioinformatics, 41(6), Article ID btaf318.
BiGSM: Bayesian inference of gene regulatory network via sparse modelling
2025 (English). In: Bioinformatics, ISSN 1367-4803, E-ISSN 1367-4811, Vol. 41, no. 6, article id btaf318. Article in journal (Refereed). Published.
Abstract [en]

Motivation: Inference of gene regulatory networks (GRNs) is challenging due to the inherent sparsity of the GRN matrix and noisy expression data, which together lead to a high risk of false positive and false negative predictions. To address this, it is essential to exploit the sparsity of the GRN matrix and to develop a robust method capable of handling varying levels of noise in the data. Moreover, most existing GRN inference methods produce only fixed point estimates, which lack the flexibility and informativeness needed for comprehensive network analysis. In contrast, a Bayesian approach that yields closed-form posterior distributions allows probabilistic link selection, offering insight into the statistical confidence of each possible link. It is therefore important to develop a Bayesian GRN inference method and to benchmark it rigorously against state-of-the-art methods.

Results: We propose Bayesian inference of GRN via Sparse Modelling (BiGSM). BiGSM exploits the sparsity of the GRN matrix and infers posterior distributions of GRN links from noisy expression data using maximum-likelihood-based learning. We thoroughly benchmark BiGSM on biological and simulated datasets, including GeneNetWeaver, GeneSPIDER, and GRNbenchmark, evaluating accuracy and robustness across varying noise levels and data models. Using point-estimate-based performance measures, BiGSM delivers the best overall performance in comparison with several state-of-the-art methods, including GENIE3, LASSO, LSCON, and Zscore. In addition, BiGSM is the only method among the competitors that provides posteriors for the GRN weights, helping to quantify confidence across predictions.

Availability and implementation: Code implemented in MATLAB and Python is available on GitHub at https://github.com/SachLab/BiGSM and archived at Zenodo.
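
As a rough illustration of the sparse-Bayesian idea behind methods of this kind, the sketch below fits a per-gene sparse Bayesian regression with per-link precisions updated by evidence maximization. It is a generic ARD-style example on toy data, not the BiGSM implementation (see the GitHub repository above for that).

    import numpy as np

    def sparse_bayes_regression(X, y, n_iter=50, noise_var=0.1):
        """Posterior mean/covariance of w in y = X w + noise, with per-weight
        precisions alpha updated by evidence maximization (MacKay-style)."""
        n, p = X.shape
        alpha = np.ones(p)               # per-link precision; large alpha prunes a link
        beta = 1.0 / noise_var           # noise precision
        for _ in range(n_iter):
            Sigma = np.linalg.inv(np.diag(alpha) + beta * X.T @ X)
            mu = beta * Sigma @ X.T @ y  # Gaussian posterior mean
            gamma = 1.0 - alpha * np.diag(Sigma)
            alpha = gamma / (mu ** 2 + 1e-12)
        return mu, Sigma

    # Toy usage: expression matrix X (samples x genes), one target gene profile y.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    w_true = np.zeros(20)
    w_true[[2, 7]] = [1.5, -2.0]         # a sparse row of a toy "GRN"
    y = X @ w_true + 0.1 * rng.standard_normal(100)
    mu, Sigma = sparse_bayes_regression(X, y)
    print(np.round(mu, 2))               # posterior means concentrate on links 2 and 7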

Place, publisher, year, edition, pages
Oxford University Press (OUP), 2025
National Category
Telecommunications
Identifiers
urn:nbn:se:kth:diva-367876 (URN), 10.1093/bioinformatics/btaf318 (DOI), 001505003500001 (), 40484997 (PubMedID), 2-s2.0-105008280617 (Scopus ID)
Note

QC 20250922

Available from: 2025-08-04. Created: 2025-08-04. Last updated: 2025-09-22. Bibliographically approved.
Yang, C., Chatterjee, S. & Oechtering, T. J. (2025). Enhancing Network Calibration for Low-Cost Gas Sensor Networks Through Adaptive Similarity Search. Paper presented at 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2025, Hyderabad, India, April 6-11, 2025. Institute of Electrical and Electronics Engineers (IEEE)
Enhancing Network Calibration for Low-Cost Gas Sensor Networks Through Adaptive Similarity Search
2025 (English). Conference paper, Published paper (Refereed).
Abstract [en]

IoT-based low-cost gas sensor networks are important for environmental monitoring, but they require regular calibration to achieve acceptable sensing performance. A critical step in network calibration is identifying periods when sensors in the network are sensing the same phenomenon. In this paper, we propose an adaptive similarity-search-based method for detecting these periods of similarity under the assumption of linear sensor drift. Our method leverages the relationships between neighboring sensors' measurements to enhance calibration accuracy, outperforming the commonly used Pearson correlation approach. We validate the effectiveness of our method through experiments on both synthetic data and real-world CO2 sensor networks, demonstrating improved calibration accuracy and reliability.
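
For intuition only, here is a minimal, hypothetical sketch of window-based co-sensing detection followed by linear (gain and offset) drift correction between a reference sensor and a low-cost sensor. The window length, threshold, and toy data are assumptions; this is not the adaptive similarity-search method of the paper.

    import numpy as np

    def cosensing_windows(ref, raw, win=60, max_rmse=2.0):
        """Window start indices where raw is well explained by an affine map of
        ref, i.e. both sensors likely observe the same phenomenon."""
        starts = []
        for s in range(0, len(ref) - win, win):
            r, x = ref[s:s + win], raw[s:s + win]
            A = np.column_stack([r, np.ones(win)])
            coef, *_ = np.linalg.lstsq(A, x, rcond=None)
            if np.sqrt(np.mean((A @ coef - x) ** 2)) < max_rmse:
                starts.append(s)
        return starts

    def calibrate(ref, raw, starts, win=60):
        """Fit one global gain/offset on the detected windows and invert it."""
        idx = np.concatenate([np.arange(s, s + win) for s in starts])
        A = np.column_stack([ref[idx], np.ones(len(idx))])
        (gain, offset), *_ = np.linalg.lstsq(A, raw[idx], rcond=None)
        return (raw - offset) / gain

    # Toy usage: a drifting CO2 sensor observed next to a reference sensor.
    rng = np.random.default_rng(1)
    truth = 400 + 50 * np.sin(np.linspace(0, 20, 2000))
    ref = truth + rng.normal(0, 1, 2000)
    raw = 1.2 * truth + 30 + rng.normal(0, 1, 2000)    # gain and offset drift
    windows = cosensing_windows(ref, raw)
    print(np.round(calibrate(ref, raw, windows)[:5]))  # close to truth[:5]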

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
environmental monitoring, IoT, Low-cost gas sensor networks, network calibration, Pearson correlation, sensor drift, similarity search
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-368909 (URN), 10.1109/ICASSP49660.2025.10888054 (DOI), 2-s2.0-105009700295 (Scopus ID)
Conference
2025 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2025, Hyderabad, India, April 6-11, 2025
Note

Part of ISBN 9798350368741

QC 20250822

Available from: 2025-08-22. Created: 2025-08-22. Last updated: 2025-08-22. Bibliographically approved.
Grau-Jurado, P., Mostafaei, S., Xu, H., Mo, M., Petek, B., Kalar, I., . . . Garcia-Ptacek, S. (2025). Medications and cognitive decline in Alzheimer's disease: Cohort cluster analysis of 15,428 patients. Journal of Alzheimer's Disease, 103(3), 931-940
Medications and cognitive decline in Alzheimer's disease: Cohort cluster analysis of 15,428 patients
2025 (English). In: Journal of Alzheimer's Disease, ISSN 1387-2877, E-ISSN 1875-8908, Vol. 103, no. 3, p. 931-940. Article in journal (Refereed). Published.
Abstract [en]

BACKGROUND: Medications for comorbid conditions may affect cognition in Alzheimer's disease (AD). OBJECTIVE: To explore the association between common medications and cognition, measured with the Mini-Mental State Examination (MMSE). METHODS: Cohort study including persons with AD from the Swedish Registry for Cognitive/Dementia Disorders (SveDem). Medications were included if they were used by ≥5% of patients (26 individual drugs). Each follow-up was analyzed independently by performing 100 Monte Carlo simulations of two steps each: 1) k-means clustering of patients according to MMSE at follow-up and its decline since the previous measure, and 2) identification of medications presenting statistically significant differences in the proportion of users across clusters. RESULTS: 15,428 patients (60.38% women) were studied. Four clusters were identified. Medications associated with the best cognition cluster (relative to the worst) were atorvastatin (point estimate 1.44, 95% confidence interval [1.15-1.83], first follow-up), simvastatin (1.41 [1.11-1.78], second follow-up), warfarin (1.56 [1.22-2.01], first follow-up), zopiclone (1.35 [1.15-1.58]), and metformin (2.08 [1.35-3.33], second follow-up). Oxazepam (0.60 [0.50-0.73], first follow-up), paracetamol (0.83 [0.73-0.95], first follow-up), cyanocobalamin, felodipine, and furosemide were associated with the worst cluster. Cholinesterase inhibitors were associated with the best cognition clusters, whereas memantine appeared in the worst cognition clusters, consistent with its indication in moderate to severe dementia. CONCLUSIONS: We performed unsupervised clustering to classify patients based on their current cognition and cognitive decline since previous testing. Atorvastatin, simvastatin, warfarin, metformin, and zopiclone presented positive and statistically significant associations with cognition, while oxazepam, cyanocobalamin, felodipine, furosemide, and paracetamol were associated with the worst cluster.
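
As a schematic of this analysis style only (not the study's code), the sketch below clusters synthetic patients on MMSE and MMSE decline with k-means and tests whether the proportion of users of one hypothetical drug differs across clusters; all variable names and data are invented.

    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(2)
    n = 500
    mmse = rng.normal(22, 4, n)                 # MMSE at follow-up
    decline = rng.normal(-1.5, 1.0, n)          # change since previous measure
    uses_drug = rng.random(n) < 0.3             # hypothetical user flag

    X = np.column_stack([mmse, decline])
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

    # Contingency table: clusters x (users, non-users) of the drug
    table = np.array([[np.sum(uses_drug[labels == k]),
                       np.sum(~uses_drug[labels == k])] for k in range(4)])
    chi2, p, _, _ = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.3f}")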

Place, publisher, year, edition, pages
SAGE Publications, 2025
Keywords
Alzheimer's disease, cohort study, comorbidity, metformin, Mini-Mental State Examination, oxazepam, pharmacological treatments, statins, warfarin, zopiclone
National Category
Geriatrics Neurosciences Pharmacology and Toxicology
Identifiers
urn:nbn:se:kth:diva-361181 (URN), 10.1177/13872877241307870 (DOI), 001432424700022 (), 39772858 (PubMedID), 2-s2.0-85219757719 (Scopus ID)
Note

QC 20250312

Available from: 2025-03-12. Created: 2025-03-12. Last updated: 2025-03-17. Bibliographically approved.
Cumlin, F., Liang, X., Ungureanu, V., Reddy, C. K. A., Schüldt, C. & Chatterjee, S. (2025). Multivariate Probabilistic Assessment of Speech Quality. In: Interspeech 2025. Paper presented at 26th Interspeech Conference 2025, Rotterdam, the Netherlands, August 17-21, 2025 (pp. 5413-5417). International Speech Communication Association
Multivariate Probabilistic Assessment of Speech Quality
2025 (English). In: Interspeech 2025, International Speech Communication Association, 2025, p. 5413-5417. Conference paper, Published paper (Refereed).
Abstract [en]

The mean opinion score (MOS) is a standard metric for assessing speech quality, but its singular focus fails to identify specific distortions when low scores are observed. The NISQA dataset addresses this limitation by providing ratings across four additional dimensions alongside MOS: noisiness, coloration, discontinuity, and loudness. In this paper, we extend previously explored univariate MOS estimation to a multivariate framework by modeling these dimensions jointly with a multivariate Gaussian distribution. Our approach uses a Cholesky decomposition to predict covariances without imposing restrictive assumptions, and it extends probabilistic affine transformations to the multivariate context. Experimental results show that our model performs on par with state-of-the-art methods in point estimation, while uniquely providing uncertainty and correlation estimates across speech quality dimensions. This enables better diagnosis of poor speech quality and informs targeted improvements.
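
A hedged sketch of this kind of output head: a layer predicting the mean vector and a Cholesky factor of the covariance over the five quality dimensions, trained by negative log-likelihood. The feature dimension and layer shapes are assumptions, and this is not the paper's code.

    import torch
    import torch.nn as nn

    class MultivariateGaussianHead(nn.Module):
        """Joint Gaussian over MOS, noisiness, coloration, discontinuity, loudness."""
        def __init__(self, feat_dim, out_dim=5):
            super().__init__()
            self.out_dim = out_dim
            self.mean = nn.Linear(feat_dim, out_dim)
            # out_dim diagonal entries plus out_dim*(out_dim-1)//2 off-diagonal ones
            self.chol = nn.Linear(feat_dim, out_dim * (out_dim + 1) // 2)

        def forward(self, h):
            d = self.out_dim
            mu = self.mean(h)
            params = self.chol(h)
            diag = nn.functional.softplus(params[:, :d]) + 1e-4   # positive diagonal
            L = torch.diag_embed(diag)
            rows, cols = torch.tril_indices(d, d, offset=-1)
            L[:, rows, cols] = params[:, d:]                      # unconstrained off-diagonal
            return torch.distributions.MultivariateNormal(mu, scale_tril=L)

    # Usage: negative log-likelihood on (pooled features, five-dimensional ratings).
    head = MultivariateGaussianHead(feat_dim=128)
    h = torch.randn(8, 128)
    y = torch.rand(8, 5) * 4 + 1              # ratings in [1, 5]
    loss = -head(h).log_prob(y).mean()
    loss.backward()
    print(float(loss))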

Place, publisher, year, edition, pages
International Speech Communication Association, 2025
Keywords
Bayesian learning, deep neural networks, non-intrusive, speech quality assessment, uncertainty estimation
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-372799 (URN), 10.21437/Interspeech.2025-518 (DOI), 2-s2.0-105020073616 (Scopus ID)
Conference
26th Interspeech Conference 2025, Rotterdam, the Netherlands, August 17-21, 2025
Note

QC 20251113

Available from: 2025-11-13. Created: 2025-11-13. Last updated: 2025-11-13. Bibliographically approved.
Ghosh, A., Honore, A. & Chatterjee, S. (2024). DANSE: Data-Driven Non-Linear State Estimation of Model-Free Process in Unsupervised Learning Setup. IEEE Transactions on Signal Processing, 72, 1824-1838
DANSE: Data-Driven Non-Linear State Estimation of Model-Free Process in Unsupervised Learning Setup
2024 (English). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 72, p. 1824-1838. Article in journal (Refereed). Published.
Abstract [en]

We address the tasks of Bayesian state estimation and forecasting for a model-free process in an unsupervised learning setup. For a model-free process, we do not have any a priori knowledge of the process dynamics. In this article, we propose DANSE, a Data-driven Nonlinear State Estimation method. DANSE provides a closed-form posterior of the state of the model-free process, given linear measurements of the state. In addition, it provides a closed-form posterior for forecasting. A data-driven recurrent neural network (RNN) is used in DANSE to provide the parameters of a prior of the state. The prior depends on the past measurements as input, and we then find the closed-form posterior of the state using the current measurement as input. The data-driven RNN captures the underlying nonlinear dynamics of the model-free process. The training of DANSE, mainly learning the parameters of the RNN, is executed using an unsupervised learning approach. In unsupervised learning, we have access to a training dataset comprising only a set of (noisy) measurement data trajectories, but no access to the state trajectories. Therefore, DANSE does not have access to state information in the training data and cannot use supervised learning. Using simulated linear and nonlinear process models (Lorenz attractor and Chen attractor), we evaluate the unsupervised-learning-based DANSE. We show that the proposed DANSE, without knowledge of the process model and without supervised learning, provides competitive performance against model-driven methods, such as the Kalman filter (KF), extended KF (EKF), unscented KF (UKF), a data-driven deep Markov model (DMM), and a recently proposed hybrid method called KalmanNet. In addition, we show that DANSE works for high-dimensional state estimation.
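
A simplified sketch of this idea under assumed dimensions and a diagonal prior covariance: a GRU maps past measurements to the parameters of a Gaussian prior on the state, and the posterior given the current linear measurement follows in closed form. This is an illustration, not the DANSE implementation.

    import torch
    import torch.nn as nn

    class RnnPrior(nn.Module):
        def __init__(self, meas_dim, state_dim, hidden=32):
            super().__init__()
            self.gru = nn.GRU(meas_dim, hidden, batch_first=True)
            self.mean = nn.Linear(hidden, state_dim)
            self.logvar = nn.Linear(hidden, state_dim)   # diagonal prior covariance

        def forward(self, y_past):
            h, _ = self.gru(y_past)
            return self.mean(h[:, -1]), self.logvar(h[:, -1]).exp()

    def gaussian_posterior(m, pvar, y, H, R):
        """Closed-form posterior of x given prior N(m, diag(pvar)) and y = H x + v."""
        P = torch.diag_embed(pvar)
        S = H @ P @ H.T + R                              # innovation covariance
        K = P @ H.T @ torch.linalg.inv(S)                # gain
        mu = m + (K @ (y - m @ H.T).unsqueeze(-1)).squeeze(-1)
        Sigma = P - K @ H @ P
        return mu, Sigma

    # Toy usage with a 2-D state observed through a 2x2 linear map.
    prior = RnnPrior(meas_dim=2, state_dim=2)
    y_seq = torch.randn(4, 10, 2)                        # batch of measurement trajectories
    m, pvar = prior(y_seq[:, :-1])                       # prior from past measurements
    H, R = torch.eye(2), 0.1 * torch.eye(2)
    mu, Sigma = gaussian_posterior(m, pvar, y_seq[:, -1], H, R)
    print(mu.shape, Sigma.shape)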

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
State estimation, Computational modeling, Training, Bayes methods, Noise measurement, Supervised learning, Unsupervised learning, Bayesian state estimation, forecasting, neural networks, recurrent neural networks
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-346102 (URN), 10.1109/TSP.2024.3383277 (DOI), 001200035500017 (), 2-s2.0-85189517590 (Scopus ID)
Note

QC 20250923

Available from: 2024-05-03. Created: 2024-05-03. Last updated: 2025-09-23. Bibliographically approved.
Honore, A., Siren, H., Vinuesa, R., Chatterjee, S. & Herlenius, E. (2024). Deep recurrent architectures for neonatal sepsis detection from vital signs data. In: Machine Learning Applications in Medicine and Biology (pp. 115-149). Springer Nature
Deep recurrent architectures for neonatal sepsis detection from vital signs data
2024 (English). In: Machine Learning Applications in Medicine and Biology, Springer Nature, 2024, p. 115-149. Chapter in book (Other academic).
Abstract [en]

Preterm birth corresponds to a live birth before 37 weeks of gestation. It occurs in 5-15% of births worldwide and is becoming more common in almost every country. It is the primary cause of infant mortality and morbidity in both developed and developing countries, and it is associated with a 30-50% higher mortality rate among young adult men and women. Late-onset sepsis corresponds to an infection of the bloodstream of a neonate after 72 hours of life. It occurs in 15-25% of very preterm infants, resulting in a 10% mortality rate and a threefold increase in morbidity. Detecting neonatal sepsis early and accurately could thus significantly reduce mortality, morbidity, and the use of antibiotics in premature infants. Predictors based on deep architectures are often designed and evaluated in case-control setups, using data derived from the patient electrocardiogram (ECG) only. In an effort to bridge the gap between clinical and technical knowledge in neonatal sepsis detection research, we provide a detailed background of (1) the population under scrutiny, (2) the computation of features from vital-signs signals, and (3) the supervised learning approach to neonatal sepsis detection. We then discuss a study evaluating deep recurrent architectures in a retrospective cohort study setup for neonatal sepsis detection. Data from different modalities were used, including chest impedance, pulse oximetry, ECG, demographic factors, and routine body weight measurements. The vanilla and long short-term memory (LSTM) recurrent neural network (RNN) architectures were studied, and their performance was compared against logistic regression (LR) for a variety of classification metrics in a leave-one-out cross-validation framework. This study indicates that LSTM-based RNN models trigger fewer alarms than LR on a population of patients not suffering from sepsis. However, the performance in terms of precision and recall remains low, which indicates that further research is required before such models can be implemented in clinical practice.
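
As a generic illustration of the model class discussed above (channel count, window length, and layer sizes are assumptions, not the chapter's setup), a small LSTM binary classifier over windows of multichannel vital-sign features could look like this:

    import torch
    import torch.nn as nn

    class SepsisLSTM(nn.Module):
        def __init__(self, n_channels=4, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                       # x: (batch, time, channels)
            h, _ = self.lstm(x)
            return self.head(h[:, -1]).squeeze(-1)  # sepsis logit per window

    model = SepsisLSTM()
    x = torch.randn(16, 120, 4)                     # e.g. 120 time steps of 4 vital signs
    labels = torch.randint(0, 2, (16,)).float()
    loss = nn.functional.binary_cross_entropy_with_logits(model(x), labels)
    print(float(loss))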

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Neonatal sepsis detection, Preterm birth, Recurrent neural networks
National Category
Pediatrics
Identifiers
urn:nbn:se:kth:diva-362487 (URN), 10.1007/978-3-031-51893-5_5 (DOI), 2-s2.0-105002270814 (Scopus ID)
Note

Part of ISBN 9783031518935, 9783031518928

QC 20250422

Available from: 2025-04-16. Created: 2025-04-16. Last updated: 2025-04-22. Bibliographically approved.
Ghosh, A., Abdalmoaty, M., Chatterjee, S. & Hjalmarsson, H. (2024). DeepBayes—An estimator for parameter estimation in stochastic nonlinear dynamical models. Automatica, 159, Article ID 111327.
DeepBayes—An estimator for parameter estimation in stochastic nonlinear dynamical models
2024 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 159, article id 111327. Article in journal (Refereed). Published.
Abstract [en]

Stochastic nonlinear dynamical systems are ubiquitous in modern, real-world applications, yet estimating the unknown parameters of stochastic nonlinear dynamical models remains a challenging problem. The majority of existing methods employ maximum likelihood or Bayesian estimation. However, these methods suffer from some limitations, most notably the substantial computational time for inference coupled with limited flexibility in application. In this work, we propose DeepBayes estimators that leverage the power of deep recurrent neural networks. The method first trains a recurrent neural network to minimize the mean-squared estimation error over a set of synthetically generated data, using models drawn from the model set of interest. The a priori trained estimator can then be used directly for inference by evaluating the network on the estimation data. Because the deep recurrent neural network architectures are trained offline, they ensure significant time savings during inference. We experiment with two popular recurrent neural networks: the long short-term memory (LSTM) network and the gated recurrent unit (GRU). We demonstrate the applicability of our proposed method on different example models and perform detailed comparisons with state-of-the-art approaches. We also provide a study on a real-world nonlinear benchmark problem. The experimental evaluations show that the proposed approach is asymptotically as good as the Bayes estimator.
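
A toy illustration of the offline training idea, under an assumed one-parameter model x_{t+1} = theta * tanh(x_t) + 0.5 + w_t that is not one of the paper's benchmark systems: parameters are drawn from the model set, trajectories are simulated, and a recurrent network is trained to map trajectories to parameter estimates under a mean-squared-error loss.

    import torch
    import torch.nn as nn

    def simulate(theta, T=50):
        """Simulate one trajectory per parameter value of a toy stochastic model."""
        x = torch.zeros(theta.shape[0])
        traj = []
        for _ in range(T):
            x = theta * torch.tanh(x) + 0.5 + 0.1 * torch.randn_like(x)
            traj.append(x)
        return torch.stack(traj, dim=1).unsqueeze(-1)   # (batch, T, 1)

    class GruEstimator(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.gru = nn.GRU(1, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)

        def forward(self, y):
            h, _ = self.gru(y)
            return self.out(h[:, -1]).squeeze(-1)

    est = GruEstimator()
    opt = torch.optim.Adam(est.parameters(), lr=1e-3)
    for step in range(200):                             # offline training phase
        theta = torch.rand(64) * 0.8                    # draw parameters from the model set
        loss = nn.functional.mse_loss(est(simulate(theta)), theta)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Inference is a single forward pass on new measurement data.
    print(est(simulate(torch.tensor([0.3, 0.6]))))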

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Deep learning, Dynamical systems, Nonlinear system identification, Parameter estimation, Recurrent neural networks
National Category
Control Engineering Other Electrical Engineering, Electronic Engineering, Information Engineering Probability Theory and Statistics
Identifiers
urn:nbn:se:kth:diva-339038 (URN), 10.1016/j.automatica.2023.111327 (DOI), 001161034600001 (), 2-s2.0-85174673962 (Scopus ID)
Note

QC 20251002

Available from: 2023-11-29. Created: 2023-11-29. Last updated: 2025-10-02. Bibliographically approved.
Liang, X., Cumlin, F., Ungureanu, V., Reddy, C. K. A., Schüldt, C. & Chatterjee, S. (2024). DeePMOS-B: Deep Posterior Mean-Opinion-Score using Beta Distribution. In: 32nd European Signal Processing Conference, EUSIPCO 2024 - Proceedings. Paper presented at 32nd European Signal Processing Conference, EUSIPCO 2024, Lyon, France, August 26-30, 2024 (pp. 416-420). European Signal Processing Conference, EUSIPCO
DeePMOS-B: Deep Posterior Mean-Opinion-Score using Beta Distribution
2024 (English). In: 32nd European Signal Processing Conference, EUSIPCO 2024 - Proceedings, European Signal Processing Conference, EUSIPCO, 2024, p. 416-420. Conference paper, Published paper (Refereed).
Abstract [en]

Mean opinion score (MOS) is a bounded speech quality measure, ranging between 1 and 5. We propose using a Beta distribution to model the posterior of the bounded MOS for a given speech clip. We use a deep neural network (DNN), trained using a maximum-likelihood principle, to provide the parameters of the posterior Beta distribution. A self-teacher learning setup is used to achieve robustness against the inherent challenge of training on a noisy dataset. The dataset noise comes from the subjective nature of the MOS labels, and only a handful of quality score ratings are provided for each speech clip. To compare with existing state-of-the-art methods, we use the mean of the Beta posterior as a point estimate of the MOS. The proposed method shows competitive performance vis-a-vis several existing DNN-based methods that provide MOS point estimates, and an ablation study shows the importance of various components of the proposed method.
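
A minimal sketch in the spirit of this description: a network head outputs the (alpha, beta) parameters of a Beta distribution over the MOS rescaled from [1, 5] to (0, 1), trained by maximum likelihood. The feature dimension and rescaling details are assumptions, and the self-teacher setup is omitted.

    import torch
    import torch.nn as nn

    class BetaHead(nn.Module):
        def __init__(self, feat_dim=128):
            super().__init__()
            self.fc = nn.Linear(feat_dim, 2)

        def forward(self, h):
            ab = nn.functional.softplus(self.fc(h)) + 1e-3    # positive alpha, beta
            return torch.distributions.Beta(ab[:, 0], ab[:, 1])

    head = BetaHead()
    h = torch.randn(8, 128)                          # pooled speech-encoder features
    mos = torch.rand(8) * 4 + 1                      # MOS labels in [1, 5]
    u = ((mos - 1.0) / 4.0).clamp(1e-3, 1 - 1e-3)    # rescale to (0, 1)
    dist = head(h)
    loss = -dist.log_prob(u).mean()                  # maximum-likelihood training loss
    mos_hat = 1.0 + 4.0 * dist.mean                  # point estimate: mean of the Beta posterior
    print(float(loss), mos_hat.shape)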

Place, publisher, year, edition, pages
European Signal Processing Conference, EUSIPCO, 2024
Keywords
Bayesian estimation, deep neural network, maximum-likelihood, speech quality assessment
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-356662 (URN), 2-s2.0-85208437864 (Scopus ID)
Conference
32nd European Signal Processing Conference, EUSIPCO 2024, Lyon, France, August 26-30, 2024
Note

Part of ISBN 9789464593617

QC 20241121

Available from: 2024-11-20. Created: 2024-11-20. Last updated: 2024-11-21. Bibliographically approved.
Liang, X., Cumlin, F., Ungureanu, V., Reddy, C. K. A., Schuldt, C. & Chatterjee, S. (2024). DeePMOS-B: Deep Posterior Mean-Opinion-Score using Beta Distribution. In: 32nd European Signal Processing Conference, EUSIPCO 2024. Paper presented at 32nd European Signal Processing Conference (EUSIPCO), August 26-30, 2024, Lyon, France (pp. 416-420). Institute of Electrical and Electronics Engineers (IEEE)
DeePMOS-B: Deep Posterior Mean-Opinion-Score using Beta Distribution
2024 (English). In: 32nd European Signal Processing Conference, EUSIPCO 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 416-420. Conference paper, Published paper (Refereed).
Abstract [en]

Mean opinion score (MOS) is a bounded speech quality measure, ranging between 1 and 5. We propose using a Beta distribution to model the posterior of the bounded MOS for a given speech clip. We use a deep neural network (DNN), trained using a maximum-likelihood principle, to provide the parameters of the posterior Beta distribution. A self-teacher learning setup is used to achieve robustness against the inherent challenge of training on a noisy dataset. The dataset noise comes from the subjective nature of the MOS labels, and only a handful of quality score ratings are provided for each speech clip. To compare with existing state-of-the-art methods, we use the mean of the Beta posterior as a point estimate of the MOS. The proposed method shows competitive performance vis-a-vis several existing DNN-based methods that provide MOS point estimates, and an ablation study shows the importance of various components of the proposed method.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Series
European Signal Processing Conference, ISSN 2076-1465
Keywords
speech quality assessment, deep neural network, maximum-likelihood, Bayesian estimation
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-358710 (URN), 10.23919/EUSIPCO63174.2024.10715351 (DOI), 001349787000083 ()
Conference
32nd European Signal Processing Conference (EUSIPCO), August 26-30, 2024, Lyon, France
Note

Part of ISBN 9789464593617, 9798331519773

QC 20250923

Available from: 2025-01-21. Created: 2025-01-21. Last updated: 2025-09-23. Bibliographically approved.
Cumlin, F., Liang, X., Ungureanu, V., Reddy, C. K. A., Schüldt, C. & Chatterjee, S. (2024). DNSMOS Pro: A Reduced-Size DNN for Probabilistic MOS of Speech. In: Interspeech 2024. Paper presented at 25th Interspeech Conference 2024, Kos Island, Greece, September 1-5, 2024 (pp. 4818-4822). International Speech Communication Association
DNSMOS Pro: A Reduced-Size DNN for Probabilistic MOS of Speech
2024 (English). In: Interspeech 2024, International Speech Communication Association, 2024, p. 4818-4822. Conference paper, Published paper (Refereed).
Abstract [en]

We propose a deep neural network-based architecture and training design for objective, non-intrusive speech quality assessment. The proposed method builds on DNSMOS, and we call the proposed model DNSMOS Pro. DNSMOS Pro has a reduced-size architecture suitable for VoIP, a relatively simple training design using only the mean opinion score (MOS) as the target label, and it predicts the posterior distribution of MOS given an input speech clip. This means DNSMOS Pro can be trained when only the MOS is reported for a subjectively rated dataset. Furthermore, we implement several non-intrusive speech quality methods and compare them to DNSMOS Pro when training and testing on different subjectively rated datasets. DNSMOS Pro performs significantly better on these benchmark datasets than similar DNN-based non-intrusive speech quality methods, and it achieves competitive results compared with methods that assume auxiliary information in the datasets.
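
A compact, hypothetical sketch of a reduced-size network mapping a log-mel spectrogram to the mean and variance of a Gaussian posterior over MOS, trained with a Gaussian negative log-likelihood; layer sizes and input shape are assumptions, not the DNSMOS Pro architecture.

    import torch
    import torch.nn as nn

    class TinyMosNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 2)            # predicts mean and log-variance

        def forward(self, spec):                    # spec: (batch, 1, mels, frames)
            z = self.conv(spec).flatten(1)
            mean, logvar = self.head(z).unbind(-1)
            return mean, logvar.exp()

    net = TinyMosNet()
    spec = torch.randn(4, 1, 64, 200)               # batch of log-mel spectrograms
    mos = torch.rand(4) * 4 + 1
    mean, var = net(spec)
    nll = 0.5 * (torch.log(var) + (mos - mean) ** 2 / var).mean()
    print(float(nll))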

Place, publisher, year, edition, pages
International Speech Communication Association, 2024
Keywords
deep neural network, maximum-likelihood, Speech quality assessment, voice conversion challenge
National Category
Computer Sciences Signal Processing
Identifiers
urn:nbn:se:kth:diva-358881 (URN), 10.21437/Interspeech.2024-478 (DOI), 001331850104185 (), 2-s2.0-85214800319 (Scopus ID)
Conference
25th Interspeech Conference 2024, Kos Island, Greece, September 1-5, 2024
Note

QC 20251021

Available from: 2025-01-23. Created: 2025-01-23. Last updated: 2025-10-21. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-2638-6047