Publications (10 of 190)
Zhang, J., Hu, J. & Johansson, M. (2026). Non-convex composite federated learning with heterogeneous data. Automatica, 183, Article ID 112695.
2026 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 183, article id 112695. Article in journal (Refereed), Published
Abstract [en]

We propose an innovative algorithm for non-convex composite federated learning that decouples the proximal operator evaluation and the communication between server and clients. Moreover, each client uses local updates to communicate less frequently with the server, sends only a single d-dimensional vector per communication round, and overcomes issues with client drift. In the analysis, challenges arise from the use of decoupling strategies and local updates in the algorithm, as well as from the non-convex and non-smooth nature of the problem. We establish sublinear and linear convergence to a bounded residual error under general non-convexity and the proximal Polyak-Łojasiewicz inequality, respectively. In the numerical experiments, we demonstrate the superiority of our algorithm over state-of-the-art methods on both synthetic and real datasets.
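The decoupling the abstract describes can be caricatured in a few lines. The sketch below is a minimal illustration only, assuming an l1 regularizer, FedAvg-style averaging, and deterministic client gradients; the function names and the specific update order are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def prox_l1(v, lam):
    # Soft-thresholding: proximal operator of lam * ||x||_1.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def client_update(x, grad_fn, lr, num_local_steps):
    # Each client runs several local steps on its smooth loss only,
    # then communicates a single d-dimensional vector to the server.
    for _ in range(num_local_steps):
        x = x - lr * grad_fn(x)
    return x

def server_round(x, client_grads, lr, lam, num_local_steps):
    # The server averages the clients' local iterates and evaluates the
    # proximal operator once, decoupled from client computation.
    locals_ = [client_update(x.copy(), g, lr, num_local_steps) for g in client_grads]
    return prox_l1(np.mean(locals_, axis=0), lr * lam)
```

In this toy form, the non-smooth term is handled entirely at the server, so clients never evaluate the proximal operator themselves.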

Place, publisher, year, edition, pages
Elsevier BV, 2026
Keywords
Non-convex composite federated learning, Heterogeneous data, Local update
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-375597 (URN), 10.1016/j.automatica.2025.112695 (DOI), 001617904300001 (), 2-s2.0-105021269947 (Scopus ID)
Note

QC 20260121

Available from: 2026-01-21 Created: 2026-01-21 Last updated: 2026-01-21. Bibliographically approved
Taghavian, H., Dörfler, F. & Johansson, M. (2026). Optimal control of continuous-time symmetric systems with unknown dynamics and noisy measurements. Automatica, 183, Article ID 112609.
2026 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 183, article id 112609. Article in journal (Refereed), Published
Abstract [en]

An iterative learning algorithm is presented for continuous-time linear–quadratic optimal control problems where the system is externally symmetric with unknown dynamics. Both finite-horizon and infinite-horizon problems are considered. It is shown that the proposed algorithm is globally convergent to the optimal solution and has some advantages over adaptive dynamic programming, including unbiased performance under noisy measurements, relatively low computational burden, and no requirement for exploration noise. Numerical experiments show the effectiveness of the results.

Place, publisher, year, edition, pages
Elsevier BV, 2026
Keywords
Optimal control, Linear–quadratic regulation, Symmetric systems
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-371755 (URN), 10.1016/j.automatica.2025.112609 (DOI), 001596556000004 (), 2-s2.0-105017959708 (Scopus ID)
Note

QC 20251019

Available from: 2025-10-17 Created: 2025-10-17 Last updated: 2025-11-05. Bibliographically approved
Chung, N. N., Taghavian, H., Johansson, M. & Chew, L. Y. (2025). A demonstration on the construction of modular neural network using elevator system that operates based on reinforcement learning. Journal of Computational Science, 91, Article ID 102678.
2025 (English). In: Journal of Computational Science, ISSN 1877-7503, E-ISSN 1877-7511, Vol. 91, article id 102678. Article in journal (Refereed), Published
Abstract [en]

We study how neural networks can perform the task of elevator dispatching of commuters from their origins to their destinations. Instead of applying a neural network in the conventional way, we construct a specific neural network architecture that optimizes the commuters’ traveling time after taking into account the domain knowledge and the efficacy of potential future actions. The constructed architecture is modular, with building blocks of neuronal structure that serve specified functional roles. By relaxing the weights and then training this network via reinforcement learning, we show that it outperforms an agent that implements the standard elevator algorithm. More remarkably, we observe the spontaneous emergence of functional modules within the structure of the network as a consequence of the action sequences experienced during training. This behavioral feature of the neural network makes it less of a black box, with specific aspects of its functions being explicitly discernible from its network connections.

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Elevator dispatching, Gray-box model, Modular neural network, Reinforcement learning
National Category
Computer Sciences Other Civil Engineering
Identifiers
urn:nbn:se:kth:diva-369993 (URN), 10.1016/j.jocs.2025.102678 (DOI), 001543734000002 (), 2-s2.0-105012034994 (Scopus ID)
Note

QC 20250917

Available from: 2025-09-17 Created: 2025-09-17 Last updated: 2025-09-17. Bibliographically approved
Zhang, J., Zhu, L., Fay, D. & Johansson, M. (2025). Locally Differentially Private Online Federated Learning With Correlated Noise. IEEE Transactions on Signal Processing, 73, 1518-1531
2025 (English). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 73, p. 1518-1531. Article in journal (Refereed), Published
Abstract [en]

We introduce a locally differentially private (LDP) algorithm for online federated learning that employs temporally correlated noise to improve utility while preserving privacy. To address challenges posed by the correlated noise and local updates with streaming non-IID data, we develop a perturbed iterate analysis that controls the impact of the noise on the utility. Moreover, we demonstrate how the drift errors from local updates can be effectively managed for several classes of nonconvex loss functions. Subject to an (ε, δ)-LDP budget, we establish a dynamic regret bound that quantifies the impact of key parameters and the intensity of changes in the dynamic environment on the learning performance. Numerical experiments confirm the efficacy of the proposed algorithm.
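Temporally correlated noise of the kind the abstract mentions is often generated by a first-order autoregressive recursion; the AR(1) model below is an illustrative assumption (the paper's actual noise process is not specified here), and all names are hypothetical.

```python
import numpy as np

def correlated_noise_stream(d, rho, sigma, steps, rng):
    # AR(1) recursion: n_t = rho * n_{t-1} + sqrt(1 - rho^2) * sigma * w_t,
    # which keeps the stationary standard deviation at sigma while
    # successive samples have correlation coefficient rho.
    n = np.zeros(d)
    for _ in range(steps):
        n = rho * n + np.sqrt(1.0 - rho ** 2) * sigma * rng.standard_normal(d)
        yield n.copy()
```

Each client would add one such sample to the d-dimensional vector it releases per round, so consecutive perturbations partially cancel in averaged iterates.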

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
correlated noise, differential privacy, dynamic regret, Online federated learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-363125 (URN), 10.1109/TSP.2025.3553355 (DOI), 001463431100004 (), 2-s2.0-105003029029 (Scopus ID)
Note

QC 20250506

Available from: 2025-05-06 Created: 2025-05-06 Last updated: 2025-11-03. Bibliographically approved
Berglund, E., Zhang, J. & Johansson, M. (2025). Soft quasi-Newton: guaranteed positive definiteness by relaxing the secant constraint. Optimization Methods and Software, 1-30
2025 (English). In: Optimization Methods and Software, ISSN 1055-6788, E-ISSN 1029-4937, p. 1-30. Article in journal (Refereed), Epub ahead of print
Abstract [en]

We propose a novel algorithm, termed soft quasi-Newton (soft QN), for optimization in the presence of bounded noise. Traditional quasi-Newton algorithms are vulnerable to such noise-induced perturbations. To develop a more robust quasi-Newton method, we replace the secant condition in the matrix optimization problem for the Hessian update with a penalty term in its objective and derive a closed-form update formula. A key feature of our approach is its ability to maintain positive definiteness of the Hessian inverse approximation throughout the iterations. Furthermore, we establish the following properties of soft QN: it recovers the BFGS method under specific limits, it treats positive and negative curvature equally, and it is scale invariant. Collectively, these features enhance the efficacy of soft QN in noisy environments. For strongly convex objective functions and Hessian approximations obtained using soft QN, we develop an algorithm that exhibits linear convergence toward a neighborhood of the optimal solution even when gradient and function evaluations are subject to bounded perturbations. Through numerical experiments, we demonstrate that soft QN consistently outperforms state-of-the-art methods across a range of scenarios.
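The abstract does not reproduce soft QN's closed-form update, so as a point of reference the sketch below shows the classical BFGS inverse-Hessian update, which enforces the secant condition H·y = s exactly; this is the condition soft QN relaxes into a penalty, and BFGS is the method soft QN is stated to recover in a limit.

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    # Standard BFGS update of the inverse Hessian approximation H from a
    # step s = x_{k+1} - x_k and gradient difference y = g_{k+1} - g_k.
    # The result satisfies the secant condition H_new @ y = s exactly,
    # and stays positive definite whenever y @ s > 0.
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```

Under noise, y @ s can be small or negative, which is exactly where the hard secant constraint becomes fragile and a penalized (soft) variant is attractive.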

Place, publisher, year, edition, pages
Informa UK Limited, 2025
Keywords
quasi-Newton methods, general bounded noise, secant condition, penalty
National Category
Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-362428 (URN), 10.1080/10556788.2025.2475406 (DOI), 001449014500001 (), 2-s2.0-105000489741 (Scopus ID)
Note

QC 20250425

Available from: 2025-04-15 Created: 2025-04-15 Last updated: 2025-04-25. Bibliographically approved
Zhang, J., Hu, J. & Johansson, M. (2024). COMPOSITE FEDERATED LEARNING WITH HETEROGENEOUS DATA. In: 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Proceedings: . Paper presented at 49th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024, Seoul, Korea, Apr 14 2024 - Apr 19 2024 (pp. 8946-8950). Institute of Electrical and Electronics Engineers (IEEE)
2024 (English). In: 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 8946-8950. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a novel algorithm for solving the composite Federated Learning (FL) problem. This algorithm manages non-smooth regularization by strategically decoupling the proximal operator and communication, and addresses client drift without any assumptions about data similarity. Moreover, each worker uses local updates to reduce the communication frequency with the server and transmits only a d-dimensional vector per communication round. We prove that our algorithm converges linearly to a neighborhood of the optimal solution and demonstrate the superiority of our algorithm over state-of-the-art methods in numerical experiments.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Series
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, ISSN 1520-6149
Keywords
Composite federated learning, heterogeneous data, local update
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-348288 (URN), 10.1109/ICASSP48485.2024.10447718 (DOI), 001396233802047 (), 2-s2.0-85195366479 (Scopus ID)
Conference
49th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024, Seoul, Korea, Apr 14 2024 - Apr 19 2024
Note

QC 20240626

Part of ISBN 979-835034485-1

Available from: 2024-06-20 Created: 2024-06-20 Last updated: 2025-03-24. Bibliographically approved
Zhang, J., Zhu, L. & Johansson, M. (2024). Differentially Private Online Federated Learning with Correlated Noise. In: 2024 IEEE 63rd Conference on Decision and Control, CDC 2024: . Paper presented at 63rd IEEE Conference on Decision and Control, CDC 2024, Milan, Italy, Dec 16 2024 - Dec 19 2024 (pp. 3140-3146). Institute of Electrical and Electronics Engineers (IEEE)
2024 (English). In: 2024 IEEE 63rd Conference on Decision and Control, CDC 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 3140-3146. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce a novel differentially private algorithm for online federated learning that employs temporally correlated noise to enhance utility while ensuring privacy of continuously released models. To address challenges posed by DP noise and local updates with streaming non-IID data, we develop a perturbed iterate analysis to control the impact of the DP noise on the utility. Moreover, we demonstrate how the drift errors from local updates can be effectively managed under a quasi-strong convexity condition. Subject to an (ε, δ)-DP budget, we establish a dynamic regret bound over the entire time horizon, quantifying the impact of key parameters and the intensity of changes in dynamic environments. Numerical experiments confirm the efficacy of the proposed algorithm.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-361764 (URN), 10.1109/CDC56724.2024.10886177 (DOI), 001445827202108 (), 2-s2.0-86000650544 (Scopus ID)
Conference
63rd IEEE Conference on Decision and Control, CDC 2024, Milan, Italy, Dec 16 2024 - Dec 19 2024
Note

Part of ISBN 9798350316339

QC 20250401

Available from: 2025-03-27 Created: 2025-03-27 Last updated: 2025-12-05. Bibliographically approved
Zhang, J., Fay, D. & Johansson, M. (2024). DYNAMIC PRIVACY ALLOCATION FOR LOCALLY DIFFERENTIALLY PRIVATE FEDERATED LEARNING WITH COMPOSITE OBJECTIVES. In: 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Proceedings: . Paper presented at 49th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024, Seoul, Korea, Apr 14 2024 - Apr 19 2024 (pp. 9461-9465). Institute of Electrical and Electronics Engineers (IEEE)
2024 (English). In: 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 9461-9465. Conference paper, Published paper (Refereed)
Abstract [en]

This paper proposes a locally differentially private federated learning algorithm for strongly convex but possibly nonsmooth problems that protects the gradients of each worker against an honest but curious server. The proposed algorithm adds artificial noise to the shared information to ensure privacy and dynamically allocates the time-varying noise variance to minimize an upper bound of the optimization error subject to a predefined privacy budget constraint. This allows for an arbitrarily large but finite number of iterations to achieve both privacy protection and utility up to a neighborhood of the optimal solution, removing the need for tuning the number of iterations. Numerical results show the superiority of the proposed algorithm over state-of-the-art methods.
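A time-varying noise allocation under a fixed total budget can be sketched with a simple schedule. The geometric decay below is a hypothetical illustration only: the paper derives its allocation by minimizing an upper bound on the optimization error, which is not reproduced here, and all names are assumptions.

```python
import numpy as np

def allocate_privacy_budget(total_eps, T, decay=0.9):
    # Hypothetical geometric allocation of per-iteration privacy budgets
    # eps_t with sum(eps_t) == total_eps. Spending more budget early
    # (less noise) while iterates are far from the optimum is the kind
    # of trade-off a dynamic allocation can exploit.
    w = decay ** np.arange(T)
    return total_eps * w / w.sum()
```

Each eps_t would then set the variance of the artificial noise added to that round's shared gradient.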

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
dynamic allocation, Federated learning, local differential privacy
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-348291 (URN), 10.1109/ICASSP48485.2024.10448141 (DOI), 001396233802150 (), 2-s2.0-85195409957 (Scopus ID)
Conference
49th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024, Seoul, Korea, Apr 14 2024 - Apr 19 2024
Note

QC 20240625 

Part of ISBN 9798350344851

Available from: 2024-06-20 Created: 2024-06-20 Last updated: 2025-03-26. Bibliographically approved
Andersson, M., Streb, M., Prathimala, V. G., Siddiqui, A., Lodge, A., Klass, V. L., . . . Lindbergh, G. (2024). Electrochemical model-based aging-adaptive fast charging of automotive lithium-ion cells. Applied Energy, 372, Article ID 123644.
2024 (English). In: Applied Energy, ISSN 0306-2619, E-ISSN 1872-9118, Vol. 372, article id 123644. Article in journal (Refereed), Published
Abstract [en]

Fast charging of electric vehicles remains a compromise between charging time and degradation penalty. Conventional battery management systems use experience-based charging protocols that are expected to meet vehicle lifetime goals. Novel electrochemical model-based battery fast charging uses a model to observe internal battery states. This enables control of charging rates based on states such as the lithium-plating potential, but relies on an accurate model as well as accurate model parameters. However, the impact of battery degradation on the model's accuracy, and therefore the fitness of the estimated optimal charging procedure, is often not considered. In this work, we therefore investigate electrochemical model-based aging-adaptive fast charging of automotive lithium-ion cells. First, an electrochemical model is identified at the beginning of life for 6 automotive prototype cells and an electrochemically constrained fast-charging protocol is designed. The model parameters are then periodically re-evaluated during a cycling study and the charging procedure is updated to account for cell degradation. The proposed method is compared with two reference protocols to investigate both the effectiveness of the selected electrochemical constraints and the benefit of aging-adaptive usage. Finally, post-mortem characterization is presented to highlight the benefit of aging-adaptive battery utilization.
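The idea of constraining the charge rate by an internal state can be sketched with a toy controller. This is a hypothetical illustration, not the paper's control law: the taper function, threshold, and gain below are invented for the example, and a real implementation would use the observer's estimated anode potential.

```python
def charge_current(i_request, anode_potential, v_min=0.01, gain=50.0):
    # Hypothetical constraint handler: taper the requested charge current
    # (in A) as the estimated anode potential (in V vs. Li/Li+) approaches
    # the lithium-plating threshold near 0 V, cutting it entirely below v_min.
    if anode_potential <= v_min:
        scale = 0.0
    else:
        scale = min(1.0, gain * (anode_potential - v_min))
    return i_request * scale
```

Re-identifying the model as the cell ages shifts the estimated anode potential, which is what makes the resulting charging procedure aging-adaptive.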

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Electrochemical control, Fast charging, Battery parametrization, Battery degradation, Aging-aware usage
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-350842 (URN), 10.1016/j.apenergy.2024.123644 (DOI), 001266077700001 (), 2-s2.0-85197451633 (Scopus ID)
Note

QC 20240722

Available from: 2024-07-22 Created: 2024-07-22 Last updated: 2024-08-19. Bibliographically approved
Zhang, J., Hu, J., So, A. M. & Johansson, M. (2024). Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data. In: Advances in Neural Information Processing Systems 37 - 38th Conference on Neural Information Processing Systems, NeurIPS 2024: . Paper presented at 38th Conference on Neural Information Processing Systems, NeurIPS 2024, Vancouver, Canada, Dec 9 2024 - Dec 15 2024. Neural information processing systems foundation, 37
2024 (English). In: Advances in Neural Information Processing Systems 37 - 38th Conference on Neural Information Processing Systems, NeurIPS 2024, Neural information processing systems foundation, 2024, Vol. 37. Conference paper, Published paper (Refereed)
Abstract [en]

Many machine learning tasks, such as principal component analysis and low-rank matrix completion, give rise to manifold optimization problems. Although there is a large body of work studying the design and analysis of algorithms for manifold optimization in the centralized setting, there are currently very few works addressing the federated setting. In this paper, we consider nonconvex federated learning over a compact smooth submanifold in the setting of heterogeneous client data. We propose an algorithm that leverages stochastic Riemannian gradients and a manifold projection operator to improve computational efficiency, uses local updates to improve communication efficiency, and avoids client drift. Theoretically, we show that our proposed algorithm converges sub-linearly to a neighborhood of a first-order optimal solution by using a novel analysis that jointly exploits the manifold structure and properties of the loss functions. Numerical experiments demonstrate that our algorithm has significantly smaller computational and communication overhead than existing methods.
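The manifold-projection mechanism can be made concrete on the simplest compact smooth submanifold, the unit sphere. This is an illustrative sketch under that assumption (the paper treats general compact smooth submanifolds, and the function names are not from the paper).

```python
import numpy as np

def sphere_projection(x):
    # Metric projection onto the unit sphere, a compact smooth submanifold.
    return x / np.linalg.norm(x)

def riemannian_grad(x, euclid_grad):
    # Project the Euclidean gradient onto the tangent space at x,
    # i.e. remove the component along x.
    return euclid_grad - (x @ euclid_grad) * x

def manifold_step(x, euclid_grad, lr):
    # One Riemannian gradient step followed by projection back onto
    # the manifold, so the iterate never leaves the constraint set.
    return sphere_projection(x - lr * riemannian_grad(x, euclid_grad))
```

For PCA-type problems, euclid_grad would be the gradient of the Rayleigh-quotient loss; the projection keeps local client updates feasible without expensive retractions.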

Place, publisher, year, edition, pages
Neural information processing systems foundation, 2024
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-361952 (URN), 2-s2.0-105000497181 (Scopus ID)
Conference
38th Conference on Neural Information Processing Systems, NeurIPS 2024, Vancouver, Canada, Dec 9 2024 - Dec 15 2024
Note

QC 20250409

Available from: 2025-04-03 Created: 2025-04-03 Last updated: 2025-04-09. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-2237-2580
