KTH Publications (kth.se)

Publications (9 of 9)
Zhang, J., Zhu, L., Fay, D. & Johansson, M. (2025). Locally Differentially Private Online Federated Learning With Correlated Noise. IEEE Transactions on Signal Processing, 73, 1518-1531
2025 (English). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 73, p. 1518-1531. Article in journal (Refereed). Published
Abstract [en]

We introduce a locally differentially private (LDP) algorithm for online federated learning that employs temporally correlated noise to improve utility while preserving privacy. To address challenges posed by the correlated noise and local updates with streaming non-IID data, we develop a perturbed iterate analysis that controls the impact of the noise on the utility. Moreover, we demonstrate how the drift errors from local updates can be effectively managed for several classes of nonconvex loss functions. Subject to an (ε, δ)-LDP budget, we establish a dynamic regret bound that quantifies the impact of key parameters and the intensity of changes in the dynamic environment on the learning performance. Numerical experiments confirm the efficacy of the proposed algorithm.
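As a rough illustration of the mechanism described above, the sketch below perturbs a clipped local update with temporally correlated Gaussian noise generated by an AR(1) recursion. The AR(1) model, the clipping rule, and all parameter values are illustrative assumptions; the paper's actual noise construction and privacy accounting are not reproduced here.

```python
import numpy as np

def correlated_noise_stream(dim, steps, sigma=1.0, rho=0.9, seed=0):
    """Generate temporally correlated Gaussian noise via an AR(1) process.

    Each step's noise is rho * previous + sqrt(1 - rho^2) * fresh noise,
    so the marginal standard deviation stays sigma while successive
    perturbations are correlated and partially cancel across iterations.
    """
    rng = np.random.default_rng(seed)
    noise = np.zeros((steps, dim))
    current = rng.normal(0.0, sigma, dim)
    noise[0] = current
    for t in range(1, steps):
        fresh = rng.normal(0.0, sigma, dim)
        current = rho * current + np.sqrt(1.0 - rho**2) * fresh
        noise[t] = current
    return noise

def private_update(gradient, noise, clip=1.0):
    """Clip a local gradient to bounded norm, then perturb it before sharing."""
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip / max(norm, 1e-12))
    return clipped + noise

noise = correlated_noise_stream(dim=3, steps=100)
g = np.array([3.0, 4.0, 0.0])      # norm 5, clipped down to norm 1
update = private_update(g, noise[0])
```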

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
correlated noise, differential privacy, dynamic regret, Online federated learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-363125 (URN); 10.1109/TSP.2025.3553355 (DOI); 001463431100004; 2-s2.0-105003029029 (Scopus ID)
Note

QC 20250506

Available from: 2025-05-06. Created: 2025-05-06. Last updated: 2025-11-03. Bibliographically approved
Fay, D. (2025). Machine Learning with Decentralized Data and Differential Privacy: New Methods for Training, Inference and Sampling. (Doctoral dissertation). Stockholm: KTH Royal Institute of Technology
2025 (English). Doctoral thesis, monograph (Other academic)
Abstract [en]

Scale has been an essential driver of progress in recent machine learning research. Data sets and computing resources have grown rapidly, complemented by models and algorithms capable of leveraging these resources. However, in many important applications, there are two limits to such data collection. First, data is often locked in silos, and cannot be shared. This is common in the medical domain, where patient data is controlled by different clinics. Second, machine learning models are prone to memorization. Therefore, when dealing with sensitive data, it is often desirable to have formal privacy guarantees to ensure that no sensitive information can be reconstructed from the trained model.

The topic of this thesis is the design of machine learning algorithms that adhere to these two restrictions: to operate on decentralized data and to satisfy formal privacy guarantees. We study two broad categories of machine learning algorithms for decentralized data: federated learning and ensembling of local models. Federated learning is a form of machine learning in which multiple clients collaborate during training via the coordination of a central server. In ensembling of local models, each client first trains a local model on its own data, and then collaborates with other clients during inference. As a formal privacy guarantee, we consider differential privacy, which is based on introducing artificial noise to ensure membership privacy. Differential privacy is typically applied to federated learning by adding noise to the model updates sent to the server, and to ensembling of local models by adding noise to the predictions of the local models.

Our research addresses the following core areas in the context of privacy-preserving machine learning with decentralized data: First, we examine the implications of data dimensionality on privacy for ensembling of medical image segmentation models. We extend the classification algorithm Private Aggregation of Teacher Ensembles (PATE) to high-dimensional labels, and demonstrate that dimensionality reduction can improve the privacy-utility trade-off. Second, we consider the impact of hyperparameter selection on privacy. Here, we propose a novel adaptive technique for hyperparameter selection in differentially private gradient descent, as well as an adaptive technique for federated learning with non-smooth loss functions. Third, we investigate sampling-based solutions to scale differentially private machine learning to datasets with a large number of data points. We study the privacy-enhancing properties of importance sampling and find that it can outperform uniform sub-sampling not only in terms of sample efficiency but also in terms of privacy. Fourth, we study the problem of systematic label shift in ensembling of local models. We propose a novel method based on label clustering to enable flexible collaboration at inference time.

The techniques developed in this thesis improve the scalability and locality of machine learning while ensuring robust privacy protection. This constitutes progress on the goal of a safe application of machine learning to large and diverse data sets for medical image analysis and similar domains.

Abstract [sv]

Scaling has been a decisive driver of progress in recent machine learning research. Datasets and computing resources have grown rapidly, matched by models and algorithms able to exploit them. In many important applications, however, there are two limits to data collection. First, data is often locked away and cannot be shared between parties. This is common in the medical domain, where patient data is controlled by different clinics. Second, machine learning models are prone to memorization. When dealing with sensitive data, it is therefore often desirable to have formal privacy guarantees ensuring that no sensitive information can be reconstructed from trained models.

The topic of this thesis is the design of machine learning algorithms that adapt to these two restrictions: operating on decentralized data and satisfying formal privacy guarantees. We study two broad categories of machine learning algorithms for decentralized data: federated learning and ensembling of local models. In federated learning, multiple clients collaborate during training, coordinated by a central server. In ensembling of local models, each client first trains a local model on its own data and then collaborates with other clients during inference. As a formal privacy guarantee we use differential privacy, which is based on adding artificial noise to ensure membership privacy. Differential privacy is typically applied to federated learning by adding noise to the model updates sent to the server, and to ensembling of local models by adding noise to the predictions of the local models.

Our research addresses the following core areas within scalable, privacy-preserving machine learning: First, we examine the implications of data dimensionality on privacy in the context of ensembling for medical image segmentation. We extend the classification algorithm Private Aggregation of Teacher Ensembles (PATE) to handle high-dimensional labels, and show that dimensionality reduction can improve the privacy-utility trade-off. Second, we consider how the choice of hyperparameters affects privacy. Here we propose a novel adaptive technique for hyperparameter tuning in differentially private gradient descent, as well as an adaptive technique for federated learning with non-smooth loss functions. Third, we investigate sampling-based solutions for scaling differentially private machine learning to datasets with a large number of records. We study the privacy-enhancing properties of importance sampling, and highlight that it can outperform uniform subsampling not only in terms of sample efficiency but also in terms of privacy. Fourth, we study the problem of systematic label shift in ensembling of local models. We propose a novel method based on label clustering to enable flexible collaboration at inference time.

The techniques developed in this thesis improve the scalability and locality of machine learning while ensuring robust privacy protection. This constitutes progress toward the goal of safely applying machine learning to large and diverse datasets for medical image analysis and similar domains.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2025. p. x, 140
Series
TRITA-EECS-AVL ; 2025:66
Keywords
Machine Learning, Privacy, Differential Privacy, Dimensionality Reduction, Image Segmentation, Hyperparameter Selection, Adaptive Optimization, Privacy Amplification, Importance Sampling, Maskininlärning, Dataskydd, Differentiell Integritet, Dimensionsreducering, Bildsegmentering, Hyperparameterurval, Adaptiv Optimering, Integritetsförstärkning, Importance Sampling
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-363514 (URN); 978-91-8106-309-7 (ISBN)
Public defence
2025-06-11, https://kth-se.zoom.us/j/69506042503, D3, Lindstedtsvägen 9, Stockholm, 10:00 (English)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 3309
Note

QC 20250519

Available from: 2025-05-19. Created: 2025-05-19. Last updated: 2025-06-30. Bibliographically approved
Zhang, J., Fay, D. & Johansson, M. (2024). Dynamic Privacy Allocation for Locally Differentially Private Federated Learning with Composite Objectives. In: 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Proceedings. Paper presented at 49th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024, Seoul, Korea, Apr 14 2024 - Apr 19 2024 (pp. 9461-9465). Institute of Electrical and Electronics Engineers (IEEE)
2024 (English). In: 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 9461-9465. Conference paper, Published paper (Refereed)
Abstract [en]

This paper proposes a locally differentially private federated learning algorithm for strongly convex but possibly nonsmooth problems that protects the gradients of each worker against an honest but curious server. The proposed algorithm adds artificial noise to the shared information to ensure privacy and dynamically allocates the time-varying noise variance to minimize an upper bound of the optimization error subject to a predefined privacy budget constraint. This allows for an arbitrarily large but finite number of iterations to achieve both privacy protection and utility up to a neighborhood of the optimal solution, removing the need for tuning the number of iterations. Numerical results show the superiority of the proposed algorithm over state-of-the-art methods.
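The idea of dynamically allocating a time-varying noise variance under a fixed budget can be illustrated with a small closed-form Lagrangian calculation. Here the total privacy spend is modeled as proportional to the sum of 1/sigma_t^2 and the error bound as a weighted sum of the variances; both are hypothetical stand-ins for the paper's actual bounds, and the weights are made up for the example.

```python
import numpy as np

def allocate_noise_variances(weights, budget):
    """Minimize sum_t w_t * sigma_t^2 subject to sum_t 1/sigma_t^2 = budget
    (a stand-in for the total privacy spend).

    Setting the Lagrangian derivative w_t - lam / sigma_t^4 to zero gives
    sigma_t^2 = sqrt(lam / w_t); solving the constraint for lam yields the
    closed form below. Iterations with a large error weight receive less
    noise, and the allocation exactly exhausts the budget.
    """
    w = np.asarray(weights, dtype=float)
    sqrt_w = np.sqrt(w)
    sigma2 = sqrt_w.sum() / (budget * sqrt_w)
    return sigma2

weights = np.array([4.0, 1.0, 1.0])   # hypothetical per-iteration error weights
sigma2 = allocate_noise_variances(weights, budget=2.0)
```

With these numbers the allocation is sigma^2 = [1, 2, 2]: the heavily weighted first iteration gets the least noise, and 1/1 + 1/2 + 1/2 = 2 spends the whole budget.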

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
dynamic allocation, Federated learning, local differential privacy
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-348291 (URN); 10.1109/ICASSP48485.2024.10448141 (DOI); 001396233802150; 2-s2.0-85195409957 (Scopus ID)
Conference
49th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024, Seoul, Korea, Apr 14 2024 - Apr 19 2024
Note

QC 20240625 

Part of ISBN [9798350344851]

Available from: 2024-06-20. Created: 2024-06-20. Last updated: 2025-03-26. Bibliographically approved
Fay, D., Mair, S. & Sjölund, J. (2024). Personalized Privacy Amplification via Importance Sampling. Transactions on Machine Learning Research, 2024
2024 (English). In: Transactions on Machine Learning Research, E-ISSN 2835-8856, Vol. 2024. Article in journal (Refereed). Published
Abstract [en]

For scalable machine learning on large data sets, subsampling a representative subset is a common approach for efficient model training. This is often achieved through importance sampling, whereby informative data points are sampled more frequently. In this paper, we examine the privacy properties of importance sampling, focusing on an individualized privacy analysis. We find that, in importance sampling, privacy is well aligned with utility but at odds with sample size. Based on this insight, we propose two approaches for constructing sampling distributions: one that optimizes the privacy-efficiency trade-off; and one based on a utility guarantee in the form of coresets. We evaluate both approaches empirically in terms of privacy, efficiency, and accuracy on the differentially private k-means problem. We observe that both approaches yield similar outcomes and consistently outperform uniform sampling across a wide range of data sets. Our code is available on GitHub.
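The sampling scheme discussed above can be sketched as Poisson importance sampling: each point is included independently with probability proportional to a score, and included points are reweighted by the inverse probability so estimates stay unbiased. The scores and the expected sample size below are hypothetical; the paper's privacy analysis and distribution constructions are not reproduced.

```python
import numpy as np

def poisson_importance_sample(values, scores, expected_size, seed=0):
    """Include point i independently with probability p_i proportional to
    its score (capped at 1) and weight it by 1/p_i, which keeps the
    weighted sum unbiased. Intuitively, points sampled with small p_i
    are rarely seen and thus enjoy stronger amplified privacy."""
    rng = np.random.default_rng(seed)
    s = np.asarray(scores, dtype=float)
    p = np.minimum(1.0, expected_size * s / s.sum())
    mask = rng.random(len(s)) < p
    weights = np.where(mask, 1.0 / np.maximum(p, 1e-12), 0.0)
    estimate = float((weights * np.asarray(values)).sum())
    return mask, p, estimate

values = np.array([10.0, 20.0, 30.0, 40.0])
scores = np.array([1.0, 2.0, 3.0, 4.0])   # hypothetical informativeness scores
mask, p, estimate = poisson_importance_sample(values, scores, expected_size=2)
```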

Place, publisher, year, edition, pages
Transactions on Machine Learning Research, 2024
National Category
Computer Sciences; Probability Theory and Statistics; Signal Processing
Identifiers
urn:nbn:se:kth:diva-361194 (URN); 2-s2.0-85219504011 (Scopus ID)
Note

QC 20250313

Available from: 2025-03-12. Created: 2025-03-12. Last updated: 2025-03-13. Bibliographically approved
Morton Colbert, Z., Arrington, D., Foote, M., Gårding, J., Fay, D., Huo, M., . . . Ramachandran, P. (2024). Repurposing traditional U-Net predictions for sparse SAM prompting in medical image segmentation. Biomedical Physics & Engineering Express, 10(2)
2024 (English). In: Biomedical Physics & Engineering Express, E-ISSN 2057-1976, Vol. 10, no 2. Article in journal (Refereed). Published
Abstract [en]

Objective: Automated medical image segmentation (MIS) using deep learning has traditionally relied on models built and trained from scratch, or at least fine-tuned on a target dataset. The Segment Anything Model (SAM) by Meta challenges this paradigm by providing zero-shot generalisation capabilities. This study aims to develop and compare methods for refining traditional U-Net segmentations by repurposing them for automated SAM prompting.
Approach: A 2D U-Net with EfficientNet-B4 encoder was trained using 4-fold cross-validation on an in-house brain metastases dataset. Segmentation predictions from each validation set were used for automatic sparse prompt generation via a bounding box prompting method (BBPM) and novel implementations of the point prompting method (PPM). The PPMs frequently produced poor slice predictions (PSPs) that required identification and substitution. A slice was identified as a PSP if it (1) contained multiple predicted regions per lesion or (2) possessed outlier foreground pixel counts relative to the patient's other slices. Each PSP was substituted with a corresponding initial U-Net or SAM BBPM prediction. The patients' mean volumetric dice similarity coefficient (DSC) was used to evaluate and compare the methods' performances.
Main results: Relative to the initial U-Net segmentations, the BBPM improved mean patient DSC by 3.93 ± 1.48% to 0.847 ± 0.008 DSC. PSPs constituted 20.01-21.63% of PPMs' predictions, and without substitution performance dropped by 82.94 ± 3.17% to 0.139 ± 0.023 DSC. Pairing the two PSP identification techniques yielded a sensitivity to PSPs of 92.95 ± 1.20%. By combining this approach with BBPM prediction substitution, the PPMs achieved segmentation accuracies on par with the BBPM, improving mean patient DSC by up to 4.17 ± 1.40% and reaching 0.849 ± 0.007 DSC.
Significance: The proposed PSP identification and substitution techniques bridge the gap between PPM and BBPM performance for MIS. Additionally, the uniformity observed in our experiments' results demonstrates the robustness of SAM to variations in prompting style. These findings can assist in the design of both automatically and manually prompted pipelines.
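The two PSP criteria in the abstract (multiple predicted regions per lesion, and outlier foreground pixel counts relative to the patient's other slices) can be sketched as a simple per-slice filter. The Tukey/IQR fence used here is an illustrative stand-in for the paper's outlier test, and the input arrays are made up.

```python
import numpy as np

def flag_poor_slice_predictions(foreground_counts, regions_per_lesion, k=1.5):
    """Flag a slice as a poor slice prediction (PSP) if it (1) contains
    multiple predicted regions for a lesion, or (2) has an outlier
    foreground pixel count among the patient's slices (Tukey fence:
    outside [Q1 - k*IQR, Q3 + k*IQR])."""
    counts = np.asarray(foreground_counts, dtype=float)
    q1, q3 = np.percentile(counts, [25, 75])
    iqr = q3 - q1
    count_outlier = (counts < q1 - k * iqr) | (counts > q3 + k * iqr)
    multi_region = np.asarray(regions_per_lesion) > 1
    return count_outlier | multi_region

# Slice 4 has two predicted regions; slice 5's pixel count is an outlier.
flags = flag_poor_slice_predictions([100, 110, 105, 108, 500], [1, 1, 1, 2, 1])
```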

Place, publisher, year, edition, pages
IOP Publishing, 2024
Keywords
autosegmentation, brain metastasis, computer vision, medical image segmentation, segment anything model
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-342390 (URN); 10.1088/2057-1976/ad17a7 (DOI); 001136607400001; 38118182 (PubMedID); 2-s2.0-85181584261 (Scopus ID)
Note

QC 20240122

Available from: 2024-01-17. Created: 2024-01-17. Last updated: 2024-07-24. Bibliographically approved
Fay, D., Magnússon, S., Sjölund, J. & Johansson, M. (2023). Adaptive Hyperparameter Selection for Differentially Private Gradient Descent. Transactions on Machine Learning Research, 2023(9)
2023 (English). In: Transactions on Machine Learning Research, E-ISSN 2835-8856, Vol. 2023, no 9. Article in journal (Refereed). Published
Abstract [en]

We present an adaptive mechanism for hyperparameter selection in differentially private optimization that addresses the inherent trade-off between utility and privacy. The mechanism eliminates the often unstructured and time-consuming manual effort of selecting hyperparameters and avoids the additional privacy costs that hyperparameter selection otherwise incurs on top of that of the actual algorithm. We instantiate our mechanism for noisy gradient descent on non-convex, convex and strongly convex loss functions, respectively, to derive schedules for the noise variance and step size. These schedules account for the properties of the loss function and adapt to convergence metrics such as the gradient norm. When using these schedules, we show that noisy gradient descent converges at essentially the same rate as its noise-free counterpart. Numerical experiments show that the schedules consistently perform well across a range of datasets without manual tuning.
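A minimal sketch of noisy gradient descent with a step size that adapts to a convergence metric (here, the current gradient norm) is shown below. The specific 1/(norm + c) rule, the noise level, and the quadratic test problem are all illustrative assumptions, not the schedules derived in the paper.

```python
import numpy as np

def noisy_gradient_descent(grad, x0, sigma, steps, seed=0):
    """Gradient descent with Gaussian perturbation and an adaptive step
    size: large gradients take confident steps, and steps shrink as the
    iterate approaches a (noisy) stationary point, damping the effect of
    the privacy noise near the optimum."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = grad(x) + rng.normal(0.0, sigma, x.shape)
        eta = 1.0 / (np.linalg.norm(g) + 10.0 * sigma + 1.0)  # illustrative rule
        x = x - eta * g
    return x

# Quadratic test problem f(x) = ||x||^2 / 2, whose gradient is x.
x_final = noisy_gradient_descent(lambda x: x, np.array([5.0, -5.0]),
                                 sigma=0.1, steps=500)
```

On this problem the iterate contracts geometrically toward the origin and then hovers at a small noise floor set by sigma and the step size.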

Place, publisher, year, edition, pages
Transactions on Machine Learning Research, 2023
National Category
Computer Sciences; Control Engineering
Identifiers
urn:nbn:se:kth:diva-361461 (URN); 2-s2.0-86000063307 (Scopus ID)
Note

QC 20250325

Available from: 2025-03-19. Created: 2025-03-19. Last updated: 2025-03-25. Bibliographically approved
Fay, D. (2023). Towards Scalable Machine Learning with Privacy Protection. (Licentiate dissertation). Stockholm: KTH Royal Institute of Technology
2023 (English). Licentiate thesis, monograph (Other academic)
Abstract [en]

The increasing size and complexity of datasets have accelerated the development of machine learning models and exposed the need for more scalable solutions. This thesis explores challenges associated with large-scale machine learning under data privacy constraints. With the growth of machine learning models, traditional privacy methods such as data anonymization are becoming insufficient. Thus, we delve into alternative approaches, such as differential privacy.

Our research addresses the following core areas in the context of scalable privacy-preserving machine learning: First, we examine the implications of data dimensionality on privacy for the application of medical image analysis. We extend the classification algorithm Private Aggregation of Teacher Ensembles (PATE) to deal with high-dimensional labels, and demonstrate that dimensionality reduction can be used to improve privacy. Second, we consider the impact of hyperparameter selection on privacy. Here, we propose a novel adaptive technique for hyperparameter selection in differentially private gradient-based optimization. Third, we investigate sampling-based solutions to scale differentially private machine learning to datasets with a large number of records. We study the privacy-enhancing properties of importance sampling, highlighting that it can outperform uniform sub-sampling not only in terms of sample efficiency but also in terms of privacy.

The three techniques developed in this thesis improve the scalability of machine learning while ensuring robust privacy protection, and aim to offer solutions for the effective and safe application of machine learning in large datasets.

Abstract [sv]

The ever-increasing size and complexity of datasets have accelerated the development of machine learning models and made the need for more scalable solutions increasingly apparent. This thesis explores three challenges associated with large-scale machine learning under data protection requirements. For large and complex machine learning models, traditional privacy methods such as data anonymization become insufficient. We therefore investigate alternative approaches, such as differential privacy.

Our research addresses the following challenges in scalable and privacy-aware machine learning: First, we examine how high data dimensionality affects privacy in medical image analysis. We extend the classification algorithm Private Aggregation of Teacher Ensembles (PATE) to handle high-dimensional labels and show that dimensionality reduction can be used to improve privacy. Second, we study how the choice of hyperparameters affects privacy. Here we propose a novel adaptive technique for hyperparameter selection in gradient-based optimization with differential privacy guarantees. Third, we examine sampling-based solutions for scaling differentially private machine learning to large datasets. We study the privacy-amplifying properties of importance sampling and show that it can outperform uniform subsampling, not only in terms of efficiency but also in terms of privacy.

The three techniques developed in this thesis improve the scalability of privacy-protected machine learning and aim to offer solutions for the effective and safe application of machine learning to large datasets.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2023. p. xi, 94
Series
TRITA-EECS-AVL ; 2023:79
Keywords
Machine Learning, Privacy, Differential Privacy, Dimensionality Reduction, Image Segmentation, Hyperparameter Selection, Adaptive Optimization, Privacy Amplification, Importance Sampling, Maskininlärning, Dataskydd, Differentiell Integritet, Dimensionsreducering, Bildsegmentering, Hyperparameterurval, Adaptiv Optimering, Integritetsförstärkning, Importance Sampling
National Category
Computer Sciences
Research subject
Computer Science; Information and Communication Technology
Identifiers
urn:nbn:se:kth:diva-338979 (URN); 978-91-8040-751-9 (ISBN)
Presentation
2023-11-21, D31, Lindstedtsvägen 9, Stockholm, 10:00 (English)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 3309
Note

QC 20231101

Available from: 2023-11-01. Created: 2023-10-31. Last updated: 2023-11-06. Bibliographically approved
Fay, D., Sjölund, J. & Oechtering, T. J. (2022). Private Learning via Knowledge Transfer with High-Dimensional Targets. In: 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Paper presented at 47th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 22-27, 2022, Singapore (pp. 3873-3877). Institute of Electrical and Electronics Engineers (IEEE)
2022 (English). In: 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 3873-3877. Conference paper, Published paper (Refereed)
Abstract [en]

Preventing unintentional leakage of information about the training set has high relevance for many machine learning tasks, such as medical image segmentation. While differential privacy (DP) offers mathematically rigorous protection, the high output dimensionality of segmentation tasks prevents the direct application of state-of-the-art algorithms such as Private Aggregation of Teacher Ensembles (PATE). In order to alleviate this problem, we propose to learn dimensionality-reducing transformations to map the prediction target into a bounded lower-dimensional space to reduce the required noise level during the aggregation stage. To this end, we assess the suitability of principal component analysis (PCA) and autoencoders. We conclude that autoencoders are an effective means to reduce the noise in the target variables.
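The noise-reduction effect of aggregating in a lower-dimensional space can be sketched as follows. A random orthonormal basis stands in for the PCA/autoencoder transformations studied in the paper, and the Gaussian perturbation here carries no formal privacy calibration; it only illustrates that perturbing k coordinates instead of d >> k requires far less total noise.

```python
import numpy as np

def noisy_low_dim_aggregate(teacher_targets, basis, sigma, seed=0):
    """Project each teacher's high-dimensional target onto a
    low-dimensional basis, average, add Gaussian noise there, and map
    back to the original space."""
    rng = np.random.default_rng(seed)
    codes = teacher_targets @ basis                  # (n_teachers, k)
    mean_code = codes.mean(axis=0)                   # aggregate in k dims
    noisy_code = mean_code + rng.normal(0.0, sigma, mean_code.shape)
    return noisy_code @ basis.T                      # back to dimension d

d, k, n_teachers = 64, 4, 10
rng = np.random.default_rng(1)
basis, _ = np.linalg.qr(rng.normal(size=(d, k)))    # orthonormal columns
targets = rng.normal(size=(n_teachers, d))          # stand-in teacher outputs
agg = noisy_low_dim_aggregate(targets, basis, sigma=0.1)
```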

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
International Conference on Acoustics Speech and Signal Processing ICASSP, ISSN 1520-6149
Keywords
Differential Privacy, Machine Learning, Knowledge Transfer, Image Segmentation, Compression
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-323026 (URN); 10.1109/ICASSP43922.2022.9747159 (DOI); 000864187904032; 2-s2.0-85131260920 (Scopus ID)
Conference
47th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), MAY 22-27, 2022, Singapore, SINGAPORE
Note

Part of proceedings: ISBN 978-1-6654-0540-9

QC 20230112

Available from: 2023-01-12. Created: 2023-01-12. Last updated: 2023-01-12. Bibliographically approved
Ghoddousiboroujeni, M., Fay, D., Dimitrakakis, C. & Kamgarpour, M. (2019). Privacy of Real-Time Pricing in Smart Grid. In: Proceedings of the IEEE Conference on Decision and Control. Paper presented at 58th IEEE Conference on Decision and Control, CDC 2019, 11 December 2019 through 13 December 2019 (pp. 5162-5167). Institute of Electrical and Electronics Engineers Inc.
2019 (English). In: Proceedings of the IEEE Conference on Decision and Control, Institute of Electrical and Electronics Engineers Inc., 2019, p. 5162-5167. Conference paper, Published paper (Refereed)
Abstract [en]

Installing smart meters to publish real-time electricity rates has been controversial because it may lead to privacy concerns. Dispatched rates include fine-grained data on aggregate electricity consumption in a zone and could potentially be used to infer a household's pattern of energy use or its occupancy. In this paper, we propose Blowfish privacy to protect the occupancy state of the houses connected to a smart grid. First, we introduce a Markov model of the relationship between electricity rate and electricity consumption. Next, we develop an algorithm that perturbs electricity rates before publishing them to ensure users' privacy. Last, the proposed algorithm is tested on data inspired by household occupancy models, and its performance is compared to an alternative solution.
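A minimal stand-in for the publish-with-noise step: Laplace noise scaled to the maximal influence one household's occupancy state has on the rate. This is not the paper's Blowfish mechanism or its Markov model, only an illustration of perturbing rates before publication; the sensitivity and budget values are hypothetical.

```python
import numpy as np

def perturb_rates(rates, sensitivity, epsilon, seed=0):
    """Publish electricity rates with additive Laplace noise of scale
    sensitivity/epsilon, where 'sensitivity' bounds how much any single
    household's occupancy can shift the dispatched rate."""
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    rates = np.asarray(rates, dtype=float)
    return rates + rng.laplace(0.0, scale, rates.shape)

published = perturb_rates([0.30, 0.35, 0.50], sensitivity=0.02, epsilon=1.0)
```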

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2019
Keywords
Costs, Electric power transmission networks, Electric power utilization, Markov processes, Smart power grids, Alternative solutions, Electricity-consumption, Energy use, Fine grained, Markov model, Privacy concerns, Real time pricing, Smart grid, Electric power measurement
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-274101 (URN); 10.1109/CDC40024.2019.9029924 (DOI); 000560779004117; 2-s2.0-85082452397 (Scopus ID)
Conference
58th IEEE Conference on Decision and Control, CDC 2019, 11 December 2019 through 13 December 2019
Note

QC 20200702

Part of ISBN 9781728113982

Available from: 2020-07-02. Created: 2020-07-02. Last updated: 2024-10-22. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-5530-2714