Federated Naive Bayes under Differential Privacy
Fdn Res & Technol Hellas, Inst Comp Sci, Iraklion, Greece.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0002-0223-8907
Fdn Res & Technol Hellas, Inst Comp Sci, Iraklion, Greece.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0003-4516-7317
2022 (English). In: Proceedings of the 19th International Conference on Security and Cryptography - SECRYPT / [ed] Di Vimercati, S. D. C.; Samarati, P., Scitepress, 2022, p. 170-180. Conference paper, Published paper (Refereed)
Abstract [en]

Growing privacy concerns regarding personal data disclosure contrast with the constant need for such information in data-driven applications. To address this issue, the combination of federated learning and differential privacy is now well established in the machine learning domain. These techniques make it possible to train deep neural networks without collecting the data while preventing information leakage. However, there are many scenarios where simpler and more robust machine learning models are preferable. In this paper, we present a federated and differentially private version of the Naive Bayes algorithm for classification. Our results show that, without data collection, the same performance as a centralized solution can be achieved on any dataset with only a slight increase in the privacy budget. Furthermore, if certain conditions are met, our federated solution can outperform a centralized approach.
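
The record does not include the paper's full method, but the idea sketched in the abstract, federated aggregation of Naive Bayes statistics under differential privacy, can be illustrated with a minimal Python sketch: each party perturbs its local class and feature-value counts with Laplace noise before sharing, and an aggregator fits a standard categorical Naive Bayes model on the summed noisy counts. The budget split, noise calibration, and aggregation below are simplifying assumptions made for illustration, not the authors' exact protocol.

# Minimal sketch of federated, differentially-private categorical Naive Bayes.
# The budget split and sensitivity handling are illustrative assumptions.
import numpy as np

def local_noisy_counts(X, y, n_classes, n_values, epsilon, rng):
    """Client-side step: count classes and feature values on local data,
    then add Laplace noise before sharing anything."""
    n_features = X.shape[1]
    class_counts = np.zeros(n_classes)
    feat_counts = np.zeros((n_features, n_values, n_classes))
    for xi, yi in zip(X, y):
        class_counts[yi] += 1
        for f, v in enumerate(xi):
            feat_counts[f, v, yi] += 1
    # One record changes one class count and one count per feature, so the
    # budget is split naively over 1 + n_features count queries.
    eps_q = epsilon / (1 + n_features)
    noisy_class = class_counts + rng.laplace(0.0, 1.0 / eps_q, class_counts.shape)
    noisy_feat = feat_counts + rng.laplace(0.0, 1.0 / eps_q, feat_counts.shape)
    return noisy_class, noisy_feat

def aggregate_and_fit(client_stats, alpha=1.0):
    """Aggregator step: it only ever sees noisy counts, which it sums and
    turns into smoothed log-probabilities for a standard Naive Bayes model."""
    class_counts = np.clip(sum(c for c, _ in client_stats), 0, None)
    feat_counts = np.clip(sum(f for _, f in client_stats), 0, None)
    log_prior = np.log((class_counts + alpha) / (class_counts + alpha).sum())
    cond = (feat_counts + alpha) / (feat_counts + alpha).sum(axis=1, keepdims=True)
    return log_prior, np.log(cond)

def predict(log_prior, log_cond, X):
    """Classify by summing the log prior and per-feature log likelihoods."""
    scores = np.array([log_prior + sum(log_cond[f, v] for f, v in enumerate(xi))
                       for xi in X])
    return scores.argmax(axis=1)

Because the aggregator works only with noisy sufficient statistics, no raw records ever leave a client; how the per-client noise affects accuracy relative to a centralized model is exactly the trade-off the paper quantifies.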

Place, publisher, year, edition, pages
Scitepress, 2022. p. 170-180
Keywords [en]
Federated Learning, Naive Bayes, Differential Privacy
National Category
Other Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-319088
DOI: 10.5220/0011275300003283
ISI: 000853004900014
Scopus ID: 2-s2.0-85174498579
OAI: oai:DiVA.org:kth-319088
DiVA, id: diva2:1698897
Conference
19th International Conference on Security and Cryptography (SECRYPT), JUL 11-13, 2022, Lisbon, Portugal
Note

QC 20220926

Part of proceedings: ISBN 978-989-758-590-6

Available from: 2022-09-26. Created: 2022-09-26. Last updated: 2024-08-28. Bibliographically approved
In thesis
1. Towards Decentralized Graph Learning
2023 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Current Machine Learning (ML) approaches typically follow either a centralized or a federated architecture. However, these architectures cannot easily keep up with some of the challenges introduced by recent trends, such as the growth in the number of IoT devices, increasing awareness of the privacy and security implications of extensive data collection, and the rise of graph-structured data and Graph Representation Learning. Systems based on either direct data collection or Federated Learning contain centralized, privileged components that may act as scalability bottlenecks and dangerous single points of failure, while requiring users to trust the privacy protections and security practices in place. The combination of these issues ultimately leads to data waste, as opportunities to extract insights from available data are missed and thus the full societal benefits of advanced data analytics and ML are not realized.

In this thesis, we argue for a paradigm shift towards a completely decentralized and trustless architecture for privacy-aware Graph Representation Learning, which employs Gossip Learning and other gossip-based peer-to-peer techniques to achieve high levels of scalability and resilience while reducing the risk of privacy leaks. We then identify and pursue three key research directions necessary to achieve our vision: lifting unrealistic assumptions on Gossip Learning, identifying and developing specific use cases that are enabled or improved by gossip-based decentralization, and overcoming the obstacles to the deployment of decentralized training and inference for Graph Representation Learning models.

Based on these key directions, our contributions are as follows. First, we analyze the robustness of Gossip Learning when several unrealistic but commonly assumed conditions are lifted. Then, we exploit Gossip Learning and gossip-based peer-to-peer protocols more generally across three use cases: the collaborative training of differentially private Naive Bayes classifiers across organizations holding sensitive user data; the construction of decentralized, privacy-preserving data marketplaces; and the development and decentralization of early-stage IoT botnet detection systems based on Graph Representation Learning. Finally, we introduce a general framework for the fully decentralized training of Graph Neural Networks, overcoming the typical requirement of these models to access non-local information during training and inference.

 The combination of these contributions removes major roadblocks towards decentralized graph learning, and also opens a new research direction aimed at further developing and optimizing the fully-decentralized training of Graph Representation Learning models.
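
As a rough illustration of the Gossip Learning techniques the thesis builds on, the following Python sketch shows one plausible gossip round: a peer sends its model to a randomly chosen neighbour, which averages the two models and then takes a local update step on its private data. The peer sampling, merge rule, and local objective are generic placeholders, not the thesis' specific protocols.

# Minimal sketch of a Gossip Learning round: models travel, data stays local.
import random
import numpy as np

class Peer:
    def __init__(self, model, local_data):
        self.model = model            # e.g. a weight vector
        self.local_data = local_data  # never leaves this peer

    def gossip_round(self, neighbours, lr=0.05):
        """Push the current model to one randomly sampled neighbour."""
        random.choice(neighbours).receive(self.model.copy(), lr)

    def receive(self, remote_model, lr):
        # Merge by averaging, one common choice in Gossip Learning.
        self.model = 0.5 * (self.model + remote_model)
        # Then take one local SGD step on private data.
        self.model -= lr * self._local_gradient()

    def _local_gradient(self):
        # Least-squares gradient as a stand-in for the real training objective.
        X, y = self.local_data
        return 2.0 * X.T @ (X @ self.model - y) / len(y)

# Example: a handful of peers gossip until their models roughly agree.
rng = np.random.default_rng(0)
peers = [Peer(np.zeros(3), (rng.normal(size=(20, 3)), rng.normal(size=20)))
         for _ in range(5)]
for _ in range(200):
    for p in peers:
        p.gossip_round([q for q in peers if q is not p])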

Abstract [sv]

Current machine learning (ML) approaches typically have either a centralized or a federated architecture. However, these architectures cannot easily keep up with some of the challenges introduced by recent trends, such as the growth in the number of IoT devices, increased awareness of the privacy and security implications of extensive data collection, and the rise of graph-structured data and Graph Representation Learning. Systems based on either direct data collection or Federated Learning contain centralized, privileged components that can become bottlenecks and dangerous single points of failure, while users must trust the privacy protections and security practices in place. The combination of these problems ultimately leads to inefficient use of data, as opportunities to extract insights from available data are missed, so the full societal benefits of advanced data analytics and ML are not realized.

In this thesis, we argue for a paradigm shift towards a completely decentralized and trustless architecture for privacy-aware Graph Representation Learning, which uses Gossip Learning and other gossip-based peer-to-peer techniques to achieve high levels of scalability and resilience while reducing the risk of privacy leaks. We then identify and pursue three key research directions necessary to achieve our vision: lifting unrealistic assumptions about Gossip Learning, identifying and developing specific use cases that are enabled or improved by gossip-based decentralization, and overcoming the obstacles to deploying decentralized training and inference for Graph Representation Learning models.

Based on these key directions, our contributions are as follows. First, we analyze the robustness of Gossip Learning when several unrealistic but commonly assumed conditions are lifted. We then exploit Gossip Learning and gossip-based peer-to-peer protocols more generally across three use cases: the collaborative training of differentially private Naive Bayes classifiers across organizations holding sensitive user data; the construction of decentralized, privacy-preserving data marketplaces; and the development and decentralization of early-stage IoT botnet detection systems based on Graph Representation Learning. Finally, we introduce a general framework for the fully decentralized training of Graph Neural Networks, removing the typical requirement of these models to access non-local information during training and inference.

The combination of these contributions removes major roadblocks towards decentralized graph learning, and also opens a new research direction aimed at further developing and optimizing the fully decentralized training of Graph Representation Learning models.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2023. p. vii, 59
Series
TRITA-EECS-AVL ; 2023:42
National Category
Computer Sciences
Research subject
Information and Communication Technology
Identifiers
urn:nbn:se:kth:diva-327016 (URN)
978-91-8040-584-3 (ISBN)
Public defence
2023-06-09, Sal-C, Kistagången 16, Stockholm, 09:00 (English)
Opponent
Supervisors
Funder
EU, Horizon 2020, 813162
Note

QC 20230517

Available from: 2023-05-17. Created: 2023-05-17. Last updated: 2023-05-26. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Giaretta, Lodovico; Girdzijauskas, Sarunas
