Private, Fair and Secure Collaborative Learning Framework for Human Activity Recognition
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0001-6780-7755
Qatar Computing Research Institute, Doha, Qatar. ORCID iD: 0000-0001-5783-8638
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0003-4516-7317
University of Insubria, Varese, Italy. ORCID iD: 0000-0002-7502-4731
2023 (English). In: UbiComp/ISWC '23 Adjunct: Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing & the 2023 ACM International Symposium on Wearable Computing, Cancun: Association for Computing Machinery (ACM), 2023, p. 352-358. Conference paper, Published paper (Refereed).
Abstract [en]

Federated learning (FL), a decentralized machine learning technique, enhances privacy by enabling multiple devices to collaboratively train a model without transferring data to a central server. FL is used in Human Activity Recognition (HAR) problems, where multiple users generating private wearable data share models with a server to learn a useful global model. However, FL may still compromise data privacy through the model information shared during training. Moreover, it adheres to a one-size-fits-all approach toward data privacy, potentially neglecting varied user preferences in collaborative scenarios such as HAR. In response to these challenges, this paper presents a collaborative learning framework integrating differential privacy (DP) and FL, thus providing a tailored approach to privacy protection. While some existing works integrate DP and FL, they do not allow clients to have different privacy preferences. In this work, we introduce a framework that lets each client set its own privacy preference and hence offers more flexibility in protecting privacy. In our framework, DP adds individually calibrated noise to each client's gradient updates. However, such noised updates can also be interpreted as an attack on the FL system. Defending against these attacks might exclude honest private clients from training altogether, posing a fairness concern; having no defensive measures, on the other hand, might allow malicious users to attack the system, posing a security issue. Thus, to address security and fairness, our framework incorporates a client selection strategy that protects the global model from malicious clients and provides fair model access to honest private clients. We demonstrate the effectiveness of our system on a HAR dataset and provide insights into our framework's privacy, utility, and fairness.
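As a rough, self-contained illustration of the two mechanisms named in the abstract (per-client noise and client selection), the Python sketch below clips a gradient update, adds Gaussian noise scaled by each client's own preference, and has the server keep only updates whose norms fall within a band around the median norm before averaging. The function names, tolerance value, and noise multipliers are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dp_noised_update(gradient, clip_norm, noise_multiplier, rng):
    """Clip the update to clip_norm, then add Gaussian noise scaled by the
    client's own noise multiplier (larger multiplier = stronger privacy)."""
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=gradient.shape)
    return clipped + noise

def select_and_average(updates, tolerance=3.0):
    """Keep updates whose norm stays within a band around the median norm,
    so heavily noised but honest clients are not excluded wholesale."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    keep = [u for u, n in zip(updates, norms) if n <= tolerance * np.median(norms)]
    return np.mean(keep, axis=0) if keep else np.zeros_like(updates[0])

# Toy round: three clients with different (hypothetical) privacy preferences.
rng = np.random.default_rng(0)
preferences = {"client_a": 0.5, "client_b": 1.0, "client_c": 2.0}  # noise multipliers
raw = rng.normal(size=10)                       # stand-in for a local gradient
noised = [dp_noised_update(raw, 1.0, m, rng) for m in preferences.values()]
global_update = select_and_average(noised)
```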

Place, publisher, year, edition, pages
Cancun: Association for Computing Machinery (ACM), 2023. p. 352-358
Keywords [en]
Privacy, Security, Machine Learning, Federated Learning
National Category
Engineering and Technology
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-339762
DOI: 10.1145/3594739.3610675
Scopus ID: 2-s2.0-85175448979
OAI: oai:DiVA.org:kth-339762
DiVA, id: diva2:1812854
Conference
2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2023 ACM International Symposium on Wearable Computing, UbiComp/ISWC 2023, Cancun, Quintana Roo, 8 October 2023
Note

Part of ISBN 9798400702006

QC 20231117

Available from: 2023-11-17. Created: 2023-11-17. Last updated: 2024-02-07. Bibliographically approved.
In thesis
1. Towards Trustworthy Machine Learning For Human Activity Recognition
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Human Activity Recognition presents a multifaceted challenge, encompassing the complexity of human activities, the diversity of sensors used, and the imperative to safeguard user data privacy. Recent advancements in machine learning, deep learning, and sensor technology have opened up new possibilities for human activity recognition. Wearable sensor-based human activity recognition involves collecting time-series data from various sensors, capturing intricate aspects of human activities; the core task is to classify human activities from this time-series data. Hence, this time-series classification problem demands efficient utilization of temporal properties. Moreover, while accurate prediction is crucial in human activity recognition, the reliability of predictions often goes unnoticed. Ensuring reliable predictions involves two issues: calibrating miscalibrated predictions that fail to represent the true likelihood of the data, and handling uncertain predictions. Modern deep learning models, used extensively in human activity recognition, often struggle with both. In addition to reliability concerns, machine learning algorithms employed in Human Activity Recognition are also plagued by privacy issues stemming from the use of sensitive activity data during model training. While existing techniques such as federated learning can provide some degree of privacy protection in these scenarios, they tend to adhere to a uniform concept of privacy and lack quantifiable privacy metrics that can be effectively conveyed to users and customized to their individual privacy preferences. Hence, in this thesis, we identify the challenges around the effective use of temporal data, reliability, and privacy in machine learning models used for wearable sensor-based human activity recognition. To tackle these challenges, we put forth novel solutions, striving to enhance the overall performance and trustworthiness of machine learning models employed in human activity recognition.

Firstly, to improve classification performance, we propose a new temporal ensembling framework that uses data temporality effectively. The framework accommodates various window sizes for the time-series data and trains an ensemble of deep-learning models over these windowed views, enhancing classification accuracy while preserving temporal information.
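A minimal sketch of this idea follows, assuming a sliding-window segmentation and one model per window size; the function names and the stand-in models are hypothetical and do not reproduce the thesis framework.

```python
import numpy as np

def sliding_windows(series, window, step):
    """Cut a 1-D sensor stream into overlapping windows of a given size."""
    starts = range(0, len(series) - window + 1, step)
    return np.stack([series[s:s + window] for s in starts])

def ensemble_predict(series, models, window_sizes):
    """Average class probabilities from one model per window size."""
    per_model = []
    for model, w in zip(models, window_sizes):
        windows = sliding_windows(series, window=w, step=w // 2)
        per_model.append(model(windows).mean(axis=0))  # model: windows -> per-window probs
    return np.mean(per_model, axis=0)

# Toy usage with stand-in "models" that return uniform probabilities over 3 classes.
series = np.sin(np.linspace(0, 20, 500))
dummy = lambda windows: np.full((len(windows), 3), 1.0 / 3.0)
print(ensemble_predict(series, models=[dummy, dummy], window_sizes=[32, 64]))
```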

Secondly, we address reliability through calibration and uncertainty estimation. The aforementioned temporal ensembling framework is used for calibration and uncertainty estimation. It provides well-calibrated predictions for human activity recognition and detects out-of-distribution activities, an important task of uncertainty estimation. Furthermore, we apply these methods to real-world scenarios, enhancing the reliability of human activity recognition models.
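One common way such an ensemble can flag out-of-distribution inputs is to threshold the entropy of its averaged class probabilities; the sketch below illustrates only that idea, with an assumed threshold, and is not the thesis implementation.

```python
import numpy as np

def predictive_entropy(mean_probs, eps=1e-12):
    """Shannon entropy of the averaged class probabilities (higher = more uncertain)."""
    p = np.clip(mean_probs, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

member_probs = np.array([[0.70, 0.20, 0.10],   # ensemble member 1
                         [0.40, 0.40, 0.20]])  # ensemble member 2
mean_probs = member_probs.mean(axis=0)
flag_as_ood = predictive_entropy(mean_probs) > 1.0  # illustrative threshold
```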

Thirdly, to address the privacy concern, we introduce a differentially private framework for time-series human activity recognition with quantifiable privacy guarantees. Additionally, we develop a collaborative federated learning framework that allows users to define their own privacy preferences, advancing privacy preservation in human activity recognition.
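For intuition on how a user-chosen privacy preference can be quantified, the sketch below uses the classical Gaussian mechanism (valid for epsilon < 1) to turn a per-user (epsilon, delta) preference into a noise scale; the mapping and parameter values are illustrative assumptions rather than the framework's actual privacy accounting.

```python
import numpy as np

def gaussian_noise_scale(epsilon, delta, sensitivity):
    """Classical Gaussian-mechanism noise scale for a chosen (epsilon, delta).
    Smaller epsilon (a stricter preference) yields more noise."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

# Three users with different privacy preferences and the same clipping sensitivity.
for user, eps in {"strict": 0.1, "moderate": 0.5, "relaxed": 0.9}.items():
    print(user, gaussian_noise_scale(epsilon=eps, delta=1e-5, sensitivity=1.0))
```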

These contributions address major challenges and promote improved classification, reliability, and privacy preservation in human activity recognition, helping us move towards trustworthy machine learning for human activity recognition and facilitating its use in realistic and practical scenarios.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2024. p. xii, 56
Series
TRITA-EECS-AVL ; 2024:12
National Category
Computer Sciences
Research subject
Computer Science; Information and Communication Technology
Identifiers
urn:nbn:se:kth:diva-343130 (URN)
978-91-8040-826-4 (ISBN)
Public defence
2024-03-06, https://kth-se.zoom.us/j/63687967257, Sal C, Kistagången 16, Kista, Stockholm, 13:00 (English)
Opponent
Supervisors
Funder
EU, Horizon 2020, 813162
Note

QC 20240207

Available from: 2024-02-07. Created: 2024-02-07. Last updated: 2024-02-29. Bibliographically approved.

Open Access in DiVA

fulltext (5450 kB), 37 downloads
File information
File name: FULLTEXT01.pdf
File size: 5450 kB
Checksum (SHA-512): 07a227c30f7710e7110f9a682ececd48921ee955063ab680c9c7086e05335ce918f8e9a0b2273524d4fac5930c060f53b0686292f4a8be17fd25326211a2b4fd
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Roy, Debaditya; Girdzijauskas, Sarunas

Search in DiVA

By author/editor
Roy, Debaditya; Lekssays, Ahmed; Girdzijauskas, Sarunas; Carminati, Barbara; Ferrari, Elena
By organisation
Software and Computer systems, SCS
Engineering and Technology

Search outside of DiVA

Google
Google Scholar
Total: 37 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 242 hits