kth.se Publications
Girdzijauskas, Sarunas (ORCID iD: orcid.org/0000-0003-4516-7317)
Publications (10 of 90)
Isaksson, M., Listo Zec, E., Coster, R., Gillblad, D. & Girdzijauskas, S. (2023). Adaptive Expert Models for Federated Learning. In: Goebel, R., Yu, H., Faltings, B., Fan, L. & Xiong, Z. (Eds.), Trustworthy Federated Learning: First International Workshop, FL 2022. Paper presented at Trustworthy Federated Learning - First International Workshop, FL 2022, Held in Conjunction with IJCAI 2022, Vienna, Austria, July 23, 2022 (pp. 1-16). Springer Nature, 13448
Adaptive Expert Models for Federated Learning
2023 (English). In: Trustworthy Federated Learning: First International Workshop, FL 2022 / [ed] Goebel, R., Yu, H., Faltings, B., Fan, L. & Xiong, Z., Springer Nature, 2023, Vol. 13448, p. 1-16. Conference paper, Published paper (Refereed)
Abstract [en]

Federated Learning (FL) is a promising framework for distributed learning when data is private and sensitive. However, the state-of-the-art solutions in this framework are not optimal when data is heterogeneous and non-IID. We propose a practical and robust approach to personalization in FL that adjusts to heterogeneous and non-IID data by balancing exploration and exploitation of several global models. To achieve personalization, we use a Mixture of Experts (MoE) that learns to group clients that are similar to each other while using the global models more efficiently. We show that our approach achieves an accuracy up to 29.78% better than the state-of-the-art and up to 4.38% better than a local model in a pathological non-IID setting, even though we tune our approach in the IID setting.
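For readers who want a concrete picture of the gating idea, the sketch below is a minimal, hypothetical illustration (not the paper's implementation): each client weights a set of frozen global "expert" models by how well they fit its own validation data and predicts with the resulting mixture. The softmax-over-negative-loss gate, names, and interfaces are simplifying assumptions.

```python
import numpy as np

def softmax(z, temp=1.0):
    z = np.asarray(z, dtype=float) / temp
    e = np.exp(z - z.max())
    return e / e.sum()

def local_gate_weights(experts, X_val, y_val, temp=0.5):
    # Score every frozen expert (global or local model) by its negative
    # log-likelihood on this client's own validation split; lower loss
    # means a larger mixing weight (a crude stand-in for a learned gate).
    losses = []
    for predict_proba in experts:                       # expert: X -> (n, classes)
        p = predict_proba(X_val)
        nll = -np.log(p[np.arange(len(y_val)), y_val] + 1e-12).mean()
        losses.append(nll)
    return softmax(-np.array(losses), temp)

def moe_predict(experts, weights, X):
    # Mixture-of-experts prediction: convex combination of class probabilities.
    preds = np.stack([e(X) for e in experts], axis=0)   # (k, n, classes)
    return np.tensordot(weights, preds, axes=1)         # (n, classes)
```

A learned gating network, as in a full MoE, would replace the loss-based weights with per-example weights, but the mixture step stays the same.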

Place, publisher, year, edition, pages
Springer Nature, 2023
Series
Lecture Notes in Artificial Intelligence, ISSN 2945-9133
Keywords
Federated learning, Personalization, Privacy preserving
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-330493 (URN), 10.1007/978-3-031-28996-5_1 (DOI), 000999818400001, 2-s2.0-85152565856 (Scopus ID)
Conference
Trustworthy Federated Learning - First International Workshop, FL 2022, Held in Conjunction with IJCAI 2022, Vienna, Austria, July 23, 2022
Note

Part of proceedings, ISBN: 978-3-031-28995-8, 978-3-031-28996-5

QC 20230630

Available from: 2023-06-30. Created: 2023-06-30. Last updated: 2023-06-30. Bibliographically approved
Roy, D., Komini, V. & Girdzijauskas, S. (2023). Classifying falls using out-of-distribution detection in human activity recognition. AI Communications, 36(4), 251-267
Classifying falls using out-of-distribution detection in human activity recognition
2023 (English). In: AI Communications, ISSN 0921-7126, E-ISSN 1875-8452, Vol. 36, no 4, p. 251-267. Article in journal (Refereed), Published
Abstract [en]

As the research community focuses on improving the reliability of deep learning, identifying out-of-distribution (OOD) data has become crucial. Detecting OOD inputs at test/prediction time lets the model flag inputs whose discriminative features it has never seen. This capability increases the model's reliability, since the model then provides a class prediction only for incoming data that resembles the training data. Although OOD detection is well established in computer vision, it is relatively unexplored in other areas, such as time-series-based human activity recognition (HAR). Uncertainty has been a critical driver for OOD detection in vision-based models, and the same component proves effective in time-series applications. In this work, we propose an ensemble-based temporal learning framework to address the OOD detection problem in HAR with time-series data. First, we define different types of OOD for HAR that arise from realistic scenarios. Then we apply our ensemble-based temporal learning framework, incorporating uncertainty, to detect OODs for the defined HAR workloads. This formulation also allows a novel approach to fall detection: we train our model on non-fall activities and detect falls as OOD. Our method shows state-of-the-art performance on a fall detection task while using far less data. Furthermore, the ensemble framework outperformed the traditional deep-learning baseline on the OOD detection task across all the other chosen datasets.
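As a rough illustration of the fall-as-OOD idea, the sketch below flags a time-series window as OOD when an ensemble trained only on non-fall activities is uncertain about it, using predictive entropy as the uncertainty score. The model interface and threshold are assumptions, not the paper's exact framework.

```python
import numpy as np

def predictive_entropy(mean_probs):
    # Entropy of the ensemble-averaged class distribution (higher = less certain).
    p = np.clip(mean_probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def detect_falls_as_ood(ensemble, windows, threshold):
    # `ensemble`: models trained on non-fall activities, each exposing
    # predict_proba(windows) -> (n_windows, n_classes).
    probs = np.stack([m.predict_proba(windows) for m in ensemble], axis=0)
    mean_probs = probs.mean(axis=0)
    uncertainty = predictive_entropy(mean_probs)
    is_fall_candidate = uncertainty > threshold        # OOD -> possible fall
    activity = mean_probs.argmax(axis=-1)              # label for in-distribution windows
    return is_fall_candidate, activity
```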

Place, publisher, year, edition, pages
IOS Press, 2023
Keywords
deep learning, human activity recognition, Out-of-distribution detection, time-series classification, uncertainty estimation
National Category
Computer Sciences; Bioinformatics (Computational Biology)
Identifiers
urn:nbn:se:kth:diva-339522 (URN), 10.3233/AIC-220205 (DOI), 001087274200001, 2-s2.0-85175210057 (Scopus ID)
Note

QC 20231114

Available from: 2023-11-14. Created: 2023-11-14. Last updated: 2024-04-11. Bibliographically approved
Samy, A., Kefato, Z. T. & Girdzijauskas, S. (2023). Data-Driven Self-Supervised Graph Representation Learning. In: ECAI 2023: 26th European Conference on Artificial Intelligence, including 12th Conference on Prestigious Applications of Intelligent Systems, PAIS 2023 - Proceedings. Paper presented at 26th European Conference on Artificial Intelligence, ECAI 2023, Krakow, Poland, Sep 30 2023 - Oct 4 2023 (pp. 629-636). IOS Press BV
Data-Driven Self-Supervised Graph Representation Learning
2023 (English). In: ECAI 2023: 26th European Conference on Artificial Intelligence, including 12th Conference on Prestigious Applications of Intelligent Systems, PAIS 2023 - Proceedings, IOS Press BV, 2023, p. 629-636. Conference paper, Published paper (Refereed)
Abstract [en]

Self-supervised graph representation learning (SSGRL) is a representation learning paradigm used to reduce or avoid manual labeling. An essential part of SSGRL is graph data augmentation. Existing methods usually rely on heuristics identified through trial and error that are effective only within certain application domains, and it is not clear why one heuristic is better than another. Moreover, recent studies have argued against some techniques (e.g., dropout), which can change the properties of molecular graphs or destroy relevant signals in graph-based document classification tasks. In this study, we propose a novel data-driven SSGRL approach that automatically learns a suitable graph augmentation from the signal encoded in the graph (i.e., the nodes' predictive features and topological information). We propose two complementary approaches that produce learnable feature and topological augmentations: the former learns a multi-view augmentation of the node features, and the latter learns a high-order view of the topology. Moreover, the augmentations are learned jointly with the representation. Our approach is general in that it can be applied to homogeneous and heterogeneous graphs. We perform extensive experiments on node classification (using nine homogeneous and heterogeneous datasets) and graph property prediction (using another eight datasets). The results show that the proposed method matches or outperforms the SOTA SSGRL baselines and performs similarly to semi-supervised methods. The anonymised source code is available at https://github.com/AhmedESamy/dsgrl/
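The snippet below is a loose, illustrative sketch of what "learnable feature augmentation optimized jointly with the representation" can look like in PyTorch. The two-view module, encoder, and agreement loss are assumptions for illustration only (a practical objective would add terms to prevent representation collapse); this is not the objective used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableFeatureAugmentation(nn.Module):
    # Two learned transforms of the node features play the role of the
    # hand-crafted augmentations used by heuristic methods.
    def __init__(self, dim):
        super().__init__()
        self.view1 = nn.Linear(dim, dim)
        self.view2 = nn.Linear(dim, dim)

    def forward(self, x):
        return F.relu(self.view1(x)), F.relu(self.view2(x))

class Encoder(nn.Module):
    def __init__(self, dim, hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))

    def forward(self, x):
        return self.net(x)

# One joint optimization step over augmentation and encoder parameters.
x = torch.randn(100, 32)                    # placeholder node features
aug, enc = LearnableFeatureAugmentation(32), Encoder(32, 64)
opt = torch.optim.Adam(list(aug.parameters()) + list(enc.parameters()), lr=1e-3)
v1, v2 = aug(x)
loss = 1.0 - F.cosine_similarity(enc(v1), enc(v2), dim=-1).mean()  # view agreement
opt.zero_grad(); loss.backward(); opt.step()
```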

Place, publisher, year, edition, pages
IOS Press BV, 2023
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-339683 (URN), 10.3233/FAIA230325 (DOI), 2-s2.0-85175858097 (Scopus ID)
Conference
26th European Conference on Artificial Intelligence, ECAI 2023, Krakow, Poland, Sep 30 2023 - Oct 4 2023
Note

Part of ISBN 9781643684369

QC 20231116

Available from: 2023-11-16. Created: 2023-11-16. Last updated: 2023-11-16. Bibliographically approved
Listo Zec, E., Ekblom, E., Willbo, M., Mogren, O. & Girdzijauskas, S. (2023). Decentralized Adaptive Clustering of Deep Nets is Beneficial for Client Collaboration. In: Goebel, R., Yu, H., Faltings, B., Fan, L. & Xiong, Z. (Eds.), FL 2022: Trustworthy Federated Learning. Paper presented at 1st International Workshop on Trustworthy Federated Learning (FL), JUL 23, 2022, Vienna, AUSTRIA (pp. 59-71). Springer Nature, 13448
Decentralized Adaptive Clustering of Deep Nets is Beneficial for Client Collaboration
2023 (English). In: FL 2022: Trustworthy Federated Learning / [ed] Goebel, R., Yu, H., Faltings, B., Fan, L. & Xiong, Z., Springer Nature, 2023, Vol. 13448, p. 59-71. Conference paper, Published paper (Refereed)
Abstract [en]

We study the problem of training personalized deep learning models in a decentralized peer-to-peer setting, focusing on the case where data distributions differ between clients and each client has its own local learning task. We study both covariate and label shift, and our contribution is an algorithm which, for each client, finds beneficial collaborations based on a similarity estimate for the local task. Our method does not rely on hyperparameters that are hard to estimate, such as the number of client clusters, but instead continuously adapts to the network topology using soft cluster assignment based on a novel adaptive gossip algorithm. We test the proposed method in various settings where data is not independent and identically distributed among the clients. The experimental evaluation shows that the proposed method performs better than previous state-of-the-art algorithms for this problem setting and handles well situations where previous methods fail.
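To make the "soft cluster assignment" idea concrete, here is a minimal, hypothetical sketch for a single client in one gossip round: neighbors whose models fit the client's local data well receive larger mixing weights. The loss-based similarity and temperature are illustrative assumptions, not the exact similarity estimate from the paper.

```python
import numpy as np

def soft_neighbor_weights(local_data, neighbor_models, loss_fn, temp=0.1):
    # Score each neighbor's model by its loss on this client's own data;
    # lower loss (a more similar local task) -> larger soft-assignment weight.
    losses = np.array([loss_fn(model, local_data) for model in neighbor_models])
    z = -losses / temp
    z -= z.max()
    w = np.exp(z)
    return w / w.sum()

def merge_neighbor_models(neighbor_params, weights):
    # Weighted average of the neighbors' (flattened) parameter vectors.
    stacked = np.stack(neighbor_params, axis=0)      # (n_neighbors, n_params)
    return np.tensordot(weights, stacked, axes=1)
```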

Place, publisher, year, edition, pages
Springer Nature, 2023
Series
Lecture Notes in Artificial Intelligence, ISSN 2945-9133
Keywords
decentralized learning, federated learning, deep learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-330522 (URN), 10.1007/978-3-031-28996-5_5 (DOI), 000999818400005, 2-s2.0-85152516432 (Scopus ID)
Conference
1st International Workshop on Trustworthy Federated Learning (FL), JUL 23, 2022, Vienna, AUSTRIA
Note

QC 20230630

Available from: 2023-06-30. Created: 2023-06-30. Last updated: 2023-06-30. Bibliographically approved
Bonvalet, M., Kefato, Z. T. & Girdzijauskas, S. (2023). Graph2Feat: Inductive Link Prediction via Knowledge Distillation. In: ACM Web Conference 2023: Companion of the World Wide Web Conference, WWW 2023. Paper presented at 2023 World Wide Web Conference, WWW 2023, Austin, United States of America, Apr 30 2023 - May 4 2023 (pp. 805-812). Association for Computing Machinery (ACM)
Graph2Feat: Inductive Link Prediction via Knowledge Distillation
2023 (English). In: ACM Web Conference 2023: Companion of the World Wide Web Conference, WWW 2023, Association for Computing Machinery (ACM), 2023, p. 805-812. Conference paper, Published paper (Refereed)
Abstract [en]

Link prediction between two nodes is a critical task in graph machine learning. Most approaches are based on variants of graph neural networks (GNNs) that focus on transductive link prediction and have high inference latency. However, many real-world applications require fast inference over new nodes in inductive settings, where no connectivity information is available for these nodes; node features then provide an indispensable alternative. To that end, we propose Graph2Feat, which enables inductive link prediction by exploiting knowledge distillation (KD) through the student-teacher learning framework. In particular, Graph2Feat learns to match the representations of a lightweight student multi-layer perceptron (MLP) with those of a more expressive teacher GNN while learning to predict missing links based on the node features, thus attaining both the GNN's expressiveness and the MLP's fast inference. Furthermore, our approach is general: it is suitable for transductive and inductive link prediction on different types of graphs, regardless of whether they are homogeneous or heterogeneous, directed or undirected. We carry out extensive experiments on seven real-world datasets, including homogeneous and heterogeneous graphs. Our experiments demonstrate that Graph2Feat significantly outperforms SOTA methods in terms of AUC and average precision on homogeneous and heterogeneous graphs. Finally, Graph2Feat has the lowest inference time among the SOTA methods, with a 100x speedup compared to GNNs. The code and datasets are available on GitHub.
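A compact, hypothetical sketch of the student-teacher setup described above: a feature-only MLP student is trained to match precomputed teacher GNN embeddings while also scoring links with a dot-product decoder. The loss weighting, shapes, and names are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentMLP(nn.Module):
    # Feature-only student: can embed brand-new nodes with no connectivity info.
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU(),
                                 nn.Linear(emb_dim, emb_dim))

    def forward(self, x):
        return self.net(x)

def distillation_step(student, opt, x, teacher_emb, pos_edges, neg_edges, alpha=0.5):
    # pos_edges / neg_edges: LongTensors of shape (2, num_edges).
    z = student(x)
    kd_loss = F.mse_loss(z, teacher_emb)                      # match teacher GNN embeddings
    def scores(edges):
        return (z[edges[0]] * z[edges[1]]).sum(dim=-1)        # dot-product link decoder
    logits = torch.cat([scores(pos_edges), scores(neg_edges)])
    labels = torch.cat([torch.ones(pos_edges.shape[1]),
                        torch.zeros(neg_edges.shape[1])])
    link_loss = F.binary_cross_entropy_with_logits(logits, labels)
    loss = alpha * kd_loss + (1 - alpha) * link_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```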

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
graph representation learning, heterogeneous networks, inductive link prediction, knowledge distillation
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-333310 (URN), 10.1145/3543873.3587596 (DOI), 001124276300163, 2-s2.0-85159575698 (Scopus ID)
Conference
2023 World Wide Web Conference, WWW 2023, Austin, United States of America, Apr 30 2023 - May 4 2023
Note

Part of ISBN 9781450394161

QC 20230801

Available from: 2023-08-01. Created: 2023-08-01. Last updated: 2024-03-05. Bibliographically approved
Jin, Y., Daoutis, M., Girdzijauskas, Š. & Gionis, A. (2023). Learning Cellular Coverage from Real Network Configurations using GNNs. In: 2023 IEEE 97th Vehicular Technology Conference, VTC 2023-Spring - Proceedings. Paper presented at 97th IEEE Vehicular Technology Conference, VTC 2023-Spring, Florence, Italy, Jun 20 2023 - Jun 23 2023. Institute of Electrical and Electronics Engineers (IEEE)
Learning Cellular Coverage from Real Network Configurations using GNNs
2023 (English). In: 2023 IEEE 97th Vehicular Technology Conference, VTC 2023-Spring - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Cellular coverage quality estimation has been a critical task for self-organized networks. In real-world scenarios, deep-learning-powered coverage quality estimation methods cannot scale up to large areas because little ground truth can be provided during network design and optimization. In addition, they fall short in producing expressive embeddings that adequately capture the variations of the cells' configurations. To deal with this challenge, we formulate the task as a graph representation so that we can apply state-of-the-art graph neural networks, which show exemplary performance. We propose a novel training framework that produces quality cell configuration embeddings for estimating multiple KPIs, and we show that it is capable of generalising to large (area-wide) scenarios given very few labeled cells. We show that our framework yields accuracy comparable to models trained on massively labeled samples.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Cellular Coverage Estimation, Few-shot Learning, Graph Neural Network, Self-supervised Learning
National Category
Telecommunications
Identifiers
urn:nbn:se:kth:diva-336725 (URN), 10.1109/VTC2023-Spring57618.2023.10199469 (DOI), 2-s2.0-85169786270 (Scopus ID)
Conference
97th IEEE Vehicular Technology Conference, VTC 2023-Spring, Florence, Italy, Jun 20 2023 - Jun 23 2023
Note

Part of ISBN 9798350311143

QC 20230919

Available from: 2023-09-19. Created: 2023-09-19. Last updated: 2023-11-27. Bibliographically approved
Samy, A. E. & Girdzijauskas, S. (2023). Mitigating Sybil Attacks in Federated Learning. In: Meng, W., Yan, Z. & Piuri, V. (Eds.), INFORMATION SECURITY PRACTICE AND EXPERIENCE, ISPEC 2023. Paper presented at 18th International Conference on Information Security Practice and Experience (ISPEC), AUG 24-25, 2023, Copenhagen, DENMARK (pp. 36-51). Springer Nature, 14341
Mitigating Sybil Attacks in Federated Learning
2023 (English). In: INFORMATION SECURITY PRACTICE AND EXPERIENCE, ISPEC 2023 / [ed] Meng, W., Yan, Z. & Piuri, V., Springer Nature, 2023, Vol. 14341, p. 36-51. Conference paper, Published paper (Refereed)
Abstract [en]

Federated learning (FL) is a distributed learning paradigm that facilitates a basic level of data privacy, as the clients do not have to share their raw data. However, because the clients send local model updates, FL has a larger attack surface: attackers can share poisoning updates with the aggregation server. In this work, we focus on Sybil attacks, a type of poisoning attack in which attackers assume multiple identities to overpower the honest clients in the system. In particular, we define a cosine-similarity-based measurement to track the clients' behavior. To mitigate Sybil attacks, we propose FedSybil, a behavior-based defense with a reputation mechanism for FL under independent and identically distributed (IID) and non-IID data settings. In extensive experiments, we demonstrate the effectiveness of our approach, with model accuracy improved by more than 50% over state-of-the-art approaches under attack.
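The toy sketch below shows one way a cosine-similarity-plus-reputation defense can be wired up: near-duplicate updates (a typical Sybil signature) lose reputation, and aggregation is weighted by reputation. The thresholds and update rule are illustrative assumptions, not FedSybil's exact mechanism.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def update_reputations(updates, reputations, sim_threshold=0.9, decay=0.5):
    # Penalize clients whose update is almost identical to another client's
    # update (suspiciously coordinated behavior); let the rest slowly recover.
    n = len(updates)
    for i in range(n):
        max_sim = max((cosine(updates[i], updates[j]) for j in range(n) if j != i),
                      default=0.0)
        if max_sim > sim_threshold:
            reputations[i] *= decay
        else:
            reputations[i] = min(1.0, reputations[i] + 0.05)
    return reputations

def reputation_weighted_aggregate(updates, reputations):
    w = np.asarray(reputations, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.stack(updates), axes=1)   # reputation-weighted update
```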

Place, publisher, year, edition, pages
Springer Nature, 2023
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 14341
Keywords
Federated learning, neural networks, poisoning attacks, security
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-344271 (URN), 10.1007/978-981-99-7032-2_3 (DOI), 001166763200003
Conference
18th International Conference on Information Security Practice and Experience (ISPEC), AUG 24-25, 2023, Copenhagen, DENMARK
Note

QC 20240312

Part of ISBN: 978-981-99-7031-5, 978-981-99-7032-2

Available from: 2024-03-12. Created: 2024-03-12. Last updated: 2024-03-12. Bibliographically approved
Pozzoli, S. & Girdzijauskas, S. (2023). On Learning Embeddings at the Intersection of Communities and Roles. In: Proceedings - 2023 10th International Conference on Social Networks Analysis, Management and Security, SNAMS 2023. Paper presented at 10th International Conference on Social Networks Analysis, Management and Security, SNAMS 2023, Abu Dhabi, United Arab Emirates, Nov 21 2023 - Nov 24 2023. Institute of Electrical and Electronics Engineers (IEEE)
On Learning Embeddings at the Intersection of Communities and Roles
2023 (English). In: Proceedings - 2023 10th International Conference on Social Networks Analysis, Management and Security, SNAMS 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Graph Neural Networks (GNNs) have established themselves as the state of the art for encoding the nodes of a graph into a low-dimensional space by extracting features from the connectivity structure of the graph as well as the features of the nodes. However, since the embedding of a node is updated according to the information aggregated from its immediate neighborhood, a GNN tends to capture the community memberships of the nodes better than the other side of the coin: the role memberships, which quantify the extent to which nodes carry out specific functions from a structural point of view. In this paper, we present RC-GNNs, a category of GNNs designed to learn embeddings from the community and role memberships as well as the features of the nodes. RC-GNNs learn from different versions of the same graph, in which the nodes are connected according to either the community or the role memberships. Results show that, compared with models such as k-hop GNNs, k-GNNs, and MixHop, RC-GNNs are up to 4% more accurate in classifying the nodes of CiteSeer, Cora, and PubMed and up to 3% more accurate in classifying the graphs of MUTAG, PROTEINS, and Synthie.
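A loose sketch of the "two versions of the same graph" idea: run one GCN-style branch over a community-based adjacency matrix and another over a role-based one, then concatenate the node embeddings. The single-layer branches and NumPy types are illustrative assumptions, not the RC-GNN architecture itself.

```python
import numpy as np

def normalized_adj(A):
    # Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_norm, X, W):
    return np.maximum(A_norm @ X @ W, 0.0)            # ReLU(A_hat X W)

def community_role_embedding(A_community, A_role, X, W_c, W_r):
    # One branch per graph view; concatenation keeps both signals.
    h_c = gcn_layer(normalized_adj(A_community), X, W_c)
    h_r = gcn_layer(normalized_adj(A_role), X, W_r)
    return np.concatenate([h_c, h_r], axis=1)
```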

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Community Detection, Graph Classification, Graph Clustering, Graph Representation Learning, Node Classification, Role Discovery, Semi-Supervised Learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-343178 (URN), 10.1109/SNAMS60348.2023.10375479 (DOI), 2-s2.0-85183474186 (Scopus ID)
Conference
10th International Conference on Social Networks Analysis, Management and Security, SNAMS 2023, Abu Dhabi, United Arab Emirates, Nov 21 2023 - Nov 24 2023
Note

Part of ISBN: 979-8-3503-1890-6

QC 20240209

Available from: 2024-02-08. Created: 2024-02-08. Last updated: 2024-02-09. Bibliographically approved
Roy, D., Lekssays, A., Girdzijauskas, S., Carminati, B. & Ferrari, E. (2023). Private, Fair and Secure Collaborative Learning Framework for Human Activity Recognition. In: UbiComp/ISWC '23 Adjunct: Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing & the 2023 ACM International Symposium on Wearable Computing. Paper presented at 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2023 ACM International Symposium on Wearable Computing, UbiComp/ISWC 2023, Cancun, Quintana Roo, 8 October 2023 (pp. 352-358). Cancun: Association for Computing Machinery (ACM)
Private, Fair and Secure Collaborative Learning Framework for Human Activity Recognition
2023 (English). In: UbiComp/ISWC '23 Adjunct: Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing & the 2023 ACM International Symposium on Wearable Computing, Cancun: Association for Computing Machinery (ACM), 2023, p. 352-358. Conference paper, Published paper (Refereed)
Abstract [en]

Federated learning (FL), a decentralized machine learning technique, enhances privacy by enabling multiple devices to collaboratively train a model without transferring data to a central server. FL is used in Human Activity Recognition (HAR) problems, where multiple users generating private wearable data share models with a server to learn a useful global model. However, FL may compromise data privacy through the model information shared during training. Moreover, it adheres to a one-size-fits-all approach toward data privacy, potentially neglecting varied user preferences in collaborative scenarios such as HAR. In response to these challenges, this paper presents a collaborative learning framework integrating differential privacy (DP) and FL, thus providing a tailored approach to privacy protection. While some existing works integrate DP and FL, they do not allow clients to have different privacy preferences. In this work, we introduce a framework that allows different clients to have different privacy preferences and hence more flexibility in terms of privacy. In our framework, DP adds individualized noise to each client's gradient updates for privacy. However, such noised updates can also be interpreted as an attack on the FL system. Defending against these attacks might exclude honest private clients from training altogether, posing a fairness concern; on the other hand, having no defensive measures might allow malicious users to attack the system, posing a security issue. Thus, to address security and fairness, our framework incorporates a client selection strategy that protects the global model from malicious clients and provides fair model access to honest private clients. We demonstrate the effectiveness of our system on a HAR dataset and provide insights into our framework's privacy, utility, and fairness.
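As a minimal sketch of per-client (individualized) DP noise before aggregation: each client clips its update and adds Gaussian noise scaled by its own epsilon. The calibration shown is the standard textbook Gaussian-mechanism bound; the client names and epsilons are hypothetical, and the paper's framework additionally involves client selection, which is omitted here.

```python
import numpy as np

def privatize_update(update, clip_norm, epsilon, delta=1e-5,
                     rng=np.random.default_rng()):
    # Clip to bound sensitivity, then add Gaussian noise calibrated with the
    # classic (epsilon, delta) bound: sigma = C * sqrt(2 ln(1.25/delta)) / epsilon.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# Hypothetical clients with different privacy preferences privatize locally,
# and the server averages whatever it receives (FedAvg-style).
client_epsilons = {"client_a": 1.0, "client_b": 8.0}
updates = {name: np.random.randn(10) for name in client_epsilons}
noisy = {name: privatize_update(u, clip_norm=1.0, epsilon=client_epsilons[name])
         for name, u in updates.items()}
global_update = np.mean(list(noisy.values()), axis=0)
```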

Place, publisher, year, edition, pages
Cancun: Association for Computing Machinery (ACM), 2023
Keywords
Privacy, Security, Machine Learning, Federated Learning
National Category
Engineering and Technology
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-339762 (URN), 10.1145/3594739.3610675 (DOI), 2-s2.0-85175448979 (Scopus ID)
Conference
2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2023 ACM International Symposium on Wearable Computing, UbiComp/ISWC 2023, Cancun, Quintana Roo, 8 October 2023
Note

Part of ISBN 9798400702006

QC 20231117

Available from: 2023-11-17. Created: 2023-11-17. Last updated: 2024-02-07. Bibliographically approved
Roy, D. & Girdzijauskas, S. (2023). Temporal Differential Privacy for Human Activity Recognition. Paper presented at 2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA), Thessaloniki, Greece, 9-13 October 2023 (pp. 1-10). Institute of Electrical and Electronics Engineers (IEEE)
Temporal Differential Privacy for Human Activity Recognition
2023 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Differential privacy (DP) is a method to protect individual privacy when data is used for downstream analytical tasks. The core ability of DP to quantify privacy numerically separates it from other privacy-preserving methods. In human activity recognition (HAR), differential privacy can protect the privacy of users who contribute their data to train machine learning algorithms. While some methods have been developed for privacy protection in such cases, none quantifies privacy and integrates seamlessly into machine learning frameworks the way DP does. This paper proposes a DP framework called TEMPDIFF (short for temporal differential privacy), which guarantees privacy-preserving human activity recognition for wearable time-series data with competitive classification performance and works with any machine-learning/deep-learning method. TEMPDIFF capitalizes on the temporal characteristics of wearable sensor data to improve the modelling task, which enhances the privacy-utility tradeoff. TEMPDIFF uses ensembling and a novel temporal partitioning algorithm for time-series data to ensure optimal training of the ensemble models. In TEMPDIFF, consensus through ensembling and the addition of controlled Laplacian noise obscure the sensitive information used to train the models, guaranteeing strict levels of differential privacy. The proposed method is evaluated on two popular HAR datasets and outperforms state-of-the-art approaches on both datasets in terms of classification accuracy and privacy budget.
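The fragment below is a loose illustration of two building blocks the abstract mentions, contiguous temporal partitioning and Laplace-noised ensemble consensus, in the spirit of noisy-vote aggregation. The noise scale and vote rule are illustrative assumptions; a real deployment would calibrate them to the exact sensitivity and privacy accounting used in the paper.

```python
import numpy as np

def temporal_partitions(series, n_parts):
    # Contiguous splits preserve temporal order, unlike random sharding.
    return np.array_split(series, n_parts)

def noisy_consensus(member_predictions, n_classes, epsilon,
                    rng=np.random.default_rng()):
    # Aggregate the per-partition models by vote, then perturb the vote
    # counts with Laplace noise before taking the argmax.
    counts = np.bincount(member_predictions, minlength=n_classes).astype(float)
    counts += rng.laplace(0.0, 1.0 / epsilon, size=n_classes)
    return int(np.argmax(counts))
```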

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Differential Privacy, Machine Learning, Human Activity Recognition
National Category
Engineering and Technology
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-339761 (URN), 10.1109/DSAA60987.2023.10302475 (DOI)
Conference
2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA), Thessaloniki, Greece, 9 - 13 October 2023
Funder
EU, Horizon 2020, 813162
Note

Part of ISBN 979-8-3503-4503-2

QC 20231117

Available from: 2023-11-17. Created: 2023-11-17. Last updated: 2024-02-07. Bibliographically approved