Distill2Vec: Dynamic Graph Representation Learning with Knowledge Distillation
KTH. Hive Streaming AB, Stockholm, Sweden. ORCID iD: 0000-0002-1135-8863
Maastricht Univ, Maastricht, Netherlands.
2020 (English). In: 2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) / [ed] Atzmuller, M.; Coscia, M.; Missaoui, R. Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 60-64. Conference paper, published paper (refereed)
Abstract [en]

Dynamic graph representation learning strategies rely on different neural architectures to capture the evolution of a graph over time. However, the underlying neural architectures require a large number of parameters to train and suffer from high online inference latency, that is, several model parameters have to be updated when new data arrive online. In this study we propose Distill2Vec, a knowledge distillation strategy to train a compact model with a low number of trainable parameters, so as to reduce the latency of online inference while keeping the model accuracy high. We design a distillation loss function based on the Kullback-Leibler divergence to transfer the knowledge acquired by a teacher model trained on offline data to a small-size student model for online data. Our experiments with publicly available datasets show the superiority of our proposed model over several state-of-the-art approaches, with relative gains of up to 5% on the link prediction task. In addition, we demonstrate the effectiveness of our knowledge distillation strategy in terms of the number of required parameters, where Distill2Vec achieves a compression ratio of up to 7:100 compared with baseline approaches. For reproduction purposes, our implementation is publicly available at https://stefanosantaris.github.io/Distill2Vec.
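The distillation objective described in the abstract follows the general teacher-student pattern. The sketch below is a minimal, generic PyTorch-style illustration of a KL-divergence distillation term combined with a task loss; the temperature, the alpha weighting, and the placeholder cross-entropy task loss are assumptions made for illustration and are not taken from the paper, whose exact formulation is available in the linked repository.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student output distributions.

    Generic knowledge-distillation sketch; the exact loss used by Distill2Vec
    may differ (see the paper and the linked repository).
    """
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # kl_div expects log-probabilities as input and probabilities as target.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2

def training_step(student, teacher, batch, labels, optimizer, alpha=0.5):
    """One optimization step: task loss on online data plus the distillation term.

    The teacher is a large model pre-trained on offline data and kept frozen;
    only the compact student is updated on the online stream.
    """
    with torch.no_grad():
        teacher_logits = teacher(batch)
    student_logits = student(batch)
    task_loss = F.cross_entropy(student_logits, labels)  # placeholder task loss
    loss = alpha * task_loss + (1 - alpha) * distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the small student is updated when new data arrive online, the number of parameters touched per update stays low, which is where the reported reduction in online inference latency comes from.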

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 60-64
Series
Proceedings of the IEEE-ACM International Conference on Advances in Social Networks Analysis and Mining, ISSN 2473-9928
Keywords [en]
Dynamic graph representation learning, knowledge distillation, model compression
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-300226
DOI: 10.1109/ASONAM49781.2020.9381315
ISI: 000678816900011
Scopus ID: 2-s2.0-85103692778
OAI: oai:DiVA.org:kth-300226
DiVA, id: diva2:1588942
Conference
IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), December 7-10, 2020, held online
Note

QC 20230307

Available from: 2021-08-30. Created: 2021-08-30. Last updated: 2023-03-07. Bibliographically approved.
In thesis
1. Enabling Enterprise Live Video Streaming with Reinforcement Learning and Graph Neural Networks
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Over the last decade, video has become the most popular way the world consumes content. Owing to this popularity, video has become a strategic tool for enterprises. More specifically, enterprises organize live video streaming events for both internal and external purposes in order to reach large audiences and disseminate important information. However, streaming high-quality video internally in a large multinational corporation, with thousands of employees spread around the world, is a challenging task. The main challenge is to prevent catastrophic congestion of the enterprise network when thousands of employees attend a high-quality video event simultaneously. Given that large enterprises invest a significant share of their annual budget in live video streaming events, it is essential to ensure that the office network will not be congested and that each viewer will have a high quality of experience during the event.

To address this challenge, large enterprises employ distributed live video streaming solutions that distribute high-quality video content between viewers on the same network. Such solutions rely on prior knowledge of the enterprise network topology to efficiently reduce the network bandwidth requirements during the event. Given that such knowledge is not always feasible to acquire, distributed solutions must instead detect the network topology in real time during the event. In particular, they require a service that detects the network topology in the first minutes of the event, also known as the joining phase. Failing to promptly detect the enterprise network topology negatively impacts the event's performance: the distributed solution may establish connections between viewers in different offices with limited network capacity. As a result, the enterprise network becomes congested, and employees will drop the event early if they experience video quality issues.

In this thesis, we investigate and propose novel machine learning models that allow the enterprise network topology service to detect the topology in real time. In particular, we study the network distribution that live video streaming events create through the distributed software solution. Based on this, we propose several graph neural network models that detect the network topology within the first minutes of the event. The live video streaming solution can then adjust the viewers' connections so that high-quality video content is distributed between viewers of the same office, avoiding the risk of network congestion. We compare our models with several baselines on real-world datasets and show, through empirical evaluations, that our models achieve significant improvements.
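As a rough, hypothetical illustration of the kind of model this refers to, the sketch below encodes viewers of an ongoing event with a small message-passing network over their current connection graph and scores viewer pairs for being behind the same office network. The class and function names, the dense normalized adjacency, and the dot-product scorer are all assumptions made for brevity and do not reproduce the architectures proposed in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGNNEncoder(nn.Module):
    """Minimal two-layer graph encoder over the viewer-connection graph (illustrative only)."""

    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, adj):
        # Symmetrically normalized adjacency with self-loops (dense, for brevity).
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        deg_inv_sqrt = adj.sum(dim=1).clamp(min=1).pow(-0.5)
        norm_adj = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
        h = F.relu(self.lin1(norm_adj @ x))   # aggregate neighbor features
        return self.lin2(norm_adj @ h)        # per-viewer embeddings

def same_office_score(embeddings, src, dst):
    """Score whether two viewers are likely behind the same office network."""
    return (embeddings[src] * embeddings[dst]).sum(dim=-1)  # dot-product link score
```

Pairs with high scores can then be favored by the distribution overlay, so that video segments are exchanged between viewers of the same office instead of crossing constrained inter-office links.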

Another critical factor for the efficiency of live video streaming events is the latency of the enterprise network topology service. Distributed live video streaming solutions require minimal latency to infer the network topology and adjust the viewers' connections. We study the impact of the graph neural network size on the model's online inference latency and propose several knowledge distillation strategies to generate compact models. In this way, we obtain models with significantly fewer parameters, reducing the online inference latency while achieving high accuracy on the network topology detection task. Compared with state-of-the-art approaches, our proposed models have several orders of magnitude fewer parameters while maintaining high accuracy.
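Comparisons like these are usually stated in terms of trainable-parameter counts. The small utility below is a generic sketch (not code from the thesis) of how such counts, and the resulting compression ratio, can be computed for a teacher and a student model.

```python
import torch.nn as nn

def count_trainable_parameters(model: nn.Module) -> int:
    """Total number of trainable parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def compression_ratio(student: nn.Module, teacher: nn.Module) -> float:
    """Student-to-teacher parameter ratio, e.g. 0.07 corresponds to roughly 7:100."""
    return count_trainable_parameters(student) / count_trainable_parameters(teacher)
```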

Furthermore, we address the problem of a continuously evolving enterprise network topology. Modern enterprise networks frequently change their topology to meet business needs. Therefore, distributed live video streaming solutions must capture these topology changes and adjust their network topology detection service in real time. To tackle this problem, we propose several novel machine learning models that exploit historical events to assist in detecting the network topology in the first minutes of a new event. We investigate the distribution of the viewers participating in the events and propose efficient reinforcement learning and meta-learning techniques to learn the enterprise network topology for each new event. By applying meta-learning and reinforcement learning, we can generalize across network topology changes and ensure that every viewer has a high quality of experience during an event. Compared with baseline approaches, we achieve superior performance in establishing connections between viewers of the same office in the first minutes of the event. Therefore, we ensure that distributed solutions provide a high return on investment in every live video streaming event without risking enterprise network congestion.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2022
Series
TRITA-EECS-AVL ; 2022:69
Keywords
Graph Neural Networks, Reinforcement Learning, Meta-Learning, Knowledge Distillation, Enterprise Live Video Streaming
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-320919 (URN)
978-91-8040-398-6 (ISBN)
Public defence
2022-11-21, SAL-C, Electrum building, Kistagången 16, Stockholm, 09:00 (English)
Note

QC 20221102

Available from: 2022-11-02. Created: 2022-11-02. Last updated: 2022-11-18. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Antaris, Stefanos
