Knowledge distillation on neural networks for evolving graphs
Antaris, Stefanos (KTH; HiveStreaming AB, Stockholm, Sweden). ORCID iD: 0000-0002-1135-8863
Univ Thessaly, Volos, Greece.
Girdzijauskas, Sarunas (KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS). ORCID iD: 0000-0003-4516-7317
2021 (English). In: Social Network Analysis and Mining, ISSN 1869-5450, E-ISSN 1869-5469, Vol. 11, no. 1, article id 100. Article in journal (Refereed). Published.
Abstract [en]

Graph representation learning on dynamic graphs has become an important task in several real-world applications, such as recommender systems and email spam detection. To efficiently capture the evolution of a graph, representation learning approaches employ deep neural networks with a large number of parameters to train. Due to the large model size, such approaches suffer from high online inference latency and are therefore challenging to deploy in industrial settings with vast numbers of users/nodes. In this study, we propose DynGKD, a distillation strategy to transfer the knowledge from a large teacher model to a small student model with low inference latency, while achieving high prediction accuracy. We first study different distillation loss functions to separately train the student model with various types of information from the teacher model. In addition, we propose a hybrid distillation strategy for evolving graph representation learning that combines the teacher's different types of information. Our experiments on five publicly available datasets demonstrate the superiority of our proposed model against several baselines, with an average relative drop of 40.60% in RMSE on the link prediction task. Moreover, our DynGKD model achieves a compression ratio of 21:100, reducing the inference latency by a speed-up factor of x30 compared with the teacher model. For reproduction purposes, we make our datasets and implementation publicly available at https://github.com/stefanosantaris/DynGKD.
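To make the hybrid strategy concrete, below is a minimal sketch of a distillation objective that combines ground-truth supervision with the teacher's soft link predictions and node embeddings. The function names, loss terms, and weighting scheme are illustrative assumptions for this sketch only; the losses actually used by DynGKD are specified in the paper and the released implementation linked above.

# A minimal sketch of a hybrid distillation loss for link prediction, assuming
# teacher and student both produce node embeddings and link scores. The names,
# terms, and weights below are illustrative assumptions, not the authors' code.
import torch
import torch.nn.functional as F

def hybrid_distillation_loss(
    student_scores: torch.Tensor,   # student's predicted link weights, shape (E,)
    teacher_scores: torch.Tensor,   # teacher's predicted link weights, shape (E,)
    true_scores: torch.Tensor,      # ground-truth link weights, shape (E,)
    student_emb: torch.Tensor,      # student node embeddings, shape (N, d_s)
    teacher_emb: torch.Tensor,      # teacher node embeddings, shape (N, d_t)
    proj: torch.nn.Linear,          # projects student embeddings to the teacher's dimension
    alpha: float = 0.5,             # weight between ground truth and teacher predictions
    beta: float = 0.1,              # weight of the embedding-matching term
) -> torch.Tensor:
    # (1) Supervised term: match the ground-truth link weights.
    loss_true = F.mse_loss(student_scores, true_scores)
    # (2) Prediction-distillation term: match the teacher's soft predictions.
    loss_pred = F.mse_loss(student_scores, teacher_scores)
    # (3) Embedding-distillation term: match the teacher's node representations
    #     after projecting the (smaller) student embeddings to the teacher's space.
    loss_emb = F.mse_loss(proj(student_emb), teacher_emb)
    return alpha * loss_true + (1.0 - alpha) * loss_pred + beta * loss_emb

# Toy usage with random tensors, standing in for one snapshot of an evolving graph.
if __name__ == "__main__":
    n_nodes, n_edges, d_student, d_teacher = 100, 400, 16, 64
    proj = torch.nn.Linear(d_student, d_teacher)
    loss = hybrid_distillation_loss(
        student_scores=torch.rand(n_edges),
        teacher_scores=torch.rand(n_edges),
        true_scores=torch.rand(n_edges),
        student_emb=torch.randn(n_nodes, d_student),
        teacher_emb=torch.randn(n_nodes, d_teacher),
        proj=proj,
    )
    print(loss.item())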

Place, publisher, year, edition, pages
Springer Wien, 2021. Vol. 11, no. 1, article id 100
Keywords [en]
Graph representation learning, Evolving graphs, Knowledge distillation
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-304551
DOI: 10.1007/s13278-021-00816-1
ISI: 000709352300001
Scopus ID: 2-s2.0-85117587192
OAI: oai:DiVA.org:kth-304551
DiVA id: diva2:1609636
Note

QC 20211109

Available from: 2021-11-09. Created: 2021-11-09. Last updated: 2022-11-02. Bibliographically approved.
In thesis
1. Enabling Enterprise Live Video Streaming with Reinforcement Learning and Graph Neural Networks
2022 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Over the last decade, video has become by far the most popular way the world consumes content. Owing to this popularity, video has become a strategic tool for enterprises. More specifically, enterprises organize live video streaming events for both internal and external purposes in order to attract large audiences and disseminate important information. However, streaming high-quality video internally in large multinational corporations, with thousands of employees spread around the world, is a challenging task. The main challenge is to prevent catastrophic congestion of the enterprise network when thousands of employees attend a high-quality video event simultaneously. Given that large enterprises invest a significant portion of their annual budget in live video streaming events, it is essential to ensure that the office network is not congested and that each viewer has a high quality of experience during the event.

To address this challenge, large enterprises employ distributed live video streaming solutions that distribute high-quality video content between viewers on the same network. Such solutions rely on prior knowledge of the enterprise network topology to efficiently reduce the network bandwidth required during the event. Given that such knowledge is not always feasible to acquire, distributed solutions must instead detect the network topology in real time during the event, and in particular within its first minutes, also known as the joining phase. Failing to promptly detect the enterprise network topology negatively impacts the event's performance: distributed solutions may establish connections between viewers in different offices whose interconnecting links have limited network capacity. As a result, the enterprise network becomes congested, and employees who experience video quality issues drop out of the event from its very beginning.

In this thesis, we investigate and propose novel machine learning models that allow the enterprise network topology service to detect the topology in real time. In particular, we investigate the network distribution of live video streaming events produced by the distributed software solutions. In doing so, we propose several graph neural network models to detect the network topology in the first minutes of an event. Live video streaming solutions can then adjust the viewers' connections so that high-quality video content is distributed between viewers in the same office, avoiding the risk of network congestion. We compare our models against several baselines on real-world datasets and show through empirical evaluation that our models achieve significant improvements.
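As a concrete illustration, the sketch below frames topology detection as link prediction with a small graph neural network: given viewer features and the connections observed in the first minutes of an event, score whether two viewers belong to the same office network. The architecture, feature choices, and scoring function are assumptions made for this sketch and do not correspond to the thesis' actual models.

# A minimal sketch, assuming topology detection is framed as link prediction:
# score whether two viewers share an office network, given the connections
# observed early in an event. All model details below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGNN(nn.Module):
    """One round of mean-neighbourhood aggregation followed by a linear update."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.update = nn.Linear(2 * in_dim, hid_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: dense (N, N) adjacency of observed viewer-to-viewer connections.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = adj @ x / deg                      # mean of neighbour features
        h = self.update(torch.cat([x, neigh], dim=1))
        return F.relu(h)

def same_office_score(h: torch.Tensor, u: int, v: int) -> torch.Tensor:
    # Dot-product score between two viewer embeddings; higher means the viewers
    # are more likely to share an office network.
    return torch.sigmoid((h[u] * h[v]).sum())

# Toy usage: random viewer features (e.g. throughput/latency statistics) and a
# random observed connection graph.
if __name__ == "__main__":
    n_viewers, feat_dim = 50, 8
    x = torch.randn(n_viewers, feat_dim)
    adj = (torch.rand(n_viewers, n_viewers) < 0.1).float()
    adj = ((adj + adj.T) > 0).float()              # make the adjacency symmetric
    model = TinyGNN(feat_dim, 16)
    h = model(x, adj)
    print(same_office_score(h, 0, 1).item())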

Another critical factor for the efficiency of live video streaming events is the latency of the enterprise network topology service. Distributed live video streaming solutions require minimal latency to infer the network topology and adjust the viewers' connections. We study the impact of the graph neural network size on the model's online inference latency and propose several knowledge distillation strategies to generate compact models. In this way, we create models with significantly fewer parameters, reducing the online inference latency while achieving high accuracy on the network topology detection task. Compared with state-of-the-art approaches, our proposed models have several orders of magnitude fewer parameters while maintaining high accuracy.

Furthermore, we address the problem of continuously evolving enterprise network topologies. Modern enterprise networks frequently change their topology to accommodate business needs. Distributed live video streaming solutions must therefore capture these topology changes and adjust their network topology detection service in real time. To tackle this problem, we propose several novel machine learning models that exploit historical events to help detect the network topology in the first minutes of a new event. We investigate the distribution of the viewers participating in the events and propose efficient reinforcement learning and meta-learning techniques to learn the enterprise network topology for each new event. By applying meta-learning and reinforcement learning, we can generalize across network topology changes and ensure that every viewer has a high-quality experience during an event. Compared with baseline approaches, we achieve superior performance in establishing connections between viewers in the same office during the first minutes of an event. We thereby ensure that distributed solutions provide a high return on investment for every live video streaming event without risking enterprise network congestion.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2022
Series
TRITA-EECS-AVL ; 2022:69
Keywords
Graph Neural Networks, Reinforcement Learning, Meta-Learning, Knowledge Distillation, Enterprise Live Video Streaming
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-320919
ISBN: 978-91-8040-398-6
Public defence
2022-11-21, SAL-C, Electrum building, Kistagången 16, Stockholm, 09:00 (English)
Opponent
Supervisors
Note

QC 20221102

Available from: 2022-11-02. Created: 2022-11-02. Last updated: 2022-11-18. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Antaris, Stefanos; Girdzijauskas, Sarunas
