Enabling Enterprise Live Video Streaming with Reinforcement Learning and Graph Neural Networks
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer Systems, SCS. ORCID iD: 0000-0002-1135-8863
2022 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Over the last decade, video has become by far the most popular way the world consumes content, and its growing popularity has made it a strategic tool for enterprises. More specifically, enterprises organize live video streaming events for both internal and external purposes in order to attract large audiences and disseminate important information. However, streaming high-quality video internally in large multinational corporations, with thousands of employees spread around the world, is a challenging task. The main challenge is to prevent catastrophic congestion in the enterprise network when thousands of employees attend a high-quality video event simultaneously. Given that large enterprises invest a significant amount of their annual budget in live video streaming events, it is essential to ensure that the office network is not congested and that each viewer has a high quality of experience during the event.

To address this challenge, large enterprises employ distributed live video streaming solutions that share high-quality video content between viewers on the same network. Such solutions rely on prior knowledge of the enterprise network topology to efficiently reduce the network bandwidth requirements during the event. Given that such knowledge is not always feasible to acquire, distributed solutions must instead detect the network topology in real time during the event. In particular, they require a service that detects the topology within the first minutes of the event, also known as the joining phase. Failing to promptly detect the enterprise network topology negatively impacts the event's performance: distributed solutions may establish connections between viewers in different offices with limited network capacity between them. As a result, the enterprise network becomes congested, and employees will leave the event at its very start if they experience video quality issues.

In this thesis, we investigate and propose novel machine learning models that allow the enterprise network topology service to detect the topology in real time. In particular, we investigate the network distribution of live video streaming events created by distributed software solutions. In doing so, we propose several graph neural network models to detect the network topology within the first minutes of an event. Live video streaming solutions can then adjust the viewers' connections so that high-quality video content is distributed between viewers in the same office, avoiding the risk of network congestion. We compare our models against several baselines on real-world datasets and show, through empirical evaluation, that our models achieve significant improvements.
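The topology detection described above can be cast as link prediction on the viewer graph: learn node embeddings with a graph neural network and score candidate viewer-to-viewer connections. As a rough, hypothetical sketch (not the thesis's actual implementation), a single graph-convolution layer followed by a sigmoid dot-product link score can be written in a few lines of NumPy; the toy graph, features, and weights are illustrative assumptions:

```python
import numpy as np

def gcn_layer(adj, feats, weights):
    """One graph-convolution layer: symmetrically normalize the
    adjacency matrix (with self-loops), aggregate neighbours, ReLU."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt        # D^-1/2 (A+I) D^-1/2
    return np.maximum(norm @ feats @ weights, 0)  # ReLU activation

def link_score(emb, u, v):
    """Score a candidate viewer-to-viewer connection as the
    sigmoid of the embedding dot product."""
    return 1.0 / (1.0 + np.exp(-emb[u] @ emb[v]))

# Toy viewer graph: viewers 0 and 1 share an office link, 2 is isolated.
adj = np.array([[0., 1., 0.],
                [1., 0., 0.],
                [0., 0., 0.]])
feats = np.eye(3)                    # one-hot node features
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 4))          # randomly initialized layer weights
emb = gcn_layer(adj, feats, w)
print(link_score(emb, 0, 1))         # probability-like score in (0, 1)
```

In a trained model the weights would be fitted so that high scores correspond to high-capacity same-office connections; here they are random, so only the mechanics are shown.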

Another critical factor for the efficiency of live video streaming events is the latency of the enterprise network topology service. Distributed live video streaming solutions require minimal latency to infer the network topology and adjust the viewers' connections. We study the impact of the graph neural network size on the model's online inference latency and propose several knowledge distillation strategies to generate compact models. In this way, we create models with significantly fewer parameters, reducing the online inference latency while achieving high accuracy in the network topology detection task. Compared with state-of-the-art approaches, our proposed models have several orders of magnitude fewer parameters while maintaining high accuracy.

Furthermore, we address the problem of continuously evolving enterprise network topologies. Modern enterprise networks frequently change their topology to meet business needs. Distributed live video streaming solutions must therefore capture these changes and adjust their network topology detection service in real time. To tackle this problem, we propose several novel machine learning models that exploit historical events to assist in detecting the network topology within the first minutes of an event. We investigate the distribution of the viewers participating in the events and propose efficient reinforcement learning and meta-learning techniques to learn the enterprise network topology for each new event. By applying meta-learning and reinforcement learning, we can generalize over network topology changes and ensure that every viewer has a high-quality experience during an event. Compared with baseline approaches, our models achieve superior performance in establishing connections between viewers in the same office during the first minutes of an event. We thereby ensure that distributed solutions provide a high return on investment for every live video streaming event without risking enterprise network congestion.
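Meta-learning over past events can be illustrated with a first-order scheme in the style of Reptile (chosen here for brevity; the thesis papers use their own meta-learning formulations): adapt a copy of the global parameters to each event's loss, then move the global initialization toward the adapted solutions so new events need only a few gradient steps. Everything below, including the one-dimensional toy "events", is an illustrative assumption:

```python
import numpy as np

def adapt(theta, grad_fn, steps=5, lr=0.1):
    """Inner loop: plain gradient descent on one event's loss."""
    w = theta.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w)
    return w

def reptile_update(theta, event_grad_fns, meta_lr=0.5):
    """Outer loop: move the global initialization toward the
    parameters adapted on each event (first-order meta-learning)."""
    deltas = [adapt(theta, g) - theta for g in event_grad_fns]
    return theta + meta_lr * np.mean(deltas, axis=0)

# Two hypothetical 'events' whose per-event optima are 1.0 and 3.0;
# each loss is 0.5*(w - target)^2, so the gradient is (w - target).
grads = [lambda w: w - 1.0, lambda w: w - 3.0]
theta = np.array([10.0])
for _ in range(50):
    theta = reptile_update(theta, grads)
print(theta)  # converges toward 2.0, between the two event optima
```

The learned initialization sits between the event-specific optima, which is exactly what makes fast adaptation to a new, similar event possible.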

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2022.
Series
TRITA-EECS-AVL ; 2022:69
Keywords [en]
Graph Neural Networks, Reinforcement Learning, Meta-Learning, Knowledge Distillation, Enterprise Live Video Streaming
National subject category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-320919; ISBN: 978-91-8040-398-6 (printed); OAI: oai:DiVA.org:kth-320919; DiVA id: diva2:1708093
Public defence
2022-11-21, SAL-C, Electrum building, Kistagången 16, Stockholm, 09:00 (English)
Opponent
Supervisor
Note

QC 20221102

Available from: 2022-11-02 Created: 2022-11-02 Last updated: 2022-11-18 Bibliographically approved
List of papers
1. VStreamDRLS: Dynamic Graph Representation Learning with Self-Attention for Enterprise Distributed Video Streaming Solutions
2020 (English) In: 2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) / [ed] Atzmuller, M; Coscia, M; Missaoui, R, Institute of Electrical and Electronics Engineers (IEEE), 2020, pp. 486-493. Conference paper, Published paper (Refereed)
Abstract [en]

Live video streaming has become a mainstay as a standard communication solution for several enterprises worldwide. To efficiently stream high-quality live video content to a large number of offices, companies employ distributed video streaming solutions that rely on prior knowledge of the underlying evolving enterprise network. However, such networks are highly complex and dynamic. Hence, to optimally coordinate the live video distribution, the available network capacity between viewers has to be accurately predicted. In this paper, we propose a graph representation learning technique on weighted and dynamic graphs to predict the network capacity, that is, the weights of the connections/links between viewers/nodes. We propose VStreamDRLS, a graph neural network architecture with a self-attention mechanism to capture the evolution of the graph structure of live video streaming events. VStreamDRLS employs the graph convolutional network (GCN) model over the duration of a live video streaming event and introduces a self-attention mechanism to evolve the GCN parameters. In doing so, our model focuses on the GCN weights that are relevant to the evolution of the graph and generates the node representations accordingly. We evaluate our proposed approach on the link prediction task on two real-world datasets generated by enterprise live video streaming events, each of which lasted an hour. The experimental results demonstrate the effectiveness of VStreamDRLS when compared with state-of-the-art strategies. Our evaluation datasets and implementation are publicly available at https://github.com/stefanosantaris/vstreamdrls.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020
Series
Proceedings of the IEEE-ACM International Conference on Advances in Social Networks Analysis and Mining, ISSN 2473-9928
Keywords
Dynamic graph representation learning, Self-attention mechanism, Video streaming
National subject category
Computer Science
Identifiers
urn:nbn:se:kth:diva-300221 (URN); 10.1109/ASONAM49781.2020.9381430 (DOI); 000678816900077; 2-s2.0-85098823882 (Scopus ID)
Conference
IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), DEC 07-10, 2020, ELECTR NETWORK
Note

QC 20230307

Available from: 2021-08-30 Created: 2021-08-30 Last updated: 2023-03-07 Bibliographically approved
2. Meta-reinforcement learning via buffering graph signatures for live video streaming events
2021 (English) In: Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2021, Association for Computing Machinery (ACM), 2021, pp. 385-392. Conference paper, Published paper (Refereed)
Abstract [en]

In this study, we present a meta-learning model to adapt the predictions of the network's capacity between viewers who participate in a live video streaming event. We propose the MELANIE model, in which an event is formulated as a Markov Decision Process, performing meta-learning on reinforcement learning tasks. By considering a new event as a task, we design an actor-critic learning scheme to compute the optimal policy for estimating the viewers' high-bandwidth connections. To ensure fast adaptation to new connections or changes among viewers during an event, we implement a prioritized replay memory buffer based on the Kullback-Leibler divergence of the reward/throughput of the viewers' connections. Moreover, we adopt a model-agnostic meta-learning framework to generate a global model from past events. As viewers scarcely participate in several events, the challenge resides in how to account for the low structural similarity of different events. To combat this issue, we design a graph signature buffer to calculate the structural similarities of several streaming events and adjust the training of the global model accordingly. We evaluate the proposed model on the link weight prediction task on three real-world datasets of live video streaming events. Our experiments demonstrate the effectiveness of our proposed model, with an average relative gain of 25% against state-of-the-art strategies. For reproduction purposes, our evaluation datasets and implementation are publicly available at https://github.com/stefanosantaris/melanie
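The KL-prioritized replay idea can be sketched as follows. This is a minimal, hypothetical illustration rather than the MELANIE implementation: transitions whose observed throughput distribution diverges most from a reference distribution are considered most surprising and are replayed first. All names and the toy distributions are assumptions:

```python
import heapq
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete throughput distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

class PrioritizedReplay:
    """Replay buffer that pops the transition whose throughput
    distribution diverges most from a reference distribution."""
    def __init__(self, reference):
        self.reference = reference
        self.heap = []        # max-heap emulated by negating priorities
        self.counter = 0      # insertion order as a tie-breaker

    def push(self, transition, throughput_dist):
        priority = kl_divergence(throughput_dist, self.reference)
        heapq.heappush(self.heap, (-priority, self.counter, transition))
        self.counter += 1

    def pop(self):
        """Return the highest-priority (most surprising) transition."""
        return heapq.heappop(self.heap)[2]

buf = PrioritizedReplay(reference=[0.25, 0.25, 0.25, 0.25])
buf.push("stable link", [0.25, 0.25, 0.25, 0.25])       # KL ~ 0
buf.push("congested link", [0.9, 0.05, 0.03, 0.02])     # large KL
print(buf.pop())  # the transition that deviates most is replayed first
```

Prioritizing by divergence rather than raw reward focuses learning on connections whose behaviour has changed, which is what fast adaptation during an event needs.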

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2021
Keywords
graph signatures, meta-reinforcement learning, video streaming, Cell proliferation, Computer vision, Markov processes, Reinforcement learning, Global models, Graph signature, Live video streaming, Markov Decision Processes, Meta-learning models, Metalearning, Network Capacity, Structural similarity, Video-streaming
National subject category
Software Engineering
Identifiers
urn:nbn:se:kth:diva-316104 (URN); 10.1145/3487351.3490973 (DOI); 2-s2.0-85124416909 (Scopus ID)
Conference
ASONAM '21: International Conference on Advances in Social Networks Analysis and Mining, Virtual Event, The Netherlands, November 8-11, 2021
Note

Part of ISBN 9781450391283

QC 20220822

Available from: 2022-08-22 Created: 2022-08-22 Last updated: 2022-11-02 Bibliographically approved
3. A Deep Graph Reinforcement Learning Model for Improving User Experience in Live Video Streaming
2021 (English) In: 2021 IEEE International Conference on Big Data (Big Data) / [ed] Chen, Y; Ludwig, H; Tu, Y; Fayyad, U; Zhu, X; Hu, X; Byna, S; Liu, X; Zhang, J; Pan, S; Papalexakis, V; Wang, J; Cuzzocrea, A; Ordonez, C, Institute of Electrical and Electronics Engineers (IEEE), 2021, pp. 1787-1796. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we present a deep graph reinforcement learning model to predict and improve the user experience during a live video streaming event, orchestrated by an agent/tracker. We first formulate the user experience prediction problem as a classification task, accounting for the fact that most of the viewers at the beginning of an event have a poor quality of experience due to low-bandwidth connections and limited interactions with the tracker. In our model, we consider different factors that influence the quality of user experience and train the proposed model on diverse state-action transitions when viewers interact with the tracker. In addition, provided that past events have various user experience characteristics, we follow a gradient boosting strategy to compute a global model that learns from different events. Our experiments with three real-world datasets of live video streaming events demonstrate the superiority of the proposed model against several baseline strategies. Moreover, as the majority of viewers at the beginning of an event have poor experience, we show that our model can significantly increase the number of viewers with a high quality of experience by at least 75% over the first streaming minutes. Our evaluation datasets and implementation are publicly available at https://publicresearch.z13.web.core.windows.net

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Series
IEEE International Conference on Big Data, ISSN 2639-1589
Keywords
User experience, live video streaming, graph reinforcement learning
National subject category
Learning, Computer Systems
Identifiers
urn:nbn:se:kth:diva-315414 (URN); 10.1109/BigData52589.2021.9671949 (DOI); 000800559501107; 2-s2.0-85125360036 (Scopus ID)
Conference
9th IEEE International Conference on Big Data (IEEE BigData), 15-18 December, 2021, Virtual
Note

Part of proceedings: ISBN 978-1-6654-3902-2

QC 20220707

Available from: 2022-07-07 Created: 2022-07-07 Last updated: 2022-11-02 Bibliographically approved
4. EGAD: Evolving Graph Representation Learning with Self-Attention and Knowledge Distillation for Live Video Streaming Events
2020 (English) In: 2020 IEEE International Conference on Big Data (Big Data) / [ed] Wu, XT; Jermaine, C; Xiong, L; Hu, XH; Kotevska, O; Lu, SY; Xu, WJ; Aluru, S; Zhai, CX; Al-Masri, E; Chen, ZY; Saltz, J, Institute of Electrical and Electronics Engineers (IEEE), 2020, pp. 1455-1464. Conference paper, Published paper (Refereed)
Abstract [en]

In this study, we present a dynamic graph representation learning model on weighted graphs to accurately predict the network capacity of connections between viewers in a live video streaming event. We propose EGAD, a neural network architecture that captures the graph evolution by introducing a self-attention mechanism on the weights between consecutive graph convolutional networks. In addition, we account for the fact that neural architectures require a huge number of parameters to train, thus increasing the online inference latency and negatively influencing the user experience in a live video streaming event. To address the high online inference latency caused by the vast number of parameters, we propose a knowledge distillation strategy. In particular, we design a distillation loss function, aiming to first pretrain a teacher model on offline data, and then transfer the knowledge from the teacher to a smaller student model with fewer parameters. We evaluate our proposed model on the link prediction task on three real-world datasets generated by live video streaming events. The events lasted 80 minutes, and each viewer exploited the distribution solution provided by the company Hive Streaming AB. The experiments demonstrate the effectiveness of the proposed model in terms of link prediction accuracy and the number of required parameters, when evaluated against state-of-the-art approaches. In addition, we study the distillation performance of the proposed model in terms of compression ratio for different distillation strategies, and show that the proposed model can achieve a compression ratio of up to 15:100 while preserving high link prediction accuracy. For reproduction purposes, our evaluation datasets and implementation are publicly available at https://stefanosantaris.github.io/EGAD.
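The compression ratios quoted above (e.g. 15:100) compare the student's parameter count to the teacher's. As a hypothetical illustration with made-up fully connected layer widths (the actual models are graph convolutional networks, so this only shows how such a ratio is computed):

```python
def num_params(layer_sizes):
    """Count weights and biases of a fully connected network with
    the given layer widths, e.g. [64, 32, 16]."""
    return sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

teacher = [128, 256, 256, 64]   # hypothetical large offline teacher
student = [128, 32, 64]         # hypothetical compact online student

t, s = num_params(teacher), num_params(student)
print(f"teacher={t} student={s} ratio ~{100 * s / t:.0f}:100")
```

With these toy widths the student keeps only a few percent of the teacher's parameters, which is what drives the reduction in online inference latency.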

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020
Series
IEEE International Conference on Big Data, ISSN 2639-1589
Keywords
Graph representation learning, live video streaming, evolving graphs, knowledge distillation
National subject category
Computer Science
Identifiers
urn:nbn:se:kth:diva-299099 (URN); 10.1109/BigData50022.2020.9378219 (DOI); 000662554701073; 2-s2.0-85098838453 (Scopus ID)
Conference
8th IEEE International Conference on Big Data (Big Data), DEC 10-13, 2020, ELECTR NETWORK
Note

QC 20230307

Available from: 2021-08-02 Created: 2021-08-02 Last updated: 2023-03-07 Bibliographically approved
5. Distill2Vec: Dynamic Graph Representation Learning with Knowledge Distillation
2020 (English) In: 2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) / [ed] Atzmuller, M; Coscia, M; Missaoui, R, Institute of Electrical and Electronics Engineers (IEEE), 2020, pp. 60-64. Conference paper, Published paper (Refereed)
Abstract [en]

Dynamic graph representation learning strategies are based on different neural architectures to capture the graph evolution over time. However, the underlying neural architectures require a large number of parameters to train and suffer from high online inference latency; that is, many model parameters have to be updated when new data arrive online. In this study, we propose Distill2Vec, a knowledge distillation strategy to train a compact model with a low number of trainable parameters, so as to reduce the latency of online inference and keep the model accuracy high. We design a distillation loss function based on the Kullback-Leibler divergence to transfer the acquired knowledge from a teacher model trained on offline data to a small-size student model for online data. Our experiments with publicly available datasets show the superiority of our proposed model over several state-of-the-art approaches, with relative gains of up to 5% in the link prediction task. In addition, we demonstrate the effectiveness of our knowledge distillation strategy in terms of the number of required parameters, where Distill2Vec achieves a compression ratio of up to 7:100 when compared with baseline approaches. For reproduction purposes, our implementation is publicly available at https://stefanosantaris.github.io/Distill2Vec.
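A KL-based distillation loss of the kind described can be sketched in a few lines. The temperature scaling and toy logits below are illustrative assumptions in the spirit of standard knowledge distillation, not Distill2Vec's exact formulation:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened predictions.
    A higher temperature exposes the teacher's relative preferences
    among non-target classes, which the student learns to mimic."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return float(np.sum(t * np.log(t / s)))

teacher = [4.0, 1.0, 0.2]             # large offline-trained model
good_student = [3.8, 1.1, 0.3]        # mimics the teacher closely
bad_student = [0.1, 3.9, 0.5]         # disagrees with the teacher
print(distillation_loss(good_student, teacher))  # small loss
print(distillation_loss(bad_student, teacher))   # much larger loss
```

Minimizing this loss pulls the compact student's output distribution toward the teacher's, which is how accuracy is retained despite the much smaller parameter count.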

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020
Series
Proceedings of the IEEE-ACM International Conference on Advances in Social Networks Analysis and Mining, ISSN 2473-9928
Keywords
Dynamic graph representation learning, knowledge distillation, model compression
National subject category
Computer Science
Identifiers
urn:nbn:se:kth:diva-300226 (URN); 10.1109/ASONAM49781.2020.9381315 (DOI); 000678816900011; 2-s2.0-85103692778 (Scopus ID)
Conference
IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), DEC 07-10, 2020, ELECTR NETWORK
Note

QC 20230307

Available from: 2021-08-30 Created: 2021-08-30 Last updated: 2023-03-07 Bibliographically approved
6. Knowledge distillation on neural networks for evolving graphs
2021 (English) In: Social Network Analysis and Mining, ISSN 1869-5450, E-ISSN 1869-5469, Vol. 11, no. 1, article id 100. Journal article (Refereed) Published
Abstract [en]

Graph representation learning on dynamic graphs has become an important task in several real-world applications, such as recommender systems and email spam detection. To efficiently capture the evolution of a graph, representation learning approaches employ deep neural networks with a large number of parameters to train. Due to the large model size, such approaches have high online inference latency and are therefore challenging to deploy in an industrial setting with a vast number of users/nodes. In this study, we propose DynGKD, a distillation strategy to transfer the knowledge from a large teacher model to a small student model with low inference latency, while achieving high prediction accuracy. We first study different distillation loss functions to separately train the student model with various types of information from the teacher model. In addition, we propose a hybrid distillation strategy for evolving graph representation learning that combines the teacher's different types of information. Our experiments with five publicly available datasets demonstrate the superiority of our proposed model against several baselines, with an average relative drop of 40.60% in terms of RMSE in the link prediction task. Moreover, our DynGKD model achieves a compression ratio of 21:100, accelerating inference with a speed-up factor of 30x when compared with the teacher model. For reproduction purposes, we make our datasets and implementation publicly available at https://github.com/stefanosantaris/DynGKD.

Place, publisher, year, edition, pages
Springer Wien, 2021
Keywords
Graph representation learning, Evolving graphs, Knowledge distillation
National subject category
Computer Science
Identifiers
urn:nbn:se:kth:diva-304551 (URN); 10.1007/s13278-021-00816-1 (DOI); 000709352300001; 2-s2.0-85117587192 (Scopus ID)
Note

QC 20211109

Available from: 2021-11-09 Created: 2021-11-09 Last updated: 2022-11-02 Bibliographically approved

Open Access in DiVA

Kappa: FULLTEXT04.pdf (1011 kB, fulltext, application/pdf)
