NetConfEval: Can LLMs Facilitate Network Configuration?
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0009-0000-4604-1180
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0002-9780-873X
NVIDIA, Stockholm, Sweden. ORCID iD: 0000-0001-5083-4052
Red Hat, Stockholm, Sweden. ORCID iD: 0000-0002-0722-2656
2024 (English). In: Proceedings of the ACM on Networking, ISSN 2834-5509, Vol. 2, no. CoNEXT2, article id 7. Article in journal (Refereed). Published.
Abstract [en]

This paper explores opportunities to utilize Large Language Models (LLMs) to make network configuration human-friendly, simplifying the configuration of network devices and the development of routing algorithms while minimizing errors. We design a set of benchmarks (NetConfEval) to examine the effectiveness of different models in facilitating and automating network configuration. More specifically, we focus on scenarios where LLMs translate high-level policies, requirements, and descriptions (i.e., specified in natural language) into low-level network configurations and Python code. NetConfEval considers four tasks that could potentially facilitate network configuration: (i) translating high-level requirements into a formal specification format, (ii) generating API/function calls from high-level requirements, (iii) developing routing algorithms based on high-level descriptions, and (iv) generating low-level configurations for existing and new protocols based on input documentation. Learning from the results of our study, we propose a set of principles for designing LLM-based systems that configure networks. Finally, we present two GPT-4-based prototypes that (i) automatically configure P4-enabled devices from a set of high-level requirements and (ii) integrate LLMs into existing network synthesizers.
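Task (ii) above, generating API/function calls from natural-language requirements, can be sketched in a few lines. The schema, function name, and policy format below are hypothetical illustrations for this record, not the actual NetConfEval benchmark API: the idea is that the LLM is prompted with a function schema and returns a JSON function call, which a thin dispatcher applies to a configuration backend.

```python
import json

# Hypothetical function schema the LLM would be prompted with
# (illustrative only; not the NetConfEval API).
ADD_REACHABILITY_SCHEMA = {
    "name": "add_reachability_policy",
    "description": "Allow traffic from a source node to a destination prefix.",
    "parameters": {
        "src_node": {"type": "string"},
        "dst_prefix": {"type": "string"},
    },
}

def apply_llm_function_call(response_json: str, handlers: dict):
    """Dispatch a JSON-encoded function call (as an LLM might emit
    when prompted with a schema) to the matching handler."""
    call = json.loads(response_json)
    fn = handlers[call["name"]]
    return fn(**call["arguments"])

# Toy handler standing in for a real configuration backend.
policies = []

def add_reachability_policy(src_node: str, dst_prefix: str):
    policies.append((src_node, dst_prefix))
    return f"reachability: {src_node} -> {dst_prefix}"

# An example response the LLM might return for the requirement
# "Hosts attached to r1 must reach 10.0.2.0/24".
llm_response = json.dumps({
    "name": "add_reachability_policy",
    "arguments": {"src_node": "r1", "dst_prefix": "10.0.2.0/24"},
})

result = apply_llm_function_call(
    llm_response, {"add_reachability_policy": add_reachability_policy}
)
```

Keeping the LLM's output constrained to a fixed schema, rather than free-form configuration text, is what makes such calls checkable before they touch any device.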

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024. Vol. 2, no. CoNEXT2, article id 7.
Keywords [en]
benchmark, code generation, function calling, large language models (LLMs), network configuration, network synthesizer, P4, RAG, routing algorithms
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-357124
DOI: 10.1145/3656296
OAI: oai:DiVA.org:kth-357124
DiVA, id: diva2:1918106
Projects
Digital Futures
Funder
Vinnova, 2023-03003
EU, European Research Council, 770889
Swedish Research Council, 2021-0421
Note

QC 20241211

Available from: 2024-12-04. Created: 2024-12-04. Last updated: 2024-12-11. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Wang, Changjie; Scazzariello, Mariano; Kostic, Dejan; Chiesa, Marco

Search in DiVA

By author/editor
Wang, Changjie; Scazzariello, Mariano; Farshin, Alireza; Ferlin, Simone; Kostic, Dejan; Chiesa, Marco
By organisation
Software and Computer systems, SCS
Computer Sciences
