NetConfEval: Can LLMs Facilitate Network Configuration?
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer Systems, SCS. ORCID iD: 0009-0000-4604-1180
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer Systems, SCS. ORCID iD: 0000-0002-9780-873X
NVIDIA, Stockholm, Sweden. ORCID iD: 0000-0001-5083-4052
Red Hat, Stockholm, Sweden. ORCID iD: 0000-0002-0722-2656
2024 (English) In: Proceedings of the ACM on Networking, ISSN 2834-5509, Vol. 2, no. CoNEXT2, article id 7. Article in journal (Refereed) Published
Abstract [en]

This paper explores opportunities to utilize Large Language Models (LLMs) to make network configuration human-friendly, simplifying the configuration of network devices and the development of routing algorithms while minimizing errors. We design a set of benchmarks (NetConfEval) to examine the effectiveness of different models in facilitating and automating network configuration. More specifically, we focus on scenarios where LLMs translate high-level policies, requirements, and descriptions (i.e., specified in natural language) into low-level network configurations and Python code. NetConfEval considers four tasks that could potentially facilitate network configuration: (i) translating high-level requirements into a formal specification format, (ii) generating API/function calls from high-level requirements, (iii) developing routing algorithms based on high-level descriptions, and (iv) generating low-level configuration for existing and new protocols based on input documentation. Learning from the results of our study, we propose a set of principles for designing LLM-based systems that configure networks. Finally, we present two GPT-4-based prototypes to (i) automatically configure P4-enabled devices from a set of high-level requirements and (ii) integrate LLMs into existing network synthesizers.
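Task (ii) in the abstract (generating API/function calls from high-level requirements) can be illustrated with a minimal sketch. The schema, the `add_reachability_policy` name, and the keyword-based `translate_requirement` stub below are hypothetical illustrations, not the paper's actual benchmark code; a real system would pass the schema and the natural-language requirement to an LLM's function-calling API and parse the returned structured call.

```python
import json

# Hypothetical target schema that an LLM would be asked to emit calls for
# (names and parameters are illustrative, not from NetConfEval itself).
REACHABILITY_SCHEMA = {
    "name": "add_reachability_policy",
    "parameters": {"src": "string", "dst": "string", "allow": "boolean"},
}

def translate_requirement(requirement: str) -> dict:
    """Stand-in for the LLM: map a requirement such as
    'h1 must not reach h2.' to a call matching REACHABILITY_SCHEMA."""
    allow = "must not" not in requirement and "cannot" not in requirement
    # Naive host extraction (hosts named h1, h2, ...) for illustration only.
    hosts = [w.strip(".,") for w in requirement.split() if w.startswith("h")]
    return {"name": REACHABILITY_SCHEMA["name"],
            "arguments": {"src": hosts[0], "dst": hosts[1], "allow": allow}}

print(json.dumps(translate_requirement("h1 must not reach h2.")))
# → {"name": "add_reachability_policy", "arguments": {"src": "h1", "dst": "h2", "allow": false}}
```

The structured output can then be validated against the schema and executed against network devices, which is what makes function calling attractive for configuration: the free-form model output is constrained to a machine-checkable format.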

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024. Vol. 2, no. CoNEXT2, article id 7
Keywords [en]
benchmark, code generation, function calling, large language models (LLMs), network configuration, network synthesizer, P4, RAG, routing algorithms
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-357124
DOI: 10.1145/3656296
OAI: oai:DiVA.org:kth-357124
DiVA, id: diva2:1918106
Projects
Digital Futures
Research funder
Vinnova, 2023-03003
EU, European Research Council, 770889
Swedish Research Council, 2021-0421
Note

QC 20241211

Available from: 2024-12-04 Created: 2024-12-04 Last updated: 2024-12-11 Bibliographically approved

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text

Authors
Wang, Changjie; Scazzariello, Mariano; Farshin, Alireza; Ferlin, Simone; Kostic, Dejan; Chiesa, Marco