KTH Publications (kth.se)

Optimizing Distribution and Feedback for Short LT Codes With Reinforcement Learning
Beijing Institute of Technology, School of Information and Electronics, Beijing 100081, People's Republic of China
China Mobile Research Institute, Beijing 100053, People's Republic of China
2025 (English). In: IEEE Transactions on Communications, ISSN 0090-6778, E-ISSN 1558-0857, Vol. 73, no. 2, p. 1169-1185. Article in journal (refereed). Published.
Abstract [en]

Designing short Luby transform (LT) codes with low overhead and good error performance is crucial and challenging for the deployment of vehicle-to-everything networks, which require high reliability, high spectral efficiency, and low latency. In this paper, we investigate the design of globally optimal transmission strategies that account for the interactions introduced by feedback for short LT codes using reinforcement learning (RL), a regime in which traditional asymptotic analysis based on random graph theory is known to be inaccurate. First, to reduce the decoding overhead of short LT codes, we derive the gradient expression for optimizing the degree distribution of LT codes and propose an RL-based distribution optimization (RL-DO) algorithm for designing short LT codes. Then, to improve the reliability and reduce the overhead of LT codes under limited feedback, we model the feedback optimization problem as a Markov decision process and propose the RL-based joint feedback and distribution optimization (RL-JFDO) algorithm, which aims to design globally optimal feedback schemes. Simulations show that our methods achieve lower decoding overhead, error rate, and decoding complexity than existing feedback fountain codes.
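
As a concrete, hedged illustration of the first idea in the abstract: the sketch below parameterizes a degree distribution over degrees 1..K with softmax logits and tunes it with a generic REINFORCE policy gradient against the overhead of a simulated peeling decoder. This is a stand-in for the general technique only, not the paper's RL-DO algorithm (which derives its own gradient expression); every name and constant here (K = 32, decode_overhead, the learning rate) is an assumption invented for this example.

import numpy as np

rng = np.random.default_rng(0)

def softmax(theta):
    """Map logits to a probability distribution over degrees 1..K."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

def decode_overhead(K, p, rng, max_n=2000):
    """Draw random LT-coded symbols with degree distribution p and run a
    peeling (belief-propagation) decoder. Returns the number of coded
    symbols consumed and the 0-based degree indices that were sampled."""
    recovered = np.zeros(K, dtype=bool)
    pending = []   # coded symbols, each a set of still-unrecovered source indices
    drawn = []     # sampled degree indices, kept for the policy gradient
    for n in range(1, max_n + 1):
        di = int(rng.choice(K, p=p))            # degree = di + 1
        drawn.append(di)
        neigh = set(rng.choice(K, size=di + 1, replace=False).tolist())
        neigh -= set(np.flatnonzero(recovered).tolist())
        ripple = [neigh] if len(neigh) == 1 else []
        if len(neigh) > 1:
            pending.append(neigh)
        while ripple:                           # peel every degree-1 symbol
            j = ripple.pop().pop()
            if recovered[j]:
                continue
            recovered[j] = True
            for s in pending:
                s.discard(j)
            ripple += [s for s in pending if len(s) == 1]
            pending = [s for s in pending if len(s) > 1]
        if recovered.all():
            return n, drawn
    return max_n, drawn                         # decoding failed within budget

K = 32
theta = np.zeros(K)        # logits; softmax(theta) is the degree distribution
lr, baseline = 0.05, 0.0
for step in range(300):
    p = softmax(theta)
    n, drawn = decode_overhead(K, p, rng)
    reward = -(n - K) / K                       # negative relative overhead
    baseline = 0.9 * baseline + 0.1 * reward    # running baseline to cut variance
    grad = -len(drawn) * p                      # sum of grad log p(di) = onehot(di) - p
    for di in drawn:
        grad[di] += 1.0
    theta += lr * (reward - baseline) * grad    # REINFORCE ascent step
print("learned degree distribution:", np.round(softmax(theta), 3))

Reshaping the reward to also penalize symbols left undecoded at a deadline would move the sketch toward the reliability objective; the paper's RL-JFDO additionally models feedback decisions as a Markov decision process, which this sketch does not attempt.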

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025. Vol. 73, no. 2, p. 1169-1185
Keywords [en]
LT codes, reinforcement learning, feedback, optimization
National Category
Telecommunications
Identifiers
URN: urn:nbn:se:kth:diva-361093
DOI: 10.1109/TCOMM.2024.3445303
ISI: 001426306700001
Scopus ID: 2-s2.0-85201763935
OAI: oai:DiVA.org:kth-361093
DiVA id: diva2:1943641
Note

QC 20250311

Available from: 2025-03-11. Created: 2025-03-11. Last updated: 2025-03-11. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Xiao, Ming
