Optimizing Distribution and Feedback for Short LT Codes With Reinforcement Learning
2025 (English). In: IEEE Transactions on Communications, ISSN 0090-6778, E-ISSN 1558-0857, Vol. 73, no. 2, p. 1169-1185. Article in journal (Refereed). Published.
Abstract [en]
Designing short Luby transform (LT) codes with low overhead and good error performance is crucial and challenging for the deployment of vehicle-to-everything networks, which require high reliability, high spectral efficiency, and low latency. In this paper, we investigate the design of globally optimal transmission strategies for short LT codes that take interactions with feedback into account, using reinforcement learning (RL); traditional asymptotic analysis based on random graph theory is known to be inaccurate in this short-code regime. First, to reduce the decoding overhead of short LT codes, we derive the gradient expression for optimizing the degree distribution of LT codes and propose an RL-based distribution optimization (RL-DO) algorithm for designing short LT codes. Then, to improve the reliability and reduce the overhead of LT codes under limited feedback, we model the feedback optimization problem as a Markov decision process and propose the RL-based joint feedback and distribution optimization (RL-JFDO) algorithm, which aims to design globally optimal feedback schemes. Simulations show that our methods achieve lower decoding overhead, error rate, and decoding complexity than existing feedback fountain codes.
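The RL-optimized distributions and feedback schemes of the paper are not reproduced here, but the LT-code mechanics the abstract refers to (degree distribution, encoding, peeling decoding, decoding overhead) can be illustrated with a minimal sketch using the classical robust soliton distribution. The parameters `c` and `delta` below are illustrative defaults, not values from the paper:

```python
import math
import random

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution over degrees 0..k (index 0 unused)."""
    # Ideal soliton: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d >= 2.
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Robust correction concentrating mass near degree k/R.
    R = c * math.log(k / delta) * math.sqrt(k)
    tau = [0.0] * (k + 1)
    pivot = int(round(k / R))
    for d in range(1, min(pivot, k + 1)):
        tau[d] = R / (d * k)
    if 1 <= pivot <= k:
        tau[pivot] = R * math.log(R / delta) / k
    Z = sum(rho) + sum(tau)  # normalization constant
    return [(rho[d] + tau[d]) / Z for d in range(k + 1)]

def lt_encode(data, dist, rng):
    """Draw a degree d, pick d distinct source symbols, XOR them together."""
    k = len(data)
    d = rng.choices(range(k + 1), weights=dist)[0]
    idx = rng.sample(range(k), d)
    val = 0
    for i in idx:
        val ^= data[i]
    return set(idx), val

def peel_decode(symbols, k):
    """Iterative peeling: resolve any symbol with exactly one unknown neighbor."""
    decoded = [None] * k
    progress = True
    while progress:
        progress = False
        for idx, val in symbols:
            live = {i for i in idx if decoded[i] is None}
            if len(live) != 1:
                continue
            v = val
            for i in idx - live:  # strip already-decoded neighbors
                v ^= decoded[i]
            decoded[live.pop()] = v
            progress = True
    return decoded

# Demo: collect coded symbols until all k source symbols are recovered.
rng = random.Random(0)
k = 32
data = [rng.randrange(256) for _ in range(k)]
dist = robust_soliton(k)
symbols, decoded = [], [None] * k
while not all(x is not None for x in decoded) and len(symbols) < 10 * k:
    symbols.append(lt_encode(data, dist, rng))
    decoded = peel_decode(symbols, k)
overhead = len(symbols) / k  # noticeably above 1.0 for such short codes
```

The `overhead` ratio here is exactly the quantity the paper targets: for short block lengths it is well above the asymptotic optimum, which is why the authors replace fixed distributions like the one above with RL-optimized ones.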
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025. Vol. 73, no. 2, p. 1169-1185
Keywords [en]
LT codes, reinforcement learning, feedback, optimization
National Category
Telecommunications
Identifiers
URN: urn:nbn:se:kth:diva-361093
DOI: 10.1109/TCOMM.2024.3445303
ISI: 001426306700001
Scopus ID: 2-s2.0-85201763935
OAI: oai:DiVA.org:kth-361093
DiVA, id: diva2:1943641
Note
QC 20250311
2025-03-11. Bibliographically approved