Reinforcement Learning for Efficient and Tuning-Free Link Adaptation
2022 (English). In: IEEE Transactions on Wireless Communications, ISSN 1536-1276, E-ISSN 1558-2248, Vol. 21, no. 2, pp. 768-780. Article in journal (Refereed), Published.
Abstract [en]
Wireless links adapt their data transmission parameters to the dynamic channel state, a process called link adaptation. Classical link adaptation relies on tuning parameters that are challenging to configure for optimal link performance. Recently, reinforcement learning has been proposed to automate link adaptation, where the transmission parameters are modeled as discrete arms of a multi-armed bandit. In this context, we propose a latent learning model for link adaptation that exploits the correlation between data transmission parameters. Further, motivated by the recent success of Thompson sampling for multi-armed bandit problems, we propose a latent Thompson sampling (LTS) algorithm that quickly learns the optimal parameters for a given channel state. We extend LTS to fading wireless channels through a tuning-free mechanism that automatically tracks the channel dynamics. In numerical evaluations with fading wireless channels, LTS improves the link throughput by up to 100% compared to state-of-the-art link adaptation algorithms.
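To illustrate the bandit framing described above, the sketch below implements plain Beta-Bernoulli Thompson sampling over discrete transmission-parameter arms (e.g. modulation-and-coding schemes), where each transmission's ACK/NACK is the reward. This is a minimal, assumed baseline for exposition only; the paper's LTS algorithm additionally exploits a latent model of the correlation between arms and a tuning-free mechanism for fading channels, neither of which is shown here. All class and variable names are illustrative.

```python
import random


class ThompsonSamplingLinkAdapter:
    """Beta-Bernoulli Thompson sampling over discrete arms.

    Illustrative sketch only: unlike the paper's latent Thompson
    sampling (LTS), this treats arms as independent and assumes a
    static channel.
    """

    def __init__(self, num_arms, seed=0):
        self.rng = random.Random(seed)
        # Beta(1, 1) uniform prior on each arm's ACK probability.
        self.successes = [1] * num_arms
        self.failures = [1] * num_arms

    def select_arm(self):
        # Sample an ACK probability from each arm's posterior and
        # transmit with the arm whose sample is largest.
        samples = [self.rng.betavariate(s, f)
                   for s, f in zip(self.successes, self.failures)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, ack):
        # Conjugate Beta posterior update from the ACK/NACK feedback.
        if ack:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1
```

In a simulated link with per-arm ACK probabilities, the adapter concentrates its transmissions on the best arm after a modest number of rounds, which is the fast-learning behavior that motivates Thompson sampling in this setting.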
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022. Vol. 21, no. 2, pp. 768-780
Keywords [en]
Wireless communication, Interference, Signal-to-noise ratio, Reinforcement learning, Fading channels, Throughput, Channel estimation, Wireless networks, Adaptive modulation and coding, Thompson sampling, Outer loop link adaptation
National Category
Telecommunications
Identifiers
URN: urn:nbn:se:kth:diva-309545
DOI: 10.1109/TWC.2021.3098972
ISI: 000754251000008
Scopus ID: 2-s2.0-85111567571
OAI: oai:DiVA.org:kth-309545
DiVA, id: diva2:1645158
Note
Not a duplicate of DiVA: 1548043
QC 20220315
2022-03-16, 2022-03-16, 2022-06-25. Bibliographically approved