We investigate the problem of Remote Electrical Tilt (RET) optimization using off-policy learning techniques devised for Contextual Bandits (CBs). The goal in RET optimization is to control the vertical tilt angle of the antenna to optimize Key Performance Indicators (KPIs) representing the Quality of Service (QoS) perceived by the users in cellular networks. Learning an improved tilt update policy is hard. On the one hand, devising a new policy in an online manner in a real network requires exploring tilt updates that have never been used before, and is operationally too risky. On the other hand, devising this policy via simulations suffers from the simulation-to-reality gap. In this paper, we circumvent these issues by learning an improved policy in an offline manner using existing data collected on real networks. We formulate the problem of devising such a policy using the off-policy CB framework. We propose CB learning algorithms to extract optimal tilt update policies from the data. We train and evaluate these policies on real-world cellular network data. Our policies show consistent improvements over the rule-based logging policy used to collect the data.
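The abstract does not specify the estimators used, but a standard building block in off-policy CB learning of this kind is Inverse Propensity Scoring (IPS), which scores a candidate policy on logged data by reweighting rewards with the ratio of target to logging action probabilities. The sketch below is illustrative only; all names, the action set, and the synthetic data are assumptions, not details from the paper.

```python
import numpy as np

def ips_value(contexts, actions, rewards, logging_probs, target_policy):
    """IPS estimate of a target policy's expected reward from logged data.

    contexts      : list of contexts observed by the logging policy
    actions       : array of tilt-update actions taken (illustrative: discrete IDs)
    rewards       : array of observed KPI-based rewards
    logging_probs : probability the logging policy assigned to each logged action
    target_policy : maps a context to a probability vector over actions
    """
    # Importance weight: how much more (or less) likely the target policy
    # is to take the logged action than the logging policy was.
    weights = np.array([target_policy(x)[a] for x, a in zip(contexts, actions)])
    weights = weights / np.asarray(logging_probs)
    # Reweighted average reward is an unbiased estimate of the target value.
    return float(np.mean(weights * np.asarray(rewards)))

# Toy example: uniform logging over 3 tilt actions, target always picks action 0.
est = ips_value(
    contexts=[0, 1, 2, 3],
    actions=[0, 1, 0, 2],
    rewards=[1.0, 0.0, 1.0, 0.0],
    logging_probs=np.full(4, 1 / 3),
    target_policy=lambda x: np.array([1.0, 0.0, 0.0]),
)
print(est)  # → 1.5
```

Such an estimator lets a new tilt policy be evaluated entirely offline, which is exactly what avoids the risky online exploration the abstract describes.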
QC 20211103