Learning to rank: Regret lower bounds and efficient algorithms
2015 (English). In: Performance Evaluation Review, ISSN 0163-5999, E-ISSN 1557-9484, Vol. 43, no. 1, pp. 231-244. Article in journal (Refereed), Published.
Algorithms for learning to rank Web documents, display ads, or other types of items constitute a fundamental component of search engines and, more generally, of online services. In such systems, when a user makes a request or visits a web page, an ordered list of items (e.g. documents or ads) is displayed; the user scans this list in order and clicks on the first relevant item, if any. When the user clicks on an item, the reward collected by the system typically decreases with the position of the item in the displayed list. The main challenge in the design of sequential list selection algorithms stems from the fact that the probabilities with which the user clicks on the various items are unknown and need to be learned. We formulate the design of such algorithms as a stochastic bandit optimization problem. This problem differs from the classical bandit framework: (1) the type of feedback received by the system depends on the actual relevance of the various items in the displayed list (if the user clicks on the last item, we know that none of the previous items in the list are relevant); (2) there are inherent correlations between the average relevance of the items (e.g. the user may be interested in a specific topic only). We assume that items are categorized according to their topic and that users are clustered, so that users of the same cluster are interested in the same topic. We investigate several scenarios depending on the side information available on the user before selecting the displayed list: (a) we first treat the case where the topic the user is interested in is known when she places a request; (b) we then study the case where the user cluster is known but the mapping between user clusters and topics is unknown. For both scenarios, we derive regret lower bounds and devise algorithms that approach these fundamental limits.
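The click model described above can be made concrete with a small simulation. The sketch below is purely illustrative and is not the paper's algorithm: it pairs the cascade-style feedback (a click at position p reveals that the items above p are non-relevant) with a generic UCB index for selecting the displayed list, and uses a position-discounted reward. The click probabilities `theta`, the list length `L`, and the reward shape 1/(p+1) are assumptions chosen for the example.

```python
import math
import random

def simulate_user(displayed, theta, rng):
    """Cascade click model from the abstract: the user scans the list in
    order and clicks the first relevant item, if any. Returns the clicked
    position (0-based) or None. `theta[i]` is item i's click probability
    (an illustrative parameter, not from the paper)."""
    for pos, item in enumerate(displayed):
        if rng.random() < theta[item]:
            return pos
    return None

def ucb_list_selection(theta, L=3, horizon=5000, seed=0):
    """Each round, display the L items with the highest UCB indices
    (a generic UCB heuristic used here for illustration only).
    Cascade feedback: a click at position p gives one relevance
    observation for each of the first p+1 items; no click gives an
    observation for every displayed item."""
    rng = random.Random(seed)
    K = len(theta)
    clicks = [0] * K   # observed clicks per item
    views = [0] * K    # times an item's relevance was observed
    total_reward = 0.0
    for t in range(1, horizon + 1):
        # Unobserved items get an infinite index so they are explored first.
        idx = sorted(
            range(K),
            key=lambda i: -(clicks[i] / views[i]
                            + math.sqrt(2 * math.log(t) / views[i])
                            if views[i] else float("inf")),
        )
        displayed = idx[:L]
        p = simulate_user(displayed, theta, rng)
        observed = len(displayed) if p is None else p + 1
        for pos in range(observed):
            item = displayed[pos]
            views[item] += 1
            if p is not None and pos == p:
                clicks[item] += 1
        if p is not None:
            # Reward decreases with the clicked position, as in the abstract;
            # the exact 1/(p+1) discount is an assumption for this sketch.
            total_reward += 1.0 / (p + 1)
    return total_reward, clicks, views
```

Running this with one clearly dominant item shows the list selection concentrating on that item at the top position, which is the qualitative behavior the learning-to-rank formulation targets.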
Place, publisher, year, edition, pages
ACM Press, 2015. Vol. 43, no. 1, pp. 231-244.
Keywords
Ad-display optimization, Learning, Multi-armed bandits, Search engines
Identifiers
URN: urn:nbn:se:kth:diva-187384
DOI: 10.1145/2745844.2745852
ScopusID: 2-s2.0-84955606870
OAI: oai:DiVA.org:kth-187384
DiVA: diva2:931278
QC 20160527. Bibliographically approved.