Two-target algorithms for infinite-armed bandits with Bernoulli rewards
2013 (English). In: Advances in Neural Information Processing Systems 26 (2013), Morgan Kaufmann Publishers, 2013. Conference paper (Refereed).
We consider an infinite-armed bandit problem with Bernoulli rewards. The mean rewards are independent, uniformly distributed over [0, 1]. Rewards 1 and 0 are referred to as a success and a failure, respectively. We propose a novel algorithm where the decision to exploit any arm is based on two successive targets, namely, the total number of successes until the first failure and until the first m failures, respectively, where m is a fixed parameter. This two-target algorithm achieves a long-term average regret in √(2n) for a large parameter m and a known time horizon n. This regret is optimal and strictly less than the regret achieved by the best known algorithms, which is in 2√n. The results are extended to any mean-reward distribution whose support contains 1 and to unknown time horizons. Numerical experiments show the performance of the algorithm for finite time horizons.
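The two-target rule described in the abstract can be sketched as a short simulation. This is a hedged illustration only: the target values `l1` and `l2`, and the failure budget `m`, are placeholder parameters, not the tuned choices analyzed in the paper.

```python
import random

def two_target_run(n, l1, l2, m, seed=None):
    """Sketch of a two-target policy over a horizon of n Bernoulli pulls.

    Each fresh arm has a mean drawn uniformly from [0, 1]. The arm is
    kept only if it meets two successive targets: at least l1 successes
    before its first failure, then at least l2 successes before its m-th
    failure. An arm that meets both targets is exploited for the rest of
    the horizon; otherwise a new arm is sampled. Returns the total reward.
    """
    rng = random.Random(seed)
    total_reward = 0
    t = 0
    while t < n:
        mu = rng.random()                  # mean of the new arm
        successes, failures = 0, 0
        accepted = True
        while t < n:
            r = 1 if rng.random() < mu else 0
            total_reward += r
            t += 1
            successes += r
            failures += 1 - r
            if failures == 1 and successes < l1:
                accepted = False           # first target missed: drop arm
                break
            if failures == m:
                if successes < l2:
                    accepted = False       # second target missed: drop arm
                break                      # both targets met: stop testing
        if accepted and t < n:
            # Exploit the accepted arm for all remaining rounds.
            while t < n:
                total_reward += 1 if rng.random() < mu else 0
                t += 1
    return total_reward
```

For example, `two_target_run(1000, l1=2, l2=12, m=20, seed=1)` runs one seeded horizon; the average regret `n - E[total_reward]` is the quantity the paper bounds by roughly √(2n) for well-chosen targets.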
Place, publisher, year, edition, pages
Morgan Kaufmann Publishers, 2013.
Series: Advances in Neural Information Processing Systems, ISSN 1049-5258 ; 26
Keywords: Bandit problems, Bernoulli, Best-known algorithms, Finite time horizon, M-failure, Novel algorithm, Numerical experiments, Time horizons
National Category: Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers: URN: urn:nbn:se:kth:diva-139319; Scopus ID: 2-s2.0-84898996413; OAI: oai:DiVA.org:kth-139319; DiVA: diva2:685137
27th Annual Conference on Neural Information Processing Systems, NIPS 2013; Lake Tahoe, NV; United States; 5 December 2013 through 10 December 2013
Funder: EU, European Research Council; Swedish Research Council
QC 20140625. Available from: 2014-01-08; Created: 2014-01-08; Last updated: 2014-06-25; Bibliographically approved.