Minimal Exploration in Structured Stochastic Bandits
Combes, Richard (Centrale-Supelec, L2S, France)
Magureanu, Stefan (KTH, School of Electrical Engineering (EES), Automatic Control)
Proutiere, Alexandre (KTH, School of Electrical Engineering (EES), Automatic Control)
2017 (English). In: Advances in Neural Information Processing Systems, Neural Information Processing Systems Foundation, 2017, p. 1764-1772. Conference paper, Published paper (Refereed).
Abstract [en]

This paper introduces and addresses a wide class of stochastic bandit problems where the function mapping the arm to the corresponding reward exhibits some known structural properties. Most existing structures (e.g., linear, Lipschitz, unimodal, combinatorial, dueling, ...) are covered by our framework. We derive an asymptotic instance-specific regret lower bound for these problems, and develop OSSB, an algorithm whose regret matches this fundamental limit. OSSB is not based on the classical principle of "optimism in the face of uncertainty" or on Thompson sampling, and rather aims at matching the minimal exploration rates of sub-optimal arms as characterized in the derivation of the regret lower bound. We illustrate the efficiency of OSSB using numerical experiments in the case of the linear bandit problem and show that OSSB outperforms existing algorithms, including Thompson sampling.
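
The two objects the abstract refers to can be sketched concretely. Below is the standard Graves-Lai-style formulation of an instance-specific regret lower bound for structured bandits; the notation (eta, Lambda(theta), d) is chosen for illustration and is assumed rather than copied from the paper.

\liminf_{T \to \infty} \frac{R(T)}{\log T} \;\ge\; C(\theta), \qquad
C(\theta) = \inf_{\eta \ge 0} \sum_{x} \eta(x)\,\bigl(\mu^\star(\theta) - \mu(x,\theta)\bigr)
\quad \text{s.t.} \quad \inf_{\lambda \in \Lambda(\theta)} \sum_{x} \eta(x)\, d(x,\theta,\lambda) \;\ge\; 1,

where \mu(x,\theta) is the mean reward of arm x, \mu^\star(\theta) the optimal mean, \Lambda(\theta) the set of "confusing" parameters that agree with \theta on the optimal arms but have a different best arm, and d(x,\theta,\lambda) the KL divergence between the reward distributions of arm x under \theta and \lambda. An algorithm that pulls each sub-optimal arm x roughly \eta^\star(x) \log T times, where \eta^\star solves this programme, matches the bound.

The following is a minimal, hypothetical Python sketch of that exploration-rate-matching idea in the simplest (unstructured Bernoulli) case, where the optimisation has the closed-form solution \eta^\star(x) = 1 / \mathrm{kl}(\mu(x), \mu^\star). The function name, parameters, and the bare certainty-equivalence rule are illustrative assumptions; the paper's OSSB algorithm handles general structures and includes estimation and exploration safeguards not shown here.

import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence KL(Bern(p) || Bern(q)), with arguments clipped away from 0 and 1."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def exploration_rate_matching(means, horizon, seed=0):
    """Toy bandit loop: pull each sub-optimal arm about log(t) / kl(mu_x, mu*) times (hypothetical sketch)."""
    rng = np.random.default_rng(seed)
    K = len(means)
    counts = np.zeros(K)
    sums = np.zeros(K)
    # Initialise by pulling every arm once.
    for x in range(K):
        counts[x] += 1.0
        sums[x] += rng.binomial(1, means[x])
    for t in range(K, horizon):
        mu_hat = sums / counts
        best = int(np.argmax(mu_hat))
        # Plug-in target pull counts from the closed-form lower-bound solution.
        targets = np.zeros(K)
        for x in range(K):
            if x != best:
                targets[x] = np.log(t + 1) / max(kl_bernoulli(mu_hat[x], mu_hat[best]), 1e-12)
        under = np.where(counts < targets)[0]
        # Explore the least-sampled under-explored arm if any; otherwise exploit the empirical best.
        x = int(under[np.argmin(counts[under])]) if len(under) else best
        counts[x] += 1.0
        sums[x] += rng.binomial(1, means[x])
    return counts

if __name__ == "__main__":
    print("pulls per arm:", exploration_rate_matching([0.9, 0.8, 0.5], horizon=10_000))

In a run like the one above, the sub-optimal arms' pull counts grow logarithmically with the horizon, which is the behaviour the lower bound says is unavoidable and that OSSB is designed to match.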

Place, publisher, year, edition, pages
Neural Information Processing Systems Foundation, 2017, p. 1764-1772
Series
Advances in Neural Information Processing Systems, ISSN 1049-5258; 2017
National Category
Probability Theory and Statistics; Other Computer and Information Science
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-219883
Scopus ID: 2-s2.0-85046992885
OAI: oai:DiVA.org:kth-219883
DiVA, id: diva2:1165875
Conference
31st Annual Conference on Neural Information Processing Systems, NIPS 2017, Long Beach, United States, 4 December 2017 through 9 December 2017
Note

QC 20171218

Available from: 2017-12-14. Created: 2017-12-14. Last updated: 2018-05-28. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Scopus (Published version)
