Low-rank bandits via tight two-to-infinity singular subspace recovery
Laboratory for Information and Decision Systems, MIT, Cambridge, MA, USA.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). ORCID iD: 0000-0001-5779-1649
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). ORCID iD: 0000-0002-4679-4673
2024 (English). In: International Conference on Machine Learning, ICML 2024, ML Research Press, 2024, p. 21430-21485. Conference paper, Published paper (Refereed)
Abstract [en]

We study contextual bandits with low-rank structure where, in each round, if the (context, arm) pair (i, j) ∈ [m] × [n] is selected, the learner observes a noisy sample of the (i, j)-th entry of an unknown low-rank reward matrix. Successive contexts are generated randomly in an i.i.d. manner and are revealed to the learner. For such bandits, we present efficient algorithms for policy evaluation, best policy identification and regret minimization. For policy evaluation and best policy identification, we show that our algorithms are nearly minimax optimal. For instance, the number of samples required to return an ε-optimal policy with probability at least 1 − δ typically scales as (m + n)/ε² log(1/δ). Our regret minimization algorithm enjoys minimax guarantees typically scaling as r^{5/4}(m + n)^{3/4} √T, which improves over existing algorithms. All the proposed algorithms consist of two phases: they first leverage spectral methods to estimate the left and right singular subspaces of the low-rank reward matrix. We show that these estimates enjoy tight error guarantees in the two-to-infinity norm. This in turn allows us to reformulate our problems as a misspecified linear bandit problem with dimension roughly r(m + n) and misspecification controlled by the subspace recovery error, as well as to design the second phase of our algorithms efficiently.
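To make the two-phase structure described in the abstract concrete, here is a minimal, hypothetical sketch in Python/NumPy. Phase 1 estimates the left and right singular subspaces by a truncated SVD of the empirical entry-mean matrix built from the noisy samples; the feature map then parameterizes the reward matrix linearly in a space of dimension r(m + n), the kind of reformulation as a (misspecified) linear bandit mentioned above. The function names, the plain truncated-SVD estimator, and the particular parameterization M ≈ Û A + B V̂ᵀ are illustrative assumptions, not the authors' exact algorithm.

```python
# Hypothetical sketch of the two-phase idea (not the paper's exact algorithm).
import numpy as np


def estimate_singular_subspaces(samples, m, n, r):
    """Phase 1 (assumed): estimate rank-r left/right singular subspaces from
    noisy entry observations via a truncated SVD of the empirical mean matrix.

    samples : iterable of (i, j, reward) triples for selected (context, arm) pairs.
    Returns (U_hat, V_hat) with orthonormal columns, shapes (m, r) and (n, r).
    """
    sums = np.zeros((m, n))
    counts = np.zeros((m, n))
    for i, j, y in samples:
        sums[i, j] += y
        counts[i, j] += 1
    # Empirical mean of each observed entry; unobserved entries stay at zero.
    M_hat = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
    U, _, Vt = np.linalg.svd(M_hat, full_matrices=False)
    return U[:, :r], Vt[:r, :].T


def feature_map(U_hat, V_hat, i, j):
    """One natural d = r(m + n)-dimensional linear reformulation (assumed):
    model the reward matrix as M ≈ U_hat @ A + B @ V_hat.T with unknown
    A (r x n) and B (m x r), so that M[i, j] = <theta, phi(i, j)> with
    theta = (vec(A), vec(B)). Misspecification comes from the subspace errors.
    """
    m, r = U_hat.shape
    n = V_hat.shape[0]
    phi_A = np.zeros((r, n))
    phi_A[:, j] = U_hat[i, :]   # contributes U_hat[i, :] @ A[:, j]
    phi_B = np.zeros((m, r))
    phi_B[i, :] = V_hat[j, :]   # contributes B[i, :] @ V_hat[j, :]
    return np.concatenate([phi_A.ravel(), phi_B.ravel()])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, r, n_samples = 30, 40, 2, 20000
    M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # ground-truth low-rank matrix
    samples = []
    for _ in range(n_samples):
        i, j = rng.integers(m), rng.integers(n)
        samples.append((i, j, M[i, j] + rng.normal()))       # noisy entry observation
    U_hat, V_hat = estimate_singular_subspaces(samples, m, n, r)
    phi = feature_map(U_hat, V_hat, 0, 0)
    print(phi.shape)   # (r * (m + n),) = (140,)
```

Any off-the-shelf linear bandit routine could then be run on φ(i, j) in phase 2; per the abstract, the key point is that the two-to-infinity-norm guarantees on the estimated subspaces keep the resulting misspecification small.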

Place, publisher, year, edition, pages
ML Research Press, 2024. p. 21430-21485
National Category
Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-353951
Scopus ID: 2-s2.0-85203793840
OAI: oai:DiVA.org:kth-353951
DiVA, id: diva2:1901027
Conference
41st International Conference on Machine Learning, ICML 2024, July 21-27, 2024, Vienna, Austria
Note

QC 20240926

Available from: 2024-09-25. Created: 2024-09-25. Last updated: 2024-09-26. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Réveillard, William; Stojanovic, Stefan; Proutiere, Alexandre

