Ergodic mirror descent
2012 (English). In: SIAM Journal on Optimization, ISSN 1052-6234, E-ISSN 1095-7189, Vol. 22, no. 4, pp. 1549-1578. Article in journal (Refereed). Published.
We generalize stochastic subgradient descent methods to situations in which we do not receive independent samples from the distribution over which we optimize, but instead receive samples coupled over time. We show that as long as the source of randomness is suitably ergodic, in the sense that it converges quickly enough to a stationary distribution, the method enjoys strong convergence guarantees, both in expectation and with high probability. This result has implications for stochastic optimization in high-dimensional spaces, peer-to-peer distributed optimization schemes, decision problems with dependent data, and stochastic optimization problems over combinatorial spaces.
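The setting in the abstract can be illustrated with a small numerical sketch: mirror descent with the entropic mirror map on the probability simplex, where the stochastic subgradients are evaluated at states of a Markov chain rather than at i.i.d. samples. Everything below (the two-state transition matrix `P`, the per-state losses `C`, the step size `eta`) is an invented toy example, not the paper's construction; the paper's actual algorithm and assumptions are more general.

```python
import numpy as np

def ergodic_mirror_descent(grad, chain_step, x0, s0, steps, eta):
    """Entropic mirror descent on the simplex, fed subgradients
    evaluated at states of a (non-i.i.d.) Markov chain."""
    x = x0.copy()
    avg = np.zeros_like(x)
    s = s0
    for t in range(1, steps + 1):
        s = chain_step(s)          # dependent sample: next state of the chain
        g = grad(x, s)             # stochastic subgradient at state s
        x = x * np.exp(-eta * g)   # exponentiated-gradient (mirror) update
        x /= x.sum()               # renormalize onto the simplex
        avg += (x - avg) / t       # running average of the iterates
    return avg

# Toy problem (assumed example): minimize the stationary expectation of
# a state-dependent linear loss <C[s], x> over the 3-simplex.
rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],          # two-state Markov transition matrix
              [0.2, 0.8]])
C = np.array([[1.0, 0.0, 0.5],     # loss vector observed in state 0
              [0.0, 1.0, 0.5]])    # loss vector observed in state 1

def chain_step(s):
    return rng.choice(2, p=P[s])

def grad(x, s):
    return C[s]                    # subgradient of the linear loss

x0 = np.ones(3) / 3
x_bar = ergodic_mirror_descent(grad, chain_step, x0, s0=0,
                               steps=5000, eta=0.05)
```

The stationary distribution of `P` is (2/3, 1/3), so the expected loss vector is (2/3, 1/3, 1/2) and the averaged iterate `x_bar` should concentrate on the second coordinate, even though consecutive samples are strongly correlated.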
Keywords: convex programming, stochastic optimization, Markov chain, Monte Carlo sampling, mixing, mirror descent algorithm
Subject categories: Mathematics; Engineering and Technology
Identifiers:
URN: urn:nbn:se:kth:diva-116742
DOI: 10.1137/110836043
ISI: 000312734300015
Scopus ID: 2-s2.0-84871567576
OAI: oai:DiVA.org:kth-116742
DiVA: diva2:600765
QC 20130125. 2013-01-25. Bibliographically approved.