Ergodic mirror descent
2011 (English). In: Annual Allerton Conference on Communication, Control, and Computing, 2011, pp. 701-706. Conference paper (refereed).
We generalize stochastic subgradient methods to situations in which we do not receive independent samples from the distribution over which we optimize, but instead receive samples that are coupled over time. We show that as long as the source of randomness is suitably ergodic, meaning it converges quickly enough to a stationary distribution, the method enjoys strong convergence guarantees, both in expectation and with high probability. This result has implications for high-dimensional stochastic optimization, peer-to-peer distributed optimization schemes, and stochastic optimization problems over combinatorial spaces.
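To make the setting concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a stochastic subgradient method driven by non-i.i.d. samples: the data follow a simple two-state Markov chain rather than being drawn independently, and with a Euclidean mirror map the update reduces to plain SGD with a decaying step size. All names, the objective, and the chain's transition probabilities are assumptions chosen for illustration.

```python
import random

def ergodic_subgradient_descent(steps=20000, seed=0):
    """Minimize E_pi[(x - xi)^2 / 2], where xi follows a two-state
    Markov chain on {0, 1}: samples are coupled over time, not i.i.d.
    (Illustrative example only; parameters are assumptions.)"""
    rng = random.Random(seed)
    # Transition probabilities: P(0 -> 1) = 0.3, P(1 -> 0) = 0.1,
    # so the stationary distribution puts pi(1) = 0.3 / (0.3 + 0.1) = 0.75.
    state = 0
    x = 0.0
    avg = 0.0
    for t in range(1, steps + 1):
        # Advance the Markov chain one step (the "ergodic" sample source).
        if state == 0:
            state = 1 if rng.random() < 0.3 else 0
        else:
            state = 0 if rng.random() < 0.1 else 1
        grad = x - state           # subgradient of (x - state)^2 / 2 at x
        x -= grad / (t ** 0.5)     # decaying step size, ~ 1 / sqrt(t)
        avg += (x - avg) / t       # running average of the iterates
    return avg

# The averaged iterate approaches the stationary mean E_pi[xi] = 0.75,
# even though consecutive samples are strongly correlated.
```

Because the chain mixes quickly, the averaged iterate behaves much as it would under i.i.d. sampling, which is the intuition behind the ergodicity condition in the abstract.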
Keywords: Distributed optimization, Ergodics, High probability, High-dimensional, Peer to peer, Stationary distribution, Stochastic optimization problems, Stochastic optimizations, Strong convergence, Subgradient methods, Communication, Probability distributions, Stochastic systems, Optimization
Identifiers
URN: urn:nbn:se:kth:diva-149919
DOI: 10.1109/Allerton.2011.6120236
Scopus ID: 2-s2.0-84856096317
ISBN: 9781457718168
OAI: oai:DiVA.org:kth-149919
DiVA: diva2:742227
2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton 2011), 28-30 September 2011, Monticello, IL
Bibliographically approved