Convergence Bounds for Compressed Gradient Methods with Memory Based Error Compensation
KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control. ORCID iD: 0000-0003-4473-2011
Harvard University, School of Engineering and Applied Sciences, 33 Oxford St, Cambridge, MA 02138, USA.
KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control.
2019 (English). In: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2019, p. 2857-2861. Conference paper, Published paper (Refereed)
Abstract [en]

The veritable scale of modern data necessitates information compression in parallel/distributed big-data optimization. Compression schemes using memory-based error compensation have displayed superior performance in practice; however, to date there are no theoretical explanations for these observed advantages. This paper provides the first theoretical support for why such compression schemes yield higher-accuracy solutions in optimization. Our results cover both gradient and incremental gradient algorithms for quadratic optimization. Unlike previous works, our theoretical results explicitly quantify the accuracy gains from error compensation, especially for ill-conditioned problems. Finally, the numerical results on linear least-squares problems validate the benefit of error compensation and demonstrate the tightness of our convergence guarantees.
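To make the compensation mechanism concrete, the following is a minimal sketch, not the paper's exact algorithm or notation, of gradient descent with memory-based error compensation on a linear least-squares problem f(x) = 0.5*||Ax - b||^2. The top-k compressor, the step size, and the problem data are illustrative assumptions; the essential pattern is that a memory term e accumulates whatever the compressor discards and feeds it back into subsequent gradients.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)

x = np.zeros(50)                          # iterate
e = np.zeros(50)                          # memory: accumulated compression error
gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L for this quadratic

for _ in range(500):
    g = A.T @ (A @ x - b)   # exact gradient of 0.5*||Ax - b||^2
    c = top_k(g + e, k=5)   # compress the gradient plus the remembered error
    x = x - gamma * c       # descend along the compressed direction
    e = g + e - c           # remember what the compressor discarded

print("residual norm:", np.linalg.norm(A @ x - b))
```

Running the same loop with e held at zero (plain compressed gradient descent) typically stalls at a larger residual; that accuracy gap is what the paper's bounds quantify.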

Place, publisher, year, edition, pages
IEEE, 2019. p. 2857-2861
Series
International Conference on Acoustics, Speech and Signal Processing (ICASSP), ISSN 1520-6149
Keywords [en]
Quadratic optimization, quantization, gradient descent, incremental gradient methods
National Category
Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-261056
DOI: 10.1109/ICASSP.2019.8682931
ISI: 000482554003018
Scopus ID: 2-s2.0-85068974918
ISBN: 978-1-4799-8131-1 (print)
OAI: oai:DiVA.org:kth-261056
DiVA, id: diva2:1356411
Conference
44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 12-17, 2019, Brighton, England
Note

QC 20191001

Available from: 2019-10-01. Created: 2019-10-01. Last updated: 2019-10-01. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records BETA

Khirirat, Sarit; Johansson, Mikael
