Alternating Mixed-Integer Programming and Neural Network Training for Approximating Stochastic Two-Stage Problems
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. ORCID iD: 0000-0003-0299-5745
ABB Corporate Research Center, Wallstadter Str. 59, 68526 Ladenburg, Germany.
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. ORCID iD: 0000-0002-5415-1715
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. ORCID iD: 0000-0001-6352-0968
2024 (English). In: Machine Learning, Optimization, and Data Science - 9th International Conference, LOD 2023, Revised Selected Papers, Springer Nature, 2024, Vol. 14506, p. 124-139. Conference paper, Published paper (Refereed).
Abstract [en]

The presented work addresses two-stage stochastic programs (2SPs), a broadly applicable model for optimization problems with uncertain parameters and adjustable decision variables. When the adjustable, or second-stage, variables contain discrete decisions, the corresponding 2SPs are known to be NP-complete. The standard approach of forming a single-stage deterministic equivalent problem can be computationally challenging even for small instances, as the number of variables and constraints scales with the number of scenarios. To avoid forming a potentially huge mixed-integer linear programming (MILP) problem, we build on an approach that approximates the expected value of the second-stage problem by a neural network (NN) and encodes the resulting NN into the first-stage problem. The proposed algorithm alternates between optimizing the first-stage variables and retraining the NN. We demonstrate the value of our approach on the example of computing operating points in power systems, showing that the alternating approach yields improved first-stage decisions and a tighter approximation between the expected objective and its neural network approximation.
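
For context, a 2SP has the form min_x { c^T x + E[Q(x, ξ)] : x ∈ X }, where the second-stage value Q(x, ξ) is itself the optimal value of a recourse problem under scenario ξ. The following is a minimal, self-contained Python sketch of the alternating idea on a hypothetical one-dimensional toy problem; the problem data and all names are illustrative, not taken from the paper. In particular, the paper embeds the trained ReLU network into the first-stage MILP through mixed-integer constraints and calls an MILP solver; to keep this sketch dependency-free, that step is replaced by enumeration over a grid of candidate first-stage points.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical second-stage value Q(x, xi): recourse cost of covering a
# shortfall (expensive) or holding surplus (cheap) relative to demand xi.
def second_stage_value(x, xi):
    return 3.0 * max(xi - x, 0.0) + 1.0 * max(x - xi, 0.0)

def expected_Q(x, scenarios):
    return float(np.mean([second_stage_value(x, xi) for xi in scenarios]))

# Tiny ReLU network phi(x) ~ E[Q(x, .)], trained by batch gradient descent.
class ReluNet:
    def __init__(self, hidden=16):
        self.W1 = rng.normal(scale=0.5, size=(hidden, 1))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.5, size=(1, hidden))
        self.b2 = np.zeros(1)

    def forward(self, x):
        # x: 1-D array of first-stage points; returns phi(x) per point.
        self.z = self.W1 @ x.reshape(1, -1) + self.b1[:, None]
        self.h = np.maximum(self.z, 0.0)
        return (self.W2 @ self.h + self.b2[:, None]).ravel()

    def train(self, xs, ys, lr=1e-2, epochs=500):
        n = len(xs)
        for _ in range(epochs):
            err = self.forward(xs) - ys   # residuals; the 1/n of the MSE
            dW2 = err[None, :] @ self.h.T / n  # gradient is folded in below
            db2 = err.mean(keepdims=True)
            dz = (self.W2.T @ err[None, :]) * (self.z > 0)
            dW1 = dz @ xs.reshape(-1, 1) / n
            db1 = dz.mean(axis=1)
            self.W2 -= lr * dW2; self.b2 -= lr * db2
            self.W1 -= lr * dW1; self.b1 -= lr * db1

def first_stage_cost(x):
    return 0.5 * x                          # illustrative linear cost

candidates = np.linspace(0.0, 3.0, 301)     # stand-in for the MILP feasible set
scenarios = rng.normal(1.5, 0.4, size=200)  # sampled demand scenarios

net = ReluNet()
xs = rng.uniform(0.0, 3.0, size=64)         # initial training points
ys = np.array([expected_Q(x, scenarios) for x in xs])

for it in range(5):
    net.train(xs, ys)
    # First-stage step: minimize cost + NN surrogate (here by enumeration;
    # the paper solves an MILP with the ReLU network encoded as constraints).
    obj = first_stage_cost(candidates) + net.forward(candidates)
    x_star = candidates[np.argmin(obj)]
    true_obj = first_stage_cost(x_star) + expected_Q(x_star, scenarios)
    print(f"iter {it}: x*={x_star:.3f}, surrogate={obj.min():.3f}, true={true_obj:.3f}")
    # Retraining step: add samples near the incumbent so the surrogate
    # tightens around the current first-stage decision.
    new_xs = np.clip(x_star + rng.normal(0.0, 0.2, size=16), 0.0, 3.0)
    xs = np.concatenate([xs, new_xs])
    ys = np.concatenate([ys, [expected_Q(x, scenarios) for x in new_xs]])

The sampling-near-the-incumbent heuristic above is our own simplification; it loosely mirrors how alternating between first-stage optimization and retraining can tighten the approximation around the decisions that matter, which is the effect the abstract reports.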

Place, publisher, year, edition, pages
Springer Nature, 2024. Vol. 14506, p. 124-139.
Series
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN 0302-9743 ; 14506
Keywords [en]
Neural Network, Power Systems, Stochastic Optimization
National Category
Computational Mathematics
Identifiers
URN: urn:nbn:se:kth:diva-344367
DOI: 10.1007/978-3-031-53966-4_10
ISI: 001217090300010
Scopus ID: 2-s2.0-85186266492
OAI: oai:DiVA.org:kth-344367
DiVA, id: diva2:1844371
Conference
9th International Conference on Machine Learning, Optimization, and Data Science (LOD 2023), Grasmere, United Kingdom, September 22-26, 2023.
Note

QC 20240314

Part of ISBN 9783031539657

Available from: 2024-03-13. Created: 2024-03-13. Last updated: 2024-06-14. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Kronqvist, Jan; Rolfes, Jan; Zhao, Shudian
