KTH Publications (DiVA)
Deep learning for solving initial path optimization of mean-field systems with memory
Department of Computer Science, University of Barika, Barika, Algeria.
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Probability, Mathematical Physics and Statistics. ORCID iD: 0000-0003-1662-0215
Department of Stochastics and its Applications, University of Cottbus & FU Berlin, Cottbus, Germany.
Department of Mathematics, University of Oslo, Oslo, Norway. ORCID iD: 0000-0002-5168-142X
2025 (English). In: Stochastics: An International Journal of Probability and Stochastic Processes, ISSN 1744-2508, E-ISSN 1744-2516, Vol. 97, no. 8, p. 1016-1037. Article in journal (Refereed). Published
Abstract [en]

We consider the problem of finding the optimal initial investment strategy for a system modelled by a linear McKean–Vlasov (mean-field) stochastic differential equation with delay, driven by Brownian motion and a pure-jump Poisson random measure. The goal is to determine the optimal initial values for the system on the interval [−𝛿,0], where 𝛿>0 is a delay constant, before the system starts at t = 0. Because of the delay in the dynamics, the system is, after startup, influenced by these initial investment values. It is known that linear stochastic delay differential equations are equivalent to stochastic Volterra integral equations; by exploiting this equivalence, we obtain implicit expressions for the optimal investment. Moreover, we propose a deep neural network-based algorithm to solve the stochastic control problem with delay. Specifically, we employ a multi-layer feed-forward neural network to model the control on the interval [−𝛿,0], and train it by back-propagation: the gradient of the loss function with respect to the network weights is computed and the weights are updated by stochastic gradient descent (SGD).
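The abstract's scheme can be illustrated with a toy sketch. This is not the authors' algorithm or code: the dynamics (a simple scalar delayed SDE dX_t = X_{t−𝛿} dt + dB_t, without mean-field or jump terms), the quadratic terminal cost E[(X_T − target)²], and all network sizes and hyperparameters are made-up assumptions. It only shows the mechanics described in the abstract: a feed-forward network parameterizes the initial segment y(t) on [−𝛿, 0], the delayed dynamics are Euler-discretized on [0, T], and the weights are trained by back-propagation with SGD.

```python
import numpy as np

# Illustrative sketch only -- NOT the paper's algorithm. A one-hidden-layer
# tanh network t -> y(t) parameterizes the initial segment on [-delta, 0];
# the toy delayed dynamic is dX_t = X_{t-delta} dt + dB_t on [0, T], and we
# minimize the Monte Carlo estimate of E[(X_T - target)^2] by SGD.
rng = np.random.default_rng(0)
delta, T, n, paths = 1.0, 1.0, 20, 256   # delay, horizon, grid size, MC paths
dt = T / n
target = 1.0

W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

t_grid = np.linspace(-delta, 0.0, n)[:, None]   # times in [-delta, 0]

def forward(t):
    h = np.tanh(t @ W1 + b1)
    return h @ W2 + b2, h

lr, losses = 0.02, []
for step in range(200):
    y, h = forward(t_grid)                       # initial segment, shape (n, 1)
    # Euler scheme: X_{k+1} = X_k + X_{t_k - delta} dt + dB_k, X_0 = y(0);
    # for t_k in [0, T - dt] the lagged time t_k - delta lies in [-delta, 0),
    # so the lag is read off the network output y.
    dB = rng.normal(0.0, np.sqrt(dt), (paths, n))
    X = np.zeros((paths, n + 1))
    X[:, 0] = y[-1, 0]
    for k in range(n):
        X[:, k + 1] = X[:, k] + y[k, 0] * dt + dB[:, k]
    err = X[:, -1] - target
    losses.append(np.mean(err ** 2))
    # exact sensitivity of X_T w.r.t. each y_k for this linear toy dynamic:
    # X_T = y_{n-1} + dt * sum_k y_k + noise
    g_y = np.full(n, dt); g_y[-1] += 1.0
    grad_y = (2.0 * np.mean(err) * g_y)[:, None]  # dLoss/dy, shape (n, 1)
    # back-propagate through the network and take an SGD step
    gW2 = h.T @ grad_y;        gb2 = grad_y.sum(0)
    gh = grad_y @ W2.T * (1.0 - h ** 2)
    gW1 = t_grid.T @ gh;       gb1 = gh.sum(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= lr * g

print(losses[0], losses[-1])   # loss decreases as the initial segment adapts
```

In the paper's actual setting the sensitivity of the cost to the initial segment is not available in closed form; this sketch uses the exact gradient of the linear toy model purely to keep the example self-contained, while the network training loop (forward pass, back-propagation, SGD update) mirrors the procedure the abstract describes.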

Place, publisher, year, edition, pages
Informa UK Limited, 2025. Vol. 97, no. 8, p. 1016-1037
National Category
Mathematical sciences
Identifiers
URN: urn:nbn:se:kth:diva-366373
DOI: 10.1080/17442508.2024.2402741
ISI: 001325330200001
Scopus ID: 2-s2.0-85205341015
OAI: oai:DiVA.org:kth-366373
DiVA, id: diva2:1982135
Funder
Swedish Research Council, 2020-04697
Note

QC 20260123

Available from: 2025-07-07. Created: 2025-07-07. Last updated: 2026-01-23. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Agram, Nacira; Øksendal, Bernt

