Time Series Forecasting Models Copy the Past: How to Mitigate
École Polytechnique, Palaiseau, France.
École Polytechnique, Palaiseau, France.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0003-2404-6030
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. École Polytechnique, Palaiseau, France. ORCID iD: 0000-0001-5923-4440
2022 (English). In: 31st International Conference on Artificial Neural Networks, ICANN 2022 / [ed] Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M., Springer Nature, 2022, Vol. 13529, p. 366-378. Conference paper, Published paper (Refereed)
Abstract [en]

Time series forecasting is at the core of important application domains and poses significant challenges to machine learning algorithms. Recently, neural network architectures have been widely applied to the problem of time series forecasting. Most of these models are trained by minimizing a loss function that measures the deviation of the predictions from the true values. Typical loss functions include mean squared error (MSE) and mean absolute error (MAE). In the presence of noise and uncertainty, neural network models tend to replicate the last observed value of the time series, thus limiting their applicability to real-world data. In this paper, we provide a formal definition of the above problem and give examples of forecasts where the problem is observed. We also propose a regularization term penalizing the replication of previously seen values. We evaluate the proposed regularization term on both synthetic and real-world datasets. Our results indicate that the regularization term mitigates the aforementioned problem to some extent and gives rise to more robust models.
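The record contains only the abstract, not the paper's exact formulation of the regularizer. As an illustration of the idea it describes, here is a minimal sketch (all names and the penalty's functional form are hypothetical) of a loss that combines MSE with a term discouraging forecasts that merely replicate the last observed value:

```python
import numpy as np

def mse(pred, target):
    """Standard mean squared error between forecasts and true values."""
    return np.mean((pred - target) ** 2)

def copy_penalty(pred, last_observed):
    """Hypothetical penalty: grows as the forecast approaches a pure copy
    of the last observed value, reaching its maximum of 1.0 when the
    prediction equals that value exactly."""
    return np.mean(np.exp(-np.abs(pred - last_observed)))

def regularized_loss(pred, target, last_observed, lam=0.1):
    # Total objective: accuracy term plus the copy-discouraging term,
    # weighted by a regularization coefficient lam.
    return mse(pred, target) + lam * copy_penalty(pred, last_observed)
```

Under this objective, a naive "copy the past" forecast pays an extra cost even when its MSE looks competitive, which is the behavior the abstract says the proposed term is meant to mitigate.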

Place, publisher, year, edition, pages
Springer Nature, 2022. Vol. 13529, p. 366-378
Series
Lecture Notes in Computer Science, ISSN 0302-9743
Keywords [en]
Time-series forecasting, Deep learning, Loss functions
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-321029
DOI: 10.1007/978-3-031-15919-0_31
ISI: 000866210600031
Scopus ID: 2-s2.0-85138770233
OAI: oai:DiVA.org:kth-321029
DiVA, id: diva2:1708618
Conference
31st International Conference on Artificial Neural Networks (ICANN), September 6-9, 2022, University of the West of England, Bristol, England
Note

Part of proceedings: ISBN 978-3-031-15919-0, ISBN 978-3-031-15918-3

QC 20221104

Available from: 2022-11-04. Created: 2022-11-04. Last updated: 2022-11-04. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Xu, Nancy
Vazirgiannis, Michalis

Search in DiVA

By author/editor
Xu, Nancy
Vazirgiannis, Michalis
By organisation
Software and Computer systems, SCS
Computer Sciences

Search outside of DiVA

Google
Google Scholar
