WildfireSpreadTS: A dataset of multi-modal time series for wildfire spread prediction
Gerard, Sebastian. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Zhao, Yu. KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geoinformatics. ORCID iD: 0000-0002-4230-2467
Sullivan, Josephine. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-2784-7300
2023 (English). In: Advances in Neural Information Processing Systems 36 - 37th Conference on Neural Information Processing Systems, NeurIPS 2023. Neural Information Processing Systems Foundation, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

We present a multi-temporal, multi-modal remote-sensing dataset for predicting how active wildfires will spread at a resolution of 24 hours. The dataset consists of 13,607 images across 607 fire events in the United States from January 2018 to October 2021. For each fire event, the dataset contains a full time series of daily observations, containing detected active fires and variables related to fuel, topography and weather conditions. The dataset is challenging due to: a) its inputs being multi-temporal, b) its 23 multi-modal input channels, c) highly imbalanced labels and d) noisy labels, caused by smoke, clouds and inaccuracies in the active fire detection. The underlying complexity of the physical processes adds to these challenges. Compared to existing public datasets in this area, WildfireSpreadTS allows for multi-temporal modeling of spreading wildfires, due to its time series structure. Furthermore, we provide additional input modalities and a high spatial resolution of 375 m for the active fire maps. We publish this dataset to encourage further research on this important task with multi-temporal, noise-resistant or generative methods, uncertainty estimation or advanced optimization techniques that deal with the high-dimensional input space.
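To make the data layout described in the abstract concrete, the following is a minimal PyTorch-style sketch of how a single fire event could be sliced into training samples: a short window of daily 23-channel observations as input, and the next day's active-fire mask as the target. The array shape, the channel ordering and the fire_channel index are illustrative assumptions, not the authors' released loading code.

# Hypothetical sketch of the per-fire-event layout described in the abstract:
# a daily time series with 23 channels (fuel, topography, weather and detected
# active fires), and the task of predicting the next day's fire mask.
# Shapes and the position of the active-fire channel are assumptions.
import numpy as np
import torch
from torch.utils.data import Dataset


class FireEventDataset(Dataset):
    """Yields (past observations, next-day active-fire mask) pairs from one
    fire event stored as an array of shape (T, 23, H, W)."""

    def __init__(self, event: np.ndarray, window: int = 5, fire_channel: int = 22):
        assert event.ndim == 4 and event.shape[1] == 23
        self.event = event.astype(np.float32)
        self.window = window
        self.fire_channel = fire_channel  # assumed index of the active-fire layer

    def __len__(self) -> int:
        # One sample per day that has `window` days of history and a next day.
        return max(0, self.event.shape[0] - self.window)

    def __getitem__(self, idx: int):
        x = self.event[idx : idx + self.window]                   # (window, 23, H, W)
        y = self.event[idx + self.window, self.fire_channel] > 0  # next-day fire mask
        return torch.from_numpy(x), torch.from_numpy(y.astype(np.float32))


# Example with 20 days of synthetic data on a 64x64 grid:
ds = FireEventDataset(np.random.rand(20, 23, 64, 64))
x, y = ds[0]
print(x.shape, y.shape)  # torch.Size([5, 23, 64, 64]) torch.Size([64, 64])

The binary next-day mask target also reflects the label imbalance the abstract highlights: burning pixels are rare relative to the background, so loss weighting or similar measures are typically needed.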

Place, publisher, year, edition, pages
Neural Information Processing Systems Foundation, 2023.
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-346140
ISI: 001230083405038
Scopus ID: 2-s2.0-85191155663
OAI: oai:DiVA.org:kth-346140
DiVA, id: diva2:1855925
Conference
37th Conference on Neural Information Processing Systems, NeurIPS 2023, New Orleans, United States of America, Dec 10 2023 - Dec 16 2023
Note

QC 20240506

Available from: 2024-05-03 Created: 2024-05-03 Last updated: 2024-08-20. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Zhao, Yu; Sullivan, Josephine

Search in DiVA

By author/editor
Gerard, Sebastian; Zhao, Yu; Sullivan, Josephine
By organisation
Robotics, Perception and Learning, RPL; Geoinformatics
Computer and Information Sciences
