KTH Publications (kth.se)

Publications (10 of 203)
Liu, X., Lu, L., Guo, H., Melchiorri, M., Hafner, S., Ban, Y., . . . Baqa, M. F. (2025). Assessing urban sustainability in the Belt and Road region: a city-level analysis of SDG 11 indicators using earth observations. International Journal of Digital Earth, 18(1), Article ID 2522395.
2025 (English). In: International Journal of Digital Earth, ISSN 1753-8947, E-ISSN 1753-8955, Vol. 18, no 1, article id 2522395. Article in journal (Refereed). Published.
Abstract [en]

Tracking progress toward sustainable urban development requires detailed assessments of key indicators to support evidence-based policy and planning. However, data limitations often constrain evaluations of Sustainable Development Goal (SDG) indicators to national or sub-national levels. This study utilizes global urban spatial products derived from Earth observation data to evaluate the spatiotemporal dynamics of two urban SDG indicators, 11.3.1 (Land Use Efficiency, LUE) and 11.6.2 (population-weighted annual mean PM₂.₅ concentration in cities, PPM₂.₅), for over 7000 cities in the Belt and Road region between 2000 and 2020. The results show that 30.6% of cities improved LUE from 2010 to 2020 compared to 2000-2010, while 24.3% experienced deterioration. For PM₂.₅ exposure, 67.8% of cities experienced increased PPM₂.₅ by 2020 compared to 2000 levels, with the largest increases occurring in South Asia. The integrated assessment of the two indicators underscores the necessity of coordinated strategies that concurrently enhance land use efficiency and strengthen air pollution control measures. These findings demonstrate the value of Earth observation data for revealing regional disparities and informing localized SDG implementation strategies.

Place, publisher, year, edition, pages
Informa UK Limited, 2025
Keywords
Belt and Road, Earth observation, SDG11, sustainable development goals
National Category
Environmental Sciences
Identifiers
urn:nbn:se:kth:diva-368861 (URN); 10.1080/17538947.2025.2522395 (DOI); 001518910700001 (ISI); 2-s2.0-105009410136 (Scopus ID)
Note

QC 20250828

Available from: 2025-08-28. Created: 2025-08-28. Last updated: 2025-09-24. Bibliographically approved.
Zhao, Y. & Ban, Y. (2025). Assessment of L-Band and C-Band SAR on Burned Area Mapping of Multiseverity Forest Fires Using Deep Learning. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 18, 14148-14159
2025 (English). In: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, ISSN 1939-1404, E-ISSN 2151-1535, Vol. 18, p. 14148-14159. Article in journal (Refereed). Published.
Abstract [en]

Earth observation-based burned area mapping is critical for evaluating the impact of wildfires on ecosystems. Optical satellite data from Landsat and Sentinel-2 are often used to map burned areas. However, they suffer from interference caused by clouds and smoke. Capable of penetrating through clouds and smoke, synthetic aperture radar (SAR) at C- and L-band is also widely used for burned area mapping. With a longer wavelength than C-band SAR, L-band SAR is more sensitive to trunks and branches; conversely, C-band SAR is sensitive to tree canopy leaves. The wavelength differences between the two sensors thus result in varying abilities to detect burned areas of different burn severities, as burn severity determines the structural changes in the forest. This research compares ALOS Phased-Array L-band Synthetic Aperture Radar-2 (PALSAR-2) to Sentinel-1 C-band SAR for mapping burned areas across low, medium, and high burn severities. Moreover, a deep-learning-based workflow is utilized to segment burned area maps from both C-band and L-band images. ConvNet-based and transformer-based segmentation models are trained and tested on global wildfires in broadleaf and needle-leaf forests. The results indicate that L-band data show higher backscatter changes than C-band data for low and medium severity. In addition, the segmentation models with L-band data as input achieve higher F1 (0.840) and IoU (0.729) scores than models with C-band data (0.757, 0.630). Finally, an ablation study tested different combinations of input bands and the effectiveness of total-variation loss. The study highlights the importance of SAR log-ratio images as input and demonstrates that total-variation loss can reduce the noise in SAR images and improve segmentation accuracy.
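The backscatter-change analysis above rests on SAR log-ratio images. As a minimal illustrative sketch (not code from the paper; the array values and function name are invented), the log-ratio in decibels between pre- and post-fire intensity images can be computed as:

```python
import numpy as np

def log_ratio_db(pre: np.ndarray, post: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Log-ratio change image in decibels: 10 * log10(post / pre).

    Fire-induced structural change alters the backscattered intensity, so
    pixels with large |log-ratio| are candidate burned areas. `eps` guards
    against division by zero in no-data pixels.
    """
    return 10.0 * np.log10((post + eps) / (pre + eps))

# Toy 2x2 intensity images: the top-left pixel halves in intensity
# after the fire, giving a log-ratio of roughly -3 dB.
pre = np.array([[0.2, 0.2], [0.2, 0.2]])
post = np.array([[0.1, 0.2], [0.2, 0.2]])
change = log_ratio_db(pre, post)
```

In a workflow like the one described, such change images (per polarisation channel) would be stacked and fed to the segmentation models; the values here are purely illustrative.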

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
L-band, C-band, Synthetic aperture radar, Image segmentation, Deep learning, Wildfires, Transformers, Sentinel-1, Forestry, Backscatter, Burned area mapping, Phased Array L-band Synthetic Aperture Radar-2 (PALSAR), remote sensing, wildfire monitoring
National Category
Earth Observation; Physical Geography
Identifiers
urn:nbn:se:kth:diva-368400 (URN); 10.1109/JSTARS.2025.3560287 (DOI); 001508110200001 (ISI); 2-s2.0-105002474080 (Scopus ID)
Note

QC 20250818

Available from: 2025-08-18. Created: 2025-08-18. Last updated: 2025-08-18. Bibliographically approved.
Hafner, S., Fang, H., Azizpour, H. & Ban, Y. (2025). Continuous Urban Change Detection from Satellite Image Time Series with Temporal Feature Refinement and Multi-Task Integration. IEEE Transactions on Geoscience and Remote Sensing, 63, 1-18
2025 (English). In: IEEE Transactions on Geoscience and Remote Sensing, ISSN 0196-2892, E-ISSN 1558-0644, Vol. 63, p. 1-18. Article in journal (Refereed). Published.
Abstract [en]

Urbanization advances at unprecedented rates, leading to negative environmental and societal impacts. Remote sensing can help mitigate these effects by supporting sustainable development strategies with accurate information on urban growth. Deep learning-based methods have achieved promising urban change detection results from optical satellite image pairs using convolutional neural networks (ConvNets), transformers, and a multi-task learning setup. However, bi-temporal methods are limited for continuous urban change detection, i.e., the detection of changes in consecutive image pairs of satellite image time series (SITS), as they fail to fully exploit multi-temporal data (> 2 images). Existing multi-temporal change detection methods, on the other hand, collapse the temporal dimension, restricting their ability to capture continuous urban changes. Additionally, multi-task learning methods lack integration approaches that combine change and segmentation outputs. To address these challenges, we propose a continuous urban change detection framework incorporating two key modules. The temporal feature refinement (TFR) module employs self-attention to improve ConvNet-based multi-temporal building representations. The temporal dimension is preserved in the TFR module, enabling the detection of continuous changes. The multi-task integration (MTI) module utilizes Markov networks to find an optimal building map time series based on segmentation and dense change outputs. The proposed framework effectively identifies urban changes based on high-resolution SITS acquired by the PlanetScope constellation (F1 score 0.551), Gaofen-2 (F1 score 0.440), and WorldView-2 (F1 score 0.543). Moreover, our experiments on three challenging datasets demonstrate the effectiveness of the proposed framework compared to bi-temporal and multi-temporal urban change detection and segmentation methods.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Earth observation, Multi-task learning, Multi-temporal, Remote sensing, Transformers
National Category
Earth Observation
Identifiers
urn:nbn:se:kth:diva-366565 (URN); 10.1109/TGRS.2025.3578866 (DOI); 001512531900009 (ISI); 2-s2.0-105007921391 (Scopus ID)
Note

QC 20250710

Available from: 2025-07-10. Created: 2025-07-10. Last updated: 2025-11-03. Bibliographically approved.
Hafner, S., Gerard, S., Sullivan, J. & Ban, Y. (2025). DisasterAdaptiveNet: A robust network for multi-hazard building damage detection from very-high-resolution satellite imagery. International Journal of Applied Earth Observation and Geoinformation, 143, Article ID 104756.
2025 (English). In: International Journal of Applied Earth Observation and Geoinformation, ISSN 1569-8432, E-ISSN 1872-826X, Vol. 143, article id 104756. Article in journal (Refereed). Published.
Abstract [en]

Earth observation satellites play a crucial role in disaster response and management, offering timely and large-scale data for damage assessment. Recent studies have demonstrated the potential of deep learning techniques for automated building damage detection from satellite imagery, often based on the xBD dataset. This high-quality dataset features bi-temporal very-high-resolution image pairs of several disaster events. Notably, several studies have proposed new network architectures and demonstrated their improved performance on xBD. Although such highly engineered model-centric approaches achieve promising results on the original dataset split of xBD, we show that they underperform on a new event-based split, which evaluates them on unseen events. To reduce this generalization gap, we propose to follow a data-centric approach. For this, we first derive a simplified baseline method from the winning solution of the xView2 competition, with greatly reduced complexity. With a simple adjustment to this baseline method, we incorporate readily available disaster-type information, allowing it to account for disaster-specific damage characteristics. We evaluate the resulting disaster-adaptive model on the event-based split of xBD and demonstrate its improved ability to generalize to unseen events compared to several competing methods. These results highlight the potential of our data-centric approach for practical and robust building damage assessment in real-world disaster scenarios. Code including the strong baseline model is available at: https://github.com/SebastianHafner/DisasterAdaptiveNet.

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Deep learning, Earth observation, Model conditioning, Multi-task learning
National Category
Earth Observation
Identifiers
urn:nbn:se:kth:diva-369922 (URN); 10.1016/j.jag.2025.104756 (DOI); 2-s2.0-105013632237 (Scopus ID)
Note

Not duplicate with DiVA 1915661

QC 20250918

Available from: 2025-09-18. Created: 2025-09-18. Last updated: 2025-09-18. Bibliographically approved.
Yadav, R., Nascetti, A. & Ban, Y. (2025). How high are we? Large-scale building height estimation at 10 m using Sentinel-1 SAR and Sentinel-2 MSI time series. Remote Sensing of Environment, 318, Article ID 114556.
2025 (English). In: Remote Sensing of Environment, ISSN 0034-4257, E-ISSN 1879-0704, Vol. 318, article id 114556. Article in journal (Refereed). Published.
Abstract [en]

Accurate building height estimation is essential to support urbanization monitoring, environmental impact analysis and sustainable urban planning. However, conducting large-scale building height estimation remains a significant challenge. While deep learning (DL) has proven effective for large-scale mapping tasks, there is a lack of advanced DL models specifically tailored for height estimation, particularly when using open-source Earth observation data. In this study, we propose T-SwinUNet, an advanced DL model for large-scale building height estimation leveraging Sentinel-1 SAR and Sentinel-2 multispectral time series. The T-SwinUNet model contains a feature extractor with local/global feature comprehension capabilities, a temporal attention module to learn the correlation between constant and variable features of building objects over time, and an efficient multitask decoder to predict building height at 10 m spatial resolution. The model is trained and evaluated on data from the Netherlands, Switzerland, Estonia, and Germany, and its generalizability is evaluated on an out-of-distribution (OOD) test set of ten additional cities from other European countries. Our study incorporates extensive model evaluations, ablation experiments, and comparisons with established models. T-SwinUNet predicts building height with a Root Mean Square Error (RMSE) of 1.89 m, outperforming state-of-the-art models at 10 m spatial resolution. Its strong generalization to the OOD test set (RMSE of 3.2 m) underscores its potential for low-cost building height estimation across Europe, with future scalability to other regions. Furthermore, the assessment at 100 m resolution reveals that T-SwinUNet (0.29 m RMSE, 0.75 R²) also outperforms the global building height product GHSL-Built-H R2023A (0.56 m RMSE, 0.37 R²). Our implementation is available at: https://github.com/RituYadav92/Building-Height-Estimation.
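The headline numbers above are RMSE and R² against reference heights. As a small sketch of how these two metrics are computed (standard definitions, not code from the paper; the toy heights are invented):

```python
import numpy as np

def rmse(y_true, y_pred) -> float:
    """Root Mean Square Error, in the units of the target (metres here)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2_score(y_true, y_pred) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Invented toy example: four reference building heights (m) vs. predictions.
true_h = np.array([10.0, 20.0, 30.0, 40.0])
pred_h = np.array([12.0, 18.0, 33.0, 39.0])
```

Lower RMSE and higher R² are better; a perfect prediction gives RMSE 0 and R² 1.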

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Building height estimation, Multitask learning, Out-of-distribution generalization, Regression, Sentinel, Time series
National Category
Earth Observation
Identifiers
urn:nbn:se:kth:diva-358166 (URN); 10.1016/j.rse.2024.114556 (DOI); 001413894800001 (ISI); 2-s2.0-85212150378 (Scopus ID)
Note

QC 20250217

Available from: 2025-01-07. Created: 2025-01-07. Last updated: 2025-10-16. Bibliographically approved.
Zhao, Y. & Ban, Y. (2025). Near real-time wildfire progression mapping with VIIRS time-series and autoregressive SwinUNETR. International Journal of Applied Earth Observation and Geoinformation, 136, Article ID 104358.
2025 (English). In: International Journal of Applied Earth Observation and Geoinformation, ISSN 1569-8432, E-ISSN 1872-826X, Vol. 136, article id 104358. Article in journal (Refereed). Published.
Abstract [en]

Wildfire management and response require frequent and accurate burned area mapping. Mapping daily burned areas with satisfactory accuracy remains challenging, owing to missed detections when active fire points are simply accumulated, the low temporal resolution of sensors onboard satellites such as Sentinel-2 and Landsat-8/9, and the monthly cadence of the burned area product generated from Visible Infrared Imaging Radiometer Suite (VIIRS) data. ConvNet-based and Transformer-based deep learning models are widely applied to mid-spatial-resolution satellite images, but these models perform poorly on low-spatial-resolution images. Cloud interference is another major issue for continuous burned area monitoring. To improve detection accuracy and reduce cloud interference by combining temporal and spatial information, we propose an autoregressive spatial-temporal model, AR-SwinUNETR, to segment daily burned areas from VIIRS time series. AR-SwinUNETR processes the image time series as a 3D tensor while respecting the temporal ordering of images by applying an autoregressive mask in the Swin-Transformer block. The model is trained on 2017-2020 wildfire events in the US and validated on 2021 US wildfire events. Quantitatively, AR-SwinUNETR achieves a higher F1 score than baseline deep learning models. On a test set of eight long-duration 2023 wildfires in Canada, it attains a better F1 score (0.757 vs. 0.715) and IoU score (0.607 vs. 0.557) than the baseline of accumulated VIIRS active fire hotspots, evaluated against labels generated from Sentinel-2 images. In conclusion, the proposed AR-SwinUNETR with VIIRS image time series can efficiently detect daily burned areas with better accuracy than direct mapping from accumulated VIIRS active fire hotspots, while retaining a high (daily) temporal resolution compared to other burned area mapping products. The qualitative results also show improvements in detecting burned areas under cloudy conditions.
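The key architectural idea above is an autoregressive (causal) attention mask over the time dimension, so the prediction for a given day uses only acquisitions up to that day. A minimal, hypothetical numpy illustration of such a mask (not the paper's Swin-Transformer implementation; all names invented):

```python
import numpy as np

def autoregressive_mask(t: int) -> np.ndarray:
    """Boolean (t, t) mask: time step i may attend only to steps j <= i,
    so no prediction ever sees future acquisitions."""
    return np.tril(np.ones((t, t), dtype=bool))

def masked_attention_weights(scores: np.ndarray) -> np.ndarray:
    """Apply the causal mask to a (t, t) attention-score matrix,
    then softmax each row over the allowed (past) positions."""
    t = scores.shape[0]
    masked = np.where(autoregressive_mask(t), scores, -np.inf)
    e = np.exp(masked - masked.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

With uniform scores over four days, day 0 attends only to itself, while day 3 spreads its attention over all four acquisitions.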

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Burned area mapping, Disaster response, Image segmentation, Remote sensing, Swin-Transformer, VIIRS, Wildfire monitoring
National Category
Earth Observation
Identifiers
urn:nbn:se:kth:diva-358890 (URN); 10.1016/j.jag.2025.104358 (DOI); 001416930000001 (ISI); 2-s2.0-85214833274 (Scopus ID)
Note

Not duplicate with DiVA 1913766

QC 20250123

Available from: 2025-01-23. Created: 2025-01-23. Last updated: 2025-02-26. Bibliographically approved.
Zhang, Q., Liu, L., Yang, X., Sun, Z. & Ban, Y. (2025). Nighttime light development index: a new evaluation method for China’s construction land utilization level. Humanities and Social Sciences Communications, 12(1), Article ID 369.
2025 (English). In: Humanities and Social Sciences Communications, E-ISSN 2662-9992, Vol. 12, no 1, article id 369. Article in journal (Refereed). Published.
Abstract [en]

Measuring construction land utilization levels is essential for the ongoing redevelopment of inefficient construction land in China. However, an efficient and spatially explicit evaluation method for construction land utilization remains underexamined and a great challenge. In this study, we utilized nighttime light data and spatial population data to construct the Nighttime Light Development Index (NLDI), which was employed to assess the utilization level of centralized and decentralized construction land within municipal units. The findings revealed a downward trend in average NLDI for both types of construction land over the past decade, demonstrating a significant improvement in the spatial alignment between population distribution and light intensity. Notably, the development level of partially decentralized construction land showed substantial enhancement during 2010-2020. Furthermore, the relationship between construction land development density (CLDD) and NLDI varied across regions and did not exhibit a significant correlation, which implies that NLDI can provide substantial additional land use information. The research findings offer a scientific reference for promoting efficient construction land utilization and fostering high-quality regional development. They also provide a new research perspective and an easy-to-use approach for assessing regional development levels, one that can be extended to other countries and regions.

Place, publisher, year, edition, pages
Springer Nature, 2025
National Category
Construction Management
Identifiers
urn:nbn:se:kth:diva-362014 (URN); 10.1057/s41599-025-04626-0 (DOI); 001444372900001 (ISI); 2-s2.0-105000178100 (Scopus ID)
Note

QC 20250428

Available from: 2025-04-03. Created: 2025-04-03. Last updated: 2025-04-28. Bibliographically approved.
Zhao, Y. & Ban, Y. (2025). RADARSAT constellation mission compact polarisation SAR data for burned area mapping with deep learning. International Journal of Applied Earth Observation and Geoinformation, 141, Article ID 104615.
2025 (English). In: International Journal of Applied Earth Observation and Geoinformation, ISSN 1569-8432, E-ISSN 1872-826X, Vol. 141, article id 104615. Article in journal (Refereed). Published.
Abstract [en]

Monitoring wildfires has become increasingly critical due to the sharp rise in wildfire incidents in recent years. Optical satellites like Sentinel-2 and Landsat are extensively utilised for mapping burned areas. However, the effectiveness of optical sensors is compromised by clouds and smoke, which obstruct the detection of burned areas. Thus, satellites equipped with Synthetic Aperture Radar (SAR), such as dual-polarisation Sentinel-1 and quad-polarisation RADARSAT-1/-2 C-band SAR, which can penetrate clouds and smoke, have been investigated for mapping burned areas. However, there is limited research on using compact polarisation (compact-pol) C-band RADARSAT Constellation Mission (RCM) SAR data for this purpose. This study aims to investigate the capacity of compact polarisation RCM data for burned area mapping through deep learning. The compact-pol m-χ decomposition and the Compact-pol Radar Vegetation Index (CpRVI) are derived from the RCM Multi-Look Complex product. A deep-learning-based processing pipeline incorporating ConvNet-based and Transformer-based models is applied for burned area mapping, with three different input settings: only log-ratio dual-polarisation intensity images, only the compact-pol decomposition plus CpRVI, and all three data sources combined. The training dataset comprises 46,295 patches generated from 12 major wildfire events in Canada. The test dataset includes seven wildfire events from the 2023 and 2024 Canadian wildfire seasons in Alberta, British Columbia, Quebec and the Northwest Territories. The results demonstrate that compact-pol m-χ decomposition and CpRVI images significantly complement log-ratio images for burned area mapping. The best-performing Transformer-based model, UNETR, trained with log-ratio, m-χ decomposition, and CpRVI data, achieved an F1 score of 0.718 and an IoU score of 0.565, a notable improvement over the same model trained using only log-ratio images (F1 score: 0.684, IoU score: 0.557). This is the first study to demonstrate that RCM C-band SAR data and its derived features are effective for burned area mapping.
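The F1 and IoU scores reported above are the standard binary-segmentation metrics. A small illustrative sketch (standard definitions, not code from the paper; the toy masks are invented):

```python
import numpy as np

def f1_iou(pred, target):
    """F1 (Dice) and IoU for binary masks of burned / unburned pixels.

    tp: pixels both maps mark burned; fp: predicted burned but not in the
    reference; fn: reference burned but missed by the prediction.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    f1 = 2.0 * tp / (2.0 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return float(f1), float(iou)

# Toy example: 4 pixels, one true positive, one false alarm, one miss.
f1, iou = f1_iou([1, 1, 0, 0], [1, 0, 1, 0])
```

Note that F1 is always at least as large as IoU for the same prediction, which matches the paired scores quoted in these abstracts.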

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Burned area mapping, Compact polarisation, Decomposition, Deep learning, Radar vegetation index, RADARSAT constellation mission, SAR
National Category
Earth Observation; Signal Processing
Identifiers
urn:nbn:se:kth:diva-366003 (URN); 10.1016/j.jag.2025.104615 (DOI); 001515278300001 (ISI); 2-s2.0-105007558441 (Scopus ID)
Note

Not duplicate with DiVA 1913771

QC 20250704

Available from: 2025-07-04. Created: 2025-07-04. Last updated: 2025-09-22. Bibliographically approved.
Shibli, A., Nascetti, A. & Ban, Y. (2025). Very High- to High-Resolution Imagery Transferability for Building Damage Detection Using Generative AI. In: 2025 Joint Urban Remote Sensing Event, JURSE 2025. Paper presented at 2025 Joint Urban Remote Sensing Event, JURSE 2025, Tunis, Tunisia, May 5 2025 - May 7 2025. Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: 2025 Joint Urban Remote Sensing Event, JURSE 2025, Institute of Electrical and Electronics Engineers (IEEE), 2025. Conference paper, Published paper (Refereed).
Abstract [en]

Wildfires are a growing global concern, causing significant damage to urban infrastructure each year. This study presents a novel approach for building damage assessment using generative artificial intelligence, focusing on the transferability of high-resolution satellite imagery models to lower-resolution datasets. Our diffusion-based model is trained on the xView2 Wildfire Building Damage Benchmark, a dataset specifically designed for wildfire-induced building damage detection. The model is further evaluated on real-world wildfire incidents in Lahaina, Hawaii, and Athens, Greece, demonstrating its effectiveness in damage localization across varying spatial resolutions. With competitive performance on benchmark datasets and practical utility in real-world scenarios, this work highlights the potential of generative AI for geospatial disaster assessment and urban resilience.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
deep learning, diffusion models, generative artificial intelligence, geospatial data, machine learning, Natural disasters, satellite imagery, wildfire
National Category
Climate Science; Multidisciplinary Geosciences; Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-369406 (URN); 10.1109/JURSE60372.2025.11076064 (DOI); 2-s2.0-105012177567 (Scopus ID)
Conference
2025 Joint Urban Remote Sensing Event, JURSE 2025, Tunis, Tunisia, May 5 2025 - May 7 2025
Note

Part of ISBN 9798350371833

QC 20250922

Available from: 2025-09-22. Created: 2025-09-22. Last updated: 2025-09-22. Bibliographically approved.
Zhang, P., Hu, X., Ban, Y., Nascetti, A. & Gong, M. (2024). Assessing Sentinel-2, Sentinel-1, and ALOS-2 PALSAR-2 Data for Large-Scale Wildfire-Burned Area Mapping: Insights from the 2017–2019 Canada Wildfires. Remote Sensing, 16(3), Article ID 556.
2024 (English). In: Remote Sensing, E-ISSN 2072-4292, Vol. 16, no 3, article id 556. Article in journal (Refereed). Published.
Abstract [en]

Wildfires play a crucial role in the transformation of forest ecosystems and exert a significant influence on the global climate over geological timescales. Recent shifts in climate patterns and intensified human–forest interactions have led to an increase in the incidence of wildfires. These fires are characterized by their extensive coverage, higher frequency, and prolonged duration, rendering them increasingly destructive. To mitigate the impact of wildfires on climate change, ecosystems, and biodiversity, it is imperative to conduct systematic monitoring of wildfire progression and evaluate their environmental repercussions on a global scale. Satellite remote sensing is a powerful tool, offering precise and timely data on terrestrial changes, and has been extensively utilized for wildfire identification, tracking, and impact assessment at both local and regional levels. The Canada Centre for Mapping and Earth Observation, in collaboration with the Canadian Forest Service, has developed a comprehensive National Burned Area Composite (NBAC). This composite serves as a benchmark for curating a bi-temporal multi-source satellite image dataset for change detection, compiled from the archives of Sentinel-2, Sentinel-1, and ALOS-2 PALSAR-2. To our knowledge, this dataset is the first large-scale, multi-source, and multi-frequency satellite image dataset with 20 m spatial resolution for wildfire mapping, monitoring, and evaluation. It harbors significant potential for enhancing wildfire management strategies, building upon the profound advancements in deep learning that have contributed to the field of remote sensing. Based on our curated dataset, which encompasses major wildfire events in Canada, we conducted a systematic evaluation of the capability of multi-source satellite earth observation data in identifying wildfire-burned areas using statistical analysis and deep learning. Our analysis compares the difference between burned and unburned areas using post-event observations alone or bi-temporal (pre- and post-event) observations across diverse land cover types. We demonstrate that optical satellite data yield higher separability than C-band and L-band Synthetic Aperture Radar (SAR), which exhibit considerable overlap between burned and unburned sample distributions, as evidenced by SAR-based boxplots. With U-Net, we further explore how different input channels influence detection accuracy. Our findings reveal that deep neural networks enhance SAR's performance in mapping burned areas. Notably, C-band SAR shows a higher dependency on pre-event data than L-band SAR for effective detection. A comparative analysis of U-Net and its variants indicates that U-Net works best with single-sensor data, while the late fusion architecture marginally surpasses others in the fusion of optical and SAR data. Accuracy across sensors is highest in closed forests, with sequentially lower performance in open forests, shrubs, and grasslands. Future work will extend the data in both spatial and temporal dimensions to encompass varied vegetation types and climate zones, furthering our understanding of multi-source and multi-frequency satellite remote sensing capabilities in wildfire detection and monitoring.

Place, publisher, year, edition, pages
MDPI AG, 2024
Keywords
ALOS-2 PALSAR-2, burned area mapping, change detection, data fusion, dataset, deep learning, multi-frequency, multi-source, SAR, Sentinel-1, Sentinel-2, siamese networks, wildfire
National Category
Earth Observation
Identifiers
urn:nbn:se:kth:diva-343666 (URN); 10.3390/rs16030556 (DOI); 001160514200001 (ISI); 2-s2.0-85184671536 (Scopus ID)
Note

QC 20240222

Available from: 2024-02-22. Created: 2024-02-22. Last updated: 2025-02-10. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-1369-3216
