Enhanced crop classification through integrated optical and SAR data: a deep learning approach for multi-source image fusion

Liu, N., Zhao, Q., Williams, R. and Barrett, B. (2023) Enhanced crop classification through integrated optical and SAR data: a deep learning approach for multi-source image fusion. International Journal of Remote Sensing, (doi: 10.1080/01431161.2023.2232552) (Early Online Publication)

301477.pdf - Published Version (10MB)
Available under License Creative Commons Attribution.

Abstract

Agricultural crop mapping has advanced over recent decades due to improved approaches and the increased availability of image datasets at various spatial and temporal resolutions. Given the spatial and temporal dynamics of different crops over a growing season, multi-temporal classification frameworks are well suited to mapping crops at large scales. To address the challenges posed by imbalanced class distributions, our approach combines the strengths of different deep learning models in an ensemble learning framework, enabling more accurate and robust classification by capitalizing on their complementary capabilities. This research aims to enhance the classification of maize, soybean, and wheat in Bei’an County, Northeast China, by developing a novel deep learning architecture that combines a three-dimensional convolutional neural network (3D-CNN) with a variant of convolutional recurrent neural networks (ConvRNN). The proposed method integrates multi-temporal Sentinel-1 polarimetric features with Sentinel-2 surface reflectance data for multi-source fusion and achieves an overall accuracy of 91.7%, a Kappa coefficient of 85.7%, and F1 scores of 93.7%, 92.2%, and 90.9% for maize, soybean, and wheat, respectively. Our proposed model is also compared with alternative data augmentation techniques and maintains the highest mean F1 score (87.7%). The best-performing model was weakly supervised with ten per cent of the ground truth data collected in Bei’an in 2017 and was used to produce an annual crop map to assess the model’s generalizability. The learning reliability of the proposed method is interpreted through visualization of the model’s soft outputs and saliency maps.
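For readers unfamiliar with the evaluation metrics reported in the abstract (overall accuracy, Kappa coefficient, and per-class F1 score), the sketch below shows how each is derived from a classification confusion matrix. The 3×3 matrix here is invented dummy data for illustration only, not the study's actual results.

```python
# Minimal sketch: overall accuracy, Cohen's kappa, and per-class F1
# computed from a confusion matrix (rows = true class, cols = predicted).
# The matrix values are illustrative, not from the paper.

def classification_metrics(cm):
    n = sum(sum(row) for row in cm)          # total samples
    k = len(cm)                              # number of classes
    # Overall accuracy: fraction of samples on the diagonal.
    oa = sum(cm[i][i] for i in range(k)) / n
    # Expected chance agreement from row/column marginals (for kappa).
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm) for i in range(k)) / (n * n)
    kappa = (oa - pe) / (1 - pe)
    # Per-class F1: harmonic mean of precision and recall.
    f1 = []
    for i in range(k):
        tp = cm[i][i]
        precision = tp / sum(r[i] for r in cm)   # tp / predicted-as-i
        recall = tp / sum(cm[i])                 # tp / actually-i
        f1.append(2 * precision * recall / (precision + recall))
    return oa, kappa, f1

# Hypothetical counts for three classes (e.g. maize, soybean, wheat).
cm = [[90, 5, 5],
      [4, 92, 4],
      [6, 3, 91]]
oa, kappa, f1 = classification_metrics(cm)
# With this matrix: OA = 0.91, kappa = 0.865, F1 = [0.90, 0.92, 0.91]
```

Kappa discounts the agreement expected by chance, which is why it is lower than overall accuracy when class marginals are balanced; per-class F1 exposes performance differences that a single overall accuracy figure can hide under class imbalance.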

Item Type: Articles
Status: Early Online Publication
Refereed: Yes
Glasgow Author(s) Enlighten ID: Williams, Professor Richard and Zhao, Dr Qunshan and Barrett, Dr Brian and Liu, Niantang
Authors: Liu, N., Zhao, Q., Williams, R., and Barrett, B.
College/School: College of Science and Engineering > School of Geographical and Earth Sciences
College of Social Sciences > School of Social and Political Sciences > Urban Studies
Journal Name: International Journal of Remote Sensing
Publisher: Taylor & Francis
ISSN: 0143-1161
ISSN (Online): 1366-5901
Published Online: 17 July 2023
Copyright Holders: Copyright © 2023 The Authors
First Published: First published in International Journal of Remote Sensing 2023
Publisher Policy: Reproduced under a Creative Commons License
