Poster
in
Workshop: Machine Learning for Remote Sensing (ML4RS)

On the Relevance of SAR and Optical Modalities in Deep Learning based Data Fusion

German Aerospace Center (DLR) · Nina Maria Gottschling


Abstract:

In the preparation of SAR-optical fusion data sets, cloudy samples are often removed from the optical component if they contain no information for the prediction task. Although optical data carries more, and more easily extractable, information and SAR data is noisier, SAR is less affected by changes in location or illumination and is not blinded by cloud cover. By removing clouds from the data set, the network is deprived during training of both the SAR features and the often realistic situation of cloud coverage. In this work we show, on publicly available pre-trained networks and two remote sensing data sets, that the effort of filtering and correcting clouds may not be needed. In contrast, the results of self-trained ResNet18 networks indicate that keeping cloudy examples in the data set may lead to a more informative feature extraction from the SAR modality. This yields networks that use the SAR modality more effectively, increasing its relevance and improving accuracy not only on cloudy test samples but potentially also on clear test data.\footnote{We plan to publish our code and models.}
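The cloud-filtering preprocessing step that the abstract argues may be unnecessary can be sketched as follows. This is a hypothetical helper, not the authors' code; the mask convention (True = cloudy pixel) and the 10% threshold are illustrative assumptions:

```python
import numpy as np

def cloud_fraction(cloud_mask: np.ndarray) -> float:
    """Fraction of pixels flagged as cloudy in a boolean mask."""
    return float(cloud_mask.mean())

def filter_cloudy_samples(optical, sar, cloud_masks, max_cloud_fraction=0.1):
    """Keep only co-registered SAR-optical pairs whose optical image
    is mostly cloud-free; discarded pairs also lose their SAR images,
    which is the information loss the abstract highlights."""
    keep = [i for i, mask in enumerate(cloud_masks)
            if cloud_fraction(mask) <= max_cloud_fraction]
    return ([optical[i] for i in keep],
            [sar[i] for i in keep])
```

Under the abstract's findings, skipping this filter (i.e., training on the unfiltered pairs) would expose the network to cloudy optical inputs and may push it to extract more informative features from the SAR branch.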
