Poster in Workshop: Machine Learning for Remote Sensing (ML4RS)
Super-resolution of Sentinel-1 Imagery Using an Enhanced Attention Network and Real Ground Truth Data
Christian Ayala · Juan Francisco Amieva · Mikel Galar
Active imaging systems, particularly Synthetic Aperture Radar (SAR), offer notable advantages such as the ability to operate in diverse weather conditions and to provide day-and-night observations of Earth's surface. These attributes are especially valuable when monitoring regions consistently obscured by clouds, as in Northern Europe. One of the most recognized SAR constellations is Sentinel-1 (S1), known for providing imagery freely to the community. Despite this accessibility, challenges arise from the limited spatial resolution of S1 and the presence of speckle noise, which make the data difficult to interpret. Although several commercial SAR satellites offer on-demand high-resolution data, their high costs hinder their adoption among remote sensing experts. Motivated by these advantages and limitations, this paper introduces a novel deep learning-based methodology aimed at simultaneously reducing speckle noise and enhancing the spatial resolution of S1 data. In contrast to previous works that rely on a high-resolution satellite (typically TerraSAR-X) as ground truth, we propose using the same satellite in another operational mode as ground truth. Accordingly, the proposed method enhances the spatial resolution of S1 Interferometric Wide Swath mode products from 10 to 5 m GSD by leveraging S1 Stripmap mode as the ground truth for training the model. As a result, the super-resolved images double the spatial resolution of the input, closing the gap between S1 and commercial SAR satellites.
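The abstract does not detail the network's upsampling operator, but a common choice in 2× super-resolution models is sub-pixel convolution ("pixel shuffle"), which rearranges learned channels into spatial positions. A minimal NumPy sketch of that rearrangement (illustrative only, not the authors' implementation) shows how a feature map on a 10 m grid can be turned into an output on a 5 m grid with r = 2:

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r^2, H, W) feature map into a (C, H*r, W*r) image.

    Hypothetical helper for illustration: with r = 2, each group of 4
    channels becomes a 2x2 block of output pixels, doubling the grid
    resolution (e.g. 10 m GSD -> 5 m GSD).
    """
    c, h, w = x.shape
    assert c % (r * r) == 0, "channel count must be divisible by r^2"
    out_c = c // (r * r)
    # Split the channel axis into (out_c, r, r) and interleave the two
    # r-sized factors with the spatial axes.
    x = x.reshape(out_c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (out_c, h, r, w, r)
    return x.reshape(out_c, h * r, w * r)

# 4 channels at 3x3 pixels -> 1 channel at 6x6 pixels (a 2x finer grid)
features = np.arange(4 * 3 * 3, dtype=np.float32).reshape(4, 3, 3)
sr = pixel_shuffle(features, r=2)
print(sr.shape)  # (1, 6, 6)
```

In practice this operation sits at the end of a convolutional backbone (here an attention-enhanced one), so the network learns the r² channel groups that populate each high-resolution block.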