

Poster in Workshop: Machine Learning for Remote Sensing (ML4RS)

Learned Embedding Fields for Multi-Source, Multi-Temporal Earth Observation Imagery

Christopher Brown · Michal Kazmierski · William Rucklidge · Valerie Pasquarella · Evan Shelhamer


Abstract:

Earth observation data is plentiful, but the ease of analysis varies across data sources and products due to differences in packaging, processing, and the nuances of inputs and tasks. Analysis-ready datasets seek to enable broader and more convenient use of Earth observations by offering effective processing of the upstream data to facilitate analysis for downstream use cases. We propose embedding fields, our novel geo-spatial-temporal representation, as a learned form of analysis-ready dataset. Our representation is learned by optimizing a deep network on multi-source (multiple sensor) and multi-temporal (multiple time step) data without annotations. By learning our embedding fields we are able to incorporate more inputs for accuracy while compressing the size of the output for efficiency. To apply our representation, we compute it only once for the desired spatial and temporal scopes, amortizing the cost of analyses, and then index it by latitude, longitude, and year. We compare our learned embedding fields with MOSAIKS, a designed form of analysis-ready dataset that also seeks to serve its representation as data. Because annotation scarcity is common in practice, we evaluate our embedding fields and MOSAIKS on multiple tasks in few-shot regimes. Our results indicate that embedding fields improve accuracy across all tasks considered while reducing inference computation across varied geographies (the United States, Indonesia, Malaysia) and annotation types (land use, tree species distribution, and ecological regions).
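To make the compute-once-then-index usage pattern concrete, here is a minimal sketch, not the authors' implementation: a stand-in array plays the role of a precomputed embedding field, a lookup indexes it by latitude, longitude, and year, and a ridge-regularized linear probe is fit on a handful of labeled points, mirroring the few-shot regime. All names and values (EMBED_DIM, the grid resolution, the origin, the sample points) are illustrative assumptions.

```python
# Sketch of the usage pattern described in the abstract (assumptions labeled).
import numpy as np

EMBED_DIM = 64               # assumed embedding dimensionality
RES = 0.01                   # assumed grid resolution in degrees
LAT0, LON0 = 24.0, -125.0    # assumed southwest corner of the field
YEARS = (2018, 2019, 2020)   # assumed temporal coverage

rng = np.random.default_rng(0)
# Stand-in for a field computed once: [year, lat_idx, lon_idx, channel].
field = rng.standard_normal((len(YEARS), 200, 300, EMBED_DIM)).astype(np.float32)

def lookup(lat: float, lon: float, year: int) -> np.ndarray:
    """Index the precomputed embedding field by latitude, longitude, and year."""
    t = YEARS.index(year)
    i = int((lat - LAT0) / RES)
    j = int((lon - LON0) / RES)
    return field[t, i, j]

# Few-shot probe: a handful of annotated points, embeddings as features.
points = [(25.3, -124.1, 2019, 0), (24.8, -123.5, 2019, 1),
          (25.9, -122.7, 2020, 1), (24.2, -124.9, 2020, 0)]
X = np.stack([lookup(lat, lon, yr) for lat, lon, yr, _ in points])
y = np.array([label for *_, label in points])

# Ridge-regularized least squares as a simple linear probe.
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(EMBED_DIM), X.T @ y)
pred = (X @ w > 0.5).astype(int)
print("train accuracy:", (pred == y).mean())
```

Because the field is computed once up front, each downstream task pays only for the cheap lookup and a small model fit, which is how the amortization claim in the abstract cashes out in practice.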
