Workshop: ICLR 2023 Workshop on Machine Learning for Remote Sensing

Aerial View Localization with Reinforcement Learning: Towards Emulating Search-and-Rescue

Aleksis Pirinen · Anton Samuelsson · John Backsund · Karl Åström


Climate-induced disasters are on the rise and will continue to be, so search-and-rescue (SAR) operations, in which the task is to localize and assist one or several missing people, are becoming increasingly relevant. In many cases a rough location is known, and a UAV can be deployed to explore a confined area in order to precisely localize the people. Due to time and battery constraints, it is often critical that localization is performed efficiently. We abstract this type of problem in a framework that emulates a SAR-like setup without requiring access to actual UAVs. In this framework, an agent operates on top of an aerial image (a proxy for the search area) and must localize a goal that is described through visual cues. To further mimic the situation on board a UAV, the agent cannot observe the search area in its entirety, not even at low resolution, and must therefore operate based on partial glimpses alone. To tackle this task, we propose AiRLoc, a reinforcement learning (RL) model that decouples exploration (searching for distant goals) from exploitation (localizing nearby goals). Extensive evaluations show that AiRLoc outperforms various baselines as well as humans, and that it generalizes across datasets, e.g. to disaster-hit areas without seeing a single disaster scenario during training. Code and models are available at
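The partial-observation setup described above can be illustrated with a minimal toy environment: an image is divided into a grid of patches, the agent occupies one cell at a time, observes only that patch (plus the goal patch as a visual cue), and moves between adjacent cells until it reaches the goal or runs out of steps. This is a hypothetical sketch for intuition only, not the authors' AiRLoc implementation; all class and method names are invented here.

```python
import numpy as np

class GlimpseLocalizationEnv:
    """Toy grid environment mimicking the abstract's setup (illustrative
    sketch, not the actual AiRLoc code). The "search area" is an image split
    into a grid_size x grid_size grid of patches; the agent observes only
    the patch of its current cell, never the full image."""

    MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def __init__(self, image, grid_size=5, max_steps=50, seed=0):
        self.rng = np.random.default_rng(seed)
        self.grid_size = grid_size
        self.max_steps = max_steps
        h, w = image.shape[:2]
        self.ph, self.pw = h // grid_size, w // grid_size  # patch height/width
        self.image = image

    def _patch(self, cell):
        # Crop the image patch corresponding to a grid cell.
        r, c = cell
        return self.image[r * self.ph:(r + 1) * self.ph,
                          c * self.pw:(c + 1) * self.pw]

    def reset(self):
        # Sample distinct start and goal cells (goal position is hidden;
        # the agent only sees the goal patch as a visual cue).
        cells = [(r, c) for r in range(self.grid_size)
                        for c in range(self.grid_size)]
        i, j = self.rng.choice(len(cells), size=2, replace=False)
        self.pos, self.goal = cells[i], cells[j]
        self.steps = 0
        return self._patch(self.pos), self._patch(self.goal)

    def step(self, action):
        # Move to the adjacent cell, clipped at the grid boundary.
        dr, dc = self.MOVES[action]
        r = min(max(self.pos[0] + dr, 0), self.grid_size - 1)
        c = min(max(self.pos[1] + dc, 0), self.grid_size - 1)
        self.pos = (r, c)
        self.steps += 1
        found = self.pos == self.goal
        done = found or self.steps >= self.max_steps
        reward = 1.0 if found else 0.0
        return (self._patch(self.pos), self._patch(self.goal)), reward, done
```

An RL policy trained in such an environment must trade off exploring distant cells against homing in once nearby cues appear, which is exactly the exploration/exploitation split that AiRLoc decouples.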
