Poster
Near-Optimal Online Learning for Multi-Agent Submodular Coordination: Tight Approximation and Communication Efficiency
Qixin ZHANG · Zongqi Wan · Yu Yang · Li Shen · Dacheng Tao
Hall 3 + Hall 2B #383
Sat 26 Apr, midnight to 2:30 a.m. PDT
Abstract:
Coordinating multiple agents to collaboratively maximize submodular functions in unpredictable environments is a critical task with numerous applications in machine learning, robot planning and control. Existing approaches, such as the OSG algorithm, are often hindered by poor approximation guarantees and the rigid requirement of a fully connected communication graph. To address these challenges, we first present the MA-OSMA algorithm, which employs the multilinear extension to transform the discrete submodular maximization problem into a continuous optimization problem, thereby allowing us to relax the strict dependence on a complete graph through consensus techniques. Moreover, MA-OSMA leverages a novel surrogate gradient to avoid sub-optimal stationary points. To eliminate the computationally intensive projection operations in MA-OSMA, we also introduce the projection-free MA-OSEA algorithm, which effectively utilizes the KL divergence by mixing in a uniform distribution. Theoretically, we confirm that both algorithms achieve a regret bound of $\widetilde{O}\big(\sqrt{\tfrac{C_T T}{1-\beta}}\big)$ against a $\frac{1-e^{-c}}{c}$-approximation to the best comparator in hindsight, where $C_T$ is the deviation of the maximizer sequence, $\beta$ is the spectral gap of the network, and $c$ is the joint curvature of the submodular objectives. This result significantly improves on the $\frac{1}{1+c}$-approximation provided by the state-of-the-art OSG algorithm. Finally, we demonstrate the effectiveness of our proposed algorithms through simulation-based multi-target tracking.
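For context, the multilinear extension mentioned above is the standard continuous relaxation of a set function; a minimal sketch of its definition, using generic symbols $f$, $V$, and $\mathbf{x}$ that are not taken from the poster itself, is

$$
F(\mathbf{x}) \;=\; \mathbb{E}_{S \sim \mathbf{x}}\big[f(S)\big] \;=\; \sum_{S \subseteq V} f(S) \prod_{i \in S} x_i \prod_{j \in V \setminus S} (1 - x_j), \qquad \mathbf{x} \in [0,1]^{|V|},
$$

where each element $i \in V$ is included in the random set $S$ independently with probability $x_i$. Because $F$ is defined over the continuous cube, gradient-based and consensus-style updates become applicable, which is what underlies the relaxation of the complete-graph requirement described in the abstract.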