

Poster

Asynchronous Federated Reinforcement Learning with Policy Gradient Updates: Algorithm Design and Convergence Analysis

Guangchen (Eric) Lan · Dong-Jun Han · Abolfazl Hashemi · Vaneet Aggarwal · Christopher Brinton

Hall 3 + Hall 2B #408
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract: To improve the efficiency of reinforcement learning (RL), we propose a novel asynchronous federated reinforcement learning (FedRL) framework termed AFedPG, which constructs a global model through collaboration among $N$ agents using policy gradient (PG) updates. To address the challenge of lagged policies in asynchronous settings, we design a delay-adaptive lookahead technique *specifically for FedRL* that can effectively handle heterogeneous arrival times of policy gradients. We analyze the theoretical global convergence bound of AFedPG, and characterize the advantage of the proposed algorithm in terms of both the sample complexity and time complexity. Specifically, our AFedPG method achieves $\mathcal{O}(\frac{\epsilon^{-2.5}}{N})$ sample complexity for global convergence at each agent on average. Compared to the single-agent setting with $\mathcal{O}(\epsilon^{-2.5})$ sample complexity, it enjoys a linear speedup with respect to the number of agents. Moreover, compared to synchronous FedPG, AFedPG improves the time complexity from $\mathcal{O}(\frac{t_{\max}}{N})$ to $\mathcal{O}\big(\sum_{i=1}^{N} \frac{1}{t_{i}}\big)^{-1}$, where $t_{i}$ denotes the time consumption in each iteration at agent $i$, and $t_{\max}$ is the largest one. The latter complexity $\mathcal{O}\big(\sum_{i=1}^{N} \frac{1}{t_{i}}\big)^{-1}$ is always smaller than the former one, and this improvement becomes significant in large-scale federated settings with heterogeneous computing powers ($t_{\max} \gg t_{\min}$). Finally, we empirically verify the improved performance of AFedPG in four widely used MuJoCo environments with varying numbers of agents. We also demonstrate the advantages of AFedPG in various computing heterogeneity scenarios.
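The time-complexity comparison in the abstract can be sanity-checked numerically. Below is a minimal sketch (not code from the paper; the per-iteration times are hypothetical) that evaluates $t_{\max}/N$ and $\big(\sum_{i=1}^{N} 1/t_{i}\big)^{-1}$ for a heterogeneous set of agent speeds, illustrating why the asynchronous quantity is never larger than the synchronous one.

```python
"""
Illustrative sketch only: compares the two per-update time complexities quoted
in the abstract under hypothetical per-iteration times t_i. It does not implement
AFedPG itself.
"""

def sync_time_per_update(times):
    # Synchronous FedPG: each round waits for the slowest agent and aggregates
    # N gradients, so the average wall-clock time per applied gradient is t_max / N.
    return max(times) / len(times)

def async_time_per_update(times):
    # Asynchronous (AFedPG-style) updates: agents contribute at their own rates,
    # so gradients arrive at rate sum_i 1/t_i, i.e. (sum_i 1/t_i)^{-1} per update.
    return 1.0 / sum(1.0 / t for t in times)

if __name__ == "__main__":
    # Hypothetical per-iteration times (seconds) for N = 4 heterogeneous agents,
    # with t_max >> t_min.
    times = [1.0, 1.2, 2.0, 10.0]

    sync = sync_time_per_update(times)
    asyn = async_time_per_update(times)
    print(f"synchronous  t_max / N        = {sync:.3f} s per update")
    print(f"asynchronous (sum 1/t_i)^-1   = {asyn:.3f} s per update")

    # The inequality (sum_i 1/t_i)^{-1} <= t_max / N always holds, since
    # sum_i 1/t_i >= N / t_max; the gap widens as the stragglers get slower.
    assert asyn <= sync + 1e-12
```

With the hypothetical times above, the synchronous figure is dominated by the 10-second straggler, while the asynchronous figure reflects the combined throughput of all agents, which is the heterogeneity effect the abstract highlights.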
