

Poster

Don't flatten, tokenize! Unlocking the key to SoftMoE's efficacy in deep RL

Ghada Sokar · Johan S Obando Ceron · Aaron Courville · Hugo Larochelle · Pablo Samuel Castro

Hall 3 + Hall 2B #363
Sat 26 Apr, midnight – 2:30 a.m. PDT

Abstract:

The use of deep neural networks in reinforcement learning (RL) often suffers from performance degradation as model size increases. While soft mixtures of experts (SoftMoEs) have recently shown promise in mitigating this issue for online RL, the reasons behind their effectiveness remain largely unknown. In this work, we provide an in-depth analysis identifying the key factors driving this performance gain. We discover the surprising result that tokenizing the encoder output, rather than the use of multiple experts, is what lies behind the efficacy of SoftMoEs. Indeed, we demonstrate that even with a single, appropriately scaled expert, we are able to maintain the performance gains, largely thanks to tokenization.
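The flatten-versus-tokenize contrast at the heart of the abstract can be made concrete with a short sketch. What follows is a hypothetical PyTorch illustration, not the authors' implementation: the tensor shapes, the per-token MLP standing in for the single scaled expert, and its width are all assumptions.

```python
# Minimal sketch: two ways of feeding a convolutional encoder's
# output feature map to downstream layers.
import torch

batch, channels, height, width = 8, 64, 7, 7
feature_map = torch.randn(batch, channels, height, width)  # encoder output

# (a) Flatten: collapse all spatial structure into one long vector
#     per sample before any further processing.
flat = feature_map.flatten(start_dim=1)   # [8, 64*7*7] = [8, 3136]

# (b) Tokenize: treat each spatial position as its own token of
#     dimension `channels`, as in SoftMoE-style architectures.
tokens = feature_map.flatten(start_dim=2)  # [8, 64, 49]
tokens = tokens.permute(0, 2, 1)           # [8, 49, 64] -> 49 tokens

# A single MLP "expert" applied token-wise (nn.Linear acts on the
# last dimension), standing in for the paper's finding that one
# appropriately scaled expert suffices. The 4x width is illustrative.
expert = torch.nn.Sequential(
    torch.nn.Linear(channels, 4 * channels),
    torch.nn.ReLU(),
    torch.nn.Linear(4 * channels, channels),
)
out = expert(tokens)  # [8, 49, 64], one output per token
```

In case (a) the expert would only ever see one collapsed vector per sample, whereas in case (b) each spatial position is processed as a separate token; per the abstract, it is this tokenization, not the presence of multiple experts, that accounts for the performance gains.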
