

Poster

Learning Randomized Algorithms with Transformers

Johannes von Oswald · Seijin Kobayashi · Yassir Akram · Angelika Steger

Hall 3 + Hall 2B #582
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT
Oral presentation: Oral Session 6C
Sat 26 Apr 12:30 a.m. PDT — 2 a.m. PDT

Abstract:

Randomization is a powerful tool that endows algorithms with remarkable properties. For instance, randomized algorithms excel in adversarial settings, often surpassing the worst-case performance of deterministic algorithms by large margins. Furthermore, their success probability can be amplified by simple strategies such as repetition and majority voting. In this paper, we enhance deep neural networks, in particular transformer models, with randomization. We demonstrate for the first time that randomized algorithms can be instilled in transformers through learning, in a purely data- and objective-driven manner. First, we analyze known adversarial objectives for which randomized algorithms offer a distinct advantage over deterministic ones. We then show that common optimization techniques, such as gradient descent or evolutionary strategies, can effectively learn transformer parameters that make use of the randomness provided to the model. To illustrate the broad applicability of randomization in empowering neural networks, we study three conceptual tasks: associative recall, graph coloring, and agents that explore grid worlds. In addition to demonstrating increased robustness against oblivious adversaries through learned randomization, our experiments reveal remarkable performance improvements due to the inherently random nature of the neural networks' computation and predictions.
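The amplification claim above (repetition plus majority voting) can be made concrete with a toy simulation. The sketch below is a minimal Python illustration, not code from the paper; the per-run success probability p = 0.7 and the vote count k = 11 are illustrative assumptions. It shows that majority voting over independent runs of a randomized procedure whose per-run accuracy exceeds 1/2 boosts the overall success probability.

    import random

    def run_once(p=0.7):
        # Hypothetical randomized algorithm: returns the correct answer
        # with probability p, otherwise a wrong one (p is an assumption).
        return "correct" if random.random() < p else "wrong"

    def majority(k=11, p=0.7):
        # Amplification: repeat the randomized algorithm k times
        # (k odd, so no ties) and take a majority vote over the outputs.
        votes = [run_once(p) for _ in range(k)]
        return max(set(votes), key=votes.count)

    trials = 10_000
    single = sum(run_once() == "correct" for _ in range(trials)) / trials
    voted = sum(majority() == "correct" for _ in range(trials)) / trials
    print(f"single run: ~{single:.2f}, 11-run majority: ~{voted:.2f}")
    # With p = 0.7, the majority vote succeeds roughly 92% of the time,
    # matching the binomial tail P(X >= 6) for X ~ Binomial(11, 0.7).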
