Poster in Affinity Workshop: Blog Track Session 7
Fair Model-Based Reinforcement Learning Comparisons with Explicit and Consistent Update Frequency
Albert Thomas · Abdelhakim Benechehab · Giuseppe Paolo · Balázs Kégl
Halle B #1
Abstract:
Implicit update frequencies can introduce ambiguity in the interpretation of model-based reinforcement learning benchmarks, obscuring the real objective of the evaluation. While the update frequency can sometimes be optimized to improve performance, real-world applications often impose constraints, allowing updates only between deployments on the actual system. This blog post emphasizes the need for evaluations using consistent update frequencies across different algorithms to provide researchers and practitioners with clearer comparisons under realistic constraints.
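The point above can be made concrete with a minimal sketch (not the authors' code): a model-based RL training loop where the update frequency, i.e. the number of environment steps collected between two model/policy updates, is an explicit named parameter rather than being hidden inside the loop. All names here (`agent.act`, `agent.update`, `replay_buffer`) are illustrative placeholders.

```python
def train(env, agent, total_steps: int, update_frequency: int):
    """Run `total_steps` environment steps, updating the agent only every
    `update_frequency` steps. This mimics a deployment constraint where the
    policy stays fixed between two deployments on the real system."""
    replay_buffer = []
    obs = env.reset()
    for step in range(total_steps):
        action = agent.act(obs)  # policy is frozen between updates
        next_obs, reward, done, _ = env.step(action)
        replay_buffer.append((obs, action, reward, next_obs, done))
        obs = env.reset() if done else next_obs

        # Explicit update frequency: the model and policy are refreshed only
        # at these points, so two algorithms compared with the same
        # `update_frequency` face the same deployment schedule.
        if (step + 1) % update_frequency == 0:
            agent.update(replay_buffer)
    return agent
```

With this parameter exposed, reporting it alongside benchmark results (and keeping it identical across compared algorithms) removes the ambiguity the abstract describes.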