Poster
Conflict-Averse Gradient Aggregation for Constrained Multi-Objective Reinforcement Learning
Dohyeong Kim · Mineui Hong · Jeongho Park · Songhwai Oh
Hall 3 + Hall 2B #403
In real-world applications, a reinforcement learning (RL) agent should consider multiple objectives and adhere to safety guidelines. To address these considerations, we propose a constrained multi-objective RL algorithm named the constrained multi-objective gradient aggregator (CoMOGA). In multi-objective optimization, managing conflicts between the gradients of the multiple objectives is crucial to prevent policies from converging to local optima. It is also essential to handle safety constraints efficiently to ensure stable training and constraint satisfaction. We address these challenges by treating the maximization of multiple objectives as a constrained optimization problem (COP), where the constraints are defined to improve the original objectives. Existing safety constraints are then integrated into the COP, and the policy is updated by solving the COP, which ensures that gradient conflicts are avoided. Despite its simplicity, CoMOGA guarantees convergence to global optima in a tabular setting. Through various experiments, we confirm that preventing gradient conflicts is critical and that the proposed method achieves constraint satisfaction across all tasks.
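To make the abstract's core idea concrete, here is a minimal, illustrative sketch (not the authors' exact CoMOGA formulation) of aggregating conflicting objective gradients by solving a small local COP: find an update direction that improves every objective while respecting linearized safety constraints within a trust region. The names `obj_grads`, `con_grads`, `con_slacks`, and `radius` are hypothetical placeholders introduced only for this example.

```python
# Illustrative sketch of conflict-averse gradient aggregation as a local
# constrained optimization problem (COP), assuming per-objective gradients,
# linearized safety-constraint gradients, and remaining safety budgets.
import numpy as np
from scipy.optimize import minimize


def aggregate_direction(obj_grads, con_grads, con_slacks, radius=0.1):
    """Solve a small COP for the policy-update direction d.

    maximize  t                                (common improvement level)
    s.t.      g_i^T d >= t        for each objective gradient g_i
              c_j^T d <= slack_j  for each linearized safety constraint
              ||d||^2 <= radius^2              (trust region on the step)
    """
    dim = obj_grads.shape[1]
    x0 = np.zeros(dim + 1)          # decision variable x = [d, t]

    def neg_t(x):                   # minimize -t  <=>  maximize t
        return -x[-1]

    cons = []
    for g in obj_grads:             # every objective must improve by at least t
        cons.append({'type': 'ineq',
                     'fun': lambda x, g=g: g @ x[:-1] - x[-1]})
    for c, s in zip(con_grads, con_slacks):   # stay within the safety budget
        cons.append({'type': 'ineq',
                     'fun': lambda x, c=c, s=s: s - c @ x[:-1]})
    cons.append({'type': 'ineq',              # keep the step inside the trust region
                 'fun': lambda x: radius**2 - x[:-1] @ x[:-1]})

    res = minimize(neg_t, x0, method='SLSQP', constraints=cons)
    return res.x[:-1]               # aggregated, conflict-free update direction


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obj_grads = rng.normal(size=(3, 5))   # three (possibly conflicting) objectives
    con_grads = rng.normal(size=(1, 5))   # one linearized safety constraint
    con_slacks = np.array([0.05])         # remaining safety budget
    print(aggregate_direction(obj_grads, con_grads, con_slacks))
```

Because the common improvement level t is maximized jointly over all objectives, no single gradient can dominate the step, which is the intuition behind avoiding gradient conflicts described above.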