Deep reinforcement learning algorithms have recently achieved impressive results on a range of video games, yet they remain far less efficient than an average human player at learning a new game. What makes humans so good at solving these video games? Here, we study one aspect critical to human gameplay: the use of strong priors that enable efficient decision-making and problem-solving. We created a sample video game and conducted a series of experiments to quantify the kinds of prior knowledge humans bring to bear while playing such games. We did this by modifying the video game environment to systematically remove different types of visual information that humans could use as priors. We find that human performance degrades drastically once prior information is removed, whereas the performance of an RL agent is unaffected. Interestingly, we also find that general priors about objects, which humans acquire as early as two months of age, are among the most critical priors aiding human gameplay. Based on these findings, we propose a taxonomy of the object priors people employ when solving video games, which can potentially serve as a benchmark for future reinforcement learning algorithms aiming to incorporate human-like representations in their systems.
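The abstract does not include code, but the ablation it describes can be pictured as an observation wrapper that re-skins the game's sprites so their appearance no longer signals their semantics. The sketch below is a minimal illustration of that idea, assuming a Gymnasium-style image environment; the wrapper name, environment id, and color palette are all hypothetical, not the authors' implementation.

```python
import numpy as np
import gymnasium as gym


class MaskVisualPriors(gym.ObservationWrapper):
    """Hypothetical ablation: recolor object sprites to uniform blocks,
    removing visual cues a human might use as priors (e.g. that ladders
    are climbable or spikes are dangerous)."""

    def __init__(self, env, palette):
        super().__init__(env)
        # palette maps each original sprite color (an RGB tuple) to a
        # semantics-free replacement color; both sides are assumptions.
        self.palette = palette

    def observation(self, obs):
        masked = obs.copy()
        for src, dst in self.palette.items():
            # Boolean mask of pixels matching a known object color.
            hit = np.all(masked == np.array(src, dtype=masked.dtype), axis=-1)
            masked[hit] = dst  # overwrite with the neutral color
        return masked


# Usage (environment id and colors are placeholders):
# env = MaskVisualPriors(gym.make("CustomPlatformer-v0"),
#                        palette={(200, 50, 50): (120, 120, 120)})
```

Because the wrapper changes only pixel appearance and not game dynamics, it isolates the contribution of visual priors: a pixel-based RL agent sees an equivalent learning problem, while a human loses the semantic cues.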