Adam's research focuses on understanding the fundamental principles of learning in young humans and animals. Adam seeks to understand the algorithms and representations that allow people to progress from motor babbling, to open-ended play, to purposeful goal-directed behaviours. Adam is interested in continual learning problems where the agent is much smaller than the world and thus must continually learn, react, and track in order to perform well. In particular, Adam's lab has investigated intrinsic reward and exploration, more efficient algorithms for off-policy learning, practical strategies for automatic hyperparameter tuning and meta-learning, representations for online continual prediction in the face of partial observability, and new approaches to planning with learned models. In addition, Adam's group is deeply passionate about good empirical practices and new methodologies to help determine whether our algorithms are ready for deployment in the real world.