Poster in Workshop: A Roadmap to Never-Ending RL
Self-Constructing Neural Networks through Random Mutation
Samuel Schmidgall
The search for neural architectures is producing many of the most exciting results in artificial intelligence. It has become increasingly apparent that task-specific neural architecture plays a crucial role in solving problems effectively. This paper presents a simple method for learning neural architecture through random mutation. The method demonstrates that 1) neural architecture may be learned during the agent's lifetime, 2) neural architecture may be constructed over a single lifetime without any initial connections or neurons, and 3) architectural modifications enable rapid adaptation to dynamic and novel task scenarios. Starting without any neurons or connections, the method constructs a neural architecture capable of high performance on several tasks. Its lifelong learning capabilities are demonstrated in an environment without episodic resets, where the agent continues to learn under constantly changing morphology, limb disablement, and shifting task goals, all without losing locomotion capability.
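The abstract does not give implementation details, but the core idea it describes, growing a network from zero neurons and connections by proposing random structural mutations and keeping those that do not degrade task reward, can be sketched as follows. This is a minimal Python sketch under that hill-climbing assumption; the Network class, mutate, evaluate, and the dummy objective are hypothetical illustrations, not the author's implementation.

```python
# Minimal sketch of self-construction via random structural mutation.
# Assumptions (not specified in the abstract): the network starts empty,
# a mutation either adds a neuron, adds a connection, or perturbs a weight,
# and a mutation is kept only if it does not reduce task reward.
import copy
import random


class Network:
    """Sparse network grown from zero hidden neurons and zero connections."""

    def __init__(self, n_inputs, n_outputs):
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs
        self.hidden = []        # ids of hidden neurons added so far
        self.connections = {}   # (src, dst) -> weight

    def mutate(self):
        """Apply one random structural or weight mutation in place."""
        nodes = list(range(self.n_inputs)) + self.hidden + \
            [f"out{i}" for i in range(self.n_outputs)]
        choice = random.random()
        if choice < 0.3 or not self.connections:
            # add a new hidden neuron wired to one random source and sink
            new_id = f"h{len(self.hidden)}"
            self.hidden.append(new_id)
            self.connections[(random.choice(nodes), new_id)] = random.gauss(0, 1)
            self.connections[(new_id, random.choice(nodes))] = random.gauss(0, 1)
        elif choice < 0.7:
            # add a new random connection between existing nodes
            src, dst = random.choice(nodes), random.choice(nodes)
            self.connections[(src, dst)] = random.gauss(0, 1)
        else:
            # perturb an existing connection weight
            key = random.choice(list(self.connections))
            self.connections[key] += random.gauss(0, 0.1)


def evaluate(net):
    """Placeholder for the task return (e.g., a locomotion reward).
    A dummy objective favoring a moderate amount of structure, used
    only so the loop below runs end to end."""
    return len(net.connections) - 0.05 * len(net.connections) ** 2


def self_construct(steps=1000):
    """Grow a network by keeping random mutations that do not hurt reward."""
    net = Network(n_inputs=4, n_outputs=2)   # no initial hidden structure
    best = evaluate(net)
    for _ in range(steps):
        candidate = copy.deepcopy(net)
        candidate.mutate()
        score = evaluate(candidate)
        if score >= best:                    # keep non-degrading mutations
            net, best = candidate, score
    return net


if __name__ == "__main__":
    grown = self_construct()
    print(len(grown.hidden), "hidden neurons,", len(grown.connections), "connections")
```

Because mutations are evaluated and accepted continually rather than across generations, a loop of this form can in principle keep adapting the architecture as the task or morphology changes, which is the setting the abstract targets.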