Keynote in Workshop: Machine Learning Multiscale Processes
Math + AI = AGI
Sergei Gukov
Abstract. In this talk, we explore the transformative potential of custom reinforcement learning (RL) algorithms in accelerating solutions to complex, research-level mathematical challenges. We begin by illustrating how these algorithms have achieved a tenfold (10X) improvement in areas where previous advances of the same magnitude required many decades. A comparative analysis of different network architectures highlights their performance in this context. We then turn to the application of RL algorithms to exceptionally demanding tasks, such as those posed by the Millennium Prize problems and the smooth Poincaré conjecture in four dimensions. Drawing on our experience, we discuss the prerequisites for developing new RL algorithms and architectures tailored to these high-level challenges. Based on recent work: "What makes math problems hard for reinforcement learning: a case study," https://arxiv.org/abs/2408.15332
Biography. Sergei Gukov is Director of the Merkin Center for Pure and Applied Mathematics, Consulting Director of the American Institute of Mathematics, and John D. MacArthur Professor of Theoretical Physics and Mathematics at the California Institute of Technology. He is a member of the Scientific Board of the American Institute of Mathematics (AIM) and of the International Advisory Board of the Centre for Quantum Mathematics (QM), and he has served on numerous other scientific committees and advisory boards. He is an editor of Communications in Mathematical Physics, the Journal of Knot Theory and Its Ramifications, and Letters in Mathematical Physics. He is known for the Gukov–Vafa–Witten superpotential, Gukov–Witten surface operators, and the Gukov–Pei–Putrov–Vafa (GPPV) invariants. His expertise is uniquely positioned at the intersection of theoretical physics, mathematics, and machine learning.