Invited Talk
Workshop: Neurosymbolic Generative Models (NeSy-GeMs)

Learning with Discrete Structures and Algorithms

Mathias Niepert

Thu 4 May 1:15 a.m. PDT — 2 a.m. PDT


Machine learning at scale has led to impressive results in text-based image generation, reasoning with natural language, and code synthesis, to name but a few. ML at scale is also successfully applied to a broad range of problems in engineering and the sciences. These recent developments make some of us question the utility of incorporating prior knowledge in the form of symbolic (discrete) structures and algorithms. Are computing and data at scale all we need?

We will argue that discrete (symbolic) structures and algorithms in machine learning models are advantageous, and even required, in numerous application domains such as biology, materials science, and physics. Biomedical entities and their structural properties, for example, can be represented as graphs and call for inductive biases equivariant to certain group operations. My lab's research is concerned with the development of machine learning methods that combine discrete structures with continuous equivariant representations. We also address the problem of learning and leveraging structure from data where it is missing, combining discrete algorithms and probabilistic models with gradient-based learning. We will show that discrete structures and algorithms appear in numerous places, such as ML-based PDE solvers, and that modeling them explicitly is indeed beneficial. Machine learning models intended to exhibit some form of explanatory behavior, in particular, have to rely on symbolic representations. The talk will also cover some biomedical and physics-related applications.
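To make the idea of combining discrete algorithms with gradient-based learning concrete, the sketch below illustrates one common pattern (not necessarily the speaker's specific method): sample a hard, discrete one-hot structure in the forward pass via the Gumbel-max trick, and use a smooth softmax-based surrogate gradient in the backward pass. All function names here are illustrative.

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample_one_hot(logits, rng):
    # Gumbel-max trick: argmax of Gumbel-perturbed logits is an
    # exact sample from the categorical distribution softmax(logits)
    gumbels = [-math.log(-math.log(rng.random())) for _ in logits]
    idx = max(range(len(logits)), key=lambda i: logits[i] + gumbels[i])
    return [1.0 if i == idx else 0.0 for i in range(len(logits))]

def surrogate_grad(logits, grad_out):
    # Backward pass pretends the hard sample was the soft distribution:
    # this is the softmax Jacobian-vector product with grad_out
    p = softmax(logits)
    dot = sum(pi * gi for pi, gi in zip(p, grad_out))
    return [pi * (gi - dot) for pi, gi in zip(p, grad_out)]

rng = random.Random(0)
logits = [0.5, 2.0, -1.0]
y = sample_one_hot(logits, rng)      # hard, discrete structure used downstream
g = surrogate_grad(logits, [1.0, 0.0, 0.0])  # smooth gradient for the logits
```

The discrete sample `y` can feed any downstream algorithm that requires hard decisions (e.g., selecting graph edges), while the surrogate gradient keeps the logits trainable end to end.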
