

Poster in Workshop: Neurosymbolic Generative Models (NeSy-GeMs)

[Remote poster] Guaranteed Conformance of Neurosymbolic Dynamics Models to Natural Constraints

Kaustubh Sridhar · Souradeep Dutta · James Weimer · Insup Lee


Abstract: In safety-critical robotics and medical applications, deep neural networks are commonly used to capture the evolution of dynamical systems. This is particularly useful in modeling medical systems, where data can be leveraged to individualize treatment. It is important that the data-driven model conform to established knowledge from the natural sciences. Such knowledge is often available, or can be distilled, into a (possibly black-box) model $M$; for instance, the unicycle model (which encodes Newton's laws) for an F1 racing car. Here, we wish to best approximate the system dynamics while remaining, with guarantees, within a bounded distance of $M$. To enforce conformance in regions where data is absent, we generate an unlabelled dataset. Our first step is to distill all our data into a few representative samples, called memories, using the idea of a growing neural gas. Next, using these memories, we partition the state space into disjoint subsets and compute bounds for each subset utilizing $M$. This serves as a symbolic wrapper that guarantees conformance. We argue theoretically that this leads to only a bounded increase in approximation error, which can be controlled by increasing the number of memories. We show experimentally that on three case studies (Car Model, Drones, and Artificial Pancreas), our constrained neurosymbolic models conform to the specified $M$ models (each encoding various constraints) with order-of-magnitude improvements over augmented-Lagrangian and vanilla training methods. Our code can be found at https://github.com/neurosymbolic-models/constrained_dynamics.
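To make the partition-and-clamp idea concrete, below is a minimal illustrative sketch, not the authors' implementation (which is in the linked repository). It assumes the memories have already been distilled (e.g., by a growing neural gas), stands in placeholder functions for the prior model M and the learned model f_theta, and uses a single hypothetical per-cell tolerance delta; in the paper, per-subset bounds are computed from M itself.

import numpy as np

# Placeholder prior model M (e.g., a unicycle model): maps state -> next state.
def M(x):
    return 0.9 * x  # illustration only

# Placeholder learned dynamics model f_theta.
def f_theta(x):
    return 0.9 * x + 0.3 * np.sin(x)  # illustration only

def nearest_memory(x, memories):
    """Index of the memory whose (Voronoi) cell contains state x."""
    return int(np.argmin(np.linalg.norm(memories - x, axis=1)))

def conformant_predict(x, memories, delta):
    """Symbolic wrapper: clamp the learned model's prediction to within
    delta of M evaluated at the representative memory of x's cell."""
    center = M(memories[nearest_memory(x, memories)])
    return np.clip(f_theta(x), center - delta, center + delta)

# Usage with a 2-D state and three pre-distilled memories (all values hypothetical).
memories = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 0.5]])
x = np.array([0.8, 1.2])
print(conformant_predict(x, memories, delta=0.2))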
