

In-Person Poster presentation / top 25% paper

Learning with Logical Constraints but without Shortcut Satisfaction

Zenan Li · Zehua Liu · Yuan Yao · Jingwei Xu · Taolue Chen · Xiaoxing Ma · Jian Lu

MH1-2-3-4 #85

Keywords: [ training with logical constraints ] [ logical formula encoding ] [ stochastic gradient descent ascent ] [ variational learning ] [ Deep Learning and representational learning ]


Abstract:

Recent studies have started to explore the integration of logical knowledge into deep learning by encoding logical constraints as an additional loss function. However, existing approaches tend to vacuously satisfy logical constraints through shortcuts, failing to fully exploit the knowledge. In this paper, we present a new framework for learning with logical constraints. Specifically, we address the shortcut satisfaction issue by introducing dual variables for logical connectives, which encode how the constraint is satisfied. We further propose a variational framework in which the encoded logical constraint is expressed as a distributional loss that is compatible with the model's original training loss. Theoretical analysis shows that the proposed approach has several desirable properties, and experimental evaluations demonstrate its superior performance in both model generalizability and constraint satisfaction.
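As a rough illustration of the training scheme sketched in the abstract, the snippet below shows stochastic gradient descent ascent on a constraint-augmented loss: the model parameters are updated by gradient descent on a Lagrangian-style objective, while a dual variable attached to a logical constraint is updated by gradient ascent. This is a minimal sketch, not the authors' implementation; the specific constraint (a soft implication between two output scores), its encoding, the synthetic data, and all identifiers (`model`, `lam`, `lr_dual`, `implication_loss`) are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's actual formulation) of learning
# with a logical constraint via stochastic gradient descent ascent (SGDA):
# descent on the model parameters, ascent on a dual variable for the constraint.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(256, 10)                      # synthetic inputs (assumption)
y = torch.randint(0, 2, (256, 2)).float()     # synthetic multi-label targets (assumption)

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
lam = torch.tensor(0.0, requires_grad=True)   # dual variable for the implication constraint
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
lr_dual = 1e-2                                # step size for the dual (ascent) update
bce = nn.BCEWithLogitsLoss()

def implication_loss(p, q):
    # Soft penalty for violating "p -> q": probability mass where p holds but q does not.
    return (p * (1.0 - q)).mean()

for step in range(100):
    logits = model(x)
    p, q = torch.sigmoid(logits[:, 0]), torch.sigmoid(logits[:, 1])

    task_loss = bce(logits, y)
    lagrangian = task_loss + lam * implication_loss(p, q)

    opt.zero_grad()
    if lam.grad is not None:
        lam.grad.zero_()
    lagrangian.backward()
    opt.step()                                # gradient descent step on model parameters
    with torch.no_grad():
        lam += lr_dual * lam.grad             # gradient ascent step on the dual variable
        lam.clamp_(min=0.0)                   # keep the multiplier non-negative
```

In this sketch the dual variable grows while the constraint is violated, increasing the pressure on the model to satisfy it, and shrinks toward zero once it is satisfied; how dual variables are attached to individual logical connectives in the paper differs from this simplified single-multiplier setup.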
