
Enhancing Human-AI Collaboration Through Logic-Guided Reasoning

Chengzhi Cao · Yinghao Fu · Sheng Xu · Ruimao Zhang · Shuang Li

Halle B #266
Thu 9 May 7:30 a.m. PDT — 9:30 a.m. PDT


We present a systematic framework designed to enhance human-robot perception and collaboration through the integration of logic rules and Theory of Mind (ToM). Logic rules provide interpretable predictions and generalize well across diverse tasks, making them valuable for learning and decision-making. Leveraging ToM to understand others' mental states, our approach facilitates effective collaboration. In this paper, we employ logic rules derived from observational data to infer human goals and guide human-like agents. These rules are treated as latent variables, and a rule encoder is trained alongside a multi-agent system in the robot's mind. We assess the posterior distribution of latent rules using learned embeddings that represent entities and relations; a confidence score for each rule indicates its consistency with the observed data. We then employ a hierarchical reinforcement learning model with ToM to plan robot actions for assisting humans. Extensive experiments validate each component of our framework, and results on multiple benchmarks show that our model outperforms most existing approaches.
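The abstract describes scoring candidate logic rules by their consistency with observed data, using learned embeddings of entities and relations. The toy sketch below illustrates one plausible reading of that idea; it is not the paper's implementation, and all names (`entity_emb`, `score_rule`, the bilinear-style score) are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: score candidate logic rules (head, relation, tail)
# by compatibility of learned embeddings, then normalize the confidence
# scores into a posterior-like distribution over candidate rules.

rng = np.random.default_rng(0)
embed_dim = 8

# Toy stand-ins for learned embeddings (random here, trained in practice).
entity_emb = {"human": rng.normal(size=embed_dim),
              "cup":   rng.normal(size=embed_dim)}
relation_emb = {"reach_for": rng.normal(size=embed_dim),
                "ignore":    rng.normal(size=embed_dim)}

def score_rule(head, relation, tail):
    """Confidence that the rule 'head -relation-> tail' fits the data:
    a sigmoid over a bilinear-style compatibility score."""
    s = np.dot(entity_emb[head] * relation_emb[relation], entity_emb[tail])
    return 1.0 / (1.0 + np.exp(-s))

candidates = [("human", "reach_for", "cup"), ("human", "ignore", "cup")]
conf = np.array([score_rule(*c) for c in candidates])
posterior = conf / conf.sum()  # relative weight of each candidate rule
for c, p in zip(candidates, posterior):
    print(c, round(float(p), 3))
```

In the paper's framework these weights would be produced by a trained rule encoder rather than random embeddings; the sketch only shows the shape of the computation, where higher-confidence rules dominate the inferred goal distribution.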
