
Poster
in
Workshop: Pitfalls of limited data and computation for Trustworthy ML

KNIFE: Distilling Meta-Reasoning Knowledge with Free-Text Rationales

Aaron Chan · Zhiyuan Zeng · Wyatt Lake · Brihi Joshi · Hanjie Chen · Xiang Ren


Abstract:

Recent works have explored using free-text rationales (FTRs)---i.e., natural language explanations of a task output---to teach language models (LMs) how to solve NLP tasks. In these works, the LM is often finetuned or prompted to jointly generate the FTR and task output. However, this approach either involves finetuning LMs on possibly conflicting objectives or prompting prohibitively large LMs. To address this, we propose KNIFE, which guides LM reasoning via FTR knowledge distillation, instead of via FTR generation. KNIFE first finetunes an FTR-augmented teacher LM to predict the task output, then finetunes a student LM so that its hidden states are aligned with the teacher's. As a result, the student LM learns general reasoning knowledge from the FTRs and can be used for inference, without FTR generation or large LMs. On two question answering datasets, we show that KNIFE outperforms various baselines in both fully-supervised and low-resource settings. Also, using two more datasets, we analyze KNIFE's failure modes and identify FTR quality as critical to KNIFE performance.
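The core of KNIFE is aligning the student LM's hidden states with those of the FTR-augmented teacher. The paper does not specify the exact loss here, so the sketch below is an assumption: a mean-squared-error alignment objective over per-layer hidden states, with toy arrays standing in for real LM activations (function name and shapes are illustrative, not the authors' implementation).

```python
import numpy as np

def ftr_distillation_loss(student_hidden, teacher_hidden):
    """MSE between student and teacher hidden states, averaged over layers.

    A simplified stand-in for KNIFE's hidden-state alignment objective;
    the actual layer selection and weighting are not specified here.
    """
    per_layer = [np.mean((s - t) ** 2)
                 for s, t in zip(student_hidden, teacher_hidden)]
    return float(np.mean(per_layer))

# Toy hidden states: 2 layers, 4 positions, hidden dim 8.
rng = np.random.default_rng(0)
teacher = [rng.standard_normal((4, 8)) for _ in range(2)]
# A student slightly perturbed from the teacher incurs a small loss.
student = [h + 0.1 * rng.standard_normal((4, 8)) for h in teacher]

print(ftr_distillation_loss(student, teacher))  # small positive value
```

In training, this alignment term would be minimized with respect to the student's parameters, so the student internalizes the teacher's FTR-informed reasoning without ever generating rationales at inference time.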