Poster
RuAG: Learned-rule-augmented Generation for Large Language Models
Yudi Zhang · Pei Xiao · Lu Wang · Chaoyun Zhang · Meng Fang · Yali Du · Yevgeniy Puzyrev · Randolph Yao · Si Qin · Qingwei Lin · Mykola Pechenizkiy · Dongmei Zhang · Saravanakumar Rajmohan · Qi Zhang
Hall 3 + Hall 2B #232
In-context learning (ICL) and Retrieval-Augmented Generation (RAG) have gained attention for their ability to enhance LLMs' reasoning by incorporating external knowledge, but both are constrained by the limited context window, which restricts how much information can be injected. To this end, we propose a novel framework that automatically distills large volumes of offline data into interpretable first-order logic rules, which are injected into LLMs to boost their reasoning capabilities. Our method first formulates the rule-search process by relying on the LLM's commonsense, with the LLM automatically defining the head and body predicates. We then apply Monte Carlo Tree Search (MCTS) to navigate the combinatorial search space and efficiently discover logic rules from the data. The resulting logic rules are translated into natural language, allowing targeted knowledge injection and seamless integration into LLM prompts for downstream task reasoning. We evaluate our framework on public and private industrial tasks spanning Natural Language Processing (NLP), time series, and decision making, demonstrating its effectiveness in enhancing LLMs' capabilities over diverse tasks.
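To make the pipeline concrete, below is a minimal sketch of the core idea described in the abstract: searching over conjunctions of body predicates with MCTS, scoring candidate rules on offline data, and rendering the best rule as a natural-language sentence for an LLM prompt. The toy data, predicate names, and the use of rule precision as the reward are illustrative assumptions, not the authors' implementation.

```python
import math
import random
from dataclasses import dataclass, field

# Toy offline data (hypothetical): each record assigns truth values to
# candidate body predicates and to the head predicate we want to explain.
DATA = [
    {"high_cpu": True,  "disk_full": False, "restart_recent": True,  "incident": True},
    {"high_cpu": True,  "disk_full": True,  "restart_recent": False, "incident": True},
    {"high_cpu": False, "disk_full": True,  "restart_recent": False, "incident": False},
    {"high_cpu": False, "disk_full": False, "restart_recent": True,  "incident": False},
    {"high_cpu": True,  "disk_full": False, "restart_recent": False, "incident": True},
]
BODY_PREDICATES = ["high_cpu", "disk_full", "restart_recent"]
HEAD_PREDICATE = "incident"


def rule_precision(body: frozenset) -> float:
    """Precision of the rule `body -> head` on the offline data (assumed reward)."""
    covered = [r for r in DATA if all(r[p] for p in body)]
    if not covered:
        return 0.0
    return sum(r[HEAD_PREDICATE] for r in covered) / len(covered)


@dataclass
class Node:
    body: frozenset                      # conjunction of body predicates so far
    parent: "Node | None" = None
    children: list = field(default_factory=list)
    visits: int = 0
    value: float = 0.0

    def untried(self):
        return [p for p in BODY_PREDICATES if p not in self.body]


def ucb(child: Node, parent_visits: int, c: float = 1.4) -> float:
    # Upper Confidence Bound used to balance exploration and exploitation.
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)


def mcts(iterations: int = 200) -> frozenset:
    root = Node(frozenset())
    for _ in range(iterations):
        node = root
        # Selection: descend while the node is fully expanded.
        while not node.untried() and node.children:
            node = max(node.children, key=lambda ch: ucb(ch, node.visits))
        # Expansion: extend the rule body with one unused predicate.
        if node.untried():
            pred = random.choice(node.untried())
            child = Node(node.body | {pred}, parent=node)
            node.children.append(child)
            node = child
        # Evaluation: reward is the precision of the current candidate rule.
        reward = rule_precision(node.body)
        # Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the explored non-empty body with the best empirical precision.
    best, stack = frozenset(), [root]
    while stack:
        n = stack.pop()
        if n.body and rule_precision(n.body) > rule_precision(best):
            best = n.body
        stack.extend(n.children)
    return best


if __name__ == "__main__":
    body = mcts()
    # Translate the learned rule into natural language for prompt injection.
    sentence = (f"If {' and '.join(sorted(body))} hold, then {HEAD_PREDICATE} is likely "
                f"(precision {rule_precision(body):.2f} on offline logs).")
    print("Learned rule for the LLM prompt:", sentence)
```

In the full framework the predicate definitions would come from the LLM itself and the discovered rules would be appended to the task prompt; this sketch only illustrates the MCTS-over-predicates search and the rule-to-text translation step.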