

Poster
in
Workshop: How Far Are We From AGI

CatCode: A Comprehensive Evaluation Framework for LLMs On the Mixture of Code and Text

Zhenru Lin · Yiqun Yao · Yang Yuan

Keywords: [ evaluation ] [ Code ] [ LLM ] [ Category Theory ]


Abstract:

Large language models (LLMs) such as ChatGPT are increasingly proficient at understanding and generating a mixture of code and text. Evaluation based on such a mixture can lead to a more comprehensive understanding of a model's ability to solve coding problems. However, in this context, current evaluation methods are either limited in task coverage or lack standardization. To address this issue, we propose using category theory as an evaluation framework. Specifically, morphisms within a code category can represent code debugging and transformation, functors between two code categories represent code translation, and functors between a code category and a natural-language category represent code generation, explanation, and reproduction. We present an automatic evaluation framework called CatCode (Category Code) that can comprehensively assess the coding abilities of LLMs, including ChatGPT, Text-Davinci, and CodeGeeX.
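The mapping sketched in the abstract (morphisms as code transformations, functors as translations between code and natural language) can be illustrated with a minimal, hypothetical sketch. The names below (`Morphism`, `compose`, `explain`, `fix_typo`) are illustrative assumptions, not the paper's actual API:

```python
from typing import Callable

# Sketch, not CatCode itself: objects of the "code category" are code
# snippets (strings); morphisms are snippet-to-snippet transformations.
Code = str
Morphism = Callable[[Code], Code]

def compose(g: Morphism, f: Morphism) -> Morphism:
    """Morphism composition: apply f first, then g."""
    return lambda code: g(f(code))

# Identity morphism: leaves a snippet unchanged.
identity: Morphism = lambda code: code

# A toy "debugging" morphism within the code category.
fix_typo: Morphism = lambda code: code.replace("pritn", "print")

# A toy functor from the code category to the natural-language category:
# it maps each code object to a description. (A real functor would also
# map each code morphism to a corresponding text transformation.)
def explain(code: Code) -> str:
    return f"a program that runs: {code}"

buggy = "pritn('hello')"
fixed = compose(identity, fix_typo)(buggy)
print(fixed)            # print('hello')
print(explain(fixed))   # a program that runs: print('hello')
```

Under this framing, an evaluation task checks whether an LLM's outputs respect the categorical structure, e.g., whether its translation of a transformed program matches the transformation of its translation.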
