Poster in Workshop: The 3rd DL4C Workshop: Emergent Possibilities and Challenges in Deep Learning for Code

CLOVER: A Test Case Generation Benchmark with Coverage, Long-Context, and Verification

Jiacheng Xu · Bo Pang · Jin Qu · Hiroaki Hayashi · Caiming Xiong · Yingbo Zhou


Abstract:

Software testing is a critical aspect of software development, yet writing test cases remains a routine, labor-intensive task for engineers. This paper presents CLOVER, a benchmark for evaluating models' capabilities in generating and completing test cases under specific conditions. The tasks range from simple assertion completion to writing test cases that cover specific code blocks across multiple files; they are drawn from 12 Python repositories and comprise 845 problems with context lengths ranging from 4k to 128k tokens. Leveraging code testing frameworks, we propose a method to construct retrieval contexts from coverage information. While models exhibit comparable performance with short contexts, notable differences emerge at 16k contexts. In particular, models such as GPT-4o and Claude 3.5 can effectively leverage relevant snippets; however, all models score below 35% on the complex Task III, even when the oracle context is provided, underscoring the benchmark's significance and the potential for model improvement. The benchmark is containerized for code execution across tasks, and we will release the code, data, and construction methodologies.
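
The coverage-based retrieval-context construction is only named in the abstract, not specified. The sketch below is a minimal, hypothetical illustration of the general idea, assuming coverage.py is used to record which source lines a single test executes and that those lines (plus a small surrounding window) become the retrieval context; the function names, window size, and context format are assumptions, not the released CLOVER code.

```python
# Hypothetical sketch (not the CLOVER release): collect line coverage for one
# test with coverage.py, then slice the covered regions of each source file
# into a retrieval context for a test-generation prompt.
import coverage


def covered_lines(test_callable, source_dirs):
    """Run one test under coverage and return {file: sorted covered line numbers}."""
    cov = coverage.Coverage(data_file=None, source=source_dirs)
    cov.start()
    try:
        test_callable()
    finally:
        cov.stop()
    data = cov.get_data()
    return {f: sorted(data.lines(f) or []) for f in data.measured_files()}


def build_context(coverage_map, window=5):
    """Extract a window of lines around each covered line as retrieval context."""
    chunks = []
    for path, lines in coverage_map.items():
        with open(path, encoding="utf-8") as fh:
            src = fh.readlines()
        keep = set()
        for ln in lines:
            keep.update(range(max(1, ln - window), min(len(src), ln + window) + 1))
        snippet = "".join(src[i - 1] for i in sorted(keep))
        chunks.append(f"# File: {path}\n{snippet}")
    return "\n\n".join(chunks)
```

As a usage example (again hypothetical), `build_context(covered_lines(lambda: my_test(), ["src/"]))` would return the source snippets exercised by `my_test`, which could then be placed in the model's prompt as the coverage-derived retrieval context.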
