Poster in Workshop: ICLR 2025 Workshop on Human-AI Coevolution

Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback

Niklas Ippisch · Anna-Carolina Haensch · Markus Herklotz · Jan Simson · Jacob Beck · Malte Schierholz


Abstract:

We introduce an evaluation framework to assess the feedback given by large language models (LLMs) under different prompt engineering techniques. In a case study, we systematically vary prompts to examine their influence on feedback quality for common programming errors in R. Our findings suggest that prompts recommending a stepwise approach improve the precision of the feedback, whereas omitting explicit details on which data to analyze can bolster error identification.
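The study's actual prompts are not reproduced on this page. The Python sketch below is only an illustration, under assumption, of the kind of zero-shot prompt variants the abstract contrasts: one recommending a stepwise review, and one that omits explicit details about the data being analyzed. The buggy R snippet and the `query_llm` helper are hypothetical placeholders, not the study's materials or API.

```python
# Illustrative sketch of two zero-shot prompt variants of the kind the
# abstract contrasts. The example R error and the query_llm helper are
# assumptions for demonstration, not the study's actual prompts or data.

BUGGY_R_CODE = """\
df <- read.csv("survey.csv")
mean(df$age)  # returns NA because age contains missing values
"""

# Variant A: recommends a stepwise review
# (the abstract reports this improves precision).
STEPWISE_PROMPT = f"""\
You are a tutor reviewing R code. Proceed step by step:
1. Restate what the code is intended to do.
2. Check each line for errors.
3. Explain any error you find and how to fix it.

Code:
{BUGGY_R_CODE}
"""

# Variant B: omits explicit details about which data is analyzed
# (the abstract reports this can bolster error identification).
MINIMAL_PROMPT = f"""\
Review the following R code and give feedback on any errors.

Code:
{BUGGY_R_CODE}
"""


def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to whichever LLM API is used."""
    raise NotImplementedError


if __name__ == "__main__":
    # Print both variants side by side for comparison.
    for name, prompt in [("stepwise", STEPWISE_PROMPT), ("minimal", MINIMAL_PROMPT)]:
        print(f"--- {name} variant ---")
        print(prompt)
```

In an evaluation framework of this kind, each variant would be sent to the same model over a fixed set of erroneous R programs, and the resulting feedback scored for whether the seeded error was correctly identified and explained.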
