

Poster in Affinity Workshop: Tiny Papers Poster Session 1

An Evaluation Benchmark for Autoformalization in Lean4

Jasdeep Sidhu · Shubhra Mishra

#303
Tue 7 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

In the advancing field of computational mathematics, Large Language Models (LLMs) hold the potential to revolutionize autoformalization, the translation of informal mathematical statements into machine-checkable formal ones, a process crucial across various disciplines. The introduction of Lean4, a proof assistant and programming language for formalized mathematics, presents an unprecedented opportunity to rigorously assess the autoformalization capabilities of LLMs. This paper introduces a novel evaluation benchmark designed for Lean4 and applies it to test the abilities of state-of-the-art LLMs, including GPT-3.5, GPT-4, and Gemini Pro. Our analysis reveals that, despite recent advancements, these LLMs still exhibit limitations in autoformalization, particularly in more complex areas of mathematics. These findings underscore the need for further development of LLMs to fully harness their potential in scientific research. This study not only benchmarks current LLM capabilities but also sets the stage for future improvements in autoformalization.
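To make the task concrete, the following is a minimal, hypothetical illustration of the kind of input-output pair an autoformalization benchmark might contain; it is not taken from the paper's dataset. The informal statement "the sum of two even natural numbers is even" is paired with a Lean4 formalization. The name IsEven is illustrative and defined locally, so the snippet compiles in plain Lean4 without Mathlib.

-- Informal statement: "The sum of two even natural numbers is even."
-- A possible Lean4 formalization; `IsEven` is defined locally so the
-- snippet is self-contained (no Mathlib import required).
def IsEven (n : Nat) : Prop := ∃ k, n = 2 * k

theorem isEven_add {m n : Nat} (hm : IsEven m) (hn : IsEven n) :
    IsEven (m + n) := by
  cases hm with
  | intro a ha =>
    cases hn with
    | intro b hb =>
      -- Rewrite m and n, then distribute: 2 * a + 2 * b = 2 * (a + b).
      exact ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩

In a benchmark setting of this kind, an LLM would be given only the informal sentence and asked to produce the formal statement (and possibly the proof); the Lean4 type checker then serves as an automatic correctness filter for the output.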
