

Poster in Workshop: The 3rd DL4C Workshop: Emergent Possibilities and Challenges in Deep Learning for Code

On Pretraining For Project-Level Code Completion

Maksim Sapronov · Evgenii Glukhov


Abstract:

Repository-level pretraining is commonly used to enable large language models for code to leverage codebase-wide context. This enhances their ability to generate accurate and context-aware code completions. In this work, we investigate how different repository-processing strategies affect in-context learning in OpenCoder, a 1.5B-parameter model. We extend its context window from 4,096 to 16,384 tokens by training on an additional 1B tokens of curated repository-level data. Despite relying on a smaller dataset than competing models (which often use hundreds of billions of tokens), our model achieves comparable performance on the Long Code Arena benchmark. We find that various repository-processing techniques yield similarly strong results, with the primary gain coming from adapting to a new rotary positional embedding (RoPE) scaling parameter. Finally, we show that a simpler file-level training approach at the original sequence length remains highly effective, opening up repository-level code completion research to settings with more constrained data and compute resources.
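The abstract attributes the primary gain to adapting the RoPE scaling parameter when extending the context window from 4,096 to 16,384 tokens. The sketch below illustrates the general idea of base-frequency adjustment for RoPE; it assumes the standard RoPE formulation, and the base values (10,000 and 500,000) and head dimension are illustrative placeholders, not the paper's reported hyperparameters.

```python
import torch

def rope_angles(head_dim: int, max_len: int, base: float) -> torch.Tensor:
    """Rotation angles for RoPE: position m is rotated by m * base**(-2i/d) per dimension pair."""
    inv_freq = base ** (-torch.arange(0, head_dim, 2).float() / head_dim)
    positions = torch.arange(max_len).float()
    return torch.outer(positions, inv_freq)  # shape: (max_len, head_dim // 2)

# Original 4k-context setup vs. a hypothetical long-context variant with a larger base.
angles_4k = rope_angles(head_dim=64, max_len=4_096, base=10_000.0)
angles_16k = rope_angles(head_dim=64, max_len=16_384, base=500_000.0)

# A larger base slows the per-position rotation of the low-frequency dimensions,
# so angles at 16k positions stay closer to the range seen during the original pretraining.
print(angles_4k[-1, -1].item(), angles_16k[-1, -1].item())
```

In practice, such a change to the RoPE base is accompanied by continued training at the longer sequence length (here, on the curated repository-level data) so the model adapts to the new positional geometry.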
