Poster
in
Workshop: The 3rd DL4C Workshop: Emergent Possibilities and Challenges in Deep Learning for Code
Contextual Augmented Multi-Model Programming (CAMP): A Local-Cloud Copilot Solution
Yuchen Wang · Shangxin Guo · Chee Wei Tan
The rapid advancement of cloud-based Large Language Models (LLMs) has revolutionized AI-assisted programming, but their integration into local development environments faces trade-offs between performance and cost. Cloud LLMs deliver superior generative power but incur high computational costs and latency, whereas local models offer faster, context-aware retrieval but are limited in scope. To address this, we propose CAMP, a multi-model copilot solution that leverages context-based Retrieval Augmented Generation (RAG) to enhance LLM performance through dynamic context retrieval from local codebases, optimizing context-aware prompt construction. Experimental results show CAMP achieves a 12.5% improvement over context-less generation and a 6.3% improvement over the basic RAG approach. We demonstrate the methodology through the development of "Copilot for Xcode," which supports generative programming tasks including code completion, error detection, and documentation. The tool gained widespread adoption and was subsequently integrated into GitHub Copilot, highlighting CAMP's impact on AI-assisted programming and its potential to transform future software development workflows.
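The RAG pipeline the abstract describes can be sketched roughly as follows: retrieve the snippets from the local codebase most relevant to the current task, then prepend them to the prompt sent to the cloud LLM. This is a minimal illustrative sketch, not CAMP's implementation: the function names (`retrieve_context`, `build_prompt`) are hypothetical, and a bag-of-words cosine similarity stands in for whatever embedding model a real system would use.

```python
# Hedged sketch of context-based RAG prompt construction.
# Assumption: token-count cosine similarity is a stand-in for embeddings.
import math
import re
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts (placeholder for an embedding model)."""
    ca = Counter(re.findall(r"\w+", a))
    cb = Counter(re.findall(r"\w+", b))
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_context(query: str, codebase: list[str], k: int = 2) -> list[str]:
    """Rank local code snippets by similarity to the query and keep the top k."""
    ranked = sorted(codebase, key=lambda s: similarity(query, s), reverse=True)
    return ranked[:k]

def build_prompt(query: str, snippets: list[str]) -> str:
    """Prepend retrieved snippets so the cloud LLM sees local context."""
    context = "\n---\n".join(snippets)
    return f"Context from local codebase:\n{context}\n\nTask:\n{query}"

codebase = [
    "def parse_config(path): ...",
    "def fetch_user(session, user_id): ...",
    "def render_template(name, ctx): ...",
]
snippets = retrieve_context("complete a parse_config call", codebase, k=1)
print(build_prompt("complete a parse_config call", snippets))
```

The design choice this illustrates is the local/cloud split: the cheap retrieval step runs locally against the user's codebase, while only the final, context-enriched prompt is sent to the expensive cloud model.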