

Poster

LLMCarbon: Modeling the End-to-End Carbon Footprint of Large Language Models

Ahmad Faiz · Sotaro Kaneda · Ruhan Wang · Rita Osi · Prateek Sharma · Fan Chen · Lei Jiang

Halle B #212
Thu 9 May 7:30 a.m. PDT — 9:30 a.m. PDT
 
Oral presentation: Oral 6B
Thu 9 May 6:45 a.m. PDT — 7:30 a.m. PDT

Abstract:

The carbon footprint associated with large language models (LLMs) is a significant concern, encompassing emissions from their training, inference, experimentation, and storage processes, and including both operational and embodied carbon emissions. An essential aspect is accurately estimating the carbon impact of emerging LLMs even before training, which relies heavily on GPU usage. Existing studies have reported the carbon footprint of LLM training, but only one tool, mlco2, can predict the carbon footprint of new neural networks prior to physical training. However, mlco2 has several serious limitations: it cannot extend its estimation to dense or mixture-of-experts (MoE) LLMs, disregards critical architectural parameters, focuses solely on GPUs, and cannot model embodied carbon footprints. Addressing these gaps, we introduce LLMCarbon, an end-to-end carbon footprint projection model designed for both dense and MoE LLMs. Compared to mlco2, LLMCarbon significantly improves the accuracy of carbon footprint estimations for various LLMs. The source code is released at https://github.com/SotaroKaneda/MLCarbon.
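To illustrate the two emission categories the abstract distinguishes, the sketch below shows a generic back-of-the-envelope accounting of operational carbon (energy drawn by GPUs, scaled by data-center PUE and grid carbon intensity) and embodied carbon (a prorated share of hardware manufacturing emissions). This is not the LLMCarbon model itself; the function names and all parameter values are hypothetical placeholders chosen only for illustration.

```python
# Generic sketch of operational + embodied carbon accounting for an LLM
# training run. Not the LLMCarbon model; all numbers are hypothetical.

def operational_co2e_kg(gpu_hours: float,
                        avg_gpu_power_w: float,
                        pue: float,
                        grid_intensity_kg_per_kwh: float) -> float:
    """Operational emissions: GPU energy use scaled by data-center PUE
    and the carbon intensity of the local electricity grid."""
    energy_kwh = gpu_hours * avg_gpu_power_w / 1000.0 * pue
    return energy_kwh * grid_intensity_kg_per_kwh


def embodied_co2e_kg(num_gpus: int,
                     per_gpu_embodied_kg: float,
                     training_days: float,
                     hardware_lifetime_days: float) -> float:
    """Embodied emissions: the share of each GPU's manufacturing footprint
    attributed to this run, prorated by the fraction of hardware lifetime used."""
    return num_gpus * per_gpu_embodied_kg * (training_days / hardware_lifetime_days)


if __name__ == "__main__":
    # Hypothetical example: 1,000 GPUs running for 30 days at 300 W average draw.
    gpu_hours = 1000 * 30 * 24
    op = operational_co2e_kg(gpu_hours, avg_gpu_power_w=300, pue=1.1,
                             grid_intensity_kg_per_kwh=0.4)
    emb = embodied_co2e_kg(num_gpus=1000, per_gpu_embodied_kg=150,
                           training_days=30, hardware_lifetime_days=365 * 5)
    print(f"operational: {op:.0f} kg CO2e, embodied: {emb:.0f} kg CO2e")
```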
