Poster

DOCS: Quantifying Weight Similarity for Deeper Insights into Large Language Models

Zeping Min · Xinshang Wang

Hall 3 + Hall 2B #254
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

We introduce a novel index, the Distribution of Cosine Similarity (DOCS), for quantitatively assessing the similarity between weight matrices in Large Language Models (LLMs), aiming to facilitate the analysis of their complex architectures. Leveraging DOCS, our analysis uncovers intriguing patterns in the latest open-source LLMs: adjacent layers frequently exhibit high weight similarity and tend to form clusters, suggesting depth-wise functional specialization. Additionally, we prove that DOCS is theoretically effective in quantifying similarity for orthogonal matrices, a crucial aspect given the prevalence of orthogonal initializations in LLMs. This research contributes to a deeper understanding of LLM architecture and behavior, offering tools with potential implications for developing more efficient and interpretable models.
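The core quantity behind DOCS is the distribution of cosine similarities between two weight matrices. The paper defines how this distribution is aggregated into a single index; the sketch below only illustrates the underlying row-wise cosine-similarity computation (function name and aggregation by mean are illustrative assumptions, not the authors' definition).

```python
import numpy as np

def cosine_similarity_distribution(W1, W2):
    """Row-wise cosine similarities between two same-shape weight matrices.

    NOTE: this is an illustrative sketch; the exact DOCS index (how the
    resulting distribution is summarized) is defined in the paper.
    """
    n1 = np.linalg.norm(W1, axis=1, keepdims=True)
    n2 = np.linalg.norm(W2, axis=1, keepdims=True)
    # Normalize each row to unit length, then take row-wise dot products.
    return np.sum((W1 / n1) * (W2 / n2), axis=1)

# Toy usage: comparing a matrix with itself yields similarities of 1,
# mimicking the "high similarity between adjacent layers" pattern.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16))
sims = cosine_similarity_distribution(A, A)
```

A layer-by-layer comparison would apply this to corresponding weight matrices (e.g. attention projections) of adjacent transformer layers and inspect the resulting distributions.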