Poster in Workshop: 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models
"I'm not Racist but…": Discovering Bias in the Internal Knowledge of Large Language Models
Abel Salinas · Louis Penafiel · Robert McCormack · Fred Morstatter
Large language models (LLMs) have garnered significant attention for their remarkable performance on a continuously expanding set of natural language processing tasks. However, these models have been shown to harbor inherent societal biases, or stereotypes, which can adversely affect their behavior across many downstream applications. In this paper, we introduce a novel, purely prompt-based approach to uncover hidden stereotypes within an arbitrary LLM. Our approach dynamically generates a knowledge representation of internal stereotypes, enabling the identification of biases encoded within the LLM's internal knowledge. We demonstrate how our approach can be leveraged to design targeted bias benchmarks, enabling rapid identification and mitigation of potential bias in downstream tasks. By illuminating the biases present in LLMs and offering a systematic methodology for their analysis, our work contributes to advancing transparency and promoting fairness in natural language processing systems.
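To illustrate the general idea of prompt-based elicitation of an LLM's internal associations, the following is a minimal sketch; it is not the authors' pipeline, and the model choice, target groups, and prompt template are assumptions made purely for illustration.

```python
# Illustrative sketch of a generic prompt-based bias probe (assumed setup,
# not the paper's method): prompt a model about social groups and tally the
# first completion word as a rough proxy for its elicited associations.
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in LLM (assumption)

GROUPS = ["teachers", "engineers", "nurses"]  # hypothetical target groups
TEMPLATE = "Most {group} are"                 # hypothetical probe prompt

def elicit_associations(group: str, n_samples: int = 5) -> Counter:
    """Sample short completions for a group and count the first generated word."""
    prompt = TEMPLATE.format(group=group)
    outputs = generator(
        prompt,
        max_new_tokens=3,
        num_return_sequences=n_samples,
        do_sample=True,
        pad_token_id=generator.tokenizer.eos_token_id,
    )
    first_words = Counter()
    for out in outputs:
        continuation = out["generated_text"][len(prompt):].strip().split()
        if continuation:
            first_words[continuation[0].lower()] += 1
    return first_words

if __name__ == "__main__":
    for group in GROUPS:
        print(group, elicit_associations(group))
```

Aggregating such elicited associations across many groups and prompt templates is one way a knowledge representation of model-internal stereotypes could be assembled for later benchmark construction.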