Poster

Prompting Fairness: Integrating Causality to Debias Large Language Models

Jingling Li · Zeyu Tang · Xiaoyu Liu · Peter Spirtes · Kun Zhang · Liu Leqi · Yang Liu

Hall 3 + Hall 2B #534
Sat 26 Apr, midnight to 2:30 a.m. PDT

Abstract:

Large language models (LLMs), despite their remarkable capabilities, are susceptible to generating biased and discriminatory responses. As LLMs increasingly influence high-stakes decision-making (e.g., hiring and healthcare), mitigating these biases becomes critical. In this work, we propose a causality-guided debiasing framework to tackle social biases, aiming to reduce the objectionable dependence between LLMs' decisions and the social information in the input. Our framework introduces a novel perspective to identify how social information can affect an LLM's decision through different causal pathways. Leveraging these causal insights, we outline principled prompting strategies that regulate these pathways through selection mechanisms. This framework not only unifies existing prompting-based debiasing techniques, but also opens up new directions for reducing bias by encouraging the model to prioritize fact-based reasoning over reliance on biased social cues. We validate our framework through extensive experiments on real-world datasets across multiple domains, demonstrating its effectiveness in debiasing LLM decisions, even with only black-box access to the model.
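The abstract describes prompting strategies that discourage the model from relying on social cues and steer it toward fact-based reasoning, even with only black-box access. As a rough illustration of what such a black-box prompting intervention might look like, here is a minimal Python sketch; the instruction wording, function name, and commented API call are assumptions for illustration, not the paper's actual prompts or method:

```python
# Hypothetical sketch of a prompting-based debiasing wrapper, in the spirit of
# the framework described above. The instruction text and helper below are
# illustrative assumptions, not the authors' exact prompts.

DEBIAS_INSTRUCTION = (
    "When making your decision, rely only on task-relevant facts. "
    "Do not let social information such as gender, race, or age "
    "influence your judgment."
)

def debiased_prompt(task_description: str, candidate_input: str) -> str:
    """Prepend a debiasing instruction that encourages fact-based reasoning
    over reliance on social cues in the input."""
    return (
        f"{DEBIAS_INSTRUCTION}\n\n"
        f"Task: {task_description}\n"
        f"Input: {candidate_input}\n"
        f"Decision:"
    )

# Example usage with a generic black-box LLM client (signature assumed):
# response = llm.generate(
#     debiased_prompt("Screen this resume for a software role.", resume_text)
# )
```

Because the intervention lives entirely in the prompt, it requires no access to model weights or logits, which is consistent with the black-box setting the abstract emphasizes.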
