Poster in Workshop: Navigating and Addressing Data Problems for Foundation Models (DPFM)
Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis
Lukas Struppek · Dominik Hintersdorf · Felix Friedrich · Manuel Brack · Patrick Schramowski · Kristian Kersting
Keywords: [ Diffusion Models ] [ fairness ] [ bias ] [ text-to-image synthesis ]
Models for text-to-image synthesis have recently attracted considerable interest. When conditioned on textual descriptions, they can produce high-quality images depicting a wide variety of concepts and styles. However, these models adopt cultural characteristics associated with specific Unicode scripts from their vast training data, which may not be immediately apparent. We show that simply inserting a single non-Latin character into the textual description causes common models to reflect cultural biases in their generated images. We analyze this behavior both qualitatively and quantitatively and identify a model's text encoder as the root cause of the phenomenon. Such behavior can be interpreted as a model feature, offering users a simple way to customize image generation and reflect their own cultural background. Yet malicious users or service providers may also try to bias the image generation intentionally, for example, to create racist stereotypes by replacing Latin characters with similar-looking characters from non-Latin scripts, so-called homoglyphs.
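To illustrate the kind of prompt manipulation described above, the following is a minimal sketch of a homoglyph substitution applied to a text-to-image prompt. It assumes the Hugging Face diffusers library and a publicly available Stable Diffusion checkpoint; the model name, prompts, and the choice of the Cyrillic homoglyph are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: replace a Latin character in a prompt with a
# visually similar non-Latin homoglyph and compare the generated images.
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; any text-to-image diffusion model with a
# subword text encoder could be substituted here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt_latin = "A photo of an actor"
# Swap the Latin "A" for the Cyrillic homoglyph "А" (U+0410). The two
# prompts look identical on screen but are tokenized differently by the
# model's text encoder, which can shift the cultural depiction.
prompt_homoglyph = prompt_latin.replace("A", "\u0410", 1)

# Use the same seed for both generations so that differences in the
# output stem from the prompt change rather than sampling noise.
generator = torch.Generator("cuda").manual_seed(0)
image_latin = pipe(prompt_latin, generator=generator).images[0]

generator = torch.Generator("cuda").manual_seed(0)
image_homoglyph = pipe(prompt_homoglyph, generator=generator).images[0]

image_latin.save("latin.png")
image_homoglyph.save("homoglyph.png")
```

Comparing the two saved images side by side gives a quick qualitative check of whether the single-character substitution shifts the generated content toward a different cultural depiction.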