Poster

Erasing Concept Combination from Text-to-Image Diffusion Model

hongyi nie · Quanming Yao · Yang Liu · Zhen Wang · Yatao Bian

Hall 3 + Hall 2B #509
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Advancements in text-to-image diffusion models have raised security concerns due to their potential to generate images with inappropriate themes, such as societal biases and copyright infringements. Current studies have made great progress in preventing the model from generating images that contain specific high-risk visual concepts. However, these methods overlook the fact that inappropriate themes may also arise from combinations of benign visual concepts. Because the same image theme can be expressed through multiple different visual concept combinations, and the model's generation performance on the corresponding individual visual concepts is easily distorted while the combination is being processed, effectively erasing such visual concept combinations from the diffusion model remains a formidable challenge. To this end, we formulate this challenge as the Concept Combination Erasing (CCE) problem and propose a Concept Graph-based high-level Feature Decoupling framework (CoGFD) to address it. CoGFD identifies and decomposes visual concept combinations with a consistent image theme from an LLM-induced concept logic graph, and erases these combinations by decoupling co-occurrent high-level features. These techniques enable CoGFD to erase visual concept combinations from generated image content while incurring a much smaller negative effect on the generative fidelity of the related individual concepts than SOTA baselines. Extensive experiments on diverse visual concept combination scenarios verify the effectiveness of CoGFD.
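
The abstract describes CoGFD only at a high level, so the sketch below is an illustrative reading of the two stages it names (an LLM-induced concept logic graph and decoupling of co-occurrent features), not the authors' implementation. The toy text encoder, the hard-coded concept graph, the loss form, and all names are assumptions for illustration.

```python
"""Minimal sketch of the CoGFD idea as described in the abstract.
All components here (ToyTextEncoder, the hard-coded concept graph,
the decoupling loss) are illustrative assumptions, not the paper's method."""

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)


class ToyTextEncoder(nn.Module):
    """Stand-in for the diffusion model's text-conditioning branch (assumption)."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.emb = nn.EmbeddingBag(256, dim)  # byte-level bag of embeddings

    def forward(self, text: str) -> torch.Tensor:
        ids = torch.tensor([list(text.encode("utf-8"))])
        return self.emb(ids)  # shape (1, dim)


def concept_logic_graph(theme: str) -> list[tuple[str, str]]:
    """Hypothetical stand-in for the LLM-induced concept logic graph:
    enumerate concept combinations that realize the same image theme.
    In the paper this set is induced by querying an LLM; hard-coded here."""
    combos = {"violent scene": [("knife", "blood"), ("gun", "crowd")]}
    return combos.get(theme, [])


def decoupling_loss(model, frozen, combo):
    """Suppress the co-occurrent (combined) representation while anchoring
    each individual concept to the frozen model, so that single-concept
    generation fidelity is preserved (an assumed loss form)."""
    a, b = combo
    joint = model(f"{a} and {b}")
    with torch.no_grad():
        joint_ref = frozen(f"{a} and {b}")
        refs = {c: frozen(c) for c in (a, b)}
    erase = F.cosine_similarity(joint, joint_ref, dim=-1).mean()   # drive combined feature away
    keep = sum(F.mse_loss(model(c), refs[c]) for c in (a, b))      # keep individual concepts intact
    return erase + keep


if __name__ == "__main__":
    model, frozen = ToyTextEncoder(), ToyTextEncoder()
    frozen.load_state_dict(model.state_dict())  # frozen copy of the original model
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):
        for combo in concept_logic_graph("violent scene"):
            loss = decoupling_loss(model, frozen, combo)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

The split into an "erase" term on the combined prompt and a "keep" term on the individual concepts mirrors the abstract's claim that the combination is removed while the fidelity of the related individual concepts is largely preserved; the actual objective and model components in the paper may differ.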
