Invited Talk at the ICLR 2025 Workshop on GenAI Watermarking (WMARK)
John Collomosse - Building Safe and Fair Generative AI with Content Provenance
Provenance facts, such as who made an image and how, can provide valuable context for users to make trust decisions about visual content. Emerging standards and provenance-enhancing tools such as watermarking promise to play an important role in fighting fake news and the spread of misinformation. In this talk we contrast metadata, fingerprinting, and watermarking, and discuss how we can build upon the complementary strengths of these three technology pillars to provide robust trust signals supporting the stories told by both real and generated images. Beyond authenticity, we describe how provenance can also underpin new models for value creation in the age of generative AI. In doing so, we address other risks arising with generative AI, such as ensuring training consent and the proper attribution of credit to creatives who contribute their work to train generative models. We show that provenance may be combined with distributed ledger technology (DLT) to develop novel solutions for recognizing and rewarding creative endeavour in the age of generative AI.
Bio:
Prof. John Collomosse is a Senior Principal Scientist at Adobe Research, where he leads research for the Content Authenticity Initiative (CAI) and two cross-industry task forces within the C2PA open standards body for media authenticity. He is a professor at the University of Surrey, where he is the founder and director of DECaDE, the UKRI Research Centre for the Decentralized Creative Economy. His research focuses on media provenance to fight misinformation and online harms, and on improving data integrity and attribution for responsible AI.