

Poster in Workshop: Will Synthetic Data Finally Solve the Data Access Problem?

ProVision: Programmatically Scaling Vision-centric Instruction Data for Multimodal Language Models

Jieyu Zhang · Le Xue · Linxin Song · Jun Wang · Weikai Huang · Manli Shu · An Yan · Zixian Ma · Juan Carlos Niebles · Silvio Savarese · Caiming Xiong · Zeyuan Chen · Ranjay Krishna · Ran Xu


Abstract:

With the rise of multimodal applications, instruction data has become critical for training multimodal language models capable of understanding complex image-based queries. Existing practices rely on powerful but costly large language models (LLMs) or multimodal language models (MLMs) to produce instruction data. These models are prone to hallucinations and licensing issues, and the generation process is often hard to scale and interpret. In this work, we present a programmatic approach that employs scene graphs as symbolic representations of images and human-written programs to systematically synthesize vision-centric instruction data. Our approach ensures the interpretability and controllability of the data generation process and scales efficiently while maintaining factual accuracy. By implementing a suite of 24 single-image and 14 multi-image instruction generators together with a scene graph generation pipeline, we build a scalable, cost-effective system, ProVision, which produces diverse question-answer pairs concerning objects, attributes, relations, depth, etc., for any given image. Applied to the Visual Genome and DataComp datasets, we generate over 10 million instruction data points, forming ProVision-10M, and leverage them in both the pretraining and instruction tuning stages of MLMs.
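To make the programmatic approach concrete, below is a minimal sketch of scene-graph-driven instruction generation. The scene-graph schema and the generator names (attribute_qa, relation_qa) are illustrative assumptions for this sketch, not ProVision's actual generator suite.

```python
# Minimal sketch: generating QA pairs from a symbolic scene graph.
# The schema and generator names here are illustrative assumptions,
# not ProVision's actual implementation.
import random

# Toy scene graph: objects with attributes, plus (subject, relation, object) triples.
scene_graph = {
    "objects": {
        "obj1": {"name": "dog", "attributes": ["brown", "furry"]},
        "obj2": {"name": "ball", "attributes": ["red"]},
    },
    "relations": [("obj1", "chasing", "obj2")],
}

def attribute_qa(sg):
    """Ask about an attribute of a randomly chosen object."""
    obj = random.choice(list(sg["objects"].values()))
    answer = random.choice(obj["attributes"])
    return f"What is one attribute of the {obj['name']}?", answer

def relation_qa(sg):
    """Ask about the relation between two objects."""
    subj_id, relation, obj_id = random.choice(sg["relations"])
    subj = sg["objects"][subj_id]["name"]
    obj = sg["objects"][obj_id]["name"]
    return f"What is the {subj} doing to the {obj}?", relation

# Each generator is a deterministic program over the symbolic scene graph,
# so every answer is grounded in the graph rather than sampled from an LLM.
for generator in [attribute_qa, relation_qa]:
    question, answer = generator(scene_graph)
    print(f"Q: {question}\nA: {answer}")
```

Because each generator is an ordinary program over symbolic annotations, the output is interpretable, controllable, and scales with the number of annotated images rather than with LLM inference cost.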
