

Poster

A-Bench: Are LMMs Masters at Evaluating AI-generated Images?

Zicheng Zhang · Haoning Wu · Chunyi Li · Yingjie Zhou · Wei Sun · Xiongkuo Min · Zijian Chen · Xiaohong Liu · Weisi Lin · Guangtao Zhai

Hall 3 + Hall 2B #366
Wed 23 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

How to accurately and efficiently assess AI-generated images (AIGIs) remains a critical challenge for generative models. Given the high cost and time required for user studies, many researchers have turned to large multi-modal models (LMMs) as AIGI evaluators, yet the precision and validity of these evaluators remain questionable. Furthermore, traditional benchmarks mostly use naturally captured content rather than AIGIs to test the abilities of LMMs, leaving a noticeable evaluation gap for AIGIs. Therefore, we introduce A-Bench, a benchmark designed to diagnose whether LMMs are masters at evaluating AIGIs. Specifically, A-Bench is organized under two key principles: 1) emphasizing both high-level semantic understanding and low-level visual quality perception to address the intricate demands of AIGIs, and 2) employing various generative models for AIGI creation and various LMMs for evaluation, which ensures a comprehensive validation scope. Ultimately, 2,864 AIGIs from 16 text-to-image models are sampled, each paired with question-answer sets annotated by human experts. We hope that A-Bench will significantly enhance the evaluation process and improve the generation quality of AIGIs.
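The evaluation protocol implied by the abstract, in which an LMM answers expert-annotated questions about each AIGI and is scored against the ground-truth answer, can be sketched as follows. This is a minimal illustration only: the sample fields (`image_path`, `question`, `options`, `answer`) and the `ask_lmm` callable are assumptions for exposition, not A-Bench's actual data schema or code.

```python
import json


def evaluate_lmm(samples_path: str, ask_lmm) -> float:
    """Return multiple-choice accuracy of an LMM over benchmark-style samples.

    `ask_lmm(image_path, question, options)` is assumed to wrap a multimodal
    model and return one of the option strings as its answer.
    """
    with open(samples_path) as f:
        samples = json.load(f)  # assumed format: list of dicts, one per AIGI question

    correct = 0
    for s in samples:
        # Ask the LMM the human-annotated question about the AI-generated image.
        prediction = ask_lmm(s["image_path"], s["question"], s["options"])
        # Score by exact match against the expert-provided ground-truth option.
        if prediction == s["answer"]:
            correct += 1
    return correct / len(samples)
```

Under this kind of protocol, accuracy can be reported separately for semantic-understanding and quality-perception questions to expose where a given LMM falls short.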
