

Oral in Workshop: Secure and Trustworthy Large Language Models

WinoViz: Probing Visual Properties of Objects Under Different States

Woojeong Jin · Tejas Srinivasan · Jesse Thomason · Xiang Ren


Abstract:

Humans interpret the visual properties of objects based on context. For example, a banana appears brown when rotten and green when unripe. Previous studies have focused on language models' grasp of typical object properties. We introduce WinoViz, a text-only dataset of 1,380 examples that probes language models' reasoning about diverse visual properties of objects under different contexts. Our task demands both pragmatic reasoning and visual knowledge reasoning. We also present multi-hop data, a more challenging version requiring multi-step reasoning chains. Our experimental findings are: (a) GPT-4 excels overall but struggles with the multi-hop data; (b) large models perform well on pragmatic reasoning but struggle with visual knowledge reasoning; and (c) vision-language models outperform language-only models.
