Poster

LoR-VP: Low-Rank Visual Prompting for Efficient Vision Model Adaptation

Can Jin · Ying Li · Mingyu Zhao · Shiyu Zhao · Zhenting Wang · Xiaoxiao He · Ligong Han · Tong Che · Dimitris Metaxas

Hall 3 + Hall 2B #68
[ Project Page ]
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract: Visual prompting has gained popularity as a method for adapting pre-trained models to specific tasks, particularly in the realm of parameter-efficient tuning. However, existing visual prompting techniques often pad the prompt parameters around the image, which limits the interaction between the visual prompts and the original image to a small set of border patches and neglects the inductive bias present in information shared across different patches. In this study, we conduct a thorough preliminary investigation to identify these limitations. We then propose a novel visual prompt design, introducing **Lo**w-**R**ank matrix multiplication for **V**isual **P**rompting (LoR-VP), which enables shared and patch-specific information across rows and columns of image pixels. Extensive experiments across seven network architectures and four datasets demonstrate significant improvements in both performance and efficiency compared to state-of-the-art visual prompting methods: up to 6× faster training, 18× fewer visual prompt parameters, and a 3.1% improvement in performance.
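The low-rank design can be pictured concretely: rather than padding learnable pixels around the image border, the prompt is parameterized as the product of two small matrices and added to the full image, so every pixel receives a prompt value while the parameter count stays low. Below is a minimal PyTorch sketch of this idea; the class name `LowRankVisualPrompt`, the per-channel factors, and the initialization scheme are illustrative assumptions, not necessarily the paper's exact parameterization.

```python
import torch
import torch.nn as nn


class LowRankVisualPrompt(nn.Module):
    """Adds a low-rank prompt B @ A to the input image (a sketch, not the
    paper's exact formulation).

    B has shape (C, H, r) and A has shape (C, r, W), so the product B @ A
    covers every pixel: pixels in the same row share B's rows and pixels in
    the same column share A's columns, giving both shared and
    position-specific information.
    """

    def __init__(self, channels=3, height=224, width=224, rank=4):
        super().__init__()
        # Assumed initialization: random B, zero A, so the prompt starts at
        # zero and the frozen model's initial predictions are unchanged.
        self.B = nn.Parameter(torch.randn(channels, height, rank) * 0.02)
        self.A = nn.Parameter(torch.zeros(channels, rank, width))

    def forward(self, x):
        # x: (N, C, H, W); the same prompt is added to every image in the batch.
        prompt = torch.bmm(self.B, self.A)  # (C, H, W)
        return x + prompt.unsqueeze(0)      # broadcast over the batch dimension


# Example usage: prompt the inputs, then feed them to a frozen backbone.
prompt_layer = LowRankVisualPrompt()
images = torch.randn(8, 3, 224, 224)
prompted = prompt_layer(images)  # same shape as the input images
```

Under these assumptions, a rank-4 prompt for a 3×224×224 input uses 3 × (224×4 + 4×224) = 5,376 trainable parameters, yet the prompt matrix B @ A still touches every pixel, which is the efficiency-versus-coverage trade-off the abstract describes.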
