A Study on PAVE Specification for Learnware
Abstract
The learnware paradigm aims to help users solve machine learning tasks by leveraging existing well-trained models rather than starting from scratch. A learnware comprises a submitted model paired with a specification that sketches the model's capabilities. For an open platform with continuously uploaded models, these specifications are essential for enabling users to identify helpful models, eliminating the need for prohibitively costly per-model evaluations. In previous research, specifications based on privacy-preserving reduced sets succeed in enabling learnware identification through distribution matching, but they suffer from high sample complexity for learnwares built on high-dimensional, unstructured data such as images and text. In this paper, we formalize the Parameter Vector (PAVE) specification for learnware identification, which exploits the changes in pre-trained model parameters to inherently encode model capabilities and task requirements, offering an effective solution for such learnwares. From the neural tangent kernel perspective, we establish a tight connection between PAVE and prior specifications, providing a theoretical explanation for their shared underlying principles. We further approximate the parameter vector in a low-rank space and analyze the resulting approximation error bound, greatly reducing the computational and storage overhead. Extensive empirical studies demonstrate that the PAVE specification excels at identifying CV and NLP learnwares for reuse on given user tasks, and, for the first time, succeeds in identifying helpful learnwares from an open learnware repository containing models of corrupted quality. Reusing the identified learnwares to solve user tasks can even outperform pre-trained models fine-tuned by the user in data-limited scenarios.
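As a minimal sketch of the construction summarized above (the symbols $\theta_0$, $\theta_m$, and the rank $r$ are illustrative assumptions, not notation taken from this abstract): the PAVE specification of a learnware $m$ can be read as the parameter change induced by adapting a shared pre-trained model with parameters $\theta_0$ to the learnware's task, stored in a compressed low-rank form,
\[
\Delta\theta_m \;=\; \theta_m - \theta_0, \qquad \Delta\theta_m \;\approx\; U_r \Sigma_r V_r^{\top},
\]
where $\theta_m$ denotes the fine-tuned parameters of model $m$ and $U_r \Sigma_r V_r^{\top}$ is a rank-$r$ truncated SVD of the (suitably reshaped) parameter change. Under this reading, identification would amount to comparing a user task's parameter change against the stored specifications, e.g., via a similarity measure in the low-rank space; the precise construction and error bound are developed in the body of the paper.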