Poster

Robust Function-Calling for On-Device Language Model via Function Masking

Qiqiang Lin · Muning Wen · Qiuying Peng · Guanyu Nie · Junwei Liao · Jun Wang · Xiaoyun Mo · Jiamu Zhou · Cheng Cheng · Yin Zhao · Jun Wang · Weinan Zhang

Hall 3 + Hall 2B #285
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Large language models have demonstrated impressive value as autonomous agents when equipped with external tools and API calls. Nonetheless, effectively harnessing their potential for complex tasks depends crucially on improvements in their function-calling capabilities. This paper identifies a critical gap in existing function-calling models, whose performance varies significantly across benchmarks, often because they overfit to specific naming conventions. To address this issue, we introduce Hammer, a novel family of foundation models specifically engineered for on-device function calling. Hammer employs an augmented dataset that enhances models' sensitivity to irrelevant functions and incorporates function masking techniques to minimize overfitting. Our empirical evaluations reveal that Hammer not only outperforms larger models but also generalizes robustly across diverse benchmarks, achieving state-of-the-art results. Our open-source contributions include a specialized dataset for irrelevance detection, a tuning framework for enhanced generalization, and the Hammer models themselves, establishing a new standard for function-calling performance.
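The abstract describes function masking only at a high level: replacing tool names with uninformative identifiers during tuning so the model must match tools by their descriptions rather than memorized naming conventions. The sketch below illustrates that idea; it is not the paper's implementation, and the tool-schema shape (dicts with "name", "description", "parameters" keys) is an assumption for illustration.

```python
import random
import string

def mask_function_names(tools, rng=None):
    """Illustrative sketch of function masking (not the paper's code).

    Each tool's name is replaced with a random alias while its description
    and parameters are kept, so a model tuned on the masked schema cannot
    rely on the original naming convention. Returns the masked tool list
    and an alias -> original-name map for recovering real calls."""
    rng = rng or random.Random(0)
    name_map = {}
    masked = []
    for tool in tools:
        # Random, semantics-free alias, e.g. "func_qxbkwlsn".
        alias = "func_" + "".join(rng.choices(string.ascii_lowercase, k=8))
        name_map[alias] = tool["name"]
        masked.append({**tool, "name": alias})
    return masked, name_map

if __name__ == "__main__":
    tools = [{"name": "get_weather",
              "description": "Return current weather for a city",
              "parameters": {"city": {"type": "string"}}}]
    masked, name_map = mask_function_names(tools)
    print(masked[0]["name"], "->", name_map[masked[0]["name"]])
```

At inference or evaluation time, a predicted call against the masked schema would be translated back through `name_map` before execution.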
