All 2024 Events: 107 results (page 8 of 9)
Affinity Workshop
Can LLMs Learn a New Language on the Fly? A Case Study on Zhuang
Chen Zhang · Mingxu Tao · Quzhe Huang · Zhibin Chen · Yansong Feng
Affinity Workshop
Aligners: Decoupling LLMs and Alignment
Lilian Ngweta · Mayank Agarwal · Subha Maity · Alex Gittens · Yuekai Sun · Mikhail Yurochkin
Poster
Fri 1:45 Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models
Shangbin Feng · Weijia Shi · Yuyang Bai · Vidhisha Balachandran · Tianxing He · Yulia Tsvetkov
Poster
Tue 7:30 Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Shahriar Golchin · Mihai Surdeanu
Poster
Wed 1:45 Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks
Vaidehi Ramesh Patil · Peter Hase · Mohit Bansal
Poster
Wed 1:45 DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models
Yongchan Kwon · Eric Wu · Kevin Wu · James Y Zou
Poster
Tue 1:45 PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training
Dawei Zhu · Nan Yang · Liang Wang · Yifan Song · Wenhao Wu · Furu Wei · Sujian Li
Poster
Wed 7:30 Towards Codable Watermarking for Injecting Multi-Bits Information to LLMs
Lean Wang · Wenkai Yang · Deli Chen · Hao Zhou · Yankai Lin · Fandong Meng · Jie Zhou · Xu Sun
Poster
Fri 7:30 Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation
Yangsibo Huang · Samyak Gupta · Mengzhou Xia · Kai Li · Danqi Chen
Poster
Tue 7:30 SmartPlay: A Benchmark for LLMs as Intelligent Agents
Yue Wu · Xuan Tang · Tom Mitchell · Yuanzhi Li
Oral
Tue 1:30 Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs
Suyu Ge · Yunan Zhang · Liyuan Liu · Minjia Zhang · Jiawei Han · Jianfeng Gao
Oral
Tue 7:00 Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions
Satwik Bhattamishra · Arkil Patel · Phil Blunsom · Varun Kanade