

Search All 2024 Events

107 Results (Page 3 of 9)
Workshop
TOFU: A Task of Fictitious Unlearning for LLMs
Pratyush Maini · Zhili Feng · Avi Schwarzschild · Zachary Lipton · J Kolter
Workshop
tinyBenchmarks: evaluating LLMs with fewer examples
Felipe Polo · Lucas Weber · Leshem Choshen · Yuekai Sun · Gongjun Xu · Mikhail Yurochkin
Workshop
Self-evaluation and self-prompting to improve the reliability of LLMs
Alexandre Piche · Aristides Milios · Dzmitry Bahdanau · Christopher Pal
Workshop
Quantitative Certification of Knowledge Comprehension in LLMs
Isha Chaudhary · Vedaant Jain · Gagandeep Singh
Workshop
Toward Robust Unlearning for LLMs
Rishub Tamirisa · Bhrugu Bharathi · Andy Zhou · Bo Li · Mantas Mazeika
Workshop
Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?
Egor Zverev · Sahar Abdelnabi · Mario Fritz · Christoph Lampert
Workshop
CatCode: A Comprehensive Evaluation Framework for LLMs On the Mixture of Code and Text
Zhenru Lin · Yiqun Yao · Yang Yuan
Workshop
Probing the Multi-turn Planning Capabilities of LLMs via 20 Question Games
Yizhe Zhang · Jiarui Lu · Navdeep Jaitly