Type | Time | Title | Authors
Workshop | Fri 7:48 | Emergent Communication Fine-tuning (EC-FT) for Pretrained Language Models | Shane Steinert-Threlkeld · Xuhui Zhou · Zeyu Liu · C. Downey
Poster | Tue 18:30 | SimVLM: Simple Visual Language Model Pretraining with Weak Supervision | Zirui Wang · Jiahui Yu · Wei Yu · Zihang Dai · Yulia Tsvetkov · Yuan Cao
Workshop | Fri 11:00 | CodeBPE: Investigating Subtokenization Options for Large Language Model Pretraining on Source Code | Nadezhda Chirkova · Sergei Troshin
Workshop | Fri 7:10 | CodeBPE: Investigating Subtokenization Options for Large Language Model Pretraining on Source Code | Nadezhda Chirkova · Sergei Troshin
Spotlight | Mon 10:30 | The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design | Yoav Levine · Noam Wies · Daniel Jannai · Dan Navon · Yedid Hoshen · Amnon Shashua
Social | Thu 12:00 | Better Developing Pretraining-based Models and Beyond | Yiyuan Li · Chenghao Yang
Poster | Mon 10:30 | The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design | Yoav Levine · Noam Wies · Daniel Jannai · Dan Navon · Yedid Hoshen · Amnon Shashua
Poster | Mon 18:30 | On Robust Prefix-Tuning for Text Classification | Zonghan Yang · Yang Liu
Poster | Thu 10:30 | Fairness in Representation for Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling | Ada Wan
Spotlight | Thu 10:30 | Fairness in Representation for Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling | Ada Wan
Poster | Tue 10:30 | P-Adapters: Robustly Extracting Factual Information from Language Models with Diverse Prompts | Benjamin Newman · Prafulla Kumar Choubey · Nazneen Rajani
Poster | Thu 2:30 | On the Pitfalls of Analyzing Individual Neurons in Language Models | Omer Antverg · Yonatan Belinkov