ICLR 2022: Search All Events
Results
Page 1 of 5
Type | Time | Title | Authors
Workshop | Fri 7:48 | Emergent Communication Fine-tuning (EC-FT) for Pretrained Language Models | Shane Steinert-Threlkeld · Xuhui Zhou · Zeyu Liu · C. Downey
Poster | Tue 18:30 | SimVLM: Simple Visual Language Model Pretraining with Weak Supervision | Zirui Wang · Jiahui Yu · Wei Yu · Zihang Dai · Yulia Tsvetkov · Yuan Cao
Workshop | Fri 7:10 | CodeBPE: Investigating Subtokenization Options for Large Language Model Pretraining on Source Code | Nadezhda Chirkova · Sergei Troshin
Workshop | Fri 11:00 | CodeBPE: Investigating Subtokenization Options for Large Language Model Pretraining on Source Code | Nadezhda Chirkova · Sergei Troshin
Spotlight | Mon 10:30 | The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design | Yoav Levine · Noam Wies · Daniel Jannai · Dan Navon · Yedid Hoshen · Amnon Shashua
Social | Thu 12:00 | Better Developing Pretraining-based Models and Beyond | Yiyuan Li · Chenghao Yang
Poster | Mon 10:30 | The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design | Yoav Levine · Noam Wies · Daniel Jannai · Dan Navon · Yedid Hoshen · Amnon Shashua
Poster | Mon 18:30 | On Robust Prefix-Tuning for Text Classification | Zonghan Yang · Yang Liu
Spotlight | Thu 10:30 | Fairness in Representation for Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling | Ada Wan
Poster | Tue 10:30 | P-Adapters: Robustly Extracting Factual Information from Language Models with Diverse Prompts | Benjamin Newman · Prafulla Kumar Choubey · Nazneen Rajani
Poster | Thu 2:30 | On the Pitfalls of Analyzing Individual Neurons in Language Models | Omer Antverg · Yonatan Belinkov
Poster | Thu 10:30 | Fairness in Representation for Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling | Ada Wan