

Workshop

Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)

Ananya Kumar · Tengyu Ma · Tiffany Vlaar · Aditi Raghunathan · Hanie Sedghi · Yamini Bansal · Sang Michael Xie · Percy Liang · Mathilde Caron

AD10

Thu 4 May, 12:15 a.m. PDT

Foundation models (FMs) are models trained on a large and diverse pool of data that can be adapted to a wide range of tasks. Recent examples of FMs include large language models (GPT-3, BERT, PaLM), image representation encoders (SimCLR), and image-text models (CLIP, DALL-E), all of which have revolutionized the way models are built in their domains. Yet foundation models remain poorly understood: their core driving principle is transfer learning, but scale and modern self-supervision techniques have led to emergent capabilities that were not anticipated. The goal of this workshop is to highlight research that aims to improve our understanding of FMs. We interpret "understanding" liberally, ranging from purely empirical papers that highlight interesting phenomena to those that attempt to explain or provide theoretical foundations for such phenomena, potentially in simplified settings.


Schedule