

Social

Better Developing Pretraining-based Models and Beyond

Yiyuan Li · Chenghao Yang


Abstract:

Pretraining techniques and models have substantially advanced research in language and vision. Several techniques have been designed to make model pretraining and fine-tuning more intelligent, robust, and efficient. Meanwhile, with the greater power of these models come greater concerns about their social impact. Do larger models consistently become more powerful? Can larger models work reliably in larger-scale or even real-world applications? Developing models with fairness and reliability in mind therefore becomes increasingly important.

This social event aims to gather ideas and open discussions, ranging from research findings to practical tips, on better developing and fine-tuning large pretrained models, as well as on the prospects of these models with respect to their social impact and scaling.
