


Invited Talks
Invited Talk
Jie Tang

[ Halle A 8 - 9 ]

Abstract

Large language models have substantially advanced the state of the art in various AI tasks, such as natural language understanding, text generation, image processing, and multimodal modeling. In this talk, we will first introduce the development of AI in the past decades, in particular from the angle of China. We will also discuss the opportunities, challenges, and risks of AGI in the future. In the second part of the talk, we will use ChatGLM, an open-sourced alternative to ChatGPT, as an example to explain the understanding and insights we derived during the implementation of the model.

Invited Talk
Kate Downing

[ Halle A 8 - 9 ]

Abstract

This talk will cover fundamental legal principles all AI researchers should understand. It will explore why legislators are considering new laws specifically for AI and the goals they aim to accomplish with those laws. It will also cover legal risks related to using training datasets, understanding dataset licenses, and options for licensing models in an open fashion.

Invited Talk
Raia Hadsell

[ Halle A 8 - 9 ]

Abstract

After decades of steady progress and occasional setbacks, the field of AI now finds itself at an inflection point. AI products have exploded into the mainstream, we've yet to hit the ceiling of scaling dividends, and the community is asking itself what comes next. In this talk, Raia will draw on her 20 years of experience as an AI researcher and AI leader to examine how our assumptions about the path to Artificial General Intelligence (AGI) have evolved over time, and to explore the unexpected truths that have emerged along the way. From reinforcement learning to distributed architectures and the potential of neural networks to revolutionize scientific domains, Raia argues that embracing lessons from the past offers valuable insights for AI's future research roadmap.

Invited Talk
Devi Parikh

[ Halle A 8 - 9 ]

Abstract

This is going to be an unusual AI conference keynote talk. When we talk about why the technological landscape is the way it is, we talk a lot about the macro shifts – the internet, the data, the compute. We don't talk as much about the micro threads, the individual human stories, even though it is these individual human threads that cumulatively lead to the macro phenomena. We should talk about these stories more! So that we can learn from each other, inspire each other. So we can be more robust; more effective in our endeavors. By strengthening our individual threads and our connections, we can weave a stronger fabric together. This talk is about some of my stories from my 20-year journey so far – about following up on all threads, about learnt reward functions, about fleeting opportunities, about multidimensional impact landscapes, and about curiosity for new experiences. It might seem narcissistic, but hopefully it will also feel authentic and vulnerable. And hopefully you will get something out of it.

Invited Talk
Kyunghyun Cho

[ Halle A 8 - 9 ]

Abstract
Invited Talk
Moritz Hardt

[ Halle A 8 - 9 ]

Abstract

Benchmarks are the keystone that holds the machine learning community together. Growing as a research paradigm since the 1980s, there is much we have done with them, but little we know about them. In this talk, I will trace the rudiments of an emerging science of benchmarks through selected empirical and theoretical observations. Specifically, we'll discuss the role of annotator errors, the external validity of model rankings, and the promise of multi-task benchmarks. The results in each case challenge conventional wisdom and underscore the benefits of developing a science of benchmarks.