Invited Talks
Invited Talk: The emerging science of benchmarks
Benchmarks are the keystone that holds the machine learning community together. Although they have grown as a research paradigm since the 1980s, we have done much with them but know little about them. In this talk, I will trace the rudiments of an emerging science of benchmarks through selected empirical and theoretical observations. Specifically, we'll discuss the role of annotator errors, the external validity of model rankings, and the promise of multi-task benchmarks. In each case, the results challenge conventional wisdom and underscore the benefits of developing a science of benchmarks.
Moritz Hardt
Hardt is a director at the Max Planck Institute for Intelligent Systems, Tübingen. Previously, he was Associate Professor for Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research contributes to the scientific foundations of machine learning and algorithmic decision making with a focus on social questions. He co-authored Fairness and Machine Learning: Limitations and Opportunities (MIT Press) and Patterns, Predictions, and Actions: Foundations of Machine Learning (Princeton University Press).
Together with two co-founders, Rich Bonneau and Vlad Gligorijevic, I founded Prescient Design in January 2021 to build a lab-in-the-loop protein design platform based on our earlier research. Prescient Design was fully acquired by Genentech (Roche) in August 2021 and began to focus more specifically on antibody design. It has now been more than three years since the founding and more than 2.5 years since the acquisition. In this talk, I will share Prescient Design's lab-in-the-loop antibody design, both the platform and its outcomes, as well as what went into building this platform from a machine learning perspective.
Kyunghyun Cho
Kyunghyun Cho is a professor of computer science and data science at New York University and a senior director of frontier research at the Prescient Design team within Genentech Research & Early Development (gRED). He is also a CIFAR Fellow of Learning in Machines & Brains and an Associate Member of the National Academy of Engineering of Korea. He served as a (co-)Program Chair of ICLR 2020, NeurIPS 2022 and ICML 2022. He is also a founding co-Editor-in-Chief of the Transactions on Machine Learning Research (TMLR). He was a research scientist at Facebook AI Research from June 2017 to May 2020 and a postdoctoral fellow at the University of Montreal until Summer 2015 under the supervision of Prof. Yoshua Bengio, after receiving MSc and PhD degrees from Aalto University in April 2011 and April 2014, respectively, under the supervision of Prof. Juha Karhunen, Dr. Tapani Raiko and Dr. Alexander Ilin. He received the Samsung Ho-Am Prize in Engineering in 2021. He tries his best to find a balance among machine learning, natural language processing, and life, but almost always fails to do so.
Invited Talk: Stories from my life
This is going to be an unusual AI conference keynote talk. When we talk about why the technological landscape is the way it is, we talk a lot about the macro shifts – the internet, the data, the compute. We don’t talk as much about the micro threads, the individual human stories, even though it is these individual human threads that cumulatively lead to the macro phenomena. We should talk about these stories more! So that we can learn from each other, inspire each other. So we can be more robust and more effective in our endeavors. By strengthening our individual threads and our connections, we can weave a stronger fabric together. This talk is about some of my stories from my 20-year journey so far – about following up on all threads, about learnt reward functions, about fleeting opportunities, about multidimensional impact landscapes, and about curiosity for new experiences. It might seem narcissistic, but hopefully it will also feel authentic and vulnerable. And hopefully you will get something out of it.
Invited Talk: Why your work matters for climate in more ways than you think
Climate change is one of the most pressing issues of our time, requiring rapid transformation across virtually every sector of society. In this talk, I describe what this means for research and practice in AI. AI has a multi-faceted relationship with climate change, through a combination of its direct environmental footprint, the impacts of its applications (both good and bad), and the broader systemic shifts it induces. Ultimately, most work in AI has significant implications for climate action, whether or not it is viewed as traditionally “climate-relevant.” Given this, I discuss how the AI community can better align its work with climate action: through the kinds of methods we develop, the kinds of applications we work on, the choices we make while working on these applications, and the ways we communicate with the public about our work.
Priya Donti
Priya Donti is an Assistant Professor and the Silverman (1968) Family Career Development Professor at MIT EECS and LIDS. Her research focuses on machine learning for forecasting, optimization, and control in high-renewables power grids. Methodologically, this entails exploring ways to incorporate relevant physics, hard constraints, and decision-making procedures into deep learning workflows. Priya is also the co-founder and Chair of Climate Change AI, a global nonprofit initiative to catalyze impactful work at the intersection of climate change and machine learning. Priya received her Ph.D. in Computer Science and Public Policy from Carnegie Mellon University, and is a recipient of the MIT Technology Review’s 2021 “35 Innovators Under 35” award, the ACM SIGEnergy Doctoral Dissertation Award, the Siebel Scholarship, the U.S. Department of Energy Computational Science Graduate Fellowship, and best paper awards at ICML (honorable mention), ACM e-Energy (runner-up), PECI, the Duke Energy Data Analytics Symposium, and the NeurIPS workshop on AI for Social Good.
Invited Talk: The ChatGLM's Road to AGI
Large language models have substantially advanced the state of the art in various AI tasks, such as natural language understanding, text generation, image processing, and multimodal modeling. In this talk, we will first introduce the development of AI over the past decades, particularly from China's perspective. We will also discuss the opportunities, challenges, and risks of AGI in the future. In the second part of the talk, we will use ChatGLM, an open-sourced alternative to ChatGPT, as an example to explain our understanding and the insights we derived while implementing the model.
Jie Tang
Invited Talk: Learning through AI’s winters and springs: unexpected truths on the road to AGI
After decades of steady progress and occasional setbacks, the field of AI now finds itself at an inflection point. AI products have exploded into the mainstream, we've yet to hit the ceiling of scaling dividends, and the community is asking itself what comes next. In this talk, Raia will draw on her 20 years of experience as an AI researcher and AI leader to examine how our assumptions about the path to Artificial General Intelligence (AGI) have evolved over time, and to explore the unexpected truths that have emerged along the way. From reinforcement learning to distributed architectures and the potential of neural networks to revolutionize scientific domains, Raia argues that embracing lessons from the past offers valuable insights for AI's future research roadmap.
Raia Hadsell
I am VP of Research at DeepMind. I joined DeepMind in 2014 to pursue new solutions for artificial general intelligence. Currently, I oversee the strategy for DeepMind’s exploratory research efforts, leading teams exploring new innovations in AI that might address the open questions that today's techniques cannot answer.
Before joining DeepMind in early 2014, I had found my way into AI research obliquely. After an undergraduate degree in religion and philosophy from Reed College, I veered off-course (on-course?) and became a computer scientist. My PhD with Yann LeCun, at NYU, focused on machine learning using Siamese neural nets (often called a 'triplet loss' today), face recognition algorithms, and on deep learning for mobile robots in the wild. My thesis, 'Learning Long-range vision for offroad robots', was awarded the Outstanding Dissertation award in 2009. I spent a post-doc at CMU Robotics Institute, working with Drew Bagnell and Martial Hebert, and then became a research scientist at SRI International, at the Vision and Robotics group in Princeton, NJ.
After joining DeepMind, then a small 50-person startup that had just been acquired by Google, my research focused on a number of fundamental challenges in AGI, including continual and transfer learning, deep reinforcement learning for robotics and control problems, and neural models of navigation (see publications). I have proposed neural approaches such as policy distillation, progressive nets, and elastic weight consolidation to solve the problem of catastrophic forgetting.
In the broader AI community, I am founder and Editor-in-Chief of a new open journal, TMLR. I sit on the executive board of CoRL, am a fellow of the European Lab on Learning Systems (ELLIS), and am a founding organizer of NAISys (Neuroscience for AI Systems). I also serve as a CIFAR advisor and have previously sat on the Executive Board for WiML (Women in Machine Learning).
Invited Talk: Copyright Fundamentals for AI Researchers
This talk will cover fundamental legal principles that all AI researchers should understand about copyright law. It will explore the current state of copyright law with respect to AI in the U.S., potential claims and defenses, and practical tips for minimizing legal risk.