Registration and Check-in are located in the lobby of the convention center near the Radisson entrance.
Entanglements: Exploring Artificial Biodiversity
Sofia Crespo discusses her artistic practice and her journey using generative systems, especially neural networks, to explore speculative lifeforms, and how technology can bring us closer to the natural world.
Gerhard Neumann
I have been a full professor at KIT and head of the chair "Autonomous Learning Robots" since Jan. 2020. Before that, I was a group leader at the Bosch Center for AI and an Industry on Campus professor at the University of Tübingen (March to Dec. 2019), and a full professor at the University of Lincoln in the UK (2016-2019). I completed my PhD in 2012 at TU Graz and was subsequently a postdoc and assistant professor at TU Darmstadt.
My research focuses on the intersection of machine learning, robotics, and human-robot interaction. My goal is to create data-efficient machine learning algorithms that are suitable for complex robot domains. A strong focus of my research is developing new methods that allow a non-expert human to intuitively teach a robot complex skills, and that allow a robot to learn how to assist and collaborate with humans in an intelligent way. In my research, I always aim for a strong theoretical basis, deriving my algorithms from first principles. Yet I also believe that an exhaustive assessment of an algorithm's quality in a practical application is of equal importance.
Ari Morcos
Ari Morcos is a research scientist at Meta AI Research (FAIR team) in Menlo Park, working on understanding the mechanisms underlying neural network computation and function, and on using these insights to build machine learning systems more intelligently. Most recently, his work has focused on understanding properties of data and how these properties lead to desirable and useful representations. He has worked on a variety of topics, including self-supervised learning, the lottery ticket hypothesis, the mechanisms underlying common regularizers, and the properties predictive of generalization, as well as methods to compare representations across networks, the role of single units in computation, and strategies to induce and measure abstraction in neural network representations.
Natalie Schluter
Natalie Schluter is a Machine Learning Researcher with MLR at Apple. Before coming to Apple, she was a Senior Research Scientist at Google Brain and an Associate Professor in NLP and Data Science at the IT University of Copenhagen (ITU), Denmark. At ITU she co-developed and led Denmark's first Data Science programme, a BSc.
Natalie's primary research interests are in algorithms and experimental methodology for developing statistical and combinatorial models of natural language understanding and generation, especially in computationally "hard" and language-inclusive settings.
Natalie holds a PhD in NLP from Dublin City University's School of Computing. She holds a further four degrees: an MSc in Mathematics from Trinity College, Dublin, a BSc in Mathematics and MA in Linguistics from the University of Montreal, and a BA in French and Spanish.
Understanding Systematic Deviations in Data for Trustworthy AI
With the growing trend of employing machine learning (ML) models to assist decision making, it is vital to inspect both the models and their corresponding data for potential systematic deviations in order to achieve trustworthy ML applications. Such inspected data may be used in training or testing, or be generated by the models themselves. Understanding systematic deviations is particularly crucial in resource-limited and/or error-sensitive domains, such as healthcare. In this talk, I reflect on our recent work, which has utilized automated identification and characterization of systematic deviations for various tasks in healthcare, including data quality understanding, temporal drift, heterogeneous intervention effects analysis, and new class detection. Moreover, AI-driven scientific discovery is increasingly being facilitated by generative models, and I will share how our data-centric and multi-level evaluation framework helps quantify the capabilities of generative models in both domain-agnostic and interpretable ways, using materials science as a use case. Beyond the analysis of curated datasets that are often used to train ML models, similar data-centric analysis should also be applied to traditional data sources, such as textbooks. To this end, I will conclude by presenting recent collaborative work on automated representation analysis in dermatology academic materials.
Martha White
Martha White is an Associate Professor of Computing Science at the University of Alberta and a PI of Amii (the Alberta Machine Intelligence Institute), one of the top machine learning centres in the world. She holds a Canada CIFAR AI Chair and received IEEE's "AI's 10 to Watch: The Future of AI" award in 2020. She has authored more than 50 papers in top journals and conferences. Martha is an associate editor for TPAMI, and has served as co-program chair for ICLR and area chair for many conferences in AI and ML, including ICML, NeurIPS, AAAI, and IJCAI. Her research focuses on developing algorithms for agents that continually learn from streams of data, with an emphasis on representation learning and reinforcement learning.
Arthur Gretton
Arthur Gretton is a Professor with the Gatsby Computational Neuroscience Unit and director of the Centre for Computational Statistics and Machine Learning (CSML) at UCL. He received degrees in Physics and Systems Engineering from the Australian National University, and a PhD with Microsoft Research and the Signal Processing and Communications Laboratory at the University of Cambridge. He previously worked at the MPI for Biological Cybernetics and at the Machine Learning Department, Carnegie Mellon University. Arthur's recent research interests in machine learning include causal inference and representation learning, design and training of generative models (implicit: Wasserstein gradient flows, GANs; and explicit: energy-based models), and nonparametric hypothesis testing. He was an associate editor at IEEE Transactions on Pattern Analysis and Machine Intelligence from 2009 to 2013, has been an Action Editor for JMLR since April 2013, was an Area Chair for NeurIPS in 2008 and 2009, a Senior Area Chair for NeurIPS in 2018 and 2021, an Area Chair for ICML in 2011 and 2012, a Senior Area Chair for ICML in 2022, and a member of the COLT Program Committee in 2013, and has been a member of the Royal Statistical Society Research Section Committee since January 2020. Arthur was program chair for AISTATS in 2016 (with Christian Robert), tutorials chair for ICML 2018 (with Ruslan Salakhutdinov), workshops chair for ICML 2019 (with Honglak Lee), program chair for the DALI workshop in 2019 (with Krikamol Muandet and Shakir Mohamed), and co-organiser of the Machine Learning Summer School 2019 in London (with Marc Deisenroth).
Jascha Sohl-Dickstein
Jascha is a senior staff research scientist in the Brain group at Google, where he leads a research team with interests spanning machine learning, physics, and neuroscience. Jascha is most (in)famous for inventing diffusion models. His recent work has focused on theory of overparameterized neural networks, meta-training of learned optimizers, and understanding the capabilities of large language models. Before working at Google he was a visiting scholar in Surya Ganguli's lab at Stanford University, and an academic resident at Khan Academy. He earned his PhD in 2012 in the Redwood Center for Theoretical Neuroscience at UC Berkeley, in Bruno Olshausen's lab. Prior to his PhD, he worked on Mars.
Blog: https://sohl-dickstein.github.io/
(Semi-)professional website: http://www.sohldickstein.com/