[ AD4 ]
Ari Morcos is a research scientist at Meta AI Research (FAIR team) in Menlo Park, working on understanding the mechanisms underlying neural network computation and function, and on using these insights to build machine learning systems more intelligently. Most recently, his work has focused on understanding properties of data and how these properties lead to desirable and useful representations. He has worked on a variety of topics, including self-supervised learning, the lottery ticket hypothesis, the mechanisms underlying common regularizers, and the properties predictive of generalization, as well as on methods to compare representations across networks, the role of single units in computation, and strategies to induce and measure abstraction in neural network representations.
[ AD6 ]
I am a full professor at KIT, where I have headed the chair "Autonomous Learning Robots" since January 2020. Before that, I was a group leader at the Bosch Center for AI and an Industry-on-Campus professor at the University of Tübingen (March to December 2019), and a full professor at the University of Lincoln in the UK (2016-2019). I completed my PhD at TU Graz in 2012 and was afterwards a postdoc and assistant professor at TU Darmstadt.
My research focuses on the intersection of machine learning, robotics, and human-robot interaction. My goal is to create data-efficient machine learning algorithms that are suitable for complex robot domains. A strong focus of my research is to develop new methods that allow a human non-expert to intuitively teach a robot complex skills, as well as to allow a robot to learn how to assist and collaborate with humans in an intelligent way. In my research, I always aim for a strong theoretical basis for the algorithms I develop, which are derived from first principles. Yet I also believe that an exhaustive assessment of an algorithm's quality in a practical application is of equal importance.
[ AD5 ]
Natalie Schluter is a Machine Learning Researcher with MLR at Apple. Before coming to Apple, she was a Senior Research Scientist at Google Brain and an Associate Professor in NLP and Data Science at the IT University (ITU) in Copenhagen, Denmark. At ITU she co-developed and led the first Data Science programme (a BSc) in Denmark.
Natalie's primary research interests are in algorithms and experimental methodology for the development of statistical and combinatorial models of natural language understanding and generation, especially in computationally "hard" and language-inclusive settings.
Natalie holds a PhD in NLP from Dublin City University's School of Computing. She holds a further four degrees: an MSc in Mathematics from Trinity College, Dublin, a BSc in Mathematics and MA in Linguistics from the University of Montreal, and a BA in French and Spanish.
[ AD7 ]
Jascha is a senior staff research scientist in the Brain group at Google, where he leads a research team with interests spanning machine learning, physics, and neuroscience. Jascha is most (in)famous for inventing diffusion models. His recent work has focused on the theory of overparameterized neural networks, meta-training of learned optimizers, and understanding the capabilities of large language models. Before working at Google, he was a visiting scholar in Surya Ganguli's lab at Stanford University and an academic resident at Khan Academy. He earned his PhD in 2012 at the Redwood Center for Theoretical Neuroscience at UC Berkeley, in Bruno Olshausen's lab. Prior to his PhD, he worked on Mars.
Blog: https://sohl-dickstein.github.io/
(Semi-)professional website: http://www.sohldickstein.com/
[ AD4 ]
Arthur Gretton is a Professor with the Gatsby Computational Neuroscience Unit and director of the Centre for Computational Statistics and Machine Learning (CSML) at UCL. He received degrees in Physics and Systems Engineering from the Australian National University, and a PhD with Microsoft Research and the Signal Processing and Communications Laboratory at the University of Cambridge. He previously worked at the MPI for Biological Cybernetics and at the Machine Learning Department, Carnegie Mellon University. Arthur's recent research interests in machine learning include causal inference and representation learning, design and training of generative models (implicit: Wasserstein gradient flows, GANs; and explicit: energy-based models), and nonparametric hypothesis testing. He was an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence from 2009 to 2013 and has been an Action Editor for JMLR since April 2013. He was an Area Chair for NeurIPS in 2008 and 2009, a Senior Area Chair for NeurIPS in 2018 and 2021, an Area Chair for ICML in 2011 and 2012, a Senior Area Chair for ICML in 2022, and a member of the COLT Program Committee in 2013, and has been a member of the Royal Statistical Society Research Section Committee since January 2020. Arthur was program chair for AISTATS in 2016 (with …
[ AD5 ]
Martha White is an Associate Professor of Computing Science at the University of Alberta and a PI of Amii (the Alberta Machine Intelligence Institute), one of the top machine learning centres in the world. She holds a Canada CIFAR AI Chair and received IEEE's "AI's 10 to Watch: The Future of AI" award in 2020. She has authored more than 50 papers in top journals and conferences. Martha is an associate editor for TPAMI, and has served as co-program chair for ICLR and as area chair for many conferences in AI and ML, including ICML, NeurIPS, AAAI, and IJCAI. Her research focuses on developing algorithms for agents that continually learn on streams of data, with an emphasis on representation learning and reinforcement learning.
[ AD7 ]
Samy Bengio (PhD in computer science, University of Montreal, 1993) has been a senior director of machine learning research at Apple since 2021. Before that, he was a distinguished scientist at Google Research from 2007, where he headed part of the Google Brain team, and at IDIAP in the early 2000s, where he co-wrote the well-known open-source Torch machine learning library. His research interests span many areas of machine learning, such as deep architectures, representation learning, sequence processing, speech recognition, and image understanding. He is an action editor of the Journal of Machine Learning Research and is on the board of the NeurIPS foundation. He was on the editorial board of the Machine Learning Journal, and has been program chair (2017) and general chair (2018) of NeurIPS, program chair of ICLR (2015, 2016), general chair of BayLearn (2012-2015), MLMI (2004-2006), and NNSP (2002), and on the program committee of several international conferences such as NeurIPS, ICML, ICLR, ECML, and IJCAI. More details can be found at http://bengio.abracadoudou.com.
[ AD6 ]
Vincent Y. F. Tan (S'07-M'11-SM'15) was born in Singapore in 1981. He received the B.A. and M.Eng. degrees in electrical and information science from Cambridge University in 2005, and the Ph.D. degree in electrical engineering and computer science (EECS) from the Massachusetts Institute of Technology (MIT) in 2011. He is currently an Associate Professor with the Department of Mathematics and the Department of Electrical and Computer Engineering (ECE), National University of Singapore (NUS). His research interests include information theory, machine learning, and statistical signal processing.
Dr. Tan is an elected member of the IEEE Information Theory Society Board of Governors. He was an IEEE Information Theory Society Distinguished Lecturer from 2018 to 2019. He received the MIT EECS Jin-Au Kong Outstanding Doctoral Thesis Prize in 2011, the NUS Young Investigator Award in 2014, the Singapore National Research Foundation (NRF) Fellowship (Class of 2018), the Engineering Young Researcher Award in 2018, and the NUS Young Researcher Award in 2019. A dedicated educator, he was awarded the Engineering Educator Award in 2020 and 2021 and the (university level) Annual Teaching Excellence Award in 2022. He is currently serving as a Senior Area Editor for the IEEE Transactions on Signal Processing and as …
[ AD5 ]
Adam's research focuses on understanding the fundamental principles of learning in young humans and animals. Adam seeks to understand the algorithms and representations that allow people to progress from motor babbling, to open-ended play, to purposeful goal-directed behaviours. Adam is interested in continual learning problems where the agent is much smaller than the world and thus must continue to learn, react, and track in order to perform well. In particular, Adam's lab has investigated intrinsic reward and exploration, more efficient algorithms for off-policy learning, practical strategies for automatic hyperparameter tuning and meta-learning, representations for online continual prediction in the face of partial observability, and new approaches to planning with learned models. In addition, Adam's group is deeply passionate about good empirical practices and new methodologies to help determine whether algorithms are ready for deployment in the real world.
[ AD4 ]
I am Lin from RIKEN AIP. I do research on computational photography, medical imaging, and continual learning.
[ Radisson Blu Hotel - Landre Terrace ]
[ AD5 ]
Kyunghyun Cho is an associate professor of computer science and data science at New York University and a CIFAR Fellow of Learning in Machines & Brains. He is also a senior director of frontier research on the Prescient Design team within Genentech Research & Early Development (gRED). He was a research scientist at Facebook AI Research from June 2017 to May 2020 and a postdoctoral fellow at the University of Montreal until summer 2015 under the supervision of Prof. Yoshua Bengio, after receiving MSc and PhD degrees from Aalto University in April 2011 and April 2014, respectively, under the supervision of Prof. Juha Karhunen, Dr. Tapani Raiko, and Dr. Alexander Ilin. He received the Samsung Ho-Am Prize in Engineering in 2021. He tries his best to find a balance among machine learning, natural language processing, and life, but almost always fails to do so.
[ AD4 ]
Benjamin Roth is a professor in the area of deep learning & statistical NLP, leading the WWTF Vienna Research Group for Young Investigators "Knowledge-Infused Deep Learning for Natural Language Processing". Prior to this, he was an interim professor at LMU Munich. He obtained his PhD from Saarland University and did a postdoc at UMass, Amherst. His research interests are the extraction of knowledge from text with statistical methods and knowledge-supervised learning.
[ AD7 ]
Gintare Karolina Dziugaite is a senior research scientist at Google Brain, based in Toronto, an adjunct professor in the McGill University School of Computer Science, and an associate industry member of Mila, the Quebec AI Institute. Her research combines theoretical and empirical approaches to understanding deep learning, with a focus on generalization, data, and network compression. Gintare obtained her Ph.D. in machine learning from the University of Cambridge, under the supervision of Zoubin Ghahramani. Before that, she studied Mathematics at the University of Warwick and read Part III in Mathematics at the University of Cambridge, receiving a Master of Advanced Study (MASt) in Applied Mathematics.
[ AD6 ]
Kush R. Varshney was born in Syracuse, New York in 1982. He received the B.S. degree (magna cum laude) in electrical and computer engineering with honors from Cornell University, Ithaca, New York, in 2004. He received the S.M. degree in 2006 and the Ph.D. degree in 2010, both in electrical engineering and computer science at the Massachusetts Institute of Technology (MIT), Cambridge. While at MIT, he was a National Science Foundation Graduate Research Fellow.
Dr. Varshney is a distinguished research scientist and manager with IBM Research at the Thomas J. Watson Research Center, Yorktown Heights, NY, where he leads the machine learning group in the Trustworthy Machine Intelligence department. He was a visiting scientist at IBM Research - Africa, Nairobi, Kenya in 2019. He is the founding co-director of the IBM Science for Social Good initiative. He applies data science and predictive analytics to human capital management, healthcare, olfaction, computational creativity, public affairs, international development, and algorithmic fairness.
He and his team created several well-known open-source toolkits, including AI Fairness 360, AI Explainability 360, Uncertainty Quantification 360, and AI FactSheets 360. He conducts academic research on the theory and methods of trustworthy machine learning. He independently published a …
[ AD6 ]
Parikshit Ram is a Principal Research Staff Member at IBM Research, NY, with research expertise in similarity search, efficient all-pairs algorithms, density estimation, computational geometry, kernel methods, decision trees, ensembles, automated machine learning, and data science. He currently conducts basic mathematical and applied computational research on topics pertinent to automated machine learning and automated decision optimization, as well as various aspects of generalization and learning with less data. Prior to joining IBM Research, he was a Senior Research Staff Member at Skytree, a machine learning company focused on providing high-performance machine learning tools for large-scale modeling and data analysis, which was subsequently acquired by Infosys. Parikshit received his Ph.D. in machine learning from the Georgia Institute of Technology and a B.Sc. and M.Sc. in Mathematics and Computing from the Indian Institute of Technology. He has served on the program committees of top conferences and has been recognized as a top reviewer at ICML and NeurIPS multiple times.
[ Virtual ]