Importance-Weighting Approach to Distribution Shift Adaptation
Overcoming distribution shift is one of the most important challenges for reliable machine learning. In this talk, I will first give an overview of the classical importance-weighting approach to distribution shift adaptation, which consists of an importance estimation step followed by an importance-weighted training step. Then, I will present a more recent approach that estimates the importance weights and trains a predictor simultaneously. Finally, I will discuss the more challenging scenario of continuous distribution shifts, where the data distributions change continuously over time.
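As context for the two-step recipe mentioned in the abstract, here is a minimal, illustrative sketch (not code from the talk): importance weights are estimated with a probabilistic domain classifier, one standard density-ratio estimator, and the predictor is then trained with a weighted loss. All model choices, variable names, and data below are assumptions for illustration only.

# Minimal sketch of two-step importance weighting under covariate shift
# (illustrative assumptions throughout; not the speaker's implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)

# Source (training) and target (deployment) inputs drawn from shifted distributions.
X_src = rng.normal(0.0, 1.0, size=(500, 1))
y_src = np.sin(X_src[:, 0]) + 0.1 * rng.normal(size=500)
X_tgt = rng.normal(1.0, 1.0, size=(500, 1))          # covariate shift: input mean moved

# Step 1: importance estimation. Train a domain classifier for P(target | x);
# the density ratio p_tgt(x)/p_src(x) is proportional to P(target|x)/P(source|x).
domain_clf = LogisticRegression().fit(
    np.vstack([X_src, X_tgt]),
    np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))]),
)
p_tgt = domain_clf.predict_proba(X_src)[:, 1]
weights = p_tgt / (1.0 - p_tgt)                      # importance weights on source points
weights *= len(weights) / weights.sum()              # normalize to mean 1

# Step 2: importance-weighted training of the predictor on the source data.
model = Ridge(alpha=1.0).fit(X_src, y_src, sample_weight=weights)
print("weighted model coefficient:", model.coef_)

The normalization to mean one is optional; it simply keeps the effective scale of the weighted loss comparable to unweighted training.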
Blog Track Poster Session
Lin Gu
I am Lin Gu at RIKEN AIP. My research covers computational photography, medical imaging, and continual learning.
Vincent Tan
Vincent Y. F. Tan (S'07-M'11-SM'15) was born in Singapore in 1981. He received the B.A. and M.Eng. degrees in electrical and information science from Cambridge University in 2005, and the Ph.D. degree in electrical engineering and computer science (EECS) from the Massachusetts Institute of Technology (MIT) in 2011. He is currently an Associate Professor with the Department of Mathematics and the Department of Electrical and Computer Engineering (ECE), National University of Singapore (NUS). His research interests include information theory, machine learning, and statistical signal processing.
Dr. Tan is an elected member of the IEEE Information Theory Society Board of Governors. He was an IEEE Information Theory Society Distinguished Lecturer from 2018 to 2019. He received the MIT EECS Jin-Au Kong Outstanding Doctoral Thesis Prize in 2011, the NUS Young Investigator Award in 2014, the Singapore National Research Foundation (NRF) Fellowship (Class of 2018), the Engineering Young Researcher Award in 2018, and the NUS Young Researcher Award in 2019. A dedicated educator, he was awarded the Engineering Educator Award in 2020 and 2021 and the (university level) Annual Teaching Excellence Award in 2022. He is currently serving as a Senior Area Editor for the IEEE Transactions on Signal Processing and as an Associate Editor in Machine Learning and Statistics for the IEEE Transactions on Information Theory.
Samy Bengio
Samy Bengio (PhD in computer science, University of Montreal, 1993) has been a senior director of machine learning research at Apple since 2021. Before that, he was a distinguished scientist at Google Research from 2007, where he headed part of the Google Brain team, and at IDIAP in the early 2000s, where he co-wrote the well-known open-source Torch machine learning library. His research interests span many areas of machine learning such as deep architectures, representation learning, sequence processing, speech recognition, and image understanding. He is an action editor of the Journal of Machine Learning Research and is on the board of the NeurIPS foundation. He was on the editorial board of the Machine Learning Journal, has been program chair (2017) and general chair (2018) of NeurIPS, program chair of ICLR (2015, 2016), general chair of BayLearn (2012-2015), MLMI (2004-2006), as well as NNSP (2002), and on the program committee of several international conferences such as NeurIPS, ICML, ICLR, ECML and IJCAI. More details can be found at http://bengio.abracadoudou.com.
Adam White
Adam's research is focused on understanding the fundamental principles of learning in young humans and animals. Adam seeks to understand the algorithms and representations that allow people to progress from motor babbling, to open-ended play, to purposeful goal-directed behaviours. Adam is interested in continual learning problems where the agent is much smaller than the world and thus must continue to learn, react, and track in order to perform well. In particular, Adam's lab has investigated intrinsic reward and exploration, more efficient algorithms for off-policy learning, practical strategies for automatic hyperparameter tuning and meta learning, representations for online continual prediction in the face of partial observability, and new approaches to planning with learned models. In addition, Adam's group is deeply passionate about good empirical practices and new methodologies to help determine if our algorithms are ready for deployment in the real world.
AI, History and Equity
Large datasets are increasingly used to train AI models for addressing social problems, including problems in health. The societal impact of biased AI models has been widely discussed. However, what is sometimes missing from the conversation is the role of historical policies and injustices in shaping the available data and outcomes. Evaluating data and algorithms through a historical lens could be critical for social change.
Large Language Model Research on an Academic Budget
Parikshit Ram
Parikshit Ram is a Principal Research Staff Member at IBM Research, NY, with research expertise in similarity search, efficient all-pairs algorithms, density estimation, computational geometry, kernel methods, decision trees, ensembles, automated machine learning, and data science. He currently conducts basic mathematical and applied computational research on topics pertinent to automated machine learning and automated decision optimization, as well as various aspects of generalization and learning with less data. Prior to joining IBM Research, he was a Senior Research Staff Member at Skytree, a machine learning company focused on providing high-performance machine learning tools for large-scale modeling and data analysis, which was subsequently acquired by Infosys. Parikshit received his Ph.D. in machine learning from the Georgia Institute of Technology and B.Sc. and M.Sc. degrees in Mathematics and Computing from the Indian Institute of Technology. He has served on the program committees of top conferences and has been recognized as a top reviewer at ICML and NeurIPS multiple times.
Kush R. Varshney
Kush R. Varshney was born in Syracuse, New York in 1982. He received the B.S. degree (magna cum laude) in electrical and computer engineering with honors from Cornell University, Ithaca, New York, in 2004. He received the S.M. degree in 2006 and the Ph.D. degree in 2010, both in electrical engineering and computer science at the Massachusetts Institute of Technology (MIT), Cambridge. While at MIT, he was a National Science Foundation Graduate Research Fellow.
Dr. Varshney is a distinguished research scientist and manager with IBM Research at the Thomas J. Watson Research Center, Yorktown Heights, NY, where he leads the machine learning group in the Trustworthy Machine Intelligence department. He was a visiting scientist at IBM Research - Africa, Nairobi, Kenya in 2019. He is the founding co-director of the IBM Science for Social Good initiative. He applies data science and predictive analytics to human capital management, healthcare, olfaction, computational creativity, public affairs, international development, and algorithmic fairness.
He and his team created several well-known open-source toolkits, including AI Fairness 360, AI Explainability 360, Uncertainty Quantification 360, and AI FactSheets 360. He conducts academic research on the theory and methods of trustworthy machine learning. He independently published a book entitled 'Trustworthy Machine Learning' in 2022, available at http://www.trustworthymachinelearning.com.
Kyunghyun Cho
Kyunghyun Cho is an associate professor of computer science and data science at New York University and a CIFAR Fellow of Learning in Machines & Brains. He is also a senior director of frontier research at the Prescient Design team within Genentech Research & Early Development (gRED). He was a research scientist at Facebook AI Research from June 2017 to May 2020 and a postdoctoral fellow at the University of Montreal until Summer 2015 under the supervision of Prof. Yoshua Bengio, after receiving his MSc and PhD degrees from Aalto University in April 2011 and April 2014, respectively, under the supervision of Prof. Juha Karhunen, Dr. Tapani Raiko and Dr. Alexander Ilin. He received the Samsung Ho-Am Prize in Engineering in 2021. He tries his best to find a balance among machine learning, natural language processing, and life, but almost always fails to do so.
Benjamin Roth
Benjamin Roth is a professor in the area of deep learning & statistical NLP, leading the WWTF Vienna Research Group for Young Investigators "Knowledge-Infused Deep Learning for Natural Language Processing". Prior to this, he was an interim professor at LMU Munich. He obtained his PhD from Saarland University and did a postdoc at UMass, Amherst. His research interests are the extraction of knowledge from text with statistical methods and knowledge-supervised learning.
Gintare Karolina Dziugaite + Daniel Roy
Gintare Karolina Dziugaite is a senior research scientist at Google Brain, based in Toronto, an adjunct professor in the McGill University School of Computer Science, and an associate industry member of Mila, the Quebec AI Institute. Her research combines theoretical and empirical approaches to understanding deep learning, with a focus on generalization, data and network compression. Gintare obtained her Ph.D. in machine learning from the University of Cambridge, under the supervision of Zoubin Ghahramani. Before that, she studied Mathematics at the University of Warwick and read Part III in Mathematics at the University of Cambridge, receiving a Master of Advanced Study (MASt) in Applied Mathematics.