ICLR 2023


Workshop

ICLR 2023 Workshop on Sparsity in Neural Networks: On practical limitations and tradeoffs between sustainability and efficiency

Baharan Mirzasoleiman · Zhangyang Wang · Decebal Constantin Mocanu · Elena Mocanu · Utku Evci · Trevor Gale · Aleksandra Nowak · Ghada Sokar · Zahra Atashgahi

AD12

Deep networks with billions of parameters trained on large datasets have achieved unprecedented success in various applications, ranging from medical diagnostics to urban planning and autonomous driving, to name a few. However, training large models depends on exceptionally large and expensive computational resources. Such infrastructures consume substantial energy, leave a massive carbon footprint, and often become obsolete quickly and turn into e-waste. While there has been a persistent effort to improve the performance of machine learning models, their sustainability is often neglected. This realization has motivated the community to look more closely at the sustainability and efficiency of machine learning, for example by identifying the most relevant model parameters or model structures.

In this workshop, we examine the community's progress toward these goals and aim to identify areas that call for additional research effort. In particular, by bringing together researchers with diverse backgrounds, we will focus on the limitations of existing methods for model compression and discuss the tradeoffs between model size and performance. The main goal of the workshop is to bring together researchers from academia and industry with diverse expertise and points of view on network compression, to discuss how to effectively evaluate machine learning pipelines and make them better comply with sustainability and efficiency constraints. Our workshop will feature a diverse set of speakers (ranging from researchers with a hardware background, to researchers in neurobiology, to the algorithmic ML community) discussing sparse training algorithms and hardware limitations across machine learning domains, from robotics and task automation to vision, natural language processing, and reinforcement learning. The workshop aims to further develop these research directions for the machine learning community.

Timezone: America/Los_Angeles

Schedule