Workshop
Hardware-Aware Efficient Training of Deep Learning Models
Ghouthi BOUKLI HACENE · Vincent Gripon · François Leduc-Primeau · Vahid Partovi Nia · Fan Yang · Andreas Moshovos · Yoshua Bengio
Fri 7 May, 4:45 a.m. PDT
To reach top-tier performance, deep learning architectures usually rely on a large number of parameters and operations, and therefore require considerable power and memory to process. Numerous works have proposed to tackle this problem using quantization of parameters, pruning, clustering of parameters, decomposition of convolutions, or distillation. However, most of these works aim at accelerating only the inference process and disregard the training phase, even though in practice it is the training phase that is by far the most computationally demanding. There have been recent efforts to introduce some compression into the training process, but it remains challenging. In this workshop, we propose to focus on reducing the complexity of the training process. Our aim is to gather researchers interested in reducing energy, time, or memory usage for faster/cheaper/greener prototyping or deployment of deep learning models. Given the dependence of deep learning on large computational capacities, the outcomes of the workshop could benefit all who deploy these solutions, including those who are not hardware specialists. Moreover, it would contribute to making deep learning more accessible to small businesses and small laboratories. Indeed, training complexity is of interest to many distinct communities. A first example is training on edge devices, where training can be used to specialize a model to data obtained online when that data cannot be transmitted back to the cloud because of constraints on privacy or communication bandwidth. Another example is accelerating training on dedicated hardware such as GPUs or TPUs.
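To give a concrete flavor of the kind of training-time savings the workshop targets, below is a minimal sketch of reduced-precision training using PyTorch's automatic mixed-precision utilities (torch.cuda.amp), one of many possible approaches (alongside quantized, pruned, or sparse training). The model, data, and hyperparameters are illustrative placeholders, not part of any speaker's method.

# Minimal sketch: reduced-precision training with PyTorch automatic mixed
# precision (AMP). Model, batch, and hyperparameters are placeholders.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Small placeholder model and optimizer; substitute your own.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# GradScaler rescales the loss so float16 gradients do not underflow.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for step in range(100):
    x = torch.randn(64, 784, device=device)          # placeholder inputs
    y = torch.randint(0, 10, (64,), device=device)   # placeholder labels

    optimizer.zero_grad(set_to_none=True)
    # autocast runs eligible ops (e.g. matmuls) in float16, reducing memory
    # traffic and arithmetic cost of the forward and backward passes.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = loss_fn(model(x), y)

    scaler.scale(loss).backward()   # backward pass on the scaled loss
    scaler.step(optimizer)          # unscales gradients, then steps
    scaler.update()                 # adjusts the loss-scale factor

On recent GPUs this typically cuts activation memory roughly in half and speeds up matrix multiplications, without changing the model architecture; it is offered here only as an accessible entry point to the broader topic of hardware-aware training.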
Schedule
Fri 4:45 a.m. - 5:00 a.m.
Opening welcome speech: introducing the aims of the workshop and briefly introducing the speakers. (Introduction)

Fri 5:00 a.m. - 5:30 a.m.
Keynote 1: Warren Gross. Title: Stochastic Computing for Machine Learning towards an Intelligent Edge (Keynote)

Fri 5:30 a.m. - 6:00 a.m.
Keynote 2: Julie Grollier. Title: Spiking Equilibrium Propagation for Autonomously Learning Hardware (Keynote)

Fri 6:00 a.m. - 6:30 a.m.
Keynote 3: Ehsan Saboori. Title: Deep learning model compression using neural network design space exploration (Keynote)

Fri 6:30 a.m. - 7:00 a.m.
Break

Fri 7:00 a.m. - 8:30 a.m.
Poster session and open discussion. (Poster session)

Fri 8:30 a.m. - 9:30 a.m.
Panel discussion. (Panel discussion)

Fri 9:30 a.m. - 10:00 a.m.
Break

Fri 10:00 a.m. - 10:30 a.m.
Keynote 4: Yunhe Wang. Title: AdderNet: Do we really need multiplications in deep learning? (Keynote)

Fri 10:30 a.m. - 11:00 a.m.
Keynote 5: Song Han. Title: TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Training (Keynote)

Fri 11:00 a.m. - 11:30 a.m.
Keynote 6: Liangwei Ge. Title: Deep learning challenges and how Intel is addressing them (Keynote)

Fri 11:30 a.m. - 12:00 p.m.
Announcement of the different award winners and closing remarks. (Conclusion)