Workshop
Integration of Deep Neural Models and Differential Equations
Tan M Nguyen · Richard Baraniuk · Animesh Garg · Stanley J Osher · Anima Anandkumar · Bao Wang

Differential equations and neural networks are not only closely related but also offer complementary strengths: the modelling power and interpretability of differential equations, and the approximation and generalization power of deep neural networks. The great leap forward in machine learning powered by deep neural networks has relied primarily on increasing amounts of data coupled with modern abstractions of distributed computing. As models and problems grow larger and more complex, the need for ever larger datasets becomes a bottleneck.

Differential equations offer a principled way to encode prior structural assumptions into nonlinear models such as deep neural networks, reducing their need for data while maintaining their modelling power. These advantages allow models to scale to larger problems with better robustness and safety guarantees in practical settings.
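As a rough illustration of encoding such a prior, a physics-informed loss penalizes the residual of a known differential equation so that the equation itself, rather than labeled data, supervises the network. The sketch below is a minimal, hypothetical example in PyTorch (not code from any workshop paper); the network shape, hyperparameters, and the toy ODE u'(t) = -u(t) with u(0) = 1 are all illustrative choices:

    # Minimal sketch: fit a network u(t) to the ODE u'(t) = -u(t), u(0) = 1,
    # using the equation residual as the training signal (no labeled data).
    # Requires PyTorch; all names and hyperparameters are illustrative.
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        t = torch.rand(128, 1, requires_grad=True)    # collocation points in [0, 1]
        u = net(t)
        # du/dt via automatic differentiation
        du_dt = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                                    create_graph=True)[0]
        residual = du_dt + u                          # structural prior: enforce u' = -u
        ic = (net(torch.zeros(1, 1)) - 1.0).pow(2)    # initial condition u(0) = 1
        loss = residual.pow(2).mean() + ic.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # After training, net(t) should approximate exp(-t) on [0, 1].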

While progress has been made on combining differential equations and deep neural networks, most existing work has been disjointed, and a coherent picture has yet to emerge. Substantive progress will require a principled approach that integrates ideas from disparate fields, including differential equations, machine learning, numerical analysis, optimization, and physics.

The goal of this workshop is to provide a forum where theoretical and experimental researchers of all stripes can come together not only to share reports on their progress but also to find new ways to join forces towards the goal of coherent integration of deep neural networks and differential equations. Topics to be discussed include, but are not limited to:
- Deep learning for high dimensional PDE problems
- PDE and stochastic analysis for deep learning
- PDE and analysis for new architectures
- Differential-equation interpretations of first-order optimization methods
- Inverse problems approaches to learning theory
- Numerical tools to interface deep learning models and ODE/PDE solvers (a minimal sketch of such an interface follows this list)
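To make the last topic concrete, the sketch below shows one minimal way a neural network can be interfaced with an ODE solver, in the spirit of neural ODEs: a network parametrizes the vector field, a fixed-step Euler integrator unrolls it, and gradients flow back through the solver steps (discretize-then-optimize). The integrator, model, and shapes are illustrative assumptions, not a reference implementation:

    # Minimal sketch of a neural network driving an ODE solver; illustrative only.
    import torch
    import torch.nn as nn

    class ODEFunc(nn.Module):
        # The network parametrizes the vector field dz/dt = f_theta(z, t).
        def __init__(self, dim=2, hidden=32):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, dim))

        def forward(self, t, z):
            return self.net(z)   # autonomous field; t kept for the solver interface

    def euler_odeint(func, z0, t):
        # Integrate dz/dt = func(t, z) over the time grid t with forward Euler.
        # Every step stays in the autograd graph, so the solver is differentiable.
        z, out = z0, [z0]
        for t0, t1 in zip(t[:-1], t[1:]):
            z = z + (t1 - t0) * func(t0, z)
            out.append(z)
        return torch.stack(out)

    func = ODEFunc()
    z0 = torch.randn(16, 2)                 # batch of initial states
    t = torch.linspace(0.0, 1.0, 20)        # time grid
    traj = euler_odeint(func, z0, t)        # trajectory, shape (20, 16, 2)
    loss = traj[-1].pow(2).mean()           # any downstream loss on the final state
    loss.backward()                         # backprop through the solver steps

In practice one would replace the hand-rolled Euler step with an adaptive solver (for example, via a library such as torchdiffeq) and, for memory efficiency, compute gradients with the adjoint method rather than by unrolling.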

Accepted Papers

Contributed Talks
1. Solving ODE with Universal Flows: Approximation Theory for Flow-Based Models. Chin-Wei Huang, Laurent Dinh, Aaron Courville (Paper, Slides).
2. Neural Operator: Graph Kernel Network for Partial Differential Equations. Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar (Paper, Slides).
3. A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth. Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying (Paper, Slides).
4. A Free-Energy Principle for Representation Learning. Pratik Chaudhari, Yansong Gao (Paper, Slides).
5. Amortized Finite Element Analysis for Fast PDE-Constrained Optimization. Tianju Xue, Alex Beatson, Sigrid Adriaenssens, Ryan P. Adams (Paper, Slides).
6. Nano-Material Configuration Design with Deep Surrogate Langevin Dynamics. Thanh V. Nguyen, Youssef Mroueh, Samuel Hoffman, Payel Das, Pierre Dognin, Giuseppe Romano, Chinmay Hegde (Paper, Slides).

Poster Lightning Talks 1
7. Nonlinear Differential Equations with External Forcing. Paul Pukite (Paper, Slides).
8. On the space-time expressivity of ResNets. Johannes Christoph Müller (Paper, Slides).
9. Enforcing Physical Constraints in CNNs through Differentiable PDE Layer. Chiyu “Max” Jiang, Karthik Kashinath, Prabhat, Philip Marcus (Paper, Slides).
10. Deep Ritz revisited. Johannes Müller, Marius Zeinhofer (Paper, Slides).
11. Differential Equations as Model Prior for Deep Learning and Applications to Robotics. Michael Lutter, Jan Peters (Paper, Slides).
12. Differentiable Physics Simulation. Junbang Liang, Ming C. Lin (Paper, Slides).
13. Comparing recurrent and convolutional neural networks for predicting wave propagation. Stathi Fotiadis, Eduardo Pignatelli, Mario Lino Valencia, Chris Cantwell, Amos Storkey, Anil A. Bharath (Paper, Slides).
14. Time Dependence in Non-Autonomous Neural ODEs. Jared Quincy Davis, Krzysztof Choromanski, Vikas Sindhwani, Jake Varley, Honglak Lee, Jean-Jacques Slotine, Valerii Likhosterov, Adrian Weller, Ameesh Makadia (Paper, Slides).
15. Learning To Solve Differential Equations Across Initial Conditions. Shehryar Malik, Usman Anwar, Ali Ahmed, Alireza Aghasi (Paper, Slides).
16. How Chaotic Are Recurrent Neural Networks? Pourya Vakilipourtakalou, Lili Mou (Paper, Slides).
17. Learning-Based Strong Solutions to Forward and Inverse Problems in PDEs. Leah Bar, Nir Sochen (Paper, Slides).
18. Embedding Hard Physical Constraints in Convolutional Neural Networks for 3D Turbulence. Arvind T. Mohan, Nicholas Lubbers, Daniel Livescu, Michael Chertkov (Paper, Slides).
19. Wavelet-Powered Neural Networks for Turbulence. Arvind T. Mohan, Daniel Livescu, Michael Chertkov (Paper, Slides).
20. Can auto-encoders help with filling missing data? Marek Śmieja, Maciej Kołomycki, Łukasz Struski, Mateusz Juda, Mário A. T. Figueiredo (Paper, Slides).
21. Neural Differential Equations for Single Image Super-Resolution. Teven Le Scao (Paper, Slides).
22. Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View. Yiping Lu, Zhuohan Li, Di He, Zhiqing Sun, Bin Dong, Tao Qin, Liwei Wang, Tie-yan Liu (Paper, Slides).
23. Neural Dynamical Systems. Viraj Mehta, Ian Char, Willie Neiswanger, Youngseog Chung, Andrew Oakleigh Nelson, Mark D Boyer, Jeff Schneider (Paper, Slides).

Poster Lightning Talks 2
24. Progressive Growing of Neural ODEs. Hammad A. Ayyubi, Yi Yao, Ajay Divakaran (Paper, Slides).
25. Fast Convergence for Langevin with Matrix Manifold Structure. Ankur Moitra, Andrej Risteski (Paper, Slides).
26. Bringing PDEs to JAX with forward and reverse modes automatic differentiation. Ivan Yashchuk (Paper, Slides).
27. Urban air pollution forecasts generated from latent space representation. Cesar Quilodran Casas, Rossella Arcucci, Yike Guo (Paper, Slides).
28. Dissipative SymODEN: Encoding Hamiltonian Dynamics with Dissipation and Control into Deep Learning. Yaofeng Desmond Zhong, Biswadip Dey, Amit Chakraborty (Paper, Slides).
29. Neural Ordinary Differential Equation Value Networks for Parametrized Action Spaces. Michael Poli, Stefano Massaroli, Sanzhar Bakhtiyarov, Atsushi Yamashita, Hajime Asama, Jinkyoo Park (Paper, Slides).
30. Stochasticity in Neural ODEs: An Empirical Study. Alexandra Volokhova, Viktor Oganesyan, Dmitry Vetrov (Paper, Slides).
31. Generating Control Policies for Autonomous Vehicles Using Neural ODEs. Houston Lucas, Richard Kelley (Paper, Slides).
32. Generative ODE Modeling with Known Unknowns. Ori Linial, Uri Shalit (Paper, Slides).
33. Encoder-decoder neural network for solving the nonlinear Fokker-Planck-Landau collision operator in XGC. Marco Andres Miller, Randy Michael Churchill, Choong-Seock Chang, Robert Hager (Paper, Slides).
34. Differentiable Molecular Simulations for Control and Learning. Wujie Wang, Simon Axelrod, Rafael Gómez-Bombarelli (Paper, Slides).
35. Port-Hamiltonian Gradient Flows. Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park (Paper, Slides).
36. Lagrangian Neural Networks. Miles Cranmer, Sam Greydanus, Stephan Hoyer, Peter Battaglia, David Spergel, Shirley Ho (Paper, Slides).
37. Constrained Neural Ordinary Differential Equations with Stability Guarantees. Aaron Tuor, Jan Drgona, Draguna Vrabie (Paper, Slides).
38. Stochastic gradient algorithms from ODE splitting perspective. Daniil Merkulov, Ivan Oseledets (Paper, Slides).
39. The equivalence between Stein variational gradient descent and black-box variational inference. Casey Chu, Kentaro Minami, Kenji Fukumizu (Paper, Slides).
40. Towards Understanding Normalization in Neural Ordinary Differential Equations. Julia Gusak, Larisa Markeeva, Talgat Daulbaev, Alexander Katrutsa, Andrzej Cichocki, Ivan Oseledets (Paper, Slides).

Call for Papers and Submission Instructions

We invite researchers to submit anonymous extended abstracts of up to 4 pages (including the abstract but excluding references). No specific formatting is required. Authors may use the ICLR style file or any other style, as long as it uses a standard font size (11pt) and margins (1in).

Submissions should be anonymous and are handled through the OpenReview system. Please note that at least one coauthor of each accepted paper will be expected to attend the workshop in person to present a poster or give a contributed talk.

Papers can be submitted at the following address:

https://openreview.net/group?id=ICLR.cc/2020/Workshop/DeepDiffEq

Important Dates

Submission Deadline (EXTENDED): 23:59 PST, Tuesday, February 18th
Acceptance notification: Tuesday, February 25th
Camera ready submission: Sunday, April 19th
Workshop: Sunday, April 26th