A non-gradient method for solving partial differential equations with deep neural networks
Speaker
Prof. Dan Hu
Shanghai Jiao Tong University
Abstract

Deep learning has achieved broad success in solving partial differential equations (PDEs), with particular strength in high-dimensional and parametric problems. Nevertheless, a clear picture of how to design network architectures and train network parameters is still lacking. In this work, we develop a non-gradient framework for solving elliptic PDEs based on the Neural Tangent Kernel (NTK): 1. the ReLU activation function is used to control the compactness of the NTK, so that solutions with relatively high-frequency components can be well expressed; 2. differential operators are discretized numerically to reduce computational cost; 3. a dissipative evolution dynamics corresponding to the elliptic PDE is used to train the parameters, instead of gradient-type descent of a loss function. The dissipative dynamics guarantees convergence of the training process while avoiding loss functions with high-order derivatives; it is also helpful both for controlling the kernel's properties and for reducing computational cost. Numerical tests show excellent performance of the non-gradient method.
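To make the three ingredients concrete, here is a minimal illustrative sketch (not the speaker's implementation) for the 1D Poisson problem u'' = f on (0,1) with zero Dirichlet boundaries. It uses fixed random ReLU features for the solution ansatz, a finite-difference Laplacian in place of automatic differentiation, and an explicit dissipative step u_t = u'' − f instead of minimizing a loss; the projection of the evolved field back onto the feature span is an assumed stand-in for the NTK-based parameter update. All names and parameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid for u'' = f on (0, 1), u(0) = u(1) = 0; exact solution u(x) = sin(pi x)
N = 64                                       # interior grid points
xs = np.linspace(0.0, 1.0, N + 2)[1:-1]
h = xs[1] - xs[0]
f = -np.pi**2 * np.sin(np.pi * xs)

# Fixed random ReLU features phi_j(x) = relu(w_j x + b_j); the field
# u(x) = sum_j a_j phi_j(x) is linear in the trainable coefficients.
M = 50
w, b = rng.normal(size=M), rng.normal(size=M)
phi = np.maximum(np.outer(xs, w) + b, 0.0)   # (N, M) feature matrix
proj = phi @ np.linalg.pinv(phi)             # orthogonal projector onto span

def laplacian(u):
    """Second-order finite difference with zero Dirichlet boundaries."""
    up = np.concatenate([[0.0], u, [0.0]])
    return (up[:-2] - 2.0 * up[1:-1] + up[2:]) / h**2

u = np.zeros(N)
dt = 0.4 * h**2                              # explicit-step stability limit ~ h^2/2
for _ in range(20000):
    # Dissipative step u_t = u'' - f, then project back onto the feature
    # span -- no loss function or loss gradient is ever formed.
    u = proj @ (u + dt * (laplacian(u) - f))

err = np.max(np.abs(u - np.sin(np.pi * xs)))
print(f"max error vs exact solution: {err:.2e}")
```

Because the step map contracts (the explicit dissipative step is non-expansive for dt below the stability limit, and the projection is orthogonal), the iteration converges to a Galerkin-type fixed point whose residual is orthogonal to the feature span, which is the kind of convergence guarantee the abstract attributes to the dissipative dynamics.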

About the Speaker

Dan Hu, Ph.D., is a professor and doctoral supervisor, and was named a Young Changjiang Scholar by the Ministry of Education in 2017. He received his B.S. (2002) and Ph.D. (2007) in mathematics from Peking University and was a postdoctoral researcher at the Courant Institute, New York University. In January 2010 he joined the Institute of Natural Sciences and the School of Mathematical Sciences at Shanghai Jiao Tong University. His research focuses on the modeling, simulation, and analysis of blood vessels and blood flow and of rare events in the life sciences, as well as the theoretical foundations of artificial intelligence. His representative work has appeared in leading journals including Phys. Rev. Lett., Nature Commun., and PLoS Biol.; his work on adaptive blood vessel growth was selected by Nature as a research highlight of the year.

Date & Time
2022-09-22 2:00 PM
Location
Room: Tencent Meeting