Academic Lectures



Anniversary Lecture Series No. 31: Professor Hailiang Liu of Iowa State University (USA) Invited to Lecture at Our University


Title: Differential Equations and Deep Learning
Time: July 2, 2019 (Tuesday), 16:30-17:30
Venue: Room A422, Lizhi Building (立志楼)
Host: School of Mathematics and Computational Science
About the Speaker:
      
Dr. Hailiang Liu is a Professor of Mathematics at Iowa State University (ISU), where he held the Holl Chair in Applied Mathematics from 2002 to 2012. He received his Master's degree in Applied Mathematics from Tsinghua University in China in 1988 and his Ph.D. from the Chinese Academy of Sciences in 1995, while holding faculty positions at Henan Normal University from 1989 to 1996. He received an Alexander von Humboldt Research Fellowship in 1996, which supported his research in Germany from 1997 to 1999. He was a CAM Assistant Professor at UCLA from 1999 to 2002, then joined Iowa State University as an Associate Professor in 2002 and was promoted to Full Professor in 2007. Liu's primary research interests include the analysis of applied partial differential equations, the development of novel high-order algorithms for the approximate solution of these problems, and the interplay between the analytical theory and computational aspects of such algorithms, with applications to shock waves, kinetic transport, level set closure, propagation of critical thresholds, and the recovery of high-frequency wave fields. Liu serves on the editorial board of the Journal of Mathematical Analysis and Applications (JMAA) and has given many invited lectures, including invited addresses at the International Conference on Hyperbolic Problems in 2002 and 2018. He has published more than 120 research papers, mostly in numerical analysis and applied partial differential equations.

Abstract:
      
Deep learning is machine learning using neural networks with many hidden layers, and it has become a primary tool in a wide variety of practical learning tasks. In this talk we begin with a simple optimization problem and show how it can be reformulated as a gradient flow, which in turn leads to different optimization solvers. We further introduce the mathematical formulation of deep residual neural networks as a PDE optimal control problem. We state and prove optimality conditions for the inverse deep learning problem, using the Hamilton-Jacobi-Bellman equation and the Pontryagin maximum principle.
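
To make the gradient-flow reformulation concrete, the following is a minimal sketch, not taken from the talk: the quadratic objective, step size, and the specific residual-block form are illustrative assumptions. It shows that plain gradient descent is the forward-Euler discretization of the gradient flow x'(t) = -grad f(x), and that a residual block x_{k+1} = x_k + h * f(x_k, theta_k) is the same Euler scheme applied to a network state equation, which is the viewpoint that casts a deep residual network as a discretized optimal control problem.

    import numpy as np

    # Gradient flow x'(t) = -grad f(x) for f(x) = 0.5 * ||A x - b||^2.
    # Forward-Euler with time step h recovers plain gradient descent:
    # x_{k+1} = x_k - h * grad f(x_k).

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))   # illustrative least-squares data
    b = rng.standard_normal(20)

    def grad_f(x):
        return A.T @ (A @ x - b)

    x = np.zeros(5)
    h = 0.01                           # step size = Euler time step
    for _ in range(2000):
        x = x - h * grad_f(x)          # one Euler step of the gradient flow

    print("residual norm:", np.linalg.norm(A @ x - b))

    # A residual block x_{k+1} = x_k + h * f(x_k, theta_k) is the same
    # forward-Euler scheme applied to the state equation x'(t) = f(x, theta);
    # the block form (tanh activation) here is a common but assumed choice.
    def residual_block(x, theta, h=0.1):
        W, c = theta
        return x + h * np.tanh(W @ x + c)

    theta = (rng.standard_normal((5, 5)), rng.standard_normal(5))
    print("block output:", residual_block(x, theta))

Under this reading, training the network means choosing the controls theta_k along the discretized trajectory to minimize a terminal loss, which is exactly the setting in which the Hamilton-Jacobi-Bellman equation and the Pontryagin maximum principle apply.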