Accuracy and Efficiency in Scientific Machine Learning


IVADO and the Montreal Mathematical Research Center are pleased to welcome some twenty international experts in the field of physics-informed machine learning, who will share their latest advances in the mathematical analysis of methods using Neural Networks. Despite ongoing efforts to integrate machine learning techniques into computer simulations, significant challenges remain in developing accurate and efficient scientific machine learning (SciML) methodologies for routine use in science and engineering applications.

The workshop aims to address these challenges by creating novel mathematical theories for error estimation and control. These developments are intended to improve the accuracy and efficiency of Neural Network approximations for initial- and boundary-value problems and operator learning.

During the workshop, students, postdocs, and researchers will have multiple opportunities for discussions with scientific leaders on these issues. A poster session and an informal cocktail reception are also on the agenda.

We encourage you, as well as your collaborators and students, to present your work during the poster session and to take this opportunity to engage in discussions with our guests and gain new perspectives.

Click HERE to register.

Organizing Committee

Marta d’Elia (Stanford University and Atomic Machines)
David Pardo (Basque Center for Applied Mathematics)
Serge Prudhomme (IVADO, CRM, Polytechnique Montréal)
Foutse Khomh (IVADO, Polytechnique Montréal)

Confirmed Speakers

Ben Adcock (Simon Fraser University), Ziad Aldirany (MAG Energy Solutions), Ramin Bostanabad (University of California, Irvine), Simone Brugiapaglia (Concordia University), Tan Bui-Thanh (The University of Texas at Austin), Zhiqiang Cai (Purdue University), Roger Ghanem (University of Southern California), Somdatta Goswami (Johns Hopkins University), Anthony Gruber (Sandia National Laboratories), Eldad Haber (University of British Columbia), Amanda Howard (Pacific Northwest National Laboratory), Prashant Jha (South Dakota School of Mines and Technology), Anastasis Kratsios (McMaster University), Ching-Yao Lai (Stanford University), Romit Maulik (Pennsylvania State University), Ignacio Muga (Pontificia Universidad Católica de Valparaíso), Habib Najm (Sandia National Laboratories), Adam Oberman (McGill University), Azzeddine Soulaïmani (École de technologie supérieure ÉTS), Panos Stinis (Pacific Northwest National Laboratory), Nat Trask (University of Pennsylvania), Kristoffer van der Zee (University of Nottingham)

Agenda

Wednesday, June 25th, 2025

9 – 9:15 a.m.: Welcome and Check-In
9:15 – 9:30 a.m.: Welcome Address
9:30 – 10 a.m.: Multi-Stage Neural Networks Achieving Machine Precision
Ching-Yao Lai (Stanford University)

10 – 10:30 a.m.: Through Residual Correction and Beyond
Anastasis Kratsios (McMaster University)

10:30 – 11 a.m.: Break
11 a.m. – 11:30 a.m.: Data-Driven Particle Dynamics: Structure Preserving Coarse-Graining for Non-Equilibrium Systems
Nat Trask (University of Pennsylvania)

11:30 a.m. – 12:00 p.m.: When Big Neural Networks Are Not Enough: Physics, Multifidelity and Kernels
Panos Stinis (Pacific Northwest National Laboratory)

12:00 – 2 p.m.: Lunch Break
2 – 2:30 p.m.: Hybrid Solvers: AI-Integrated Numerical Simulators for Reliable Real-Time Inference
Somdatta Goswami (Johns Hopkins University)

2:30 – 3 p.m.: Recent Advances in Neural Control of Finite Element Methods
Ignacio Muga (Pontificia Universidad Católica de Valparaíso)

3 – 3:30 p.m.: TBD
Tan Bui-Thanh (The University of Texas at Austin)
3:30 – 4 p.m.: Break and Poster Set-Up
4 – 4:30 p.m.: Poster Lightning Presentations
4:30 – 6 p.m.: Poster Session and Cocktail

Thursday, June 26th, 2025

8:45 – 9 a.m.: Welcome and Check-In
9 – 9:30 a.m.: Challenges When Integrating Neural Networks for Solving Parametric PDEs
David Pardo (Basque Center for Applied Mathematics)

9:30 – 10 a.m.: Quasi-Optimal Convergence Analysis of Minimal-Residual Neural-Network Methods for PDEs
Kristoffer van der Zee (University of Nottingham)

10 – 10:30 a.m.: CS4ML: A General Framework for Active Learning with Arbitrary Data Based on Christoffel Functions
Ben Adcock (Simon Fraser University)

10:30 – 11 a.m.: Break
11 – 11:30 a.m.: Generalized Learning via Gaussian Processes
Ramin Bostanabad (University of California, Irvine)

11:30 – 12 p.m.: Probabilistic Estimates for Tail Probabilities
Roger Ghanem (University of Southern California)

12 – 2 p.m.: Lunch Break
2 – 2:30 p.m.: More of a Good Thing: Combining Multifidelity, Domain Decomposition, and New Architectures for Improved Physics-Informed Training
Amanda Howard (Pacific Northwest National Laboratory)

2:30 – 3 p.m.: Efficient Physics-Preserved Neural Network (P2NN) Methods for Interface Problems
Zhiqiang Cai (Purdue University)

3 – 3:30 p.m.: Reliable Neural Operators: Error Control
Prashant Jha (South Dakota School of Mines and Technology)

3:30 – 4 p.m.: Coffee Break
4 – 4:30 p.m.: Generative Flow Models
Eldad Haber (University of British Columbia)

4:30 – 5 p.m.: Model Reduction for Accelerating Bracket-Based SciML
Anthony Gruber (Sandia National Laboratories)

Friday, June 27th, 2025

8:45 – 9 a.m.: Welcome and Check-In
9 – 9:30 a.m.: Practical Existence Theorems for Deep Learning Approximation in High Dimensions
Simone Brugiapaglia (Concordia University)

9:30 – 10 a.m.: Advancements in PINNs for Fluid Mechanics: The PirateNet Architecture
Azzeddine Soulaïmani (École de technologie supérieure ÉTS)

10 – 10:30 a.m.: Predicting the Long-Term Behavior of Chaotic Dynamical Systems with Scientific Machine Learning
Romit Maulik (Pennsylvania State University)

10:30 – 11 a.m.: Break
11 – 11:30 a.m.: A Multi-Level Approach to Error Reduction in Physics-Based Neural Networks
Ziad Aldirany (MAG Energy Solutions)

11:30 – 12 p.m.: Part I: Machine Learning and PDEs. Part II: AI Safety
Adam Oberman (McGill University)

12 – 12:30 p.m.: Closing Address